\section{Introduction} We denote by $\mathbb{Z}, \mathbb{N}, \mathbb{Z}_+, \mathbb{C}$ and $\mathbb{C}^*$ the sets of all integers, non-negative integers, positive integers, complex numbers, and nonzero complex numbers, respectively. All vector spaces and algebras in this paper are over $\mathbb{C}$. We denote by $U(L)$ the universal enveloping algebra of the Lie (super)algebra $L$ over $\mathbb{C}$. Throughout this paper, by subalgebras and submodules of Lie superalgebras we mean subsuperalgebras and subsupermodules, respectively. Superconformal algebras may be viewed as natural super-extensions of the Virasoro algebra and have played a fundamental role in string theory and conformal field theory. In \cite{Ka}, Kac classified all physical superconformal algebras: namely, the $N = 0$ (the Virasoro algebra Vir), $N = 1$ (the super Virasoro algebras), $N = 2, 3$ and $4$ superconformal algebras, the superalgebra $W(2)$ of all vector fields on the $N=2$ supercircle, and a new superalgebra $CK(6)$. Representation theory for superconformal algebras has been the subject of intensive study. It is a challenging problem to give complete classifications of simple weight modules with finite dimensional weight spaces for superconformal algebras. Based on the classification of simple jet modules introduced by Y. Billig in \cite{B} (see also \cite{Rao}), Billig and Futorny gave a complete classification of simple Harish-Chandra modules for the Lie algebra of vector fields on a torus with the so-called $A$-cover theory in \cite{BF}. Recently, with the study of jet modules, such classifications were completed for many Lie superalgebras: the $N=1$ Ramond algebra in \cite{CL}, the Witt superalgebra in \cite{XL, BFI}, the affine-Virasoro superalgebra in \cite{CLW, HLW}, etc. On the other hand, with classical methods for the classification of finite irreducible modules over the simple Lie algebras, \cite{LPX0} classified all strong Harish-Chandra modules over the $N=2$ superconformal algebras. The above Lie superalgebras are all $\mathbb{Z}$-graded. However, for the $\frac12\mathbb{Z}$-graded Neveu-Schwarz algebras, one meets many new difficulties. Recently \cite{CL} classified such modules for the $N=1$ Neveu-Schwarz algebra with some complicated constructions. The Ovsienko-Roger Lie algebra $\widehat{\mathfrak L}_\lambda:={\rm Vir}\ltimes \mathcal F_{\lambda}$ was introduced in \cite{OR} to study the extensions of the Virasoro algebra by the density module $\mathcal F_{\lambda}$. The algebra $\mathfrak L_{0}$ is known in the literature as the twisted Heisenberg-Virasoro algebra and plays an important role in the moduli spaces of curves \cite{ADKP}. The algebra $\mathfrak L_{-1}$ is better known as the $W(2,2)$-algebra in the context of vertex operator algebras \cite{ZD} and the BMS/GCA correspondence \cite{Ba0}. Moreover, via the study of the Ovsienko-Roger Lie algebra $\mathfrak L_\lambda$, Harish-Chandra modules for many Lie algebras related to the Virasoro algebra can be classified (see \cite{L, LPX2}). Motivated by this, we introduce the Ovsienko-Roger Lie superalgebra $\widehat{\mathfrak L}(\lambda, \epsilon)={\rm Vir}\ltimes \mathcal F_{\lambda}$, $\epsilon=0, \frac12$, where $\mathcal F_{\lambda}=\sum_{i\in\mathbb Z+\epsilon}G_i$ is the odd part of $\widehat{\mathfrak L}(\lambda, \epsilon)$. The algebras $\widehat{\mathfrak L}(0, \epsilon)$ are known as the Kuper algebras, which are connected with super Camassa-Holm-type systems (see \cite{G}). 
It is well known that $\Omega$-operators play an important role in the classification of irreducible cuspidal modules over many Lie (super)algebras. With the $\Omega$-operators for the Virasoro algebra \cite{BF} and the super Virasoro algebra in \cite{CL, CLL}, we obtain the $\Omega$-operators on the cuspidal modules for the Ovsienko-Roger superalgebras (Lemma \ref{Omegaoper-OS} below) and then give a uniform method to classify all simple Harish-Chandra modules for the $\mathbb{Z}$-graded and $\frac12\mathbb{Z}$-graded Ovsienko-Roger superalgebras (Theorem \ref{cuspidal} below). With this result we can easily classify all simple Harish-Chandra modules for many related Lie superalgebras (see Section 5). We note that we carry out our study only for $\lambda=-\frac12$ throughout the paper, although our calculations and proofs work for any $\lambda\in\mathbb C$. The paper is organized as follows. In Section \ref{pre}, we collect some basic results for our study. Simple cuspidal modules are classified in Section 3. In Section 4, we classify all simple Harish-Chandra modules for the Ovsienko-Roger superalgebra. Finally, in Section \ref{final}, with this classification we classify all simple Harish-Chandra modules over some related Lie superalgebras, including the $N=1$ BMS$_3$ algebra, the super $W(2,2)$ algebra, etc. \section{Preliminaries} \label{pre} In this section, we collect some basic definitions and results for our study. By definition, as a vector space over $\mathbb{C}$, the Virasoro algebra ${\rm Vir}$ has a basis $\{L_m, C \mid m\in\mathbb{Z}\}$, subject to the following relations: \begin{eqnarray}\label{def1} &&[L_m, L_n]=(n-m)L_{m+n}+\delta_{m+n, 0}{1\over 12}(n^3-n)C, \ \forall m, n\in\mathbb{Z}. \end{eqnarray} It is well known that the ${\rm Vir}$-modules of the intermediate series are $${\mathcal A}_{a,\; b}:=\sum_{i\in\mathbb{Z}}\mathbb{C} v_i \ {\rm with} \ L_mv_i=(a+i+bm)v_{m+i},\ Cv_i=0, \ \forall m, i\in\mathbb{Z}.$$ ${\mathcal A}_{a,\; b}$ is simple if and only if $a\not\in\mathbb{Z}$ or $b\ne 0, 1$. As usual, we use $\mathcal A_{a,\; b}'$ to denote the irreducible sub-quotient module of ${\mathcal A}_{a,\; b}$ (see \cite{KS}). If $a\in\mathbb{Z}$, then ${\mathcal A}_{a,\; b}\cong {\mathcal A}_{0,\; b}$. So we always suppose that $a\notin\mathbb{Z}$ or $a=0$ in ${\mathcal A}_{a,\; b}$. As in \cite{Fuk, OR}, we write ${\mathcal F}_\lambda={\mathcal A}_{0,\; \lambda}$, $\lambda\in\mathbb{C}$, which is also called the density module of the Virasoro algebra. Motivated by \cite{OR}, for $\epsilon=0, \frac12$, we define the Ovsienko-Roger superalgebra $\widehat{\mathfrak L}(\epsilon):={\rm Vir}\ltimes \mathcal F_{-\frac12}$. More precisely, $\widehat{\mathfrak L}(\epsilon)$ has a basis $\{L_n, G_r,C\,|\, n\in\mathbb{Z}, r\in\mathbb{Z}+\epsilon\}$ with the brackets \begin{align*} [L_m,L_n]&=(n-m)L_{m+n}+\delta_{m+n,0}\frac{1}{12}(n^3-n)C,\\ [L_m, G_r]&=(r-\frac12m)G_{r+m},\\ [G_r,G_s]&=0, \ \forall m, n\in\mathbb{Z}, r, s\in\mathbb{Z}+\epsilon. \end{align*} Here we note that the odd part of $\widehat{\mathfrak L}(\epsilon)$ is spanned by $\{G_{n}\mid n\in\mathbb{Z}+\epsilon\}$. In the case of $\epsilon=0$, $\widehat{\mathfrak L}(0)$ can be realized as the affine-Virasoro superalgebra $\mathbb{C} x\otimes \mathbb{C}[t, t^{-1}]\rtimes {\rm Vir}$, where $\mathbb{C} x$ is the one-dimensional abelian Lie superalgebra. All simple Harish-Chandra modules over $\widehat{\mathfrak L}(0)$ were classified in \cite{CLW}. 
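As a quick consistency check, one can verify directly that the brackets above satisfy the graded Jacobi identity; for instance, on a triple $(L_m, L_n, G_r)$ (the central element acts trivially on the odd part):
\begin{align*}
[L_m,[L_n,G_r]]-[L_n,[L_m,G_r]]
&=\Big(r-\tfrac n2\Big)\Big(r+n-\tfrac m2\Big)G_{m+n+r}-\Big(r-\tfrac m2\Big)\Big(r+m-\tfrac n2\Big)G_{m+n+r}\\
&=(n-m)\Big(r-\tfrac{m+n}2\Big)G_{m+n+r}=[[L_m,L_n],G_r].
\end{align*}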
In the case of $\epsilon=\frac12$, one encounters many new difficulties when classifying such modules with the usual methods. This paper gives a uniform new method to treat both cases $\epsilon=0$ and $\epsilon=\frac12$. For convenience, we present our arguments only for the case $\epsilon=\frac12$. From now on we write $\widehat{\mathfrak L}=\widehat{\mathfrak L}(\frac12)$ for brevity, and denote by $\mathfrak L$ the quotient algebra $\widehat{\mathfrak L}/\mathbb{C} C$. Clearly, $\mathfrak L$ is a $\frac{1}{2}\mathbb{Z}$-graded Lie superalgebra with ${\mathfrak L}_i=\mathbb{C} L_i, \forall i\in\mathbb{Z}$ and ${\mathfrak L}_{i+\frac12}=\mathbb{C} G_{i+\frac12}, \forall i\in\mathbb{Z}$. The subalgebra of $\mathfrak L$ spanned by $\{L_k\,|\, k\in\mathbb{Z}\}$ is isomorphic to the Witt algebra $W$. The following results for $W$-modules will be used. \begin{lemm}\label{Omegaoper}$(${\cite[Corollary 3.7]{BF}}$)$ Let $\Omega_{k, s}^{(m)}=\sum\limits_{i=0}^m(-1)^i\binom{m}{i}L_{k-i}L_{s+i}$. For every $\ell\in\mathbb{Z}_+$ there exists $m\in\mathbb{Z}_+$ such that for all $k, s\in\mathbb{Z}$, $\Omega_{k,s}^{(m)}$ annihilates every cuspidal $W$-module with a composition series of length $\ell$. \end{lemm} All simple $W$-modules with finite dimensional weight spaces were classified in \cite{Ma}. \begin{theo}\label{thm-vir}\cite{Ma} Let $V$ be a simple $W$-module with finite dimensional weight spaces. Then $V$ is a highest weight module, a lowest weight module, or isomorphic to $\mathcal A_{a, b}'$ for some $a, b\in\mathbb{C}$. \end{theo} \section{Simple cuspidal $\mathfrak L$-modules} In this section, we shall consider cuspidal $\mathfrak L$-modules. The following result was given in \cite{CL} (see also \cite{CLL}). \begin{lemm}\label{Omegaoper-S}\cite{CL} Let $V$ be a cuspidal ${\mathfrak L}$-module. Then there exists $m\in\mathbb{Z}_+$ such that for all $r\in\mathbb{Z}+\frac12, s\in\mathbb{Z}$, $\overline{\Omega}_{r, s}^{(m)}$ annihilates $V$, where $\overline{\Omega}_{r, s}^{(m)}=\sum\limits_{i=0}^m(-1)^i\binom{m}{i}G_{r-i}L_{s+i}$. \end{lemm} \begin{lemm}\label{Omegaoper-OS} Let $V$ be a cuspidal ${\mathfrak L}$-module. Then there exists $m\in\mathbb{Z}_+$ such that $\underline{\Omega}_{r, s}^{(m)}$ annihilates $V$, where \begin{equation} \underline{\Omega}_{r, s}^{(m)}:=\sum\limits_{i=0}^m(-1)^i\binom{m}{i}G_{r-i}G_{s+i}, \ \forall r, s\in\mathbb{Z}+\frac12. \label{omega3} \end{equation} \end{lemm} \begin{proof} For the cuspidal module $V$, by Lemma \ref{Omegaoper-S}, there exists $m\in\mathbb{Z}_+$ such that for all $r\in\mathbb{Z}+\frac12, s\in\mathbb{Z}$, $\overline{\Omega}_{r, s}^{(m)}V=0$, that is, \begin{equation} \sum\limits_{i=0}^m(-1)^i\binom{m}{i}G_{r-i}L_{s+i}V=0, \ \forall r\in\mathbb{Z}+\frac12, s\in\mathbb{Z}. \label{omega1} \end{equation} Acting with $G_t$, $t\in\mathbb{Z}+\frac12$, on \eqref{omega1}, and using $G_tG_{r-i}=-G_{r-i}G_t$ and $[G_t, L_{s+i}]=\frac12(s+i-2t)G_{s+t+i}$, we get \begin{equation} \frac12\sum\limits_{i=0}^m(-1)^i\binom{m}{i}(s+i-2t)G_{r-i}G_{s+t+i}V=0, \ \forall r\in\mathbb{Z}+\frac12, s\in\mathbb{Z}. \label{omega2} \end{equation} Choosing two different values $t=t_1$ and $t=t_2$ in \eqref{omega2} and taking the difference of the resulting identities, we get the lemma. \end{proof} \begin{lemm}\label{nilplemma} Let $V$ be a simple cuspidal ${\mathfrak L}$-module. Then there exists $N\in\mathbb{Z}_+$ such that $\mathfrak L_{\bar1}^NV=0$. \end{lemm} \begin{proof} By Lemma \ref{Omegaoper-OS}, we have \eqref{omega3} for some $m\in\mathbb{Z}_+$. 
Multiplying \eqref{omega3} on the right by $G_{s+1} G_{s+2}\cdots G_{s+m}$ and by $G_{r-j+1}\cdots G_{r-1}G_rG_{s+j+1}\cdots G_{s+m}$, $1\le j\le m$, respectively, we get \begin{align} &G_{r}G_{s}G_{s+1}\cdots G_{s+m}V=0,\label {nilp-1} \\ &G_{r-j}\cdots G_{r-1}G_rG_{s+j}G_{s+j+1}\cdots G_{s+m}V=0,\ \forall 1\le j\le m. \label {nilp-j} \end{align} Fix some $s\in\mathbb{Z}+\frac12$ and set ${\mathcal O}_n=\{s, s+1, s+2, \cdots, s+n\}$. By \eqref{nilp-1} the following identity \begin{equation} G_{r_0}G_{r_1}\cdots G_{r_{m+1}}V=0 \label{nilp} \end{equation} holds for all $r_0, r_1, \cdots, r_{m+1}\in {\mathcal O}_{m+1}$. By \eqref{nilp-1} and \eqref{nilp-j} we see that \eqref{nilp} holds for all $r_0, r_1, \cdots, r_{m+1}\in {\mathcal O}_{m+2}$. We shall use induction on $k$ to prove that $$G_{r_0}G_{r_1}\cdots G_{r_{m+1}}V=0$$ for all $r_0, r_1,\cdots, r_{m+1}\in {\mathcal O}_{m+k}$ and all $k\ge 1$. Then, by the arbitrariness of $s$, we get the lemma by choosing $N=m+2$. Suppose that \eqref{nilp} holds for all $r_0, r_1, \cdots, r_{m+1}\in {\mathcal O}_{n}$ for some $n>m+1$. Now we shall prove that \begin{equation} G_{r_0}G_{r_1}\cdots G_{r_m}G_{s+n+1}V=0 \label{nilp-n+1} \end{equation} holds for all $r_0<r_1<\cdots<r_{m}\in {\mathcal O}_{n}$. \noindent{\bf Case 1.} $r_0=s+n-m$. In this case $r_i=s+n-m+i$ for each $i=1, 2, \ldots, m$. So \eqref{nilp-n+1} follows from \eqref{nilp-1} directly. \noindent{\bf Case 2.} $r_0=s+n-m-k$ for some $1\le k\le n-m$. Replacing $s$ and $r$ in \eqref{omega3} by $s+n-m+1$ and $s+n-k$, respectively, we get \begin{align*} &\big(G_{s+n-k}G_{s+n-m+1}-\binom{m}{1}G_{s+n-k-1}G_{s+n-m+2}\\ &+\cdots+(-1)^{m-1}\binom{m}{m-1}G_{s+n-k-m+1}G_{s+n}+(-1)^mG_{r_0}G_{s+n+1}\big)V=0. \end{align*} So we get \begin{equation} G_{r_0}G_{s+n+1}V\subset \Big(\sum_{r_i, r_j\in\mathcal O_n} G_{r_i}G_{r_j}\Big)V. \end{equation} In this case \eqref{nilp-n+1} follows by the inductive hypothesis. \end{proof} \begin{theo}\label{cuspidal} Let $V$ be a simple cuspidal ${\mathfrak L}$-module. Then $V$ is isomorphic to a Harish-Chandra module of the intermediate series: $V=\sum_{i}\mathbb{C} v_i\cong {\mathcal A}_{a, b}'$ for some $a, b\in\mathbb C$ with $L_mv_i=(a+i+bm)v_{m+i}, G_rv_i=0$ for all $m,i\in\mathbb Z, r\in\mathbb{Z}+\frac12$. \end{theo} \begin{proof} Clearly $\dim\,V_i\le p$ for some positive integer $p$ holds for almost all $i\in\mathbb Z$, and $C$ acts on $V$ as zero (see \cite{KS}). Now ${\mathfrak L}_{\bar1}^iV$ is an ${\mathfrak L}$-submodule of $V$, and ${\mathfrak L}_{\bar1}^{i+1}V\subset {\mathfrak L}_{\bar1}^iV$ for all $i\in\mathbb N$. Since $V$ is simple, ${\mathfrak L}_{\bar1}V=V$ or ${\mathfrak L}_{\bar1}V=0$. By Lemma \ref{nilplemma}, we get \begin{equation}{\mathfrak L}_{\bar1}^NV=0. \label{grg01}\end{equation} If ${\mathfrak L}_{\bar1}V=V$ then ${\mathfrak L}_{\bar1}^NV=V=0$, which is a contradiction. So ${\mathfrak L}_{\bar1}V=0$ and the theorem follows from Theorem \ref{thm-vir}. \end{proof} \section{Simple Harish-Chandra modules} Now we can classify all simple Harish-Chandra modules over $\widehat{\mathfrak L}$. The following result is well known. \begin{lemm}\label{weightupper} Let $M$ be a weight module with finite dimensional weight spaces for the Virasoro algebra with $\mathrm{supp}(M)\subseteq\lambda+\mathbb{Z}$. If for any $v\in M$, there exists $N(v)\in\mathbb{N}$ such that $L_iv=0, \forall i\geq N(v)$, then $\mathrm{supp}(M)$ is upper bounded. 
\end{lemm} \begin{lemm}\label{appN=1} Suppose that $M$ is a simple weight $\widehat{\mathfrak L}$-module with finite dimensional weight spaces which is not cuspidal. Then $M$ is a highest (or lowest) weight module. \end{lemm} \begin{proof} It is essentially the same as that of Lemma 4.2 (1) in \cite{CL}. Fix a $\lambda\in\mathrm{supp}(M)$. Since $M$ is not cuspidal, there is a $k\in\frac12\mathbb{Z}$ such that $\dim M_{-k+\lambda}>2(\dim M_\lambda+\dim M_{\lambda+\frac12}+\dim M_{\lambda+1})$. Without loss of generality, we may assume that $k\in\mathbb{N}$. Then there exists a nonzero element $w\in M_{-k+\lambda}$ such that $L_kw=L_{k+1}w=G_{k+\frac12}w=0$. Therefore, $L_iw=G_{i-\frac12}w=0$ for all $i\geq k^2$, since $[\widehat{\mathfrak L}_i,\widehat{\mathfrak L}_j]=\widehat{\mathfrak L}_{i+j}$. It is easy to see that $M'=\{v\in M\,|\,\dim\widehat{\mathfrak L}^+v<\infty\}$ is a nonzero submodule of $M$, where $\widehat{\mathfrak L}^+=\sum\limits_{n\in\mathbb{Z}_+}(\mathbb{C} L_n+\mathbb{C} G_{n-\frac12})$. Hence $M=M'$. So, Lemma \ref{weightupper} tells us that $\mathrm{supp}(M)$ is upper bounded, that is, $M$ is a highest weight module. \end{proof} Combining Lemma \ref{appN=1} and Theorem \ref{cuspidal}, we get the following result. \begin{theo}\label{main1} Any simple weight $\widehat{\mathfrak L}$-module with finite dimensional weight spaces is a highest weight module, a lowest weight module, or isomorphic to $\mathcal A_{a,b}'$ for some $a, b\in\mathbb{C}$. \end{theo} \section{Applications}\label{final} Some Lie superalgebras were constructed in \cite{WCB} as an application of the classification of Balinsky-Novikov superalgebras of dimension $2|2$. As applications of the above results, we can classify all simple Harish-Chandra modules over many Lie superalgebras listed in Table 7 in \cite{WCB}. \subsection{The Lie superalgebra $\frak q$} By definition the Lie superalgebra $\frak q=\frak q_{\bar0}+\frak q_{\bar1}$, where $\frak q_{\bar0}:=\mathbb{C}\{L_m, H_m, C\mid m\in\mathbb{Z}\}$ and $\frak q_{\bar1}=\mathbb{C}\{G_p\mid p\in\mathbb{Z}+\frac12\}$, is a subalgebra of the $N=2$ Neveu-Schwarz superconformal algebra, with the following relations: \begin{eqnarray} && [L_m,L_n]=(n-m)L_{n+m}+{1\over12}\delta_{m+n,0}(n^3-n)C,\nonumber\\ &&[H_m,H_n]={1\over3}m\delta_{m+n,0}C,\ \ \ \ \ \ \ \ \ \ [L_m, H_n]=nH_{m+n},\nonumber\\ &&\label{gd2} [L_m,G_p]=(p-\frac{m}{2})G_{p+m},\ \ \ \ \ \ \ \ [H_m,G_p]=G_{m+p},\label{qdef3}\\ &&[G_p, G_q]=0,\nonumber \end{eqnarray} for $m,n\in\mathbb{Z},\,p, q\in\mathbb{Z}+\frac12$. Clearly $\mathcal A_{a, b, c}=\sum_{i\in\mathbb{Z}}\mathbb{C} v_i$ is a $\frak q$-module with \begin{eqnarray*} && L_mv_i=(a+bm+i)v_{m+i}, \ H_mv_i=cv_{m+i}, \ G_rv_i=0, \forall m, i\in\mathbb{Z}, r\in\mathbb{Z}+\frac12. \end{eqnarray*} Moreover ${\mathcal {A}}_{a, b, c}$ is simple if and only if $a\not\in\mathbb{Z}$, or $b\ne 0, 1$, or $c\ne 0$. We also use $\mathcal A_{a, b, c}'$ to denote the simple sub-quotient of $\mathcal A_{a, b, c}$. \begin{prop}\label{cus-q} Any simple cuspidal $\frak q$-module $V$ is isomorphic to the module $\mathcal A_{a, b, c}'$ of the intermediate series for some $a, b, c\in\mathbb{C}$. \end{prop} \begin{proof} Clearly, the subalgebra $\frak q':={\rm span}\{L_m, G_r, C\mid m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12\}$ is isomorphic to $\widehat{\mathfrak L}$. By Theorem \ref{cuspidal}, we can choose an irreducible $\frak q'$-module $V'$ with $G_rV'=0$ for all $r\in\mathbb{Z}+\frac12$. In this case we have $V={\rm Ind}_{\frak q'}^{\frak q}V'$. 
Moreover we have $G_rV=0$ for all $r\in\mathbb{Z}+\frac12$ by \eqref{qdef3}. In this case the $\frak q$-module $V$ is simple if and only if $V$ is a simple $\frak q_{\bar0}$-module. So the proposition follows from the main theorem in \cite{LvZ}. \end{proof} \begin{rema} Proposition \ref{cus-q} plays a key role in the classification of all simple cuspidal weight modules for the $N=2$ Neveu-Schwarz superconformal algebra; see \cite{LPX0}. \end{rema} \subsection{The $N=1$ BMS$_3$ algebra} The Bondi-Metzner-Sachs (BMS$_3$) algebra is the symmetry algebra of asymptotically flat three-dimensional spacetimes \cite{BBM}. It is the semi-direct product of the Virasoro algebra with its adjoint module. The $N=1$ super-BMS$_3$ algebra is a minimal supersymmetric extension of the BMS$_3$ algebra, which was introduced to describe the asymptotic structure of $N=1$ supergravity in \cite{BDMT}. \begin{defi}\label{BMS} The $N$=1 BMS$_3$ superalgebra $\mathcal B$ is a Lie superalgebra with a basis $\{L_m, I_m, Q_r, C_1, C_2\mid m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12\}$, with the following commutation relations: \begin{eqnarray*}\label{brackets} {[L_m, L_n]}&=&(m-n)L_{m+n}+{1\over12}\delta_{m+n, 0}(m^3-m)C_1,\\ {[L_m, I_n]}&=&(m-n)I_{m+n}+{1\over12}\delta_{m+n, 0}(m^3-m)C_2,\\ {[Q_r, Q_s]}&=&2I_{r+s}+{1\over3}\delta_{r+s, 0}\left(r^2-\frac14\right)C_2,\\ {[L_m, Q_r]}&=&\left(\frac{m}{2}-r\right)Q_{m+r},\\ {[I_m,I_n]}&=&[I_n,Q_r]=0, \quad [C_1,\mathcal B]=[C_2, \mathcal B]=0 \end{eqnarray*} for any $m, n\in\mathbb{Z}, r, s\in\mathbb{Z}+\frac12$. \end{defi} Note that $\mathcal B=\mathcal B_{\bar0}+\mathcal B_{\bar1}$, where $\mathcal B_{\bar0}:=\mathbb{C}\{L_m, I_m, C_1, C_2\mid m\in\mathbb{Z}\}$ and $\mathcal B_{\bar1}=\mathbb{C}\{Q_p\mid p\in\mathbb{Z}+\frac12\}$. The quotient algebra $\mathcal B/J$ is isomorphic to $\mathfrak L$, where $J=\mathbb{C}\{I_m, C_1, C_2\mid m\in\mathbb{Z}\}$. Clearly the Vir-module $\mathcal A_{a, b}$ can be made a $\mathcal B$-module with trivial actions of $I_m, Q_r$ for any $m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12$. \begin{prop}\label{cus-BMS} Any simple cuspidal $\mathcal B$-module $V$ is isomorphic to the module $\mathcal A_{a, b}'$ of the intermediate series for some $a, b\in\mathbb{C}$. \end{prop} \begin{proof} Clearly, the subalgebra $\mathcal B_{\bar0}$ is isomorphic to $W(2, 2)$. By Theorem 4.6 in \cite{GLZ}, we can choose an irreducible $\mathcal B_{\bar0}$-module $V'$ with $I_mV'=0$ for all $m\in\mathbb{Z}$. In this case we have $V={\rm Ind}_{\mathcal B_{\bar0}}^{\mathcal B}V'$. Moreover we have $I_mV=0$ and $[Q_r, Q_s]V=0$ for all $m\in\mathbb{Z}, r, s\in\mathbb{Z}+\frac12$ by Definition \ref{BMS}. In this case the $\mathcal B$-module $V$ is simple if and only if $V$ is a simple $\mathcal B/J$-module. So the proposition follows from Theorem \ref{cuspidal}. 
\end{proof} \subsection{The super $W(2,2)$ algebra} By definition, the super $W(2,2)$ algebra is the Lie superalgebra $SW(2,2):= \mathbb{C}\{L_m, I_m, G_r, Q_r, C_1, C_2\mid m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12\}$, with the following relations: \begin{eqnarray}\label{brackets} \begin{array}{lllll} &[L_m, L_n]=(m-n)L_{m+n}+{1\over12}\delta_{m+n, 0}(m^3-m)C_1,\\ &[L_m, I_n]=(m-n)I_{m+n}+{1\over12}\delta_{m+n, 0}(m^3-m)C_2,\\ &[G_r, G_s]=2L_{r+s}+{1\over3}\delta_{r+s, 0}(r^2-\frac14)C_1,\\ &[G_r, Q_s]=2I_{r+s}+{1\over3}\delta_{r+s, 0}(r^2-\frac14)C_2,\\ &[L_m, G_r]=(\frac{m}{2}-r)G_{m+r}, \ [L_m, Q_r]=(\frac{m}{2}-r)Q_{m+r}, \\ &[I_m,G_r]=(\frac{m}{2}-r)Q_{m+r}, \end{array} \end{eqnarray} for any $m, n\in\mathbb{Z}, r, s\in\mathbb{Z}+\frac12$. Note that $SW(2,2)=SW(2,2)_{\bar0}+SW(2,2)_{\bar1}$, where $SW(2,2)_{\bar0}:=\mathbb{C}\{L_m, I_m, C_1, C_2\mid m\in\mathbb{Z}\}$ and $SW(2,2)_{\bar1}=\mathbb{C}\{G_p, Q_p\mid p\in\mathbb{Z}+\frac12\}$. Clearly the subalgebra generated by $\{L_m, G_r~|~ m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12\}$ is isomorphic to the $N=1$ Neveu-Schwarz algebra $\frak S$. From \cite{S2} we see that any Harish-Chandra module of the intermediate series over $\frak S$ is $S_{a, b}$ or $\Pi S_{a, b}$ for some $a, b\in\mathbb{C}$, where $S_{a, b}$ is defined as follows: \begin{eqnarray*} &&S_{a, b}:=\sum_{i\in\mathbb{Z}}\mathbb{C} x_i+\sum_{k\in\mathbb{Z}+\frac12}\mathbb{C} y_k\ \hbox{with}\\ L_nx_i&=&(a+bn+i)x_{i+n}, \ L_ny_k=(a+(b+\frac12)n+k)y_{k+n},\\ G_rx_i&=&(a+i+2rb)y_{r+i}, \hskip60pt G_ry_k=-x_{r+k}, \end{eqnarray*} for all $n, i\in\mathbb{Z}, r, k\in\mathbb{Z}+\frac12$. Moreover, $S_{a, b}$ is simple if and only if $a\not\in\mathbb{Z}$, or $a\in\mathbb{Z}$ and $b\ne0, \frac12$. We also use $S_{a, b}'$ to denote the simple sub-quotient of $S_{a, b}$. Clearly the $\frak S$-modules $S_{a, b}$ and $\Pi S_{a, b}$ become $SW(2,2)$-modules with trivial actions of $I_m, Q_{m+\frac12}$ for any $m\in\mathbb{Z}$. \begin{prop}\label{cus-p} Any simple cuspidal $SW(2, 2)$-module $V$ is isomorphic to $S_{a, b}'$ or $\Pi S_{a, b}'$ for some $a, b\in\mathbb{C}$. \end{prop} \begin{proof} Set $\frak p={\rm span}\{L_m, I_m, Q_r, C_1, C_2\mid m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12\}$. By Proposition \ref{cus-BMS} we can choose a simple $\frak p$-module $V'$ with $I_mV'=Q_rV'=0$ for all $m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12$. In this case we have $V={\rm Ind}_{\frak p}^{SW(2,2)}V'$. Moreover we have $I_mV=Q_rV=0$ for all $m\in\mathbb{Z}, r\in\mathbb{Z}+\frac12$ by the relations \eqref{brackets}. In this case the $SW(2, 2)$-module $V$ is simple if and only if $V$ is a simple $\frak S$-module. So the proposition follows from the main theorem in \cite{S2} (see also Theorem 4.5 in \cite{CL}). \end{proof} \begin{rema} We can easily prove that any simple Harish-Chandra module over each of the above Lie superalgebras is either a cuspidal module or a highest/lowest weight module, as in Lemma \ref{appN=1}. So all simple Harish-Chandra modules over the above Lie superalgebras are also classified. \end{rema} \begin{rema} All indecomposable modules of the intermediate series and some other representations were studied in \cite{WGC} and \cite{WFL}. \end{rema} \noindent{\bf Acknowledgement:} This work is partially supported by the NNSF (Nos. 12071405, 11971315, 11871249), and is partially supported by the Xinjiang Uygur Autonomous Region graduate scientific research innovation project (No. XJ2021G021). The authors would like to thank Prof. Rencai Lv for helpful discussions.
\section{A Reinforcement Learning Problem} \label{sec: problem formulation} We consider the problem of learning to optimize a random finite-horizon MDP {\medmuskip=0mu \thinmuskip=0mu \thickmuskip=0mu $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, H, \rho)$} over episodes of interaction, where $\mathcal{S} =\{1,\ldots,S\}$ is the state space, $\mathcal{A}=\{1,\ldots,A\}$ is the action space, $H$ is the horizon, and $\rho$ is the initial state distribution. At the start of each episode the initial state $s_0$ is drawn from the distribution $\rho$. In each time period $t=0, \ldots, H-1$ within an episode, the agent observes state $s_t \in \mathcal{S}$, selects action $a_t \in \mathcal{A}$, receives a reward $r_{t+1} \sim \mathcal{R}_{t, s_t,a_t}$, and transitions to a new state $s_{t+1} \sim \mathcal{P}_{t, s_t,a_t}$. What we consider could be referred to as a Bayesian reinforcement learning setting, in which the unknown episodic nonstationary finite-horizon MDP $\mathcal{M}$ is taken to be a random variable. A policy $\pi$ is a mapping from a state $s \in \mathcal{S}$ and period $t=0,\ldots,H-1$ to an action $a \in \mathcal{A}$. For each MDP $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, H, \rho)$ and policy $\pi$ we define the state-action value function for each period $t$: \vspace{-1mm} \begin{equation} \label{eq: q value tabular} Q^{\mathcal{M}}_{\pi, t}(s, a) := \E_{\mathcal{M},\pi}\left[ \sum_{\tau=t}^{H-1} \overline{r}^{\mathcal{M}}_\tau(s_\tau,a_\tau) \Big| s_t = s, a_t=a \right], \end{equation} where $\overline{r}_t^{\mathcal{M}}(s,a) = \mathds{E}[ r_{t+1} | \mathcal{M}, s_t=s, a_t=a]$. The subscript $\pi$ indicates that actions over periods $t,\ldots,H-1$ are selected according to the policy $\pi$. Let $V^{\mathcal{M}}_{\pi, t}(s) := Q^{\mathcal{M}}_{\pi, t}(s, \pi(s,t))$. A policy $\pi^{\mathcal{M}}$ is optimal for the MDP $\mathcal{M}$ if $\pi^{\mathcal{M}} \in \argmax_{\pi} V^{\mathcal{M}}_{\pi, t}(s)$ for all $s \in \mathcal{S}$ and $t=0,\ldots,H-1$. We will use $\pi^{\mathcal{M}}$ to denote such an optimal policy. Let $\mathcal{O}_\ell = (s_0^\ell, a_0^\ell, r_1^\ell, \ldots, s_{H-1}^\ell, a_{H-1}^\ell, r_H^\ell)$ be the sequence of observations made during episode $\ell$. Let $\mathcal{H}_{L-1} = (\mathcal{O}_\ell: \ell=1,\ldots,L-1)$ denote the history of observations made prior to episode $L$. The agent's behavior is governed by a reinforcement learning algorithm ${\rm alg}$. Immediately prior to the beginning of episode $L$, the algorithm produces a policy $\pi^L = {\rm alg}(\mathcal{S}, \mathcal{A}, \mathcal{H}_{L-1})$ based on the state and action spaces and the history $\mathcal{H}_{L-1}$ of observations made over previous episodes. Note that ${\rm alg}$ may be a randomized algorithm, so that multiple applications of ${\rm alg}$ may yield different policies. In episode $\ell$, the agent enjoys a cumulative reward of $\sum_{t=1}^{H} r^\ell_t$. We define the {\it regret} over episode $\ell$ to be the difference between the optimal expected value and the sum of rewards generated by algorithm ${\rm alg}$. This can be written as $V_{\pi^{\mathcal{M}}, 0}^{\mathcal{M}}(s^\ell_0) - \sum_{t=0}^{H-1} r^\ell_{t+1}$, where actions are generated by the policy $\pi^\ell$ produced by algorithm ${\rm alg}$ and state transitions and rewards are generated by the MDP $\mathcal{M}$. 
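For concreteness, the interaction protocol and the per-episode regret above can be sketched as follows; this is only a schematic illustration, and the environment interface ({\tt reset}, {\tt step}, {\tt optimal\_value}) and the callable {\tt alg} are hypothetical placeholders rather than part of the formulation.
\begin{verbatim}
import numpy as np

def run_episodes(env, alg, num_episodes, horizon):
    """Schematic episodic interaction; returns cumulative per-episode regret."""
    history, regrets = [], []
    for ell in range(1, num_episodes + 1):
        policy = alg(history)                 # pi^L = alg(S, A, H_{L-1})
        s = env.reset()                       # s_0 ~ rho
        episode, total_reward = [], 0.0
        for t in range(horizon):              # t = 0, ..., H-1
            a = policy(s, t)
            s_next, r = env.step(s, a, t)     # r_{t+1} ~ R_{t,s,a}, s_{t+1} ~ P_{t,s,a}
            episode.append((s, a, r))
            total_reward += r
            s = s_next
        history.append(episode)
        # regret over the episode: optimal value at s_0 minus realized return
        regrets.append(env.optimal_value(episode[0][0]) - total_reward)
    return np.cumsum(regrets)
\end{verbatim}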
\section{Optimism versus Randomization} In principle, given a history $\mathcal{H}_{L-1}$ of observations gathered over prior episodes, we can generate a point estimate $\hat{Q}_t = \E\left[Q^{\mathcal{M}}_{\pi^\mathcal{M}, t} | \mathcal{H}_{L-1}\right]$ of the optimal state-action value function and apply a greedy policy with respect to this estimate over episode $L$. However, it is often essential to apply a policy that will explore beyond this to make discoveries that amplify expected rewards over subsequent episodes. Optimistic approaches induce exploration by generating optimistic estimates $\overline{Q}_t$ of state-action values and following a greedy policy with respect to these optimistic estimates. The idea is that an optimistic estimate $\overline{Q}_t(s,a)$ should represent the highest statistically plausible value of $Q^{\mathcal{M}}_{\pi^\mathcal{M}, t}(s,a)$, given prior knowledge and observed history. An alternative approach is to generate, prior to each episode $L$, the optimal value function $\tilde{Q}_t$ of an MDP sampled from the posterior distribution of $\mathcal{M}$ conditioned on the history $\mathcal{H}_{L-1}$. This is equivalent to sampling $\tilde{Q}_t$ from the posterior distribution of $Q^{\mathcal{M}}_{\pi^\mathcal{M}, t}$. As discussed in \cite{Russo2013b,Osband2013,Russo2014}, such a randomized approach can be analyzed through the study of confidence sets, similarly to how optimistic algorithms are typically studied, and offers performance similar to that of well-designed, statistically efficient optimistic approaches. As we will discuss, the performance advantage of randomized approaches arises from the fact that optimistic approaches proposed and applied in the literature forgo statistical efficiency for computational tractability. Empirical evidence suggests that randomization often leads to much faster learning than optimism. For example, Figure \ref{fig:RiverSwim}, taken from \cite{Osband2013}, plots regret of UCRL2 \cite{Jaksch2010} and PSRL \cite{DeardenFA99,Osband2013} applied to a variation of the \emph{RiverSwim} problem from \cite{Strehl2006}. These are well-studied tabular model-based reinforcement learning algorithms that explore via optimism and randomization, respectively. For each algorithm, many trajectories are plotted, corresponding to independent simulations. For these computations, PSRL began with uninformative Dirichlet priors for transition probabilities and normal-gamma priors for transition rewards. It is clear from these results that, for this problem, PSRL learns much faster than UCRL2. In the next two sections, we present simple analytic examples that provide insight into why randomization offers more desirable behavior than common optimistic approaches. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{combinedGraph.png} \caption{Cumulative regret of UCRL2 and PSRL in the \emph{RiverSwim} environment.} \label{fig:RiverSwim} \end{figure} \section{Decision Coherence across Time Scales} Consider a simple example illustrated in Figure \ref{fig:horizon}. An agent is at the left-most state and must select one of two actions. Action 1 takes the agent along the ``high road'' over which he knows that he will experience a reward of 1 over the first transition and a reward of $0$ over the following $H-1$ transitions. Action 2 takes the agent along the low road, where the agent is uncertain about mean rewards over the first $\tau$ transitions. 
According to the agent's posterior distribution, conditioned on the history of past observations, these mean rewards are independent and identically distributed zero-mean normal random variables with standard deviation $\epsilon/\sqrt{\tau}$. \begin{figure}[htpb] \centering \includegraphics[scale=0.5]{horizonEffect.png} \caption{Influence of the horizon on the exploration decision.} \label{fig:horizon} \end{figure} Table \ref{tab:horizon} quantifies the posterior distribution of mean value for each action, which is normal with a particular expectation and standard deviation. A well-designed optimistic approach should invest to explore if the standard deviation $\epsilon$, which represents uncertainty in the value of the second action, is sufficiently large relative to the difference in expectations, which is $1$. In particular, the agent should select action $2$ if and only if $c \epsilon > 1$, where $c$ is a tuning parameter that represents the degree of optimism. However, ignoring logarithmic factors, optimistic approaches in the literature (e.g., \cite{Jaksch2010,NIPS2016_6383,DBLP:journals/corr/TangHFSCDSTA16}) are designed to apply an optimistic boost of the form $c \epsilon \sqrt{\tau}$, which results in selecting action $2$ if and only if $c \epsilon \sqrt{\tau} > 1$. This is because these optimistic approaches aim to sum over future standard deviations, whereas one should more appropriately combine uncertainties by summing variances. This flaw in uncertainty quantification leads to an incoherence in decision making: for any fixed $c$, there are time scales $\tau$ for which the agent will explore when it is not sufficiently uncertain or fail to explore despite sufficient uncertainty. \begin{table}[htpb] \centering \begin{tabular}{|c||c|c|c|} \hline action & expected value & standard deviation & optimistic boost \\ \hline \hline 1 & $1$ & $0$ & $0$ \\ \hline 2 & $0$ & $\epsilon$ & $c \epsilon \sqrt{\tau}$ \\ \hline \end{tabular} \caption{Expectation and standard deviation of action value, and a typical optimistic boost, as a function of the horizon.} \label{tab:horizon} \end{table} A typical randomized approach would, for this problem, explore in each episode with probability equal to the posterior probability that the value of action 2 exceeds that of action 1; here that probability is $\Pr(\mathcal{N}(0,\epsilon^2)>1)=1-\Phi(1/\epsilon)$, where $\Phi$ denotes the standard normal cumulative distribution function. In particular, randomized approaches allocate effort to exploration proportional to the chances of gaining actionable information. This probability does not depend on $\tau$ and therefore does not suffer from the same sort of incoherence with respect to scalings of $\tau$. \section{Decision Coherence across Space Scales} Now consider an example illustrated in Figure \ref{fig:state}. The diagrams focus on possible transitions from a single state. Action 1 generates an immediate reward of $1$, and is known to transition to a state that leads to no subsequent value. Action $2$ generates no immediate reward and is known to transition to one of $N$ states, each with probability $1/N$. From each possible next state, the agent's posterior distribution models subsequent mean value as an independent zero-mean normal random variable with standard deviation $\epsilon\sqrt{N}$. 
\begin{figure}[htpb] \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[scale=0.5]{stateEffect1.png} \caption{action 1} \label{fig:state1} \end{subfigure} \begin{subfigure}{.25\textwidth} \centering \includegraphics[scale=0.5]{stateEffect2.png} \caption{action 2} \label{fig:state2} \end{subfigure} \caption{Influence of the number of possible next states on the exploration decision.} \label{fig:state} \end{figure} Table \ref{tab:state} quantifies the posterior distribution of mean value for each action, which is normal with a particular expectation and standard deviation. A well-designed optimistic approach should invest to explore if the standard deviation $\epsilon$, which represents uncertainty in the value of the second action, is sufficiently large relative to the difference in expectations, which is $1$. In particular, the agent should select action $2$ if and only if $c \epsilon > 1$, where $c$ is a tuning parameter that represents the degree of optimism. However, ignoring logarithmic factors, common optimistic approaches would apply the average of the optimistic boosts $c \epsilon \sqrt{N}$ associated with the possible next states, which results in selecting action $2$ if and only if $c \epsilon \sqrt{N} > 1$. This is because these optimistic approaches average over standard deviations at possible next states, whereas one should more appropriately average variances. This flaw in uncertainty quantification leads to an incoherence in decision making: for any fixed $c$, there are values of $N$ for which the agent will explore when it is not sufficiently uncertain or fail to explore despite sufficient uncertainty. \begin{table}[htpb] \centering \begin{tabular}{|c||c|c|c|} \hline action & expected value & standard deviation & optimistic boost \\ \hline \hline 1 & $1$ & $0$ & $0$ \\ \hline 2 & $0$ & $\epsilon$ & $c \epsilon \sqrt{N}$ \\ \hline \end{tabular} \caption{Expectation and standard deviation of action value, and a typical optimistic boost, as a function of the number of possible next states.} \label{tab:state} \end{table} A typical randomized approach would again explore in each episode with probability equal to the posterior probability that the value of action 2 exceeds that of action 1. For our example, it is easy to see that this probability does not depend on $N$ and therefore does not suffer from the same sort of incoherence with respect to scalings of the state space. \section{Closing Remarks} Reinforcement learning holds promise to provide the basis for an artificial intelligence that will manage a wide range of systems and devices to better serve society's needs. To date, its potential has primarily been assessed through learning in simulated systems, where data generation is relatively unconstrained and algorithms are typically trained over tens of millions to trillions of episodes. Migrating this technology to real systems where data collection is costly or constrained by the physical context calls for a focus on statistical efficiency. An important part of that lies in how agents explore when learning. Optimism and randomization offer guiding principles for efficient exploration. We have presented a couple of analytic examples that shed light on sources of advantage in the efficiency of randomized approaches, relative to optimistic approaches that have been presented in the literature. 
In principle, it should be possible to design optimistic approaches that combine uncertainties in a more coherent manner and consequently perform at least as well as randomized approaches, but such approaches may be computationally intractable. A recent area of intense research activity focuses on designing value function learning methods that efficiently explore intractably large state spaces. One thread of work develops count-based optimistic exploration schemes that operate with value function learning \cite{NIPS2016_6383,DBLP:journals/corr/TangHFSCDSTA16}. Though these approaches may be effective for a range of problems, they suffer from incoherencies of the kind illustrated in our examples and therefore are likely to forgo a substantial degree of statistical efficiency. An alternative is offered by methods that sample statistically plausible parameterized value functions \cite{osband2016rlsvi,osband2016deep,osband2017}.
\section{Introduction} A large number of studies have investigated the impact of bars on the evolution of galaxies \citep[see reviews by][and references therein]{sellwood93,kormendy04,gadotti_bars}. One of the expectations that emerges from both observation and theory, in the framework of secular evolution processes induced by bars, is the rejuvenation of the stellar population in the central structural component of disk galaxies. Bars are able to drive gas in the disk from within the bar ends to the central parts of the disk, supposedly helping the building of bulges through central star formation episodes \citep[see][]{athanassoula05}. Simulations such as those of \citet{athmir02}, but including gas, show that the transfer of gas to the center should be fast, $\approx10^8$ yr (E. Athanassoula, priv. comm.). To date, there is evidence for an enhanced star formation rate, i.e. {\em current} star-forming activity, in the centers of barred galaxies, mostly from studies of nuclear H{\sc ii} regions. For instance, \citet{ho97} found that H$\alpha$ emission line luminosities and equivalent widths are enhanced in barred galaxies, as compared to unbarred galaxies, when one considers early-type disk galaxies only \citep[see also][]{huang96,alonso-herrero01,jogee05,ellison11}. Direct evidence supporting bulge building by bars from the {\em ages of stars in bulges} has proven to be much more elusive. Studies based on integrated colors have to deal with uncertainties that arise from the age-metallicity degeneracy and the effects of dust extinction \citep[see e.g.][]{gadotti01}. On the other hand, studies based on stellar spectral analysis have been, to date, handicapped by poor statistics \citep{peletier07,perez11}. In this Letter, we use spectral analysis techniques on a large and well-defined sample of barred and unbarred disk galaxies, drawn from SDSS, allowing us to compare mean stellar ages of bulges in barred and unbarred galaxies with unprecedented statistical significance. We describe the most relevant features of our data in the next section. Section \ref{sec:analysis} describes relevant details of our spectral analysis techniques, and results are shown in Sect. \ref{sec:res}. We present our main conclusions in Sect. \ref{sec:con}. \section{Sample} \label{sec:sample} The sample used here is based on the one studied in \cite{gadotti09}\footnote{See \url{http://www.sc.eso.org/~dgadotti/buddaonsdss.html}.}, which contains all galaxies in SDSS Data Release 2 with stellar masses larger than $10^{10}$\,M$_\odot$ \citep[from][]{kauffmann03}, at redshift 0.02\,$\leq$\,z\,$\leq$\,0.07, and with axial ratio b/a\,$\geq$\,0.9. These criteria provide a sample which is both representative and suitable for 2D bulge/disc/bar decomposition, as selecting face-on galaxies minimizes dust and projection effects and eases the identification of bars. The reader is referred to that paper for a detailed discussion on selection effects. Through 2D decomposition, \citet{gadotti09} provides reliable structural parameters, such as the total stellar mass, bulge stellar mass, bulge effective radius $r_e$, bulge-to-total ratio and bar structural parameters, which are used in this Letter. In short, bulge stellar masses were obtained from the bulge luminosity and the mass-to-light ratio in the $i$-band, the latter derived from the bulge $g-i$ color. Galaxy total stellar masses were obtained by adding the masses of their components. 
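The bulge masses enter our analysis only through this product of luminosity and a color-based mass-to-light ratio; as a purely schematic illustration (the coefficients and solar magnitude below are illustrative placeholders, not the calibration adopted in \citealt{gadotti09}), the computation has the following form.
\begin{verbatim}
def stellar_mass(abs_mag_i, g_i_color, a=-0.15, b=0.70, msun_i=4.5):
    """Schematic stellar mass from i-band luminosity and a color-based M/L.
    The coefficients a, b and msun_i are illustrative placeholders only."""
    lum_i = 10.0 ** (-0.4 * (abs_mag_i - msun_i))  # L_i in solar units
    ml_i = 10.0 ** (a + b * g_i_color)             # M/L in the i band
    return ml_i * lum_i                            # stellar mass in solar masses
\end{verbatim}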
To verify whether a galaxy is barred, typical bar signatures were searched for through inspection of the galaxy image, isophotal contours and a pixel-by-pixel radial intensity profile (as in \citealt{2010MNRAS.407L..41S} and \citealt{2010PASP..122.1397S}). It should be noted that due to the limited spatial resolution of SDSS images, we miss most bars with semimajor axis shorter than $\sim2-3$\,kpc, which are found mainly in very late-type spirals (later than Sc; \citealt{elmegreen2.85}). We selected from the previous sample all disk galaxies with bulges and filtered them in terms of bulge-to-total and signal-to-noise ratios (see \S\ref{sec:res}) so that our effective sample has 251 barred and 324 unbarred galaxies, of which 187 are AGNs according to the classification in \citet{kau03}. No galaxy shows emission lines with equivalent width larger than 20\AA\ and therefore, according to the criterion in \citet[][table 1.1]{peterson97}, all AGNs are type 2. The galaxy stellar mass distribution is similar for barred and unbarred galaxies, with a difference at a statistical significance of less than 1$\sigma$, according to a Kolmogorov-Smirnov test (hereafter KS). This means that possible biases, which would be caused by different mass distributions, are absent in our results. For instance, this indicates that global star formation histories are similar in the barred and unbarred galaxies in our sample \citep[see][]{kauffmann03}. The spectra, obtained from the SDSS database, have a spectral resolution of $\lambda/\Delta\lambda\sim\,$1800 \citep{york+00} and an average S/N in the spectral region covered by the SDSS $g$-band of $\sim\,$21. The $g$-band spectral region is most relevant for our analysis, as it contains the spectral features most sensitive to the stellar parameters we aim at deriving, i.e. ages and metallicities. The spectra were brought to the rest frame and corrected for Galactic extinction as in \citet{cid+05}. It should be noted that SDSS spectra are taken through a fixed fiber size on sky of 3", centered at the galaxy center. \citet{gadotti09} shows that for most galaxies in his sample the light within the fiber diameter is emitted mainly by bulge stars. In this study, we have used the results from the decompositions in \citet{gadotti09} to measure the disk contribution. We verify, through KS tests, whether the distributions of such disk contamination inside the fiber are similar for barred and unbarred galaxies, avoiding possible related biases. This is discussed on a case-by-case basis. \section{Analysis} \label{sec:analysis} \begin{figure} \begin{center} \epsfig{file=fits.ps,bbllx=30,bblly=345,bburx=505,bbury=550,clip=true,width=\columnwidth} \caption{Example of a {\sc Starlight} fit. The galaxy identification corresponds to SDSS plate-mjd-fiber numbers. Observed spectrum and model fit are shown as black and red lines, respectively. Pixels not considered in the spectral fitting (either emission lines or clipped pixels) are marked as gray points. The average absolute deviation $\overline\Delta = (\sum{| (M_\lambda-O_\lambda)/O_\lambda|})/N_{pixels}$ of the fit is given. The bottom panel shows the residuals $O_\lambda-M_\lambda$. } \label{f:fit} \end{center} \end{figure} We use the spectrum fitting code {\sc Starlight}\footnote{See \url{http://www.starlight.ufsc.br}.} \citep{cid+05} to compare, on a pixel-by-pixel basis, the SDSS spectra to stellar population (SP) models. 
In short, {\sc Starlight} mixes different computational techniques to fit an observed spectrum $O_\lambda$ with a combination of $N_*$ simple stellar population (SSP) models. Extinction is modelled as due to foreground dust, and line-of-sight stellar motions are modelled by a Gaussian distribution $G$ centered at velocity $v_*$ and with dispersion $\sigma_*$. Both kinematical and SP parameters are derived during the fit and the best model spectrum is given by: \begin{equation} M_\lambda=M_{\lambda_0} \Bigg( \displaystyle\sum\limits_{j=1}^{N_*} {x_j b_{j,\lambda} r_\lambda} \Bigg) \otimes G(v_*,\sigma_*) \end{equation} \noindent where $b_{j,\lambda}$ is the spectrum of the $j$th SSP normalized at $\lambda_0$, $r_\lambda$ is the reddening term, $x$ is the population vector whose components $x_j$ (j=1,...,$N_*$) represent the fractional contribution of each SSP to the total synthetic flux at $\lambda_0$, $M_{\lambda_0}$ is the synthetic flux at the normalization wavelength, $G(v_*,\sigma_*)$ is the line-of-sight stellar velocity distribution, and $\otimes$ denotes the convolution operator. Known regions of emission lines in AGNs were masked for the whole sample, whether the galaxy is an AGN or not, to ensure a homogeneous analysis. As only type 2 AGNs are present in our sample, the non-stellar AGN contribution is limited to a few percent of the flux and does not affect the SP parameters derived \citep[Cid-Fernandes, priv. comm.]{cid+04}. Bad pixels or sky background residuals are clipped during the fit, when pixels deviate by more than three times the r.m.s. between $O_\lambda$ and $M_\lambda$. We refer the reader to \citet{cid+05} and references therein for more details on {\sc Starlight}. The SP models adopted are those of \citet{vazdekis+10} with an updated stellar library \citep{miles91}, and cover the wavelength range 3540\,--\,7400\AA\, at a resolution of FWHM $\sim$ 2.5\AA. Ages range between 63\,Myr and 18\,Gyr, and metallicities [M/H] between -2.32 and +0.22. A randomly chosen fit is shown in Fig.\,\ref{f:fit}, where the observed spectrum is shown in black and the {\sc Starlight} model in red. \section{Results} \label{sec:res} \begin{figure} \begin{center} \epsfig{file=stelpop.ps,bbllx=40,bblly=70,bburx=594,bbury=380,clip=true,width=\columnwidth} \caption{Ages vs. metallicities of bulges in unbarred and barred galaxies (top and bottom panels, respectively). Results for the whole sample are shown in the left-hand column, and for different bulge sizes in the remaining columns, as indicated.} \label{f:stelpop} \end{center} \end{figure} In Fig.\,\ref{f:stelpop} we show the results of our stellar population analysis, where light-weighted mean ages and metallicities -- normalized at wavelength 4020\AA\, -- are plotted for bulges in barred and unbarred galaxies. The robustness of the {\sc Starlight} results was thoroughly discussed in \citet{cid+05}, with tests performed also on SDSS spectra. They conclude that measurement uncertainties -- which can be parametrized by S/N -- dominate the final errors. From their Table 1, the uncertainties on the light-weighted parameters are $\sim$20\% for a S/N\,=\,10 spectrum. Although we have processed all spectra, we base our analysis solely on the results from spectra with S/N\,$\geq 10$, which constitute 93\% of the sample. 
To circumvent our inability to detect the short bars in galaxies with Hubble types later than Sc, we remove from the analysis all galaxies with bulge-to-total luminosity ratios below 0.043, which is the typical ratio for these latest Hubble types \citep{grawor08}. In Fig.\,\ref{f:stelpop} we show the results for the whole sample in the left-hand column, and separated into bulges with $r_e<0.8"$, $0.8"\leq r_e<1.3"$, and $r_e\geq1.3"$ in the remaining columns ($r_e$ in the $i$-band, as given in \citealt{gadotti09}). We note that the parameter space covered by the results does not change significantly with bulge size, which is evidence that there are no significant biases from disk contamination inside the SDSS fiber, even for small bulges. Note that bulge light dominates over disk light out to $\approx$ 2 times the bulge $r_e$ from the galaxy center, on average \citep[see][]{moriondo98,morelli08,gadotti09}. There seems to be a lack of very old ($>10$ Gyr) bulges in unbarred galaxies with $r_e\leq0.8"$. This is not the case for barred galaxies. If this were a consequence of disk contamination, bulges in unbarred galaxies would show younger ages on average than those in barred galaxies (opposite to what we find -- see below). Moreover, stellar populations younger than $\sim 5$ Gyr are seen equally in small and large bulges, barred and unbarred galaxies. It is for these ages that we find a significant difference between bulges in barred and unbarred galaxies. We thus conclude that our results are not affected by disk contamination. \begin{figure} \begin{center} \epsfig{file=distr.interval0.ps,bbllx=30,bblly=230,bburx=400,bbury=697,clip=true,width=0.7\columnwidth} \caption{Normalized distributions of ages (left-hand column) and metallicities (right-hand column) for bulges in barred and unbarred galaxies (red and black lines, respectively). Distributions for the whole sample, AGNs and non-active galaxies are given separately. This sample has 251 barred and 324 unbarred galaxies (106 and 81 AGNs, respectively). } \label{f:distribution0} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \epsfig{file=distr.interval1-2.ps,bbllx=35, bblly=230,bburx=375,bbury=715,clip=true,width=0.7\columnwidth}& \end{tabular} \caption{Normalized distributions of ages for bulges with $\log M_{bulge} < 10.1$ on the left-hand side, and $10.1 < \log M_{bulge} < 10.85$ on the right-hand side. The low-mass sub-sample has 160 barred and 157 unbarred galaxies (56 and 25 AGNs, respectively) and the high-mass sub-sample has 91 barred and 167 unbarred galaxies (50 and 56 AGNs, respectively). } \label{f:distribution1} \end{center} \end{figure} In Fig.\,\ref{f:distribution0} we present the normalized distributions of ages and metallicities derived, for an upper limit of bulge stellar mass of $10^{10.85}\rm{M}_\odot$ (above which there are no barred galaxies in our sample). The top panels show the distributions for the whole sample, and the middle and bottom panels show the distributions for AGNs and normal galaxies, respectively. The age distribution of bulges in barred galaxies shows an excess of populations younger than $\sim$4\,Gyr. This feature is enhanced when we divide the sample into normal (non-active) galaxies and AGNs: the excess of young populations is better seen in the distribution of normal galaxies, and disappears in AGNs. 
In fact, a KS test shows that the probabilities that the bulge mean age distributions are drawn from different populations for barred and unbarred galaxies are 98.65\% for the whole sample, 50.13\% for AGNs and 99.94\% for normal galaxies only. Thus, the difference between the mean stellar ages of bulges in non-active barred and unbarred galaxies is significant at a level of almost 4$\sigma$. The distributions of {\em bulge} stellar mass for the sample in Fig. \ref{f:distribution0} are, however, statistically different for barred and unbarred galaxies. Barred galaxies have less massive bulges than unbarred galaxies, {\em even though their total stellar mass is similar}, and also have larger disk contamination within the fiber. Therefore, and because less massive bulges tend to have younger mean ages, one cannot tell from Fig. \ref{f:distribution0} alone that bars do indeed make bulges younger. For this reason, we inspected several hundred bulge mass intervals in search of those where the distributions of bulge mass, and the distributions of disk-to-total light ratios inside the fiber, are the same for barred and unbarred galaxies at the $\sim1\sigma$ level (ensured with KS tests). We could not find intervals where both AGNs and normal galaxies show equal bulge mass distributions, and therefore focused on choosing optimal distributions for normal galaxies only (a detailed study of the AGNs will follow in a separate paper). We thus arrived at two intervals of bulge stellar mass -- below $10^{10.1}\rm{M}_\odot$ and between $10^{10.1}\rm{M}_\odot$ and $10^{10.85}\rm{M}_\odot$ -- whose corresponding results are shown in Fig. \ref{f:distribution1}. In the lower mass bin, the distributions of bulge ages are statistically similar, in contrast with Fig. \ref{f:distribution0}, which refers to the whole sample. However, for the high-mass bin, the distribution of bulge ages for barred galaxies is clearly bimodal. The mixture modelling statistical KMM test \citep[see][]{kmm} indicates that this is so at a confidence level of 99.9993\%, i.e. $>4\sigma$. The same test yields two normal distributions, with peaks at $4.7$ and 10.4 Gyr. We have run the KMM test on all other distributions discussed, and none resulted in a statistical significance larger than $\sim1\sigma$, in particular the distribution of bulge ages for unbarred galaxies. This bimodality does not seem to be a result of biased samples, and is clear evidence of a difference between the mean stellar ages of bulges in barred galaxies, as compared to unbarred galaxies. \begin{figure} \begin{center} \epsfig{file=bimodality.ps,bbllx=0,bblly=0,bburx=620,bbury=620,clip=true,width=\columnwidth} \caption{Normalized distributions for bulge ages in non-active galaxies, for several mass intervals as indicated. Distributions for barred and unbarred galaxies are shown in red and black lines, respectively. } \label{f:bimodality} \end{center} \end{figure} To better inspect this bimodality, we show in Fig. \ref{f:bimodality} the age distributions for several bulge mass intervals. This is essentially the same as having matched distributions of bulge mass, since the mass intervals in each panel are narrow. A first signal of bimodality in the age distribution of non-active galaxies appears in the interval $\log M_{bulge}$ between 9.7 and 10.2 (the KMM test yields a significance of 99.8928\%) and the peaks reach comparable strengths, as in Fig. \ref{f:distribution1} (significance of 99.9813\%), in the interval $\log M_{bulge}=$ 9.9 -- 10.4. 
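For readers who wish to apply this kind of test to their own measurements, a rough stand-in for the KS comparison and for a KMM-like bimodality check can be assembled from standard Python tools; the snippet below is not the code used in this work, and the input arrays are synthetic placeholders rather than our measurements.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic stand-ins for the mean bulge ages (Gyr) of the two sub-samples
ages_unbarred = rng.normal(9.0, 2.0, 167)
ages_barred = np.concatenate([rng.normal(4.7, 1.0, 40),
                              rng.normal(10.4, 1.5, 51)])

# two-sample KS test; 1 - p is quoted as the probability of different parents
ks = ks_2samp(ages_barred, ages_unbarred)
print(f"KS D = {ks.statistic:.3f}, significance = {100 * (1 - ks.pvalue):.2f}%")

# KMM-like check: one- versus two-component Gaussian mixture, compared via BIC
X = ages_barred.reshape(-1, 1)
g1 = GaussianMixture(1, random_state=0).fit(X)
g2 = GaussianMixture(2, random_state=0).fit(X)
print("two-component means (Gyr):", np.sort(g2.means_.ravel()))
print("Delta BIC (1 comp - 2 comp):", g1.bic(X) - g2.bic(X))
\end{verbatim}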
A similar analysis was done for AGNs, corroborating the results above that these effects are absent in AGNs. This result provides statistically based corroboration that bars have important effects on the processes of bulge building, though the dependence of these effects on bulge mass, and the bimodality in the bulge age distribution, have yet to be explained by theoretical work. Furthermore, bars also help build a reservoir of fuel for AGN activity. {\em In the low bulge mass bin, 35\% of barred galaxies are AGN, whereas this fraction drops to only 16\% when one considers unbarred galaxies. In the high mass bin, 55\% of barred galaxies are AGN, whereas 34\% of unbarred galaxies are AGN.} In both mass bins, if a galaxy is barred it has a higher chance of hosting an AGN. In the high mass bin, processes such as mergers might be acting to help fuel AGN activity, diminishing the difference in the fraction of AGNs between barred and unbarred galaxies. Bars are neither a necessary nor a sufficient condition for a galaxy to host an AGN, but these results indicate that in some cases bars do help in fueling AGN activity. There are several studies in the literature -- with opposing results -- on this issue \citep[see e.g.][]{mulreg97,ho97,kna00,lai02}, but they agree that homogeneity in the detection of AGN activity and in the assessment of the presence of a bar, as well as sample selection, are critical. Our sample is carefully drawn from a volume-limited sample, the data set is homogeneous, and both the AGN and bar classifications are done throughout in a consistent fashion. Nevertheless, we underline that our sample comprises only massive galaxies and that short bars are mostly missed, as discussed above. Further work is necessary to better understand why the difference between the ages of bulges in barred and unbarred galaxies disappears in AGNs, and why the dichotomy in the bulge ages of massive barred galaxies is not evident in AGNs. A likely interpretation is that feedback from the AGN activity will at some point push gas away from the galaxy center, preventing new star formation episodes \citep[see e.g.][]{schawinski07}. We stress that even if bulges in barred AGNs are typically less massive, they are {\em not} younger than their unbarred counterparts. The metallicity distributions show no important difference between barred and unbarred galaxies, but we will further explore the chemical enrichment in terms of $\alpha$-element over iron abundance ratios as a function of bulge morphology in a separate paper (P. Coelho \& D. A. Gadotti, in preparation). We have compared the ages with bar structural parameters -- effective surface brightness, effective radius, ellipticity, S\'ersic index, semi-major axis, boxiness and bar-to-total ratio -- but found no evidence for a relation between any of the bar properties and the age of the bulge population. This is not necessarily surprising, as bulge building by bars depends on complex physical processes and time scales, and on the availability of gas. For instance, a bar which was once strong, but is now weakened for any reason (bar weakening is more likely to happen than bar destruction; see e.g. \citealt{athet05}), could have contributed substantially to build a young population in the bulge of its host galaxy (if enough gas was available), and would now weaken a correlation between bar strength (e.g. ellipticity) and bulge age. 
\section{Conclusions} \label{sec:con} We derived stellar ages for a sample of 575 bulges in disk galaxies, 251 of which contain bars. When we consider the sample with bulge stellar masses $<10^{10.85}\rm{M}_\odot$, we find that the mean stellar ages of bulges of barred galaxies are on average lower than those of unbarred galaxies, at a statistical significance of 99.94\% (or almost 4$\sigma$), when one considers non-active galaxies only. In this sample the galaxy mass distributions are similar between barred and unbarred galaxies. To make the distributions of {\em bulge} stellar mass of barred and unbarred galaxies similar, we split the sample into two bulge mass bins. We find that bulges in massive non-AGN barred galaxies ($M_{bulge}>10^{10.1}\rm{M}_\odot$) show a bimodal stellar age distribution, at a confidence level of 99.9993\%, or more than $4\sigma$. This can be described as two normal distributions, centered at 4.7 and 10.4\,Gyr. This bimodality is present above a characteristic mass within $\log M_{bulge}=9.7 - 10.2$ and is absent for unbarred galaxies or AGNs. As discussed above, this is strong observational evidence that corroborates scenarios of bulge building by secular evolution processes induced by bars. On the other hand, this bimodality, and the ages for the two distributions it consists of, are new constraints yet to be explained by successful theories of bar evolution. We have verified that our results are not caused by biases in the samples compared. We have taken into account, at separate instances, the galaxy mass and bulge mass distributions of barred and unbarred galaxies, in order to have samples with e.g. similar star formation histories. We have also used sub-samples in which the contributions from disk light within the SDSS fiber, through which the spectral information used here is taken, are similarly distributed. Finally, we have not considered very late-type galaxies, for which the presence of a bar cannot be reliably assessed in our sample. Therefore, our results cannot be attributed to a bias in sample selection or a flaw in the methodology we use. Lastly, let us point out again that samples of barred and unbarred galaxies with similar galaxy total stellar mass distributions have statistically significantly different distributions of {\em bulge} mass, in the sense that bulges in barred galaxies have lower masses. This seems unexpected because, first, one expects bulge building by bars and, secondly, bars grow from disks, so one would rather expect changes in the sense of lower {\em disk} masses. Progress in both theoretical and observational results is needed to clarify this trend. \section{Acknowledgments} The authors thank Lia Athanassoula, Roberto Cid-Fernandes, William Schoenell, Lucimara Martins, Rub\'en S\'anchez-Janssen and Alan Alves Brito for useful discussions. We thank the anonymous referee for remarks that greatly improved our paper. PC acknowledges FAPESP support (projects 2008/58406-4 and 2009/09465-0).
\section{Introduction} Functionals are a class of statistical observables with wide applications in almost all disciplines \cite{Deng2020}. Anomalous diffusions are ubiquitous in the natural world. The distribution of functionals for anomalous diffusion is governed by the fractional Feynman-Kac equation \cite{Carmi2010,Deng2020,TurgemanCarmiBarkai:09}. From the point of view of practical applications, one has to resort to numerical methods to obtain the solution of the fractional Feynman-Kac equation; there are already some works on this issue, see, e.g., \cite{Chenminghua:15,ChenDeng:18,DengLiQianWang:18,ZhangDeng:17}, but most of their numerical analyses need regularity assumptions on the exact solution and the functional. This paper provides high-order schemes for the fractional Feynman-Kac equation without regularity requirements on either the exact solution or the functional. In the past few years, high-order algorithms have been proposed extensively for fractional diffusion equations \cite{chen2015,ford2017,jin2017,li:2020,yan2018}; their nonlocal property allows them to greatly improve the accuracy while keeping the same computational cost. As for the fractional Feynman-Kac equation, there are fewer related discussions; \cite{Chenminghua:15} studies a high-order scheme to solve the backward fractional Feynman-Kac equation with truncated L{\'e}vy flights, and the detailed error and stability analyses for the $1$-st order scheme are provided under some regularity assumptions on the solution; \cite{ChenDeng:18} provides a high-order scheme for the time tempered fractional Feynman-Kac equation when the solution $G(x_{0},\rho,t)\in C^{2}(\Omega)$. In practice, the regularity assumptions on the solution and the functional are usually hard to satisfy. So robust numerical algorithms without any regularity assumptions are more practical. Recently, \cite{Sun:2020} provides $1$-st and $2$-nd order schemes to solve the homogeneous backward fractional Feynman-Kac equation, and the related error estimates without regularity assumptions are established, but higher-order schemes of this approach for the backward fractional Feynman-Kac equation are still not available and there are many challenges in deriving the error estimates for the case with nonsmooth data. Here, while ensuring the spatial accuracy, we provide a systematic $k$-th order $(k=1,2,\ldots,6)$ fully discrete scheme with a complete error analysis for the inhomogeneous backward fractional Feynman-Kac equation \cite{Carmi2010, Deng2020}, i.e., \begin{equation}\label{equaty} \left\{ \begin{aligned} &\frac{\partial G(x_0,\rho,t)}{\partial t}=\,_0D^{1-\alpha,x_0}_t\Delta G(x_0,\rho,t)\\ &\qquad \qquad \qquad\qquad -\rho U(x_0)G(x_0,\rho,t)+f(x_0,\rho,t), \qquad(x_0,t)\in \Omega\times(0,T],\\ &G(x_0,\rho,0)=G_0(x_0),\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad x_0\in \Omega,\\ &G(x_0,\rho,t)=0,\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\qquad~~~ (x_0,t)\in \partial \Omega\times(0,T], \end{aligned} \right. 
\end{equation} where $G(x_0,\rho,t)=\int_{-\infty}^{\infty}G(x_0,\mathbb{A},t)e^{-\mathbf{i}\rho \mathbb{A}}d \mathbb{A}$ with $\mathbf{i}$ being the imaginary unit; $G(x_0, \mathbb{A}, t)$ is the joint probability density function of finding the particle with the functional $\mathbb{A}$ at time $t$ and the initial position of the particle at $x_0$; the functional $\mathbb{A}=\int_{0}^{t}U(x_{0}(\tau))d\tau$ with $U(x_0)$ being a prescribed function depending on the concrete applications \cite{Deng2020,Kac:1949} and $x_0(t)$ a trajectory of anomalous diffusion starting at $x_{0}$; $\Delta$ means the Laplace operator; $\alpha\in(0,1)$; $f(x_0,\rho,t)$ is the source term; $\Omega$ is a bounded convex polygonal domain in $\mathbb{R}^{n}$ $(n=1,2,3)$ and we assume that $U(x_{0})$ is bounded in $\bar{\Omega}$ in this paper; $T$ is a fixed final time; $~_{0}D^{\alpha,x_0}_t$ denotes the Riemann-Liouville fractional substantial derivative defined by \cite{LiDengZhao:19} \begin{equation}\label{eqRLFD} \begin{aligned} ~_{0}D^{\alpha,x_0}_tG(x_0,\rho,t)=e^{-t\rho U(x_0)}~_0D^{\alpha}_t(e^{t\rho U(x_0)}G(x_0,\rho,t)), \qquad \alpha\in(0,1), \end{aligned} \end{equation} and $~_{0}D^{\alpha}_{t}$ means the Riemann-Liouville fractional derivative defined by \cite{Podlubny1999} \begin{equation*} ~_{0}D^{\alpha}_{t}G(x_0,\rho,t)=\frac{1}{\Gamma(1-\alpha)}\frac{\partial}{\partial t}\int^t_{0}(t-\xi)^{-\alpha}G(x_0,\rho,\xi)d\xi, \qquad \alpha\in(0,1). \end{equation*} Following \cite{ZhangDeng:17} and using the relationship between the Caputo and Riemann-Liouville fractional derivatives, one can get the equivalent form of Eq. \eqref{equaty}, i.e., \begin{equation}\label{eqretosol} \left\{ \begin{aligned} & \,_0D^{\alpha,x_0}_tG(x_0,\rho,t)-\Delta G(x_0,\rho,t)\\ & ~~ =e^{-\rho U(x_{0})t}\,_0D^{\alpha}_tG(x_0,\rho,0)+\,_{0}I^{1-\alpha,x_{0}}_{t}f(x_0,\rho,t), \quad(x_0,t)\in \Omega\times(0,T],\\ &G(x_0,\rho,0)=G_0(x_0),\qquad\qquad\qquad\qquad\qquad\qquad\qquad x_0\in \Omega,\\ &G(x_0,\rho,t)=0,\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\quad\,\, (x_0,t)\in \partial\Omega\times(0,T], \end{aligned} \right. \end{equation} where $~_{0}I^{\alpha,x_{0}}_{t}$ means the Riemann-Liouville fractional substantial integral defined by \cite{LiDengZhao:19} \begin{equation*} \begin{aligned} ~_{0}I^{\alpha,x_0}_tf=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}e^{-\rho U(x_{0})(t-s)}f(s)ds,\qquad \alpha\in(0,1). \end{aligned} \end{equation*} Comparing \eqref{eqretosol} with \eqref{equaty}, we separate the operators $_{0}D^{1-\alpha,x_{0}}_{t}$ and $-\Delta$ to reduce the influence of the regularity of $U(x_{0})$ on the convergence order in space, so it is more effective to establish the numerical scheme based on \eqref{eqretosol} instead of \eqref{equaty}. The correction scheme of higher-order BDF convolution quadrature for fractional evolution equations is provided in \cite{jin2017}. If its idea is applied to solve Eq. \eqref{eqretosol}, it may deteriorate the convergence order in space. The main reason is that the Riemann-Liouville fractional substantial derivative is a time-space coupled nonlocal operator, which makes $\beta(z,x_{0})$ and the $L^{2}$ projection $P_{h}$ non-commutable (one can refer to Secs. $2$ and $3$ for the definitions of the operators). 
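As an aside on the substantial integral $~_{0}I^{\alpha,x_{0}}_{t}$ introduced above, its pointwise evaluation at a fixed $x_{0}$ is straightforward. The following minimal Python sketch is purely illustrative (the function name, the crude midpoint rule and the parameter values are ours and play no role in the schemes developed below); it approximates the integral and checks the result against the closed form $t^{\alpha}/\Gamma(\alpha+1)$ obtained when $f\equiv 1$ and $\rho U(x_{0})=0$.
\begin{verbatim}
import numpy as np
from math import gamma

def substantial_integral(f, t, alpha, rhoU, n=20000):
    # Midpoint-rule approximation of
    #   (1/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) exp(-rhoU*(t-s)) f(s) ds
    # at a fixed spatial point x_0, with rhoU standing for rho*U(x_0).
    s = (np.arange(n) + 0.5) * (t / n)   # midpoints avoid the s = t singularity
    kernel = (t - s) ** (alpha - 1.0) * np.exp(-rhoU * (t - s))
    return (t / n) * np.sum(kernel * f(s)) / gamma(alpha)

alpha, t = 0.6, 1.0
approx = substantial_integral(lambda s: np.ones_like(s), t, alpha, rhoU=0.0)
exact = t ** alpha / gamma(alpha + 1.0)
print(approx, exact)   # the two values agree to roughly three digits
\end{verbatim}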
Thus, to preserve the optimal convergence rates in space, we build the finite element scheme by applying the $L^{2}$ projection operator $P_{h}$ to $e^{-\rho U(x_{0})t}~_{0}D^{\alpha}_{t}G_{0}$ and $~_{0}I^{1-\alpha,x_{0}}_{t}f$ instead of to $G_{0}$ and $f$, which avoids estimating the errors caused by $\|((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G_{0})-((\beta_{\tau,k}(z,x_{0}))^{\alpha}+A_{h})^{-1}(\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})P_{h}G_{0}\|_{L^{2}(\Omega)}$, and the regularity requirement on $U(x_{0})$ is also relaxed naturally. Moreover, according to the finite element scheme \eqref{eqfinsch} (see Sec. 3), we provide novel high-order fully discrete schemes, which can achieve $k$-th order convergence in time and optimal convergence in space without any regularity assumptions on the exact solution. The rest of the paper is organized as follows. We first provide some preliminaries and a regularity estimate for the solution of Eq. \eqref{eqretosol} in Sec. 2. In Sec. 3, we use the finite element method to discretize the Laplace operator and provide novel high-order approximations in time based on the high-order BDF convolution quadrature. The error analyses presented in Sec. 4 show that our schemes can not only preserve $k$-th order convergence in time, but also achieve the optimal convergence rate in space. In Sec. 5, extensive numerical experiments are performed to show the effectiveness of the schemes. We conclude the paper with some discussions in the last section. Throughout the paper, the generic constant $C>0$ may be different at different occurrences and $\epsilon>0$ is an arbitrarily small constant. \section{Preliminaries} We set $A=-\Delta$ with a zero Dirichlet condition in the following. For any $ q\geq 0 $, denote the space $ \dot{H}^{q}(\Omega)=\{v\in L^2(\Omega): \|v\|^2_{\dot{H}^q(\Omega)}<\infty \}$ with the norm \cite{Thomee2006} \begin{equation*} \|v\|^2_{\dot{H}^q(\Omega)}=\sum_{j=1}^{\infty}\lambda_j^q(v,\varphi_j)^2, \end{equation*} where $ {(\lambda_j,\varphi_j)} $ are the eigenvalues ordered non-decreasingly and the corresponding eigenfunctions (normalized in the $ L^2(\Omega) $ norm) of the operator $A$. Below we define sectors $\Sigma_{\theta}$ and $\Sigma_{\theta,\kappa}$ in the complex plane $\mathbb{C}$, i.e., for $\kappa>0$ and $\pi/2<\theta<\pi$, \begin{equation*} \begin{aligned} &\Sigma_{\theta}=\{z\in\mathbb{C}\setminus \{0\}:|\arg z|\leq \theta\}, \\ &\Sigma_{\theta,\kappa}=\{z\in\mathbb{C}:|z|\geq\kappa,|\arg z|\leq \theta\},\\ \end{aligned} \end{equation*} and the contour $\Gamma_{\theta,\kappa}$ is defined by \begin{equation*} \Gamma_{\theta,\kappa}=\{z\in\mathbb{C}: |z|=\kappa,|\arg z|\leq \theta\}\cup\{z\in\mathbb{C}: z=r e^{\pm \mathbf{i}\theta}: r\geq \kappa\}, \end{equation*} oriented with an increasing imaginary part, where $\mathbf{i}$ denotes the imaginary unit and $\mathbf{i}^2=-1$. Then we denote by $\|\cdot\|$ the operator norm from $L^2(\Omega)$ to $L^2(\Omega)$, and use the notation `$\widetilde{u}$' for the Laplace transform of $u$ and the abbreviations $G(t)$, $G_0$ and $f$ for $G(x_0,\rho,t)$, $G_0(x_0)$ and $f(x_0,\rho,t)$ respectively in the following. \iffalse Next, we recall the Laplace transforms of Riemann-Liouville fractional substantial derivative and Riemann-Liouville fractional substantial integral with $\alpha\in(0,1)$\cite{LiDengZhao:19}, i.e. 
\begin{equation*} \widetilde{~_0D^{\alpha,x_0}_{t}G}(z)=(\beta(z,x_0))^\alpha \tilde{G}(z),~~{\rm and}~~\widetilde{~_0I^{\alpha,x_0}_{t}G}(z)=(\beta(z,x_0))^{-\alpha} \tilde{G}(z), \end{equation*} where \begin{equation}\label{defbeta} \beta(z,x_0)=(z+\rho U(x_0)). \end{equation} For brevity, we denote $\beta(z)$ as $\beta(z,x_0)$ below. \fi According to the Laplace transforms of the Riemann-Liouville fractional substantial derivative and integral \cite{LiDengZhao:19}, the Laplace transform representation of Eq. \eqref{eqretosol} can be given as \begin{equation}\label{equsolrep} \tilde{G}(z)=((\beta(z,x_0))^{\alpha}+A)^{-1}(\beta(z,x_0))^{\alpha-1}G_{0}+((\beta(z,x_0))^{\alpha}+A)^{-1}(\beta(z,x_0))^{\alpha-1}\tilde{f}, \end{equation} where \begin{equation}\label{defbeta} \beta(z,x_0)=z+\rho U(x_0) \end{equation} and we denote it briefly by $\beta(z)$ below. Then we present an estimate of $\beta(z)$ and the regularity estimate of the solution of Eq. \eqref{eqretosol}. \begin{lemma}[\cite{DengLiQianWang:18}]\label{lemmaBeta} Let $\beta(z)$ be defined in \eqref{defbeta} and let $U(x_0)$ be bounded in $\bar{\Omega}$. By choosing $\theta \in \left(\frac{\pi}{2},\pi\right)$ sufficiently close to $\frac{\pi}{2}$ and $\kappa>0$ sufficiently large (depending on the value $|{{\rho}}|\|U(x_0)\|_{L^{\infty}(\bar{\Omega})}$), we have \begin{enumerate}[(1)] \item For all $x_{0}\in \bar{\Omega}$ and ${{z}}\in \Sigma_{\theta,\kappa}$, it holds that $\beta({{z}}) \in \Sigma_{\frac{3\pi}{4},\frac{\kappa}{2}}$ and \begin{equation}\label{chapter4section2_2prop1conc1} C_1|{{z}}|\leq|\beta({{z}})|\leq C_2|{{z}}|, \end{equation} where $C_1$ and $C_2$ denote two positive constants. So $\beta({{z}})^{1-\alpha}$ and $\beta({{z}})^{\alpha-1}$ are both analytic functions of ${{z}}\in \Sigma_{\theta,\kappa} $. \item The operator $((\beta({{z}}))^\alpha+A)^{-1}:L^2(\Omega)\rightarrow L^2(\Omega)$ is well-defined, bounded, and analytic with respect to $z\in \Sigma_{\theta,\kappa}$, satisfying \begin{equation}\label{chapter4section2_2prop1conc21} \|A((\beta(z))^\alpha+A)^{-1}\|\leq C~~~~~ \forall{{z}} \in \Sigma_{\theta,\kappa}, \end{equation} and \begin{equation}\label{chapter4section2_2prop1conc22} \|((\beta({{z}}))^\alpha+A)^{-1}\|\leq C|{{z}}|^{-\alpha}~~~\forall{{z}} \in \Sigma_{\theta,\kappa}. \end{equation} \end{enumerate} \end{lemma} Combining \eqref{equsolrep} and Lemma \ref{lemmaBeta}, one can get the regularity estimate for the solution of Eq. \eqref{eqretosol} (refer to \cite{Sun:2020} for the detailed proof). \begin{theorem}\label{thmreg} Let $G(t)$ be the solution of Eq. \eqref{eqretosol}. Assume $U(x_0)$ is bounded in $\bar{\Omega}$. If $G_{0}\in L^{2}(\Omega)$ and $\int_{0}^{t}(t-s)^{-\sigma\alpha/2}\|f(s)\|_{L^2(\Omega)}ds< \infty$, then we have the estimate \begin{equation*} \|G(t)\|_{\dot{H}^{\sigma}(\Omega)}\leq Ct^{-\sigma\alpha/2}\|G_0\|_{L^2(\Omega)}+C\int_{0}^{t}(t-s)^{-\sigma\alpha/2}\|f(s)\|_{L^2(\Omega)}ds, \quad \sigma\in[0,2]. \end{equation*} \end{theorem} \section{Modified high-order BDF fully discrete scheme} In this section, we first use the finite element method to discretize the Laplace operator in \eqref{eqretosol}. Then the modified high-order BDF fully discrete scheme is constructed based on the finite element semi-discrete scheme, and the corresponding correction criteria are also proposed. Let $\mathcal{T}_h$ be a shape regular quasi-uniform partition of the domain $\Omega$, where $h$ is the maximum diameter. 
Denote by $ X_h $ the piecewise linear finite element space \begin{equation*} X_{h}=\{v_h\in C(\bar{\Omega}): v_h|_\mathbf{T}\in \mathcal{P}^1,\ \forall \mathbf{T}\in\mathcal{T}_h,\ v_h|_{\partial \Omega}=0\}, \end{equation*} where $\mathcal{P}^1$ denotes the set of piecewise polynomials of degree $1$ over $\mathcal{T}_h$. We denote by $(\cdot,\cdot)$ the $L^{2}$ inner product and define the $ L^2 $-orthogonal projection $ P_h: L^2(\Omega)\rightarrow X_h $ by \begin{equation*} \begin{aligned} &(P_hu,v_h)=(u,v_h) ~~~~\forall v_h\in X_h.\\ \end{aligned} \end{equation*} Then we use the finite element method to discretize the operator $-\Delta$, and the finite element scheme of Eq. \eqref{eqretosol} can be written as: Find $G_{h}(t)\in X_{h}$ such that \begin{equation}\label{eqfinsch} \begin{aligned} &(\,_0D^{\alpha,x_0}_tG_{h},v_{h})+(\nabla G_{h},\nabla v_{h}) =\\ &\qquad\qquad\qquad(e^{-\rho U(x_{0})t}\,_0D^{\alpha}_tG(x_0,\rho,0),v_{h})+(\,_{0}I^{1-\alpha,x_{0}}_{t}f(x_0,\rho,t),v_{h})\qquad \forall v_{h}\in X_{h}. \end{aligned} \end{equation} Different from the traditional finite element scheme, we apply the $L^2$ projection $P_{h}$ to $e^{-\rho U(x_{0})t}\,_0D^{\alpha}_tG(x_0,\rho,0)$ and $\,_{0}I^{1-\alpha,x_{0}}_{t}f(x_0,\rho,t)$ instead of to $G_{0}$ and $f$. Thus the errors between $P_{h}(e^{-\rho U(x_{0})t}\,_0D^{\alpha}_tG(x_0,\rho,0))$ and $e^{-\rho U(x_{0})t}\,_0D^{\alpha}_tP_{h}(G(x_0,\rho,0))$ and the ones between $P_{h}(\,_{0}I^{1-\alpha,x_{0}}_{t}f(x_0,\rho,t))$ and $\,_{0}I^{1-\alpha,x_{0}}_{t}P_{h}(f(x_0,\rho,t))$ no longer need to be considered, which relaxes the regularity requirement on $U(x_{0})$. See the related error analyses in Sec. 4 below. Next, we present the modified high-order BDF fully discrete scheme in detail. Let the time step size $\tau=T/N$ with $N\in\mathbb{N}$, $t_i=i\tau$, $i=0,1,\ldots,N$, and $0=t_0<t_1<\cdots<t_N=T$. Introduce the generating function $\delta_{\tau,k}(\zeta)$ \cite{lubich1988-1,lubich1988-2,lubich1996} and $\beta_{\tau,k}(z)$ $(k=1,2,\ldots,6)$ as \begin{equation}\label{betatauOk} \delta_{\tau,k}(\zeta)=\frac{1}{\tau}\delta_{k}(\zeta)=\frac{1}{\tau}\sum_{i=1}^{k}\frac{(1-\zeta)^{i}}{i},\quad \beta_{\tau,k}(z)=\delta_{\tau,k}(e^{-\tau\beta(z)}), \end{equation} where $\beta(z)$ is defined in \eqref{defbeta}. The function $\delta_{k}(\zeta)$ has the following property. \begin{lemma}[\cite{lubich1988-1}]\label{lemdk} $\delta_{k}(\zeta)$ is analytic and without zeros in a neighborhood of the closed unit disc $|\zeta|\leq 1$, with the exception of a zero at $\zeta=1$, and $\delta_{k}(\zeta)$ satisfies that \begin{equation*} |\arg \delta_{k}(\zeta)|\leq \pi -\vartheta_{k}\quad {\rm for}~|\zeta|<1, \end{equation*} where $\vartheta_{k}=90^{\circ}$, $90^{\circ}$, $88^{\circ}$, $73^{\circ}$, $51^{\circ}$, $18^{\circ}$ for $k=1,\ldots,6$, respectively. \end{lemma} Generally, according to the convolution quadrature \cite{jin2017,lubich1988-1,lubich1988-2,lubich1996} generated by the $k$-th order BDF, the Riemann-Liouville fractional derivative with $\alpha \in (0,1)$ can be approximated by \begin{equation*} ~_{0}D^{\alpha}_{t}\varphi(t_{n})\approx \sum_{i=0}^{n}d^{\alpha,k}_{i}\varphi^{n-i}, \end{equation*} where $\varphi^{n}=\varphi(t_{n})$ and \begin{equation*} (\delta_{\tau,k}(\zeta))^{\alpha}=\sum_{i=0}^{\infty}d^{\alpha,k}_{i}\zeta^{i}. 
\end{equation*} Similarly, the Riemann-Liouville fractional substantial derivative can be approximated by \iffalse \begin{equation*} ~_{0}D^{\alpha,x_{0}}_{t}\varphi(t_{n})\approx \sum_{i=0}^{n}d^{\alpha,k}_{i}e^{-\rho U(x_{0})t_{i}}\varphi^{n-i}. \end{equation*} Following the ideas of the derivation of the previous modified higher-order scheme, using Taylor's expansion, we have \begin{equation}\label{eqtaylorf} f(t)=\sum_{i=0}^{k-2}\frac{t^{i}}{i!}\partial^{i}_{t}f(0)+R_{k}(t), \end{equation} where \begin{equation*} R_{k}(t)=\frac{t^{k-1}}{(k-1)!}\partial^{k-1}_{t}f(0)+\frac{t^{k-1}}{(k-1)!}\ast\partial^{k}_{t}f(s) \end{equation*} and $\ast$ denotes the convolution. And then, we correct $k$-th order BDF scheme at each step as: find $G^{n}_{h}\in X_{h}$ such that \fi \begin{equation}\label{eqdisstfd} ~_{0}D^{\alpha,x_{0}}_{t}\varphi(t_{n})\approx \sum_{i=0}^{n}d^{\alpha,k}_{i}e^{-\rho U(x_{0})t_{i}}\varphi^{n-i}. \end{equation} By using \eqref{eqdisstfd}, we have the following $k$-th order BDF fully discrete scheme: Find $G^{n}_{h}\in X_{h}$ such that \begin{equation}\label{equncorrect} \begin{aligned} &\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{i}\rho U(x_{0})}G^{n-i}_{h} ,v_{h})+(\nabla G^{n}_{h}, \nabla v_{h})\\ &\qquad=\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{n}\rho U(x_{0})}G^{0},v_{h})+\sum_{i=0}^{n-1}d^{\alpha-1,k}_{i}(e^{-t_{i}\rho U(x_{0})}f^{n-i},v_{h}), \end{aligned} \end{equation} where $f^{n}=f(t_{n})$ and $G^{0}=G_{0}(x_{0})$. In fact, for \eqref{equncorrect}, the desired $k$-th order accuracy can be reached only under the condition that the solution is regular enough. So here, we try to modify the scheme and get a robust $k$-th order scheme for the case with nonsmooth data. First, by Taylor's expansion, we split $f$ into \begin{equation}\label{eqtaylorf} f(t)=\sum_{i=0}^{k-2}\frac{t^{i}}{i!}\partial^{i}_{t}f(0)+R_{k}(t), \end{equation} where \begin{equation*} R_{k}(t)=\frac{t^{k-1}}{(k-1)!}\partial^{k-1}_{t}f(0)+\frac{t^{k-1}}{(k-1)!}\ast\partial^{k}_{t}f(t) \end{equation*} and `$\ast$' denotes the convolution. To capture the regularity property of the solution at the starting point, the $k$-th order BDF fully discrete scheme can be modified as: Find $G^{n}_{h}\in X_{h}$ such that \begin{equation}\label{eqfullschemes1} \begin{aligned} &\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{i}\rho U(x_{0})}G^{n-i}_{h} ,v_{h})+(\nabla G^{n}_{h}, \nabla v_{h})-\sum_{j=1}^{k-1}d^{\alpha,k}_{n-j}a^{(k)}_{j}(e^{-t_{n}\rho U(x_{0})}G^{0},v_{h})\\ &\qquad=\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{n}\rho U(x_{0})}G^{0},v_{h})+\sum_{i=0}^{n-1}d^{\alpha-1,k}_{i}(e^{-t_{i}\rho U(x_{0})}f^{n-i},v_{h})\\ &\qquad\quad+\sum_{j=1}^{k-1}a^{(k)}_{j}d^{\alpha-1,k}_{n-j}(e^{-t_{n-j}\rho U(x_{0})}f^{0},v_{h})\\ &\qquad\quad+\sum_{l=1}^{k-2}\sum_{j=1}^{k-1}b^{(k)}_{l,j}\tau^{l}d^{\alpha-1,k}_{n-j}(e^{-t_{n-j}\rho U(x_{0})}\partial^{l}_{t}f(0),v_{h}) \qquad\qquad \forall v_{h}\in X_{h}, \end{aligned} \end{equation} where $a^{(k)}_{j}$ and $b^{(k)}_{l,j}$ are coefficients to be determined below. Next, introduce $A_{h}$ as \begin{equation*} (A_{h}u_{h},v_{h})=(\nabla u_{h},\nabla v_{h})\qquad \forall u_{h},v_{h}\in X_{h}. 
\end{equation*} Thus \eqref{eqfullschemes1} can be expressed as \begin{equation}\label{eqfullscheme} \begin{aligned} &\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{i}\rho U(x_{0})}G^{n-i}_{h}-P_{h}(e^{-t_{n}\rho U(x_{0})}G^{0}))+A_{h}G^{n}_{h}\\ &=\sum_{j=1}^{k-1}d^{\alpha,k}_{n-j}a^{(k)}_{j}P_{h}(e^{-t_{n}\rho U(x_{0})}G^{0})+\sum_{i=0}^{n-1}d^{\alpha-1,k}_{i}P_{h}(e^{-t_{i}\rho U(x_{0})}f^{n-i})\\ &\quad +\sum_{j=1}^{k-1}a^{(k)}_{j}d^{\alpha-1,k}_{n-j}P_{h}(e^{-t_{n-j}\rho U(x_{0})}f^{0})\\ &\quad +\sum_{l=1}^{k-2}\sum_{j=1}^{k-1}b^{(k)}_{l,j}\tau^{l}d^{\alpha-1,k}_{n-j}P_{h}(e^{-t_{n-j}\rho U(x_{0})}\partial^{l}_{t}f(0)). \end{aligned} \end{equation} \begin{theorem} \label{thmfulldis} The solution of fully discrete scheme \eqref{eqfullscheme} can be represented as \begin{equation}\label{eqfulldissol} \begin{aligned} & G^{n}_{h}= \\ &\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\left ((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G^{0}\right )dz\\ &+\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\left ((\beta_{\tau,k}(z))^{\alpha-1}(\delta_{\tau,k}(e^{-z\tau}))^{-1}\mu_{k}(e^{-z\tau})f^{0}\right )dz\\ &+\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}\sum_{l=1}^{k-2}P_{h}\left ((\beta_{\tau,k}(z))^{\alpha-1}\eta_{k,l}(e^{-z\tau})\partial^{l}_{t}f(0)\right )dz\\ &+\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\left ((\beta_{\tau,k}(z))^{\alpha-1}\tau\sum_{n=1}^{\infty}R^{n}_{k}e^{-zt_{n}}\right )dz\\ \end{aligned} \end{equation} with the contour $\Gamma^\tau_{\theta,\kappa}=\{z\in \mathbb{C}:\kappa\leq |z|\leq\frac{\pi}{\tau\sin(\theta)},|\arg z|=\theta\}\cup\{z\in \mathbb{C}:|z|=\kappa,|\arg z|\leq\theta\}$ and \begin{equation*} \begin{aligned} &\mu_{k}(\zeta)=\delta_{k}(\zeta)\left (\frac{\zeta}{1-\zeta}+\sum_{j=1}^{k-1}a_{j}^{(k)}\zeta^{j}\right ),\qquad \gamma_{l}(\zeta)=\left (\zeta \frac{d}{d\zeta}\right )^{l}\frac{1}{1-\zeta},\\ &\eta_{k,l}(\zeta)=\left(\frac{\gamma_{l}(\zeta)}{l!}+\sum_{j=1}^{k-1}b^{(k)}_{l,j}\zeta^{j}\right)\tau^{l+1}. \end{aligned} \end{equation*} Here $R^{n}_{k}=R_{k}(t_{n})$. \end{theorem} \begin{proof} Multiplying $\zeta^{n}$ on both sides of $\eqref{eqfullscheme}$ and summing $n$ from $1$ to $\infty$ yield \begin{equation*} \begin{aligned} &\sum_{n=1}^{\infty}\sum_{i=0}^{n-1}d^{\alpha,k}_{i}e^{-t_{i}\rho U(x_{0})}G^{n-i}_{h}\zeta^{n}+\sum_{n=1}^{\infty}A_{h}G^{n}_{h}\zeta^{n}\\ &= \sum_{n=1}^{\infty}\sum_{i=0}^{n-1}d^{\alpha,k}_{i}P_{h}(e^{-t_{n}\rho U(x_{0})}G^{0})\zeta^{n}+\sum_{n=1}^{\infty}\sum_{j=1}^{k-1}d^{\alpha,k}_{n-j}a^{(k)}_{j}P_{h}(e^{-t_{n}\rho U(x_{0})}G^{0})\zeta^{n}\\ &\quad+\sum_{n=1}^{\infty}\sum_{i=0}^{n-1}d^{\alpha-1,k}_{i}P_{h}(e^{-t_{i}\rho U(x_{0})}f^{n-i})\zeta^{n} \\ &\quad +\sum_{n=1}^{\infty}\sum_{j=1}^{k-1}d^{\alpha-1,k}_{n-j}a^{(k)}_{j}P_{h}(e^{-t_{n-j}\rho U(x_{0})}f^{0})\zeta^{n}\\ &\quad+\sum_{l=1}^{k-2}\sum_{j=1}^{k-1}\sum_{n=1}^{\infty}b^{(k)}_{l,j}d^{\alpha-1,k}_{n-j}P_{h}(e^{-t_{n-j}\rho U(x_{0})}\tau^{l}\partial^{l}_{t}f(0))\zeta^{n}. 
\end{aligned} \end{equation*} Using definitions of $\delta_{\tau,k}$ and doing simple calculations lead to \begin{equation*} \begin{aligned} &((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha}+A_{h})\sum_{n=1}^{\infty}G^{n}_{h}\zeta^{n}\\ =&P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha}\left(\frac{e^{-\tau\rho U(x_{0})}\zeta}{1-e^{-\tau\rho U(x_{0})}\zeta}+\sum_{j=1}^{k-1}a^{(k)}_{j}( e^{-\tau\rho U(x_{0})}\zeta)^{j}\right)G^{0}\right )\\ &+P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha-1}\left(\frac{\zeta}{1-\zeta}+\sum_{j=1}^{k-1}a^{(k)}_{j}\zeta^{j}\right)f^{0}\right )\\ &+\sum_{l=1}^{k-2}P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha-1}\left(\frac{\gamma_{l}(\zeta)}{l!}+\sum_{j=1}^{k-1}b^{(k)}_{l,j}\zeta^{j}\right)\tau^{l}\partial^{l}_{t}f(0)\right )\\ &+P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha-1}\sum_{n=1}^{\infty}R^{n}_{k}\zeta^{n}\right ).\\ \end{aligned} \end{equation*} According to Cauchy's integral formula, it holds that \begin{equation*} \begin{aligned} G^{n}_{h}=&\frac{1}{2\pi \mathbf{i}}\int_{|\zeta|=\xi_{\tau}}\zeta^{-n-1}((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha}+A_{h})^{-1}\\ &\cdot P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha}\left(\frac{e^{-\tau\rho U(x_{0})}\zeta}{1-e^{-\tau\rho U(x_{0})}\zeta}+\sum_{n=1}^{k-1}a^{(k)}_{n}(e^{-\tau\rho U(x_{0})}\zeta)^{n}\right)G^{0}\right )d\zeta\\ &+\frac{1}{2\pi \mathbf{i}}\int_{|\zeta|=\xi_{\tau}}\zeta^{-n-1}((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha}+A_{h})^{-1}\\ &\cdot P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha-1}\left(\frac{\zeta}{1-\zeta}+\sum_{n=1}^{k-1}a^{(k)}_{n}\zeta^{n}\right)f^{0}\right )d\zeta\\ &+\frac{1}{2\pi \mathbf{i}}\int_{|\zeta|=\xi_{\tau}}\zeta^{-n-1}((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha}+A_{h})^{-1}\\ &\cdot \sum_{l=1}^{k-2}P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha-1}\left(\frac{\gamma_{l}(\zeta)}{l!}+\sum_{j=1}^{k-1}b^{(k)}_{l,j}\zeta^{j}\right)\tau^{l}\partial^{l}_{t}f(0)\right )d\zeta\\ &+\frac{1}{2\pi \mathbf{i}}\int_{|\zeta|=\xi_{\tau}}\zeta^{-n-1}((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha}+A_{h})^{-1} \\ &\cdot P_{h}\left ((\delta_{\tau,k}(e^{-\tau\rho U(x_{0})}\zeta))^{\alpha-1}\sum_{n=1}^{\infty}R^{n}_{k}\zeta^{n}\right )d\zeta,\\ \end{aligned} \end{equation*} where $\xi_\tau=e^{-\tau(\kappa+1)}$. Taking $\zeta=e^{-z\tau}$ and deforming $\Gamma^\tau=\{z=\kappa+1+\mathbf{i}y:y\in\mathbb{R}~{\rm and}~|y|\leq \pi/\tau\}$ to $\Gamma^\tau_{\theta,\kappa}$ imply the desired results. \end{proof} To construct the modification criteria, using \eqref{eqtaylorf}, we rewrite the solution of Eq. \eqref{eqretosol} as \begin{equation}\label{eqsolresp1} \begin{aligned} \tilde{G}=&((\beta(z))^{\alpha}+A)^{-1}((\beta(z))^{\alpha-1}G_{0})\\ &+\sum_{i=0}^{k-2}((\beta(z))^{\alpha}+A)^{-1}((\beta(z))^{\alpha-1}z^{-i-1}\partial^{i}_{t}f(0))\\ &+((\beta(z))^{\alpha}+A)^{-1}((\beta(z))^{\alpha-1}\tilde{R}_{k}). 
\end{aligned} \end{equation} By comparing \eqref{eqfulldissol} and \eqref{eqsolresp1}, to guarantee $\mathcal{O}(\tau^{k})$ in time and $\mathcal{O}(h^{2})$ in space accuracies at the same time, the following estimates are expected, i.e., for $z\in \Gamma_{\theta,\kappa}^{\tau}$, \begin{equation}\label{equreqest} \begin{aligned} &\|\mu_{k}(e^{-z\tau})-1\|\leq C|z|^{k}\tau^{k},\\ &\left \|\eta_{k,l}(e^{-z\tau})-\frac{1}{z^{l+1}}\right \|\leq C|z|^{k-l-1}\tau^{k},\quad l=1,2,\ldots ,k-2,\\ &\|\beta_{\tau,k}(z)-\beta(z)\|\leq C|z|^{k+1}\tau^{k},\\ &\|((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}-((\beta_{\tau,k}(z))^{\alpha}+A_h)^{-1} P_h\|\leq Ch^{2}. \end{aligned} \end{equation} It's easy to check that the last two estimates hold automatically; see Lemmas \ref{lemmabetatauOk} and \ref{lemeroper}. For the first two estimates, similar to the derivations of coefficients in Section 2.2 of \cite{jin2017}, the appropriate choices of $a^{(k)}_{j}$ and $b^{(k)}_{l,j}$ in \eqref{equreqest} make them hold; see Tables \ref{tab:defank} and \ref{tabblnk}. \renewcommand\arraystretch{1.2} \begin{table}[htbp] \caption{Value of $a^{(k)}_{j}$} \begin{center} \begin{tabular}{|c|ccccc|} \hline Order & $a^{(k)}_{1}$ & $a^{(k)}_{2}$ & $a^{(k)}_{3}$ & $a^{(k)}_{4}$&$a^{(k)}_{5}$ \\ \hline $k=2$ & $\frac{1}{2}$ & & & & \\ \hline $k=3$ & $\frac{11}{12}$ & $-\frac{5}{12}$ & & & \\ \hline $k=4$ & $\frac{31}{24}$ & $-\frac{7}{6}$ & $\frac{3}{8}$ & &\\ \hline $k=5$ & $\frac{1181}{720}$ & $-\frac{177}{80}$ & $\frac{341}{240}$ & $-\frac{251}{270}$ &\\ \hline $k=6$ & $\frac{2837}{1440}$ & $-\frac{2543}{720}$ & $\frac{17}{5}$ & $-\frac{1201}{720}$&$\frac{95}{288}$ \\ \hline \end{tabular} \end{center} \label{tab:defank} \end{table} \begin{table}[htbp] \caption{Value of $b^{(k)}_{l,j}$} \begin{center} \begin{tabular}{|c|c|ccccc|} \hline Order & & $b^{(k)}_{l,1}$ & $b^{(k)}_{l,2}$ & $b^{(k)}_{l,3}$ & $b^{(k)}_{l,4}$ & $b^{(k)}_{l,5}$ \\ \hline $k=3$ & $l=1$ & $\frac{1}{12}$ & $0$ & & & \\ \hline $k=4$ & $l=1$ & $\frac{1}{6}$ & $-\frac{1}{12}$ & $0$ & & \\ & $l=2$ & $0$ & $0$ & $0$ & & \\ \hline $k=5$ & $l=1$ & $\frac{59}{240}$ & $-\frac{29}{120}$ & $\frac{19}{240}$ & $0$ & \\ & $l=2$ & $\frac{1}{240}$ & $-\frac{1}{240}$ & $0$ & $0$ & \\ & $l=3$ & $-\frac{1}{720}$ & $0$ & $0$ & $0$ & \\ \hline $k=6$ & $l=1$ & $\frac{77}{240}$ & $-\frac{7}{15}$ & $\frac{73}{240}$ & $-\frac{3}{40}$ & $0$ \\ & $l=2$ & $\frac{1}{96}$ & $-\frac{1}{60}$ & $\frac{1}{160}$ & $0$ & $0$ \\ & $l=3$ & $-\frac{1}{360}$ & $\frac{1}{720}$ & $0$ & $0$ & $0$ \\ & $l=4$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline \end{tabular} \end{center} \label{tabblnk} \end{table} \renewcommand\arraystretch{1} \begin{remark} Similar to the proof of Theorem \ref{thmfulldis}, the solution of \eqref{equncorrect} with $f=0$ can be represented by \begin{equation}\label{equncorsol} \begin{aligned} G^{n}_{h}=&\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\left ((\beta_{\tau,k}(z))^{\alpha-1}e^{-\beta(z)\tau}G^{0}\right )dz. 
\end{aligned} \end{equation} Motivated by the idea provided in \cite{jin2017}, a general way is to try to rewrite \eqref{equncorsol} as \begin{equation}\label{equncorsolneq} \begin{aligned} ((\beta_{\tau,k}(z))^{\alpha}+A_{h})\sum_{n=1}^{\infty}(G^{n}_{h}-P_{h}e^{-\rho U(x_0)t_n}G^{0})e^{-zt_{n}}=&A_{h}P_{h}\left ((\beta_{\tau,k}(z))^{-1}e^{-\beta(z)\tau}G^{0}\right ) \end{aligned} \end{equation} and get $k$-th order scheme by adding some suitable terms to make \begin{equation*} \left \|\beta_{\tau,k}(z)\left (\frac{e^{-\beta(z)\tau}}{1-e^{-\beta(z)\tau}}+\sum_{j=1}^{k-1}a^{(k)}_{j}e^{-\beta(z)t_{j}}\right )-1\right \|\leq C|z|^{k+1}\tau^{k}. \end{equation*} Thus the correction scheme can be got by using Cauchy's integral formula, which only modifies the $k-1$ starting steps. But \eqref{equncorsolneq} holds only when $U(x_{0})$ is a constant. Here, our modified scheme \eqref{eqfulldissol} can be constructed by modifying \begin{equation}\label{eqcorsol} ((\beta_{\tau,k}(z))^{\alpha}+A_{h})\sum_{n=1}^{\infty}G^{n}_{h}=P_{h}((\beta_{\tau,k}(z))^{\alpha-1}e^{-z\tau}G^{0}). \end{equation} Caused by the term $(\beta_{\tau,k}(z))^{\alpha-1}$ in the right hand of Eq. \eqref{eqcorsol}, we need to modify numerical scheme in each step to keep $O(\tau^k)$ convergence in time. \end{remark} \section{Error estimates} In this section, we first provide the temporal error estimates for the modified high-order BDF scheme. Then the optimal spatial convergence is obtained in $L^{2}$- and $H^{1}$-norm. Consider the time semi-discrete scheme \begin{equation}\label{eqsemisch} \left\{ \begin{aligned} &\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{i}\rho U(x_{0})}G^{n-i}-(e^{-t_{n}\rho U(x_{0})}G^{0}))+AG^{n}\\ &\qquad=\sum_{j=1}^{k-1}d^{\alpha,k}_{n-j}a^{(k)}_{j}(e^{-t_{n}\rho U(x_{0})}G^{0}) +\sum_{i=0}^{n-1}d^{\alpha-1,k}_{i}(e^{-t_{i}\rho U(x_{0})}f^{n-i})\\ &\qquad\quad +\sum_{j=1}^{k-1}a^{(k)}_{j}d^{\alpha-1,k}_{n-j}(e^{-t_{n-j}\rho U(x_{0})}f^{0}) \\ &\qquad\quad +\sum_{l=1}^{k-2}\sum_{j=1}^{k-1}b^{(k)}_{l,j}\tau^{l}d^{\alpha-1,k}_{n-j}(e^{-t_{n-j}\rho U(x_{0})}\partial^{l}_{t}f(0)),\quad in~ \Omega,\\ &G^{0}=G_0(x_0),\qquad in ~\Omega,\\ &G^{0}=0, \qquad on~ \partial\Omega. \end{aligned}\right. \end{equation} Using the same procedure as in the proof of Theorem \ref{thmfulldis}, the solution of Eq. \eqref{eqsemisch} can be expressed as \begin{equation}\label{eqtimesemidissol} \begin{aligned} G^{n}=&\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G^{0}\right )dz\\ &+\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta_{\tau,k}(z))^{\alpha-1}(\delta_{\tau,k}(e^{-z\tau}))^{-1}\mu_{k}(e^{-z\tau})f^{0}\right )dz\\ &+\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\sum_{l=1}^{k-2}\left ((\beta_{\tau,k}(z))^{\alpha-1}\eta_{k,l}(e^{-z\tau})\partial^{l}_{t}f(0)\right )dz\\ &+\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta_{\tau,k}(z))^{\alpha-1}\tau\sum_{n=1}^{\infty}R^{n}_{k}e^{-zt_{n}}\right )dz. \end{aligned} \end{equation} Next, we give two lemmas about $\beta_{\tau,k}(z)$, $k=1,\ldots,6$. \begin{lemma}\label{lemmabetatauOk} Let $\beta_{\tau,k}(z)$ be defined in \eqref{betatauOk} and $U(x_0)$ bounded in $\bar{\Omega}$. 
Denote $\Sigma^{\tau}_{\theta,\kappa}=\{z\in\mathbb{C}:|z|\geq\kappa,|\arg z|\leq \theta, |Im(z)|\leq \frac{\pi}{\tau},Re(z)\leq \kappa+1\}$ with $Im(z)$ being the imaginary part of $z$ and $Re(z)$ the real part of $z$. By choosing $\theta\in(\frac{\pi}{2},\pi)$ sufficiently close to $\frac{\pi}{2}$ and $\kappa>0$ large enough $($depending on $|\rho|\|U(x_0)\|_{L^{\infty}(\bar{\Omega})}$$)$, there exists a positive constant $\tau_{*}$ $($depending on $\theta$ and $\kappa$$)$ such that the following estimates hold when $\tau\leq \tau_{*}$: \begin{enumerate}[(1)] \item For all $x_{0}\in \bar{\Omega}$ and ${{z}}\in \Sigma^{\tau}_{\theta,\kappa}$, we have $\beta_{\tau,k}({{z}}) \in \Sigma_{\pi-\vartheta_{k}+\epsilon,C\kappa}$, where $\vartheta_{k}$ is given in Lemma \ref{lemdk}, and \begin{equation*} C_1|{{z}}|\leq|\beta_{\tau,k}({{z}})|\leq C_2|{{z}}|. \end{equation*} \item The operator $((\beta_{\tau,k}({{z}}))^\alpha+A)^{-1}:L^2(\Omega)\rightarrow L^2(\Omega)$ is well-defined, bounded, and analytic with respect to $z\in \Sigma^{\tau}_{\theta,\kappa}$, satisfying \begin{equation*} \|A((\beta_{\tau,k}(z))^\alpha+A)^{-1}\|\leq C~~~~~\forall {{z}} \in \Sigma^{\tau}_{\theta,\kappa}, \end{equation*} \begin{equation*} \|((\beta_{\tau,k}({{z}}))^\alpha+A)^{-1}\|\leq C|{{z}}|^{-\alpha}~~~~~\forall{{z}} \in \Sigma^{\tau}_{\theta,\kappa}. \end{equation*} \item For all $x_{0}\in \bar{\Omega}$ and the real number $\gamma$, there holds \begin{equation*} |(\beta(z))^{\gamma}-(\beta_{\tau,k}(z))^{\gamma}|\leq C\tau^{k}|z|^{\gamma+k}\quad \forall z\in\Gamma^{\tau}_{\theta,\kappa}, \end{equation*} where $\beta(z)$ is defined in \eqref{defbeta}. \end{enumerate} \end{lemma} \begin{proof} According to \cite{creedon:1975}, we have $\frac{\delta_{k}(\zeta)}{1-\zeta}\neq 0$ in the neighborhood of the unit circle. By choosing $\kappa\geq 8|\rho|\|U(x_{0})\|_{L^{\infty}(\bar{\Omega})}$, $\theta$ sufficiently close to $\frac{\pi}{2}$ and $\tau\leq \frac{\pi}{\kappa+1}$, we have that $e^{-\tau\beta(z)}$ lies in the neighborhood of the unit circle and there exist two positive constants $C_1$ and $C_2$ such that \begin{equation}\label{eqboundz} C_{1}\leq \left |\frac{\delta_{k}(e^{-\tau\beta(z)})}{1-e^{-\tau\beta(z)}}\right |\leq C_{2} \quad\forall z\in \Sigma^{\tau}_{\theta,\kappa}. \end{equation} From \cite{DengLiQianWang:18}, there holds \begin{equation*} C_{1}|z|\leq |\delta_{\tau,1}(e^{-\tau\beta(z)})|\leq C_{2}|z|, \end{equation*} which leads to \begin{equation*} C_{1}|z|\leq |\delta_{\tau,k}(e^{-\tau\beta(z)})|\leq C_{2}|z|. \end{equation*} Combining Lemma \ref{lemO200}, one has \begin{equation*} \beta_{\tau,k}(z)\in\Sigma_{\pi-\vartheta_{k}+\epsilon}, \end{equation*} which yields the second conclusion by using $|\beta_{\tau,k}(z)|\geq C|z|$ and the resolvent estimate \cite{Jin2016}. As for the third conclusion, there holds \begin{equation*} \begin{aligned} &|(\beta(z))^{\gamma}-(\beta_{\tau,k}(z))^{\gamma}|\\ =&\left |(\beta(z))^{\gamma}-\left(\beta(z)+\mathcal{O}(\tau^k(\beta(z))^{k+1})\right )^{\gamma}\right |\\ =&|(\beta(z))^\gamma|\left |1-\left (1+\mathcal{O}(\tau^k(\beta(z))^{k})\right )^{\gamma}\right |. \end{aligned} \end{equation*} If $\tau|\beta(z)|\leq1/2$, we obtain \begin{equation*} |(\beta(z))^{\gamma}-(\beta_{\tau,k}(z))^{\gamma}|\leq|\beta(z)|^{\gamma}C\tau^k|\beta(z)|^{k}= C\tau^{k}|\beta(z)|^{\gamma+k}. 
\end{equation*} As for $\tau|\beta(z)|>1/2$, we have \begin{equation*} \begin{aligned} &\tau|z|\geq C\tau|\beta_{\tau,k}(z)|\geq C \qquad \forall z\in\Gamma^{\tau}_{\theta,\kappa},\\ &|(\beta(z))^{\gamma}-(\beta_{\tau,k}(z))^{\gamma}|\leq C|z|^{\gamma}\leq C\tau^{k}|z|^{\gamma+k} \qquad \forall z\in\Gamma^{\tau}_{\theta,\kappa}.\\ \end{aligned} \end{equation*} Thus the third conclusion is obtained. \end{proof} \begin{lemma}\label{lemO200} Let $\delta_{\tau,k}(e^{-z\tau})$ be defined in \eqref{betatauOk}, $U(x_{0})$ bounded in $\bar{\Omega}$, and $L=|{{\rho}}|\|U({{x_{0}}})\|_{L^{\infty}(\bar{\Omega})}$. There exist positive constants $\theta_{0}\in\left (\frac{\pi}{2},\frac{9\pi}{16}\right )$ and $\tau_0$ such that if $\theta\in\left(\frac{\pi}{2},\theta_0\right )$ and $\tau\in (0,\tau_0]$, then \begin{equation}\label{equlem00O2con} \begin{aligned} \delta_{\tau,k}(e^{-\beta(z)\tau})\in \Sigma_{\pi-\vartheta_{k}+\epsilon} \quad ~~\forall z\in \Gamma^{\tau}_{\theta,\kappa}{~~\rm and~~}\forall x_{0}\in \bar{\Omega}. \end{aligned} \end{equation} \end{lemma} \begin{proof} Take $\kappa \, (>8L)$ sufficiently large, $r=Re(\rho U(x_{0}))$, and $\omega=\mathbf{i} \cdot Im(\rho U(x_{0}))$ for $x_{0}\in \bar{\Omega}$. Here we choose $\tau\leq \frac{\pi}{\kappa+1}$ to make $e^{-\tau r}$ lie in the neighborhood of $1$. Taylor's expansion and $\tau|z|\leq C$ give \begin{equation}\label{eqneg1} \begin{aligned} \left |\frac{z\tau e^{-z\tau}}{1- e^{-z\tau}}\right |\leq 1+O(|z|\tau)\leq C. \end{aligned} \end{equation} Combining \eqref{eqboundz}, \eqref{eqneg1}, and the bound of $|\tau\delta_{\tau,k}'(e^{-z\tau})|$, i.e., $|\tau\delta_{\tau,k}'(e^{-z\tau})|\leq 1+\tau|\delta_{\tau,k-1}(e^{-z\tau})|\leq C$ for $k>1$ and $|\tau\delta_{\tau,1}'(e^{-z\tau})|\leq C$, we have, for some $\sigma\in(0,1)$ \begin{equation*} \begin{aligned} \frac{|\delta_{\tau,k}(e^{-r\tau} e^{-z\tau})-\delta_{\tau,k}(e^{-z\tau})|}{|\delta_{\tau,k}( e^{-z\tau})|} & \leq C\left | \frac{\delta_{\tau,k}(e^{-r\tau} e^{-z\tau})}{\delta_{\tau,k}( e^{-z\tau})}-1\right |\\ &\leq CL\left | \frac{\delta_{\tau,k}'(e^{-\sigma r\tau} e^{-z\tau})e^{-z\tau}e^{-\sigma r\tau}\tau}{\delta_{\tau,k}( e^{-z\tau})}\right |\\ &\leq CL|\tau\delta_{\tau,k}'(e^{-\sigma r\tau} e^{-z\tau})|\left | \frac{e^{-z\tau}}{\delta_{\tau,k}( e^{-z\tau})}\right |\\ &\leq CL\left | \frac{\tau e^{-z\tau}}{1- e^{-z\tau}}\right |\\ & \leq C\frac{L}{\kappa}, \end{aligned} \end{equation*} where $\delta_{\tau,k}'(\zeta)$ is the first order derivative about $\zeta$. Taking $\kappa$ large enough results in \begin{equation*} |\arg(\delta_{\tau,k}(e^{-r\tau} e^{-z\tau}))-\arg(\delta_{\tau,k}(e^{-z\tau}))|\leq \epsilon/4. \end{equation*} Similarly, there holds, for some $\sigma\in(0,1)$ \begin{equation*} \begin{aligned} \frac{|\delta_{\tau,k}(e^{-\omega\tau}e^{-r\tau}e^{-z\tau})-\delta_{\tau,k}(e^{-r\tau} e^{-z\tau})|}{|\delta_{\tau,k}( e^{-r\tau}e^{-z\tau})|}\leq& C\left | \frac{\delta_{\tau,k}(e^{-\omega\tau}e^{-r\tau} e^{-z\tau})}{\delta_{\tau,k}(e^{-r\tau} e^{-z\tau})}-1\right |\\ \leq &CL\left | \frac{\delta_{\tau,k}'(e^{-\sigma \omega\tau}e^{-r\tau} e^{-z\tau})e^{-z\tau}e^{-\sigma \omega\tau} e^{-r\tau}\tau}{\delta_{\tau,k}(e^{-r\tau} e^{-z\tau})}\right |\\ \leq &CL|\tau\delta_{\tau,k}'(e^{-\sigma \omega\tau}e^{-r\tau} e^{-z\tau})|\left | \frac{e^{-r\tau}e^{-z\tau}}{\delta_{\tau,k}( e^{-z\tau}e^{-r\tau})}\right |\\ \leq&CL\left | \frac{\tau e^{-z\tau}e^{-r\tau}}{1- e^{-z\tau}e^{-r\tau}}\right |\\ \leq&C\frac{L}{\kappa}. 
\end{aligned} \end{equation*} Again, when $\kappa$ is large enough, it holds \begin{equation*} |\arg(\delta_{\tau,k}(e^{-\omega\tau}e^{-r\tau} e^{-z\tau}))-\arg(\delta_{\tau,k}(e^{-r\tau}e^{-z\tau}))|\leq \epsilon/4. \end{equation*} From \cite{jin2017}, we have \begin{equation*} \delta_{\tau,k}(e^{-z\tau})\in \Sigma_{\pi-\vartheta_k+\epsilon/2}. \end{equation*} Thus \begin{equation*} \beta_{\tau,k}(z)=\delta_{\tau,k}(e^{-\tau(z+\rho U(x_{0}))})\in \Sigma_{\pi-\vartheta_k+\epsilon}. \end{equation*} \end{proof} According to the above two lemmas, the following temporal error estimates can be obtained. \begin{theorem}\label{thmsemierrorOk} Let $G(t_{n})$ and $G^n$ be the solutions of Eqs. \eqref{eqretosol} and \eqref{eqsemisch}, respectively. Assume $U(x_{0})$ is bounded in $\bar{\Omega}$. If $G_0\in L^2(\Omega)$, $f\in C^{k-1}([0,T],L^{2}(\Omega))$, and $\int_{0}^{t}\|\partial^{k}_{t}f(s)\|_{L^2(\Omega)}ds<\infty$, then there holds \begin{equation*} \begin{aligned} \|G(t_{n})-G^{n}\|_{L^2(\Omega)}\leq& Ct_n^{-k}\tau^{k}\|G_0\|_{L^2(\Omega)}+C\sum_{l=0}^{k-1}\tau^{k}t^{l+1-k}_{n}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}\\ &+C\tau^{k}\int_{0}^{t_{n}}\|\partial^{k}_{t}f(s)\|_{L^2(\Omega)}ds. \end{aligned} \end{equation*} \end{theorem} \begin{proof} Subtracting \eqref{eqtimesemidissol} from \eqref{eqsolresp1} leads to \begin{equation*} \begin{aligned} \|G(t_{n})-G^{n}\|_{L^2(\Omega)} \leq&C(\uppercase\expandafter{\romannumeral1}+\uppercase\expandafter{\romannumeral2}+\uppercase\expandafter{\romannumeral3}+\uppercase\expandafter{\romannumeral4}), \end{aligned} \end{equation*} where \begin{equation*} \begin{aligned} \uppercase\expandafter{\romannumeral1}\leq& C\bigg\|\int_{\Gamma_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}((\beta(z))^{\alpha-1}G^{0})dz\\ &\quad-\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G^{0}\right )dz\bigg\|_{L^2(\Omega)},\\ \uppercase\expandafter{\romannumeral2}\leq& C\bigg\|\int_{\Gamma_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}((\beta(z))^{\alpha-1}z^{-1}f^{0})dz\\ &\quad-\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1} \\ &\quad \cdot \left ((\beta_{\tau,k}(z))^{\alpha-1}(\delta_{\tau,k}(e^{-z\tau}))^{-1}\mu_{k}(e^{-z\tau})f^{0}\right )dz\bigg\|_{L^2(\Omega)},\\ \uppercase\expandafter{\romannumeral3}\leq& C\bigg\|\int_{\Gamma_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}\sum_{l=1}^{k-2}\left ((\beta(z))^{\alpha-1}z^{-l-1}\partial^{l}_{t}f(0)\right )dz\\ &\quad-\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\sum_{l=1}^{k-2}\left ((\beta_{\tau,k}(z))^{\alpha-1}\eta_{k,l}(e^{-z\tau})\partial^{l}_{t}f(0)\right )dz\bigg\|_{L^2(\Omega)},\\ \uppercase\expandafter{\romannumeral4}\leq& C\bigg\|\int_{\Gamma_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}\tilde{R}_{k}\right )dz\\ &\quad-\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta_{\tau,k}(z))^{\alpha-1}\tau\sum_{n=1}^{\infty}R^{n}_{k}e^{-zt_{n}}\right )dz\bigg\|_{L^2(\Omega)}.\\ \end{aligned} \end{equation*} For $\uppercase\expandafter{\romannumeral1}$, it has \begin{equation*} \begin{aligned} \uppercase\expandafter{\romannumeral1}\leq& C\bigg\|\int_{\Gamma_{\theta,\kappa}\backslash\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}((\beta(z))^{\alpha-1}G^{0})dz\bigg\|_{L^2(\Omega)}\\ 
&\quad+\bigg\|\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}\left (((\beta(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}G_{0}\right)\right.\\ &\left. \quad -((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G^{0}\right )\right )dz\bigg\|_{L^2(\Omega)}.\\ \end{aligned} \end{equation*} Combining Eq. \eqref{equreqest} and Lemmas \ref{lemmaBeta} and \ref{lemmabetatauOk} yields \begin{equation*} \begin{aligned} &\left\|((\beta(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}G_{0}\right)-((\beta_{\tau,k}(z))^{\alpha}+A)^{-1} \right. \\ & \cdot\left.\left ((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G^{0}\right )\right\|_{L^2(\Omega)}\\ & \leq \left\|((\beta(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}G_{0}\right)-((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}G_{0}\right)\right\|_{L^2(\Omega)}\\ &+\left\|((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}G_{0}\right)-((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\right. \\ & \cdot \left.\left ((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G^{0}\right )\right\|_{L^2(\Omega)}\\ & \leq Cz^{k-1}\tau^{k}\|G_{0}\|_{L^2(\Omega)}, \end{aligned} \end{equation*} which leads to \begin{equation*} \uppercase\expandafter{\romannumeral1}\leq C\tau^{k}t^{-k}_{n}\|G_{0}\|_{L^2(\Omega)}. \end{equation*} Similarly, we obtain \begin{equation*} \uppercase\expandafter{\romannumeral2}\leq C\tau^{k}t^{1-k}_{n}\|f^{0}\|_{L^2(\Omega)}. \end{equation*} As for $\uppercase\expandafter{\romannumeral3}$, we have \begin{equation*} \begin{aligned} \uppercase\expandafter{\romannumeral3}\leq& C\bigg\|\int_{\Gamma_{\theta,\kappa}\backslash \Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}\sum_{l=1}^{k-2}\left ((\beta(z))^{\alpha-1}z^{-l-1}\partial^{l}_{t}f(0)\right )dz\bigg\|_{L^2(\Omega)}\\ &\quad+\bigg\|\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}\bigg(((\beta(z))^{\alpha}+A)^{-1}\sum_{l=1}^{k-2}\left ((\beta(z))^{\alpha-1}z^{-l-1}\partial^{l}_{t}f(0)\right )\\ &-((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\sum_{l=1}^{k-2}\left ((\beta_{\tau,k}(z))^{\alpha-1}\eta_{k,l}(e^{-z\tau})\partial^{l}_{t}f(0)\right )\bigg)dz\bigg\|_{L^2(\Omega)}. \end{aligned} \end{equation*} Combining Eq. \eqref{equreqest} and Lemmas \ref{lemmaBeta} and \ref{lemmabetatauOk} gives \begin{equation*} \begin{aligned} &\bigg\|((\beta(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}z^{-l-1}\partial^{l}_{t}f(0)\right )\\ &\quad-((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta_{\tau,k}(z))^{\alpha-1}\eta_{k,l}(e^{-z\tau})\partial^{l}_{t}f(0)\right )\bigg\|_{L^2(\Omega)}\\ \leq&\left\|\left(((\beta(z))^{\alpha}+A)^{-1}-((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\right)\left ((\beta(z))^{\alpha-1}z^{-l-1}\partial^{l}_{t}f(0)\right )\right\|_{L^2(\Omega)}\\ &+\left\|((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}z^{-l-1}\partial^{l}_{t}f(0) \right.\right. \\ & \left.\left. -((\beta_{\tau,k}(z))^{\alpha-1}\eta_{k,l}(e^{-z\tau})\partial^{l}_{t}f(0)\right )\right\|_{L^2(\Omega)}\\ \leq &Cz^{k-l-2}\tau^{k}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}, \end{aligned} \end{equation*} which implies \begin{equation*} \uppercase\expandafter{\romannumeral3}\leq C\tau^{k}\sum_{l=1}^{k-2}t^{l+1-k}_{n}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}. 
\end{equation*} As for $\uppercase\expandafter{\romannumeral4}$, it has \begin{equation*} \uppercase\expandafter{\romannumeral4}\leq \uppercase\expandafter{\romannumeral4}_{1}+\uppercase\expandafter{\romannumeral4}_{2}, \end{equation*} where \begin{equation*} \begin{aligned} \uppercase\expandafter{\romannumeral4}_{1}\leq& C\bigg\|\int_{\Gamma_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}z^{-k}\partial^{k-1}_{t}f(0)\right )dz\\ &\quad-\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1} \\ &\quad \cdot\left ((\beta_{\tau,k}(z))^{\alpha-1}\sum_{n=1}^{\infty}\frac{t_{n}^{k-1}}{(k-1)!}\partial^{k-1}_{t}f(0)e^{-zt_{n}}\right )dz\bigg\|_{L^2(\Omega)},\\ \uppercase\expandafter{\romannumeral4}_{2}\leq& C\bigg\|\frac{1}{2\pi\mathbf{i}}\int_{\Gamma_{\theta,\kappa}}e^{zt_{n}}((\beta(z))^{\alpha}+A)^{-1}\left ((\beta(z))^{\alpha-1}\tilde{f}\right )dz\\ &\quad-\frac{1}{2\pi\mathbf{i}}\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}((\beta_{\tau,k}(z))^{\alpha}+A)^{-1} \\ &\quad \cdot\left ((\beta_{\tau,k}(z))^{\alpha-1}\sum_{n=1}^{\infty}\left (\frac{t^{k-1}}{(k-1)!}\ast\partial^{k}_{t}f\right )(t_{n})e^{-zt_{n}}\right )dz\bigg\|_{L^2(\Omega)}.\\ \end{aligned} \end{equation*} Simple calculations \cite{lubich1996} lead to \begin{equation} \uppercase\expandafter{\romannumeral4}\leq C\tau^{k}\|\partial^{k-1}_{t}f(0)\|_{L^2(\Omega)}+C\tau^{k}\int_{0}^{t_{n}}\|\partial^{k}_{t}f(s)\|_{L^2(\Omega)}ds. \end{equation} \end{proof} Now, we provide the spatial error estimate. \begin{lemma}[\cite{Sun:2020}]\label{lemeroper} Let $v\in L^2(\Omega)$, $U(x_{0})$ be bounded in $\bar{\Omega}$ and $z\in\Sigma^{\tau}_{\theta,\kappa}$ with $\kappa$ largely enough, where $\Sigma^{\tau}_{\theta,\kappa}$ is defined in Lemma \ref{lemmabetatauOk}. Denote $w=((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}v$ and $w_h=((\beta_{\tau,k}(z))^{\alpha}+A_h)^{-1} P_hv$, where $\beta_{\tau,k}(z)$ is defined in \eqref{betatauOk}. Then one has \begin{equation*} \|w-w_h\|_{L^2(\Omega)}+h\|w-w_h\|_{\dot{H}^{1}(\Omega)}\leq Ch^{2}\|v\|_{L^2(\Omega)}. \end{equation*} \end{lemma} \begin{proof} The results can be similarly obtained as the proof in \cite{Sun:2020}. \end{proof} \begin{theorem}\label{thmfullerrorO2} Let $G^{n}$ and $G^{n}_{h}$ be the solutions of Eqs. \eqref{eqtimesemidissol} and \eqref{eqfullscheme} respectively and assume $G_0\in L^2(\Omega)$, $f\in C^{k}([0,T],L^{2}(\Omega))$, $\int_{0}^{t}\|\partial^{k}_{t}f(s)\|_{L^2(\Omega)}ds<\infty$ and $U(x_{0})$ is bounded in $\bar{\Omega}$. Then we have \begin{equation} \begin{aligned} &\|G^{n}-G^{n}_{h}\|_{L^2(\Omega)}+h\|G^{n}-G^{n}_{h}\|_{\dot{H}^{1}(\Omega)}\\ &\qquad\qquad \leq Ch^{2}t_{n}^{-\alpha}\|G_0\|_{L^2(\Omega)}+Ch^{2}\sum_{l=0}^{k-1}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}+Ch^{2}\int_{0}^{t_{n}}\left \|\partial^{k}_{t}f(s)\right \|_{L^2(\Omega)}ds. \end{aligned} \end{equation} \end{theorem} \begin{proof} Subtracting \eqref{eqfulldissol} from \eqref{eqtimesemidissol} leads to \begin{equation*} \begin{aligned} &\|G^{n}-G^{n}_{h}\|_{L^2(\Omega)}\\ \leq&C\left \|\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}\left (((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}-((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\right ) \right. \\ & \cdot \left. \left ((\beta_{\tau,k}(z))^{\alpha-1}\mu_{k}(e^{-\beta(z)\tau})G^{0}\right )dz\right \|_{L^2(\Omega)}\\ &+C\left \|\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}} \left (((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}-((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\right ) \right. \\ & \cdot \left. 
\left ((\beta_{\tau,k}(z))^{\alpha-1}(\delta_{\tau,k}(e^{-z\tau}))^{-1}\mu_{k}(e^{-z\tau})f^{0}\right )dz\right \|_{L^2(\Omega)}\\ &+C\left \|\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}} \left (((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}-((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\right ) \right. \\ & \cdot \left. \sum_{l=1}^{k-2}\left ((\beta_{\tau,k}(z))^{\alpha-1}\eta_{k,l}(e^{-z\tau})\partial^{l}_{t}f(0)\right )dz\right \|_{L^2(\Omega)}\\ &+C\left \|\int_{\Gamma^{\tau}_{\theta,\kappa}}e^{zt_{n}}\left (((\beta_{\tau,k}(z))^{\alpha}+A)^{-1}-((\beta_{\tau,k}(z))^{\alpha}+A_{h})^{-1}P_{h}\right ) \right. \\ & \cdot \left. \left ((\beta_{\tau,k}(z))^{\alpha-1}\tau\sum_{n=1}^{\infty}R^{n}_{k}e^{-zt_{n}}\right )dz\right \|_{L^2(\Omega)}. \end{aligned} \end{equation*} Using Lemmas \ref{lemmabetatauOk} and \ref{lemeroper} leads to \begin{equation*} \|G^{n}-G^{n}_{h}\|_{L^2(\Omega)}\leq Ch^{2}t_{n}^{-\alpha}\|G_{0}\|_{L^2(\Omega)}+Ch^{2}\sum_{l=0}^{k-1}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}+Ch^{2}\int_{0}^{t_{n}}\left \|\partial^{k}_{t}f(s)\right \|_{L^2(\Omega)}ds. \end{equation*} Similarly, we have \begin{equation*} \|G^{n}-G^{n}_{h}\|_{\dot{H}^1(\Omega)}\leq Cht_{n}^{-\alpha}\|G_{0}\|_{L^2(\Omega)}+Ch\sum_{l=0}^{k-1}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}+Ch\int_{0}^{t_{n}}\left \|\partial^{k}_{t}f(s)\right \|_{L^2(\Omega)}ds. \end{equation*} \end{proof} Lastly, according to Theorems \ref{thmsemierrorOk} and \ref{thmfullerrorO2}, the following error estimates of the fully discrete scheme are obtained. \begin{theorem}\label{thmfull} Let $G(t)$ and $G_{h}^{n}$ be the solutions of Eqs. \eqref{eqretosol} and \eqref{eqfullscheme} respectively and assume $G_0\in L^2(\Omega)$, $f\in C^{k-1}([0,T],L^{2}(\Omega))$, $\int_{0}^{t}(t-s)^{-\alpha}\|f(s)\|_{L^2(\Omega)}ds<\infty$, $\int_{0}^{t}\|\partial^{k}_{t}f(s)\|_{L^2(\Omega)}ds<\infty$, and $U(x_{0})$ is bounded in $\bar{\Omega}$. Then one has \begin{equation} \begin{aligned} \|G(t_{n})-G_{h}^{n}\|_{L^2(\Omega)}\leq& Ch^{2}t_{n}^{-\alpha}\|G_{0}\|_{L^2(\Omega)}+Ch^{2}\sum_{l=0}^{k-1}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}\\ &+Ch^{2}\int_{0}^{t_{n}}\left \|\partial^{k}_{t}f(s)\right \|_{L^2(\Omega)}ds\\ &+Ct_n^{-k}\tau^{k}\|G_0\|_{L^2(\Omega)}+C\tau^{k}\sum_{l=0}^{k-1}t_{n}^{l+1-k}\|\partial^{l}_{t}f(0)\|_{L^2(\Omega)}\\ &+C\tau^{k}\int_{0}^{t_{n}}\|\partial^{k}_{t}f(s)\|_{L^2(\Omega)}ds. \end{aligned} \end{equation} \end{theorem} \section{Numerical experiments} In this section, we perform numerical experiments to verify the effectiveness of the designed schemes. Since the exact solution $G$ is unknown, to test the spatial convergence rates, we define \begin{equation*} \begin{aligned} E_{h}=\|G^{n}_{h}-G^{n}_{h/2}\|_{L^2(\Omega)}, \end{aligned} \end{equation*} where $G^n_{h}$ means the numerical solution of $G$ at time $t_n$ with mesh size $h$; similarly, to test the temporal convergence rates, we take \begin{equation*} \begin{aligned} E_{\tau}=\|G_{\tau}-G_{\tau/2}\|_{L^2(\Omega)}, \end{aligned} \end{equation*} where $G_{\tau}$ is the numerical solution of $G$ at the fixed time $t$ with step size $\tau$. The spatial and temporal convergence rates can be, respectively, obtained by calculating \begin{equation*} {\rm Rate}=\frac{\ln(E_{h}/E_{h/2})}{\ln(2)},\quad {\rm Rate}=\frac{\ln(E_{\tau}/E_{\tau/2})}{\ln(2)}. \end{equation*} For convenience, we take $\Omega=(0,1)$ and choose $T=1$ as the terminal time in the following examples. All the computations are carried out in Julia 1.4.3 on a personal laptop. 
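For reference, the rate formula above amounts to nothing more than the base-2 logarithm of successive error ratios. The small fragment below (written in Python, whereas the computations reported in this paper are done in Julia, and fed with hypothetical error values) only illustrates this formula.
\begin{verbatim}
import numpy as np

def observed_orders(errors):
    # Observed convergence orders log2(E/E_half) for errors measured on
    # successively halved mesh or time-step sizes (the Rate defined above).
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Hypothetical error sequence of a second-order scheme under tau -> tau/2:
print(observed_orders([1.4e-6, 3.4e-7, 8.5e-8, 2.1e-8]))  # values close to 2
\end{verbatim}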
To observe the temporal convergence rates clearly, we use 80-bit precision float to do the computation and save the data. \begin{example} In this example, we take $\rho=-1$, \begin{equation*} f(x_0,\rho,t)=0,\quad G(x_{0},\rho,0)=x_{0}(1-x_{0})~~{\rm and}~~U(x_{0})=\chi_{(0.5,1)}(x_{0}), \end{equation*} where $\chi_{(a,b)}$ denotes the characteristic function on $(a,b)$. To investigate the convergence in temporal direction and eliminate the influence from spatial discretization, we take $h=1/100$. The corresponding temporal errors and convergence rates are presented in Table \ref{Tab:homtime} for scheme \eqref{eqfullscheme}. All the convergence rates are steady and can reach up to order $6$. \begin{table}[htbp] \caption{Temporal errors and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/\tau$ & 50 & 100 & 200 & 400 & 800 & Rate \\ \hline & 2 & 1.3916E-06 & 3.4220E-07 & 8.4846E-08 & 2.1124E-08 & 5.2703E-09&$\approx2.0029$ \\ & 3 & 6.6959E-08 & 8.0530E-09 & 9.8763E-10 & 1.2229E-10 & 1.5214E-11&$\approx3.0068$ \\ 0.3 & 4 & 4.5036E-09 & 2.6218E-10 & 1.5824E-11 & 9.7204E-13 & 6.0232E-14&$\approx4.0124$ \\ & 5 & 4.1599E-10 & 1.1158E-11 & 3.2977E-13 & 1.0025E-14 & 3.0900E-16&$\approx5.0197$ \\ & 6 & 3.6025E-07 & 4.3156E-11 & 8.5287E-15 & 1.2764E-16 & 1.9547E-18&$\approx6.0290$ \\ \hline & 2 & 3.6346E-06 & 8.8919E-07 & 2.1988E-07 & 5.4670E-08 & 1.3630E-08&$\approx2.0039$ \\ & 3 & 2.1375E-07 & 2.5479E-08 & 3.1112E-09 & 3.8441E-10 & 4.7774E-11&$\approx3.0083$ \\ 0.7 & 4 & 1.6696E-08 & 9.6070E-10 & 5.7657E-11 & 3.5318E-12 & 2.1854E-13&$\approx4.0144$ \\ & 5 & 1.8665E-09 & 4.5724E-11 & 1.3418E-12 & 4.0650E-14 & 1.2509E-15&$\approx5.0222$ \\ & 6 & 2.7567E-06 & 3.1378E-09 & 3.7837E-14 & 5.6558E-16 & 8.6440E-18&$\approx6.0318$ \\ \hline \end{tabular} \label{Tab:homtime} \end{table} \end{example} \begin{example} Here, we take $\rho=-1$, \begin{equation*} f(x_0,\rho,t)=x_{0}(1-x_{0})e^{-\rho\chi_{(0.5,1)}(x_{0})t},\quad G(x_{0},\rho,0)=0~~{\rm and}~~U(x_{0})=\chi_{(0.5,1)}(x_{0}). \end{equation*} To avoid the influence on temporal errors from the spatial discretization, we choose $h=1/100$. We use \eqref{eqfullscheme} to solve \eqref{eqretosol} and present the corresponding temporal errors and convergence rates in Table \ref{Tab:Nontime}. The convergence rates are steady and can reach up to order $6$. 
\begin{table}[htbp] \caption{Temporal errors and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/\tau$ & 50 & 100 & 200 & 400 & 800 & Rate \\ \hline & 2 & 1.8340E-06 & 4.5996E-07 & 1.1517E-07 & 2.8817E-08 & 7.2072E-09 & $\approx$ 1.9994 \\ & 3 & 3.0701E-08 & 3.8758E-09 & 4.8686E-10 & 6.1006E-11 & 7.6351E-12 & $\approx$ 2.9982 \\ 0.4 & 4 & 4.5287E-10 & 2.8923E-11 & 1.8274E-12 & 1.1484E-13 & 7.1968E-15 & $\approx$ 3.9961 \\ & 5 & 1.9285E-11 & 7.6798E-13 & 2.3449E-14 & 7.2459E-16 & 2.2518E-17 & $\approx$ 5.0080 \\ & 6 & 9.3685E-08 & 2.6523E-11 & 3.5161E-16 & 5.4074E-18 & 8.2994E-20 & $\approx$ 6.0258 \\ \hline & 2 & 7.6913E-07 & 1.9366E-07 & 4.8588E-08 & 1.2169E-08 & 3.0449E-09 & $\approx$ 1.9987 \\ & 3 & 2.5894E-08 & 3.2146E-09 & 4.0050E-10 & 4.9982E-11 & 6.2428E-12 & $\approx$ 3.0011 \\ 0.6 & 4 & 4.7283E-10 & 2.6989E-11 & 1.6111E-12 & 9.8392E-14 & 6.0786E-15 & $\approx$ 4.0167 \\ & 5 & 6.1981E-11 & 1.7996E-12 & 5.3852E-14 & 1.6473E-15 & 5.0933E-17 & $\approx$ 5.0153 \\ & 6 & 6.2004E-08 & 6.2135E-11 & 1.1952E-15 & 1.8010E-17 & 2.7633E-19 & $\approx$ 6.0262 \\ \hline \end{tabular} \label{Tab:Nontime} \end{table} \end{example} \begin{example} In this example, we take $\rho=-1+\pi\mathbf{i}$, \begin{equation*} \begin{aligned} &f(x_0,\rho,t)=0,\quad G(x_{0},\rho,0)=-5 \chi_{(0,0.5)}(x_{0})+5 \chi_{(0.5,1)}(x_{0}),\\ &\qquad{\rm and}~~U(x_{0})=3 (x_{0}+0.5)^5 \chi_{(0,0.5)}(x_{0}). \end{aligned} \end{equation*} We choose $\tau=1/200$ to decrease the errors caused by temporal discretizations. We use \eqref{eqfullscheme} to solve Eq. \eqref{eqretosol} and present the $L^{2}$- and $H^{1}$-norm errors and convergence rates in Tables \ref{Tab:homspaceL2} and \ref{Tab:homspaceH1}, respectively. All the convergence rates are consistent with the predicted results. 
\begin{table}[htbp] \caption{$L^{2}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 20 & 40 & 80 & 160 & 4096 & Rate \\ \hline & 2 & 7.0515E-06 & 1.7618E-06 & 4.4038E-07 & 1.1012E-07 & 2.7523E-08 & $\approx$ 2.0003 \\ & 3 & 7.0516E-06 & 1.7618E-06 & 4.4039E-07 & 1.1007E-07 & 2.7470E-08 & $\approx$ 2.0024 \\ 0.3 & 4 & 7.0516E-06 & 1.7618E-06 & 4.4038E-07 & 1.1008E-07 & 2.7469E-08 & $\approx$ 2.0026 \\ & 5 & 7.0516E-06 & 1.7618E-06 & 4.4039E-07 & 1.1009E-07 & 2.7578E-08 & $\approx$ 1.9971 \\ & 6 & 7.0516E-06 & 1.7618E-06 & 4.4039E-07 & 1.1011E-07 & 2.7419E-08 & $\approx$ 2.0056 \\ \hline & 2 & 2.0656E-06 & 5.1611E-07 & 1.2901E-07 & 3.2251E-08 & 8.0774E-09 & $\approx$ 1.9974 \\ & 3 & 2.0656E-06 & 5.1611E-07 & 1.2901E-07 & 3.2251E-08 & 8.0778E-09 & $\approx$ 1.9973 \\ 0.8 & 4 & 2.0656E-06 & 5.1611E-07 & 1.2901E-07 & 3.2248E-08 & 8.0629E-09 & $\approx$ 1.9998 \\ & 5 & 2.0656E-06 & 5.1611E-07 & 1.2901E-07 & 3.2248E-08 & 8.0769E-09 & $\approx$ 1.9973 \\ & 6 & 2.0656E-06 & 5.1611E-07 & 1.2901E-07 & 3.2248E-08 & 8.0928E-09 & $\approx$ 1.9945 \\ \hline \end{tabular} \label{Tab:homspaceL2} \end{table} \begin{table}[htbp] \caption{$H^{1}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 256 & 512 & 1024 & 2048 & 4096 & Rate \\ \hline & 2 & 6.2678E-03 & 3.1320E-03 & 1.5657E-03 & 7.8284E-04 & 3.9142E-04 & $\approx$ 1.0000 \\ & 3 & 6.2678E-03 & 3.1320E-03 & 1.5658E-03 & 7.8285E-04 & 3.9142E-04 & $\approx$ 1.0000 \\ 0.3 & 4 & 6.2678E-03 & 3.1320E-03 & 1.5658E-03 & 7.8285E-04 & 3.9142E-04 & $\approx$ 1.0000 \\ & 5 & 6.2678E-03 & 3.1320E-03 & 1.5658E-03 & 7.8285E-04 & 3.9142E-04 & $\approx$ 1.0000 \\ & 6 & 6.2678E-03 & 3.1320E-03 & 1.5658E-03 & 7.8285E-04 & 3.9142E-04 & $\approx$ 1.0000 \\ \hline & 2 & 1.7994E-03 & 8.9919E-04 & 4.4953E-04 & 2.2476E-04 & 1.1238E-04 & $\approx$ 1.0000 \\ & 3 & 1.7994E-03 & 8.9919E-04 & 4.4953E-04 & 2.2476E-04 & 1.1238E-04 & $\approx$ 1.0000 \\ 0.8 & 4 & 1.7994E-03 & 8.9919E-04 & 4.4953E-04 & 2.2476E-04 & 1.1238E-04 & $\approx$ 1.0000 \\ & 5 & 1.7994E-03 & 8.9919E-04 & 4.4953E-04 & 2.2476E-04 & 1.1238E-04 & $\approx$ 1.0000 \\ & 6 & 1.7994E-03 & 8.9919E-04 & 4.4953E-04 & 2.2476E-04 & 1.1238E-04 & $\approx$ 1.0000 \\ \hline \end{tabular} \label{Tab:homspaceH1} \end{table} Furthermore, to show the effectiveness of our scheme and the significance of corrections for all steps, we provide another comparative example. Applying the correction scheme provided in \cite{jin2017} to our problem and taking $L^{2}$ projection on $e^{-t_{n}\rho U(x_{0})}G^{0}$, then one has the fully discrete scheme \begin{equation}\label{eqfullhonw1} \left\{ \begin{aligned} &\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{i}\rho U(x_{0})}G^{n-i}_{h} ,v_{h})+(A_{h} G^{n}_{h}, v_{h})+a^{(k)}_{n}(A_{h}P_{h}(e^{-t_{n}\rho U(x_{0})}G^{0}),v_{h})\\ &\qquad=\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(P_{h}(e^{-t_{n}\rho U(x_{0})}G^{0}),v_{h}) \qquad \forall v_{h}\in X_{h},\quad 1\leq n\leq k-1,\\ &\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{i}\rho U(x_{0})}G^{n-i}_{h} ,v_{h})+(A_{h} G^{n}_{h}, v_{h})\\ &\qquad=\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(P_{h}(e^{-t_{n}\rho U(x_{0})}G^{0}),v_{h}) \qquad \forall v_{h}\in X_{h},\quad n\geq k,\\ \end{aligned}\right. \end{equation} And the source term $f$, initial data $G(x_{0},\rho,0)$, and $U(x_{0})$ are taken to be the same as the immediately above example. 
The corresponding $L^{2}$- and $H^{1}$-norm errors and convergence rates are presented in Tables \ref{Tab:homspaceL2w} and \ref{Tab:homspaceH1w}. It's easy to find that the convergence rates of $H^{1}$-norm errors are optimal but the convergence rates of $L^{2}$-norm errors can't achieve $O(h^{2})$. \begin{table}[htbp] \caption{$L^{2}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 256 & 512 & 1024 & 2048 & 4096 & Rate \\ \hline & 2 & 6.5612E-06 & 2.1429E-06 & 1.0074E-06 & 5.1449E-07 & 2.6272E-07 & $\approx$ 0.9696 \\ & 3 & 1.4812E-05 & 7.5213E-06 & 3.8401E-06 & 1.9463E-06 & 9.8064E-07 & $\approx$ 0.9889 \\ 0.3 & 4 & 3.1889E-05 & 1.6237E-05 & 8.2196E-06 & 4.1387E-06 & 2.0772E-06 & $\approx$ 0.9945 \\ & 5 & 5.4773E-05 & 2.7724E-05 & 1.3968E-05 & 7.0136E-06 & 3.5144E-06 & $\approx$ 0.9969 \\ & 6 & 8.2820E-05 & 4.1748E-05 & 2.0980E-05 & 1.0520E-05 & 5.2679E-06 & $\approx$ 0.9978 \\ \hline & 2 & 2.9559E-06 & 1.3904E-06 & 7.0643E-07 & 3.6011E-07 & 1.8223E-07 & $\approx$ 0.9827 \\ & 3 & 1.0324E-05 & 5.2570E-06 & 2.6619E-06 & 1.3405E-06 & 6.7271E-07 & $\approx$ 0.9947 \\ 0.8 & 4 & 2.2209E-05 & 1.1224E-05 & 5.6481E-06 & 2.8338E-06 & 1.4195E-06 & $\approx$ 0.9974 \\ & 5 & 3.7832E-05 & 1.9034E-05 & 9.5523E-06 & 4.7858E-06 & 2.3953E-06 & $\approx$ 0.9985 \\ & 6 & 5.6851E-05 & 2.8532E-05 & 1.4299E-05 & 7.1590E-06 & 3.5818E-06 & $\approx$ 0.9991 \\ \hline \end{tabular} \label{Tab:homspaceL2w} \end{table} \begin{table}[htbp] \caption{$H^{1}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 256 & 512 & 1024 & 2048 & 4096 & Rate \\ \hline & 2 & 6.2678E-03 & 3.1320E-03 & 1.5658E-03 & 7.8285E-04 & 3.9142E-04 & $\approx$ 1.0000 \\ & 3 & 6.2550E-03 & 3.1256E-03 & 1.5625E-03 & 7.8124E-04 & 3.9062E-04 & $\approx$ 1.0000 \\ 0.3 & 4 & 6.2357E-03 & 3.1159E-03 & 1.5577E-03 & 7.7883E-04 & 3.8941E-04 & $\approx$ 1.0000 \\ & 5 & 6.2109E-03 & 3.1036E-03 & 1.5515E-03 & 7.7574E-04 & 3.8787E-04 & $\approx$ 1.0000 \\ & 6 & 6.1818E-03 & 3.0890E-03 & 1.5443E-03 & 7.7210E-04 & 3.8604E-04 & $\approx$ 1.0000 \\ \hline & 2 & 1.7995E-03 & 8.9920E-04 & 4.4953E-04 & 2.2476E-04 & 1.1238E-04 & $\approx$ 1.0000 \\ & 3 & 1.7884E-03 & 8.9367E-04 & 4.4677E-04 & 2.2338E-04 & 1.1169E-04 & $\approx$ 1.0000 \\ 0.8 & 4 & 1.7722E-03 & 8.8558E-04 & 4.4272E-04 & 2.2135E-04 & 1.1068E-04 & $\approx$ 1.0000 \\ & 5 & 1.7524E-03 & 8.7568E-04 & 4.3777E-04 & 2.1888E-04 & 1.0944E-04 & $\approx$ 1.0000 \\ & 6 & 1.7306E-03 & 8.6475E-04 & 4.3231E-04 & 2.1615E-04 & 1.0807E-04 & $\approx$ 1.0000 \\ \hline \end{tabular} \label{Tab:homspaceH1w} \end{table} \end{example} \begin{example} We take $\rho=-1+\mathbf{i}$, \begin{equation*} f(x_0,\rho,t)=x_{0}(1-x_{0})e^{-\rho\chi_{(0.5,1)}(x_{0})t},\quad G(x_{0},\rho,0)=0,~~{\rm and}~~U(x_{0})=\chi_{(0.5,1)}(x_{0}). \end{equation*} We choose $\tau=1/200$ to decrease the influence caused by temporal discretizations. Use \eqref{eqfullscheme} to solve \eqref{eqretosol} and present the corresponding temporal errors and convergence rates in Tables \ref{Tab:NonspaceL2} and \ref{Tab:NonspaceH1}. All the results agree with the predictions. 
\begin{table}[htbp] \caption{$L^{2}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 20 & 40 & 80 & 160 & 320 & Rate \\ \hline & 2 & 3.3965E-05 & 8.4921E-06 & 2.1231E-06 & 5.3077E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ & 3 & 3.3964E-05 & 8.4919E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ 0.3 & 4 & 3.3964E-05 & 8.4918E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ & 5 & 3.3964E-05 & 8.4918E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ & 6 & 3.3964E-05 & 8.4919E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ \hline & 2 & 3.3965E-05 & 8.4921E-06 & 2.1231E-06 & 5.3077E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ & 3 & 3.3964E-05 & 8.4919E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ 0.6 & 4 & 3.3964E-05 & 8.4918E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ & 5 & 3.3964E-05 & 8.4918E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ & 6 & 3.3964E-05 & 8.4919E-06 & 2.1230E-06 & 5.3076E-07 & 1.3269E-07 & $\approx$ 2.0000 \\ \hline \end{tabular} \label{Tab:NonspaceL2} \end{table} \begin{table}[htbp] \caption{$H^{1}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 20 & 40 & 80 & 160 & 320 & Rate \\ \hline & 2 & 2.5503E-03 & 1.2756E-03 & 6.3784E-04 & 3.1893E-04 & 1.5947E-04 & $\approx$ 1.0000 \\ & 3 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ 0.3 & 4 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ & 5 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ & 6 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ \hline & 2 & 2.5503E-03 & 1.2756E-03 & 6.3784E-04 & 3.1893E-04 & 1.5947E-04 & $\approx$ 1.0000 \\ & 3 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ 0.6 & 4 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ & 5 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ & 6 & 2.5502E-03 & 1.2755E-03 & 6.3783E-04 & 3.1892E-04 & 1.5946E-04 & $\approx$ 1.0000 \\ \hline \end{tabular} \label{Tab:NonspaceH1} \end{table} \begin{table}[htbp] \caption{$L^{2}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 20 & 40 & 80 & 160 & 320 & Rate \\ \hline & 2 & 9.0942E-06 & 6.0132E-06 & 5.5959E-06 & 3.3906E-06 & 1.8330E-06 & $\approx$ 0.8873 \\ & 3 & 9.0858E-06 & 6.0174E-06 & 5.5979E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ 0.3 & 4 & 9.0857E-06 & 6.0174E-06 & 5.5980E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ & 5 & 9.0857E-06 & 6.0174E-06 & 5.5980E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ & 6 & 9.0857E-06 & 6.0174E-06 & 5.5980E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ \hline & 2 & 9.0942E-06 & 6.0132E-06 & 5.5959E-06 & 3.3906E-06 & 1.8330E-06 & $\approx$ 0.8873 \\ & 3 & 9.0858E-06 & 6.0174E-06 & 5.5979E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ 0.6 & 4 & 9.0857E-06 & 6.0174E-06 & 5.5980E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ & 5 & 9.0857E-06 & 6.0174E-06 & 5.5980E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ & 6 & 9.0857E-06 & 6.0174E-06 & 5.5980E-06 & 3.3916E-06 & 1.8335E-06 & $\approx$ 0.8874 \\ \hline \end{tabular} \label{Tab:NonspaceL2w} \end{table} \begin{table}[htbp] 
\caption{$H^{1}$-norm errors in space and convergence rates} \begin{tabular}{|c|c|ccccc|c|} \hline $\alpha$ & $k\backslash 1/h$ & 20 & 40 & 80 & 160 & 320 & Rate \\ \hline & 2 & 2.5451E-03 & 1.2730E-03 & 6.3657E-04 & 3.1829E-04 & 1.5915E-04 & $\approx$ 1.0000 \\ & 3 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ 0.3 & 4 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ & 5 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ & 6 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ \hline & 2 & 2.5451E-03 & 1.2730E-03 & 6.3657E-04 & 3.1829E-04 & 1.5915E-04 & $\approx$ 1.0000 \\ & 3 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ 0.6 & 4 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ & 5 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ & 6 & 2.5450E-03 & 1.2730E-03 & 6.3655E-04 & 3.1828E-04 & 1.5914E-04 & $\approx$ 1.0000 \\ \hline \end{tabular} \label{Tab:NonspaceH1w} \end{table}
To show the differences between the two projections, we also compute this example with the numerical scheme \eqref{eqfullnonwrong}, i.e., we take the $L^{2}$ projection of $f$ directly,
\begin{equation}\label{eqfullnonwrong} \begin{aligned} &\sum_{i=0}^{n-1}d^{\alpha,k}_{i}(e^{-t_{i}\rho U(x_{0})}G^{n-i}_{h} ,v_{h})+(A_{h} G^{n}_{h}, v_{h})=\sum_{i=0}^{n-1}d^{\alpha-1,k}_{i}(e^{-t_{i}\rho U(x_{0})}P_{h}f^{n-i},v_{h})\\ &\qquad\quad+\sum_{j=1}^{k-1}a^{(k)}_{j}d^{\alpha-1,k}_{n-j}(e^{-t_{n-j}\rho U(x_{0})}P_{h}f^{0},v_{h})\\ &\qquad\quad+\sum_{l=1}^{k-2}\sum_{j=1}^{k-1}b^{(k)}_{l,j}\tau^{l}d^{\alpha-1,k}_{n-j}(e^{-t_{n-j}\rho U(x_{0})}P_{h}\partial^{l}_{t}f(0),v_{h}) \qquad \forall v_{h}\in X_{h}. \end{aligned} \end{equation}
The relevant $L^{2}$- and $H^{1}$-norm errors and convergence rates are presented in Tables \ref{Tab:NonspaceL2w} and \ref{Tab:NonspaceH1w}, which show that the numerical scheme \eqref{eqfullnonwrong} delivers an $O(h)$ accuracy in $H^{1}$-norm and only about $O(h^{0.9})$ accuracy in $L^{2}$-norm. Thus, according to the results in Tables \ref{Tab:NonspaceL2} and \ref{Tab:NonspaceL2w}, our schemes can solve Eq. \eqref{eqretosol} more effectively. \end{example}
\section{Conclusions}
Functionals, as an important class of statistical observables, play a key role in uncovering the mechanisms of anomalous dynamics and extending their applications. The probability density function of the functional for anomalous dynamics is governed by the fractional Feynman-Kac equation. The time-space coupled operator of the equation, the possible low regularity of the functional, and the complex variables bring challenges to solving the equation effectively. This paper carefully designs numerical schemes that achieve the optimal temporal convergence rates up to order $6$ and the optimal spatial convergence rate without any regularity assumptions on the solution. The convergence results are theoretically proved and numerically confirmed. More numerical experiments are also performed to show the benefits of the schemes presented in this paper compared with the existing ones in solving the fractional Feynman-Kac equation. \bibliographystyle{siamplain}
\section{Introduction} Discrete optimization problems defined on graphs are widespread among many scientific disciplines and commonly found in real-world applications. Depending on the properties of the underlying graph, these optimization problems may become so hard to solve that all known algorithms find only very suboptimal solutions, while the optimal ones remain unreachable to algorithms running in polynomial time. A common benchmark to test the effectiveness of search algorithms is represented by optimization problems defined on random graphs (typical case analysis). In this case, the hardness of the optimization problem can usually be controlled by continuously varying a model parameter (e.g. the random graph mean degree or the solution size), and different algorithms can be quantitatively compared on the basis of how close to optimality they can go. Unfortunately, in optimization problems that, in the worst case analysis, are NP-hard and also hard to approximate, a large algorithmic gap is often present in the typical case analysis, i.e.\ all known algorithms stop working at an algorithmic threshold which is bounded far away from the optimal (information theoretical) threshold. Computing the ultimate algorithmic threshold in these hard problems and understanding whether and why such an algorithmic threshold remains below the optimal one are fundamental open questions. The present work takes a step towards answering these questions, by studying the problem of finding a large Independent Set (IS) in a Random Regular Graph (RRG). Given a graph $G=(V,E)$, an IS is a subset of vertices $S\subset V$ such that no vertices in $S$ are adjacent, that is $(ij)\notin E,\,\forall\,i,j\in S$. Finding the largest IS in a graph is a fundamental problem (NP-hard in the worst case), tightly related to minimum vertex cover and maximum clique \cite{bollobas1998random}. In physics, the problem is known under the name of hard-core model \cite{hartmann2006phase}, because vertices in $S$ can be seen as particles that have a hard-core interaction and cannot be adjacent. The largest IS thus corresponds to the densest packing configuration in the hard-core model. We call $\rho$ the relative size of the IS, that is $|S|=\rho |V| = \rho N$. On RRGs of constant degree $d$ it has been proved that, in the large $N$ limit, ISs with $\rho<\rho_{max} \sim 2 \log d / d$ do exist with high probability for $d$ large enough \cite{bollobas1976cliques,frieze1990independence}. However, algorithms running in polynomial time cannot find ISs with $\rho > \rho_{alg} \sim \log d /d$ for $d$ large enough \cite{coja2015independent}. Actually, this algorithmic threshold $\rho_{alg}$ can be achieved with very simple algorithms \cite{grimmett1975colouring}. The algorithmic gap, that is the strict inequality $\rho_{alg}<\rho_{max}$, has been proven for a class of local algorithms in the large $d$ limit \cite{gamarnik2014limits}. In this case, the origin of the algorithmic failure is due to the ergodicity breaking taking place at $\rho_{alg}$: this is a common phenomenon in optimization problems \cite{mezard2005clustering,achlioptas2011solution}, also called clustering or shattering of the solution space. One expects the ergodicity breaking taking place at $\rho_{alg}$ to affect also other types of algorithms. In particular, the sampling of the optimal solutions through numerical methods based on Monte Carlo Markov Chains should become much slower when ergodicity is broken, due to the need to overcome large barriers.
However, if one is just interested in finding a single optimal or very close to optimal solution, Monte Carlo methods may work better than expected. This is a question never investigated in detail (to the best of our knowledge) and its answer is one of the main motivations for the present work. We are going to analyze the performances of different algorithms, dedicating particular attention to those based on Monte Carlo Markov Chains, and we will try to relate such performances to the relevant phase transitions taking place in the space of IS in the limit of large RRG. Indeed, studying the thermodynamics of the problem via the cavity method, the authors of Ref.~\cite{barbier2013hard} showed how the space of IS changes while increasing $\rho$: for $d<16$ it undergoes a continuous phase transition from a Replica Symmetric (RS) phase to a phase described by a Full Replica Symmetry Breaking (FRSB) solution; while for $d\ge16$ the space of IS undergoes a random first-order transition (RFOT) and it can be described by a solution with one step of Replica Symmetry Breaking (1RSB). Let us briefly review the important phase transitions in the RFOT case, each one corresponding to a drastic change in the structure of the set of ISs. At small densities $\rho$, the ISs form a single large cluster (two ISs are considered adjacent if they differ in $o(N)$ vertices) and can be well described by an RS solution that assumes the existence of a single state. Increasing the density, one first finds a dynamical threshold $\rho_d$ above which the space of ISs is divided into an exponential number in $N$ of distinct clusters. This is the ergodicity breaking phase transition that affects local search algorithms and Monte Carlo methods for sampling. At the condensation threshold $\rho_c>\rho_d$ the number of clusters becomes sub-exponential, and beyond the maximum density $\rho_{max}$ there are no more ISs. This last threshold is the equivalent of the sat/unsat threshold in constraint satisfaction problems (CSP). Besides the above thermodynamic transitions, another property has been conjectured to be important for understanding the origin of the algorithmic complexity in CSP: the concept of frozen clusters \cite{achlioptas2006solution,zdeborova2007phase,achlioptas2011solution}. A cluster of solutions is said to be frozen if it contains frozen variables that take the same value in all the solutions of that cluster. The rigidity threshold $\rho_r$ is defined such that for $\rho>\rho_r$ typical clusters are frozen, while above the freezing transition $\rho_f$ all clusters are frozen. In CSP many smart algorithms can find solutions in the clustered phase, but even the best-performing ones do not find frozen solutions \cite{marino2016backtracking}. For this reason, the freezing threshold is conjectured to be the ultimate algorithmic threshold. Unfortunately, its analytic computation is a very difficult task, which at present has been achieved only for random hypergraph bi-coloring \cite{braunstein2016large}. We will analyze different kinds of algorithms running in polynomial time. We avoid using algorithms that are known to find the largest IS in time typically growing exponentially in the graph size, since these are impractical. Three main classes of polynomial algorithms will be considered: greedy algorithms, Monte Carlo methods and message passing algorithms. Greedy algorithms are very popular \cite{feo1994greedy,feo1995greedy,halldorsson1997greed}, because they are extremely fast and often provide a reasonably large IS.
We will mainly focus on Monte Carlo based algorithms that have been much less studied. Indeed, the common belief is that a slow enough Simulated Annealing (SA) is able to reach densities not larger than the bottom of the equilibrium states at $\rho_d$ \cite{zdeborova2010generalization}. Above $\rho_d$ ergodicity is broken and Monte Carlo methods should not be able to sample correctly the equilibrium properties of the model. However, it could still be possible that there are states, accessible to the out-of-equilibrium dynamics, that terminate at densities $\rho>\rho_d$, and thus an out-of-equilibrium process can find very large ISs with $\rho>\rho_d$. Recently it has been proposed to enhance the weight of deep, large states in an efficient way by coupling some replicas of the system, for example in an SA algorithm \cite{baldassi2016unreasonable}. The Replicated SA (RSA) has been seen to enhance the performances of learning in some models of neural networks. Here we apply RSA to the problem of finding the largest IS, discovering indeed that this algorithm is able to find solutions when the standard SA is not able to, well beyond $\rho_d$. However, this seems to be true only if the transition is strongly discontinuous (RFOT). In case the transition is weakly discontinuous (or continuous), RSA and SA show similar performances. Finally, we will analyze the behavior of Parallel Tempering (PT). Although PT has been invented to sample {\it at equilibrium} the very rough energy landscape of disordered systems and posterior distributions \cite{hukushima1996exchange,earl2005parallel}, it can be used in the {\it out-of-equilibrium} regime to try to reach some of the lowest energy configurations \cite{moreno2003finding}. Recently the PT has been applied to the planted IS problem, allowing one to find the planted configuration in the supposedly hard regime (i.e.\ when the planted IS is very small) in a time that seems to scale polynomially with the system size \cite{angelini2018parallel}. In the random case, we show here that PT is able to find solutions above the algorithmic threshold of the SA and of all the other analyzed algorithms, including Belief-Propagation with Reinforcement, which is usually the best-performing message passing algorithm in other optimization problems, able to go beyond the rigidity transition \cite{dallasta2008entropy}. We will measure the scaling of the convergence time for PT, showing that indeed it stays polynomial for $\rho>\rho_d$.
\section{Problem definition and description of analyzed algorithms}
In this section, we report the details of the problem and of the algorithms whose performances are analyzed in the rest of the paper. The optimization problem we try to solve is to find the largest IS in a given RRG. Denoting by $K$ the size of an IS, we call $\rho=K/N$ its density. Finding the largest IS is clearly a zero-temperature problem since it imposes strong constraints on any pair of nearest neighbor vertices not to be in the IS. As usual in a statistical mechanics approach, we can add a temperature parameter $T=1/\beta$ and relax the strong constraints into soft ones. The probability measure can be written as
\begin{equation} P(\underline{n}) \propto \exp \bigg[ \mu \sum_{i=1}^N n_i - \beta \sum_{(ij)\in E} n_i n_j\bigg] \label{eq:measure} \end{equation}
where $n_i\in\{0,1\}$. In the $T\to 0$ limit, vertices with $n_i=1$ form the IS, and the largest IS can in principle be achieved by sending $\mu\to\infty$ afterwards.
In practice, we are going to approach such a limit ($T\to 0$ and $\mu\to\infty$) in two different ways. In the first way, we fix the IS size $K$, such that the first term in the measure in Eq.~(\ref{eq:measure}) is constant and can be ignored, and we study the problem in temperature. In the second way we fix $T=0$, making the constraints hard, that is, we rewrite the measure as follows
\begin{equation} P(\underline{n}) \propto \exp \bigg[ \mu \sum_{i=1}^N n_i\bigg] \prod_{(ij)\in E} (1-n_i n_j) \label{eq:measureT0} \end{equation}
and we study the problem increasing $\mu$. We will use many different algorithms, described in the following list. Each algorithm will show its own algorithmic threshold $\rho_{alg}$ above which that algorithm is not able to find an IS.
\begin{itemize}
\item \textbf{Greedy algorithms} (GA) Greedy algorithms are linear time algorithms where variables are set just once during the process of finding an IS. They differ according to the rule which is used to select the next vertex to include in the growing IS. Schematically they work as follows:
\begin{itemize} \item start with all $n_i=0$; \item at each step choose a vertex $v$ from the graph and add it to the IS, i.e.\ set $n_v=1$; \item the vertex is chosen uniformly at random in the `random vertex' version (RV GA) and such as to have the smallest degree in the `minimum degree' version (MD GA); \item all the neighbors of the chosen vertex are removed from the graph. \end{itemize}
The random vertex version has been designed by Karp and Sipser \cite{Karp1981Maximum} and produces with high probability an IS of size $N\log(d+1)/d$ both at finite and large $d$. The minimum degree version has been introduced in Ref.~\cite{wormald1995differential} and gives better results, at least for finite $d$, while it has the same scaling at large $d$ values. The computational time of the greedy algorithm scales as $O(dN)$, that is linear in the graph size (a minimal code sketch of both versions is given below).
\item \textbf{Monte Carlo in temperature} ($\beta$MC) We fix the size $K$ of the IS we would like to find and the temperature $T=1/\beta$ to be used in the Monte Carlo algorithm. The algorithm will sample configurations with exactly $K$ variables set to $n=1$, that is $\sum_i n_i=K$; each of these configurations can be equivalently described in terms of the subset of vertices containing a particle $\mathcal{I} \equiv \{i\in V: n_i=1\}$. To each configuration we associate the energy $E(\underline{n}) = \sum_{(ij)\in E} n_i n_j$ counting how many pairs of nearest neighbours are filled ($n=1$). A configuration of zero energy is an IS of size $K$. We start by choosing $\mathcal{I}$ as a random subset of $K$ vertices of $V$. At each step of the algorithm we propose to move a randomly chosen particle to a randomly chosen empty vertex; the particle and the empty vertex do not need to be nearest neighbors, so the algorithm is not standard diffusion. Calling $\underline{n}$ the current configuration and $\underline{n}'$ the proposed configuration, we follow the standard Metropolis rule for accepting the proposed configuration, that is we accept the change with probability 1 if $E(\underline{n}') \le E(\underline{n})$, and with probability $\exp[-\beta(E(\underline{n}')-E(\underline{n}))]$ otherwise. As conventionally done, we define a Monte Carlo Sweep (MCS) as the attempt to move a randomly chosen particle, repeated $K$ times. We stop the algorithm when a configuration $\underline{n}_{IS}$ with $E(\underline{n}_{IS})=0$ is found, which corresponds to an IS (the elementary move is sketched below).
\item \textbf{Parallel Tempering in temperature} ($\beta$PT) We consider $N_\beta$ replicas, each one with exactly $K$ variables set to $n=1$ as in the $\beta$MC method discussed above. Each replica undergoes a standard Metropolis evolution at inverse temperature $\beta_i=\beta_{max}-i\cdot \Delta\beta$, $i\in[0,N_\beta-1]$. Every 5 steps of $\beta$MC a temperature swapping step is attempted for each pair of configurations at nearby temperatures $\beta_i$ and $\beta_{i+1}$; the temperature swap is accepted with probability
\begin{equation} p=\min\left(1,e^{(\beta_i-\beta_{i+1})(E_i-E_{i+1})}\right), \label{eq:flipping} \end{equation}
where $E_i$ is the current value of the energy of the $i$-th replica. The algorithm is stopped when a replica (usually the one with the lowest temperature) reaches a zero energy configuration (the swap step is sketched below).
\item \textbf{Simulated Annealing in chemical potential} ($\mu$SA) Working directly at zero temperature, i.e. sampling the measure in Eq.~(\ref{eq:measureT0}), we run a Simulated Annealing scheme in the following way. We start from the empty configuration $n_i=0\;\forall i$, which certainly satisfies all the constraints, and from a null chemical potential $\mu=0$. At each step of the SA algorithm we increase the chemical potential by $\Delta\mu$ and we do a Monte Carlo sweep, which corresponds to the attempt to update each of the $N$ variables $n_i$ following the usual Metropolis rule: in practice, if $n_i=0$, we set $n_i=1$ only if all the nearest neighbors are empty, and if $n_i=1$ we set $n_i=0$ with probability $\exp(-\mu)$. We stop the SA algorithm at a value $\mu_{max}$ where we observe that the IS density $\rho=\sum_i n_i /N$ no longer increases on any reasonable timescale. The algorithm, at fixed parameter $\Delta\mu$, is linear in the size $N$ (a sketch of the elementary sweep is given below).
\item \textbf{Replicated Simulated Annealing in chemical potential} ($\mu$RSA) In Ref. \cite{baldassi2016unreasonable} a replicated version of the SA is proposed to sample with higher probability states with larger entropy. To define the Replicated SA, we introduce $R$ replicas of the variables on the same RRG, and a coupling between the different replicas according to the following measure:
\begin{equation} P(\underline{n}^1,\ldots,\underline{n}^R) \propto \exp \bigg[ \mu \sum_{a=1}^R \sum_{i=1}^N n_i^a + \gamma \sum_{a< b} \sum_{i=1}^N n_i^a n_i^b\bigg] \prod_{a=1}^R \prod_{(ij)\in E} (1-n_i^a n_j^a) \label{eq:repH_HC} \end{equation}
We then run the SA algorithm on this replicated system, fixing the value of $\gamma$ and incrementing the value of $\mu$ as in the $\mu$SA. At variance with the numerical experiments in Ref.~\cite{baldassi2016unreasonable}, where $\gamma$ is incremented during the annealing, we prefer to keep $\gamma$ fixed, as we have seen that varying $\gamma$ does not improve the final result.
\item \textbf{Parallel Tempering in chemical potential} ($\mu$PT) We consider $N_\mu$ replicas of the system, each replica being at a different chemical potential: $\mu_i=\mu_{max}-i\cdot \Delta\mu$, $i\in[0,N_\mu-1]$. For each replica, we run 5 Metropolis Monte Carlo sweeps at the corresponding chemical potential and then we try to swap configurations between nearby values of the chemical potential with probability
\begin{equation} p=\min\left(1,e^{(\mu_i-\mu_{i+1})(K_{i+1}-K_i)}\right), \label{eq:flippingPT} \end{equation}
where $K_i$ is the current number of variables set to 1 in the $i$-th replica. We stop the simulation if a replica (usually the one of index 0) reaches the IS size $K$ we aim at.
\item \textbf{Belief Propagation with Reinforcement} (BPR) The Belief Propagation equations for the present problem were already derived in Ref.~\cite{barbier2013hard}:
\begin{equation} \pi_{i\to j}=\frac{e^{\mu}\prod_{k\in\partial i\backslash j}(1-\pi_{k\to i})}{1+e^{\mu}\prod_{k\in\partial i\backslash j}(1-\pi_{k\to i})}, \end{equation}
where $\pi_{i\to j}$ is the probability to have $n_i=1$ in a modified graph where edge $(ij)$ has been removed. These equations for $\mu<\mu_c$ converge to a homogeneous paramagnetic fixed point (FP). To turn the BP equations into a solver, one can add a reinforcement term, initially introduced in Ref.~\cite{braunstein2006learning}, with two parameters $\gamma$ and $dt$ that tune, respectively, the strength and the speed of update of the reinforcement term. In practice, the equations for the update of the messages become:
\begin{equation} \pi^{t+1}_{i\to j}=\frac{e^{\mu}[\theta_i(t)]^{1-\gamma_t}\prod_{k\in\partial i\backslash j}(1-\pi^t_{k\to i})}{1+e^{\mu}[\theta_i(t)]^{1-\gamma_t}\prod_{k\in\partial i\backslash j}(1-\pi^t_{k\to i})}, \end{equation}
with $\theta_i(t)=\prod_{k\in\partial i}(1-\pi^{t-1}_{k\to i})$ and $\gamma_t=\gamma^{\lfloor t\ dt\rfloor}$. The FP reached when reinforcement is present is a completely magnetized one, that is, the marginal probabilities for the values of $n_i$ are such that $P[n_i=1]\in\{0,1\}$, and thus each variable is surely in the IS or surely outside of it. Thus the FP reached by BPR does correspond to an IS.
\end{itemize}
The attentive reader probably notices that the above list does not include all possible Monte Carlo schemes: for example (Replicated) Simulated Annealing in temperature and simple Monte Carlo in chemical potential are missing. For this reason, we now spend a few words discussing our choice of the analyzed algorithms and explaining how the present work is organized. The first algorithms in the list are greedy algorithms. They are clearly suboptimal and have been run just to give an idea of the IS size that is very easy to find in linear time. This information will also be useful to set the parameters of more refined algorithms such as PT. The algorithmic thresholds for the GA, as for all the other analyzed algorithms, are reported in Table \ref{Tab:rho_max}. We then analyze in Sec.~\ref{Sec:betaalg} the algorithms at fixed density, $\beta$MC and $\beta$PT. In these algorithms, the size of the IS one is looking for is fixed to $K$ and what is changed is the inverse temperature parameter $\beta$, which in turn varies the number of links within the set $\mathcal{I}$ representing the putative IS. Naturally, in the limit $\beta\to\infty$, no more links inside $\mathcal{I}$ are allowed and we obtain a true IS. We start the discussion about stochastic algorithms with the analysis of $\beta$MC because this is an adaptation to the IS problem of commonly used local search algorithms, e.g.\ WALKSAT or ASAT \cite{aurell2004WALKSAT,ardelius2006ASAT}, which have been applied with success to problems like random $K$-SAT or random graph coloring: the main difference being that $\beta$MC respects the detailed balance condition, while WALKSAT or ASAT do not. We then study the $\beta$PT algorithm because this is the most common way to improve Monte Carlo sampling methods in glassy systems. This algorithm seems to scale superlinearly, but still polynomially, with the problem size $N$, as shown in detail in Sec.~\ref{Sec:scalingN}.
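For concreteness, we complement the descriptions above with a few minimal Python sketches. They are purely illustrative (they are not the code used to produce the results of this paper) and all names and data structures are our own choice. The first sketch implements the two greedy algorithms (RV GA and MD GA) on a graph given as an adjacency list:
\begin{verbatim}
import random

def greedy_is(adj, rule="random"):
    # Greedy independent set; adj = {vertex: set of neighbors}.
    # rule="random" -> random vertex version (RV GA)
    # rule="mindeg" -> minimum degree version (MD GA)
    adj = {v: set(nb) for v, nb in adj.items()}      # work on a copy
    independent_set = set()
    while adj:
        if rule == "random":
            v = random.choice(list(adj))
        else:
            v = min(adj, key=lambda u: len(adj[u]))  # smallest current degree
        independent_set.add(v)
        removed = {v} | adj[v]                       # remove v and all its neighbors
        for u in removed:
            for w in adj.get(u, set()):
                if w in adj:
                    adj[w].discard(u)
            adj.pop(u, None)
    return independent_set

# example: greedy_is({0: {1}, 1: {0, 2}, 2: {1}}, rule="mindeg") returns {0, 2}
\end{verbatim}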
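The second sketch illustrates the elementary move of $\beta$MC at fixed IS size $K$ and the replica-swap step of $\beta$PT of Eq.~(\ref{eq:flipping}) (the $\mu$PT swap of Eq.~(\ref{eq:flippingPT}) is completely analogous); again, this is only a sketch with illustrative names:
\begin{verbatim}
import math, random

def delta_energy(adj, occupied, src, dst):
    # Energy counts edges with both endpoints occupied; return E(new) - E(old)
    # when the particle on src is moved to the empty vertex dst.
    e_src = sum(1 for w in adj[src] if w in occupied)
    e_dst = sum(1 for w in adj[dst] if w in occupied and w != src)
    return e_dst - e_src

def beta_mc_sweep(adj, occupied, empty, beta, rng=random):
    # One MCS: K attempts to move a random particle to a random empty vertex,
    # accepted with the Metropolis probability min(1, exp(-beta * dE)).
    for _ in range(len(occupied)):
        src = rng.choice(tuple(occupied))
        dst = rng.choice(tuple(empty))
        dE = delta_energy(adj, occupied, src, dst)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            occupied.remove(src); occupied.add(dst)
            empty.remove(dst); empty.add(src)

def pt_swap(betas, energies, configs, rng=random):
    # Swap configurations of neighboring temperatures with probability
    # min(1, exp[(beta_i - beta_{i+1}) (E_i - E_{i+1})]).
    for i in range(len(betas) - 1):
        arg = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if arg >= 0 or rng.random() < math.exp(arg):
            configs[i], configs[i + 1] = configs[i + 1], configs[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
\end{verbatim}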
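The last sketch shows the zero-temperature sweep used by $\mu$SA; $\mu$RSA and $\mu$PT are built on the same elementary update, with, respectively, the inter-replica coupling of Eq.~(\ref{eq:repH_HC}) and the swap of Eq.~(\ref{eq:flippingPT}) added on top:
\begin{verbatim}
import math, random

def mu_sa_sweep(adj, n, mu, rng=random):
    # One sweep of the zero-temperature Metropolis dynamics at chemical potential mu;
    # n = {vertex: 0 or 1} is always a valid IS, constraints are never violated.
    vertices = list(n)
    rng.shuffle(vertices)
    for v in vertices:
        if n[v] == 0:
            # insertion is accepted only if all nearest neighbors are empty
            if all(n[w] == 0 for w in adj[v]):
                n[v] = 1
        elif rng.random() < math.exp(-mu):
            # removal is accepted with probability exp(-mu)
            n[v] = 0

# In muSA, mu is increased by Delta_mu after every sweep, starting from the
# empty configuration, until the IS density stops growing.
\end{verbatim}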
In Sec.~\ref{Sec:T0alg} we then move to analyze stochastic algorithms that work directly at zero temperature, $\mu$SA, $\mu$RSA and $\mu$PT, where links inside $\mathcal{I}$ are not allowed and the tuning parameter is the chemical potential $\mu$. We do not study the $\beta$SA because the extrapolation of the algorithmic threshold in that case is a long and difficult task \cite{Budzynski18}: one should find the threshold for any given $\mu$ and then extrapolate in the $\mu\to\infty$ limit. The extrapolation of the algorithmic threshold is instead direct for the $\mu$SA algorithm, and for this reason, we prefer to study this version of SA. We will see that the $\mu$PT has an algorithmic threshold similar to the $\beta$PT one, thus showing that the performances of PT are rather robust. Finally, in Sec.~\ref{Sec:BPR} we compare the results obtained via the stochastic algorithms with the outcome of BPR, which is a powerful message passing algorithm, widely used to solve problems defined on random graphs.
\begin{table}[t] \resizebox{\textwidth}{!}{% \begin{tabular}{ |l||l|l| l | l | l | l |l|l|l|l|l|} \hline $d$ &$\rho_d$& $\rho_c$ & $\rho_{max}$ & RV GA & MD GA & $\beta$MC & $\beta$PT & $\mu$SA & $\mu$RSA & $\mu$PT & BPR \\ \hline \hline 20 & 0.1830 & 0.1833 & 0.1948 & 0.1512(1) & 0.1737(1) & 0.1906(4) & 0.1943(2) & 0.1937(1) & 0.19370(5) & 0.1945(1) & 0.1933(1) \\ \hline 100 & 0.0638 & 0.0664 & 0.0674 & 0.0447(1) & 0.0572(2) & 0.0642(1) & 0.0657(1) & 0.06470(3) & 0.06479(1) & 0.0655(1) & 0.0650(1) \\ \hline \end{tabular} } \caption{Relevant physical thresholds $\rho_d$, $\rho_c$ and $\rho_{max}$ reported from Ref.~\cite{barbier2013hard} and the algorithmic thresholds found in this work for many different algorithms searching for the largest IS in a RRG of degree $d=20$ and $d=100$.} \label{Tab:rho_max} \end{table}
We will mainly analyze the problem at $d=20$, where the transition is still close to the continuous one, and $d=100$, where the transition is distinctly 1RSB. In Table~\ref{Tab:rho_max}, the values of $\rho_d$, $\rho_c$ and $\rho_{max}$, together with the thresholds for the maximum density reached by the analyzed algorithms, are reported.
\section{Maximum density reached by fixed-density algorithms}\label{Sec:betaalg}
In this section we look at the performances of the fixed-density algorithms, namely $\beta$MC and $\beta$PT. For these algorithms, if we measure the running time in Monte Carlo Sweeps (MCS), a linear dependence on $N$ is hidden in the single MCS (which takes a time proportional to $N$), and we can limit ourselves to measuring the number of MCS needed to reach the wanted solution in order to understand the computational complexity of this class of algorithms. In Fig.~\ref{Fig:T_d20} we show the number of MCS needed by $\beta$MC and $\beta$PT to converge to an IS of a given density $\rho$. Also, the results for $\mu$PT are shown for comparison. As far as $\beta$MC is concerned, the optimal value of $\beta$ maximizing the probability of reaching an IS, i.e.\ a zero energy configuration, is likely to depend on $N$. Consequently, the convergence time will depend on $N$, since we expect the Monte Carlo dynamics to slow down when the temperature is decreased. Nevertheless, we are not going to make this detailed study, because, as shown in Fig.~\ref{Fig:T_d20}, a standard Monte Carlo run at a single temperature is easily outperformed by Parallel Tempering. The time to find an IS of a given density $\rho$ clearly diverges when approaching the algorithmic threshold $\rho_{alg}$.
In order to estimate the algorithmic threshold, we need to perform an extrapolation. The best data interpolation is obtained via a power law divergence
\begin{equation} \tau=\frac{C}{(\rho_{alg}-\rho)^\nu}\;, \label{eq:tau(rho)} \end{equation}
where $C$, $\nu$ and $\rho_{alg}$ are the fitting parameters (specific to each different algorithm). The best fitting curves are shown with full lines in Fig.~\ref{Fig:T_d20}. The extrapolated algorithmic thresholds are reported in Table \ref{Tab:rho_max}, while the best fitting values for the $\nu$ exponent can be found in Table \ref{Tab:nu}. Data in Fig. \ref{Fig:T_d20} are for size $N=5\cdot 10^4$, which is large enough that finite-size effects are not present in the estimation of $\rho_{alg}$. The dependence of $C$ and $\nu$ on the size will be discussed in Sec.~\ref{Sec:scalingN}. We notice that both versions of PT (in temperature and chemical potential) have very similar algorithmic thresholds. This may suggest that at that density value there is some unavoidable hardness that affects both versions of PT. Our PT scheduling is deliberately not particularly optimized, because we believe that if an unavoidable algorithmic barrier arises at a certain density value, it should affect any version of Monte Carlo based algorithms. The only parameter that we decide to fix in an (almost) optimal way is $\beta_{min}$, i.e.\ the lowest value for the inverse temperature: indeed, a too low $\beta_{min}$ requires a larger running time without any performance improvement (too many replicas at high temperature are useless), while a too large $\beta_{min}$ does not allow the configurations to decorrelate fast enough. We find that a very good choice for $\beta_{min}$ is the inverse temperature such that the density of the largest IS among the $K$ variables with $n=1$ is almost the maximum IS density reached by the best greedy algorithm. This means that the replica at $\beta_{min}$ can easily travel in the whole configurational space and this is enough for the PT algorithm to work properly.
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{d20}\hfill \includegraphics[width=0.47\textwidth]{d100} \caption{\label{Fig:T_d20} Convergence time for $\beta$MC (with parameter $\beta=11$), $\beta$PT (with parameters $\beta_{max}=11$, $\Delta\beta=0.4$, $N_\beta=20$) and $\mu$PT algorithm (with parameters $\mu_{max}=6$, $\Delta\mu=0.2$, $N_\mu=20$) for $N=5\cdot10^4$ and $d=20$ (left) or $d=100$ (right). The vertical lines show the theoretical thresholds for comparison.} \end{figure}
\begin{table} \begin{center} \begin{tabular}{ |l||l|l| l |} \hline & $\beta$MC & $\beta$PT & $\mu$PT \\ \hline $d=20$ & 4.2(2) & 3.12(4) & 3.34(7)\\ \hline $d=100$ & 4.0(1) & 4.2(2) & 3.2(1) \\ \hline \end{tabular} \end{center} \caption{Fitting the divergence of the convergence time shown in Fig. \ref{Fig:T_d20} via the power law $\tau=C(\rho_{alg}-\rho)^{-\nu}$, the best fitting values for $\nu$ are those shown in this table.} \label{Tab:nu} \end{table}
\subsection{Scaling with $N$ for the $\beta$PT}\label{Sec:scalingN}
We have seen that the PT algorithm is able to find solutions in a region of $\rho$ where other algorithms fail. The next important question to answer is how the number of PT iterations needs to be scaled with $N$ in order to find an IS of density $\rho$. The issue is particularly relevant above $\rho_d$ and approaching $\rho_{alg}$, where the convergence time diverges.
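Both the divergence fit of Eq.~(\ref{eq:tau(rho)}) and the size-scaling fits discussed below are ordinary least-squares fits in logarithmic variables. As an illustration, they can be performed as follows (a sketch only; the data arrays are placeholders, not our measured values):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Log-log fit of tau(N) = a * N**b  (placeholder data).
N   = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
tau = np.array([2.1e3, 4.9e3, 1.3e4, 3.1e4, 8.2e4])
b, log_a = np.polyfit(np.log(N), np.log(tau), 1)

# Fit of the divergence tau(rho) = C / (rho_alg - rho)**nu  (placeholder data).
rho   = np.array([0.180, 0.184, 0.188, 0.190, 0.192])
tau_r = np.array([2.0e3, 5.0e3, 2.0e4, 6.0e4, 4.0e5])
model = lambda r, C, nu, ra: np.log(C) - nu * np.log(ra - r)
popt, _ = curve_fit(model, rho, np.log(tau_r), p0=[1.0, 3.0, 0.195],
                    bounds=([1e-12, 0.5, rho.max() + 1e-4], [np.inf, 10.0, 0.5]))
C_fit, nu_fit, rho_alg_fit = popt
\end{verbatim}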
To analyze the scaling with $N$, we implement an optimized choice of the temperatures in the PT algorithm, whose derivation is in Appendix \ref{App}. The optimized temperature scheduling requires a number of replicas in a range $\beta\in[0,\beta_{max}]$ that scales as $\sqrt{N}$. However, the replicas in the range $\beta\in[0,\beta_{min}]$ are useless and can be safely ignored without altering PT performances. In practice, we end up with $N_\beta\sim40$ in the worst case studied ($d=100$, $N=10^5$ and $\rho=0.0646$).
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{WalkPTd100}\hfill \includegraphics[width=0.46\textwidth]{exponents} \caption{Left: MCS to find a solution for $d=100$ for different values of $\rho$ as a function of the size $N$ of the graph for the optimized $\beta$PT. Errors are smaller than the points. The fits are of the form $\tau(N)=a N^{b}$. Right: Dependence of the exponent $b$ on the distance from the algorithmic threshold $\rho_{alg}-\rho$. The fit is of the type $b=c_1+c_2\cdot\log(\rho_{alg}-\rho)$. The right border of the plot corresponds to $\rho=0$.} \label{Fig:scalingN} \end{figure}
To study the size dependence of the convergence time, we run all our $\beta$PT simulations with the temperature set defined in Eq.~(\ref{eq:betasPT}) with $r=r_\text{opt}$, between $\beta_{min}$ and $\beta_{max}$. In Fig.~\ref{Fig:scalingN} we show for $d=100$ the results in a wide range of densities (similar behaviour is observed for $d=20$). The running times grow as a power law in $N$
\begin{equation} \tau(N)=a(\rho)\cdot N^{b(\rho)}\,, \label{eq:tau(N)} \end{equation}
where the main $\rho$ dependence is in the prefactor $a(\rho)$, which diverges at $\rho_{alg}$ as in Eq.~(\ref{eq:tau(rho)}). However, there is also a slight dependence on $\rho$ in the exponent $b$. We plot $b$ as a function of $\rho$ in the right panel of Fig.~\ref{Fig:scalingN}, together with a fit of the type $b(\rho)=c_1+c_2\cdot\log(\rho_{alg}-\rho)$, which interpolates the data nicely. We notice that this behaviour is the one that makes Eqs.~(\ref{eq:tau(rho)}) and (\ref{eq:tau(N)}) compatible, since they are particular cases of the more general expression
\begin{equation} \log(\tau(\rho,N))=\log(c)-\nu'\log(\rho_{alg}-\rho)+c_1\log(N)+c_2\log(N)\log(\rho_{alg}-\rho)\,. \label{eq:tau(N,rho)} \end{equation}
For a fixed value of $N$ we recover Eq.~(\ref{eq:tau(rho)}) with $C=c N^{c_1}$ and $\nu=\nu'-c_2 \log(N)$. From the data shown in Fig.~\ref{Fig:scalingN} it is evident that the exponent $b$ is positive even in the ``easy'' region and it seems to go to zero only for $\rho\simeq0$. This means that using PT to find ISs always requires a running time growing more than linearly in $N$. We think this is due to the fact that PT is a sophisticated algorithm developed to find solutions when the energy landscape is complex. For $\rho<\rho_d$, when there is just a single state, PT is thus suboptimal (maybe with a different choice of the parameters it could become a linear algorithm in this region; this kind of optimization is, however, out of our scope: we introduced PT to reach solutions in the hard region). The time divergence as a power law approaching a given density, as in Eq.~(\ref{eq:tau(rho)}), is reminiscent of what happens in a first order phase transition, thus suggesting that at $\rho_{alg}$ an extensive barrier develops that makes it impossible to reach states with $\rho>\rho_{alg}$ in polynomial time.
The weak dependence on $N$, instead, suggests that some long range correlations may develop in the states into which the dynamics falls for $\rho<\rho_{alg}$ (this is discussed in the next section).
\section{Looking at the freezing}
As already mentioned, it has been conjectured for other optimization problems that the threshold for the appearance of hardness in polynomial time algorithms corresponds to the freezing threshold, that is the lowest density such that all clusters are frozen. We want to check this conjecture in the present problem. For practical purposes, let us define a \emph{cluster} as the set of solutions (i.e.\ valid ISs) that are ``connected'' via paths where each step is the flip of just two variables. A cluster of solutions is frozen if it contains frozen variables, that is, if there is at least one variable fixed to a given value in all the configurations of the cluster. Above the rigidity threshold almost all the dominant clusters are frozen (but clusters with larger internal entropy might not be frozen). The freezing threshold corresponds to the density at which each cluster of solutions is frozen. In this section we study the \emph{escape time}, $t_{esc}$, which is the time needed by an algorithm that moves only between solutions to go away from the initial configuration. More precisely, we first find a solution with a given algorithm, then we apply the $\beta$MC algorithm at $\beta=\infty$ (that is a kind of diffusive dynamics at fixed zero energy and fixed size of the IS) and we measure the time needed to ``free'' each variable from its starting value, that is, to find that variable in a value different from the starting one. Looking at Fig.~\ref{Fig:T_esc}, the first important observation is that all the analyzed algorithms show the same $t_{esc}$ at a fixed density of the IS. This means that they all find the same kind of solutions (when they can find one).
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{T_esc} \caption{\label{Fig:T_esc} Escape time from the solution reached by different algorithms ($d=20$, $N=5\cdot10^4$).} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{n_freezed_d20}\hfill \includegraphics[width=0.47\textwidth]{n_freezed_d100} \caption{Starting from an IS of density $\rho$ found via the $\beta$PT algorithm, we measure the fraction of variables that have not changed their value during a pure diffusive dynamics ($\beta$MC algorithm with $\beta=\infty$). Results are for $d=20$ (left), $d=100$ (right) and a single sample of the size indicated in the legend. } \label{Fig:n_freezed_d20} \end{figure}
The escape time diverges as a power law at a threshold density $\rho_r$ (see the fits in Fig.~\ref{Fig:T_esc}). From the data we estimate $\rho_r(d=20)=0.1890(6)$ and $\rho_r(d=100)=0.0639(2)$. The observation that the same threshold holds for different kinds of algorithms leads us to conjecture that $\rho_r$ does actually correspond to the rigidity threshold, that is, the density where the typical clusters become frozen and the escape time from them thus diverges. The values of $\rho_r$ are compatible with the thresholds for the $\beta$MC algorithm, while $\beta$PT and $\mu$PT can find solutions of densities greater than $\rho_r$. At this point, it is natural to check whether the solutions found by the PT algorithms at densities larger than $\rho_r$ are frozen or not.
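To make the measurement protocol concrete, the following minimal Python sketch implements the $\beta=\infty$ diffusive dynamics and the persistence (the fraction of variables still holding their initial value); as before, the code and its names are only illustrative:
\begin{verbatim}
import random

def diffusive_sweep(adj, occupied, empty, flipped, rng=random):
    # beta = infinity dynamics: a particle move is accepted only if the
    # configuration remains an IS; 'flipped' collects vertices that changed
    # their value at least once (the first time defines their escape time).
    for _ in range(len(occupied)):
        src = rng.choice(tuple(occupied))
        dst = rng.choice(tuple(empty))
        if all((w == src) or (w not in occupied) for w in adj[dst]):
            occupied.remove(src); occupied.add(dst)
            empty.remove(dst); empty.add(src)
            flipped.update((src, dst))

def persistence(flipped, n_vertices):
    # Fraction of variables that never changed their value during the dynamics.
    return 1.0 - len(flipped) / n_vertices
\end{verbatim}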
To answer this question, we find a solution at density $\rho>\rho_r$ with the $\beta$PT algorithm, then we run the $\beta$MC algorithm at $\beta=\infty$ (the diffusive algorithm) and we look at the persistence, that is, the fraction of variables that have not changed during the diffusive dynamics. The results are shown in Fig.~\ref{Fig:n_freezed_d20} for a single sample: the fraction of frozen variables seems to decrease in an extremely slow way, mostly logarithmically in time, with evident jumps (corresponding to avalanches of variables that are set free altogether). It is worth noticing that the slowness of the diffusive dynamics around the initial solution found by PT is only due to entropic effects, given that the diffusive dynamics keeps the energy constant. In Fig.~\ref{Fig:n_freezed_d20} we also notice some interesting finite-size effects. For the largest sizes, the diffusive dynamics eventually makes every variable unfrozen, although the escape time is some orders of magnitude larger than the time needed by PT to reach that particular solution (suggesting that PT follows a smart path that is not affected by entropic barriers!). For smaller sizes, frozen variables persist longer and eventually we observe that the diffusive dynamics is not able to leave the cluster: the fraction of frozen variables becomes constant in time. This is strong evidence that the ISs found by PT for small enough $N$ belong to frozen clusters (a similar phenomenon has been observed also in other models when solved for example via the Reinforcement algorithm \cite{zdeborova2009statistical}). The above observations support the following scenario: the PT algorithm is able to find ISs beyond the rigidity threshold $\rho_r$, and in this rigid phase, for small enough sizes, there is a non-zero probability that PT finds a solution in a rare frozen cluster. However, for large $N$, the solutions found by PT seem to be all unfrozen and thus we deduce that the PT algorithmic threshold is bounded above by the freezing threshold. We are strongly tempted to conjecture that the two thresholds, $\rho_{alg}$ for PT and $\rho_f$, do actually coincide, but we do not have firm arguments in support. We have also checked that the solutions found by the PT algorithms above $\rho_d$ are not equilibrium solutions. To do this, we find a solution at $\rho>\rho_d$ with the $\beta$PT or $\mu$PT algorithms. We then initialize BP on that solution and we check whether BP converges to a fixed point close to the solution found by PT. If so, this means that the PT solution lies inside one of the states (and replica symmetry holds within a state) that form the 1RSB structure that characterises the equilibrium measure for densities slightly above $\rho_d$. However, we find that BP does not converge (neither to the paramagnetic fixed point nor to a fixed point close to the PT solution). This lack of convergence suggests that the solution found by PT is probably inside a state that is not replica symmetric but rather FRSB, as found in other models \cite{zdeborova2010generalization}. Indeed, it is well known that states reached by the out-of-equilibrium dynamics may be FRSB even when equilibrium states are 1RSB \cite{montanari2003nature,montanari2004cooling}.
\section{Zero temperature algorithms} \label{Sec:T0alg}
In this section, we analyze a different class of algorithms, the ones running directly at zero temperature. This means that links inside $\mathcal{I}$ are not allowed, i.e.\ the algorithm always works with a valid IS.
For this class of algorithms, the varying parameter is the chemical potential $\mu$, which changes the average density of the IS. The limit $\mu\to\infty$ should correspond to the largest possible IS. First of all, we run $\mu$SA. It is a common belief that a slow enough SA should reach the bottom of the equilibrium states at $\rho_d$. The algorithmic thresholds, computed as the average over $100$ samples of the maximum density $\rho$ reached when $\mu\to\infty$ in an SA with $\Delta\mu=10^{-7}$ and $N=5\cdot 10^4$, are reported in Table \ref{Tab:rho_max}. As one can notice, for $d=20$ the inequalities $\rho_{alg}>\rho_c>\rho_d$ hold, implying that the states that dominate the measure at $\rho_d$ can be followed deeply beyond $\rho_c$. This is compatible with the fact that at $d = 20$ the transition is still close to a continuous FRSB one and thus the ergodicity breaking is less pronounced. For $d=100$, instead, $\rho_{alg}<\rho_c$, consistent with the fact that the transition is distinctly 1RSB and ergodicity breaking takes place in a much more marked way. We then move on to analyze the $\mu$RSA algorithm. We take inspiration from Ref.~\cite{baldassi2016unreasonable}, where a replicated version of the SA is proposed to sample with higher probability states with larger entropy. In the context of ISs one can identify a state in the following way: starting from a maximal IS, that is an IS that cannot be increased any further by just adding vertices to the IS itself, and considering this maximal IS as the ``bottom of a valley'' in a usual energy landscape, one can build a state as the set of the ISs which are subsets of the maximal one (the construction has to be refined when one finds ISs which are subsets of more than one maximal IS, but we do not need such a detailed description for the present argument). According to this construction, it is likely that states corresponding to the largest IS are also those of largest entropy. So the use of an algorithm that favours states according to their entropy is likely to be beneficial also in the search for the largest IS.
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{SA} \caption{\label{Fig:SA} Comparison between $\mu$SA and $\mu$RSA for 50 samples of size $N=5\cdot 10^4$ and $d=100$ (parameters are $\Delta\mu=10^{-7}$, $R=3$ and $\gamma=1$).} \end{figure}
We run $\mu$RSA with parameters $R=3$, $\gamma=1$. In Fig.~\ref{Fig:SA} its performances are compared with those of $\mu$SA in the case $d=100$ (their algorithmic thresholds can be found in Table~\ref{Tab:rho_max}). It is remarkable that the improvement of RSA with respect to SA is practically null for $d=20$ and very tiny for $d=100$. While for $d=20$ one may claim that the improvement is absent because the model has a very weakly discontinuous phase transition (the range where the phase transition is continuous is very close by), for $d=100$ the 1RSB scenario holds clearly, but we do not see any improvement by reweighting states according to their internal entropy. This observation raises some doubts about what RSA is actually doing and why it is not working as expected. Moreover, given that the performances of RSA are clearly worse than those of PT (see their algorithmic thresholds in Table~\ref{Tab:rho_max}), we arrive at the conclusion that there are more and less efficient ways to couple replicas.
\begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{WalkPTd20}\hfill \includegraphics[width=0.47\textwidth]{PTd20} \caption{\label{Fig:scalingNd20} Number of iterations to find a solution for $d=20$ and $\rho=0.19,0.192$ as a function of the problem size $N$ for the optimized $\beta$PT (left) and the $\mu$PT (right) algorithms. The behaviour of the two algorithms is very similar.} \end{figure} \begin{table}[t] \begin{center}\begin{tabular}{|l ||l|l||l| l |} \hline &$d=20$ $\rho=0.190$ & $d=20$ $\rho=0.192$ & $d=100$ $\rho=0.064$ & $d=100$ $\rho=0.0646$ \\ \hline $\beta$PT & 0.339(8) & 0.69(7) & 0.357(7) & 0.42(2) \\ \hline $\mu$PT & 0.40(1) & 0.68(1) & 0.336(8) & 0.44(3) \\ \hline \end{tabular} \caption{The convergence time in Fig.~\ref{Fig:scalingNd20} diverges as $\tau(N)=a\cdot N^b$. The table compares the values of the exponent $b$ for $\beta$PT and $\mu$PT.} \label{Tab:b_comparison} \end{center} \end{table} We now move to the analysis of the $\mu$PT algorithm. We use $N_\mu=21$ replicas evenly spaced by $\Delta\mu=0.2$ in the range $\mu\in[2,6]$ for $d=20$ and $N_\mu=31$ replicas evenly spaced by $\Delta\mu=0.15$ in the range $\mu\in[2,6.5]$ for $d=100$. We have already anticipated in Sec.~\ref{Sec:betaalg} that the behavior of $\mu$PT is equivalent to that of $\beta$PT and, in particular, that the algorithmic thresholds of the two algorithms are compatible. In Fig.~\ref{Fig:scalingNd20} we show that also the scaling of their running times with $N$ is similar, as the time needed to reach a solution of a given density scales as $\tau=a N^b$. In Table \ref{Tab:b_comparison} we compare the exponents $b$ of the two algorithms at the same values of $d$ and $\rho$. These data confirm that the PT algorithm is a very robust one. \section{Comparison with advanced Message Passing Algorithms} \label{Sec:BPR} We have seen that Monte Carlo based algorithms easily outperform greedy algorithms and can reach densities well above the dynamical threshold $\rho_d$, passing also the rigidity threshold $\rho_r$ and, for $d=20$, even going beyond the condensation threshold $\rho_c$, thus closely approaching the maximum density $\rho_{max}$. This looks like a great result, but in order to put it in the right perspective, we need a comparison with some other algorithm that is expected to work efficiently on this kind of optimization problem. Since the problem is defined on a random graph, we expect message passing algorithms to be particularly well suited. For this reason, we have also run BPR on this problem. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{Reinf20} \includegraphics[width=0.47\textwidth]{Reinf100} \caption{\label{Fig:Reinf} Average density of the ISs found by BP+Reinforcement as a function of the chemical potential $\mu$ at different values of $N$ and of the BPR parameters.} \end{figure} In Fig.~\ref{Fig:Reinf} we show the average density of ISs found by the BPR algorithm, as a function of the chemical potential $\mu$, for different values of the BPR parameters. Let us just mention that below a certain chemical potential $\mu_L$, the solutions found by the BPR algorithm are always $n_i=0$, $\forall i$. The value of $\mu_L$ is the one that generates, using the RS solution of the model from \cite{barbier2013hard}, a density $\rho_L$ that roughly corresponds to the threshold density for the random vertex GA (for both $d=20$ and $d=100$ we have $\mu_L=2.15(5)$). 
We have run the BPR algorithm in a broad range of chemical potentials and for different choices of the BPR parameters. The best results have been obtained with the choice $\gamma=0.999$ and $dt=10$. The maximum density reached can be deduced from the data shown in Fig.~\ref{Fig:Reinf} and it is clearly lower than the thresholds for the PT algorithms. Our best estimates are reported in Table~\ref{Tab:rho_max}. We notice that the threshold density for the BPR algorithm is very similar to that of the RSA algorithm, and this is expected from Ref.~\cite{baldassi2016unreasonable}. \section{Conclusions} We have carried out a comparative study of algorithms to find the largest IS in RRGs of degree $d=20$ and $d=100$. Our aim was to understand the actual performances of different kinds of algorithms (greedy, message passing and, especially, Monte Carlo based), and to connect their algorithmic thresholds with thermodynamical phase transitions. For both values of $d$ the set of ISs undergoes an RFOT when varying the IS density $\rho$; however, for $d=20$ the transition is weakly discontinuous because of the vicinity to the range where the transition is continuous ($d<16$), while for $d=100$ the transition is markedly discontinuous, as in the large degree limit. While Table~\ref{Tab:rho_max} summarizes thermodynamical and algorithmic thresholds, we list below the most relevant conclusions that we have reached: \begin{itemize} \item Only greedy algorithms get stuck below the dynamical threshold, while all the other algorithms easily pass beyond $\rho_d$; the relevance of the dynamical threshold for smart optimization algorithms seems very limited. \item The condensation threshold at $\rho_c$ also seems to play no role at all in describing the performances of the best optimization algorithms. \item The simplest versions of Monte Carlo algorithms seem to work roughly up to the rigidity threshold at $\rho_r$, defined as the density where the time to diffuse away from a typical IS diverges. \item More sophisticated Monte Carlo schemes (SA and PT) find ISs beyond $\rho_r$, but without frozen variables, thus showing the ability to find ISs in atypical unfrozen states. \item Replicated SA does not show any appreciable improvement over standard SA for this problem, especially for $d=20$. \item Belief Propagation with Reinforcement has an algorithmic threshold similar to that of Replicated SA. \item Parallel Tempering is by far the best algorithm for solving this problem and can find ISs of a very large density that no other algorithm can find. \item Different versions of PT (in temperature and chemical potential) show almost the same algorithmic threshold, and this strongly suggests a universal behavior linked to an underlying phase transition. We conjecture the PT algorithmic threshold to coincide with the freezing threshold, i.e.\ PT is able to find an unfrozen IS as long as there is one. \item Running times of PT are super-linear, but still polynomial in $N$. Algorithmic thresholds for super-linear algorithms are likely to be larger than those for linear algorithms, but a theory for the former is completely lacking. \end{itemize} Our results clearly show the need for a theory of advanced Monte Carlo algorithms, like Parallel Tempering, which is at present lacking. Only by understanding this class of algorithms analytically can we hope to approach the ultimate algorithmic threshold for a broad class of hard optimization problems. 
\begin{acknowledgments} This research has been supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant No.~694925 -- Lotglassy, G. Parisi). \end{acknowledgments}
\subsection{Remaining potential and Good-Turing estimator} A good algorithm for OIMP should aim at selecting the influencer $k$ with the largest potential for influencing its children $A_k$. However, the true potential value of an influencer is \emph{a priori} unknown to the decision maker. In the following, we index trials by $t$ when referring to the time of the algorithm, and we index trials by $n$ when referring to the number of selections of the influencer. For example, the $t$-th spread initiated by the algorithm is noted $S(t)$ whereas the $n$-th spread of influencer $k$ is noted $S_{k,n}$. \begin{definition}[Remaining potential $R_{k}(t)$\label{def:mm}] Consider an influencer $k \in [K]$ connected to $A_k$ basic nodes. Let $S(1), \ldots, S(t)$ be the set of nodes that were activated during the first $t$ trials by the seeded influencers. The \emph{remaining potential} $R_k(t)$ is the expected number of \emph{new} nodes that would be activated upon starting the $t+1$-th cascade from $k$: \[ R_{k}(t) := \sum_{u \in A_k} \mathds{1}\left\{u \notin \bigcup_{i=1}^{t} S(i)\right\} p_k(u), \] where $\mathds{1}\{\cdot\}$ denotes the indicator function. \end{definition} Definition~\ref{def:mm} provides a formal way to obtain the remaining potential of an influencer $k$ at a given time. The optimal policy would simply select the influencer with the largest remaining potential at each time step. The difficulty is, however, that the probabilities $p_k(u)$ are unknown. Hence, we have to design a \emph{remaining potential estimator} $\hat{R}_{k}(t)$ instead. It is important to stress that the remaining potential is a random quantity, because of the dependency on the spreads $S(1), \dots, S(t)$. Furthermore, due to the diminishing returns property, the sequence $(S_{k,n})_{n\geq 1}$ is stochastically decreasing. Following ideas from~\cite{good53, bubeck13}, we now introduce a version of the Good-Turing statistic, tailored to our problem of rapidly estimating the remaining potential. Denoting by $n_k(t)$ the number of times influencer $k$ has been selected after $t$ trials, we let $S_{k,1}, \ldots, S_{k,n_k(t)}$ be the $n_k(t)$ cascades sampled independently from influencer $k$. We denote by $U_k(u, t)$ the binary function whose value is $1$ if node $u$ has been activated \emph{exactly} once by influencer $k$ -- such occurrences are called \emph{hapaxes} in linguistics -- and $Z_k(u, t)$ the binary function whose value is $1$ if node $u$ has never been activated by influencer $k$. The idea of the Good-Turing estimator is to estimate the remaining potential as the proportion of hapaxes in the $n_k(t)$ sampled cascades, as follows: \[ \hat{R}_k(t) := \frac{1}{n_k(t)} \sum_{u \in A_k}U_k(u, t) \prod_{l \neq k} Z_l(u, t). \] Albeit simple, this estimator turns out to be quite effective in practice. If an influencer is connected to a combination of both nodes having high activation probabilities and nodes having low activation probabilities, then successive traces sampled from this influencer will result in multiple activations of the high-probability nodes and few of the low-probability ones. Hence, after observing a few spreads, the influencer's potential will be low, a fact that will be captured by the low proportion of hapaxes. In contrast, estimators that try to estimate each activation probability independently will require a much larger number of trials to properly estimate the influencer's potential. 
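As an illustration of how $\hat{R}_k(t)$ can be obtained in practice from the observed cascades, we give below a short Python sketch (the data layout and function name are ours and purely illustrative): for each influencer it counts the nodes activated exactly once by that influencer and never activated by any other influencer, and divides by the number of cascades.
\begin{verbatim}
from collections import Counter

def good_turing_estimates(spreads_by_influencer):
    """spreads_by_influencer: dict k -> list of observed spreads, each spread
    being the set of node ids activated in one cascade seeded at influencer k.
    Returns dict k -> hat{R}_k, the Good-Turing remaining potential estimate."""
    counts = {k: Counter() for k in spreads_by_influencer}
    for k, spreads in spreads_by_influencer.items():
        for spread in spreads:
            counts[k].update(spread)        # activation counts per node
    estimates = {}
    for k, spreads in spreads_by_influencer.items():
        n_k = len(spreads)                  # assumed >= 1 (initialization step)
        hapaxes = 0
        for u, c in counts[k].items():
            # u is a hapax for k: activated exactly once by k ...
            if c == 1 and all(counts[l][u] == 0 for l in counts if l != k):
                hapaxes += 1                # ... and never by any other influencer
        estimates[k] = hapaxes / n_k
    return estimates
\end{verbatim}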
To verify this assumption in reality, we conducted an analysis of the empirical activation probabilities from a Twitter dataset. Specifically, we used a collection of tweets and re-tweets gathered via crawling in August 2012. For each original tweet, we find all corresponding retweets, and, for each user, we compute the empirical probability of a retweet occurring -- this, in our case, is a proxy measure for influence probability. Specifically, for every user $v$ ``influenced'' by $u$ -- i.e., $v$ retweeted at least one original tweet from $u$ -- we compute the estimated diffusion probability: $p_{u,v} = \left|\text{$u$'s tweets retweeted by $v$}\right| / \left|\text{tweets by $u$}\right|$. In Fig.~\ref{fig:histogram} (left), we show the survival function of the resulting empirical probabilities in a log-log plot. We can see that most probabilities are small -- the 9th decile has value $0.045$. In Fig.~\ref{fig:histogram} (right), we show a sample of $50$ nodes whose activation probabilities are chosen randomly from the Twitter empirical probabilities. Most of the sampled values are low, except for a few relatively high ones. Using this sample as the activation probabilities of a hypothetical influencer node, we observe in Fig.~\ref{fig:simtwitter} (left) the cumulative influence spread. The curve first shows a steep increase until approximately $20$ rounds, after which users with high probabilities of conversion have already been activated, while the remaining ones are difficult to activate. \begin{figure}[t] \centering \includegraphics[height=4.5cm]{figures/twitter_analysis/loglog-survival.pdf} \includegraphics[height=4.5cm]{figures/twitter_analysis/sample50.pdf} \caption{(left) Twitter empirical retweet probabilities. (right) Sample of $50$ empirical retweet probabilities. \label{fig:histogram}} \end{figure} In Fig.~\ref{fig:simtwitter} (right), we compare the Good-Turing estimator to a Bayesian estimator that maintains a posterior (through a Beta distribution) on the unknown activation probabilities, updating the posterior after each trial, similarly to \cite{lei15}. In the Bayesian approach, the remaining potential can be estimated by summing over the means of the posterior distributions corresponding to nodes that have not been activated so far. In Fig.~\ref{fig:simtwitter} (right), the curves are averaged over $200$ runs, and the shaded regions correspond to the $95\%$ quantiles. Clearly, the Good-Turing estimator is much faster than its Bayesian counterpart in estimating the actual remaining potential. Varying the number of nodes --~here equal to 50~-- shows that the time needed for the Bayesian estimator to provide a reliable estimate of the remaining potential is proportional to the number of nodes, whereas it grows only sub-linearly for the Good-Turing estimator. \begin{figure}[t] \centering \includegraphics[height=4.5cm]{figures/twitter_analysis/conversions.pdf} \includegraphics[height=4.96cm]{figures/twitter_analysis/estimators.pdf} \caption{(left) Influence spread against number of rounds. (right) Bayesian estimator against Good-Turing estimator.\label{fig:simtwitter}} \end{figure} \paragraph*{Remark} While bearing similarities with the traditional missing mass concept, we highlight one fundamental difference between the remaining potential and the traditional missing mass studied in~\cite{bubeck13}, which impacts both the algorithmic solution and the analysis. 
Since at each step, after selecting an influencer, \emph{every} node connected to that influencer is sampled, the algorithm receives richer feedback than in~\cite{bubeck13}, where the feedback lies in $[0,1]$. However, contrary to~\cite{bubeck13}, the hapaxes of an influencer $(U_k(u, t))_{u \in A_k}$ are independent. Interestingly, the quantity $\lambda_k := \sum_{u \in A_k} p_k(u)$, which corresponds to the expected number of basic nodes an influencer activates or re-activates in a cascade, will prove to be a crucial ingredient for our problem. \subsection{Upper confidence bounds} Following principles from the bandit literature, the \textsc{GT-UCB}\ algorithm relies on \emph{optimism in the face of uncertainty}. At each step (trial) $t$, the algorithm computes an upper confidence bound on the remaining potential -- denoted by $b_k(t)$ -- and activates (plays) the influencer $k$ with the highest bound. This algorithm achieves robustness against the stochastic nature of the cascades, by ensuring that influencers who ``underperformed'' with respect to their potential in previous trials may still be selected later on. Consequently, \textsc{GT-UCB}\ aims to maintain a degree of \emph{exploration} of influencers, in addition to the \emph{exploitation} of the best influencers as per the feedback gathered so far. \begin{algorithm} \caption{ -- \textsc{GT-UCB}\ ($L = 1$)} \begin{algorithmic}[1]\small \REQUIRE{Set of influencers $[K]$, time budget $N$} \STATE{\textbf{Initialization:} play each influencer $k\in[K]$ once, observe the spread $S_{k,1}$, set $n_k=1$} \STATE{For each $k\in [K]$: update the reward $W=W\cup S_{k,1}$} \FOR{$t = K + 1, \ldots, N$}\label{alg:for} \STATE Compute $b_k(t)$ for every influencer $k$ \STATE Choose $k(t) = \argmax_{k \in [K]} b_k(t)$ \label{alg:optimism} \STATE Play influencer $k(t)$ and observe spread $S(t)$ \STATE Update cumulative reward: $W= W \cup S(t)$ \STATE Update statistics of influencer $k(t)$: $n_{k(t)}(t+1) = n_{k(t)}(t) + 1$ and $S_{k(t),n_{k(t)}(t+1)} = S(t)$. \ENDFOR \label{alg:endfor} \RETURN $W$ \end{algorithmic} \label{alg:gooducb} \end{algorithm} Algorithm~\ref{alg:gooducb} presents the main components of \textsc{GT-UCB}\ for the case $L=1$, that is, when a single influencer is chosen at each step. The algorithm starts by activating each influencer $k\in[K]$ once, in order to initialize its Good-Turing estimator. The main loop of \textsc{GT-UCB}\ occurs at lines \ref{alg:for}-\ref{alg:endfor}. Let $S(t)$ be the observed spread at trial $t$, and let $S_{k,s}$ be the result of the $s$-th diffusion initiated at influencer $k$. At every step $t > K$, we recompute for each influencer $k \in [K]$ its index $b_k(t)$, representing the upper confidence bound on the expected reward in the next trial. The computation of this index uses the previous samples $S_{k,1},\ldots,S_{k,n_k(t)}$ and the number of times each influencer $k$ has been activated up to trial $t$, $n_k(t)$. Based on the result of Theorem~\ref{th:confidence_bounds} --~whose statement and proof are deferred to Section~\ref{sec:analysis}~--, the upper confidence bound is set as: \begin{align}\label{eq:ucb} b_k(t) = \hat{R}_k(t) + \left(1+\sqrt{2}\right)\sqrt{\frac{\hat{\lambda}_k(t) \log(4t)}{n_k(t)}} + \frac{\log(4t)}{3n_k(t)}, \end{align} where $\hat{R}_k(t)$ is the Good-Turing estimator and $\hat{\lambda}_k(t) := \sum_{s=1}^{n_k(t)} \frac{|S_{k,s}|}{n_k(t)}$ is an estimator for the expected spread from influencer $k$. 
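As an illustration, the index of Eq.~(\ref{eq:ucb}) is straightforward to compute from the quantities maintained by Algorithm~\ref{alg:gooducb}; the short Python sketch below (function names are ours, for illustration only) combines the Good-Turing estimate with the empirical mean spread $\hat{\lambda}_k(t)$:
\begin{verbatim}
import math

def ucb_index(gt_estimate, spread_sizes, t):
    """b_k(t) as in Eq. (eq:ucb).

    gt_estimate  : hat{R}_k(t), e.g. computed as in good_turing_estimates()
    spread_sizes : [|S_{k,1}|, ..., |S_{k,n_k(t)}|] for influencer k
    t            : current trial number
    """
    n_k = len(spread_sizes)
    lambda_hat = sum(spread_sizes) / n_k               # hat{lambda}_k(t)
    exploration = (1 + math.sqrt(2)) * math.sqrt(lambda_hat * math.log(4 * t) / n_k)
    return gt_estimate + exploration + math.log(4 * t) / (3 * n_k)

# at trial t, GT-UCB plays argmax_k ucb_index(...)
\end{verbatim}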
Then, in line~\ref{alg:optimism}, \textsc{GT-UCB}\ selects the influencer $k(t)$ with the largest index, and initiates a cascade from this node. The feedback $S(t)$ is observed and is used to update the cumulative reward set $W$. We stress again that $S(t)$ provides only the IDs of the nodes that were activated, with no information on \emph{how} this diffusion happened in the hidden diffusion medium. Finally, the statistics associated with the chosen influencer $k(t)$ are updated. \subsection{Extensions for the case $L>1$} Algorithm~\ref{alg:gooducb} can be easily adapted to select $L > 1$ influencers at each round. Instead of choosing the influencer maximizing the Good-Turing UCB in line~\ref{alg:optimism}, we can select those having the $L$ largest indices. Note that $k(t)$ then becomes a \emph{set} of $L$ influencers. A diffusion is initiated from the associated nodes and, at termination, all activations are observed. Similarly to~\cite{vaswani17}, the algorithm requires the feedback to include the influencer responsible for the activation of each node, in order to update the corresponding statistics accordingly. \subsection{Confidence interval for the remaining potential} In the following, to simplify the analysis and to allow for a comparison with the oracle strategy, we assume that the influencers have \emph{non-intersecting supports}. This means that each influencer's remaining potential and corresponding Good-Turing estimator do not depend on the other influencers. Hence, for notational convenience, we also omit the subscript denoting the influencer $k$. After selecting the influencer $n$ times, the Good-Turing estimator is simply written $\hat{R}_n = \sum_{u \in A} \frac{U_n(u)}{n}$. We note that the non-intersecting assumption is for theoretical purposes only -- our experiments are done with influencers that can have intersecting supports. The classic Good-Turing estimator is known to be slightly biased (see Theorem $1$ in~\cite{mcallester00} for example). We show in Lemma~\ref{lem:bias} that the bias of our remaining potential estimator involves an additional factor $\lambda = \sum_{u \in A} p(u)$: \begin{lemma}[]\label{lem:bias} The bias of the remaining potential estimator is \[ \mathbb{E}[R_n] - \mathbb{E}[\hat{R}_n] \in \left[-\frac{\lambda}{n},0\right]. \] \end{lemma} \begin{proof} \small \begin{align*} \mathbb{E}[R_n] - &\mathbb{E}[\hat{R}_n] = \sum_{u \in A} \left[p(u)(1 - p(u))^n - \frac{n}{n} p(u)(1 - p(u))^{n-1} \right] \\ &= - \frac{1}{n} \sum_{u \in A} p(u) \times np(u)(1 - p(u))^{n-1} \\ &= -\frac{1}{n} \mathbb{E}\left[\sum_{u \in A} p(u)U_n(u) \right] \in \left[-\frac{\sum_{u\in A}p(u)}{n}, 0\right]\qedhere \end{align*} \end{proof} Since $\lambda$ is typically very small compared to $|A|$, in expectation, the estimation should be relatively accurate. However, in order to understand what may happen in the worst case, we need to characterize the deviation of the Good-Turing estimator: \begin{theorem}\label{th:confidence_bounds} With probability at least $1 - \delta$, for $\lambda = \sum_{u \in A} p(u)$ and $\beta_n := \left(1 + \sqrt{2}\right) \sqrt{\frac{\lambda \log(4/\delta)}{n}} + \frac{1}{3n}\log\frac{4}{\delta}$, the following holds: \[ - \beta_n - \frac{\lambda}{n} \leq R_n - \hat{R}_n \leq \beta_n. \] \end{theorem} Note that the additional term appearing in the left deviation corresponds to the bias of our estimator, which leads to a non-symmetrical interval. 
\begin{proof} We prove the confidence interval in three steps: \begin{inparaenum}[(1)] \item Good-Turing estimator deviation, \item remaining potential deviation, \item combination of these two inequalities to obtain the final confidence interval. \end{inparaenum} Here, the child nodes are assumed to be sampled \emph{independently}, which is a simplification compared to the classic missing mass concentration results that rely on negatively associated samples~\cite{mcallester00,mcallester03}. On the other hand, since we may activate several nodes at once, we need original concentration arguments to control the increments of both $\hat{R}_n$ and $R_n$. \textbf{($1$)~Good-Turing deviations.} Let $X_n(u) := \frac{U_n(u)}{n}$. We have that \begin{align*} v &:= \sum_{u \in A}\mathbb{E}[X_n(u)^2] = \frac{1}{n^2} \sum_{u \in A} \mathbb{E}[U_n(u)] \leq \frac{\lambda}{n}. \end{align*} Moreover, clearly the following holds: $X_n(u) \leq \frac{1}{n}$. Applying Bennett's inequality (Theorems~2.9,~2.10 in~\cite{boucheron13}) to the independent random variables $\{X_n(u)\}_{u \in A}$ yields \begin{align}\label{eq:gtdeviation} \mathbb{P}\left(\hat{R}_n - \mathbb{E}[\hat{R}_n] \geq \sqrt{\frac{2\lambda\log(1/\delta)}{n}} + \frac{\log(1/\delta)}{3n}\right) \leq \delta. \end{align} The same inequality can be derived for left deviations. \textbf{($2$)~Remaining potential deviations.} Remember that $Z_n(u)$ denotes the indicator equal to $1$ if $u$ has never been activated up to trial $n$. We can rewrite the remaining potential as $R_n = \sum_{u \in A} Z_n(u) p(u).$ Let $Y_n(u) = p(u)(Z_n(u) - \mathbb{E}[Z_n(u)])$ and $q(u) = \mathbb{P}(Z_n(u) = 1) = (1 - p(u))^n$. For some $t > 0$, we have that \begin{align*} \mathbb{P}(&R_n - \mathbb{E}[R_n] \geq \epsilon) \leq e^{-t \epsilon} \prod_{u \in A} \mathbb{E}\left[e^{t Y_n(u)}\right] \\ &= e^{-t\epsilon} \prod_{u\in A} \left(q(u)e^{t p(u)(1 - q(u))} + (1 - q(u))e^{-t p(u) q(u)}\right) \\ &\leq e^{-t\epsilon} \prod_{u\in A} \exp(p(u)t^2/(4n)) = \exp\left(-t\epsilon + t^2/(4n) \lambda \right). \end{align*} The first inequality is standard in exponential concentration bounds and relies on Markov's inequality. The second inequality follows from~\cite{berend13} (Lemma~3.5). Then, choosing $t = \frac{2n\epsilon}{\lambda}$, we obtain \begin{align}\label{eq:mmdeviation} \mathbb{P}\left(R_n - \mathbb{E}[R_n] \geq \sqrt{\frac{\lambda\log(1/\delta)}{n}}\right) \leq \delta. \end{align} We can proceed similarly to obtain the left deviation. \textbf{(3) Putting it all together.} We combine Lemma~\ref{lem:bias} with Eqs.~(\ref{eq:gtdeviation}) and (\ref{eq:mmdeviation}) to obtain the final result. Note that $\delta$ is replaced by $\frac{\delta}{4}$ to ensure that both the left and right bounds for the Good-Turing estimator and the remaining potential are verified. \end{proof} \subsection{Theoretical guarantees} We now provide an analysis of the \emph{waiting time} (defined below) of \textsc{GT-UCB}, by comparing it to the waiting time of an oracle policy, following ideas from~\cite{bubeck13}. Let $R_k(t)$ be the remaining potential of influencer $k$ at trial number $t$. This differs from $R_{k,n}$, which is the remaining potential of influencer $k$ once \emph{it} has been played $n$ times. \begin{definition}[Waiting time] Let $\lambda_k = \sum_{u\in A_k} p(u)$ denote the expected number of activations obtained by the first call to influencer $k$. 
For $\alpha \in (0,1)$, the \emph{waiting time} $T_{UCB}(\alpha)$ of \textsc{GT-UCB}\ represents the first round at which the remaining potential of \emph{each} influencer $k$ is smaller than $\alpha \lambda_k$. Formally, \[ T_{UCB}(\alpha) := \min \{t : \forall k \in [K], R_k(t) \leq \alpha\lambda_k\}. \] \end{definition} The above definition can be applied to any strategy for influencer selection and, in particular, to an oracle one that knows beforehand the $\alpha$ value that is targeted, the spreads $(S_{k,s})_{k\in[K], 1\leq s \leq t}$ sampled up to the current time, and the individual activation probabilities $p_k(u), u \in A_k$. A policy having access to all these aspects will perform the fewest possible activations on each influencer. We denote by $T^*(\alpha)$ the waiting time of the oracle policy. We are now ready to state the main theoretical property of the \textsc{GT-UCB}\ algorithm. \begin{theorem}[Waiting time]\label{th:waitingtime} Let $\lambda^{\text{min}} := \min_{k \in [K]} \lambda_k$ and let $\lambda^{\text{max}} := \max_{k \in [K]} \lambda_k$. Assuming that $\lambda^{\text{min}} \geq 13$, for any $\alpha \in \left[\frac{13}{\lambda^\text{min}}, 1\right]$, if we define $\tau^* := T^*\left(\alpha - \frac{13}{\lambda^{\text{min}}}\right)$, with probability at least $1 - \frac{2K}{\lambda^{\text{max}}}$ the following holds: \begin{align*} T_{\text{UCB}}(\alpha) \leq \tau^* + K\lambda^{\text{max}} \log(4\tau^* + 11K\lambda^{\text{max}}) + 2K. \end{align*} \end{theorem} The proof of this result is given in Appendix~\ref{app:wtanalysis}. Unsurprisingly, Theorem~\ref{th:waitingtime} says that \textsc{GT-UCB}\ must perform slightly more activations of the influencers than the oracle policy. With high probability -- assuming that the best influencer has an initial remaining potential that is much larger than the number of influencers -- the waiting time of \textsc{GT-UCB}\ is comparable to $T^*(\alpha')$, up to a factor that is only logarithmic in the waiting time of the oracle strategy. $\alpha'$ is smaller than $\alpha$ --~hence $T^*(\alpha')$ is larger than $T^*(\alpha)$~-- by an offset that is inversely proportional to the initial remaining potential of the worst influencer. This essentially says that, if we deal with large graphs, and if the influencers trigger reasonably large spreads, our algorithm is competitive with the oracle. \section{Useful Lemmas} \begin{lemma}[Bennett's inequality (Theorems 2.9 and 2.10 in \cite{boucheron13})] \label{lem:bennett} Let $X_1,\ldots,X_n$ be independent random variables with finite variance such that $X_i \leq b$ for some $b > 0$ for all $i \leq n$. Let $S := \sum_{i=1}^n \left(X_i - \mathbb{E}[X_i]\right)$ and $v := \sum_{i=1}^n \mathbb{E}[X_i^2]$. Writing $\phi(u) = e^u - u - 1$, then for all $t > 0$, \begin{align*} \log \mathbb{E}\left[e^{tS}\right] \leq \frac{v}{b^2} \phi(bt) \leq \frac{vt^2}{2(1 - bt/3)}. \end{align*} This implies that $\mathbb{P}\left(S > \sqrt{2v\log \nicefrac{1}{\delta}} + \frac{b}{3} \log \nicefrac{1}{\delta} \right) \leq \delta$. \end{lemma} \begin{lemma}[Lemma 7 -- \cite{berend13}]\label{lem:berend} Let $n \geq 1$, $\lambda \geq 0$, $p \in [0,1]$ and $q = (1 - p)^n$. 
Then, \begin{align} qe^{\lambda p(1-q)} + (1-q)e^{-\lambda pq} \leq \exp(p\lambda^2/(4n))\label{eq:berend1}\\ qe^{\lambda p(q-1)} + (1-q)e^{\lambda pq} \leq \exp(p\lambda^2/(4n))\label{eq:berend2} \end{align} \end{lemma} \section{Analysis of the Waiting Time of the \textsc{GT-UCB}\ Algorithm} \label{app:wtanalysis} \begin{lemma}\label{lem:evolutionestmm} For any $s \geq 3$, $\mathbb{P}\left(\hat{R}_{s} \leq \hat{R}_{s-1} - \frac{\lambda}{e(s-2)} - \sqrt{\frac{2\lambda}{s-1}\log(1/\delta)} - \frac{1}{3(s-1)}\log(1/\delta)\right) \leq \delta.$ \end{lemma} \begin{proof} Denote by $X_s(x) := \frac{U_{s-1}(x)}{s-1} - \frac{U_{s}(x)}{s} \leq \frac{1}{s-1}$. We can rewrite $\hat{R}_{s-1} - \hat{R}_{s} = \sum_{x \in A} X_s(x)$ and can easily verify that \begin{align} v(x) := \mathbb{E}\left[X_s(x)^2\right] = p(x)(1 - p(x))^{s-2} \left(\frac{1}{s-1} - \frac{1 - p(x)}{s}\right) \leq \frac{p(x)}{s-1}. \label{eq:mmvx} \end{align} Let $t > 0$. By applying Lemma~\ref{lem:bennett}, one obtains \begin{align*} \mathbb{P}\left(\hat{R}_{s-1} - \hat{R}_{s} \geq \mathbb{E}\left[\hat{R}_{s-1} - \hat{R}_{s}\right] + \sqrt{\frac{2\lambda}{s - 1}\log (1/\delta)} + \frac{1}{3(s - 1)}\log (1/\delta)\right) \leq \delta. \end{align*} We conclude by remarking that $\mathbb{E}[X_s(x)] = p(x)^2(1-p(x))^{s-2} \leq \frac{p(x)}{e(s-2)}$, that is, $\mathbb{E}[\hat{R}_{s-1} - \hat{R}_{s}] \leq \frac{\lambda}{e(s-2)}$. \end{proof} \begin{theorem}[Waiting time] Denote $\lambda^{\text{min}} := \min_{k \in [K]} \lambda_k$ and $\lambda^{\text{max}} := \max_{k \in [K]} \lambda_k$. Assume that $\lambda^{\text{min}} \geq 13$. Then, for any $\alpha \in \left[\frac{13}{\lambda^{\text{min}}}, 1\right]$, if we define $\tau^* := T^*\left(\alpha - \frac{13}{\lambda^{\text{min}}}\right)$, with probability at least $1 - \frac{2K}{\lambda^{\text{max}}}$, \begin{align*} T_{\text{UCB}}(\alpha) \leq \tau^* + K\lambda^{\text{max}} \log(4\tau^* + 11K\lambda^{\text{max}}) + 2K. \end{align*} \end{theorem} \begin{proof} Let us define the following confidence bounds: \begin{align*} b^+_{k,s}(t) &:= (1 + \sqrt{2})\sqrt{\frac{3\lambda_k\log(2t)}{s}} + \frac{\log(2t)}{s}, \\ b^-_{k,s}(t) &:= (1 + \sqrt{2})\sqrt{\frac{3\lambda_k\log(2t)}{s}} + \frac{\log(2t)}{s} + \frac{\lambda_k}{s}\text{, and} \\ c^-_{k,s}(t) &:= \frac{\lambda_k}{e(s-2)} + \sqrt{\frac{6\lambda_k\log(t)}{s-1}} + \frac{\log(t)}{s - 1}. \end{align*} Let $S > 0$. Using these definitions, we introduce the following events: \begin{align*} \mathcal{F} &:= \left\{\forall k \in [K], \forall t > S, \forall s \leq t, \hat{R}_{k,s} - b^-_{k,s}(t) \leq R_{k,s} \leq \hat{R}_{k,s} + b^+_{k,s}(t) \right\}, \\ \mathcal{G} &:= \left\{\forall k \in [K], \forall s \geq S, \hat{R}_{k,s} \geq \hat{R}_{k,s-1} - c^-_{k,s}(t) \right\}, \\ \mathcal{E} &:= \mathcal{F} \cap \mathcal{G}. \end{align*} Using Theorem \ref{th:confidence_bounds}, Lemma \ref{lem:evolutionestmm} and a union bound, one obtains $\mathbb{P}(\mathcal{E}) \geq 1 - \frac{2K}{S}$ (by setting $\delta \equiv \frac{1}{t^3}$). Indeed, \begin{align*} \mathbb{P}\left(\bar{\mathcal{E}}\right) \leq \mathbb{P}(\bar{\mathcal{F}}) + \mathbb{P}(\bar{\mathcal{G}}) \leq 2 \sum_{k=1}^K \sum_{t>S} \sum_{s \leq t} \frac{1}{t^3} = 2K \sum_{t > S} \frac{1}{t^2} \leq \frac{2K}{S}. \end{align*} In the following, we work on the event $\mathcal{E}$. Recall that we want to control $T_{UCB}(\alpha)$, the time at which every influencer $k$ attains a remaining potential smaller than $\alpha\lambda_k$ when following the \textsc{GT-UCB}\ strategy. 
We aim at comparing $T_{UCB}(\alpha)$ to $T^*(\alpha)$, the same quantity for the omniscient strategy. With that in mind, one can write: \begin{align*} &T_{UCB}(\alpha) = \min \left\{t : \forall k \in [K], R_{k,N_k(t)} \leq \alpha \lambda_k \right\}, \\ &T^*(\alpha) = \sum_{k = 1}^K T^*_k(\alpha) \text{, where } T^*_k(\alpha) = \min \left\{s : R_{k,s} \leq \alpha \lambda_k \right\}. \end{align*} Following ideas from \cite{bubeck13}, we can control $T_{UCB}(\alpha)$ by comparing it to $U(\alpha)$ defined below, which replaces the remaining potential by an upper bound on the \textit{estimator} of the remaining potential (the Good-Turing estimator). Indeed, recall that we can control this quantity on event $\mathcal{F}$. \[ U(\alpha) = \min\left\{t \geq 1 : \forall k \in [K], \hat{R}_{k, N_k(t)} + b^+_{k,N_k(t)}(t) \leq \alpha \lambda_k \right\}. \] Let $S'\geq S$. On event $\mathcal{E}$, one has that $T_{UCB}(\alpha) \leq \max(S',U(\alpha))$. If $U(\alpha) \geq S'$, one has \begin{align*} R_{k,N_k(U(\alpha))} &\geq \hat{R}_{k,N_k(U(\alpha))} - b^-_{k,N_k(U(\alpha))}(U(\alpha)) \tag*{(we are on event $\mathcal{F}$ and $U(\alpha) > S' \geq S$)} \\ &\geq \hat{R}_{k,N_k(U(\alpha)) - 1} - b^-_{k,N_k(U(\alpha))}(U(\alpha)) - c^-_{k,N_k(U(\alpha))}(U(\alpha)) \tag*{(we are on event $\mathcal{G}$)} \\ &\geq \left(\alpha \lambda_k - b^+_{k,N_k(U(\alpha))-1}(U(\alpha)) \right) - b^-_{k,N_k(U(\alpha))}(U(\alpha)) - c^-_{k,N_k(U(\alpha))}(U(\alpha)) \end{align*} The justification of the third inequality is more involved. Let $t$ be the time such that $N_k(t) = N_k(U(\alpha)) - 1$ and $N_k(t+1) = N_k(U(\alpha))$. This implies that $k$ is the influencer chosen at time $t$, that is, the one maximizing the \textsc{GT-UCB}\ index. Moreover, since $t < U(\alpha)$, one knows that this index is greater than $\alpha \lambda_k$. If $N_k(U(\alpha)) \geq S' + 2$, some basic calculations lead to \begin{align*} R_{k,N_k(U(\alpha))} \geq \alpha \lambda_k - 11 \sqrt{\frac{\lambda_k \log(2U(\alpha))}{S'}} - \frac{3\log(2U(\alpha))}{S'} - \frac{3\lambda_k}{2S'} \end{align*} We denote by $\lambda^{max} := \max_k \lambda_k$. If we take $S' = \lambda^{max} \log(2U(\alpha))$, we can rewrite the previous inequality as \begin{align*} R_{k,N_k(U(\alpha))} \geq \alpha \lambda_k - 11 - \frac{3}{\lambda^{max}} - \frac{3}{2} \end{align*} Thus, by definition of $T^*_k(\alpha)$, and if $\lambda^{max} > 6$, one gets \begin{align*} N_k(U(\alpha)) \leq T_k^*\left(\alpha - \frac{13}{\lambda_k} \right) + S' + 2. \end{align*} Finally, if we denote by $\lambda^{min} = \min_k \lambda_k$, we obtain that \begin{align*} U(\alpha) &\leq K(S' + 2) + T^*\left(\alpha - \frac{13}{\lambda^{min}} \right). \end{align*} We now apply Lemma \ref{lem:bubeck3}. We obtain that \begin{align*} U(\alpha) \leq 2K + \tau^* + K\lambda^{max} \log \left(8K + 4\tau^* + 10K\lambda^{max}\right) \leq \tau^* + K\lambda^{max} \log \left(4\tau^* + 11K\lambda^{max}\right) + 2K . \end{align*} We conclude with $T_{UCB}(\alpha) \leq \max(S', U(\alpha))$. \end{proof} \begin{lemma}[Lemma 3 from \cite{bubeck13}] \label{lem:bubeck3} Let $a > 0$, $b \geq 0.4$, and $x \geq e$, such that $x \leq a + b \log x$. Then one has \begin{align*} x \leq a + b \log(2a + 4b\log(4b)) . \end{align*} Moreover, we add that if $b \geq 3$, then $x \leq a + b \log (2a + 5b)$. \end{lemma} \section{Confidence intervals in the influencer fatigue setting} \label{sec:appendixfatigue} In this section, we consider a single influencer and omit its index $k$. 
We recall that we make the assumption that influencers have non-intersecting supports. Thus, after selecting the influencer $n$ times, the remaining potential can be rewritten $$ R_n = \sum_{u \in A} \mathds{1}\{u \text{ never activated }\} p_{n+1}(u), $$ --~$n+1$ because this is the remaining potential for the $(n+1)$-th spread~-- and the corresponding Good-Turing estimator is $$ \hat{R}_n = \frac{1}{n} \sum_{u \in A} U^{\gamma}_n(u), $$ where $U^{\gamma}_n(u) = \sum_{i = 1}^n \mathds{1}\{X_1 = \ldots = X_{i-1} = X_{i+1} = \ldots = X_n = 0, X_i = 1\} \frac{\gamma(n+1)}{\gamma(i)}$. \paragraph*{Estimator bias.} Lemma~\ref{lem:rottingbias} shows that the estimator of the remaining potential for the influencer fatigue setting has only a small bias. \begin{lemma}\label{lem:rottingbias} Denoting $\lambda = \sum_{u \in A} p(u)$, the bias of the remaining potential estimator is $$ \mathbb{E}[R_n] - \mathbb{E}[\hat{R}_n] \in \left[-\gamma(n+1)\frac{\lambda}{n},0\right]. $$ \end{lemma} \begin{proof} We have that $$\mathbb{E}[U^\gamma_n(u)] = \sum_{i = 1}^n p_i(u) \prod_{j \neq i} (1 - p_j(u)) \frac{\gamma(n+1)}{\gamma(i)} = p_{n+1}(u) \sum_{i = 1}^n \prod_{j \neq i} (1 - p_j(u)). $$ We can now compute the bias of the estimator: \begin{align*} \mathbb{E}[R_n] - \mathbb{E}[\hat{R}_n] &= \frac{1}{n} \sum_{u \in A} p_{n+1}(u) \left[\sum_{i = 1}^n \prod_{j = 1}^n (1 - p_j(u)) - \sum_{i = 1}^n \prod_{j \neq i} (1 - p_j(u)) \right] \\ &= \frac{1}{n} \sum_{u \in A} p_{n+1}(u) \sum_{i = 1}^n \prod_{j \neq i} (1 - p_j(u)) [1 - p_i(u) - 1] \\ &= -\frac{1}{n} \sum_{u \in A} p_{n+1}(u) \sum_{i = 1}^n p_i(u) \prod_{j \neq i} (1 - p_j(u)) \\ &= -\frac{1}{n} \mathbb{E}\left[\sum_{u \in A} p_{n+1}(u)U_n(u) \right] \in \left[-\frac{\sum_{u\in A}p_{n+1}(u)}{n}, 0\right] \end{align*} Note that the random variable $U_n(u)$ corresponds to the hapax definition given in the original OIMP problem, that is, $U_n(u) = \mathds{1}\{u \text{ activated exactly once}\}$. \end{proof} Unsurprisingly, we obtain the same bias as in the case where $\gamma$ is constant and equal to 1 (no fatigue). \paragraph*{Confidence Intervals.} To derive an optimistic algorithm, we need confidence intervals on the remaining potential. We operate in three steps: \begin{enumerate} \item \textbf{Good-Turing deviations:} Remember that $\hat{R}_n = \sum_{u \in A} \frac{U_n^{\gamma}(u)}{n}$. We have that \begin{align*} \mathbb{E}[U_n^{\gamma}(u)^2] &= \sum_{i = 1}^n p_i(u) \prod_{j \neq i} (1 - p_j(u)) \frac{\gamma(n+1)^2}{\gamma(i)^2} \\ &= \sum_{i = 1}^n p_{n+1}(u) \prod_{j \neq i} (1 - p_j(u)) \frac{\gamma(n+1)}{\gamma(i)} \\ &\leq np(u)\gamma(n+1). \end{align*} Thus, we have that $v := \sum_{u \in A} \mathbb{E}\left[\frac{U^{\gamma}_n(u)^2}{n^2}\right] \leq \frac{\sum_{u \in A} p_{n+1}(u)}{n}$. Applying Bennett's inequality to the independent random variables $\{X^{\gamma}_n(u)\}_{u \in A}$, where $X^{\gamma}_n(u) := \frac{U^{\gamma}_n(u)}{n}$, yields \begin{align} \mathbb{P}\left(\hat{R}_n - \mathbb{E}\left[\hat{R}_n\right] \geq \sqrt{\frac{2\lambda_{n+1} \log(1 / \delta)}{n}} + \frac{1}{3n} \log(1/\delta) \right) \leq \delta, \end{align} where $\lambda_n := \gamma(n)\sum_{u\in A} p(u)$. The same inequality can be derived for left deviations. \item \textbf{Remaining potential deviations:} Remember that $R_n = \sum_{u \in A} Z_n(u) p_{n+1}(u)$ where $Z_n(u) = \mathds{1}\{u \text{ never activated}\} = \mathds{1}\{X_1(u) = \cdots = X_n(u) = 0\}$. We denote $Y_n(u) = p_{n+1}(u)(Z_n(u) - \mathbb{E}[Z_n(u)])$ and $q_n(u) = \mathbb{P}(Z_n(u) = 1) = \prod_{i = 1}^n (1 - p_i(u))$. 
We have that \begin{align*} \mathbb{P}\left(R_n - \mathbb{E}[R_n] \geq \epsilon\right) &\leq e^{-t\epsilon}\prod_{u \in A} \mathbb{E}\left[e^{tY_n(u)}\right] \\ &= e^{-t\epsilon}\prod_{u \in A} \left(\mathbb{P}(Z_n(u) = 1) e^{tp_{n+1}(u)(1 - q_n(u))} + \mathbb{P}(Z_n(u) = 0) e^{-tp_{n+1}(u)q_n(u)}\right) \\ &= e^{-t\epsilon} \prod_{u \in A} \left(q_n(u) e^{tp_{n+1}(u)(1 - q_n(u))} + (1 - q_n(u)) e^{-tp_{n+1}(u)q_n(u)}\right) \\ &\leq e^{-t\epsilon} \prod_{u \in A} \exp\left(\frac{p_{n+1}(u) t^2}{4n}\right) \tag*{(by Eq.~\ref{eq:dev1} in Lemma~\ref{lemma:dev})} \end{align*} Minimizing over $t$, we obtain (for $t = \frac{2\epsilon n}{\sum_{u \in A} p_{n+1}(u)}$), $$ \mathbb{P}\left(R_n - \mathbb{E}[R_n] \geq \epsilon\right) \leq \exp\left(\frac{-\epsilon^2n}{\sum_{u \in A}p_{n+1}(u)}\right). $$ We can proceed similarly to obtain the left deviation. \end{enumerate} \vspace{2mm} Putting it all together, we obtain the confidence intervals of Theorem~\ref{th:confidence_bounds_fatigue}, which can be used in the design of the optimistic algorithm \textsc{Fat-GT-UCB}. \begin{theorem}\label{th:confidence_bounds_fatigue} With probability at least $1 - \delta$, for $\lambda_n = \gamma(n) \sum_{u \in A} p(u)$ and $\beta_n := \left(1 + \sqrt{2}\right) \times \sqrt{\frac{\lambda_{n+1} \log(4/\delta)}{n}} + \frac{1}{3n}\log\frac{4}{\delta}$, the following holds: \[ - \beta_n - \frac{\lambda_{n+1}}{n} \leq R_n - \hat{R}_n \leq \beta_n. \] \end{theorem} \begin{lemma}[Adaptation of Lemma 3.5 in~\cite{berend13}\label{lemma:dev}] Let $n \geq 1$, $p \in [0, 1]$, $\gamma: \mathbb{N} \to [0,1]$ a non-increasing function and $t \geq 0$. We denote $p_n = \gamma(n)p$ and $q_n = \prod_{i \leq n} (1 - p_i)$. \begin{align} &(a)~~q_n e^{tp_n(1 - q_n)} + (1 - q_n) e^{-tp_nq_n} \leq \exp\left(\frac{p_nt^2}{4n}\right) \label{eq:dev1}\\ &(b)~~q_n e^{tp_n(q_n - 1)} + (1 - q_n) e^{tp_nq_n} \leq \exp\left(\frac{p_nt^2}{4n}\right) \end{align} \end{lemma} \begin{proof} Let $q_n' = (1 - p_n)^n$. Clearly, $q_n \leq q_n'$. \vspace{2mm} \noindent(a) Using Theorem 3.2 in~\cite{berend13} with $p \equiv q_n$ and $t \equiv tp_n$, we have that $$ q_n e^{tp_n(1 - q_n)} + (1 - q_n) e^{-tp_nq_n} \leq \exp\left(\frac{1 - 2q_n}{4\log((1-q_n)/q_n)}t^2p_n^2\right). $$ So it suffices to show that $$ \frac{(1 - 2q_n)t^2p_n^2}{4\log((1-q_n)/q_n)} \leq \frac{p_nt^2}{4n}, $$ or equivalently, $$ \frac{(1 - 2q_n)p_n}{\log((1-q_n)/q_n)} \leq \frac{\log(1 - p_n)}{\log(q_n')}. $$ Rearranging, we obtain $$ L(q_n, q_n') := \frac{(1 - 2q_n)\log(1/q_n')}{\log((1-q_n)/q_n)} \leq \frac{\log(1/(1 - p_n))}{p_n} =: R(p_n). $$ As in \cite{berend13}, we show that $L \leq 1 \leq R$. The second inequality is true (see \cite{berend13}). The left-hand side can be written $L(q_n, q_n') = L_1(q_n) L_2(q_n')$. Clearly, $L_2(q_n') \geq 0$. If $L_1(q_n) \leq 0$, the left-hand side is negative and thus $L(q_n, q_n') \leq 1$. Otherwise, if $L_1(q_n) \geq 0$, we can upper bound the left-hand side as $L(q_n, q_n') \leq \frac{(1 - 2q_n)\log(1/q_n)}{\log((1-q_n)/q_n)}$ (because $q_n \leq q_n'$), which is proven to be less than $1$ in \cite{berend13}. This concludes the proof. \vspace{2mm}\noindent(b) It is shown in the proof of Lemma 3.5 (b) in~\cite{berend13} that $$ L(t) := \frac{1}{t^2p_n^2} \log\left[q_ne^{-tp_n(1-q_n)} + (1 - q_n)e^{tp_n q_n}\right] \leq \frac{1}{t^2p_n^2}\frac{t^2p_n}{4\log q_n/\log(1-p_n)} =: R(q_n). $$ It suffices to show that $R(q_n) \leq R(q_n')$ to obtain the desired inequality. This is true because $$ 0 \leq \frac{\log(1 - p_n)}{\log q_n} \leq \frac{\log(1 - p_n)}{\log q_n'}. 
$$ \end{proof} \section{Introduction} \label{sec:introduction} \input{introduction} \section{Setting} \label{sec:problem} \input{problem} \section{Algorithm} \label{sec:algorithm} \input{algorithm} \section{Analysis} \label{sec:analysis} \input{analysis} \section{OIMP with influencer fatigue} \label{sec:variant} \input{variant-short} \section{Experiments} \label{sec:experiments} \input{experiments} \section{Other related work} \label{sec:related} \input{related} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \section*{Acknowledgments} This work was partially supported by the French research project ALICIA (grant ANR-13-CORD-0020). \bibliographystyle{plain} \subsection{Extracting influencers from graphs} \label{sec:influencers} \textsc{GT-UCB}\ does not make any assumptions about the topology of the nodes under the scope of influencers. Indeed, in many settings it may be more natural to assume that the set of influencers is given and that the activations at each trial can be observed, while the topology of the underlying graph $G$ remains unknown. In other settings, we may start from an existing social network $G$, in which case we need to extract a set of $K$ representative influencers from it. Ideally, we should choose influencers that have little intersection in their ``scopes of influence'' to avoid useless seed selections. While this may be interpreted and performed differently from one application to another, we discuss next some of the most natural heuristics for selecting influencers, which we use in our experiments. \textbf{MaxDegree.} This method selects the $K$ nodes with the highest out-degrees in $G$. Note that by this criterion we may select influencers with overlapping influence scopes. \textbf{Greedy MaxCover.} This strategy follows the well-known greedy approximation algorithm for selecting a cover of the graph $G$. Specifically, the algorithm executes the following steps $K$ times: \begin{enumerate} \item Select the node with the highest out-degree \item Remove all out-neighbors of the selected node \end{enumerate} To limit intersections among influencer scopes even more, nodes reachable in more than one hop may also be removed at step~(2). \textbf{DivRank~\cite{mei10}.} DivRank is a PageRank-like method relying on reinforced random walks, with the goal of producing diverse high-ranking nodes, while maintaining the rich-gets-richer paradigm. We adapted the original DivRank procedure by inverting the edge directions. In doing so, we get influential nodes instead of prestigious ones. By selecting the $K$ highest scoring nodes as influencers, the diversity is naturally induced by the reinforcement of random walks. This ensures that the influencers are fairly scattered in the graph and should have limited impact on each other. \textbf{Influence maximization approximation algorithms.} The fourth method we tested in our experiments assigns uniformly at random a propagation probability to each edge of $G$, assuming the IC model. Then, a state-of-the-art influence maximization algorithm -- PMC in our experiments -- is executed on $G$ to get the set of $K$ influencers having the highest potential spread. \subsection{Graph datasets} Similarly to~\cite{lei15}, we tested our algorithm on HepPh and DBLP, two publicly available collaboration networks. HepPh is a citation graph, where a directed edge is established when an author cited at least one paper of another author. In DBLP, undirected edges are drawn between authors who have collaborated on at least one indexed paper. 
The datasets are summarized in Table~\ref{table:datasets}. We emphasize that we kept the datasets relatively small to allow for comparison with computation-heavy baselines, even though \textsc{GT-UCB}\ easily scales to large data, as will be illustrated in Section~\ref{sec:twitterexp}. \begin{table}[h] \centering \caption{Summary of the datasets.\label{table:datasets}} \begin{tabular}{lccc} \toprule \textbf{Dataset} & HepPh & DBLP & Twitter \\ \midrule \# of nodes & $34.5K$ & $317K$ & $11.6M$ \\ \# of edges & $422K$ & $2.1M$ & $38.4M$ \\ \bottomrule \end{tabular} \end{table} \textbf{Diffusion models.} In the work closest to ours, Lei et al.~\cite{lei15} compared their solution on the Weighted Cascade (WC) instance of IC, where the influence probabilities on incoming edges sum up to 1. More precisely, every edge $(u,v)$ has weight $1 / d_v$ where $d_v$ is the in-degree of node $v$. In this experimental study, and to illustrate that our approach is diffusion-independent, we added two other diffusion scenarios to the set of experiments. First, we included the tri-valency model (TV), which randomly associates a probability from $\{0.1, 0.01,0.001\}$ to every edge and follows the IC propagation model. We also conducted experiments under the Linear Threshold (LT) model, where the edge probabilities are set as in the WC case and where thresholds on nodes are sampled uniformly from $[0,1]$. \textbf{Baselines.} We compare \textsc{GT-UCB}\ to several baselines. \textsc{Random} chooses a random influencer at each round. \textsc{MaxDegree} selects the node with the largest degree at each step $i$, where the degree does not include previously activated nodes. Finally, \textsc{EG} corresponds to the confidence-bound explore-exploit method with exponentiated gradient update from~\cite{lei15}; it is the state-of-the-art method for the OIMP problem (code provided by the authors). We use this last baseline on WC and TV weighted graphs and tune its parameters in accordance with the results of their experiments: Maximum Likelihood Estimation is adopted for graph updates and edge priors are set to Beta($1,20$). Note that \textsc{EG} learns parameters for the IC model, and hence is not applicable to LT. These baselines are compared to an \textsc{Oracle} that knows beforehand the diffusion model together with its probabilities. At each round, it runs an influence maximization approximation algorithm -- PMC for IC propagation, SSA for LT. Note that previously activated nodes are not counted when estimating the value of a node with PMC or SSA, thus making \textsc{Oracle} an adaptive strategy. All experiments are done by fixing the trial horizon $N=500$, a setting that is in line with many real-world marketing campaigns, which are fairly short and do not aim to reach the entire population. \begin{figure*}[t!] 
\centering \subfloat[HepPh (WC -- Impact of $K$)\label{fig:hepphinfluencersWC}]{\includegraphics[width=0.3\textwidth]{{figures/experiments/hepph/WC/k1.nexperts}.pdf}} ~~ \subfloat[HepPh (WC -- Influencer extraction)\label{fig:hepphreductionWC}]{\includegraphics[width=0.3\textwidth]{{figures/experiments/hepph/WC/k1.reduction}.pdf}} \\ \subfloat[DBLP (WC -- Impact of $K$)\label{fig:DBLPinfluencersWC}]{\includegraphics[width=0.3\textwidth]{{figures/experiments/dblp/WC/k1.nexperts}.pdf}} ~~ \subfloat[DBLP (WC -- Influencer extraction)\label{fig:DBLPreductionWC}]{\includegraphics[width=0.3\textwidth]{{figures/experiments/dblp/WC/k1.reduction}.pdf}} \caption{Impact of $K$ and the influencer extraction criterion on influence spread.\label{fig:hyperparameters}} \end{figure*} \textbf{Choice of the influencers.} We show in Fig.~\ref{fig:hepphreductionWC} and~\ref{fig:DBLPreductionWC} the impact of the influencer extraction criterion on HepPh and DBLP under the WC model. We can observe that the spread is only slightly affected by the extraction criterion: different datasets lead to different optimal criteria. On the HepPh network, DivRank clearly leads to larger influence spreads. On DBLP, however, the extraction method has little impact on the resulting spreads. We emphasize that on some other graph and model combinations we observed that other extraction routines can perform better than DivRank. In summary, we note that \textsc{GT-UCB}\ performs consistently as long as the method leads to influencers that are well spread over the graph. In the following, for each graph, we used DivRank as the influencer extraction criterion, in accordance with these observations. In Fig.~\ref{fig:hepphinfluencersWC} and~\ref{fig:DBLPinfluencersWC}, we measure the impact of the number of influencers $K$ on the influence spread. We can observe that, on DBLP, a small number of influencers is sufficient to yield high-quality results. If too many influencers (relative to the budget) are selected (e.g. $K=200$), the initialization step required by \textsc{GT-UCB}\ is too long relative to the full budget, and hence \textsc{GT-UCB}\ does not reach its optimal spread -- some influencers still have a large remaining potential at the end. On the other hand, a larger number of influencers leads to greater influence spreads on HepPh: this network is relatively small ($34.5K$ nodes), and thus half of the nodes are already activated after $400$ trials. By having more influencers, we are able to access parts of the network that would not be accessible otherwise. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{{figures/experiments/dblp/WC/k1.time.nexperts50.nreduction3}.pdf} \caption{DBLP (WC) -- Execution time.\label{fig:executiontime}} \end{figure} \textbf{\textsc{GT-UCB}\ vs. baselines.} We evaluate the execution time of the different algorithms in Fig.~\ref{fig:executiontime}. As expected, \textsc{GT-UCB}\ largely outperforms~\textsc{EG} (and~\textsc{Oracle}). The two baselines require the execution of an approximate influence maximization algorithm at each round. In line with~\cite{arora17}, we observed that SSA has a prohibitive computational cost when incoming edge weights do not sum up to $1$, which is the case with both WC and TV. Thus, both~\textsc{Oracle} and~\textsc{EG} run PMC in all our experiments with IC propagation. \textsc{GT-UCB}\ is several orders of magnitude faster: it concentrates most of its running time on extracting influencers, while statistic updates and UCB computations are negligible. 
In Fig.~\ref{fig:baselines}, we show the growth of the spread for \textsc{GT-UCB}\ and baselines. For each experiment, \textsc{GT-UCB}\ uses $K=50$ if $L=1$ and $K=100$ if $L = 10$. First, we can see that~\textsc{MaxDegree} is a strong baseline in many cases, especially for WC and LT. \textsc{GT-UCB}\ results in good quality spreads across every combination of network and diffusion model. Interestingly, on the smaller graph HepPh, we observe an increase in the slope of spread after initialization, particularly visible at $t=50$ with WC and LT. This corresponds to the step when \textsc{GT-UCB}\ starts to select influencers maximizing $b_k(t)$ in the main loop. It shows that our strategy adapts well to the previous activations, and chooses good influencers at each iteration. Interestingly, \textsc{Random} performs surprisingly well in many cases, especially under TV weight assignment. However, when certain influencers are significantly better than others, it cannot adapt to select the best influencer unlike \textsc{GT-UCB}. \textsc{EG} performs well on HepPh, especially under TV weight assignment. However, it fails to provide competitive cumulative spreads on DBLP. We believe that~\textsc{EG} tries to estimate too many parameters for a horizon $T = 500$. After reaching this time step, less than $10\%$ of all nodes for WC, and $20\%$ for TV, are activated. This implies that we have hardly any information regarding the majority of edge probabilities, as most nodes are located in parts of the graph that have never been explored. \begin{figure*}[t!] \centering \subfloat[HepPh (WC -- $L=1$)\label{fig:HepphWC}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/hepph/WC/k1.algos.nexperts50.nreduction3}.pdf}} ~ \subfloat[DBLP (WC -- $L=1$)\label{fig:DBLP1WC}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/dblp/WC/k1.algos.nexperts50.nreduction3}.pdf}} ~ \subfloat[DBLP (WC -- $L=10$)\label{fig:DBLP5WC}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/dblp/WC/k10.algos.nexperts100.nreduction3}.pdf}} \\ \subfloat[HepPh (TV -- $L=1$)\label{fig:HepphTV}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/hepph/TV/k1.algos.nexperts50.nreduction3}.pdf}} ~ \subfloat[DBLP (TV -- $L=1$)\label{fig:DBLP1TV}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/dblp/TV/k1.algos.nexperts50.nreduction3}.pdf}} ~ \subfloat[DBLP (TV -- $L=10$)\label{fig:DBLP5TV}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/dblp/TV/k10.algos.nexperts100.nreduction3}.pdf}} \\ \subfloat[HepPh (LT -- $L=1$)\label{fig:HepphLT}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/hepph/LT/k1.algos.nexperts50.nreduction3}.pdf}} ~ \subfloat[DBLP (LT -- $L=1$)\label{fig:DBLP1LT}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/dblp/LT/k1.algos.nexperts50.nreduction3}.pdf}} ~ \subfloat[DBLP (LT -- $L=10$)\label{fig:DBLP5LT}]{\includegraphics[width=0.28\textwidth]{{figures/experiments/dblp/LT/k10.algos.nexperts100.nreduction3}.pdf}} \caption{Growth of spreads against the number of rounds.\label{fig:baselines}} \vspace{-2mm} \end{figure*} \subsection{Experiments on Twitter} \label{sec:twitterexp} We continue the experimental section with an evaluation of \textsc{GT-UCB}\ on the Twitter data, introduced as a motivating example in Section~\ref{sec:algorithm}. The interest of this experiment is to observe actual spreads, instead of simulated ones, over data that does not provide an explicit influence graph. 
\begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{{figures/experiments/twitter/tweet_graph/k1.algos.nexperts50.nreduction0}.pdf} \includegraphics[width=0.4\textwidth]{{figures/experiments/twitter/tweet_graph/k10.algos.nexperts100.nreduction0}.pdf} \caption{Twitter spread against rounds: (left) $L = 1$ (right) $L = 10$.\label{fig:twitter}} \vspace{-2mm} \end{figure} From the retweeting logs, for each \emph{active} user $u$ -- a user who posted more than $10$ tweets -- we select the users having retweeted at least one of $u$'s tweets. By doing so, we obtain the set of potentially influenceable users associated with each active user. We then apply the greedy algorithm to select the users maximizing the corresponding set cover. These are the influencers of \textsc{GT-UCB}\ and \textsc{Random}. \textsc{MaxDegree} is given the entire reconstructed network (described in Table~\ref{table:datasets}), that is, the network connecting active users to re-tweeters. To test realistic spreads, at each step, once an influencer is selected by \textsc{GT-UCB}, a random cascade initiated by that influencer is chosen from the logs and we record its spread. This provides realistic, model-free spread samples to the compared algorithms. Since Twitter only contains successful activations (re-tweets) and not the failed ones, we could not test against \textsc{EG}, which needs both kinds of feedback. In Fig.~\ref{fig:twitter}, we show the growth of the diffusion spread of \textsc{GT-UCB}\ against \textsc{MaxDegree} and \textsc{Random}. Again, \textsc{GT-UCB}\ uses $K=50$ if $L=1$ and $K=100$ if $L = 10$. We can see that \textsc{GT-UCB}\ outperforms the baselines, especially when a single node is selected at each round. We can observe that \textsc{MaxDegree} performs surprisingly well in both experiments. We emphasize that it relies on the knowledge of the entire network reconstructed from the retweeting logs, whereas \textsc{GT-UCB}\ is only given a set of (few) fixed influencers. \subsection{Influencer fatigue} We conclude the experimental section with a series of experiments on the Twitter data that take into account influencer fatigue. In a similar way to Section~\ref{sec:twitterexp}, we compute the set of potentially influenceable users (the support) associated with each active user --~the set of all users who retweeted at least one tweet from the active user. We then choose 20 influencers as follows: we take the 5 best influencers, that is, the 5 active users with the largest support; then, the 51st to 55th best influencers; then, the 501st to 505th best influencers; and finally the 5 worst influencers. By doing so, we obtain a set of 20 influencers with diverse profiles, roughly covering the possible influencing outcomes. Ideally, a good algorithm that takes into account influencer fatigue would need to focus on the 5 best influencers at the beginning, but would need to move to other influencers when the initially optimal ones start to lose influence due to fatigue. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figures/experiments/fatigue/twitter_fat_gt_ucb.pdf} \includegraphics[width=0.4\textwidth]{figures/experiments/fatigue/twitter_fat_gt_ucb_gamma2.pdf} \caption{\textsc{Fat-GT-UCB}\ vs competitors on Twitter logs with (left) $\gamma_1$, (right) $\gamma_2$. \label{fig:fat_gt_ucb}} \end{figure} We compare \textsc{Fat-GT-UCB}\ to \textsc{GT-UCB}, which does not use the influence fatigue function, and to the \textsc{Random} baseline. 
As in Section~\ref{sec:twitterexp}, when an algorithm selects an influencer, we choose a random spread from the logs belonging to the selected influencer; we then simulate fatigue by removing every user in the spread with probability $1 - \gamma(n)$ (i.e., keeping it with probability $\gamma(n)$), where $n$ is the number of times the influencer has already been played. We show the results of this comparison in Fig.~\ref{fig:fat_gt_ucb}. We tested two different weariness functions, namely $\gamma_1(n) = 1 / n$ and $\gamma_2(n) = 1 / \sqrt{n}$. We can see that, in both scenarios, \textsc{Fat-GT-UCB}\ performs the best, showing that our UCB-like approach can effectively handle the notion of influencer fatigue in the OIMP problem. Unsurprisingly, \textsc{GT-UCB}\ performs better with the weariness function $\gamma_2$ than it does with $\gamma_1$: the former has a lower diminishing impact and thus the penalty for not incorporating fatigue is less severe with $\gamma_2$. \subsection{Background} Given a graph $G = (V, E)$, the traditional problem of influence maximization is to select a set of seed nodes $I \subseteq V$, under a cardinality constraint $|I|=L$, such that the expected \emph{spread} --~that is, the number of activated nodes~-- of an influence cascade starting from $I$ is maximized. Formally, denoting by the random variable $S(I)$ the spread initiated by the seed set $I$, influence maximization aims to solve the following optimization problem: \[ \argmax_{I \subseteq V, |I|=L} \mathbb{E}[|S(I)|]. \] As mentioned before, a plethora of algorithms have been proposed to solve the influence maximization problem, under specific diffusion models. These algorithms can be viewed as \emph{full-information} and \emph{offline} approaches: they choose all the seeds at once, in one step, and they have access to the complete diffusion configuration, i.e., the graph topology and the influence probabilities. In the \emph{online} case, during a sequence of $N$ (called hereafter the \emph{budget}) consecutive trials, $L$ seed nodes are selected at each trial, and \emph{feedback} on the achieved spread from these seeds is collected. \subsection{Influence maximization via influencers} The short timespan of campaigns makes parameter estimation very challenging within small horizons. In other cases, assuming knowledge of the topology -- or even the existence -- of an influence graph is too strong an assumption. In contrast to~\cite{lei15}, we do not try to estimate edge probabilities in some graph; instead, we assume the existence of a known set of spread seed candidates -- in the following referred to as the \emph{influencers} -- who are the only access to the medium of diffusion. Formally, we let $[K] := \{1, \ldots, K\}$ be a set of influencers up for selection; each influencer is connected to an unknown and potentially large base (the influencer's \emph{support}) of basic nodes, each with an unknown activation probability. For illustration, we give in Figure~\ref{fig:depth1} an example of this setting, with $3$ influencers connected to $4$, $5$, and $4$ basic nodes, respectively. Now, the problem boils down to estimating the value of the $K$ influencers, and $K$ is typically much smaller than the number of parameters of the diffusion model. The medium over which diffusion operates may be a diffusion graph, but we make no assumption on that: the diffusion may also happen in a completely unknown environment. Finally, note that by choosing $K = |V|$ influencers, the classic influence maximization problem can be seen as a special instance of our setting.
\begin{figure}[t] \centering \includegraphics[height=3.3cm]{figures/influencers.pdf} \caption{Three influencers with associated activation probabilities $p_k(u)$.} \label{fig:depth1} \end{figure} We complete the formal setting by assuming the existence of $K$ sets $A_k \subseteq V$ of basic nodes such that each influencer $k \in [K]$ is connected to each node in $A_k$. We denote by $p_k(u)$ the probability for influencer $k$ to activate the child node $u \in A_k$. In this context, the diffusion process can be abstracted as follows. \begin{definition}[Influence process] When an influencer $k \in [K]$ is selected, each basic node $u \in A_k$ is \emph{sampled} for activation, according to its probability $p_k(u)$. The \emph{feedback} for $k$'s selection consists of all the activated nodes, while the associated \emph{reward} consists only of the \emph{newly activated} ones. \end{definition} \paragraph*{Remark} Limiting the influence maximization method to a small subset of the node base may allow their values to be estimated accurately and rapidly, even in a highly uncertain environment, hence the algorithmic interest. At the same time, this is directly motivated by marketing scenarios involving marketers who may not have knowledge of the entire diffusion graph, having access only to a few influential people who can diffuse information (the influencers in our setting), or who may simply prefer such a two-step flow of diffusion for various reasons, such as establishing credibility. Moreover, even though we model the social reach of every influencer by 1-hop links to the to-be-influenced nodes, these edges are just an abstraction of the activation probability, and may in reality represent longer paths in an underlying, unknown influence graph $G$. \subsection{Online influencer marketing with persistence} We are now ready to define the \emph{online influencer marketing with persistence} task. \begin{problem}[OIMP] \label{def:problem} Given a set of influencers $[K] := \{1, \ldots, K\}$, a \emph{budget} of $N$ trials, and a number $1 \leq L \leq K$ of influencers to be activated at each trial, the objective of the \emph{online influencer marketing with persistence} (OIMP) is to solve the following optimization problem: \[ \argmax_{I_n \subseteq [K], |I_n|=L, \forall 1\leqslant n\leqslant N} \mathbb{E}\left|\bigcup_{1\leqslant n \leqslant N} S(I_n) \right|. \] \end{problem} As noticed in~\cite{lei15}, offline influence maximization can be seen as a special instance of the online one, where the budget is $N=1$. Note that, in contrast to persistence-free online influence maximization --~considered, e.g., in \cite{vaswani15, wen16}~-- the performance criterion used in OIMP displays the so-called \emph{diminishing returns property}: the expected number of nodes activated by successive selections of a given seed is decreasing, due to the fact that nodes that have already been activated are discounted. We refer to the expected number of nodes remaining to be activated as the \emph{remaining potential} of a seed. The diminishing returns property implies that there is no static best set of seeds to be selected; rather, the algorithm must follow an adaptive policy, which can detect that the remaining potential of a seed is small and switch to another seed that has been less exploited.
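The influence process above can be made concrete with the following minimal Python sketch. It is an illustration only: the data structures (a list of nodes for $A_k$ and a dictionary of probabilities for $p_k$) and the function name are our assumptions.
\begin{verbatim}
import random

# Sketch of the influence process: selecting influencer k samples every node
# u in its support A_k independently with probability p_k(u).  The reward
# only counts nodes that were never activated before.

def play_influencer(support_k, p_k, already_activated, rng=random):
    """support_k: iterable of basic nodes; p_k: dict node -> activation prob."""
    feedback = {u for u in support_k if rng.random() < p_k[u]}  # all activations
    reward = feedback - already_activated                       # newly activated
    return feedback, reward

# Toy usage:
A_1 = [1, 2, 3, 4]
p_1 = {1: 0.9, 2: 0.5, 3: 0.2, 4: 0.1}
feedback, reward = play_influencer(A_1, p_1, already_activated={1})
print(feedback, reward)
\end{verbatim}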
Our solution to this problem has to overcome challenges on two fronts: (1) it needs to estimate the potential of nodes at each round, without knowing the diffusion model or the activation probabilities, and (2) it needs to identify the currently best influencers, according to their estimated potentials. Other approaches for the online influence maximization problem rely on estimating diffusion parameters~\cite{lei15, vaswani15, wen16} -- generally, a distribution over the influence probability of each edge in the graph. However, the assumption that one can accurately estimate the diffusion parameters -- and notably the diffusion probabilities -- may be overly ambitious, especially in cases where the number of allowed trials (the budget) is rather limited. A limited trial setting is arguably more in line with real-world campaigns: take, for example, political or marketing campaigns, which only last for a few weeks. In our approach, we work with parameters on \emph{nodes}, instead of edges. More specifically, these parameters represent the potentials of remaining spreads from each of the influencer nodes. We stress that this potential can evolve as the campaign proceeds. In this way, we sidestep the dependency on specific diffusion models and, furthermore, remove entirely the dependency on a detailed graph topology. \subsection{Model adaptation} The OIMP problem with influencer fatigue can be defined as follows. \begin{problem}[OIMP with \emph{influencer fatigue}\label{def:problem-fatigue}] Given a set of influencers $[K]$, a \emph{budget} of $N$ trials, and a number $1 \leq L \leq K$ of influencers to be activated at each trial, the objective of online influencer marketing with persistence (OIMP) with \emph{influencer fatigue} is to solve the following optimization problem: $$ \argmax_{I_n \subseteq [K], |I_n|=L, \forall 1\leqslant n\leqslant N} \mathbb{E}\left|\bigcup_{1\leqslant n \leqslant N} S(I_n) \right|, $$ knowing that, at the $s$-th selection of an influencer $k \in [K]$, the probability that $k$ activates some basic node $u$ is: $$ p_s(u) = \gamma(s) p(u), $$ for $\gamma: \mathbb{N}^* \to (0, 1]$ a \emph{known} non-increasing function and $p(u) \in [0,1]$. \end{problem} Our initial OIMP formulation can be seen as a special instance of the one with influencer fatigue, where the non-increasing function $\gamma$ --~referred to as the weariness function in the following~-- is the constant function $n \mapsto 1$. We follow the same strategy to solve this new OIMP variant, estimating the remaining potential of a given influencer by an adaptation of the Good-Turing estimator. What makes the problem more complex in this setting is the fact that our hapax statistics must now take into account the round at which they occurred. \subsection{The \textsc{Fat-GT-UCB}~algorithm} As we did previously, to simplify the analysis, we assume that the influencers have \emph{non-intersecting supports}. We redefine the remaining potential in the setting with influencer fatigue as $$ R_k(t) := \sum_{u \in A_k} \mathds{1}\{u \text{ never activated}\} \gamma(n_k(t) + 1) p(u), $$ where $p(u)$ is the fatigue-free probability that influencer $k$ activates node $u$, independently of the number of spreads it has already initiated. Again, the remaining potential is equal to the expected number of additional conversions upon starting the next cascade from $k$.
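Before turning to its estimation, the fatigued remaining potential can be written out directly. The following sketch computes $R_k(t)$ for a single influencer, assuming oracle access to the unknown probabilities $p(u)$; it only serves to fix the quantity that the estimator below targets, and the names are illustrative.
\begin{verbatim}
# Sketch of the fatigued remaining potential R_k(t) for one influencer k.
# `never_activated` flags nodes of A_k that no cascade has reached yet and
# `p` holds the (unknown, here assumed given) activation probabilities p(u).

def remaining_potential(never_activated, p, n_k, gamma):
    """R_k(t) = sum_u 1{u never activated} * gamma(n_k + 1) * p(u)."""
    return gamma(n_k + 1) * sum(p[u] for u in p if never_activated[u])

# Toy usage with gamma(n) = 1/sqrt(n):
p = {"u1": 0.8, "u2": 0.4, "u3": 0.1}
never = {"u1": False, "u2": True, "u3": True}
print(remaining_potential(never, p, n_k=3, gamma=lambda n: n ** -0.5))  # ~0.25
\end{verbatim}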
The Good-Turing estimator adapted to the setting with influencer fatigue is defined as follows: $$ \hat{R}_k(t) = \frac{1}{n_k(t)} \sum_{u \in A_k} U^\gamma_{k, n_k(t)}(u), $$ where $U^\gamma_{k,n}(u) := \sum_{1 \leq i \leq n} \mathds{1}\{X_{k,1}(u) = \ldots = X_{k, i-1}(u) = X_{k, i + 1}(u) = \ldots = X_{k, n}(u) = 0, X_{k, i}(u) = 1\} \frac{\gamma(n+1)}{\gamma(i)}$. In short, if $i$ is the round at which a hapax has been activated, we reweight it by the factor $\gamma(n+1) / \gamma(i)$, since we are interested in its contribution to the $(n+1)$-th spread initiated by the influencer. We provide a formal justification for this estimator by computing its bias in Appendix~\ref{sec:appendixfatigue}. Following the same strategy and principles from the bandit literature, the \textsc{Fat-GT-UCB}\ adaptation of \textsc{GT-UCB}\ selects at each step (trial) $t$ the highest upper-confidence bound on the remaining potential -- denoted by $b_k(t)$ -- and activates (plays) the corresponding influencer $k$. The upper confidence bound can now be set as follows (the full details can be found in Appendix~\ref{sec:appendixfatigue} -- see Theorem~\ref{th:confidence_bounds_fatigue}): \begin{align}\label{eq:fatucb} b_k(t) = \hat{R}_k(t) + \left(1+\sqrt{2}\right)\sqrt{\frac{\hat{\lambda}_k(t) \log(4t)}{n_k(t)}} + \frac{\log(4t)}{3n_k(t)}, \end{align} where $\hat{R}_k(t)$ is the Good-Turing estimator and $$ \hat{\lambda}_k(t) := \frac{\gamma(n_k(t) + 1)}{n_k(t)} \sum_{s=1}^{n_k(t)} \frac{|S_{k,s}|}{\gamma(s)} $$ is an estimator for the expected spread from influencer $k$.
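The estimator and the index $b_k(t)$ of Eq.~\eqref{eq:fatucb} can be sketched in a few lines of Python. This is an illustrative transcription of the formulas above, not the implementation used in the experiments; the list-of-sets representation of past spreads is our assumption.
\begin{verbatim}
import math

# Fatigue-aware Good-Turing estimator and index b_k(t) for one influencer k.
# `spreads` is the list S_{k,1}, ..., S_{k,n} of sets of nodes activated by
# the past cascades seeded at k (n >= 1).

def fat_good_turing(spreads, gamma):
    """Hapax-based estimate of the remaining potential after n plays."""
    n = len(spreads)
    counts, first_seen = {}, {}
    for i, spread in enumerate(spreads, start=1):   # i = round of activation
        for u in spread:
            counts[u] = counts.get(u, 0) + 1
            first_seen.setdefault(u, i)
    # Hapaxes (nodes activated exactly once), reweighted by gamma(n+1)/gamma(i).
    hapax_mass = sum(gamma(n + 1) / gamma(i)
                     for u, i in first_seen.items() if counts[u] == 1)
    return hapax_mass / n

def fat_index(spreads, t, gamma):
    """Upper confidence bound b_k(t) built from the estimator above."""
    n = len(spreads)
    r_hat = fat_good_turing(spreads, gamma)
    lam_hat = gamma(n + 1) / n * sum(len(s) / gamma(i)
                                     for i, s in enumerate(spreads, start=1))
    return (r_hat
            + (1 + math.sqrt(2)) * math.sqrt(lam_hat * math.log(4 * t) / n)
            + math.log(4 * t) / (3 * n))
\end{verbatim}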
Like \textsc{GT-UCB}, \textsc{Fat-GT-UCB}\ relies on \emph{optimism in the face of uncertainty}. This achieves robustness against the stochastic nature of the cascades, by ensuring that influencers who ``underperformed'' with respect to their potential in previous trials may still be selected later on. Consequently, \textsc{Fat-GT-UCB}\ maintains a degree of \emph{exploration} of influencers, in addition to the \emph{exploitation} of the best influencers as per the feedback gathered so far. \begin{algorithm} \caption{ -- \textsc{Fat-GT-UCB}\ ($L = 1$)} \begin{algorithmic}[1]\small \REQUIRE{Set of influencers $[K]$, time budget $N$, fatigue function $\gamma$} \STATE{\textbf{Initialization:} play each influencer $k\in[K]$ once, observe the spread $S_{k,1}$, set $n_k=1$} \STATE{For each $k\in [K]$: update the reward $W=W\cup S_{k,1}$} \FOR{$t = K + 1, \ldots, N$}\label{alg:fatfor} \STATE Compute $b_k(t)$ for every influencer $k$ \STATE Choose $k(t) = \argmax_{k \in [K]} b_k(t)$ \label{alg:fatoptimism} \STATE Play influencer $k(t)$ and observe spread $S(t)$ \STATE Update cumulative reward: $W= W \cup S(t)$ \STATE Update statistics of influencer $k(t)$: $n_{k(t)}(t+1) = n_{k(t)}(t) + 1$ and $S_{k,n_k(t)} = S(t)$. \ENDFOR \label{alg:fatendfor} \RETURN $W$ \end{algorithmic} \label{alg:fatgooducb} \end{algorithm} Algorithm~\ref{alg:fatgooducb} presents the main components of \textsc{Fat-GT-UCB}\ for the case $L=1$, that is, when a single influencer is chosen at each step. The algorithm starts by activating each influencer $k\in[K]$ once, in order to initialize its Good-Turing estimator. The main loop of \textsc{Fat-GT-UCB}\ occurs at lines \ref{alg:fatfor}-\ref{alg:fatendfor}. Let $S(t)$ be the observed spread at trial $t$, and let $S_{k,s}$ be the result of the $s$-th diffusion initiated at influencer $k$. At every step $t > K$, we recompute for each influencer $k \in [K]$ its index $b_k(t)$, representing the upper confidence bound on the expected reward in the next trial. The computation of this index uses the previous samples $S_{k,1},\ldots,S_{k,n_k(t)}$ and the number of times each influencer $k$ has been activated up to trial $t$, $n_k(t)$.
Based on the result of Theorem~\ref{th:confidence_bounds_fatigue}, the index is set to the upper confidence bound $b_k(t)$ defined in Eq.~\eqref{eq:fatucb}. Then, in line~\ref{alg:fatoptimism}, \textsc{Fat-GT-UCB}\ selects the influencer $k(t)$ with the largest index, and initiates a cascade from this node. The feedback $S(t)$ is observed and is used to update the cumulative reward set $W$. Note that $S(t)$ provides only the identifiers of the nodes that were activated, with no information on \emph{how} this diffusion happened in the hidden diffusion medium. Finally, the statistics associated with the chosen influencer $k(t)$ are updated.
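Putting the pieces together, the main loop of Algorithm~\ref{alg:fatgooducb} can be sketched as follows, reusing \texttt{fat\_index} from the previous snippet. The callback \texttt{play(k)}, which returns the spread of one cascade seeded at influencer $k$, is an assumed stand-in for the environment or the logs.
\begin{verbatim}
# Sketch of the Fat-GT-UCB loop (L = 1), reusing fat_index defined above.

def fat_gt_ucb(influencers, budget, gamma, play):
    spreads = {k: [] for k in influencers}   # S_{k,1}, ..., S_{k,n_k(t)}
    reward = set()                           # cumulative set of activated nodes
    # Initialization: play each influencer once.
    for k in influencers:
        s = play(k)
        spreads[k].append(s)
        reward |= s
    # Main loop: optimistic selection via the index b_k(t).
    for t in range(len(influencers) + 1, budget + 1):
        k_t = max(influencers, key=lambda k: fat_index(spreads[k], t, gamma))
        s = play(k_t)
        spreads[k_t].append(s)
        reward |= s
    return reward
\end{verbatim}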
\section{Introduction} In the field of $f$-electron systems, phenomena which originate from the multipole degrees of freedom have been studied intensively, since such degrees of freedom, in addition to the dipole, are expected to become sources of exotic ordering and physical properties. The quadrupole moment couples to the lattice and its influence can be detected, for example, by ultrasonic measurements. In recent years, even the effects of the octupole moment have been investigated. Among the most representative phenomena discovered in multipole physics are the quadrupole and octupole orderings in NpO$_2$~\cite{Paixao2002,Tokunaga2005, Kubo2005PRBR,Kubo2005PRB,Kubo2005PRBB} and Ce$_x$La$_{1-x}$B$_6$.~\cite{Akatsu2003,Kubo2003,Kubo2004, Morie2004,Mannix2005,Kuwahara2007,Inami2014} While it is in general difficult to detect the octupole moment, resonant x-ray scattering,~\cite{Paixao2002,Mannix2005} NMR,~\cite{Tokunaga2005} anisotropic magnetization,~\cite{Morie2004} and neutron scattering~\cite{Kuwahara2007} experiments have confirmed the octupole order. In these compounds, the crystalline electric field (CEF) ground state is the $\Gamma_8$ quartet, which has sufficient degrees of freedom to possess quadrupole and octupole moments in addition to the dipole moment. Thus, the $\Gamma_8$ quartet has been regarded as an ideal system for multipole physics. However, a large degeneracy, such as in a quartet, is not a necessary condition to possess higher-order multipole moments. If the CEF ground state does not have a dipole moment but is not a singlet, this state inevitably has higher-order multipole degrees of freedom. In fact, the $\Gamma_3$ doublet state under a cubic CEF, which we will explore in this paper, does not have the dipole but has the quadrupole moments $O^0_2$ and $O^2_2$ with the $\Gamma_{3g}$ symmetry and the octupole moment $T_{xyz}$ with the $\Gamma_{2u}$ symmetry. The absence of the dipole moment is also an advantage of the $\Gamma_3$ systems, since we can focus only on the higher-order multipoles. In the $\Gamma_3$ state, the degeneracy is not due to the Kramers theorem, which is applicable only to an ion with an odd number of $f$ electrons. Thus, we consider an ion with an even number of $f$ electrons. In particular, the Pr$^{3+}$ ion has two $f$ electrons, and in some Pr compounds, the CEF ground state is the $\Gamma_3$ doublet. In recent years, interesting phenomena, which probably originate from the multipole degrees of freedom, have been reported for Pr compounds with the $\Gamma_3$ CEF ground state. In PrPb$_3$, incommensurate quadrupole ordering has been reported.~\cite{Onimaru2005} PrIr$_2$Zn$_{20}$ and some other Pr compounds with the same crystal structure (Pr 1-2-20 compounds) and the $\Gamma_3$ CEF ground state become superconducting at low temperatures,~\cite{Onimaru2010,Sakai2012, Matsubayashi2012,Onimaru2012,Tsujimoto2014,Onimaru2016} and the superconductivity might be mediated by multipole fluctuations. In this paper, to elucidate multipole phenomena of the $\Gamma_3$ systems, we derive the multipole interactions from a simple model with only direct $f$-$f$ hopping. While the actual exchange process would be through orbitals other than the $f$ orbital, such a process can be represented by effective $f$-$f$ hopping.
An important point is that the symmetry of the $f$ orbital restricts the form of the hopping in both the direct and effective cases, and we would obtain qualitatively the same results for the multipole interactions.~\cite{Kubo2005PRBB} The anisotropy of the multipole moments is closely tied to the real-space directions, and the multipole interactions are intrinsically anisotropic. This is in sharp contrast to the isotropic spin-spin interaction in a system without spin-orbit coupling. Thus, the nature of the multipole interactions can depend drastically on the lattice structure. In the present study, we pay particular attention to this point. We derive the multipole interactions for simple cubic (sc), bcc, and fcc lattices. We also compare the results of the present model for the $f^2$-$\Gamma_3$ systems with those of the $\Gamma_8$ model for the $f^1$ systems~\cite{Kubo2005PRBR,Kubo2005PRB} to find common features between these two classes. \section{Ground and intermediate states}\label{model} To construct electronic states, we first include the effect of the spin-orbit coupling in the one-electron states and consider only the $f$-electron states with the total angular momentum $j=5/2$. These states split into states with $\Gamma_7$ and $\Gamma_8$ symmetry under a cubic CEF [see Fig.~\ref{level_scheme}(a)]. \begin{figure} \includegraphics[width=0.99\linewidth] {level_scheme.eps} \caption{\label{level_scheme} (Color online) Electron configurations for (a) $\Gamma_7$ state of $f^1$, (b) $\Gamma_3$ state of $f^2$, and (c) $\Gamma_6$ state of $f^3$. The bold lines denote spin singlets composed of the $\Gamma_7$ and $\Gamma_8$ orbitals. } \end{figure} The $\Gamma_7$ states at site $\bm{r}$ are given by \begin{subequations} \begin{align} c^{\dagger}_{\bm{r} 7 \uparrow} |0 \rangle &\equiv \frac{1}{\sqrt{6}} \left(a^{\dagger}_{\bm{r} 5/2} -\sqrt{5} a^{\dagger}_{\bm{r} -3/2} \right)|0 \rangle,\\ c^{\dagger}_{\bm{r} 7 \downarrow} |0 \rangle &\equiv \frac{1}{\sqrt{6}} \left(a^{\dagger}_{\bm{r} -5/2} -\sqrt{5} a^{\dagger}_{\bm{r} 3/2} \right)|0 \rangle, \end{align} \end{subequations} where $a^{\dagger}_{\bm{r} j_z}$ is the creation operator of the electron with the $z$-component $j_z$ of the total angular momentum at $\bm{r}$ and $|0\rangle$ denotes the vacuum state. The $\Gamma_8$ states are given by \begin{subequations} \begin{align} c^{\dagger}_{\bm{r} \alpha \uparrow} |0 \rangle &\equiv \frac{1}{\sqrt{6}} \left( \sqrt{5}a^{\dagger}_{\bm{r} 5/2} +a^{\dagger}_{\bm{r} -3/2} \right)|0 \rangle,\\ c^{\dagger}_{\bm{r} \alpha \downarrow} |0 \rangle &\equiv \frac{1}{\sqrt{6}} \left( \sqrt{5}a^{\dagger}_{\bm{r} -5/2} +a^{\dagger}_{\bm{r} 3/2} \right)|0 \rangle,\\ c^{\dagger}_{\bm{r} \beta \uparrow} |0 \rangle &\equiv a^{\dagger}_{\bm{r} 1/2}|0 \rangle,\\ c^{\dagger}_{\bm{r} \beta \downarrow} |0 \rangle &\equiv a^{\dagger}_{\bm{r} -1/2}|0 \rangle. \end{align} \end{subequations} In the above equations, $\sigma=\uparrow$ or $\downarrow$ denotes the Kramers degeneracy of the one-electron states, although it is not a real spin because of the spin-orbit coupling. In the following, however, we call it spin for simplicity. In actual situations, the $f^2$-$\Gamma_3$ doublet is mainly composed of two singlets between the $\Gamma_7$ and $\Gamma_8$ orbitals, and thus we assume an antiferromagnetic interaction between the $\Gamma_7$ and $\Gamma_8$ orbitals.
Such an interaction would be justified by perturbatively including the effects of the sixth order terms in the CEF, which cannot be included as a one-electron potential for $j=5/2$ states but are indispensable to stabilize the $\Gamma_3$ state.~\cite{Hotta2006} The model Hamiltonian is \begin{equation} H=H_{\text{kin}}+H_{\text{loc}}. \end{equation} $H_{\text{kin}}$ is the kinetic energy term which we will discuss later. The local part is given by \begin{equation} H_{\text{loc}} = \Delta \sum_{\bm{r}}(n_{\bm{r}8}-n_{\bm{r}7}) +J_{78} \sum_{\bm{r}} \bm{s}_{\bm{r}7} \cdot \bm{s}_{\bm{r}8}, \end{equation} where \begin{subequations} \begin{align} n_{\bm{r} 7}&=\sum_{\sigma}c^{\dagger}_{\bm{r} 7 \sigma}c_{\bm{r} 7 \sigma},\\ n_{\bm{r} 8}&= \sum_{\tau \sigma}c^{\dagger}_{\bm{r} \tau \sigma}c_{\bm{r} \tau \sigma},\\ \bm{s}_{\bm{r} 7}&=\frac{1}{2}\sum_{\sigma \sigma'} c^{\dagger}_{\bm{r} 7 \sigma} \bm{\sigma}_{\sigma \sigma'} c_{\bm{r} 7 \sigma'},\\ \bm{s}_{\bm{r} 8}&=\frac{1}{2}\sum_{\tau \sigma \sigma'} c^{\dagger}_{\bm{r} \tau \sigma} \bm{\sigma}_{\sigma \sigma'} c_{\bm{r} \tau \sigma'}. \end{align} \end{subequations} Here, $\tau=\alpha$ or $\beta$ and $\bm{\sigma}$ are the Pauli matrices. $\Delta$ denotes the CEF level splitting [see Fig.~\ref{level_scheme}(a)] and $J_{78}$ denotes the coupling constant of the antiferromagnetic interaction between the $\Gamma_7$ and $\Gamma_8$ orbitals [see Fig.~\ref{level_scheme}(b)]. Then, for a sufficiently large $J_{78}$, the $f^2$ ground states are spin singlets composed of the $\Gamma_7$ and $\Gamma_8$ orbitals [see Fig.~\ref{level_scheme}(b)]: \begin{equation} \begin{split} |\tau (\bm{r}) \rangle &\equiv \frac{1}{\sqrt{2}} (c^{\dagger}_{\bm{r} \tau \uparrow}c^{\dagger}_{\bm{r} 7 \downarrow} -c^{\dagger}_{\bm{r} \tau \downarrow}c^{\dagger}_{\bm{r} 7 \uparrow})|0\rangle\\ &= \frac{i}{\sqrt{2}} \sigma^y_{\sigma \sigma'} c^{\dagger}_{\bm{r} \tau \sigma}c^{\dagger}_{\bm{r} 7 \sigma'}|0\rangle\\ &\equiv B_{\sigma \sigma'} c^{\dagger}_{\bm{r} \tau \sigma}c^{\dagger}_{\bm{r} 7 \sigma'}|0\rangle. \end{split} \end{equation} The repeated indices should be summed hereafter. These states constitute a basis of the $\Gamma_3$ representation of cubic symmetry. Note that the present model is one of the simplest models to realize the $\Gamma_3$ ground state and we should improve it if we deal with the CEF excited states. For example, when we accommodate two electrons in the $\Gamma_8$ orbitals, we obtain six states with energy $2\Delta$, but they should split into three levels. To describe such splitting in the CEF excited states, it is necessary to include the interactions between $\Gamma_8$ orbitals. Thus, we should restrict ourselves to low energy states around the $\Gamma_3$ CEF ground state in the present simplified model. We consider the exchange process between nearest-neighbor sites with the $\Gamma_3$ ground state. Among the intermediate $f^1$-$f^3$ states, we consider only the lowest energy states. If the $f^3$ site has zero or two $\Gamma_7$ electrons, it cannot gain the energy from the antiferromagnetic interaction. Concerning the $f^1$ states, we assume that the $\Gamma_7$ state has lower energy, i.e., $\Delta>0$. However, $\Delta$ should be sufficiently smaller than $J_{78}$ for the realization of the $\Gamma_3$ ground state in the $f^2$ configurations. Then, each site should have one $\Gamma_7$ electron in the intermediate states. That is, only the hopping between the $\Gamma_8$ orbitals is allowed. 
In the following, we explicitly write the intermediate states and evaluate the matrix elements of the exchange processes. The intermediate $f^1$ states are the $\Gamma_7$ states [see Fig.~\ref{level_scheme}(a)], \begin{equation} |\sigma(\bm{r}) \rangle \equiv c^{\dagger}_{\bm{r} 7 \sigma} |0\rangle. \end{equation} We calculate the matrix elements of the annihilation operator of the $\Gamma_8$ electron between the $f^1$ and the $\Gamma_3$ states. The effect of the annihilation operator on the $\Gamma_3$ state is written as \begin{equation} \begin{split} c_{\bm{r} \tau \sigma}|\tau'(\bm{r})\rangle &= c_{\bm{r} \tau \sigma} B_{\sigma' \sigma''} c^{\dagger}_{\bm{r} \tau' \sigma'} c^{\dagger}_{\bm{r} 7 \sigma''} |0\rangle\\ &= \delta_{\tau \tau'} B_{\sigma \sigma'} c^{\dagger}_{\bm{r} 7 \sigma'} |0\rangle\\ &\equiv B^{\tau'}_{\tau \sigma; \sigma'} |\sigma'(\bm{r}) \rangle. \end{split} \end{equation} Then, we obtain the matrix element as \begin{equation} \langle \sigma'(\bm{r}) | c_{\bm{r} \tau \sigma}|\tau'(\bm{r})\rangle = B^{\tau'}_{\tau \sigma; \sigma'}. \end{equation} Note that $B^{\tau' *}_{\tau \sigma; \sigma'}=B^{\tau'}_{\tau \sigma; \sigma'}$, since we have defined these states with real coefficients from the basis $\Gamma_7$ and $\Gamma_8$ states. The ground states among the $f^3$ states for a strong antiferromagnetic interaction $J_{78}$ between the $\Gamma_7$ and $\Gamma_8$ orbitals are the $\Gamma_6$ states [see Fig.~\ref{level_scheme}(c)], \begin{subequations} \begin{align} \begin{split} |\tilde{\uparrow}(\bm{r}) \rangle \equiv& \frac{1}{\sqrt{6}} ( 2c^{\dagger}_{\bm{r} \alpha \uparrow} c^{\dagger}_{\bm{r} \beta \uparrow} c^{\dagger}_{\bm{r} 7 \downarrow}\\ &-c^{\dagger}_{\bm{r} \alpha \uparrow} c^{\dagger}_{\bm{r} \beta \downarrow} c^{\dagger}_{\bm{r} 7 \uparrow} -c^{\dagger}_{\bm{r} \alpha \downarrow} c^{\dagger}_{\bm{r} \beta \uparrow} c^{\dagger}_{\bm{r} 7 \uparrow} ) |0\rangle\\ =& \frac{1}{\sqrt{3}} \left[ c^{\dagger}_{\bm{r} \alpha \uparrow}|\beta(\bm{r})\rangle -c^{\dagger}_{\bm{r} \beta \uparrow}|\alpha(\bm{r})\rangle \right], \end{split}\\ \begin{split} |\tilde{\downarrow}(\bm{r}) \rangle \equiv& \frac{1}{\sqrt{6}} ( 2c^{\dagger}_{\bm{r} \alpha \downarrow} c^{\dagger}_{\bm{r} \beta \downarrow} c^{\dagger}_{\bm{r} 7 \uparrow}\\ &-c^{\dagger}_{\bm{r} \alpha \downarrow} c^{\dagger}_{\bm{r} \beta \uparrow} c^{\dagger}_{\bm{r} 7 \downarrow} -c^{\dagger}_{\bm{r} \alpha \uparrow} c^{\dagger}_{\bm{r} \beta \downarrow} c^{\dagger}_{\bm{r} 7 \downarrow} ) |0\rangle\\ =& \frac{1}{\sqrt{3}} \left[ -c^{\dagger}_{\bm{r} \alpha \downarrow}|\beta(\bm{r})\rangle +c^{\dagger}_{\bm{r} \beta \downarrow}|\alpha(\bm{r})\rangle \right]. \end{split} \end{align} \end{subequations} Note that, in a local model considering all the 14 $f$-orbitals, we obtain the $\Gamma_6$ ground state when we accommodate three electrons for a realistic parameter set to obtain a $\Gamma_3$ ground state in an $f^2$ case.~\cite{Hotta2006} Thus, the intermediate $\Gamma_6$ state is reasonable. Note also that the states $c^{\dagger}_{\bm{r} \alpha \sigma}|\beta(\bm{r})\rangle +c^{\dagger}_{\bm{r} \beta \sigma}|\alpha(\bm{r})\rangle$ are represented by (spin singlet composed of two $\Gamma_8$ orbitals)$\otimes \Gamma_7$ and they do not gain the antiferromagnetic energy. 
The matrix element of the creation operator is given by \begin{equation} \langle \tilde{\sigma}'(\bm{r})|c^{\dagger}_{\bm{r} \tau \sigma} | \tau'(\bm{r}) \rangle = i\sigma^y_{\tau \tau'} \sigma^z_{\sigma \sigma'} \frac{\sqrt{3}}{2} \equiv \tilde{B}^{\tau'}_{\tau \sigma; \sigma'}. \end{equation} We note that $\tilde{B}^{\tau' *}_{\tau \sigma; \sigma'}=\tilde{B}^{\tau'}_{\tau \sigma; \sigma'}$. \section{Hopping}\label{hopping} The hopping processes are described by the kinetic energy term of the Hamiltonian for the $\Gamma_8$ orbitals: \begin{equation} \begin{split} H_{\text{kin}} &=\sum_{\bm{r},\bm{\mu},\tau,\sigma,\tau^{\prime},\sigma^{\prime}} c^{\dagger}_{\bm{r}+\bm{\mu} \tau \sigma} t^{\bm{\mu}}_{\tau \sigma; \tau^{\prime} \sigma^{\prime}} c_{\bm{r} \tau^{\prime} \sigma^{\prime}},\\ &=\sum_{\bm{r},\bm{\mu},\nu,\nu^{\prime}} c^{\dagger}_{\bm{r}+\bm{\mu} \nu} t^{\bm{\mu}}_{\nu \nu^{\prime}} c_{\bm{r} \nu^{\prime}}, \end{split} \end{equation} where the vector $\bm{\mu}$ connects nearest-neighbor sites. Here, we have introduced an abbreviation $\nu=(\tau,\sigma)$. Since $H_{\text{kin}}$ is Hermitian, $t^{\bm{\mu} *}_{\nu \nu'}=t^{-\bm{\mu}}_{\nu' \nu}$. In this study, we consider only the $\sigma$ bonding $(ff\sigma)$ for the hopping integrals. Although the hopping integrals were derived in Ref.~\onlinecite{Hotta2003} for the sc lattice and in Ref.~\onlinecite{Kubo2005PRB} for the other lattices, here we write down again the hopping integrals for readers' convenience. To write out the hopping integral $t^{\bm{\mu}}$ for each lattice structure concisely, we define $4\times4$ matrices as follows \begin{subequations} \begin{align} \tilde{1}_{\tau \sigma; \tau^{\prime} \sigma^{\prime}} &\equiv \delta_{\tau \tau^{\prime}} \delta_{\sigma \sigma^{\prime}},\\ \tilde{\bm{\tau}}_{\tau \sigma; \tau^{\prime} \sigma^{\prime}} &\equiv \bm{\sigma}_{\tau \tau^{\prime}} \delta_{\sigma \sigma^{\prime}},\\ \tilde{\bm{\sigma}}_{\tau \sigma; \tau^{\prime} \sigma^{\prime}} &\equiv \delta_{\tau \tau^{\prime}} \bm{\sigma}_{\sigma \sigma^{\prime}},\\ \tilde{\eta}^{\pm} &\equiv (\pm \sqrt{3}\tilde{\tau}^x-\tilde{\tau}^z)/2. \end{align} \end{subequations} Then, the hopping integrals for the sc lattice are given by \begin{subequations} \begin{align} t^{(1,0,0)} &= [\tilde{1}-\tilde{\eta}^+]t_1,\\ t^{(0,1,0)} &= [\tilde{1}-\tilde{\eta}^-]t_1,\\ t^{(0,0,1)} &= [\tilde{1}-\tilde{\tau}^z]t_1, \end{align} \end{subequations} where we have set the lattice constant as unity and $t_1$=$3(ff\sigma)/14$. For the bcc lattice, \begin{subequations} \begin{align} t^{(1/2,1/2,1/2)} &= [\tilde{1} +\tilde{\tau}^y (+\tilde{\sigma}^x+\tilde{\sigma}^y+\tilde{\sigma}^z)/\sqrt{3}]t_2, \label{t111}\\ t^{(-1/2,1/2,1/2)} &= [\tilde{1} +\tilde{\tau}^y (+\tilde{\sigma}^x-\tilde{\sigma}^y-\tilde{\sigma}^z)/\sqrt{3}]t_2,\\ t^{(1/2,-1/2,1/2)} &= [\tilde{1} +\tilde{\tau}^y (-\tilde{\sigma}^x+\tilde{\sigma}^y-\tilde{\sigma}^z)/\sqrt{3}]t_2,\\ t^{(1/2,1/2,-1/2)} &= [\tilde{1} +\tilde{\tau}^y (-\tilde{\sigma}^x-\tilde{\sigma}^y+\tilde{\sigma}^z)/\sqrt{3}]t_2, \label{t11-1} \end{align} \end{subequations} with $t_2$=$2(ff\sigma)/21$. 
For the fcc lattice, \begin{subequations} \begin{align} t^{(0,1/2,1/2)}&= [\tilde{1}+(\tilde{\eta}^+ -4\sqrt{3} \tilde{\tau}^y \tilde{\sigma}^x)/7]t_3, \label{t011}\\ t^{(1/2,0,1/2)}&= [\tilde{1}+(\tilde{\eta}^- -4\sqrt{3} \tilde{\tau}^y \tilde{\sigma}^y)/7]t_3, \\ t^{(1/2,1/2,0)}&= [\tilde{1}+(\tilde{\tau}^z -4\sqrt{3} \tilde{\tau}^y \tilde{\sigma}^z)/7]t_3, \\ t^{(0,1/2,-1/2)}&= [\tilde{1}+(\tilde{\eta}^+ +4\sqrt{3} \tilde{\tau}^y \tilde{\sigma}^x)/7]t_3, \\ t^{(-1/2,0,1/2)}&= [\tilde{1}+(\tilde{\eta}^- +4\sqrt{3} \tilde{\tau}^y \tilde{\sigma}^y)/7]t_3, \\ t^{(1/2,-1/2,0)}&= [\tilde{1}+(\tilde{\tau}^z +4\sqrt{3} \tilde{\tau}^y \tilde{\sigma}^z)/7]t_3, \end{align} \end{subequations} with $t_3$=$(ff\sigma)/8$. Except for the sc lattice, the hopping integrals are complex numbers and dependent on $\sigma$. Note that $t^{-\bm{\mu}}=t^{\bm{\mu}}$. \section{Multipole interaction}\label{results} By employing the second-order perturbation theory with respect to $H_{\text{kin}}$, we derive the effective Hamiltonian: \begin{equation} \begin{split} H^{\text{(eff)}} = \sum_{a,b,u}\sum_{m \ne 0} |0, a \rangle \langle 0, a| &H_{\text{kin}} \frac{|m, u \rangle \langle m, u|}{E_0-E_m}\\ \times &H_{\text{kin}} |0, b \rangle \langle 0, b|. \end{split} \end{equation} Here, $|0, a \rangle$ is a ground state without $H_{\text{kin}}$ with the energy $E_0$ and $|m, u \rangle$ is an $m$-th excited state with the energy $E_m$. In the following, we consider only the first excited states among the intermediate states, which are described by a pair of nearest-neighboring $f^1$ and $f^3$ sites discussed above. Then, we need to evaluate the following matrix element: \begin{equation} \begin{split} -&\Delta E \times H^{\text{(eff)}}_{\tau_1 \tau_2; \tau'_1 \tau'_2}(\bm{r}_1,\bm{r}_2)\\ = \sum_{u} &\langle \tau_1(\bm{r}_1) \ \tau_2(\bm{r}_2) |H_{\text{kin}} |1,u \rangle \\ \times&\langle 1,u| H_{\text{kin}}| \tau'_1(\bm{r}_1) \ \tau'_2(\bm{r}_2) \rangle, \end{split} \end{equation} where $\Delta E=E_1-E_0=J_{78}/2$. This matrix element denotes the transitions of the states: $\tau'_1 \rightarrow \tau_1$ at $\bm{r}_1$ and $\tau'_2 \rightarrow \tau_2$ at $\bm{r}_2$. The part of the element in which the intermediate $f^1$ state is located at $\bm{r}_2$ and $f^3$ state is located at $\bm{r}_1$ is given by \begin{equation} \begin{split} & \sum_u \langle \tau_1(\bm{r}_1) \ \tau_2(\bm{r}_2) | c^{\dagger}_{\bm{r}_2 \nu_2} t^{\bm{r}_2-\bm{r}_1}_{\nu_2 \nu_1} c_{\bm{r}_1 \nu_1} |1, u \rangle\\ &\times \langle 1, u| c^{\dagger}_{\bm{r}_1 \nu'_1} t^{\bm{r}_1-\bm{r}_2}_{\nu'_1 \nu'_2} c_{\bm{r}_2 \nu'_2} | \tau'_1(\bm{r}_1) \ \tau'_2(\bm{r}_2) \rangle\\ =& t^{\bm{r}_2-\bm{r}_1}_{\nu_2 \nu_1} t^{\bm{r}_1-\bm{r}_2}_{\nu'_1 \nu'_2}\\ &\times \langle \tau_1(\bm{r}_1) | c_{\bm{r}_1 \nu_1} |\tilde{\sigma}_1(\bm{r}_1)\rangle\langle\tilde{\sigma}_1(\bm{r}_1)| c^{\dagger}_{\bm{r}_1 \nu'_1} | \tau'_1(\bm{r}_1) \rangle\\ &\times \langle \tau_2(\bm{r}_2) | c^{\dagger}_{\bm{r}_2 \nu_2} |\sigma_2(\bm{r}_2)\rangle\langle\sigma_2(\bm{r}_2)| c_{\bm{r}_2 \nu'_2} | \tau'_2(\bm{r}_2) \rangle\\ =& t^{\bm{r}_2-\bm{r}_1}_{\nu_2 \nu_1} t^{\bm{r}_1-\bm{r}_2}_{\nu'_1 \nu'_2} \tilde{B}^{\tau_1 *}_{\nu_1 \sigma_1} \tilde{B}^{\tau'_1}_{\nu'_1 \sigma_1} B^{\tau_2 *}_{\nu_2 \sigma_2} B^{\tau'_2}_{\nu'_2 \sigma_2}\\ =& \text{Tr}[ B^{\tau_2 \text{T}} t^{\bm{r}_2-\bm{r}_1} \tilde{B}^{\tau_1} \tilde{B}^{\tau'_1 \text{T}} t^{\bm{r}_1-\bm{r}_2} B^{\tau'_2} ], \end{split} \end{equation} where Tr and T denote trace and transpose of a matrix, respectively. 
Similarly, the part of the element with the intermediate $f^1$ state at $\bm{r}_1$ and $f^3$ state at $\bm{r}_2$ is given by \begin{equation} \begin{split} & \sum_u \langle \tau_1(\bm{r}_1) \ \tau_2(\bm{r}_2) | c^{\dagger}_{\bm{r}_1 \nu'_1} t^{\bm{r}_1-\bm{r}_2}_{\nu'_1 \nu'_2} c_{\bm{r}_2 \nu'_2} |1,u \rangle\\ &\times \langle 1,u| c^{\dagger}_{\bm{r}_2 \nu_2} t^{\bm{r}_2-\bm{r}_1}_{\nu_2 \nu_1} c_{\bm{r}_1 \nu_1} | \tau'_1(\bm{r}_1) \ \tau'_2(\bm{r}_2) \rangle\\ =& t^{\bm{r}_2-\bm{r}_1}_{\nu_2 \nu_1} t^{\bm{r}_1-\bm{r}_2}_{\nu'_1 \nu'_2}\\ &\times\langle \tau_1(\bm{r}_1) | c^{\dagger}_{\bm{r}_1 \nu'_1} |\sigma_1(\bm{r}_1)\rangle\langle\sigma_1(\bm{r}_1)| c_{\bm{r}_1 \nu_1} | \tau'_1(\bm{r}_1) \rangle\\ &\times \langle \tau_2(\bm{r}_2) | c_{\bm{r}_2 \nu'_2} |\tilde{\sigma}_2(\bm{r}_2)\rangle\langle\tilde{\sigma}_2(\bm{r}_2)| c^{\dagger}_{\bm{r}_2 \nu_2} | \tau'_2(\bm{r}_2) \rangle\\ =& t^{\bm{r}_2-\bm{r}_1}_{\nu_2 \nu_1} t^{\bm{r}_1-\bm{r}_2}_{\nu'_1 \nu'_2} B^{\tau_1 *}_{\nu'_1 \sigma_1} B^{\tau'_1}_{\nu_1 \sigma_1} \tilde{B}^{\tau_2 *}_{\nu'_2 \sigma_2} \tilde{B}^{\tau'_2}_{\nu_2 \sigma_2}\\ =& \text{Tr}[ \tilde{B}^{\tau'_2 \text{T}} t^{\bm{r}_2-\bm{r}_1} B^{\tau'_1} B^{\tau_1 \text{T}} t^{\bm{r}_1-\bm{r}_2} \tilde{B}^{\tau_2} ]\\ =& \text{Tr}[ \tilde{B}^{\tau_2 \text{T}} t^{\bm{r}_2-\bm{r}_1 *} B^{\tau_1} B^{\tau'_1 \text{T}} t^{\bm{r}_1-\bm{r}_2 *} \tilde{B}^{\tau'_2} ]. \end{split} \end{equation} Then, the total matrix element of the effective Hamiltonian is \begin{equation} \begin{split} &-\Delta E \times H^{\text{(eff)}}_{\tau_1 \tau_2; \tau'_1 \tau'_2}(\bm{r}_1,\bm{r}_2)\\ =& \text{Tr}[ B^{\tau_2 \text{T}} t^{\bm{r}_2-\bm{r}_1} \tilde{B}^{\tau_1} \tilde{B}^{\tau'_1 \text{T}} t^{\bm{r}_1-\bm{r}_2} B^{\tau'_2} ]\\ &+\text{Tr}[ \tilde{B}^{\tau_2 \text{T}} t^{\bm{r}_2-\bm{r}_1 *} B^{\tau_1} B^{\tau'_1 \text{T}} t^{\bm{r}_1-\bm{r}_2 *} \tilde{B}^{\tau'_2} ]. \end{split} \end{equation} By straightforward algebraic calculations with the matrices $B^{\tau}$, $\tilde{B}^{\tau}$, and $t^{\bm{\mu}}$, we can evaluate this equation for each lattice structure. The obtained effective Hamiltonian can be rewritten by using the multipole operators for the $\Gamma_3$ state defined by \begin{subequations} \begin{align} O^0_{2 \bm{r}} &=\sum_{\tau \tau'} |\tau(\bm{r})\rangle \sigma^z_{\tau \tau'} \langle \tau'(\bm{r})|,\\ O^2_{2 \bm{r}} &=\sum_{\tau \tau'} |\tau(\bm{r})\rangle \sigma^x_{\tau \tau'} \langle \tau'(\bm{r})|,\\ T_{xyz \bm{r}} &=\sum_{\tau \tau'} |\tau(\bm{r})\rangle \sigma^y_{\tau \tau'} \langle \tau'(\bm{r})| \label{Txyz}. \end{align} \end{subequations} $O^0_{2 \bm{r}}$ and $O^2_{2 \bm{r}}$ are the quadrupole moments with $\Gamma_{3g}$ symmetry and $T_{xyz \bm{r}}$ is the octupole moment with $\Gamma_{2u}$ symmetry. In the following, we show the derived multipole interactions in the Fourier-transformed form. Previously, we have also derived the multipole interactions for $f^1$ systems with the $\Gamma_8$ CEF ground state by a similar method.~\cite{Kubo2005PRBR,Kubo2005PRB} We will compare the multipole interactions for the present $f^2$-$\Gamma_3$ model with those for the $f^1$-$\Gamma_8$ model.
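As a cross-check of this algebra, the trace formula above can be evaluated numerically. The following sketch is an illustration only: the matrix representations follow the definitions given above, with the row index $\nu=(\tau,\sigma)$ ordered as $(\alpha\uparrow,\alpha\downarrow,\beta\uparrow,\beta\downarrow)$ and $t_1$ set to unity. It builds the hopping matrix for a $z$ bond of the sc lattice, assembles $-\Delta E\, H^{\text{(eff)}}$ for that bond, and expands the result in Pauli products of the $\Gamma_3$ doublet; the only surviving product is $O^0_2 O^0_2$, consistent with the sc-lattice result given in the next subsection.
\begin{verbatim}
import numpy as np
from itertools import product

# Pauli matrices and the 4x4 orbital/spin matrices of Sec. III, with
# nu = (tau, sigma) ordered as (alpha,up), (alpha,dn), (beta,up), (beta,dn).
s0 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.array([[1., 0.], [0., -1.]])
one = np.kron(s0, s0)
tau_z = np.kron(sz, s0)
t_z = one - tau_z                     # sc hopping along (0,0,1), in units of t_1

# Matrix elements B^tau (f^2 -> f^1) and Btilde^tau (f^2 -> f^3) from Sec. II.
e = {"a": np.array([[1.], [0.]]), "b": np.array([[0.], [1.]])}
isy = np.array([[0., 1.], [-1., 0.]])                 # i sigma^y (real)
B = {t: np.kron(e[t], isy / np.sqrt(2)) for t in "ab"}
Bt = {"a": np.sqrt(3) / 2 * np.kron(isy[:, [0]], sz),
      "b": np.sqrt(3) / 2 * np.kron(isy[:, [1]], sz)}

def h_eff_bond(t_mu):
    """Delta_E * H_eff[(tau1 tau2), (tau1' tau2')] for one bond."""
    h = np.zeros((4, 4))
    lab = list(enumerate("ab"))
    for (i, tau1), (j, tau2) in product(lab, repeat=2):
        for (k, tau1p), (l, tau2p) in product(lab, repeat=2):
            val = np.trace(B[tau2].T @ t_mu @ Bt[tau1]
                           @ Bt[tau1p].T @ t_mu @ B[tau2p]) \
                + np.trace(Bt[tau2].T @ t_mu.conj() @ B[tau1]
                           @ B[tau1p].T @ t_mu.conj() @ Bt[tau2p])
            h[2 * i + j, 2 * k + l] = -np.real(val)
    return h

# Expand the z-bond Hamiltonian in Pauli products of the Gamma_3 doublet:
# sigma^z ~ O_2^0, sigma^x ~ O_2^2, sigma^y ~ T_xyz.
ops = {"1": s0, "O20": sz, "O22": sx, "Txyz": sy}
h = h_eff_bond(t_z)
for (na, a), (nb, b) in product(ops.items(), repeat=2):
    c = np.real(np.trace(np.kron(a, b).conj().T @ h)) / 4
    if abs(c) > 1e-9:
        print(na, "x", nb, "=", round(c, 3))  # a constant and +1.5 * O20 x O20
\end{verbatim}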
\subsection{sc lattice} For the sc lattice, we obtain only the following quadrupole interaction, \begin{equation} \begin{split} H^{\text{(eff)}}&= \frac{3}{2}\sum_{\bm{q}} \biggl[ \cos q_z O^0_{2 \bm{q}}O^0_{2 -\bm{q}}\\ +&\cos q_x \frac{1}{4} (\sqrt{3}O^2_{2 \bm{q}}-O^0_{2 \bm{q}}) (\sqrt{3}O^2_{2 -\bm{q}}-O^0_{2 -\bm{q}})\\ +&\cos q_y \frac{1}{4} (\sqrt{3}O^2_{2 \bm{q}}+O^0_{2 \bm{q}}) (\sqrt{3}O^2_{2 -\bm{q}}+O^0_{2 -\bm{q}}) \biggr], \end{split} \end{equation} in units of $t^2_1/\Delta E$. We can intuitively understand why this interaction is dominant, since the $z$ direction is congenial to $3z^2-r^2$ ($O^0_2$) symmetry [see Fig.~\ref{nearest_neighbor_interaction}(a)]. \begin{figure} \includegraphics[width=0.85\linewidth] {nearest_neighbor_interaction.eps} \caption{\label{nearest_neighbor_interaction} (Color online) Schematic figures of the electronic states on the nearest-neighboring sites preferred by the interaction (a) along the $z$ direction (antiferro arrangement of the $O^0_2$ moments) and (b) along the [111] direction (antiferro arrangement of the $T_{xyz}$ moments). The gradation of color in (b) indicates the anisotropic distribution of the dipole moment. } \end{figure} Also in the $\Gamma_8$ model, this quadrupole interaction is the main interaction and the $\Gamma_{2u}$ octupole interaction is absent. Note that the derived model is the same as a model for ferromagnetic insulating manganites describing only the orbital degrees of freedom of $e_g$ electrons,~\cite{Shiina1997,Brink1999,Ishihara2000,Kubo2002JPSJNo5} except for the overall coefficient. This model has continuously degenerate ground states at the mean-field level due to the frustration which originates from the anisotropic interaction. If we approximate the ordering vector for PrPb$_3$ by $\bm{q}=(\pi,\pi,0)$ and impose this ordering vector on the model, we obtain an ordering of the $O^2_2$ moment within the mean-field theory. This is at odds with the experimental indications of the $O^0_2$ ordering.~\cite{Onimaru2004,Onimaru2005} To obtain the $O^0_2$ ordering in PrPb$_3$, we need to improve the present theory, for example, by considering long-range interactions, which are also important to stabilize the incommensurate ordering observed in PrPb$_3$. \subsection{bcc lattice} For the bcc lattice, we obtain only the following octupole interaction, \begin{equation} \begin{split} H^{\text{(eff)}} = 6\sum_{\bm{q}} &\cos(q_x/2)\cos(q_y/2)\cos(q_z/2)\\ &\times T_{xyz \bm{q}}T_{xyz -\bm{q}}, \end{split} \end{equation} in units of $t^2_2/\Delta E$. The ground state of this effective model is the staggered ordered state of the octupole moments. Since the [111] direction is congenial to $xyz$ symmetry, we can naturally understand that this interaction is dominant [see Fig.~\ref{nearest_neighbor_interaction}(b)]. Also in the $\Gamma_8$ model, this octupole interaction is the main interaction and the $\Gamma_{3g}$ quadrupole interaction is absent. The existence of the $\tilde{\tau}^y$ term in Eqs.~\eqref{t111}--\eqref{t11-1} suggests an interaction of the $T_{xyz}$ moments [Eq.~\eqref{Txyz}], in accord with the present result. We will discuss this point in the next subsection.
If ordering of this type of octupole moments occurs, we will observe an anomaly in the specific heat as in an ordinary phase transition, but the determination of the order parameter will be challenging, since neither dipole nor quadrupole moments will be induced. This is in contrast to the octupole order in NpO$_2$ and in Ce$_x$La$_{1-x}$B$_6$, where quadrupole moments are induced.~\cite{Paixao2002,Akatsu2003, Kubo2003,Kubo2004,Tokunaga2005,Kubo2005PRBR,Kubo2005PRB,Kubo2005PRBB, Inami2014} The possibility of the ordering of this octupole moment has also been discussed for an $e_g$-electron model for manganites in a ferromagnetic metallic phase.~\cite{Takahashi2000,Maezono2000, Brink2001,Khomskii2001} However, it was revealed that this ordering is unstable against fluctuations beyond the mean-field theory.~\cite{Kubo2002JPSJNo1} On the other hand, in the present model for $f$ electrons, we have a clear picture for the realization of the $T_{xyz}$ ordering and it should be stable against fluctuations. Note also that, in the diamond structure, the nearest-neighbor sites are located at $(1/4,1/4,1/4)$ and equivalent positions measured from the origin, and thus we obtain only the $\Gamma_{2u}$ octupole interaction, as in the bcc lattice. Therefore, we may expect strong fluctuations of the octupole moments in the Pr 1-2-20 systems, in which Pr ions form the diamond structure. \subsection{fcc lattice} For the fcc lattice, we obtain both quadrupole and octupole interactions, \begin{equation} \begin{split} H^{\text{(eff)}} = \frac{3}{49}&\sum_{\bm{q}} \biggl[ \cos(q_x/2)\cos(q_y/2) O^0_{2 \bm{q}}O^0_{2 -\bm{q}}\\ +&\cos(q_y/2)\cos(q_z/2)\\ \times& \frac{1}{4} (\sqrt{3}O^2_{2 \bm{q}}-O^0_{2 \bm{q}}) (\sqrt{3}O^2_{2 -\bm{q}}-O^0_{2 -\bm{q}})\\ +&\cos(q_z/2)\cos(q_x/2)\\ \times& \frac{1}{4} (\sqrt{3}O^2_{2 \bm{q}}+O^0_{2 \bm{q}}) (\sqrt{3}O^2_{2 -\bm{q}}+O^0_{2 -\bm{q}}) \biggr]\\ +\frac{144}{49}&\sum_{\bm{q}} [\cos(q_x/2)\cos(q_y/2)\\ +&\cos(q_y/2)\cos(q_z/2)\\ +&\cos(q_z/2)\cos(q_x/2)] T_{xyz \bm{q}}T_{xyz -\bm{q}}, \end{split} \end{equation} in units of $t^2_3/\Delta E$. Broadly speaking, the fcc lattice has characteristics intermediate between the sc and bcc lattices and, as a result, we have obtained both quadrupole and octupole interactions. In the $\Gamma_8$ model, the $\Gamma_{2u}$ octupole interaction competes with a $\Gamma_{4u}$ dipole and octupole interaction and a $\Gamma_{5u}$ octupole interaction, which are absent here since the $\Gamma_3$ doublet does not have these degrees of freedom. The $\Gamma_{3g}$ quadrupole interaction is weak but finite in the $\Gamma_8$ model, similarly to the present $\Gamma_3$ model. Since the octupole interaction is larger than the quadrupole interaction, the ground state of the model is the staggered ordered state of the octupole moments, at least within the mean-field theory with two-sublattice structures. In general, the quadrupole and octupole interactions may compete with each other, but at least in the present simple model, the octupole interaction is dominant. The large difference in the magnitude of the interactions originates from the coefficients in the hopping integral. The ratio of the coefficient of $\tilde{\eta}^{+}$ to that of $\tilde{\tau}^y$ is 1 to $-4\sqrt{3}$ in Eq.~\eqref{t011}, and the ratio of their squares is 1 to 48; this is the ratio of the quadrupole and octupole interactions. In the sc lattice, the hopping integral does not have a $\tilde{\tau}^y$ term and the octupole interaction is absent.
In the bcc lattice, the hopping integral does not have an $\tilde{\eta}^+$, $\tilde{\eta}^-$, or $\tilde{\tau}^z$ term and the quadrupole interaction is absent. However, in general, it is not so simple. For example, if the hopping is isotropic, that is, there is no $\tilde{\eta}^+$, $\tilde{\eta}^-$, $\tilde{\tau}^z$, or $\tilde{\tau}^y$ term, we obtain an isotropic Heisenberg-type interaction, i.e., both quadrupole and octupole interactions. \begin{table}[t] \caption{\label{dominant_interactions} Dominant interactions in each lattice for the $f^2$-$\Gamma_3$ model (present study) and for the $f^1$-$\Gamma_8$ model (Refs.~\onlinecite{Kubo2005PRBR,Kubo2005PRB}). } \begin{ruledtabular} \begin{tabular}{cccc} CEF state & sc & bcc & fcc \\ \hline $f^2$-$\Gamma_3$ & $\Gamma_{3g}$ quadrupole & $\Gamma_{2u}$ octupole & $\Gamma_{2u}$ octupole\\ $f^1$-$\Gamma_8$ & $\Gamma_{3g}$ quadrupole & $\Gamma_{2u}$ octupole & $\Gamma_{2u}$, $\Gamma_{4u}$, $\Gamma_{5u}$ \end{tabular} \end{ruledtabular} \end{table} In Table~\ref{dominant_interactions}, we summarize the dominant interactions in each lattice for the $f^2$-$\Gamma_3$ model obtained here and for the $f^1$-$\Gamma_8$ model (Refs.~\onlinecite{Kubo2005PRBR,Kubo2005PRB}). \section{Multipole interactions in another simplified model} In this section, we discuss another simple model to describe the $\Gamma_3$ CEF ground state. Here, we omit the $\Gamma_7$ orbital and construct the $\Gamma_3$ states only from the $\Gamma_8$ orbitals. This model is too simple to discuss realistic situations, but by comparing with the results in the previous section, we can recognize how much the multipole interactions are altered by the choice of the model. When the $\Gamma_7$ orbital is omitted, the derivation of the multipole interactions becomes rather simple, since the intermediate $f^1$-$f^3$ states do not split. For the sc lattice, we obtain no multipole interaction, i.e., the second-order perturbation theory merely gives an energy shift. This indicates that this model is too simple. For the bcc and fcc lattices, we obtain only the octupole interaction. Thus, the dominance of the octupole interaction in the bcc and fcc lattices is common to the models in this section and in the previous sections. Therefore, we expect that the characteristic features of the multipole interactions summarized in Table~\ref{dominant_interactions} will not change, even if we use different ways to construct the $\Gamma_3$ state, except for special cases such as the sc lattice in this section. \section{Summary} We have investigated the multipole interactions by applying second-order perturbation theory to a simple model for $f^2$ ions with the $\Gamma_3$ non-Kramers doublet ground state under a cubic CEF, in particular, by paying attention to the lattice structure. We have obtained the $\Gamma_{3g}$ quadrupole interaction for a sc lattice and the $\Gamma_{2u}$ octupole interaction for a bcc lattice. For an fcc lattice, we have obtained both interactions. These characteristics are the same as those for the $f^1$-$\Gamma_8$ model. Thus, we expect that such tendencies or correspondences between the dominant multipole interactions and the lattice structures are common as long as the ground CEF state has these multipole degrees of freedom.
While several kinds of multipole order are possible in general, the $\Gamma_{2u}$ octupole order is particularly fascinating, since it induces neither dipole nor quadrupole moments, even though the specific heat will show an anomaly at the transition point as in an ordinary phase transition. In this regard, it would be interesting to search materials with bcc lattices and the diamond structure for the $\Gamma_{2u}$ order, since we have obtained a strong interaction for this kind of moment in both the $f^2$-$\Gamma_3$ and $f^1$-$\Gamma_8$ models. The general forms of the multipole interactions have been derived in Ref.~\onlinecite{Sakai2003}. For example, another form of the quadrupole interaction is possible for a sc lattice in general. We expect that such components appear when we introduce hopping integrals other than $(ff\sigma)$. Thus, we should note that the applicability of the present results is limited to the cases where the (effective) hopping processes are mainly described by $(ff\sigma)$. \begin{acknowledgments} This work was supported by JSPS KAKENHI Grant Numbers JP15K05191, JP16H04017, and JP16H01079 (J-Physics). \end{acknowledgments}
\section{Introduction} With recent successes in AI, there is a great interest in deploying autonomous agents into our day-to-day lives. In order to cohabit successfully with humans, it is highly important that the AI agent behaves in a way that is aligned with human preferences. Ideally, we want a system that will enable everyday users to specify their preferences over AI system behavior. In the reinforcement learning (RL) literature, the current go-to approach for specifying behavioral preferences is through preference-based reinforcement learning techniques (\cite{christiano2017deep}, \cite{lee2021pebble}) that try to learn the human's preference interactively through trajectory comparisons. These techniques are useful for tacit knowledge tasks. However, it would be highly inefficient to use these techniques in scenarios where the preference can simply be specified in symbolic terms. Another way of specifying behavioral preferences is through modifying rewards, but it can be fairly non-intuitive for a lay user to come up with a reward structure that leads to the preferred behavior \cite{hadfield2017inverse}. In addition, specifying rewards becomes more challenging when the system is operating over an inscrutable high-dimensional state representation (like images). This makes using the recently proposed \textit{neuro-symbolic} framework in \cite{kambhampati2022symbols} more appropriate for tasks where the user's preference can be stated in terms of symbolic concepts. This framework consists of a symbolic interface that enables communication with the user while the agent uses some inscrutable internal representation for the task. In this work, we propose an AI system named \textit{PRESCA} (PREference Specification through Concept Acquisition) that maintains a symbolic interface made of propositional state variables called concepts (\cite{sreedharan2020bridging}) that the user can use to specify their preferences to the agent. If the concept that is relevant to the user's preference is missing from the interface, then \textit{PRESCA} will try to learn this concept online. Once it is learned, \textit{PRESCA} uses it to train the agent to align its behavior with the user's preferences. The concept is also added to the interface to support preference specifications by future users. Thus, the cost of learning the concept gets amortized when future users make use of the concept. The focus of this work is to propose a method that allows us to learn human concepts effectively. A simple way for the system to learn a concept may be to learn a grounding from the concept to system states. One could in theory learn these groundings from a set of positive and negative examples of the concept. Obtaining these examples, however, is a challenging problem as there is no clear way for the user to generate these examples. One method to obtain these examples is for the system to present the user with states as queries and ask them whether the concept is present in each state. A naive strategy to generate these queries could be to randomly sample states from the environment. However, if the positive examples are too sparse in the state space, then this strategy would lead to a large number of queries. \begin{wrapfigure}{R}{0.6\textwidth} \includegraphics[scale=0.30]{Images/PRESCA.png} \caption{Overview of \textit{PRESCA}. (1) The user specifies their preference in terms of some symbolic concept. 
If the concept is not present in the symbolic interface, then the user provides its causal relationship to some known concept. (2) \textit{PRESCA} then generates likely positive and negative examples of the concept and queries the user for their labels. (3) After getting the labels, \textit{PRESCA} learns a classifier for the target concept and (4) incorporates the user's preference into the agent's training. (5) Finally, the concept is added to the interface.} \label{fig:1} \vspace{-7pt} \end{wrapfigure} Our objective is to make the data collection process more feedback-efficient by automatically gathering likely positive and negative examples of the concept and then querying the user for their labels. To this end, we leverage the causal association the target concept has with the concepts that already exist in the symbolic interface. This causal knowledge can be automatically obtained if a symbolic PDDL-like model (\cite{geffner2013concise}) is available (\cite{helmert2004planning}). If such a domain model is not present, then we expect the user of the system to have some understanding of the task dynamics and provide the causal relationship. In figure \ref{fig:1}, we provide the overall flow of the user's interaction with the \textit{PRESCA} system. In the following section, we first introduce the planning domain that we use to illustrate and evaluate our approach. This is followed by a formal description of the environment model and the symbolic interface used by our AI system. We then provide a methodology that efficiently learns a new concept and then uses the learned concept to guide the agent's training. In the evaluation section (section \ref{evaluation}), we show the performance of our approach on a Minecraft domain given increasingly complex causal relations. We follow that up with a discussion that compares our approach to existing AI approaches that can also potentially be used to incorporate a user's preference (including the preference-based RL approaches (\cite{christiano2017deep}, \cite{lee2021pebble}) and the TAMER framework (\cite{knox2009interactively}, \cite{warnell2018deep})). Finally, we discuss all the planned improvements we intend to make to the methodology in section \ref{planned_improvements}. \section{Illustrative example} \begin{wrapfigure}{R}{0.6\textwidth} \includegraphics[scale=0.285]{Images/domain_image.png} \caption{(a) Instance of the Minecraft environment with two possible plans marked with arrows. The user prefers that the agent avoid going into the storage area (indicated by the red arrows). (b) The causal model of the Minecraft environment. } \label{fig:2} \vspace{-7pt} \end{wrapfigure} We will use a version of the 2-D Minecraft-inspired environment \cite{andreas2017modular} (figure \ref{fig:2}(a)) to illustrate the ideas of the paper and evaluate our proposed technique. In this domain, the goal is to drop a ladder at the docker. The set of actions the agent can take includes rotating left or right, moving forward, picking up or dropping an object, a crafting action, and a no-op action. To accomplish the goal, the agent needs to first obtain a ladder. There are two ways to achieve this. One way is for the agent to pick up a stick and a plank, and then use the crafting station to craft a ladder. Another way is to first move into the storage area, then pick up a broken ladder, and then use the crafting station to repair the ladder. Once the agent has the ladder, it can move to the docker and drop it. We show the two possible plans in figure \ref{fig:2}(a). 
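The causal model of figure \ref{fig:2}(b) can be written down compactly. The following sketch is one possible encoding, with illustrative concept names derived from the facts mentioned above (the stick, plank, and crafting steps); it is not the representation used internally by \textit{PRESCA}.
\begin{verbatim}
# One possible encoding of the causal model of figure 2(b): each concept is
# mapped to the alternative sets of concepts that can cause it (a disjunction
# of conjunctions).  Concept names are illustrative.
causal_model = {
    "has_stick": [set()],                       # picked up directly
    "has_plank": [set()],
    "in_storage_area": [set()],
    "has_broken_ladder": [{"in_storage_area"}],
    "has_ladder": [{"has_stick", "has_plank"},  # craft a new ladder, or
                   {"has_broken_ladder"}],      # repair the broken one
    "ladder_at_docker": [{"has_ladder"}],       # goal concept
}

def possible_causes(concept):
    """Known concepts that may precede `concept` in some causal chain."""
    return set().union(*causal_model.get(concept, [set()]))

print(possible_causes("has_ladder"))
\end{verbatim}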
There is also a human observer who wants the agent to solve the task in the way they prefer. This particular human wants the agent to avoid going inside the storage area. Since one way of solving the task involves moving into the storage area, the human must communicate this preference to the agent. In this scenario, the agent is operating on some state encoding that the human cannot interpret. Thus, they cannot specify their preference directly in terms of state features. However, if the human was capable of communicating with the AI system in symbolic terms, they could simply ask the system to avoid any state where the fact \textit{in\_storage\_area} might be true. In this case, \textit{in\_storage\_area} is a propositional fact that is \textit{true} in every state where the agent is inside the storage area. To support such a symbolic specification, the AI system should be able to correctly ground and thereby interpret the potential concepts that the user wants to use. In this case, \textit{PRESCA} learns a classifier that can predict whether in a given state the agent is in the storage area. Learning this classifier requires the user to provide positive and negative examples of the concept \textit{in\_storage\_area}. In this work, we make this data collection process more feedback efficient. To do so, we leverage the precedence relation \textit{in\_storage\_area} has with other concepts that are already known. Figure \ref{fig:2}(b) illustrates the causal relationship between various concepts in the domain. For example, the concept \textit{in\_storage\_area} precedes \textit{has\_broken\_ladder}. This causal link implies that the agent must be inside the storage area to collect the broken ladder. Note that there can also be concepts with multiple possible causes like the \textit{has\_ladder} concept. Now, if any descendant of the concept \textit{in\_storage\_area} is already known, then our proposed method tries to use this causal knowledge to gather likely positive examples of the concept \textit{in\_storage\_area}. \section{Problem Setting} We consider an RL problem in which an agent interacts with an unknown environment (\cite{sutton2018reinforcement}). The environment is modeled as a Markov Decision Process (MDP). An MDP $\mathcal{M}$ can be formally defined as a tuple $\mathcal{M} = \langle S, A, T, R, \gamma, S_o \rangle$, where: $S$ is the set of states in the environment, $A$ is the set of actions that the agent can take, $T$ is the transition function where $T(s, a, s')$ gives the probability that the agent will be in state $s'$ after taking action $a$ in state $s$, $R$ is the reward function where $R(s, a, s')$ gives the reward obtained for the transition $\langle s, a, s' \rangle$, $\gamma$ is the discounting factor, and $S_o$ is the set of all possible initial states. A policy $\pi (a | s)$ gives the probability that the agent will take action $a \in A$ while in the state $s \in S$. The value of a state $s$ given a policy $\pi$, $V_{\pi}(s)$, is the expected cumulative discounted future reward that the agent obtains when following $\pi$ from the state $s$. For an MDP $\mathcal{M}$, the optimal policy is the policy that maximizes the value for every state. Our setting additionally considers the agent to be goal-directed. This means that there is a set of goal states $\mathbb{G}$ and the agent tries to reach one of them.
We assume that the set $\mathbb{G}$ is known and that the reward function $R$ is set up in a way that any optimal policy must reach one of the goal states with probability $>0$. In this work, we are interested in a human-AI interaction setting, where the agent will interact with multiple users over its lifetime. In each interaction, there will be a human-in-the-loop, who wants the agent to achieve the goal subject to their preferences. Thus, we develop the \textit{PRESCA} system, which allows any user to communicate their preference to the agent. Now, the state representation used in the model $\mathcal{M}$ may be inscrutable to a user, i.e., the user cannot directly use it to specify their preference (e.g., an image-based state representation). Therefore, we consider the presence of a symbolic interface which is a set $F_S$ of propositional state variables or concepts that any user would understand. In any given state $s$, the concept $C_{\mathbb{Z}} \in F_S$ may be \textit{true} or \textit{false}. The user can specify two types of trajectory preferences using concepts present in $F_S$: (a) the agent must pass through some state where $C_{\mathbb{Z}}$ is \textit{true} or (b) the agent must avoid states where $C_{\mathbb{Z}}$ is \textit{true}. In the worst case, \textit{PRESCA} starts with the singleton set $F_S = \{C_{\mathbb{G}}\}$ where $C_{\mathbb{G}}$ refers to the goal concept, i.e., $C_{\mathbb{G}}$ is \textit{true} in all states $s \in \mathbb{G}$ and \textit{false} for every other state. In a specific interaction with a user, if their preference cannot be specified in terms of any existing concept in $F_S$, then a new target concept, $C_{\mathbb{T}}$, must be learned. By learning a concept $C_{\mathbb{T}}$, we mean learning some grounding in the state representation used by the agent. In our case, this grounding takes the form of a binary classifier that takes as input a state $s$ and outputs \textit{true} if the concept $C_{\mathbb{T}}$ is \textit{true} in $s$ and \textit{false} otherwise. From now on, we will use the notation $C_{\mathbb{Z}}$ to refer to both the concept and its grounding. Now to learn the classifier for the target concept $C_{\mathbb{T}}$, the system must collect positive and negative examples of the concept from the user. For this, our system presents the user with state queries where they must choose whether the concept is present or absent in the state. Once the classifier $C_{\mathbb{T}}$ has been learned, it is used to train an agent's policy that aligns with the user's preference. Also, the classifier $C_{\mathbb{T}}$ is added to the set $F_S$ to support preference specifications by future users. Note that while we provide the mechanism for learning new concepts, we suspect that most often we won't need to expand the concept set in the symbolic interface. We now precisely define the objective of the \textit{PRESCA} system for a single interaction with a user when some new concept needs to be learned. Given the environment $\mathcal{M}$, a user, a symbolic interface $F_S$, and a target concept $C_{\mathbb{T}}$, the system must: (a) minimize the number of queries made to the user to learn $C_{\mathbb{T}}$, and (b) use $C_{\mathbb{T}}$ to train an agent's policy that achieves the goal while aligning with the user's preference. \section{Method} Our focus in this work is to make the data collection process for learning the target concept $C_{\mathbb{T}}$ more efficient.
For the agent training, we rely on straightforward reward shaping and option learning techniques. We start by describing the semantics of a causal model and stating some simple assumptions about concepts that \textit{PRESCA} uses to gather likely positive and negative examples of $C_{\mathbb{T}}$. We then describe both the data collection and the agent training process in detail. \subsection{Causal model semantics} The \textit{PRESCA} system uses the causal relationship between the target concept $C_{\mathbb{T}}$ and some already known concept $C_{\mathbb{K}} \in F_S$ to gather candidate states that most likely contain $C_{\mathbb{T}}$. The causal relationship is simply a partial causal model of the domain. Intuitively, the causal model for a domain is a directed graph with nodes representing concepts and edges representing some causal association. Figure \ref{fig:2}(b) shows an example causal model for the Minecraft domain. In the causal model, the connections between some concept $C_\mathbb{Z}$ and all its parent concepts reflect an abstract relationship the parent concepts have with $C_\mathbb{Z}$ in the dynamics of the domain. Informally, the relation $C_{\mathbb{T}} \rightarrow C_{\mathbb{K}}$, where $C_{\mathbb{T}}$ is the only parent of $C_{\mathbb{K}}$, dictates that if in a transition $\langle s, a, s'\rangle$ , the concept $C_{\mathbb{K}}$ is \textit{true} in $s'$ while it was \textit{false} in $s$, then $C_{\mathbb{T}}$ must have been true in the state $s$. One can understand such causal relations by relating them to preconditions and effects from symbolic models (like PDDL \cite{geffner2013concise}). In this case, the relation $C_{\mathbb{T}} \rightarrow C_{\mathbb{K}}$ may be due to some abstract symbolic action whose precondition is $C_{\mathbb{T}}$ and one of whose effects is $C_{\mathbb{K}}$. In this work, the causal model is represented using a syntax that is similar to that of structural causal models (\cite{glymour2016causal}). More formally, we define the causal model $\mathcal{M}_{S}$ as a tuple $\langle V_{S}, E_{S}, f_{S} \rangle$ where $V_S$ and $E_S$ correspond to nodes and edges of a directed graph such that each node $v \in V_S$ corresponds to some concept; and $f_{S}$ is the set of structural equations that specify an abstract relationship between a child concept and its parent concepts. Each equation in $f_S$, that relates the parent concepts $C_{\mathbb{X}_1}, C_{\mathbb{X}_2} \dots C_{\mathbb{X}_n}$ to their common child concept $C_{\mathbb{Y}}$, is of the form $C_{\mathbb{Y}} = u(C_{\mathbb{X}_1}, C_{\mathbb{X}_2} \dots C_{\mathbb{X}_n})$ where $u()$ is a Boolean function. We limit $u()$ to be in conjunctive normal form made of positive literals. Now, for any concept $C_{\mathbb{Z}}$, let $C_{\mathbb{Z}}(s)$ denote the predicate that indicates whether or not $C_{\mathbb{Z}}$ is \textit{true} in the state $s$. Given this notation, we now define the grounding of each structural equation in transitions within the domain. \begin{defn} \label{def1} Given a concept $C_{\mathbb{Y}} \in {V}_{S}$, its structural equation $C_{\mathbb{Y}} = u(C_{\mathbb{X}_1}, C_{\mathbb{X}_2} \dots C_{\mathbb{X}_n})$ and a transition $\tau = \langle s, a, s'\rangle$ in $\mathcal{M}$, if $C_{\mathbb{Y}}(s')$ is \textit{true} and $C_{\mathbb{Y}}(s)$ is \textit{false}, then the following equation holds for $\tau$: $C_{\mathbb{Y}}(s') = u(C_{\mathbb{X}_1}(s), C_{\mathbb{X}_2}(s) \dots C_{\mathbb{X}_n}(s))$.
\end{defn} For instance, in the Minecraft domain (figure \ref{fig:2}), the structural equation for \textit{has\_ladder} is of the form $C_{has\_ladder} = C_{has\_broken\_ladder} \lor C_{has\_stick\_and\_plank}$. Thus, if there is a transition where the agent picks up a ladder, then from definition \ref{def1}, we know that the agent either had the broken ladder, or both a stick and a plank, in the original state. \subsection{Concept locality and rarity} \label{assumptions} \textit{PRESCA} also leverages certain simple assumptions about concepts for data collection. In particular, we make a \textit{concept locality} assumption: if a concept $C_{\mathbb{Z}}$ is true in some state $s$, then it would most likely be true in a small neighborhood around $s$. More specifically, if we take the set of all states $S'$ where each state $s' \in S'$ can be reached from $s$ within some small number of actions, then the states in $S'$ will likely have the concept $C_{\mathbb{Z}}$ \textit{true} in them. The intuition behind this assumption is that the agent would usually have to take some specific sequence of actions to make the concept no longer \textit{true}. For instance in our Minecraft domain, if the agent is holding an object then it must take the \textit{drop} action to no longer hold it. Another assumption we make is about \textit{concept rarity}. Specifically, for any concept $C_{\mathbb{Z}}$, the proportion of states in which the concept $C_{\mathbb{Z}}$ is \textit{true} will be very low, i.e., $|S_{C_{\mathbb{Z}}}| / |S| \ll 1$ where $S_{C_{\mathbb{Z}}}$ is the set of all states with $C_{\mathbb{Z}}$ \textit{true}. \raggedbottom \RestyleAlgo{ruled} \SetKwComment{Comment}{/* }{ */} \begin{algorithm} \small \caption{Algorithm to train classifier for target concept, $C_{\mathbb{T}}$}\label{alg:one} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKw{Break}{break} \Input{$\mathcal{M_{\mathbb{T} \rightarrow \mathbb{K}}}$, Environment Env, $\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}} = \langle C_{\mathbb{T}} \rightarrow C_{\mathbb{I}_1} \rightarrow C_{\mathbb{I}_2} \dots C_{\mathbb{I}_n} \rightarrow C_{\mathbb{K}} \rangle$, episode\_length, random\_walk\_length, total\_episodes} \Output{classifier $C_{\mathbb{T}}$} $C_{\mathbb{T}}^{current} \gets C_{\mathbb{I}_n}$; $C_{\mathbb{K}}^{current} \gets C_{\mathbb{K}}$; $N^{+}, N^{-}, \mathcal{U} \gets $ SetBudget\&MinSeed($C_{\mathbb{T}}^{current}$); $Seed_{C_{\mathbb{T}}^{current}} \gets \{\}$ \; \While{\textit{true}}{ \While{$|Seed_{C_{\mathbb{T}}^{current}}| < \mathcal{U}$}{ $s \gets $ Env.InitialState()\; \For{$t \gets 0$ \KwTo episode\_length}{ $a \gets $ UniformActionSelection(); $s' \gets $ Env.ExecuteAction($a$)\; \If{$C_{\mathbb{K}}^{current}(s') \land \neg C_{\mathbb{K}}^{current}(s)$}{ \eIf{$C_{\mathbb{K}}^{current} \neq C_{\mathbb{G}} \lor$ $C_{\mathbb{T}}^{current}$ is only one of multiple possible causes of $C_{\mathbb{K}}^{current}$}{ $C_{\mathbb{T}}^{current} (s) \gets $ Query.User($s$) \; \lIf{$C_{\mathbb{T}}^{current} (s)$}{ $Seed_{C_{\mathbb{T}}^{current}} \gets Seed_{C_{\mathbb{T}}^{current}} \cup s$ } }{ $Seed_{C_{\mathbb{T}}^{current}} \gets Seed_{C_{\mathbb{T}}^{current}} \cup s$ } \Break } $s \gets s'$\; } } $\mathcal{S}_{C_{\mathbb{T}}^{current}}^{+}, \mathcal{S}_{C_{\mathbb{T}}^{current}}^{-} \gets$ Get examples using Algorithm 2 \& 3 \; $\mathcal{S}_{C_{\mathbb{T}}^{current}}^{+} \gets \mathcal{S}_{C_{\mathbb{T}}^{current}}^{+} \cup Seed_{C_{\mathbb{T}}^{current}}$\; $C_{\mathbb{T}}^{current} \gets $
Classifier($\mathcal{S}_{C_{\mathbb{T}}^{current}}^{-}$, $\mathcal{S}_{C_{\mathbb{T}}^{current}}^{+}$ ); \lIf{$C_{\mathbb{T}}^{current} = C_{\mathbb{T}}$}{\Return $C_{\mathbb{T}}^{current}$} $C_{\mathbb{K}}^{current} \gets C_{\mathbb{T}}^{current}$ ; $C_{\mathbb{T}}^{current} \gets$ Predecessor $(C_{\mathbb{T}}^{current}, \mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}})$ \; $N^{+}, N^{-}, \mathcal{U} \gets $ SetBudget\&MinSeed($C_{\mathbb{T}}^{current}$); } \end{algorithm} \begin{minipage}{0.46\textwidth} \begin{algorithm}[H] \small \While{Budget $N^{+}$ not exhausted}{ $s \gets $ SampleState($Seed_{C_{\mathbb{T}}^{current}}$) \; Env.SetState($s$) \; \For{$t \gets 0$ \KwTo random\_walk\_length}{ $a \gets $ UniformActionSelection() ; $s' \gets $ Env.ExecuteAction($a$)\; $C_{\mathbb{T}}^{current} (s') \gets $ Query.User($s'$) \; \lIf{$C_{\mathbb{T}}^{current}(s')$}{ $\mathcal{S}_{C_{\mathbb{T}}^{current}}^{+} \gets \mathcal{S}_{C_{\mathbb{T}}^{current}}^{+} \cup s'$ } } } \Return $\mathcal{S}_{C_{\mathbb{T}}^{current}}^{+}$ \caption{Get positive examples} \end{algorithm} \end{minipage} \hfill \begin{minipage}{0.46\textwidth} \begin{algorithm}[H] \small $S \gets \{ \}$ \; \For{$i \gets 0$ \KwTo total\_episodes}{ $S \gets S \cup \text{Env.RunEpisode(episode\_length)} $; } \While{Budget $N^{-}$ not exhausted}{ $s \gets $ SampleState($S$) \; $C_{\mathbb{T}}^{current} (s) \gets $ Query.User($s$) \; \lIf{$\neg C_{\mathbb{T}}^{current}(s)$}{ $\mathcal{S}_{C_{\mathbb{T}}^{current}}^{-} \gets \mathcal{S}_{C_{\mathbb{T}}^{current}}^{-} \cup s$ } } \Return $\mathcal{S}_{C_{\mathbb{T}}^{current}}^{-}$ \caption{ Get negative examples} \end{algorithm} \end{minipage} \subsection{Data collection} To make the data collection efficient, \textit{PRESCA} gathers candidate states that most likely contain $C_{\mathbb{T}}$. It does so by using the causal relation between the target $C_{\mathbb{T}}$ and some known concept $C_{\mathbb{K}} \in F_S$. This causal relationship can either be given by the user or it can be derived from a symbolic domain model using \cite{helmert2004planning} if the domain model is available (as is the case in many \textit{neuro-symbolic} techniques like \cite{illanes2020symbolic}). The causal relation is a partial specification of the causal model $\mathcal{M}_S$ of the domain, denoted as $\mathcal{M}_{\mathbb{T} \rightarrow \mathbb{K}}$ where $\mathcal{M}_{\mathbb{T} \rightarrow \mathbb{K}} = \langle V_{\mathbb{T} \rightarrow \mathbb{K}}, E_{\mathbb{T} \rightarrow \mathbb{K}}, f_{\mathbb{T} \rightarrow \mathbb{K}}\rangle$ such that $\langle V_{\mathbb{T} \rightarrow \mathbb{K}}, E_{\mathbb{T} \rightarrow \mathbb{K}} \rangle$ is a subgraph of $\langle V_S, E_S \rangle$ and $f_{\mathbb{T} \rightarrow \mathbb{K}} \subseteq f_S$. The causal relation $\mathcal{M}_{\mathbb{T} \rightarrow \mathbb{K}}$ must necessarily contain a path from $C_{\mathbb{T}}$ to some concept $C_{\mathbb{K}}$ where $C_{\mathbb{K}}$ is known. We denote this path as $\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}}$. Additionally, it must also contain the structural equation of the node $C_\mathbb{K}$ and each of the intermediate nodes on the path $\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}}$. We first describe our data collection process when the length of $\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}}$, denoted $|\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}}|$, is $1$, i.e., when an immediate child concept is known.
We then describe the technique for the case where $|\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}}| > 1$, i.e., when some non-child descendant of the target concept is known. \paragraph{Data collection when $|\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}} | = 1$} Let $N^{+}$ be the budget for the number of queries our system can make to the user to collect positive examples. We first let the agent explore the environment with a policy that selects actions uniformly until it encounters a transition $\langle s, a, s' \rangle$ with the concept $C_{\mathbb{K}}$ \textit{false} in $s$ and \textit{true} in $s'$. Now, let $u$ be the Boolean function in the structural equation for $C_{\mathbb{K}}$. Then, in the trivial case where $C_{\mathbb{K}} = C_{\mathbb{G}}$ and $C_{\mathbb{T}}$ is one of the disjunctive clauses in the CNF form of $u$, we can directly infer that $s$ contains $C_{\mathbb{T}}$. Let us look more closely at the case where one of these conditions does not hold. The first condition is violated when the concept $C_{\mathbb{K}}$ is grounded as a classifier that was previously learned and thus our predictions about $C_{\mathbb{K}}$ could be wrong (although this is less likely as \textit{PRESCA} tries to learn highly accurate classifiers). The second condition is violated if there are multiple possible causes of $C_{\mathbb{K}}$ and $C_{\mathbb{T}}$ being \textit{true} is just one of them. For example, $u$ could be $C_{\mathbb{T}} \lor C_{\mathbb{T'}}$. In this case, even if our prediction about $C_{\mathbb{K}}$ is correct, we can only infer that either $C_{\mathbb{T}}$ or $C_{\mathbb{T'}}$ is true in $s$. In both of these cases, we confirm with the user whether $s$ indeed contains $C_{\mathbb{T}}$. If we find that $s$ contains $C_{\mathbb{T}}$, then we add the state $s$ to the set $Seed_{C_{\mathbb{T}}}$. We refer to the positive examples collected directly using some causal relation as \textit{seed} examples. Once the query is complete, the agent restarts the exploration from some uniformly sampled initial state. This process continues until $|Seed_{C_{\mathbb{T}}}| = \mathcal{U}$. Once the set of \textit{seed} examples has been collected, the interaction moves on to the next stage in which we use the \textit{seed} examples to expand our collection of likely positive examples. For this, we leverage \textit{concept locality} (section \ref{assumptions}) by simply performing short random walks from randomly sampled states in $Seed_{C_{\mathbb{T}}}$ and querying the user for the labels of the states along each trajectory. This is repeated until the remaining budget $N^{+}$ is exhausted. For collecting likely negative examples of $C_{\mathbb{T}}$, we leverage \textit{concept rarity} (section \ref{assumptions}) by sampling states from the state space $S$. More specifically, we first collect a large set of states $\hat{\mathcal{S}}$ by letting the agent randomly explore the environment with some maximum episode length. We then randomly sample states from $\hat{\mathcal{S}}$ and query the user for their labels until the budget $N^{-}$ for negative examples is exhausted. Once the data collection is done, we learn a binary classifier for the concept $C_{\mathbb{T}}$.
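To make the procedure above concrete, the following is a minimal sketch of the $|\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}}| = 1$ collection loop. It assumes a simplified environment interface (\texttt{reset}, \texttt{step}, \texttt{set\_state}, \texttt{run\_episode}, \texttt{actions}) in which \texttt{step} returns the next state, an oracle \texttt{query\_user} standing in for the human, and a grounded classifier \texttt{C\_K} for the known concept; all of these names are illustrative and not part of our actual implementation.
\begin{verbatim}
import random

def collect_examples(env, C_K, query_user, n_seed, budget_pos, budget_neg,
                     episode_length=100, walk_length=5, total_episodes=50):
    """Illustrative sketch of the |P| = 1 data-collection loop."""
    seed, positives, negatives = [], [], []
    # Phase 1: find transitions that newly make C_K true and confirm the
    # preceding state with the user (the query can be skipped in the trivial
    # case where C_K is the goal concept and C_T is its only possible cause).
    while len(seed) < n_seed:
        s = env.reset()
        for _ in range(episode_length):
            s_next = env.step(random.choice(env.actions))
            if C_K(s_next) and not C_K(s):
                if query_user(s):
                    seed.append(s)
                break
            s = s_next
    # Phase 2: expand positives with short random walks from seed states
    # (concept locality), querying the user for each visited state.
    while budget_pos > 0:
        env.set_state(random.choice(seed))
        for _ in range(walk_length):
            if budget_pos == 0:
                break
            s_next = env.step(random.choice(env.actions))
            if query_user(s_next):
                positives.append(s_next)
            budget_pos -= 1
    # Phase 3: sample states from random episodes as likely negatives
    # (concept rarity), again confirming each label with the user.
    pool = [s for _ in range(total_episodes)
            for s in env.run_episode(episode_length)]
    while budget_neg > 0:
        s = random.choice(pool)
        if not query_user(s):
            negatives.append(s)
        budget_neg -= 1
    return seed + positives, negatives
\end{verbatim}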
\paragraph{Data collection when $|\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}} | > 1$} Let the path from the concept $C_{\mathbb{T}}$ to $C_{\mathbb{K}}$ in $\mathcal{M}_{\mathbb{T} \rightarrow \mathbb{K}}$ be of the form $C_{\mathbb{T}} \rightarrow C_{\mathbb{I}_1} \rightarrow C_{\mathbb{I}_2} \dots C_{\mathbb{I}_n} \rightarrow C_{\mathbb{K}}$. In order to learn a classifier for $C_{\mathbb{T}}$, we need the classifier for the concept $C_{\mathbb{I}_1}$. Once it is known, the problem effectively becomes the same as the case $|\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}} | = 1$ that was described above. However, in order to learn the classifier for $C_{\mathbb{I}_1}$, we need the classifier for $C_{\mathbb{I}_2}$ and so on. To operationalize this, we use a simple approach where we learn the classifiers for $C_{\mathbb{I}_n}$, $C_{\mathbb{I}_{n-1}}$, $\dots$ $C_{\mathbb{I}_1}$, $C_{\mathbb{T}}$ in sequence. Each sub-problem of learning a concept $C_{\mathbb{I}_i}$ given $C_{\mathbb{I}_i} \rightarrow C_{\mathbb{I}_{i+1}}$ is treated as the case $|\mathcal{P_{\mathbb{T} \rightarrow \mathbb{K}}} | = 1$ with assigned budgets ($N^{+}$ and $N^{-}$) and the minimum number of seed examples $\mathcal{U}$. Note that we only require the intermediate concepts' classifiers to be accurate enough to allow us to get some minimum number of seed examples of $C_{\mathbb{T}}$. This means that, potentially, we do not need highly accurate classifiers for the intermediate concepts and we can further reduce the amount of feedback taken from the user. In the current setup, we leverage this by keeping $N^{+}$, $N^{-}$, and $\mathcal{U}$ for learning each of the intermediate concepts lower than those for the target concept. We provide the complete algorithm for learning the target concept $C_{\mathbb{T}}$ in algorithm~\ref{alg:one}. In our implementation, we also maintain a set of states that have already been queried. If a state has already been queried, then we reuse its label and skip querying the user. \subsection{Using the learned concept during training} Once we have the classifier for $C_{\mathbb{T}}$, we want to use it during the agent's training to incorporate the user's preference. As stated earlier, we support preferences of two types: the agent must avoid states where $C_{\mathbb{T}}$ is \textit{true} or the agent must visit a state where $C_{\mathbb{T}}$ is \textit{true}. For the former case, we simply modify the MDP $\mathcal{M}$'s reward function $R$ to $R'$ such that the agent receives a high negative reward $r_{\mathbb{T}}$ when visiting any state where $C_{\mathbb{T}}$ is \textit{true} (a minimal wrapper sketch is given below). We rely on $r_{\mathbb{T}}$ having a large enough negative value such that any optimal policy would achieve the goal while avoiding states with $C_{\mathbb{T}}$ \textit{true}. For the latter case, we cannot simply provide a positive reward for visiting a state where $C_{\mathbb{T}}$ is \textit{true}. This is because if it is possible to make $C_{\mathbb{T}}$ \textit{false}, then the agent would simply learn to cycle between a state where $C_{\mathbb{T}}$ is \textit{true} and a state where $C_{\mathbb{T}}$ is \textit{false}. Instead, we propose an option learning based approach.
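For the avoidance preference, the reward modification described above amounts to a thin wrapper around the environment that penalizes states flagged by the learned classifier. The following is a minimal sketch, assuming a gym-style environment whose \texttt{step} returns the usual $(s', r, \textit{done}, \textit{info})$ tuple; the wrapper class and the penalty value are illustrative and not our exact implementation.
\begin{verbatim}
class AvoidConceptWrapper:
    """Penalizes visits to states where the learned concept C_T holds."""

    def __init__(self, env, C_T, penalty=-50.0):
        self.env = env          # underlying task environment
        self.C_T = C_T          # learned binary classifier for the concept
        self.penalty = penalty  # stands in for the large negative reward r_T

    def reset(self):
        return self.env.reset()

    def step(self, action):
        s_next, reward, done, info = self.env.step(action)
        if self.C_T(s_next):        # classifier flags the undesired state
            reward += self.penalty  # shaped reward R'(s, a, s')
        return s_next, reward, done, info
\end{verbatim}
The visit-type preference is instead handled with the option-based scheme described next, since a naive positive reward would let the agent repeatedly toggle the concept.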
An option (\cite{sutton1999between}) is a temporally extended action defined as a tuple $\mathcal{O} = \langle I, \pi, \beta\rangle$ where $I$ is the set of states in which the agent can begin executing the option, $\pi$ is the policy that the agent follows when executing $\mathcal{O}$, and $\beta(s)$ gives the probability of terminating the option $\mathcal{O}$ when the agent reaches state $s$. In our case, we take $\beta$ to be a set of states which, when reached by the agent, deterministically terminates the option. To train the agent's policy, we treat $C_{\mathbb{T}}$ and $C_{\mathbb{G}}$ as subgoals for the agent to complete the task. We assume a simpler setting where these subgoals are serializable \cite{korf1987planning}. Given that, we learn the policy for an option $\mathcal{O}_{C_{\mathbb{T}}} = \langle S_o, \pi, S_{C_{\mathbb{T}}}\rangle$ that drives the agent from any initial state $s \in S_o$ to some state $s \in S_{C_{\mathbb{T}}}$, where $S_{C_{\mathbb{T}}}$ is the set of states where $C_{\mathbb{T}}$ is \textit{true}, and the policy for another option $\mathcal{O}_{C_{\mathbb{G}}} = \langle S_{C_{\mathbb{T}}}, \pi, \mathbb{G}\rangle$ that drives the agent from any state in $S_{C_{\mathbb{T}}}$ to one of the goal states. Then, a meta policy chooses the option $\mathcal{O}_{C_{\mathbb{T}}}$ when $s \in S_o$ and switches to the option $\mathcal{O}_{C_{\mathbb{G}}}$ when $s \in S_{C_{\mathbb{T}}}$. \section{Evaluation} \label{evaluation} \begin{wraptable}{R}{0.6\textwidth} \scriptsize \begin{center} \begin{tabular}{ |p{0.90cm}|p{0.75cm}|p{0.75cm}|p{0.90cm}|p{0.75cm}|p{0.75cm}| } \hline Technique & Chain length & Goal achieved & Preference aligned & Average steps & Queries \\ \hline \multirow{3}{4em}{PRESCA} & 1 & 89\% & 100\% & 27.57 & 2k \\ & 2 & 79\% & 100\% & 26.87 & 3.5k \\ & 3 & 88\% & 100\% & 31.03 & 5k\\ \hline Baseline & - & 88\% & 0\% & 28.30 & -\\ \hline \end{tabular} \caption{Results using \textit{PRESCA} and the baseline approach} \label{tab:table1} \end{center} \vspace{-7pt} \end{wraptable} We validate \textit{PRESCA} by using it on the Minecraft domain (figure \ref{fig:2}). We have $k=10$ possible initial states for the domain in which the positions of all objects (except the agent) are randomly selected. These states correspond to $10$ unique maps for the domain. Recall that the goal of the agent is to drop a ladder at the docker. The user's preference is that the agent must avoid going into the storage area. Thus, the target concept for \textit{PRESCA} is \textit{in\_storage\_area}. The rewards for the domain are set up in a way that the optimal policy will violate the user's preference. Thus, if \textit{PRESCA} is able to learn a policy that completes the task while following the user's preference, then we can correctly attribute this result to \textit{PRESCA}. We provide detailed information about the domain in appendix \ref{appendix1}. We consider the case where \textit{PRESCA} has been given the causal model shown in figure \ref{fig:2}(b). We show the performance of our approach under multiple settings wherein we vary the number of intermediate concepts that are present between the target concept \textit{in\_storage\_area} and the known concept. Note that if the concept \textit{has\_ladder} or \textit{has\_broken\_ladder} is known, then this represents a scenario where that concept's classifier was learned in a previous interaction with some user.
Accordingly, we learn a classifier for these concepts using algorithm~\ref{alg:one}, given that the goal concept (\textit{ladder\_at\_docker}) is known. We now describe the experimental setting in detail. In each setting, we set the known concept as either \textit{has\_broken\_ladder}, \textit{has\_ladder}, or \textit{ladder\_at\_docker}, which corresponds to the number of intermediate concepts being $0$, $1$, and $2$ respectively. We use a simulated human as the user, which accurately answers queries that ask whether a concept is present in a given state. In each setting, we use \textit{PRESCA} to learn the target concept \textit{in\_storage\_area} given the known concept and subsequently train an RL agent to align with the user's preference. For the concept classifiers, we train a convolutional neural network on an RGB image representation of the state. For training the agent, we use the PPO algorithm \cite{schulman2017proximal}, which produces a stochastic policy. For evaluating \textit{PRESCA}, we run the trained agent on each map for $10$ trials, and report the percentage of times the agent achieved the goal, the percentage of times the agent achieved the goal while aligning with the user's preference, and the average number of steps taken to achieve the goal. We provide further details on the state representation and network architectures used in appendix \ref{appendix2}. For all applications of algorithm \ref{alg:one}, we keep the budget for positive examples ($N^{+}$), the budget for negative examples ($N^{-}$), and the minimum number of seed examples ($\mathcal{U}$) at $N^{+} = 375$, $N^{-} = 1125$, $\mathcal{U} = 40$ for the intermediate concepts and $N^{+} = 500$, $N^{-} = 1500$, $\mathcal{U} = 40$ for the target concept. In table \ref{tab:table1}, we report the results for \textit{PRESCA} in each setting, and we also show the results for a baseline RL agent that was trained using the domain's original rewards. As we can see from the table, agents trained using \textit{PRESCA} complete the task the majority of the time and always align with the user's preference, while the baseline always violates it. \section{Related work} In this work, we develop a system that allows users to specify their preferences in symbolic terms to an RL agent that uses some high-dimensional inscrutable state representation to learn the task. We see this as an instantiation of the \textit{neuro-symbolic} framework proposed in \cite{kambhampati2022symbols}. Many recent \textit{neuro-symbolic} works have shown the usefulness of symbolic interfaces. In particular, works presented in \cite{guan2022leveraging}, \cite{illanes2020symbolic}, and \cite{yang2018peorl} make use of symbolic knowledge to train the agent efficiently. The communication from the agent to the human has also been explored in \cite{sreedharan2020bridging}, which produces explanations for an RL agent's decisions in symbolic terms. As described in \cite{kambhampati2022symbols}, one of the challenges of using a symbolic interface is that of expanding the preexisting symbolic vocabulary. In \textit{PRESCA}, we provide a mechanism that enables vocabulary expansion with minimal effort from the users. In terms of the overarching objective of \textit{PRESCA}, which is to allow for preference specification to an RL agent, a closely related work is that of taskable RL presented in \cite{illanes2020symbolic}, which allows goals to be specified to the RL agent in symbolic terms.
However, that work assumes that all the relevant concepts are already available with the system. In contrast, \textit{PRESCA} relaxes this assumption, and allows learning of new concepts online by collecting positive and negative examples of the concept from the user. There are notable human-in-the-loop approaches that can also be used to specify and incorporate the user's preference. These include the preference-based reinforcement learning approaches (\cite{christiano2017deep}, \cite{lee2021pebble}) that query the user about their preference between two trajectory segments. Then they learn a reward function that would produce the same ranking between trajectories as the user. This reward function is used to train the agent in solving the task. Another popular approach is the TAMER framework, which involves the user giving ratings to actions taken by the agent (\cite{knox2009interactively}, \cite{warnell2018deep}). Then the agent learns the human's rating function and greedily chooses actions that maximize its value at each step. These techniques are useful for tacit knowledge tasks where the user's preference cannot be stated in terms of concepts. However, if the user's preference can be stated in symbolic terms, then using these approaches will be unnecessarily cumbersome for the user. Another limitation with these approaches, unlike \textit{PRESCA}, is that the user cannot provide feedback that is specific to their preference and must provide feedback for the entire task. We believe that this would require much more feedback from the user than it would for \textit{PRESCA} to learn the relevant concept. Also note that \textit{PRESCA} adds the learned concept to the symbolic interface, thus amortizing the effort of learning it. Extensions to the TAMER framework have also been proposed that learn from the environment reward signal while using the human feedback for accelerated learning (\cite{knox2010combining}, \cite{arakawa2018dqn}). However, these frameworks learn a policy that maximizes the environment's reward and therefore might not align with the user's preference. When learning concepts, \textit{PRESCA} tries to reduce the feedback complexity. There are active learning works in the machine learning literature \cite{settles2009active} that also deal with reducing the number of labels needed to learn a classifier. However, active learning suffers when there is class imbalance (skew) \cite{kazerouni2020active} in the data, which can often be severe in RL settings when the concepts are rare and the state space is huge. Moreover, \textit{PRESCA} can easily be complemented with active learning using simple strategies, such as refining the candidate queries produced by \textit{PRESCA} using informativeness scores. \section{Conclusion and future work} \label{planned_improvements} This paper proposes the \textit{PRESCA} system through which everyday users can specify their preference to an AI agent. \textit{PRESCA} facilitates this communication by maintaining a symbolic interface made up of user understandable concepts. It also supports learning new concepts if needed. We show how \textit{PRESCA} leverages causal relationships between concepts to make the learning process more feedback efficient. While we presented a preliminary methodology to operationalize \textit{PRESCA}, there are several improvements that we plan to make. First, we want to leverage the fact that the system doesn't require learning highly accurate classifiers for the intermediate concepts on the causal chain.
To achieve this, we intend to develop a meta-controller that will decide between investing budget in learning some intermediate concept versus using some already learned classifier for getting seed examples of the target concept. Second, for the case when there are multiple possible causes of the known concept, we intend to use an iterative clustering-based approach that clusters the states appearing prior to the known concept in order to identify the cluster corresponding to the target concept. Finally, we plan to experiment with taking richer feedback from the user in terms of image patches relevant to the concept. This can be used to find likely negative examples of a concept, which will allow us to relax the assumption about \textit{concept rarity}. \bibliographystyle{unsrt}
\section{Introduction} As the only elementary scalar in the Standard Model (SM), the Higgs boson presents a unique opportunity as a window to physics beyond the SM. The operator $H^\dagger H$ is the lowest dimensional operator which is both a gauge and Lorentz singlet. As such, it occurs time and again as the means by which physics uncharged under the SM gauge symmetries communicates with the Standard Model. In particular, it is an effective mechanism by which scalar dark matter (DM) can talk to ordinary matter \cite{Burgess:2000yq}, as is required if we wish to understand its abundance in the Universe today as the result of thermal processes acting in a standard cosmological history. In the present work, we focus on the case in which the dark matter is a spin one vector boson. At first glance, it would appear that this case (much like scalar DM) offers a renormalizable connection between the dark matter and the Higgs \cite{Djouadi:2011aa,Lebedev:2011iq}, \begin{equation} \mathcal{L} \supset \lambda ~H^\dagger H ~V_\mu V^\mu~, \label{eq:naive} \end{equation} where $V_\mu$ is a massive vector field which plays the role of dark matter and $\lambda$ is a dimensionless coupling. But this form, while invariant under the SM gauge symmetries, is misleading. Just like the SM $W$ and $Z$ bosons, a well-behaved UV description of $V$ requires that it be associated with a gauge symmetry (the simplest construction of which would be an Abelian U(1)$^\prime$, though one could also consider non-Abelian theories), spontaneously broken to give $V$ a mass. The term in Eq.~(\ref{eq:naive}) violates the U(1)$^\prime$, and must be engineered via its spontaneous breaking. One tempting avenue would be to charge the Higgs itself under U(1)$^\prime$. In that case the Higgs kinetic term $(D_\mu H)^\dagger (D^\mu H)$ contains Eq.~(\ref{eq:naive}), and the mass of $V$ will arise as part of the vacuum expectation value (VEV) of $H$, naturally connecting the scale of the $V$ mass to the electroweak scale. However, this construction contains other terms which mix $V$ with the SM $Z$ boson, with the result that $V$ will inevitably end up unstable and contribute unacceptably to precision electroweak measurements unless it is very light (implying that it is very weakly coupled). This regime, though worth pursuing, is not very interesting for particle physics at the weak scale, and not very amenable to exploration through Higgs measurements at the LHC. The situation is very different when the $V$ mass is the result of a VEV living in a different scalar particle $\Phi$ which is a SM gauge singlet. In that case, there is no dangerous mixing with the SM $Z$ boson, and the gauge coupling can be relatively large, \begin{eqnarray} {\cal L} &~ \supset ~ & -\frac{1}{4} V_{\mu \nu} V^{\mu \nu} + \left( D_\mu \Phi \right)^\dagger \left( D^\mu \Phi \right) - V (\Phi) + \lambda_P ~ |H|^2 |\Phi|^2~, \label{eq:module1} \end{eqnarray} where $D_\mu \Phi \equiv \partial_\mu \Phi - g Q_\Phi V_\mu \Phi$ is the usual covariant derivative for a particle of charge $Q_\Phi$ and $V(\Phi)$ is a U(1)$^\prime$-invariant potential designed to induce a VEV $\langle \Phi \rangle = v_\phi$, producing a mass for $V$, \begin{eqnarray} m_V^2 & = & g^2Q_\Phi^2~ v_\phi^2~. \label{eq:Vmass} \end{eqnarray} We have also included a scalar Higgs portal coupling $\lambda_P$, which leads to tree-level mixing between the SM Higgs boson and the Higgs mode of $\Phi$, effectively implementing the Higgs portal.
As a construction implementing the Higgs portal, it is well motivated and has been extensively explored in the literature\footnote{It also provides a mechanism to stabilize the Higgs potential \cite{Duch:2015jta} and/or generate a first order electroweak phase transition \cite{Chao:2014ina}.} \cite{Hambye:2008bq,Farzan:2012hh,Baek:2012se,Baek:2013qwa,Baek:2014jga,Baek:2014goa,Ko:2014gha,Gross:2015cwa,DiChiara:2015bua,Chen:2015dea,Kim:2015hda}. However, it does not represent the {\em only} possible UV completion. In this work, we explore an alternative completion which realizes the Higgs portal as a consequence of additional heavy fermions which are charged under both U(1)$^\prime$ and the SM gauge symmetries. At one loop, these fermions mediate an interaction between the Higgs and the DM somewhat in analogy with the effective Higgs-gluon vertex induced by the top quark in the SM. This {\em radiative} UV completion leads to different phenomenology and singles out different interesting regions of parameter space. This article is organized as follows. In Sec.~\ref{model}, we discuss a simplified picture to illustrate the most important physics behind this concept, followed by the full matter content of the UV theory. In Sec.~\ref{results}, we examine the phenomenology in light of experimental probes, such as direct detection, the invisible Higgs width, and the relic abundance. We first focus on the case where the simplified picture is valid, both with and without the mixing generated by a scalar Higgs portal. We then examine the effect of the full radiative portion of the UV theory. We reserve Sec.~\ref{conclusion} for conclusions and summary. \section{Radiative Higgs Portal for Vector Dark Matter} \label{model} \subsection{Particle Content and Structure} A radiative model often has multiple paths to the same low energy physics, since the mediating particles are not themselves involved in the initial and final states. Starting with the basic module of Eq.~(\ref{eq:module1}), we aim for a construction which adds fermions mediating an interaction of the form~(\ref{eq:naive}) such that: \begin{itemize} \item the vector particle $V$ remains stable at the radiative level, which in particular requires that it does not kinetically mix with the SM electroweak interaction; \item the full gauge structure SU(3)$_C \times$ SU(2)$_W \times$ U(1)$_Y \times$ U(1)$^\prime$ remains free from gauge anomalies; \item there are no large contributions to the SM Higgs coupling to gluons or photons in contradiction with LHC measurements \cite{Flechl:2015foa}. \end{itemize} The first of these is the most subtle. Generically, communication between the SM Higgs and $V$ requires that the mediator fermions be charged under both U(1)$^\prime$ and the Standard Model, which typically will induce processes involving an odd number of $V$'s, resulting in their decay. The simplest example of such a process is the kinetic mixing between $V$ and hypercharge. Such dangerous processes can be forbidden by a charge-conjugation symmetry, under which $V$ is odd. In analogy with Furry's theorem of QED \cite{Furry:1937}, this symmetry forbids processes involving an odd number of $V$'s at energies below the masses of the mediator fermions.
\begin{table} \centering \caption{Charge assignments for fermions $\psi$, $\chi$, and $n$ and complex scalar $\Phi$.} \begin{tabular}{cccc} ~~Field~~~ & ~~~(SU(2)$_W$, U(1)$_Y$, U(1)$^\prime$)~~~~~~~~~~~~~ & ~~Field~~~ & ~~~(SU(2)$_W$, U(1)$_Y$, U(1)$^\prime$) \\ \hline \hline $\psi_{1\alpha}$ & (2, \nicefrac{1}{2}, ~1) & $\psi_{2\alpha}$ & (2, \nicefrac{1}{2}, -1) \\ $\chi_{1\alpha}$ & (2, \nicefrac{-1}{2}, -1) & $\chi_{2\alpha}$ & (2, \nicefrac{-1}{2}, ~1) \\ $n_{1\alpha}$ & (1, ~0, ~-1) & $n_{2\alpha}$ & (1, ~0, ~~1) \\ \hline $\Phi$~ & (1, ~0, ~~$Q_\Phi$) \\ \end{tabular} \label{tab:NPtran} \end{table} Cancelling gauge anomalies further suggests that the additional fermions appear in vector-like pairs under both the SM and U(1)$^\prime$ gauge symmetries, whereas renormalizable coupling to the Higgs requires fields in SU(2)$_W$ representations of size $n$ and $n+1$ (with hypercharges differing by $1/2$). A minimal set of particles satisfying these conditions is shown in Table~\ref{tab:NPtran}, consisting of four SU(2)$_W$ doublets and two singlets. (Different) pairs of the doublets are vector-like under both U(1)$_Y$ and U(1)$^\prime$, cancelling gauge anomalies, and a U(1)$^\prime$ charge conjugation is implemented by $f_1 \! \leftrightarrow \! f_2$ (where $f = \psi, \chi, n$). We have left the U(1)$^\prime$ charge of $\Phi$ as a free non-zero parameter which controls the dark matter mass as per Eq.~(\ref{eq:Vmass}). Choosing $Q_\Phi=\pm1$ would allow the $\Phi$ VEV to mix the SM lepton doublets with the new fermions, which would be strongly constrained by precision measurements and ruin the U(1)$^\prime$ charge conjugation symmetry. Choosing $Q_\Phi=\pm2$ would allow for Yukawa interactions of $\Phi$ with pairs of the new fermions, which would complicate the analysis of their mass eigenstates. We will restrict ourselves to other values for $Q_\Phi$, which avoid these features and serve simply to adjust the mass of $V$. It is worth pointing out that this implies that the lightest of the fermionic states is also stable, and will be present in the Universe to some degree as a second component of dark matter. However, provided its mass is much larger than $m_V$, fermion anti-fermion pairs will annihilate efficiently into weak bosons and $V$'s, leaving it as a negligible fraction of the dark matter. In 2-component Weyl notation, the Lagrangian contains mass terms and Yukawa interactions for the new fermions, \begin{equation} \begin{aligned} \mathcal L &~ \supset -m ~ \epsilon^{ab} \left( \psi_{1a} \chi_{1b} + \psi_{2a} \chi_{2b} \right) - m_n ~ n_1 n_2\\ &- y_{\psi}~\epsilon^{ab} \left( \psi_{1a} H_b n_1 + \psi_{2a} H_b n_2 \right) - y_{\chi} \left( \chi_{1} H^* n_2 + \chi_{2} H^* n_1 \right) + h.c. \end{aligned} \end{equation} where $a$ and $b$ are SU(2)$_W$ indices, the SM Higgs $H$ is defined to transform as a (2,~$\nicefrac{-1}{2}$,~0), and spin indices have been suppressed. The U(1)$^\prime$ charge conjugation symmetry, $f_1 \! \leftrightarrow \! f_2$, is manifest. After electroweak symmetry-breaking, the mass terms can be written as, \begin{eqnarray} \label{eq:mass} \mathcal L_m &= - N^T M_n N' - E^T M_e E' + h.c.
\end{eqnarray} where \begin{eqnarray} N &= \begin{bmatrix} \psi_{1n}\\ \chi_{2n}\\ n_2 \end{bmatrix}, ~~~~~N' = \begin{bmatrix} \psi_{2n}\\ \chi_{1n}\\ n_1 \end{bmatrix}, ~~~~~E = \begin{bmatrix} \psi_{1e}\\ \chi_{2e}\\ \end{bmatrix}, ~~~~~E' = \begin{bmatrix} \chi_{1e}\\ \psi_{2e}\\ \end{bmatrix}, \end{eqnarray} assemble collections of the electrically neutral ($N$ and $N^\prime$) and charged ($E$ and $E^\prime$) components of the fermions, and the mass matrices are given by, \begin{eqnarray} M_n = \begin{bmatrix} 0 & -m & -y_{\psi}v/\sqrt{2} \\ -m & 0 & y_{\chi}v/\sqrt{2} \\ -y_{\psi}v/\sqrt{2} & y_{\chi}v/\sqrt{2} & m_n \end{bmatrix}, ~~~~~ M_e = \begin{bmatrix} m & 0 \\ 0 & m \end{bmatrix}. \end{eqnarray} In the mass basis, there are three electrically neutral and two charged Dirac fermions, all of which interact with the dark matter $V$ diagonally, since the states that mix all carry the same $U(1)'$ charge. Their coupling to the SM Higgs will involve the mixing matrices which transform from the gauge to the mass basis. Note that by construction the electrically charged fermions receive no contributions from $\langle H \rangle$, implying that they do not interact with the Higgs boson and lead to no one-loop correction to its effective coupling to photons. Our choice to arrange $N$ such that they also receive no contributions from $\Phi$ implies that the fermions do not renormalize the usual Higgs portal coupling $\lambda_P$ of Eq.~(\ref{eq:module1}) at one-loop (starting at two loops, there are contributions mediated by a mixture of the fermions and $V$ itself). In order to better extract the features of the radiative model, we self-consistently assume that $\lambda_P$ is small enough to be subdominant in the majority of the remainder of this work. \subsection{$\sigma_{\rm SI}$ and Higgs Invisible Width} \begin{figure} \centering \includegraphics[width=0.35\textwidth]{loop} \caption{Representative triangle diagram contributing to the Higgs--dark matter interaction.} \label{fig:loop} \end{figure} Both the direct detection cross-section and the Higgs invisible decay width result from triangle diagrams (see Fig.~\ref{fig:loop}). Integrating out the fermion $\psi$ running in the loop, the $h-V-V$ interaction can be encoded by two form factors: \begin{equation} - \left( \frac{1}{4} A(p^2) ~h ~V^{\mu\nu}V_{\mu\nu} + \frac{1}{2} B(p^2)~ h ~V^{\mu}V_{\mu} \right) \label{eq:eftloop} \end{equation} with coefficients $A$ and $B$ which are (in the on-shell DM limit, $k_1^2 = k_2^2 = m_V^2$) functions of the fermion masses and mixings, $m_V$, and the momentum through the Higgs line, $p^2$. Reasonably compact analytic expressions for $A$ and $B$ are derived in Appendix~\ref{app:tri}. We observe that $B(p^2)\rightarrow 0$ in the limit $m_V \rightarrow 0$ (i.e.\ when the $U(1)^\prime$ symmetry is restored), as is required by gauge invariance, see Appendix~\ref{app:tri}. In terms of $A$ and $B$, the cross section for non-relativistic scattering of $V$ with a nucleon $n$ is given by, \begin{eqnarray} \sigma_{\rm SI} &= &\frac{1}{4\pi m_h^4} \left(\frac{f_n}{v}\right)^2 \left(\frac{m_n^2}{m_n+m_V}\right)^2 |B(0) - A(0) ~m_V^2|^2 \label{eq:loopsigma} \end{eqnarray} where the momentum transfer through the Higgs is approximated as $p^2 \approx 0$, \begin{eqnarray} f_n = \sum_{q=u,d,s} f_{Tq}^{(n)} + \frac{2}{9} f_{TG}^{(n)}, \end{eqnarray} and we use the hadronic matrix elements $f_{Tq}$, from DarkSUSY \cite{Gondolo:2004sc}. 
Because of the tiny up and down Yukawa couplings, scattering mediated by a Higgs is to good approximation iso-symmetric. The same three point vertex function also describes the invisible decay width of the Higgs boson, \begin{eqnarray} \Gamma(h \rightarrow V V) &=& \frac{1}{64\pi m_h} \sqrt{1-4\frac{m_V^2}{m_h^2}} \left[ \left| A(m_h^2) \right|^2 m_h^4 \left(1 - 4\frac{m_V^2}{m_h^2} + 6\frac{m_V^4}{m_h^4}\right) \right. \label{eq:loopwidth}\\ & & \left. + 6 \operatorname{Re}\left( A^*(m_h^2) B(m_h^2) \right) m_h^2 \left(1-2\frac{m_V^2}{m_h^2}\right) + \frac{1}{2} \left| B(m_h^2) \right|^2 \frac{m_h^4}{m_V^4} \left(1 - 4 \frac{m_V^2}{m_h^2} + 12 \frac{m_V^4}{m_h^4} \right)\right] \nonumber \end{eqnarray} where the Higgs is on-shell, $p^2 = m_h^2$. Note that because for small $m_V$ the coefficient $B(p^2) \propto m_V^4$, this expression is finite in the limit $m_V \rightarrow 0$, as it should be. \subsection{Annihilation Cross Section and Relic Abundance} \begin{figure} \includegraphics[width=0.35\textwidth]{boxhh} \hspace*{1cm} \includegraphics[width=0.35\textwidth]{boxww} \caption{Representative box diagrams which contribute to DM annihilation into pairs of Higgs or electroweak bosons.} \label{fig:box} \end{figure} Pairs of dark matter can annihilate through the three point coupling of Fig.~\ref{fig:loop} through an (off- or on-shell) SM Higgs, leading to final states containing heavy quarks and/or weak bosons. These contributions exhibit a strong resonant behavior when $m_V \simeq m_h / 2$. The gauge and Higgs boson final states also receive contributions at the same order from box diagrams (see Fig.~\ref{fig:box}), which contribute to processes including $VV \rightarrow hh, ZZ, WW, \gamma\gamma, hZ, Z\gamma$. These box diagrams are sensitive to more of the details of the UV theory, receiving contributions from the charged fermions as well as the neutral ones. As a result, simple analytic forms are not particularly illuminating, and we evaluate them using FeynArts \cite{Hahn:2000kx}, FormCalc, and LoopTools \cite{Hahn:1998yk}. In the following section, we compute the full annihilation cross section including all of the accessible SM final states. \section{Experimental Constraints and Parameter Space} \label{results} In this section, we examine the interesting parameter space, finding the regions consistent with the LUX limits on the spin independent DM-nucleon scattering cross-section \cite{Akerib:2013tjd}; and the invisible decay width of the Higgs produced via vector boson fusion (VBF) as constrained by CMS with 19.7 fb$^{-1}$ at 8 TeV \cite{Chatrchyan:2014tja}. In the latter, we include the off-shell Higgs contribution following the technique presented in \cite{Endo:2014cca}, simulating VBF Higgs production with HAWKv2.0 \cite{Denner:2011id}. We also identify the regions leading to the correct thermal relic abundance for a standard cosmology, computing the loop diagrams with FeynArts \cite{Hahn:2000kx}, FormCalc, and LoopTools \cite{Hahn:1998yk}, which is then linked into micrOMEGAsV4.0 \cite{Belanger:2013oya}. Because of the relatively large number of parameters, we build up insight into the phenomenology gradually by considering three different limits of the full theory. Initially in Sec.~\ref{sec:singleF}, we consider the limit in which one of the neutral fermions is much lighter than both the other two neutral states and both of the charged ones, and the coupling $\lambda_P$ is small enough to be neglected. 
We follow this in Sec.~\ref{sec:singleHP} by allowing $\lambda_P$ to be large enough that there is relevant mixing between $h$ and the Higgs mode of $\Phi$. Finally, in Sec.~\ref{sec:fullF} we switch off $\lambda_P$ once more, but consider the case where all mediator fermions have comparable masses. \subsection{Single Fermion Limit} \label{sec:singleF} We begin with the case where the charged fermions and the two heavier neutral states are much heavier than the lightest neutral state, effectively decoupling from the phenomenology, and $\lambda_P$ can be ignored. As before we assume the physical scalar contained in $\Phi$ is heavy enough to be ignored. In this limit, the relevant parameters are the $U(1)^\prime$ gauge coupling $g$, the Yukawa coupling to the light fermion $y$, the light fermion mass $m_\psi$, and the vector dark matter mass $m_V$. As we will see below, the correct thermal relic density can only be achieved for annihilation in the Higgs funnel region, for which one can neglect the box diagram contributions. In that case, the gauge and Yukawa couplings always appear in the combination $y g^2$, leaving only three relevant parameter combinations. \begin{figure} \includegraphics[width=0.475\textwidth]{couplim} \hspace*{0.5cm} \includegraphics[width=0.475\textwidth]{relicplot} \caption{Left: Upper limits on $yg^2$ from VBF Higgs collider and direct detection constraints, with a fermion of mass 400 GeV. Right: The corresponding lower limit on the relic abundance for a standard cosmology.} \label{fig:simplim} \end{figure} Fig.~\ref{fig:simplim} shows the collider and direct detection limits, plotted as the upper bound on $yg^2$ as a function of the dark matter mass, and the translation of those upper limits into a lower limit on the relic abundance, assuming a standard cosmology, for the case when the single relevant fermion has a mass of 400~GeV. The limits on the couplings are relatively weak; the conclusion is nonetheless that, aside from a narrow window in the Higgs funnel region, additional interactions would be required to deplete the dark matter abundance enough to match the observed relic density. \subsection{Single Fermion with Scalar Mixing} \label{sec:singleHP} Building on the single fermion limit, we now allow for substantial $\lambda_P$ such that the radial modes of $H$ and $\Phi$ experience significant mixing, resulting in two CP-even scalars we denote by $h$ and $h_2$. Describing this limit requires three additional free parameters, which we take to be the mass of the second scalar $m_{h_2}$, $\langle \Phi \rangle = v_\phi$, and the Higgs-scalar mixing angle $\alpha$. For small $\alpha$, the form factors of Eqn.~(\ref{eq:eftloop}) are shifted: \begin{equation} \begin{aligned} A(p^2) &\rightarrow \left(1-\frac{\alpha^2}{2} \right) A(p^2) \\ B(p^2) &\rightarrow \left(1-\frac{\alpha^2}{2} \right) B(p^2) - 2 \alpha \frac{m_V^2}{v_\phi} \end{aligned} \end{equation} where the additional contribution is the tree level contribution to $B(p^2)$ from the induced $\Phi$ component in $h$. In addition to the shift in the effective $h$-$V$-$V$ coupling, the $h_2$ state acquires a coupling to the SM given by the corresponding SM Higgs coupling multiplied by $\alpha$.
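As an illustration, using the shifted form factors above and neglecting $h_2$ exchange in the scattering amplitude, the tree-level piece can interfere destructively with the loop-induced piece in Eq.~(\ref{eq:loopsigma}), so that $\sigma_{\rm SI}$ is approximately suppressed when \begin{equation} \left(1-\frac{\alpha^2}{2}\right)\left[ B(0) - A(0)\, m_V^2 \right] \approx 2\, \alpha\, \frac{m_V^2}{v_\phi}~, \end{equation} which is one way to understand the blind spots discussed below; the exchange of $h_2$ provides an additional source of cancellation.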
\begin{figure} \includegraphics[width=0.495\textwidth]{WMixg1247} \includegraphics[width=0.495\textwidth]{WMixg1242}\\[0.65cm] \includegraphics[width=0.495\textwidth]{WMixg01247} \includegraphics[width=0.495\textwidth]{WMixg01242} \caption{Exclusion regions on $yg^2$ for various parameters in the Higgs-Scalar mixing model. The left (right) two plots are for a scalar lighter (heavier) than the Higgs. The top (bottom) two plots are for a mixing angle of $\alpha = 0.1(0.01)$.} \label{fig:wmixing} \end{figure} In Fig.~\ref{fig:wmixing}, we indicate the bounds on $y g^2$ as a function of the vector mass for various benchmark values of the remaining free parameters, with shaded regions showing points excluded by the CMS invisible Higgs width bounds (green), and the LUX bounds on $\sigma_{\rm SI}$ (yellow). Note the appearance of ``blind spots'' in the direct detection plane coming from interference between loop- and tree-level contributions to the $h$-$V$-$V$ vertex and/or between $h$ and $h_2$ exchange \cite{Baek:2014jga}. Blue shading indicates regions where the dark matter is over-abundant in a standard cosmology. Unshaded regions are allowed by current data and do not over-close the Universe, with points close to the boundaries of the blue shading typically predicting a relic density close to the observed value. Such regions consistent with collider and direct searches are again typically in funnel regions for annihilation through $h$ and $h_2$, when the latter is heavier than $h$ itself. Additional parameter space also opens up for larger DM masses, where annihilation $VV \rightarrow h ~ h_2$ becomes viable. \subsection{Full Matter Content} \label{sec:fullF} As our final limit, we return to $\lambda_P \ll 1$ but allow for all of the fermions to have comparable masses. We consider three benchmark sets of masses and Yukawa interactions summarized in Table~\ref{tab:bench}, which contains the model parameters associated with the fermion sector, $m$, $m_n$, $y_\psi$, and $y_\chi$, as well as the resulting spectrum of neutral state masses $M_N$ and the coefficient of the $h$-$\bar{N}_i$-$N_j$ coupling in the mass basis, $Y_{ij}$, with the mass eigenstates ordered as $M_{N_1}>M_{N_2}>M_{N_3}$. Table~\ref{tab:benchgauge} and Eqn.~\ref{eq:gauge} summarize the corresponding interaction of the gauge bosons with the new fermions. With these quantities fixed, we explore the plane of the U(1)$^\prime$ gauge coupling $g$ and the mass of the dark matter $m_V$. The new electrically charged fermions may be pair produced or produced in association with a new neutral fermion at colliders. For the regime of interest, the charged fermions decay solely to one of the neutral fermions and a W boson. The charged states are sufficiently similar to charginos in the MSSM that chargino searches may be applied. LEP searches require the charged fermion to be heavier than 100 GeV \cite{Heister:2002mn,Abdallah:2003gv,Acciarri:2000wy,Abbiendi:2002vz}. LHC searches find similar bounds which strengthen as the charged state becomes very long lived \cite{Aad:2014vma,Khachatryan:2014mma}. The lightest charged state among our benchmarks is 300 GeV, which is safe from these constraints. Some couplings are taken to be quite large to help highlight the features of this model in observables. In choosing such large values for the gauge and Yukawa couplings, one may be concerned that perturbativity breaks down or that higher order corrections should not be ignored.
The latter case may even reduce the relic abundance when properly taken into account, which would open up available parameter space. Alternatively, smaller couplings may be chosen, which would reduce the range of viable dark matter masses. However, neither case appreciably alters our conclusions. \begin{table} \centering \caption{Benchmark parameter sets, and resulting neutral fermion masses and Higgs couplings.} \begin{tabular}{cccc||cc} $m$~~~ & $m_n$ & $~~~y_\psi~~~$ & ~~~$~y_\chi~$~~~ & ~~~~~$M_N$ (GeV)~~~ & ~~~Y \\ \hline & & & & & \\ 800 GeV~~~~~ & 250 GeV & 1 & $-0.5$ & $\begin{bmatrix} 832 \\ 807 \\ 274 \end{bmatrix}$ & $\begin{bmatrix} -0.25 & -0.04 & 0.71 \\ 0.04 & -0.06 & 0.26 \\ -0.71 & 0.26 & -0.19 \end{bmatrix}$ \\ & & & & & \\ 300 GeV~~~~~ & 200 GeV & 4 & $-2$ & $\begin{bmatrix} 848 \\ 810 \\ 238 \end{bmatrix}$ & $\begin{bmatrix} -3.0 & -0.81 & -0.56 \\ 0.81 & -3.0 & -0.47 \\ 0.56 & -0.47 & -0.02 \end{bmatrix}$ \\ & & & & & \\ 500 GeV~~~~~ & 1000 GeV & 4 &~ 4 & $\begin{bmatrix} 1770 \\ 500 \\ 265 \end{bmatrix}$ & $\begin{bmatrix} -3.9 & 0 & 0.98 \\ 0~ & ~~~0~~~ & ~0 \\ -0.98 & 0 & -3.9 \end{bmatrix}$ \end{tabular} \label{tab:bench} \end{table} \begin{eqnarray} \label{eq:gauge} \mathcal L_{gauge} &= e \left(\bar{E_1}\gamma^\mu E_1 - \bar{E_2}\gamma^\mu E_2\right)\left( A_\mu + \frac{(1 - 2s_w^2)}{2 c_w s_w} Z_\mu\right) ~ + ~ \frac{e}{2 c_w s_w} \bar{N_i}\gamma^\mu ~G^Z_{ij}~ N_j ~Z_\mu \nonumber\\ &+ \frac{e}{\sqrt{2}s_w} \left[ \left( \bar{E_1}\gamma^\mu ~G^{W1}_i~ N_i + \bar{N_i}\gamma^\mu ~G^{W2}_i~ E_2 \right) W_\mu^+ + h.c.\right] \nonumber \\ &+ g \left(\bar{E_i}\gamma^\mu E_i + \bar{N_i}\gamma^\mu N_i\right) V_\mu \end{eqnarray} \begin{table} \centering \caption{Gauge coupling matrices defined in Eqn.~\ref{eq:gauge}, represented in the fermion mass basis.} \begin{tabular}{cccc||ccc} $m$~ & $m_n$ & $~~y_\psi~~$ & ~~$~y_\chi~$~~ & ~$G^Z$~ & ~~$G^{W1}$ & ~~$G^{W2}~$\\ \hline & & & & & & \\ 800 GeV~~ & 250 GeV & 1 & $-0.5$ & $\begin{bmatrix} 0.01~\gamma^5 & -0.98 & -0.11 \\ -0.98 & 0.03~\gamma^5 & -0.17~\gamma^5 \\ -0.11 & -0.17~\gamma^5 & -0.04~\gamma^5 \end{bmatrix}$ & $\begin{bmatrix} 0.70 \\ 0.70-0.01~\gamma^5 \\ 0.08+0.12~\gamma^5 \end{bmatrix}$ & $\begin{bmatrix} 0.70 \\ -0.70-0.01~\gamma^5 \\ -0.08+0.12~\gamma^5 \end{bmatrix}$ \\ & & & & & & \\ 300 GeV~~ & 200 GeV & 4 & $-2$ & $\begin{bmatrix} 0.20~\gamma^5 & -0.36 & 0.69 \\ -0.36 & 0.38~\gamma^5 & -0.35~\gamma^5 \\ 0.69 & -0.35~\gamma^5 & -0.59~\gamma^5 \end{bmatrix}$ & $\begin{bmatrix} -0.56+0.09~\gamma^5 \\ -0.26+0.36~\gamma^5 \\ 0.65+0.23~\gamma^5 \end{bmatrix}$ & $\begin{bmatrix} -0.56-0.09~\gamma^5 \\ 0.26+0.36~\gamma^5 \\ -0.65+0.23~\gamma^5 \end{bmatrix}$ \\ & & & & & & \\ 500 GeV~~ & 1000 GeV & 4 & 4 & $\begin{bmatrix} 0 & -0.61 & 0 \\ -0.61 & 0 & 0.79~\gamma^5 \\ 0 & 0.79~\gamma^5 & 0 \end{bmatrix}$ & $\begin{bmatrix} -0.43 \\ -0.71 \\ 0.56~\gamma^5 \end{bmatrix}$ & $\begin{bmatrix} 0.43 \\ -0.71 \\ -0.56~\gamma^5 \end{bmatrix}$ \end{tabular} \label{tab:benchgauge} \end{table} \begin{figure} \includegraphics[width=0.9\textwidth]{Boxes_ddcol} \caption{Upper bound on the gauge coupling, $g$, for the three benchmark parameter sets. VBF Higgs collider constraints are shown as solid lines and direct detection constraints as dashed lines.
Note that for the direct detection constraints we assume the local abundance of DM is $0.3$ GeV/cm$^3$, whereas the prediction from the model, for a conventional thermal history, is often smaller; see Figure~\ref{fig:fullrelic}.} \label{fig:fulllim} \end{figure} In Fig.~\ref{fig:fulllim}, we show upper bounds on $g$ as a function of the vector mass. We find that the collider and direct detection constraints are relatively weak, often less constraining than perturbativity. Although the mass of the lightest neutral state is similar for all three benchmarks, the constraints are significantly stronger for the second and third cases, where the Yukawa couplings are larger. In terms of the effective $h$-$V$-$V$ coupling, in the first and third models the lightest neutral state provides the dominant contribution, whereas in the second benchmark model the lightest state has a small Yukawa coupling and is less important than the second-lightest state, which has a much larger coupling. \begin{figure} \includegraphics[width=0.9\textwidth]{Boxes_benchrelic} \caption{The vector relic abundance for the three benchmark parameter sets. The gauge coupling here is chosen to be $g=3.5$.} \label{fig:fullrelic} \end{figure} In Fig.~\ref{fig:fullrelic}, we plot the relic abundance for the benchmark parameters with a large, fixed gauge coupling of $g = 3.5$, to make comparisons between the benchmarks more apparent. Note that for our second and third benchmark models, this value is mildly excluded by limits on the invisible width of the Higgs for $m_V \leq 60$~GeV. All benchmarks can be thermal relics when the vector can resonantly annihilate through a Higgs, causing the sharp dip at $m_V\sim m_h/2$. We also find that the second benchmark can attain a thermal relic for vector masses above 100 GeV, and the third may be a thermal relic above 80 GeV. The success at larger DM masses is due to annihilation channels with two bosons in the final state. Of the three benchmarks, the second has the lightest charged states. This allows efficient annihilation through loops involving the charged fermions, such as those which result in the $WW$ and $ZZ$ final states. The third benchmark also benefits from this, with slightly heavier charged states. However, this case also has large Yukawa couplings, causing a marked drop in the relic abundance when the DM is heavy enough to annihilate to two Higgs bosons. \section{Conclusion} \label{conclusion} We have explored a simplified model in which the dark matter is a spin-one vector particle which interacts with the Standard Model predominantly through Higgs exchange. Unlike the more usually considered Higgs portal based on the quartic interaction $\lambda_P$, we mediate the interaction radiatively, via a loop of heavy fermions charged under both the dark U(1)$^\prime$ as well as the SM electroweak interaction. By construction, the theory is anomaly-free, has a heavy vector particle which is effectively stable, and leads to no large deviations in the properties of the SM Higgs. This last feature, together with the possibility of completely decoupling the U(1)$^\prime$-breaking Higgs $\Phi$ from the SM, is the primary feature which distinguishes the radiative model from the quartic-induced Higgs portal as far as dark matter phenomenology is concerned.
Of course, the UV structure of the radiative model is also far richer, with a family of electroweakly charged particles whose decays produce gauge bosons and missing momentum, a signature already under study in the context of the neutralinos and charginos of a supersymmetric theory. These states are the true avatars of the radiative Higgs portal. The thermal relic density suggests that their masses are at most around a TeV, raising the hope that they could be found at LHC Run II or a future high-energy collider. \acknowledgments AD is supported by the Fermilab Graduate Student Research Program in Theoretical Physics and in part by NSF Grant No.~PHY-1316792. Fermilab is operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. The work of TMPT is supported in part by NSF grant PHY-1316792 and by the University of California, Irvine through a Chancellor's Fellowship.
\titlespacing\section{0pt}{12pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsection{0pt}{10pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsubsection{0pt}{8pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \usepackage{graphics} \usepackage{hyperref} \usepackage{color} \usepackage{multirow} \usepackage{hhline} \usepackage{subfigure} \usepackage{lipsum} \newcommand{\textit{etc}}{\textit{etc}} \newcommand{\textit{et al}.}{\textit{et al}.} \newcommand{\textit{et al}. }{\textit{et al}. } \newcommand{Eqn.}{Eqn.} \newcommand{Eqn. }{Eqn. } \newcommand{\textit{w.r.t.} }{\textit{w.r.t.} } \newcommand{\textit{cf.} }{\textit{cf.} } \newcommand{\textit{i}.\textit{e}.}{\textit{i}.\textit{e}.} \newcommand{\textit{e}.\textit{g}.}{\textit{e}.\textit{g}.} \newcommand{\argmax}[1]{\underset{#1}{\operatorname{arg}\,\operatorname{max}}\;} \newcommand{\ceil}[1]{\lceil #1 \rceil} \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} \title{Hierarchical Transformer with Spatio-Temporal Context Aggregation for Next Point-of-Interest Recommendation} \usepackage{authblk} \renewcommand*{\Authfont}{\bfseries} \author[1]{Jiayi Xie} \author[1,*]{Zhenzhong Chen} \affil[1]{School of Remote Sensing and Information Engineering, Wuhan University} \begin{document} \twocolumn[ \begin{@twocolumnfalse} \maketitle \input{sections/0_abstract} \vspace{0.4cm} \end{@twocolumnfalse} ] \newcommand\blfootnote[1]{% \begingroup \renewcommand\thefootnote{}\footnote{#1}% \addtocounter{footnote}{-1}% \endgroup } {\blfootnote{* Corresponding author.}} \input{sections/1_introduction} \input{sections/2_relatedwork} \input{sections/3_method} \input{sections/4_experiments} \input{sections/5_conclusion} \section{Introduction} With the prevalence of location-based services provided by applications such as Foursquare, Uber, and Facebook, users are getting used to sharing their location-based experiences and acquiring location-aware services online. One of the most common location-based services is Point-of-Interest (POI) recommendation, which aims to predict the POI that is most likely to be visited by users. Next POI recommendation is a sub-field of POI recommendation that focuses on exploiting the user's historical trajectory to discover potential sequential behavior patterns. On the one hand, next POI recommendation satisfies the personalized needs of users and alleviates information overload. On the other hand, it helps location-based service providers offer intelligent location services, such as location-aware advertising, real-time Q\&A revolving around POIs, \textit{etc}. Therefore, next POI recommendation plays an increasingly important role in location-based services, and has attracted considerable attention from researchers in both academia and industry. Next POI recommendation has been extensively studied \cite{tois22survey, neucom22survey}. Early studies focus on feature engineering (\textit{e}.\textit{g}., geospatial, temporal, social, and content features) and conventional machine learning models, such as Markov Chain (MC) based stochastic models and Matrix Factorization (MF) models, to capture POI-POI transitions \cite{ijcai13successive, ijcai15prme}. These models rely on the strong assumption that the next POI a user will check in at is determined only by the last one or several check-ins. However, the next POI visited by a user is also highly correlated with other previous check-ins.
As deep learning methods have shown promising performance compared to conventional methods, recent work has turned to deep learning approaches to boost the performance of next POI recommendation \cite{nips13word2vec, aaai16strnn}. Among them, many studies adopt Recurrent Neural Networks (RNNs) to mine more complicated sequential patterns in long-term semantics \cite{aaai16strnn, aaai19stgn, ijcai20asppa}. They extend RNNs with the ability to effectively incorporate various contextual information, especially the spatio-temporal context. Some work also takes advantage of other deep learning techniques, such as attention mechanisms \cite{kdd20geosan, www21stan}, memory networks \cite{kdd19memory}, pre-training models \cite{aaai21pretraining}, meta-learning paradigms \cite{kdd21metalearning}, \textit{etc}. \begin{figure*}[htb] \centering \begin{minipage}[t]{0.5\linewidth} \centering \subfigure[Map]{ \includegraphics[width=0.905\linewidth]{figures/example_a.pdf}} \end{minipage} \begin{minipage}[t]{0.481\linewidth} \centering \vspace{22pt} \subfigure[Check-in Sequence]{ \includegraphics[width=\linewidth]{figures/example_b.pdf} \label{fig:test} }\vspace{22pt} \subfigure[Subsequence Aggregation]{ \includegraphics[width=\linewidth]{figures/example_c.pdf}} \end{minipage} \caption{A check-in sequence example.} \label{fig:example} \end{figure*} Most previous methods assume that users only have preferences for some specific POIs, ignoring the short-term structural patterns exhibited in user movements \cite{kdd11mf}. Such short-term structural patterns are highly personalized and can be caused by temporal regularities and geospatial constraints, resulting in multiple levels of semantic subsequences that comprise consecutive check-ins. The next POI visited by the user could be correlated with several subsequences, which represent preferences beyond the POI level. As a special case shown in Figure \ref{fig:example}, a user often visits the area near her workplace (\textit{i}.\textit{e}., POI 2, 3) out of convenience, rather than out of a preference for a specific restaurant. Thus, other restaurants in that area could be potential POIs she would like to visit (\textit{i}.\textit{e}., POI 9). Such a high-level pattern can be inferred if she periodically visits the workplace area. Furthermore, the short-term sequential patterns could occur at multiple granularities. In particular, the user visits POIs in the home-workplace area in a regular manner (except POI 7, 8), such that the combination of check-in subsequences in the home area and the workplace area could constitute a subsequence of higher-level granularity. In order to exploit the aforementioned hierarchical structure of the check-in sequence, a naive way is to use a fixed-length subsequence embedding, under the implicit assumption that a fixed subsequence length is suitable for all short-term structural patterns in the check-in sequence. Such a hard sequence partition neglects personalized sequential behavior and could damage the semantic information. Nevertheless, it is challenging to identify and integrate multi-level semantic subsequences due to the difficulty of pre-defining the granularities and lengths of subsequences \cite{ijcai20asppa}. How to comprehensively understand the overall sequential behavior patterns of users by discovering multi-level semantic subsequences remains to be explored.
In light of the above, in this work, we aim to explore the latent hierarchical structure of user movement by adaptively locating semantic subsequences of multiple granularities in the check-in sequence. We propose a Spatio-Temporal context AggRegated Hierarchical Transformer (STAR-HiT) for next POI recommendation, which employs a stack of hierarchical encoders to jointly model the spatio-temporal context and capture the latent hierarchical structure in the check-in sequence. In particular, we design the hierarchical encoder and stack it to recursively encode the spatio-temporal context, explicitly locate semantic subsequences, and generate subsequence representations that form a new sequence with a higher level of granularity. Note that every single check-in in the original check-in sequence can be regarded as the shortest subsequence. In each encoder, the global attention layer is utilized to capture the spatio-temporal correlations between subsequences, such that subsequences with similar sequential patterns could be associated. After the global context modeling, the sequence partition layer learns to adaptively locate next-level subsequences, followed by a local attention layer performed within each identified subsequence to enhance subsequence modeling using the corresponding local context. Finally, the subsequence aggregation layer fuses the representations in each next-level subsequence individually to form the subsequence representation, thereby generating a new sequence with a higher level of granularity. This sequence is then fed into the next encoder for further subsequence integration and sequence abstraction. The intermediate representations in each encoder are learned to be expressive about the spatio-temporal context based on the global and local attention mechanisms; meanwhile, the sequence partition and subsequence aggregation operations can flexibly discover semantic subsequences of different positions and lengths. To summarize, this work makes the following main contributions: \begin{enumerate} \item A novel next POI recommendation model, STAR-HiT, consisting of stacked hierarchical encoders, is proposed to capture the latent hierarchical structure of the check-in sequence, from which the personalized movement pattern is revealed for recommendation. \item A hierarchical encoder is designed to encode the spatio-temporal context, adaptively identify subsequences with different positions and lengths in the input sequence, and then generate a sequence with a higher level of granularity. By stacking multiple hierarchical encoders, semantic subsequences of different granularities are recursively identified and integrated, so as to expose the overall hierarchical structure exhibited in the user movement. \item By capturing multi-level semantic subsequences that uncover the hierarchical structure, STAR-HiT enhances the robustness and explainability of recommendations. Extensive experiments conducted on three public datasets demonstrate that our proposed STAR-HiT outperforms state-of-the-art models by a large margin whilst providing explanations for recommendations. \end{enumerate} The remainder of this paper is organized as follows: in Section 2, we review the related work. Section 3 expounds on the proposed STAR-HiT in detail, followed by experimental results with analysis on three public datasets in Section 4. Finally, Section 5 concludes the work.
\section{Related Work} In this section, we first review the related work in the field of sequential recommendation and next POI recommendation. Then, we take a brief look at hierarchical Transformers applied to Natural Language Processing (NLP) and Computer Vision (CV). \subsection{Sequential Recommendation} Sequential recommendation mines behavior patterns in user action (\textit{e}.\textit{g}., click, watch, comment) sequences. Early work usually models an item-item transition pattern based on Markov chains. For example, Rendle \textit{et al}. \cite{fpmc} proposed the Factorized Personalized Markov Chains (FPMC) model that combines matrix factorization and Markov chains to incorporate both general preference and sequential behavior. Some studies follow this work and extend it to higher-order Markov chains \cite{fossil, recsys17mc}, where an $L$-order Markov chain is utilized to make predictions based on the $L$ previous actions. In general, Markov chain based models mainly focus on the latest short-term preference, performing relatively well in high-sparsity scenarios. With recent advances in deep learning, much work adopts neural network architectures, such as Convolutional Neural Networks (CNNs) \cite{wsdm18cnn, wsdm19cnn}, Recurrent Neural Networks (RNNs) \cite{iclr16gru, ijcai17timelstm}, Graph Neural Networks (GNNs) \cite{ijcai19gnn, aaai19gnn}, \textit{etc}. Among them, the RNN is the most commonly used backbone, which encodes long-term dependencies in variable-length sequences. It performs well on dense datasets while exhibiting relatively poor performance on sparse datasets. Furthermore, some advanced deep learning techniques are utilized to enhance modeling capabilities, such as attention mechanisms \cite{ijcai18attn}, memory networks \cite{sigir18memory}, data augmentation \cite{sigir21augment}, denoising \cite{www22filter} and pre-training techniques \cite{www21pretraining}, \textit{etc}. More recently, Transformer-based approaches have shown remarkable performance in sequential recommendation. For instance, Kang \textit{et al}. \cite{icdm18sasrec} proposed a Self-Attention based Sequential Recommendation model (SASRec) that directly utilizes the stacked self-attention blocks in the vanilla Transformer to capture the correlation between every two items in the action sequence. SASRec outperforms state-of-the-art MC/CNN/RNN-based sequential recommendation methods on both sparse and dense datasets. Sun \textit{et al}. \cite{cikm19bert4rec} employed deep bidirectional self-attention to better model user behavior sequences. Later on, Li \textit{et al}. \cite{wsdn20tisasrec} improved SASRec by explicitly modeling the timestamps of interactions to emphasize the temporal influence. Wu \textit{et al}. \cite{recsys20ssept} introduced personalization into SASRec by learning user embeddings with stochastic shared embeddings regularization. Moreover, Liu \textit{et al}. \cite{sigir21pretrainedtrm} alleviated the cold-start issue by augmenting short sequences with a pre-trained Transformer, which is trained on the reversed behavior sequences. \subsection{Next POI Recommendation} Next POI recommendation can be regarded as a special case of sequential recommendation, where geospatial influence is one of the most crucial contexts to incorporate. Similar to the research on sequential recommendation, early studies on next POI recommendation mainly utilize Markov chains and matrix factorization \cite{ijcai13successive, ijcai15prme, aaai16infer}. Cheng \textit{et al}.
\cite{ijcai13successive} extended the FPMC model to FPMC-LR, which models the personalized POI transition while considering users' movement constraint, namely, moving around a localized region. Feng \textit{et al}. \cite{ijcai15prme} further replaced the matrix factorization method with a metric embedding method and exploited the pair-wise ranking scheme to learn parameters. He \textit{et al}. \cite{aaai16infer} adopted a third-rank tensor to model the successive check-in behaviors and incorporated the softmax function to fuse the personalized Markov chain with users' latent behavior patterns. With the surge of deep learning research, a large variety of next POI recommendation approaches leveraging different deep learning paradigms have been proposed. Among them, some work employs the word2vec framework \cite{nips13word2vec} to learn representations of POIs, which can reflect the contextual relationships of several consecutively visited POIs \cite{aaai17poi2vec, www17geoteaser}. In addition, most existing models are based on RNNs, which have been widely applied to sequential data due to their powerful capability to capture temporal dynamics. For example, Liu \textit{et al}. \cite{aaai16strnn} adopted an RNN to model the user's check-in sequence. The proposed ST-RNN model captures the spatial and temporal context with time and distance transition matrices. Yang \textit{et al}. \cite{tois17neural} jointly modeled social networks and mobile trajectories by deriving user representations from social networks and adopting two different RNNs to encode long- and short-term sequential influence. Feng \textit{et al}. \cite{www18deepmove} enhanced GRU by utilizing attention mechanisms that capture the multi-level periodicity of users' mobility from long-range and sparse trajectories. Moreover, Guo \textit{et al}. \cite{aaai20arnn} employed LSTM as the recurrent layer, and designed a meta-path based random walk over a knowledge graph to discover location neighbors based on heterogeneous factors. Zhao \textit{et al}. \cite{aaai19stgn} extended the LSTM gating mechanism with spatial and temporal gates to capture the user's space and time preference. Zhao \textit{et al}. \cite{ijcai20asppa} introduced a binary boundary detector into the RNN and modified the gating mechanism to learn the sequential patterns of semantic subsequences in the check-in sequence, then utilized a power-law attention mechanism to integrate the spatio-temporal context. Sun \textit{et al}. \cite{aaai20lstpm} combined a non-local network and a geo-dilated LSTM to leverage both long-term preference and geographical influence. Zang \textit{et al}. \cite{tois22cha} explored the category hierarchy of POIs to develop an attention-based knowledge graph for POI representation learning, and then proposed a spatial-temporal decay LSTM to capture the personalized behavior pattern. The above RNN-based models present many novel and enlightening contributions for introducing spatio-temporal context modeling into neural networks. Nevertheless, as aforementioned, RNN-based models need relatively dense data for training, while data sparsity is one of the most crucial problems in practical scenarios \cite{neucom22survey}. Similar to sequential recommendation, some work also takes advantage of other deep learning techniques, such as attention mechanisms \cite{www21stan, kdd20geosan}, memory networks \cite{kdd19memory}, pre-training strategies \cite{aaai21pretraining}, meta-learning paradigms \cite{kdd21metalearning, tois22meta}, \textit{etc}.
In particular, Zhou \textit{et al}. \cite{kdd19memory} proposed a Topic-Enhanced Memory Network that combines the topic model and memory network to jointly capture the global structure of latent patterns and local neighborhood-based features, while incorporating geographical influence by calculating a comprehensive geographical score. Kim \textit{et al}. \cite{kdd21metalearning} proposed an adaptive weighting scheme based on meta-learning to alleviate the class imbalance problem and the noise of the input data. Cui \textit{et al}. \cite{tois22meta} jointly utilized a sequential knowledge graph and a meta-learning strategy to learn and optimize latent embeddings, thus modeling user check-in patterns while alleviating the data sparsity issue. As for attention-based models, Luo \textit{et al}. \cite{www21stan} employed a bi-layer attention architecture that allows global point-point interaction within the trajectory. The proposed method explicitly models the spatio-temporal context by embedding the spatio-temporal relation matrix of the trajectory. Lian \textit{et al}. \cite{kdd20geosan} developed a self-attention model to encode the check-in sequence, and encoded the hierarchical gridding of geospatial information with another self-attention based encoder. Lin \textit{et al}. \cite{aaai21pretraining} built a pre-training model to adaptively generate embeddings for locations based on their specific contextual neighbors. Although some studies have been devoted to investigating the short-term periodicity in the check-in sequence to optimize the network architecture \cite{www18deepmove, aaai20arnn, ijcai20asppa, tois22cha}, most previous work neglected the complicated yet structured patterns exhibited in user movements. In this work, we adopt the Transformer structure, which has shown a convincing capability of dealing with long-term dependencies in sequences, and capture the latent hierarchical structure of user movements by adaptively discovering multi-level semantic subsequences in an explicit way. \subsection{Hierarchical Transformers} Transformer architectures are based on the self-attention mechanism, which learns the relationships among all elements of a sequence and can attend to the complete sequence, thereby comprehensively understanding long-term context. The Transformer was first proposed by Vaswani \textit{et al}. \cite{nips17trm} for machine translation, and has since become the state-of-the-art method in many NLP tasks. There are two mainstream approaches to enhancing the Transformer for better modeling longer-range dependencies with higher efficiency: variant self-attention mechanisms \cite{nips20bigbird, emnlp20} and hierarchical Transformer structures \cite{acl19hit, acl19hibert, cikm20hit, acl21hit}. Here we focus on the latter, which leverages the natural hierarchical structure present in syntax. Liu \textit{et al}. \cite{acl19hit} developed a multi-document summarization model and adopted a local Transformer layer and a global Transformer layer to encode the intra- and inter-paragraph contextual information, respectively. Yang \textit{et al}. \cite{cikm20hit} proposed a Siamese Multi-depth Transformer for document representation learning and matching, which contains sentence blocks and document context modeling. Wu \textit{et al}. \cite{acl21hit} effectively modeled long documents by a hierarchical Transformer following the sentence-document-sentence encoding strategy such that both sentence-level and document-level context could be integrated.
The breakthroughs of Transformers in NLP have sparked great interest in applying them to CV tasks. Transformers for vision tasks usually segment the input image into a sequence of patches and capture long-range dependencies among patches. For example, Dosovitskiy \textit{et al}. \cite{vit} introduced the Vision Transformer for image classification, the first work directly applying the Transformer architecture and dispensing with convolutions entirely. To fit the Transformer, the input image is split into fixed-size patches and linearly embedded into flat tokens to construct the input sequence. Liu \textit{et al}. \cite{swintrm} extended the Vision Transformer with shifted windows, improving efficiency by limiting the self-attention computation to each local window. They constructed hierarchical representations that start from small-sized patches and gradually merge neighboring patches in deeper layers. Wang \textit{et al}. \cite{pvt} improved the Vision Transformer by incorporating the pyramid structure from convolutional neural networks. They utilized fine-to-coarse image patches to reduce the sequence length of the Transformer as the network deepens, such that the input sequence elements can be set at the pixel level for dense prediction without increasing the computational cost. Later on, Chen \textit{et al}. \cite{dpt} drew on the deformable convolution \cite{deformablecnn} and replaced the pre-defined patches with learnable patches in a data-driven way. As the proposed deformable patch embedding module splits the image into patches in a deformable way with learnable patch size and location, the semantics in patches can be well preserved. Our work is partially inspired by the hierarchical enhancement of Transformer structures. In spite of the outstanding performance that these hierarchical Transformers have shown in various tasks, they cannot be directly used for next POI recommendation. Unlike the syntactic knowledge that helps pre-define the multi-scale subsequences for language modeling, the subsequences in the check-in sequence are personalized; thus, extraction with a fixed length is unsuitable. Besides, the grid-topology spatial structure of visual data distinguishes it from check-in sequences, making visual hierarchical Transformers incompatible with next POI recommendation. Nevertheless, these pioneering studies motivate us to extend Transformers to adaptively learn semantic multi-grained subsequences in the check-in sequence for better sequential behavior understanding. \section{Methodology} In this section, we first formulate the next POI recommendation task, then elaborate on the details of the proposed \textbf{S}patio-\textbf{T}emporal context \textbf{A}gg\textbf{R}egated \textbf{Hi}erarchical \textbf{T}ransformer (STAR-HiT). The notations mainly used in this article are listed in Table \ref{tab:notations}.
\begin{table*} \centering \caption{Notations Used in This Article} \begin{tabular}{l|p{10cm}} \toprule \makebox[0.15\linewidth][c]{{Variables}} & \makebox[0.7\linewidth][c]{{Description}} \\ \midrule $m$, $n$, $L$ & the number of users, POIs, and the length of the check-in sequence \\ $d$, $d_h$, $d_k$ & dimensions of latent representations \\ $u$, $\mathcal{U}$ & a user and the user set \\ $p$, $\mathcal{P}$ & a POI and the POI set \\ $S_u$, $\mathcal{S}$ & the check-in sequence of the user $u$, the check-in sequence set\\ $s^{(u)}_t$ & the $t$-th check-in of the user $u$ \\ $g$, $\tau$ & geographic location and timestamp \\ $\Delta^{\operatorname{S}} \in \mathbb{R}^{L\times L}$ & spatial relation matrix \\ $\Delta^{\operatorname{T}} \in \mathbb{R}^{L\times L}$ & temporal relation matrix \\ $k$ & the initial length of the subsequence \\ $l$ & the number of stacked hierarchical encoders \\ $ \mathbf{E}(u) \in \mathbb{R}^{L\times d} $ & the representation matrix of the check-in sequence $S_u$ \\ $\mathbf{E}(u)^{(l)} \in \mathbb{R}^{\lceil \frac{L}{k^{l}} \rceil \times d}$ & the representation matrix after $l$ hierarchical encoders of the check-in sequence $S_u$ \\ \bottomrule \end{tabular} \label{tab:notations} \end{table*} \subsection{Problem Statement} Let $ \mathcal{U} = \{u_1, u_2, \dots, u_m\} $, $ \mathcal{P} = \{p_1, p_2, \dots, p_n\} $ be the set of users and POIs, respectively. $\mathcal{S}=\{S_{1}, S_{2}, \ldots, S_{m}\}$ represents the set of user check-in sequences. For each user $u$, her check-in trajectory in chronological order is denoted as $ S_{u} = \{s^{(u)}_t \mid t = 1,2,\dots, L \} $, where $ s^{(u)}_t = (p_t, g_t, \tau_t) $ is the $t$-th check-in, indicating that user $u$ visits POI $ p_{t} \in \mathcal{P} $ with geographic location $g_t = (latitude = \alpha_t, longitude = \beta_t)$ at timestamp $ \tau_t $. Next POI recommendation aims to recommend the POI that is most likely to be visited by the user at the next time step. Given a user $u$ with her check-in sequence $S_{u}$, the goal is to predict the next visiting POI $p_{t+1} \in \mathcal{P}$. The task can be formulated as estimating the personalized ranking score for each POI by: \begin{equation} \hat{y}_{u,p} = f_{\Theta}(p \in \mathcal{P} \mid u, S_u), \end{equation} \noindent where $f_{\Theta}(\cdot)$ denotes the underlying model with parameters $\Theta$, and $\hat{y}_{u,p}$ is the predicted score that user $u$ will visit POI $p$ at the next time step. The top-k POIs ranked by predicted scores are the final recommendations. \subsection{Overall Architecture} As shown in Figure \ref{fig:framework}, our proposed STAR-HiT consists of an embedding module, stacked hierarchical encoders, and the predictor. In particular, the embedding module embeds the POI and spatio-temporal context into latent representations to construct the spatio-temporal aware representation matrix of the check-in sequence. As for the stacked hierarchical encoders, each encoder adopts a hierarchical architecture that abstracts the input sequence into a compact yet expressive sequence. More specifically, the encoder first models the global spatio-temporal context within the entire sequence, or in other words, models the relationships between different subsequences learned in the previous encoder. Note that every single check-in in the original check-in sequence is viewed as the shortest subsequence.
Then the sequence is adaptively partitioned into next-level semantic subsequences with learnable positions and lengths, followed by the local context enhancement. Next-level subsequence representations are obtained by fusing the representations they contain, which form the output sequence with a higher level of granularity. The hierarchical structure of the check-in sequence composed of multi-level semantic subsequences is learned based on stacked hierarchical encoders, with the purpose of comprehensively understanding users' overall sequential behavior patterns. Finally, a Multi-Layer Perceptron (MLP) based predictor is exploited to predict the probabilities of users visiting each POI. The details of STAR-HiT are elaborated as follows. \begin{figure*} \centering \includegraphics[width=0.99\linewidth]{figures/framework.pdf} \caption{The framework of the proposed STAR-HiT model.} \label{fig:framework} \end{figure*} \subsection{Embedding Module} The embedding module consists of two parts: a trajectory embedding layer and a spatio-temporal context embedding layer. \subsubsection{Trajectory embedding layer} The trajectory embedding layer encodes check-in POIs into latent representations of dimension $d$. We use $ e^{p_t} \in \mathbb{R}^d $ to denote the embedding of POI $p_t$ for the user's $t$-th visit. The embedding of each check-in sequence $S_{u}$ is represented as $\hat{\mathbf{E}}(u) = [e^{p_1}; e^{p_2}; \cdots; e^{p_L}] \in \mathbb{R}^{L\times d}$, where $L$ is the length of the sequence. The user embedding adopted in \cite{kdd20geosan, www21stan} is discarded, as the personalized information is already well preserved in the spatio-temporal context embedding introduced next. \subsubsection{Spatio-temporal context embedding layer} To leverage the spatio-temporal context of the check-in sequence, we first calculate the time intervals and geographical distances between every two visited POIs in a trajectory to construct the spatio-temporal relation matrices. Following \cite{kdd20geosan}, for each check-in sequence $S_{u}$, we use $\Delta_{i, j}^{\operatorname{S}}=\operatorname{Haversine}(p_i, p_j)$ to obtain the distance between the $i$-th and $j$-th visited POIs given their longitudes and latitudes\footnote{The Haversine formula calculates the great-circle distance between two points on a sphere given their longitudes and latitudes.}. As for time intervals, instead of directly calculating the timestamp differences or uniformly grouping the values into discrete bins as \cite{aaai16strnn, www21stan} did, we resort to the scale of time intervals. Due to the extremely high variance of time intervals (\textit{e}.\textit{g}., from a few minutes to several years), we group time intervals into $M$ levels, namely, $\Delta_{i, j}^{\operatorname{T}} \in [1, M], M \in \mathbb{N}$, indicating the scale of the time interval between two check-ins, ranging from within an hour to over a year. $M$ is set to $7$ in our implementation. According to Tobler's First Law of Geography \cite{tobler1970, sigir11tobler}, near POIs are more related than distant ones, which reflects strong geographical influence.
Inspired by \cite{sigir13tobler, ijcai20asppa}, we can consider that the probability of visiting a pair of POIs $p_i$ and $p_j$ for user $u$ follows the power-law distribution as: \begin{equation} \operatorname{Pr}(p_i, p_j)=a \cdot \operatorname{D}(p_i, p_j)^{\lambda}, \label{eq:powerlaw} \end{equation} \noindent where $a$ and $\lambda$ are learnable parameters of the power-law distribution, and $\operatorname{D}(p_i, p_j)$ is the distance between POIs $p_i$ and $p_j$. We take the logarithm on both sides of Equation (\ref{eq:powerlaw}) as: \begin{equation} \operatorname{log}(\operatorname{Pr}(p_i, p_j)) = \operatorname{log}(a) + \lambda \cdot \operatorname{log} (\operatorname{D}(p_i, p_j)). \label{eq:log_powerlaw} \end{equation} By leveraging the temporal influence, we extend Equation (\ref{eq:log_powerlaw}) by replacing the learnable $\operatorname{log}(a)$ with the temporal context as: \begin{equation} \operatorname{log}(\operatorname{Pr}(p_i, p_j)) = \operatorname{T}(p_i, p_j) + \lambda \cdot \operatorname{log}(\operatorname{D}(p_i, p_j)), \label{eq:st_powerlaw} \end{equation} \noindent where $\operatorname{T}(p_i, p_j)$ represents the time interval between visiting $p_i$ and $p_j$. As such, both geographical and temporal influences are incorporated into the calculation of the co-visiting probability. Furthermore, $\lambda$ can be regarded as controlling the trade-off between geographical and temporal influence. With the spatial relation matrix $\Delta^{\operatorname{S}}$ and temporal relation matrix $\Delta^{\operatorname{T}}$, we can obtain the spatio-temporal context embedding matrix $\mathbf{E}_c(u)$ of the check-in sequence $S_{u}$ as: \begin{equation} \mathbf{E}_c(u) = \Delta^{\operatorname{T}} + \boldsymbol{ \lambda} \cdot \operatorname{log}(\Delta^{\operatorname{S}}). \end{equation} We fuse all contextual information by concatenating the trajectory embedding and spatio-temporal context embedding, and then apply a linear transformation with learnable weights $\boldsymbol{W}^{E} \in \mathbb{R}^{(L+d) \times d}$ to obtain the final embedding of the check-in sequence: \begin{equation} \mathbf{E}(u) = \operatorname{Concat}([\hat{\mathbf{E}}(u); \mathbf{E}_c(u)]) \boldsymbol{W}^{E}. \end{equation} Since the self-attention mechanism contains no recurrence or convolution to capture relative positions in the sequence as RNNs do, we follow \cite{nips17trm} and add positional encodings $\mathbf{P}$ to $\mathbf{E}(u)$, \textit{i}.\textit{e}., $\mathbf{E}(u) = \mathbf{E}(u) + \mathbf{P}$. \subsection{Hierarchical Encoder} In order to capture semantic subsequences of multiple granularities and thus obtain the hierarchical structure of the check-in sequence, the hierarchical encoder is supposed to extract semantic subsequences in the input sequence and derive their representations to generate a shorter output sequence with higher-level representations of behavior patterns. To this end, the proposed encoder comprises: 1) a \textbf{global attention layer} that models the global context, 2) a \textbf{sequence partition layer} that learns to locate semantic subsequences with different positions and lengths, 3) a \textbf{local attention layer} that enhances subsequence modeling using the local context, and 4) a \textbf{subsequence aggregation layer} that obtains subsequence representations to construct a new sequence with an increased abstraction level. The details of the hierarchical encoder are shown at the bottom of Figure \ref{fig:framework}.
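Before detailing each layer of the encoder, we give a minimal NumPy sketch of how the relation matrices and the spatio-temporal context embedding of the embedding module above could be computed. Only the number of interval levels ($M=7$, from within an hour to over a year) and the form $\mathbf{E}_c = \Delta^{\operatorname{T}} + \lambda \cdot \operatorname{log}(\Delta^{\operatorname{S}})$ are fixed above; the concrete bucket edges, the small constant added before the logarithm, and the function names below are illustrative choices.
\begin{verbatim}
# Illustrative sketch of the spatio-temporal context embedding layer.
import numpy as np

EARTH_RADIUS_KM = 6371.0
# illustrative edges for 7 interval levels: hour, 6h, day, week, month, year
TIME_EDGES_H = np.array([1, 6, 24, 24 * 7, 24 * 30, 24 * 365])

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = np.radians([lat1, lat2])
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def relation_matrices(lats, lons, times_h):
    """Spatial (km) and temporal (level 1..7) relation matrices of one sequence."""
    L = len(lats)
    d_s, d_t = np.zeros((L, L)), np.zeros((L, L), dtype=int)
    for i in range(L):
        for j in range(L):
            d_s[i, j] = haversine(lats[i], lons[i], lats[j], lons[j])
            d_t[i, j] = 1 + np.searchsorted(TIME_EDGES_H,
                                            abs(times_h[i] - times_h[j]))
    return d_s, d_t

def context_embedding(d_s, d_t, lam=1.0, eps=1e-3):
    """E_c = Delta^T + lambda * log(Delta^S); eps guards log(0) for repeats."""
    return d_t + lam * np.log(d_s + eps)
\end{verbatim}
The resulting $\mathbf{E}_c(u)$ is then concatenated with the trajectory embedding and projected by $\boldsymbol{W}^{E}$ exactly as in the fusion equation above.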
\subsubsection{Global Attention} The global attention layer aims to learn the global context in the input sequence. Here, we adopt the encoder layer in the vanilla Transformer that contains two sub-layers, \textit{i}.\textit{e}., the multi-head self-attention sub-layer and the position-wise feed-forward sub-layer. To briefly revisit the Transformer, each encoder layer adaptively aggregates the values according to attention weights that measure the compatibility of query-key pairs, where the query, key, and value are all vectors transformed from the input representation. Such point-point interaction within the sequence allows the layer to capture long-term dependencies. Moreover, multi-head self-attention enables the layer to jointly attend to information from different representation subspaces at different positions. In our case, the multi-head self-attention operation takes the embedding $\mathbf{E}(u)$ as input, linearly projects it into $h$ subspaces through distinct matrices, and then applies $h$ attention functions in parallel to produce the output representations, which are concatenated and once again projected. The whole multi-head self-attention operation can be summarized as follows: \begin{equation} \operatorname{MSA}(\mathbf{E}(u)) = \operatorname{Concat}([\operatorname{SA}_1, \operatorname{SA}_2, \cdots, \operatorname{SA}_h])\boldsymbol{W}^{O}, \end{equation} \noindent where \begin{equation} \operatorname{SA}_i = \operatorname{Attention}(\mathbf{E}(u)\boldsymbol{W}^{Q}_i, \mathbf{E}(u)\boldsymbol{W}^{K}_i, \mathbf{E}(u)\boldsymbol{W}^{V}_i). \end{equation} \noindent The projection matrices for each head $\boldsymbol{W}^{Q}_i \in \mathbb{R}^{d \times d_h}, \boldsymbol{W}^{K}_i \in \mathbb{R}^{d \times d_h}, \boldsymbol{W}^{V}_i \in \mathbb{R}^{d \times d_h}$, $\boldsymbol{W}^{O} \in \mathbb{R}^{d \times d}$ are learnable parameters, where $d_h=d/h$. Note that the superscript $(l)$ indicating the $l$-th encoder is omitted above for simplicity. Here, the attention function is the scaled dot-product attention, which is calculated as follows: \begin{equation} \operatorname{Attention}(\boldsymbol{Q}, \boldsymbol{K}, \boldsymbol{V})=\operatorname{softmax}(\frac{\boldsymbol{QK}^{T}}{\sqrt{d_h}}) \boldsymbol{V}. \label{eq:attn} \end{equation} The multi-head self-attention sub-layer is followed by a position-wise feed-forward sub-layer, a fully connected two-layer feed-forward network applied to each position separately and identically. It consists of two linear transformations with a ReLU activation in between: \begin{equation} \operatorname{FFN}(x)=\operatorname{ReLU} (x \boldsymbol{W}_{1}+\boldsymbol{b}_{1}) \boldsymbol{W}_{2}+\boldsymbol{b}_{2}, \label{eq:ffn} \end{equation} \noindent where $\boldsymbol{W}_{1} \in \mathbb{R}^{d \times d_k}, \boldsymbol{W}_{2} \in \mathbb{R}^{d_k \times d}$ and $\boldsymbol{b}_{1} \in \mathbb{R}^{d_k}, \boldsymbol{b}_{2} \in \mathbb{R}^{d}$, with $d_k > d$, are learnable parameters shared across all positions. In addition, we also adopt residual connections \cite{resnet}, layer normalization \cite{layernorm}, and dropout regularization \cite{dropout} to refine the network structure \cite{nips17trm, cikm19bert4rec}. More specifically, we employ a residual connection around each of the two sub-layers, followed by layer normalization. Dropout is applied to the output of each sub-layer before it is normalized.
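For concreteness, the following is a compact PyTorch-style sketch of this sub-layer composition (multi-head self-attention and position-wise FFN, each wrapped with dropout, a residual connection, and layer normalization). It is an illustrative re-implementation rather than our released code, and the default argument values are placeholders.
\begin{verbatim}
# Illustrative sketch of the global attention layer.
import torch
import torch.nn as nn

class GlobalAttentionLayer(nn.Module):
    def __init__(self, d=64, h=4, d_k=128, dropout=0.2):
        super().__init__()
        self.msa = nn.MultiheadAttention(embed_dim=d, num_heads=h,
                                         batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, d_k), nn.ReLU(),
                                 nn.Linear(d_k, d))
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.drop = nn.Dropout(dropout)

    def forward(self, E):                 # E: (batch, L, d)
        attn_out, _ = self.msa(E, E, E)   # scaled dot-product self-attention
        E = self.norm1(E + self.drop(attn_out))       # residual + dropout + LN
        E = self.norm2(E + self.drop(self.ffn(E)))    # same wrapping for FFN
        return E
\end{verbatim}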
In summary, the sequence representation matrix after encoding the global context via the global attention layer is formulated as follows: \begin{equation} \begin{aligned} \hat{\mathbf{E}}_G(u)&=\operatorname{LayerNorm}(\mathbf{E}(u)+\operatorname{Dropout}(\operatorname{MSA}(\mathbf{E}(u)))), \\ \mathbf{E}_G(u)&=\operatorname{LayerNorm}(\hat{\mathbf{E}}_G(u)+\operatorname{Dropout}(\operatorname{FFN}(\hat{\mathbf{E}}_G(u)))). \end{aligned} \end{equation} \subsubsection{Sequence Partition} In order to extract semantic subsequences in the check-in sequence, we first divide the sequence uniformly into $\lceil \frac{L}{k} \rceil$ non-overlapping subsequences of length $k$. Let $x_i$ denote the center coordinate of the $i$-th subsequence in the check-in sequence $S_u$, such that the location range of the subsequence in the input sequence is initialized as $(x_i - k/2, x_i + k/2)$. Next, we make the center coordinate $x_i$ and the length $k_i$ of each subsequence learnable, so that they can be inferred from the spatio-temporal context. In particular, inspired by \cite{dpt} for semantic patch learning in vision tasks, we predict the offset ${dx}_i$ and length $k_i$ based on the sequence representation $\mathbf{E}_G(u)$ as follows: \begin{equation} \begin{aligned} {dx}_i&=\operatorname{Tanh}(w^1 \cdot f(\mathbf{E}_G(u)_i)), \\ k_i&=\operatorname{ReLU}(\operatorname{Tanh}(w^2 \cdot f(\mathbf{E}_G(u)_i))), \end{aligned} \end{equation} \noindent where the hyper-parameters $w^1, w^2$ control the weights of the offset and length used to update the subsequence location, and $\mathbf{E}_G(u)_i$ is the slice of the representation matrix corresponding to the $i$-th subsequence. $f(\cdot)$ denotes the feature extractor that learns the offset and length from subsequence representations. We follow \cite{dpt} and implement the feature extractor as a 1D convolution and a linear transformation with a ReLU activation in between. Accordingly, the $i$-th learned subsequence is located in $(x_i+{dx}_i-k_i/2, x_i+{dx}_i+k_i/2)$. In this way, we can fully exploit the context to discover semantic subsequences. However, subsequences with variable lengths make it impractical to encode the intra-subsequence context subsequently. As a result, we introduce a sampling strategy to derive fixed-length subsequences. Given the location range of a subsequence in the input sequence $(x_{\operatorname{left}}, x_{\operatorname{right}})$, we first linearly interpolate $r$ points as $(x_1, x_2, \cdots, x_r)$ inside the subsequence. As the interpolated coordinates could be fractional, we then use nearest-neighbor sampling to take the representations closest to the corresponding coordinates, denoted as $\{\hat{e}^{[x_j]} \mid j=1, 2, \cdots, r\}$. Here we omit the superscript $(i)$ indicating the index of the subsequence for clarity. Finally, we concatenate the sampled representations as: \begin{equation} \mathbf{E}_P(u)_i = \operatorname{Concat}([\hat{e}^{[x_1]}; \hat{e}^{[x_2]}; \cdots; \hat{e}^{[x_r]}]), \end{equation} \noindent so as to obtain the representation matrix of the $i$-th subsequence $\mathbf{E}_P(u)_i \in \mathbb{R}^{r\times d}$. In our implementation, the number of samples $r$ for each subsequence is set to be the same as the initial subsequence length $k$. \subsubsection{Local Attention} As semantic subsequences are identified, a local attention layer is applied afterward to encode local contextual information within each subsequence.
We use the same attention function as in Equation (\ref{eq:attn}) with only one head, since the subsequences are much shorter. We stack the current subsequence representations as $\mathbf{E}_P(u) \in \mathbb{R}^{\lceil \frac{L}{k} \rceil \times r\times d}$, such that the attention function for all subsequences can be computed in parallel as follows: \begin{equation} \operatorname{SA_{L}}(\mathbf{E}_P(u)) = \operatorname{Attention}(\mathbf{E}_P(u)\boldsymbol{W}^{Q}_L, \mathbf{E}_P(u)\boldsymbol{W}^{K}_L, \mathbf{E}_P(u)\boldsymbol{W}^{V}_L)\boldsymbol{W}^{O}_L, \end{equation} \noindent where \begin{equation} \operatorname{Attention}(\boldsymbol{Q}, \boldsymbol{K}, \boldsymbol{V})=\operatorname{softmax}(\frac{\boldsymbol{QK}^{T}}{\sqrt{d}}) \boldsymbol{V}. \end{equation} \noindent The projection matrices $\boldsymbol{W}^{Q}_L \in \mathbb{R}^{d\times d}$, $\boldsymbol{W}^{K}_L \in \mathbb{R}^{d\times d}$, $ \boldsymbol{W}^{V}_L \in \mathbb{R}^{d\times d}$ and $\boldsymbol{W}^{O}_L \in \mathbb{R}^{d\times d}$ are learnable parameters, where $\boldsymbol{W}^{Q}_L$, $\boldsymbol{W}^{K}_L$, $\boldsymbol{W}^{V}_L$ are shared across all subsequence representation matrices. Apart from the attention function being calculated within each subsequence, the rest of the local attention layer is the same as the global attention layer, with its own parameters. Ultimately, the whole local attention layer can be described as follows: \begin{equation} \begin{aligned} \hat{\mathbf{E}}_L(u)&=\operatorname{LayerNorm}(\mathbf{E}_P(u)+\operatorname{Dropout}(\operatorname{SA_{L}}(\mathbf{E}_P(u)))), \\ \mathbf{E}_L(u)&=\operatorname{LayerNorm}(\hat{\mathbf{E}}_L(u)+\operatorname{Dropout}(\operatorname{FFN}(\hat{\mathbf{E}}_L(u)))). \end{aligned} \end{equation} \subsubsection{Subsequence Aggregation} After context modeling via attention mechanisms and sequence partitioning into semantic subsequences, we gather the representations within each subsequence to obtain the corresponding subsequence representations, which constitute the output sequence. Given the representation matrix $\mathbf{E}_L(u)$, the representation of the $i$-th subsequence is obtained by average pooling all the representations it contains, which is formulated as: \begin{equation} \hat{\mathbf{E}}_A(u)_i = \frac{1}{r} \sum_{j=1}^{r}\mathbf{E}_L(u)_{i,j}, \end{equation} \noindent followed by a fully connected two-layer feed-forward network as in Equation (\ref{eq:ffn}), with the aforementioned techniques that ease the training. Accordingly, the output of the subsequence aggregation layer, as well as the output of the hierarchical encoder, is obtained by: \begin{equation} \mathbf{E}_A(u) = \operatorname{LayerNorm}(\hat{\mathbf{E}}_A(u)+\operatorname{Dropout}(\operatorname{FFN}(\hat{\mathbf{E}}_A(u)))). \end{equation} By now, the structure of the proposed hierarchical encoder is fully specified. Through the hierarchical encoder, the global context of the sequence and the local contexts of semantic subsequences are well incorporated, while the input sequence $\mathbf{E}(u)^{(l-1)} \in \mathbb{R}^{\lceil \frac{L}{k^{l-1}} \rceil \times d}$ of the $l$-th encoder is abstracted to the output sequence $\mathbf{E}(u)^{(l)} = \mathbf{E}_A(u)^{(l)} \in \mathbb{R}^{\lceil \frac{L}{k^{l}} \rceil \times d}$, where $(l)$ indicates the $l$-th encoder. Besides, semantic subsequences in each encoder are identified by the learned positions and lengths.
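To make the sequence partition and aggregation steps concrete, the sketch below shows one possible PyTorch realization of the uniform split, learnable offset/length prediction, nearest-neighbor sampling of $r$ points, and average pooling (local attention is omitted for brevity). The feature extractor, the scaling of the raw offset and length to index units, and the assumption that $L$ is divisible by $k$ are illustrative choices rather than the exact implementation.
\begin{verbatim}
# Illustrative sketch of sequence partition + subsequence aggregation.
import torch
import torch.nn as nn

class SequencePartition(nn.Module):
    def __init__(self, d=64, k=4, w1=1.0, w2=1.0):
        super().__init__()
        self.k, self.w1, self.w2 = k, w1, w2
        # f(.): 1D convolution over each length-k slice, plus linear heads
        self.extract = nn.Sequential(
            nn.Conv1d(d, d, kernel_size=k, stride=k), nn.ReLU())
        self.to_offset = nn.Linear(d, 1)
        self.to_length = nn.Linear(d, 1)

    def forward(self, E):                   # E: (B, L, d), L divisible by k
        B, L, d = E.shape
        n_sub, r = L // self.k, self.k      # r samples per subsequence (= k)
        feat = self.extract(E.transpose(1, 2)).transpose(1, 2)  # (B, n_sub, d)
        dx = torch.tanh(self.w1 * self.to_offset(feat))         # learned offset
        ln = torch.relu(torch.tanh(self.w2 * self.to_length(feat)))  # length
        centers = torch.arange(n_sub, device=E.device) * self.k + self.k / 2.0
        centers = centers.view(1, n_sub, 1) + dx                 # shifted centers
        half = 0.5 * (ln * self.k + self.k)  # assumed scaling to index units
        steps = torch.linspace(-1.0, 1.0, r, device=E.device).view(1, 1, r)
        coords = (centers + steps * half).round().clamp(0, L - 1).long()
        idx = coords.view(B, n_sub * r, 1).expand(-1, -1, d)
        sampled = torch.gather(E, 1, idx).view(B, n_sub, r, d)   # nearest neighbor
        return sampled.mean(dim=2)           # average-pooled subsequence reps
\end{verbatim}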
\subsubsection{Stacking Encoders} In order to model the latent hierarchical structure of the sequential behavior pattern from the check-in sequence, we stack the hierarchical encoders to recursively partition the input sequence into multiple semantic subsequences and aggregate them to form output sequences with different levels of granularity. As aforementioned, the length of the sequence is reduced from $L$ to $ \lceil \frac{L}{k^l} \rceil$ after $l$ hierarchical encoders. The output of the hierarchical encoder stack is of shorter length while highly informative about the personalized sequential behavior pattern; meanwhile, each encoder learns to discover semantic subsequences at a different level of granularity. The number of stacked encoders $l$ will be discussed in Section \ref{sec:params}. \subsection{Prediction} To predict the next check-in POI, we first obtain the user representation $\boldsymbol{U}_u$ of user $u$ by summing up the output $\mathbf{E}(u)^{(l)}$ after the stack of $l$ hierarchical encoders, which constitutes the user representation matrix $\boldsymbol{U} \in \mathbb{R}^{m\times d}$ that represents the personalized sequential behavior patterns of all users. A commonly used prediction layer adopts the matching function: \begin{equation} \hat{y}_{u,p} = \boldsymbol{U}_u^\top\boldsymbol{P}_p, \end{equation} \noindent where $\boldsymbol{P}_p$ is the embedding of POI $p \in \mathcal{P}$ \cite{icdm18sasrec, www21stan}. However, further adding the POI embeddings to calculate the matching score degrades the performance of the proposed STAR-HiT to some extent. The main reason may lie in the difference between the personalized representation encoded by the encoders and the general representation obtained by the embedding layer. Therefore, the matching function is not adopted in our implementation. Instead, we directly use a linear transformation with learnable weight $\boldsymbol{W}^P \in \mathbb{R}^{d \times n}$ and a softmax function to convert the output into predicted next POI probabilities as follows: \begin{equation} \hat{\boldsymbol{y}}_{u} = \operatorname{softmax}(\boldsymbol{U}_u\boldsymbol{W}^P), \end{equation} \noindent where $\hat{\boldsymbol{y}}_{u} = [\hat{y}_{u,1}, \hat{y}_{u,2}, \cdots, \hat{y}_{u,n}]$ contains the predicted scores of user $u$ visiting the candidate POIs. To train the model, we adopt the cross-entropy loss to optimize the parameters as: \begin{equation} \mathcal{L} = -\sum_{S_u \in \mathcal{S}_{\operatorname{training}}}(\log \hat{y}_{u,i}+\sum_{j \in \mathcal{P}, j \neq i} \log (1-\hat{y}_{u,j})), \end{equation} \noindent where $\mathcal{S}_{\operatorname{training}}$ is the training set of check-in sequences and $i$ denotes the ground-truth next POI of the sequence $S_u$. \section{Experiments} In this section, we conduct experiments to show the effectiveness of the proposed STAR-HiT. Specifically, we aim to answer the following research questions: \begin{itemize} \item \textbf{RQ1: }How does STAR-HiT perform compared to state-of-the-art methods on next POI recommendation? \item \textbf{RQ2: }How do different designs of STAR-HiT influence the performance? \item \textbf{RQ3: }Can STAR-HiT capture the latent hierarchical structure present in check-in sequences? \end{itemize} In what follows, we first introduce the datasets, evaluation metrics, and compared methods, and then answer the above questions. In particular, we present the performance comparison with analysis between STAR-HiT and state-of-the-art baseline methods.
Then, we explore how the hyper-parameter settings influence STAR-HiT, in terms of the initial length of subsequences, the number of stacked hierarchical encoders, the dimension of representations, \textit{etc}. In addition, we examine the effect of different modules in STAR-HiT, in terms of four layers in the proposed hierarchical encoder and the learnable localization of subsequences. We also validate the capability of STAR-HiT to capture the personalized latent hierarchical structure of the check-in sequence by the case study. \subsection{Experimental Settings} \subsubsection{Datasets} To evaluate the effectiveness of STAR-HiT, we conduct experiments on three publicly available datasets: Foursquare NYC, Foursquare US, and Gowalla. \begin{itemize} \item \textbf{Foursquare NYC:} Foursquare NYC \cite{tsmc15foursquarenyc} is a widely used dataset for POI recommendation, which contains check-ins in New York city collected from April 2012 to February 2013. \item \textbf{Foursquare US:} This dataset is a subset of a long-term global-scale check-in dataset collected from Foursquare \cite{tist16foursquareglobal}. Following \cite{ijcai20asppa}, we use check-in data within the United States (except Alaska and Hawaii), and rename the dataset as Foursquare US. \item \textbf{Gowalla:} This is a check-in dataset obtained from Gowalla \cite{kdd11mf} over the period of February 2009 to October 2010. \end{itemize} Table \ref{tab:dataset_statistics} summarizes the statistics of three datasets, where \textit{Revisit Frequency} refers to the ratio of the total number of check-ins to the number of visited POIs of a user. As for \textit{Revisit Ratio}, it depicts the ratio of the number of repeated check-ins for the same POIs to the total number of check-ins of a user. Both of the above metrics measure re-visits in the dataset, which is complementary information to the sparsity. We report the average Revisit Frequency and Revisit Ratio of all users. For each dataset, we filter out users and POIs with fewer than 10 check-ins as previous work \cite{ijcai20asppa} did. For the check-in sequence of each user, we set the maximum sequence length as $L_{\operatorname{max}}$. Then we slide the fixed-length window on the original check-in trajectory to obtain sequence slices when $L > L_{\operatorname{max}}$, otherwise padding with zeros to the right to construct the sequence of length $L_{\operatorname{max}}$. Since the number of users in Foursquare NYC is much smaller, $L_{\operatorname{max}}$ is set to 100 for Foursquare NYC and 128 for others. In order to simulate the real-world next POI recommendation scenario, we rank the check-in sequence of each user in chronological order and split the dataset into training (80\%), validation (10\%), and test (10\%) sets. \begin{table*}[] \centering \caption{Statistics of Datasets} \begin{tabular}{lccc} \toprule & \makebox[0.2\linewidth][c]{{Foursquare NYC}} & \makebox[0.2\linewidth][c]{{Foursquare US}} & \makebox[0.2\linewidth][c]{{Gowalla}} \\ \midrule \# users & 1,083 & 30,410 & 51,989 \\ \# POIs & 5,135 & 79,580 & 131,282 \\ \# check-ins & 147,938 & 2,440,233 & 3,365,444 \\ Avg. POIs per user & 137 & 80 & 65 \\ Avg. 
users per POI & 29 & 31 & 26 \\ Revisit Frequency & 4.89 & 2.88 & 2.56 \\ Revisit Ratio & 36.45\% & 26.72\% & 28.06\% \\ Sparsity & 97.34\% & 99.90\% & 99.95\% \\ \bottomrule \end{tabular} \label{tab:dataset_statistics} \end{table*} \subsubsection{Evaluation Metrics} To evaluate the performance of next POI recommendation methods, we adopt two widely used evaluation protocols for recommendation systems \cite{www17ncf}: HR@K and NDCG@K. For each test sequence, we predict the probabilities of candidate POIs $p \in \mathcal{P}$ and recommend the top-$K$ POIs. Hit Ratio (HR) measures whether the true next visiting POI is present in the top-$K$ ranked list, while Normalized Discounted Cumulative Gain (NDCG) further emphasizes the position of the hit by assigning higher weights to hits at higher ranks. We set $K = 5$ and $K = 10$ and report the average metrics over all sequences in the test set. \subsubsection{Compared Methods} We compare our proposed STAR-HiT with a statistical model (MFLM), RNN-based models (GRU4Rec, Time-LSTM, STRNN, STGN), an attention-based model (STAN), and Transformer-based models (SASRec, SSE-PT, TiSASRec, GeoSAN), as follows: \begin{itemize} \item \textbf{MFLM:} Most Frequented Location Model \cite{kdd11mf} is a statistical model, which calculates the probability of a user visiting the POI based on the statistics of her previous check-ins. It captures the periodic check-in habit of the user. \item \textbf{GRU4Rec:} GRU4Rec \cite{iclr16gru} models the user action sequence for session-based recommendation utilizing the Gated Recurrent Unit (GRU), a variant of the RNN. We adopt a two-layer GRU for modeling check-in sequences. \item \textbf{Time-LSTM:} Time-LSTM \cite{ijcai17timelstm} is a variant of LSTM that improves the modeling of sequential patterns by explicitly incorporating time intervals with designed time gates. We adopt the third version proposed in the paper since it achieves the best performance in our experiments. \item \textbf{STRNN:} Spatial Temporal Recurrent Neural Networks \cite{aaai16strnn} improves RNN for check-in sequence modeling by capturing local temporal and spatial contexts with time and distance transition matrices. \item \textbf{STGN:} Spatio-temporal Gated Network \cite{aaai19stgn} extends the gating mechanism of LSTM with four spatio-temporal gates to capture the user's long-term and short-term spatial and temporal preferences. \item \textbf{STAN:} Spatio-Temporal Attention Network \cite{www21stan} uses a bi-layer attention architecture to explicitly exploit point-to-point spatio-temporal correlations in check-in sequences, so that correlations of non-adjacent locations and non-consecutive check-ins are well incorporated for understanding user behavior. \item \textbf{SASRec:} Self-Attention based Sequential Recommendation \cite{icdm18sasrec} directly implements the Transformer \cite{nips17trm} architecture, taking advantage of the self-attention mechanism to adaptively assign high weights to relatively few but relevant actions for recommendations. \item \textbf{SSE-PT:} Personalized Transformer with Stochastic Shared Embeddings (SSE) regularization \cite{recsys20ssept} extends SASRec by introducing personalization into the model. The usage of SSE regularization prevents the model from overfitting after leveraging user embeddings.
\item \textbf{TiSASRec:} Time interval aware Self-Attention based Sequential Recommendation \cite{wsdn20tisasrec} extends SASRec by modeling both the absolute positions of items and the personalized time intervals between them in sequences. \item \textbf{GeoSAN:} Geography-aware sequential recommender based on the Self-Attention Network \cite{kdd20geosan} explicitly utilizes the time of check-ins and the GPS positions of POIs. In particular, a self-attention based geography encoder is designed to encode the geographical information of each POI in the sequence. \end{itemize} \subsubsection{Implementation Details} We implement the compared methods following their original settings. It should be noted that the experimental results of STAN on Foursquare US and Gowalla are omitted, due to its extremely high memory usage for the distance matrix of all POIs on large-scale datasets. Besides, we remove the geography-aware negative sampler in GeoSAN for a fair comparison, as it is not used by the other methods. We employ the BPR loss \cite{bprloss} for optimizing the RNN-based models, since the cross-entropy loss generally leads to poor performance. As for our proposed STAR-HiT, the embedding dimension $d$ is set to 64 and the hidden dimension $d_k$ in the feed-forward networks is set to twice the embedding dimension, which is 128. The weights $w^1, w^2$ that control the offset and scale to update the subsequence location are both set to 1, and the dropout ratio is set to 0.2. Inspired by \cite{nips17trm}, we train the model using the Adam optimizer \cite{adam} with $\beta_{1}=0.9, \beta_{2}=0.98, \epsilon=10^{-9}$, and the learning rate $\operatorname{l\_rate}$ is varied over the course of training as: $\operatorname{l\_rate}=\lambda \cdot d^{-0.5} \cdot \min (\operatorname{num\_step}^{-0.5}, \operatorname{num\_step} \cdot \operatorname{warmup\_step}^{-1.5})$, where $d$ is the embedding dimension, $\operatorname{num\_step}$ is the number of training steps, the coefficient $\lambda$ controls the overall learning rate and is set to 1, and $\operatorname{warmup\_step}$ refers to the training step with the peak learning rate and is set to 400. We use the Xavier initialization \cite{xavier} to initialize model parameters. For all models, we use the default number of training epochs of 200 for Foursquare NYC and 300 for the others, and the default mini-batch size of 128. Our model is implemented in PyTorch and available at \url{https://github.com/JennyXieJiayi/STAR-HiT}.
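For concreteness, a minimal sketch of the prediction head described earlier and the warm-up learning-rate schedule described above is given below. This is PyTorch-style illustrative code; the class and function names are ours and need not match the released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    # Linear transformation W^P (no bias) followed by softmax, mapping the
    # user representation U_u of dimension d to scores over n candidate POIs.
    def __init__(self, d, n):
        super().__init__()
        self.proj = nn.Linear(d, n, bias=False)

    def forward(self, user_repr):          # user_repr: (batch, d)
        return torch.softmax(self.proj(user_repr), dim=-1)

def learning_rate(num_step, d=64, warmup_step=400, lam=1.0):
    # l_rate = lam * d^(-0.5) * min(step^(-0.5), step * warmup^(-1.5)):
    # linear warm-up for the first warmup_step steps, then inverse
    # square-root decay, as in the original Transformer training recipe.
    return lam * d ** -0.5 * min(num_step ** -0.5,
                                 num_step * warmup_step ** -1.5)
\end{verbatim}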
\subsection{Performance Comparison (RQ1)} \begin{table*}[htb] \centering \caption{Performance Comparison with Baseline Methods} \label{tab:perf_baselines} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccccc} \toprule & \multicolumn{4}{c}{Foursquare NYC} & \multicolumn{4}{c}{Foursquare US} & \multicolumn{4}{c}{Gowalla} \\ \cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-13} & H@5 & H@10 & N@5 & N@10 & H@5 & H@10 & N@5 & N@10 & H@5 & H@10 & N@5 & N@10 \\ \midrule MFLM & 0.1412 & 0.1421 & 0.1316 & 0.1319 & 0.1386 & 0.1394 & 0.1296 & 0.1299 & 0.1176 & 0.1176 & 0.1106 & 0.1106 \\ GRU4Rec & 0.1421 & 0.2335 & 0.1006 & 0.1310 & 0.1412 & 0.1908 & 0.1074 & 0.1227 & 0.1459 & 0.1983 & 0.1057 & 0.1227 \\ Time-LSTM & 0.1584 & 0.2534 & 0.1085 & 0.1429 & 0.1426 & 0.1928 & 0.1091 & 0.1247 & 0.1465 & 0.2002 & 0.1076 & 0.1243 \\ STRNN & 0.1475 & 0.2425 & 0.1006 & 0.1317 & 0.1738 & 0.2424 & 0.1314 & 0.1533 & 0.1474 & 0.2045 & 0.1078 & 0.1250 \\ STGN & 0.1548 & 0.2561 & 0.1132 & 0.1452 & 0.1803 & 0.2538 & 0.1341 & 0.1588 & 0.1467 & 0.2088 & 0.1070 & 0.1264 \\ STAN & 0.3587 & 0.5112 & 0.2506 & 0.3008 & - & - & - & - & - & - & - & - \\ SASRec & 0.3439 & 0.4597 & 0.1719 & 0.1882 & 0.2727 & 0.3765 & 0.1276 & 0.1356 & 0.2550 & 0.3635 & 0.1169 & 0.1253 \\ SSE-PT & 0.3719 & 0.4950 & 0.2362 & 0.2751 & 0.3181 & 0.3829 & 0.2029 & 0.2056 & 0.3049 & 0.4050 & 0.1596 & 0.1667 \\ TiSASRec & 0.3665 & 0.4860 & 0.2621 & 0.3008 & 0.2803 & 0.3834 & 0.1419 & 0.1491 & 0.2926 & 0.3851 & 0.1462 & 0.1522 \\ GeoSAN & \underline{0.4847} & \underline{0.5571} & \underline{0.3396} & \underline{0.3630} & \underline{0.4100} & \underline{0.4991} & \underline{0.3263} & \underline{0.3551} & \underline{0.3349} & \underline{0.4183} & \underline{0.2583} & \underline{0.2855} \\ STAR-HiT & \textbf{0.5991}&\textbf{0.6597}&\textbf{0.5186}&\textbf{0.5385} & \textbf{0.6968}&\textbf{0.7296}&\textbf{0.6381}&\textbf{0.6486} & \textbf{0.4497}&\textbf{0.4929}&\textbf{0.3921}&\textbf{0.4057} \\ \midrule \textit{Improv.} & 23.60\% & 18.42\% & 52.69\% & 48.32\% & 69.94\% & 46.18\% & 95.58\% & 82.65\% & 34.29\% & 17.82\% & 51.79\% & 42.13\% \\ \bottomrule \end{tabular}} \end{table*} The performance comparison with baseline methods are illustrated in Table \ref{tab:perf_baselines}. We have the following observations: \begin{itemize} \item The proposed STAR-HiT achieves the best performance among all the compared methods. In particular, STAR-HiT improves the performance over the strongest baseline, \textit{i}.\textit{e}., GeoSAN, in terms of HR@5 by 23.6\%, 69.94\%, 34.29\% in Foursquare NYC, Foursquare US, and Gowalla, respectively. Moreover, the corresponding performance improvements in terms of NDCG@5 are 52.69\%, 95.58\%, and 51.79\%, respectively. By stacked hierarchical encoders, STAR-HiT benefits from spatio-temporal context modeling and multi-granularity semantic subsequences discovering in an explicit manner, so as to model the inherent hierarchical structure exhibited in check-in sequences. This verifies the significance of modeling the hierarchical structure of check-in sequences to improve the recommendations. \item The statistical model MFLM achieves relatively poor performance on Foursquare US and Gowalla, while performing comparable to RNN-based models (\textit{i}.\textit{e}., GRU4Rec, Time-LSTM, STRNN, STGN) on Foursquare NYC. The reason may lie in that there are more revisits for users in Foursquare NYC than others (\textit{cf.} Table \ref{tab:dataset_statistics}). 
Considering the subtle differences in how MFLM performs in terms of HRs and NDCGs, we can conclude that MFLM is suitable for users with periodic check-in patterns, but is unable to deal with users whose check-in patterns are more flexible. \item STAN and Transformer-based models (\textit{i}.\textit{e}., SASRec, SSE-PT, TiSASRec, GeoSAN, STAR-HiT), which use the self-attention mechanism as their major component, consistently outperform RNN-based models. The recurrent neural networks handle the check-in sequence by recursively encoding previous check-ins into the internal memory as a whole, such that correlations between non-consecutive check-ins and long-term semantics are underestimated. Instead, the self-attention mechanism allows capturing the correlations of any two check-ins in an explicit way, regardless of whether they are consecutive. \item Among RNN-based methods, models that consider at least one of the spatial and temporal contexts (\textit{i}.\textit{e}., Time-LSTM, STRNN, STGN) perform better than the others (\textit{i}.\textit{e}., GRU4Rec). Likewise, GeoSAN and STAR-HiT, which embed the spatio-temporal context, outperform SASRec, SSE-PT, and TiSASRec, which are designed for traditional items. These results highlight the importance of spatio-temporal context modeling. \end{itemize} \subsection{Study of STAR-HiT (RQ2)} To get deep insights into the design of STAR-HiT, we first explore how different settings influence the performance, in terms of the initial length of subsequences, the number of stacked hierarchical encoders, and the embedding dimension. Then, we analyze the effectiveness of the various components by conducting an ablation study. \subsubsection{Parameter Analysis} \label{sec:params} \begin{figure*}[htb] \centering \begin{minipage}[t]{0.32\linewidth} \centering \subfigure{ \includegraphics[width=0.96\linewidth]{figures/1_param_ndcg_nyc.pdf} \label{fig:param_ndcg_nyc}} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \centering \subfigure{ \includegraphics[width=0.96\linewidth]{figures/3_param_ndcg_us.pdf} \label{fig:param_ndcg_us}} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \centering \subfigure{ \includegraphics[width=0.96\linewidth]{figures/5_param_ndcg_go.pdf} \label{fig:param_ndcg_go}} \end{minipage} \setcounter{subfigure}{0} \begin{minipage}[t]{0.32\linewidth} \centering \subfigure[Foursquare NYC]{ \includegraphics[width=0.96\linewidth]{figures/2_param_hr_nyc.pdf} \label{fig:param_hr_nyc}} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \centering \subfigure[Foursquare US]{ \includegraphics[width=0.96\linewidth]{figures/4_param_hr_us.pdf} \label{fig:param_hr_us}} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \centering \subfigure[Gowalla]{ \includegraphics[width=0.96\linewidth]{figures/6_param_hr_go.pdf} \label{fig:param_hr_go}} \end{minipage} \caption{Parameter analysis on three datasets.} \label{fig:param} \end{figure*} The initial length of subsequences and the number of stacked hierarchical encoders play critical roles in STAR-HiT. For example, a one-encoder model is unable to capture the hierarchical structure of the check-in sequence, while semantic subsequences can hardly be discovered if the initial subsequence length is set too short. On the contrary, too many encoders could lead to overfitting. Besides, if the initial subsequence length is set too long, check-ins with low correlation are aggregated together, which could introduce unwanted noise.
Therefore, it is reasonable to assume that there exist optimal settings of these two hyper-parameters for latent hierarchical structure modeling of check-in sequences. Towards this end, we perform experiments with different settings of these two hyper-parameters to investigate their impact on performance. Specifically, we vary the number of hierarchical encoders (\textit{i}.\textit{e}., $l$) in $\{1, 2, 3\}$ and the initial subsequence length (\textit{i}.\textit{e}., $k$) in $\{1, 2, 4, 6, 8, 10, 12\}$. We use STAR-HiT$_{i:j}$ to denote STAR-HiT with $i$ hierarchical encoders and the initial subsequence length set to $j$. The experimental results are summarized in Figure \ref{fig:param}, where lines represent the results \textit{w.r.t.} $K=5$ and bars in the same color depict results \textit{w.r.t.} $K=10$ for the corresponding settings. From this result, we have the following observations: \begin{itemize} \item STAR-HiT$_{2:8}$ yields the best performance across the board. Therefore, we set $l=2, k=8$ as the default parameters unless otherwise specified. This verifies that STAR-HiT benefits from appropriate settings of $l$ and $k$ to effectively model the latent hierarchical structure of check-in sequences, so as to achieve superior recommendation performance. \item With fixed $l$, the performance first increases and then drops as $k$ gets larger in most cases. Moreover, with larger $l$, the performance is more likely to peak at smaller $k$. It is also worthwhile to note that on Foursquare NYC and Foursquare US, STAR-HiT$_{2:k}$ performs worse than STAR-HiT$_{3:k}$ when $k<6$, while it outperforms STAR-HiT$_{3:k}$ when $k\geq 6$. We attribute these characteristics to the similar role the two parameters play in capturing the multi-granularity semantic subsequences. \item STAR-HiT$_{1:k}$ generally performs worse than both STAR-HiT$_{2:k}$ and STAR-HiT$_{3:k}$ on Foursquare NYC. The reason may lie in the relatively high revisit frequency in Foursquare NYC, which gives rise to a stronger hierarchical structure in the check-in sequences. \item STAR-HiT$_{3:k}$ falls behind STAR-HiT$_{1:k}$ or STAR-HiT$_{2:k}$ by a large margin in most cases on Gowalla. The possible reasons are two-fold, that is, the low revisit frequency and the high sparsity lead to difficulties in capturing the hierarchical structure of check-in sequences. \end{itemize} Taking Foursquare NYC as an example, we analyze the training efficiency of STAR-HiT with different settings of $l$ and $k$. The training loss curves are illustrated in Figure \ref{fig:training}. As we can see, STAR-HiT$_{1:1}$ and STAR-HiT$_{2:1}$ exhibit large fluctuations compared to models with higher $k$. When $l=1$, the training loss curve of STAR-HiT$_{1:4}$ is generally lower and more stable than those of STAR-HiT$_{1:8}$ and STAR-HiT$_{1:12}$, while such differences are unnoticeable when $l=2$. In addition, when $k>1$, the loss of STAR-HiT$_{2:k}$ drops faster and more steadily to nearly zero compared to STAR-HiT$_{1:k}$. Jointly analyzing Figure \ref{fig:param} and Figure \ref{fig:training}, we again verify that with suitable settings of $l$ and $k$, STAR-HiT is capable of discovering the semantic subsequences in check-in sequences, thereby better uncovering the hierarchical structure present in user sequential behavior patterns.
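As a simple illustration of how $l$ and $k$ jointly control the granularity, the helper below (the function name is ours) restates the length reduction $L \rightarrow \lceil L/k^l \rceil$ introduced earlier for the settings explored in this study.
\begin{verbatim}
import math

def output_length(L, k, l):
    # Number of positions remaining after l stacked hierarchical encoders,
    # each aggregating k consecutive positions: ceil(L / k^l).
    return math.ceil(L / k ** l)

# For L_max = 128 (Foursquare US / Gowalla):
#   output_length(128, 8, 2) -> 2    (the default l=2, k=8 setting)
#   output_length(128, 8, 3) -> 1
#   output_length(128, 1, l) -> 128  (k=1 performs no aggregation at all)
\end{verbatim}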
\begin{figure}[htb] \centering \begin{minipage}[t]{0.95\linewidth} \centering \subfigure[$l=1$]{ \includegraphics[width=0.99\linewidth]{figures/13_training_l1_nyc.pdf} \label{fig:nyc_train_l1}} \end{minipage} \begin{minipage}[t]{0.95\linewidth} \centering \subfigure[$l=2$]{ \includegraphics[width=0.99\linewidth]{figures/14_training_l2_nyc.pdf} \label{fig:nyc_train_l2}} \end{minipage} \caption{Training efficiency on Foursquare NYC with different initial subsequence lengths $k$.} \label{fig:training} \end{figure} \subsubsection{Effect of the embedding dimension} We also conduct experiments to analyze the effect of the embedding dimension (\textit{i}.\textit{e}., $d$) used in STAR-HiT. In particular, we set the embedding dimension from 16 to 112, with a step of 16. Figure \ref{fig:dim} illustrates the experimental results in terms of NDCG@10 and HR@10 on three datasets. From Figure \ref{fig:dim}, we can see that the performance gets much worse when using a small embedding dimension, as it is insufficient to encode the contextual information. As the embedding dimension grows, the performance first increases dramatically and then gradually becomes stable. Considering the trade-off between cost and performance, we set the default embedding dimension $d$ to 64. \begin{figure*}[htb] \centering \begin{minipage}[t]{0.3\linewidth} \centering \subfigure{ \includegraphics[width=0.96\linewidth]{figures/7_dim_ndcg_nyc.pdf}} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \subfigure{ \includegraphics[width=0.96\linewidth]{figures/9_dim_ndcg_us.pdf}} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \subfigure{ \includegraphics[width=0.96\linewidth]{figures/11_dim_ndcg_go.pdf}} \end{minipage} \setcounter{subfigure}{0} \begin{minipage}[t]{0.3\linewidth} \centering \subfigure[Foursquare NYC]{ \includegraphics[width=0.96\linewidth]{figures/8_dim_hr_nyc.pdf}} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \subfigure[Foursquare US]{ \includegraphics[width=0.96\linewidth]{figures/10_dim_hr_us.pdf}} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \subfigure[Gowalla]{ \includegraphics[width=0.96\linewidth]{figures/12_dim_hr_go.pdf}} \end{minipage} \caption{Effect of the embedding dimension $d$.} \label{fig:dim} \end{figure*} \subsubsection{Ablation Study} As stated, there are two essential components of the proposed hierarchical encoder, including: 1) \textbf{attention module}, including the global attention layer and local attention layer, which captures global context and local context to enhance representations; 2) \textbf{subsequence aggregation module}, including the sequence partition layer and subsequence aggregation layer, which utilizes contextual information to reason and locate subsequences, thereby obtaining semantic subsequences. To analyze the effectiveness of the various components, we conduct an ablation study by considering the following variants: \begin{itemize} \item STAR-HiT$_{\operatorname{GA}}$: The global attention layer in each hierarchical encoder is removed, which captures the global spatio-temporal context to enhance representations. \item STAR-HiT$_{\operatorname{LA}}$: The local attention layer in each hierarchical encoder is removed, which injects the local context of the extracted subsequence into each representations within. 
\item STAR-HiT$_{\operatorname{GLA}}$: Both the global attention layer and local attention layer are removed, such that the model solely hierarchically partitions the sequence and aggregates the subsequences without any contextual enhancement. \item STAR-HiT$_{\operatorname{AGG2}}$: The hierarchical structure modeling through the sequence partition layer and subsequence aggregation layer is disabled. The local attention layer is removed as well, since it is calculated within the identified subsequence. Overall, the proposed hierarchical encoder degrades to the vanilla Transformer encoder that solely learns from POI-to-POI interactions. \item STAR-HiT$_{\operatorname{AGG4}}$: Since there are two attention layers in a hierarchical encoder, we also double the number of encoders (\textit{i}.\textit{e}., $l=4$) for STAR-HiT$_{\operatorname{AGG2}}$, so as to involve the same number of attention layers as the original STAR-HiT. \end{itemize} \begin{table*}[htb] \centering \caption{Performance Comparison (NDCG@K) with STAR-HiT Variants} \begin{tabular}{lcccccc} \toprule & \multicolumn{2}{c}{Foursquare NYC} & \multicolumn{2}{c}{Foursquare US} & \multicolumn{2}{c}{Gowalla} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & NDCG@5 & NDCG@10 & NDCG@5 & NDCG@10 & NDCG@5 & NDCG@10 \\ \midrule STAR-HiT$_{\operatorname{GA}}$ & 0.3430 & 0.3602 & 0.5612 & 0.5756 & 0.2988 & 0.3162 \\ STAR-HiT$_{\operatorname{LA}}$ & 0.4715 & 0.4914 & 0.6213 & 0.6341 & 0.3569 & 0.3728 \\ STAR-HiT$_{\operatorname{GLA}}$ & 0.3059 & 0.3158 & 0.5359 & 0.5505 & 0.2825 & 0.2970 \\ STAR-HiT$_{\operatorname{AGG2}}$ & 0.1919 & 0.2154 & 0.5457 & 0.5608 & 0.3459 & 0.3621 \\ STAR-HiT$_{\operatorname{AGG4}}$ & 0.2136 & 0.2326 & 0.3540 & 0.3729 & 0.2556 & 0.2651 \\ \midrule STAR-HiT & 0.5186 & 0.5385 & 0.6381 & 0.6486 & 0.3921 & 0.4057 \\ \bottomrule \end{tabular} \label{tab:perf_variants} \end{table*} Table \ref{tab:perf_variants} compares the performance of STAR-HiT and its variants in terms of NDCG@5 and NDCG@10. From Table \ref{tab:perf_variants}, we have three key observations: \begin{itemize} \item \textbf{Attention module: }Removing the global attention layer leads to significant performance drops in terms of NDCG@5 by 12.05\%-33.86\% and NDCG@10 by 11.26\%-33.11\% on the three datasets. Besides, removing the local attention layer leads to relatively slight performance drops with regard to NDCG@5 by 2.63\%-9.08\% and NDCG@10 by 2.23\%-8.74\%. Since the local attention is calculated within each extracted subsequence, it may be affected by the performance of the sequence partition layer. This may be the reason why the local attention layer provides less performance improvement compared to the global attention layer. Overall, jointly removing the attention module makes the model unable to make full use of the context, resulting in even worse performance, that is, the performance drops in terms of NDCG@5 by 16.01\%-41.01\% and NDCG@10 by 15.11\%-41.34\%. \item \textbf{Subsequence aggregation module: }Disabling the multi-granularity subsequence aggregation, STAR-HiT$_{\operatorname{AGG2}}$ performs 11.78\%-63.00\% and 10.75\%-60.00\% worse than STAR-HiT in terms of NDCG@5 and NDCG@10, respectively. STAR-HiT$_{\operatorname{AGG4}}$ yields worse performance as well. Surprisingly, STAR-HiT$_{\operatorname{AGG4}}$ outperforms STAR-HiT$_{\operatorname{AGG2}}$ on Foursquare NYC, while performing notably worse on the others. The results indicate that STAR-HiT$_{\operatorname{AGG4}}$ may suffer from overfitting due to the sparsity of the data.
The performance comparison demonstrates the significance of involving not only POI-to-POI interactions but also subsequence-level context and interactions for recommendations, which is achieved by STAR-HiT. \item As expected, STAR-HiT outperforms all the variants by a large margin. Moreover, jointly analyzing Table \ref{tab:perf_baselines} and Table \ref{tab:perf_variants}, we can see that the variants of STAR-HiT still achieve competitive performance compared to baseline methods. The results emphasize the significance of joint context modeling through attention mechanisms and hierarchical structure modeling through multi-granularity semantic subsequence discovering. \end{itemize} \begin{table*}[htb] \centering \caption{Performance Comparison (NDCG@K) with STAR-HiT Variants of Fixed-Length Subsequences} \begin{tabular}{lcccccc} \toprule & \multicolumn{2}{c}{Foursquare NYC} & \multicolumn{2}{c}{Foursquare US} & \multicolumn{2}{c}{Gowalla} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & NDCG@5 & NDCG@10 & NDCG@5 & NDCG@10 & NDCG@5 & NDCG@10 \\ \midrule STAR-HiT$_{\operatorname{GA}}$ & 0.3430 & 0.3602 & 0.5612 & 0.5756 & 0.2988 & 0.3162 \\ STAR-HiT$_{\operatorname{GA-F}}$ & 0.1511 & 0.1828 & 0.5296 & 0.5487 & 0.2719 & 0.2860 \\ \midrule STAR-HiT$_{\operatorname{LA}}$ & 0.4715 & 0.4914 & 0.6213 & 0.6341 & 0.3569 & 0.3728 \\ STAR-HiT$_{\operatorname{LA-F}}$ & 0.2860 & 0.3050 & 0.5439 & 0.5635 & 0.3405 & 0.3556 \\ \midrule STAR-HiT$_{\operatorname{GLA}}$ & 0.3059 & 0.3158 & 0.5359 & 0.5505 & 0.2825 & 0.2970 \\ STAR-HiT$_{\operatorname{GLA-F}}$ & 0.1430 & 0.1665 & 0.5085 & 0.5262 & 0.2612 & 0.2758 \\ \midrule STAR-HiT & 0.5186 & 0.5385 & 0.6381 & 0.6486 & 0.3921 & 0.4057 \\ STAR-HiT$_{\operatorname{F}}$ & 0.3014 & 0.3240 & 0.5686 & 0.5874 & 0.3247 & 0.3411 \\ \bottomrule \end{tabular} \label{tab:perf_variants_2} \end{table*} The sequence partition layer in the hierarchical encoder adaptively partitions the input sequence into multiple semantic subsequences, wherein the offset (\textit{i}.\textit{e}., ${dx}_i$) and length (\textit{i}.\textit{e}., $k_i$) for each subsequence are learned. Offsets are predicted to shift the subsequences towards check-ins with strong correlation, whereas lengths are utilized to better maintain local semantics. They are both supposed to facilitate the performance of STAR-HiT. Accordingly, we conduct experiments to explore how the learnable subsequence locations affect the performance. In particular, we implement the variants of STAR-HiT, STAR-HiT$_{\operatorname{GA}}$, STAR-HiT$_{\operatorname{LA}}$, STAR-HiT$_{\operatorname{GLA}}$ by disabling the learnable offsets and lengths, that is, subsequences are fixed to be extracted by uniformly partitioning the input sequence. We denote these variants as STAR-HiT$_{\operatorname{F}}$, STAR-HiT$_{\operatorname{GA-F}}$, STAR-HiT$_{\operatorname{LA-F}}$, and STAR-HiT$_{\operatorname{GLA-F}}$, respectively. The experimental results in terms of NDCG@5 and NDCG@10 are shown in Table \ref{tab:perf_variants_2}. In general, models with learnable locations of subsequences improve over models with fixed-length subsequence extraction by 64.87\%-126.95\%, 5.40\%-14.24\%, 4.82\%-9.91\% in terms of NDCG@5, and 89.67\%-97.03\%, 4.63\%-12.52\%, 4.82\%-10.57\% in terms of NDCG@10 on three datasets. It is worth noting that learnable locations of subsequence boost the performance substantially in Foursquare NYC. 
This may be due to the stronger hierarchical structure present in the check-in sequences of Foursquare NYC, which is consistent with the findings in Section \ref{sec:params}. Partitioning the sequence into subsequences in a fixed way could potentially destroy the semantic information, eventually resulting in the failure to capture the hierarchical structure in check-in sequences. By adaptively partitioning the sequence into subsequences with positions and lengths learned from contextual information, STAR-HiT is able to model the hierarchical structure of the check-in sequence while well preserving the semantics in the subsequences. \subsection{Case Study (RQ3)} Jointly modeling the spatio-temporal context and multi-granularity semantic subsequences, STAR-HiT makes full use of the personalized sequential behavior pattern to predict the next visiting POI. In order to better understand the capability of STAR-HiT to model the latent hierarchical structure, we randomly select a check-in sequence sample (\textit{uid}: 172) from the test set of Foursquare NYC to conduct a case study. We first visualize the global attention weights of each head in each encoder to validate the capacity of STAR-HiT to encode spatio-temporal context. As shown in Figure \ref{fig:attentionmap}, different heads in Encoder-1 highlight different relevant check-ins, whereas heads in Encoder-2, after a round of subsequence aggregation, consistently focus on the last few check-in subsequences, which seem to be the most representative subsequences correlated with the target check-in. Encoder-1 is responsible for encoding the correlations between individual check-ins, thus check-ins at the same POI or with a spatio-temporal context similar to that of the target check-in are most likely to be assigned high weights. Note that even the weights of different check-ins at the same POI are not identical, due to the spatio-temporal context embedding that makes representations position-specific. Next, Encoder-2 focuses on subsequence-level correlations, thus the most relevant subsequences generally point to recent subsequences with similar periodicity. \begin{figure}[htb] \centering \subfigure[E-1, H-1]{ \includegraphics[height=0.21\linewidth]{figures/172_l0_h0.pdf}} \subfigure[E-1, H-2]{ \includegraphics[height=0.21\linewidth]{figures/172_l0_h1.pdf}} \subfigure[E-1, H-3]{ \includegraphics[height=0.21\linewidth]{figures/172_l0_h2.pdf}} \subfigure[E-1, H-4]{ \includegraphics[height=0.21\linewidth]{figures/172_l0_h3.pdf}} \subfigure[E-2, H-1]{ \includegraphics[height=0.21\linewidth]{figures/172_l1_h0.pdf}} \subfigure[E-2, H-2]{ \includegraphics[height=0.21\linewidth]{figures/172_l1_h1.pdf}} \subfigure[E-2, H-3]{ \includegraphics[height=0.21\linewidth]{figures/172_l1_h2.pdf}} \subfigure[E-2, H-4]{ \includegraphics[height=0.21\linewidth]{figures/172_l1_h3.pdf}} \caption{Heatmaps of global attention weights of a random sample (\textit{uid}: 172) from Foursquare NYC.} \label{fig:attentionmap} \end{figure} In addition, we visualize the correlation matrix $M \in \mathbb{R}^{L\times L}$ of check-in representations to validate the effectiveness of STAR-HiT in capturing the latent hierarchical structure of the check-in sequence.
The final check-in representations are obtained by: 1) concatenating the context-aware representations $\mathbf{E}(u)$ and their belonging subsequence representations in each encoder, and 2) calculating the Pearson Correlation Coefficient (PCC) between the normalized representations of the $i$-th and $j$-th check-ins, which serves as the element $M_{i,j}$ of the correlation matrix. The correlation matrix is illustrated in Figure \ref{fig:corr_mat}, and the check-in sequence segment corresponding to the region marked with the red-dotted rectangle is shown in Figure \ref{fig:seq_seg}. We also plot the visited POIs projected on the map in Figure \ref{fig:checkin_in_map}. From Figure \ref{fig:vis}, we can see that STAR-HiT uncovers the multi-granularity semantic subsequences. For daily regularity, check-ins on Tuesday (\textit{pid}: 8, 16, 5) are identified as a subsequence, while the check-ins from Tuesday noon to Wednesday noon (\textit{pid}: 8, 16, 5, 3, 2, 1) are extracted as another subsequence. Furthermore, the weekly regularity is discovered as well, that is, the first week marked with the red-dotted rectangle in Figure \ref{fig:corr_mat} is clearly separated from the following weeks. It should be noted that, as shown in Figure \ref{fig:seq_seg}, some subsequences aggregate the last check-in (\textit{pid}: 5) with former check-ins while some do not. This is reasonable since, even though the check-in at POI 5 occurs on Monday, it occurs in the early hours of Monday, so it is more likely to be on the same itinerary as the previous check-ins. \begin{figure*}[htb] \centering \begin{minipage}[t]{0.5\linewidth} \centering \subfigure[Correlation Matrix]{ \includegraphics[width=0.99\linewidth]{figures/vis_corr_mat.pdf} \label{fig:corr_mat} }\vspace{1pt} \subfigure[Check-in Sequence Segment]{ \includegraphics[width=0.99\linewidth]{figures/172_seq_segment.pdf} \label{fig:seq_seg} } \end{minipage} \begin{minipage}[t]{0.42\linewidth} \centering \subfigure[Map]{ \includegraphics[width=0.907\linewidth]{figures/172_map_v2_crop.png} \label{fig:checkin_in_map} } \end{minipage} \caption{Visualization of a random sample (\textit{uid}: 172) on the test set.} \label{fig:vis} \end{figure*} Overall, the proposed STAR-HiT can not only achieve superior recommendation performance, but also effectively capture the latent hierarchical structure present in check-in sequences, thereby providing explanations for personalized recommendations accordingly. \section{Conclusion} In this work, we explore the latent hierarchical structure of sequential behavior patterns exhibited in user movements. We propose a novel Spatio-Temporal context AggRegated Hierarchical Transformer (STAR-HiT) model for next POI recommendation, which consists of stacked hierarchical encoders to capture the latent hierarchical structure of check-in sequences. Specifically, the hierarchical encoder is designed to jointly model spatio-temporal context and locate semantic subsequences with different positions and lengths in an explicit way. In each encoder, the global and local attention layers enhance spatio-temporal context modeling by capturing inter- and intra-subsequence dependencies; meanwhile, the sequence partition layer and subsequence aggregation layer adaptively locate and fuse semantic subsequences, and generate a new sequence with a higher level of granularity. This sequence is then fed into the next encoder for further subsequence discovery and sequence abstraction.
By stacking multiple hierarchical encoders, semantic subsequences of different granularities are recursively identified and integrated, constituting the overall hierarchical structure present in the user movement. We perform extensive experiments on three public datasets to: 1) demonstrate that our proposed STAR-HiT outperforms state-of-the-art methods by a large margin, 2) get deep insights into the design of STAR-HiT, and 3) verify that STAR-HiT successfully captures multi-level semantic subsequences for revealing the latent hierarchical structure of the check-in sequence, so as to guarantee the robustness and explainability of the recommendation.
\section{Introduction} Quark-Gluon Plasma (QGP) is a deconfined state of quarks and gluons, and is currently a subject of much theoretical and experimental research. QGP is produced in heavy-ion collisions, e.g., Au$-$Au collisions at the Relativistic Heavy-ion Collider (RHIC) or Pb$-$Pb collisions at the Large Hadron Collider (LHC). Some of the signatures of QGP include heavy quarkonium ($J/\psi$ or $\Upsilon$) suppression, collective flow, and photon/dilepton production. Usually, heavy quarkonium is suppressed to a much larger extent if QGP is formed~\cite{mats, Chu, Madhu1, gans1, gans2, rishi, Madhu2} in heavy-ion collision experiments. However, at the LHC, the $J/\psi$ yield may actually increase due to recombination~\cite{gans3, stathadref, twice1ref, trans2ref}, which would obviously reduce the effective suppression. For proton-Lead (p$-$Pb) collisions, the heavy quarkonium yield may be explained using Cold Nuclear Matter (CNM) effects alone~\cite{medvel}. However, these do not necessarily prove beyond reasonable doubt that QGP is not formed in p$-$Pb collisions. There have also been attempts to explain the p$-$Pb data using hot nuclear matter effects~\cite{refA1, refA2, refA3, refA4}. In this work, we attempt to explore the yield enhancement of the charmonium state $\psi(2S)$ w.r.t. $J/\psi$, at high $p_T$, as a possible indication of the presence of QGP. One possible explanation for a yield enhancement is secondary recombination of a $c$ and $\bar{c}$ pair. However, it is unlikely to be the reason for $\psi(2S)$ enhancement at high $p_T$ in the case of p$-$Pb collisions. This is because both theoretical predictions and experimental data indicate that recombination decreases at high $p_T$~\cite{gans3, recomb}. Furthermore, secondary recombination depends quadratically on the number of $c$ and $\bar{c}$ pairs~\cite{thews, gans3, rituraj}, which would be quite small for p$-$Pb collisions. We argue that a possible reason for $\psi(2S)$ enhancement could be the gluon induced excitation of $J/\psi(1S)$ to $\psi(2S)$. We further argue that, in a medium with an equilibrated gluon distribution, this gluon induced excitation increases with the $p_T$ of the $J/\psi$. This can be understood in the following way. When the gluon medium achieves equilibrium, it follows the Bose-Einstein distribution, which results in the concentration of gluons in the low energy regime. The gluon density then decreases exponentially with the gluon energy, $E_g$. The mass difference between $\psi(2S)$ and $J/\psi(1S)$ is about $0.6$ GeV. This energy needs to be provided by the gluon. But in a Bose-Einstein distribution, $\frac{1}{\exp(E_g/T) - 1}$, most of the gluons have much smaller energies than $0.6$ GeV. However, for a $J/\psi$ with large $p_T$, there is a blue-shift in the gluon energy in the $J/\psi$ frame of reference in the forward direction, and a red-shift in the backward direction. We represent this Doppler shift as $D(v_{rel}, \theta) = \gamma \frac{E_g}{T}(1 - v_{rel}\cos(\theta))$, where $v_{rel}$ is the relative velocity between the medium and the $J/\psi$, and $\theta$ is the angle between $v_{rel}$ and the incoming gluon momentum. The Bose-Einstein distribution then becomes \begin{equation} f_g(E_g,v_{rel},\theta) = \frac{1}{\exp(D(v_{rel},\theta)) - 1}.
\end{equation} The modified Bose-Einstein distribution, $f_g(E_g,v_{rel},\theta)$, leads to an exponential increase in the availability of gluons with energy around $0.6$ GeV, leading to a significant number of $J/\psi$ getting excited to the $\psi(2S)$ state (a rough numerical estimate is given at the end of this Introduction). The above effect is elaborated on further in Sec.~\ref{sec:ATLAScomp} (see Fig.~\ref{fig:doppler}). As a side note, the mass difference of $0.6$ GeV is expected to decrease with temperature. Thus, one would qualitatively expect the gluon induced enhancement from $J/\psi(1S)$ to $\psi(2S)$ to increase with $p_T$. We explore this analytically in the framework of pNRQCD, and compare the results with the preliminary ATLAS data~\cite{ATLAS, new_ATLAS} at $5.02$ TeV. The $p_T$ values of $J/\psi$ and $\psi(2S)$ at ATLAS~\cite{ATLAS, new_ATLAS} are high. We perform the analysis in the rest frame of the $J/\psi$. In this frame, the gluons of interest have an energy of around $0.6$ GeV (the mass difference between $J/\psi$ and $\psi(2S)$) after the blue-shift. The population of higher energy gluons decreases with increasing energy. Since the gluons of interest are ultrasoft even in the $J/\psi$ rest frame, the phenomenon can be analyzed within the framework of pNRQCD. pNRQCD, as an effective field theory, was initially proposed in~\cite{pineda2}. A good overview of pNRQCD is given in~\cite{pineda,nora3}. The suppression effects are also included in the present work. For an apples-to-apples comparison of gluon induced dissociation with the gluon induced enhancement presented in this work, we utilize the model of gluon induced dissociation developed in~\cite{Nendzig}, which is based on the same pNRQCD Lagrangian. We calculate the suppression of $\psi(2S)$ and $J/\psi$, and compare it with ALICE data~\cite{alicepsi}. A crucial aspect is that the binding energy of $\psi(2S)$ is much smaller than the energy gap between $J/\psi$ and $\psi(2S)$. This is expected to make the $\psi(2S)$ dissociation significantly larger than the $J/\psi$ to $\psi(2S)$ excitation. However, the relatively small and finite QGP size (lifetime) in a p$-$Pb collision imposes significant restrictions on the $\psi(2S)$ dissociation, especially at high $p_T$. The dissociation process needs to be completed within the QGP phase itself, and not carry over into the hadronic phase. In the hadronic phase, the intermediate octet state (after absorption of a gluon) is much more likely to emit a gluon and form a bound state than to dissociate into a naked $c$ and $\bar{c}$. We revisit this point in Sec.~\ref{sec:Results}. This causes the dissociation rate to decrease. Seminal work on heavy quark bound states and their interaction with gluons was done by Peskin~\cite{Pes1} and Bhanot and Peskin~\cite{Pes2}. Subsequent work on heavy quark bound states can be found in~\cite{ghig, nora1, nora2, Nendzig}. The organization of the rest of the article is as follows. In Sec.~\ref{sec:formulation}, the $1S \rightarrow 2S$ transition cross-section, $\sigma$, and the excitation rate, $\Gamma_{1S\rightarrow 2S}$, are calculated. The dissociation processes are outlined in Sec.~\ref{sec:diss}. This is followed by Sec.~\ref{sec:Results}, where the results are shown and compared with both ATLAS and ALICE experimental data. Finally, in Sec.~\ref{sec:conclusion}, we draw our conclusions.
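Before moving on, it is useful to get a rough feel for the size of the Doppler effect introduced above. With illustrative values $T = 0.2$ GeV and $E_g = 0.6$ GeV (these numbers are meant only as an order-of-magnitude estimate, not as the values used in the fits below), a $J/\psi$ at rest sees a thermal occupation
\begin{equation}
f_g = \frac{1}{e^{E_g/T} - 1} = \frac{1}{e^{3} - 1} \approx 0.05,
\end{equation}
whereas for $v_{rel} = 0.9$ ($\gamma \approx 2.3$) and gluons in the forward direction ($\cos\theta \rightarrow 1$), where the exponent $D$ is smallest, one finds $D \approx 2.3 \times 3 \times (1 - 0.9) \approx 0.69$ and hence $f_g \approx 1/(e^{0.69} - 1) \approx 1.0$, i.e., roughly a twenty-fold increase in the number of gluons able to supply the $J/\psi \rightarrow \psi(2S)$ mass gap.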
\section{{\MakeLowercase p}NRQCD formulation} \label{sec:formulation} \subsection{Derivation of the cross-section} In this section, the cross-section $\sigma$ is calculated using the potential non-relativistic perturbative QCD (pNRQCD) formulation. The corresponding excitation rate $\Gamma_{1S \rightarrow 2S}$ is calculated in the sub section~\ref{sec:rate_constant}, at the end of the current section. The relevant part of the pNRQCD Lagrangian used for the calculation is given by \begin{math} \nonumber g Tr \left ( S^{\dagger}(\vec{r}.\vec{E})O + O^{\dagger}(\vec{r}.\vec{E})S \right ). \end{math} The variables, $S$, $O$, $E$ and $g$, refer to the singlet, octet, gluonic chromo-electric field and coupling constant, respectively. Figure~\ref{fig:1S_2S} depicts the Feynman diagrams for the processes involved. The net amplitude would be the sum of the two diagrams shown in Fig.~\ref{fig:1S_2S}. In the second diagram, the incoming singlet first emits an outgoing gluon of $3$-momentum $\vec{k}_2$, and subsequently absorbs a gluon with $3$-momentum $\vec{k}_1$. The possible singlet states, which could significantly contribute to $\psi(2S)$ via gluo-excitation can be $J/\psi$, $\eta_c$ or $\chi(1P)$. We mainly focus on $J/\psi$. The case, when the incoming singlet is either $\eta_c$ or $\chi(1P)$, is discussed in the latter part of this section. \begin{figure}[h!] \includegraphics[width=80mm,height=80mm]{1S_2S_b.eps} \caption{Feynman diagrams for gluon induced excitation of $1S$ state to $2S$ state. The second diagram is just the first diagram with gluons interchanged.} \label{fig:1S_2S} \end{figure} \label{sec:NRQCD} The notation and variables used are described as follows. The center-of-mass coordinates are: \begin{itemize} \item $\vec{R}_i = (\vec{x}_{qi} + \vec{x}_{\bar{q}i})/2$; ~~$\vec{R}_f = (\vec{x}_{qf} + \vec{x}_{\bar{q}f})/2$. \end{itemize} The relative motion (RM) coordinates are: \begin{itemize} \item $\vec{r}_i = (\vec{x}_{qi} - \vec{x}_{\bar{q}i})$; ~~~~~~$\vec{r}_f = (\vec{x}_{qf} - \vec{x}_{\bar{q}f})$. \end{itemize} The singlet and octet fields are $S = S_{nlm}\,I_3/\sqrt{N_c}$ and $O_q = \sqrt{2} O_q^bT^b$, with $\frac{q^2}{m_c}$ being the energy eigenvalues of the octet state, and $T^b$ being the generators of $SU(N_c)$. $m_c$ is the mass of the charm quark and anti-quark. In particular, we denote the singlet $1S$ and $2S$ wavefunctions as $S_{1S}(\vec{x})$ and $S_{2S}(\vec{x})$, respectively. $\vec{P}_1$, $\vec{P}_2$, $\vec{Q}$, $\vec{k}_1$ and $\vec{k}_2$ refer to the $3$-momentum of the incoming $J/\psi$, outgoing $\psi(2S)$, octet, incoming gluon and outgoing gluon, respectively. The variables, $m_{1S}$, $m_{2S}$ and $m_o$, are the invariant masses of the $J/\psi$, $\psi(2S)$ and octet states, respectively, while $\vec{p}_1$, $\vec{p}_2$ and $\vec{q}$ refer to the relative $3$-momentum between the $c\bar{c}$ pair comprising these particles. $k_{0x}$ is the energy corresponding to $\vec{k}_x$, with $x=1,2$. We model the process at leading order (LO) in pNRQCD, in a manner similar to~\cite{Nendzig}. The RM octet propagator at LO would be \begin{eqnarray} \nonumber P^{rm}_O(\vec{x},x_0,\vec{y},y_0)\\ \nonumber = \sum_{qlm} O^{b}_{qlm}(\vec{y})O^{* b'}_{qlm}(\vec{x})\delta^{bb'}e^{-i\frac{q^2}{m_c}(y_0 - x_0)}\\ \rightarrow \sum_{lm}\int dq O^b_{qlm}(\vec{y})O^{* b'}_{qlm}(\vec{x})\delta^{bb'}e^{-i\frac{q^2}{m_c}(y_0 - x_0)}, \end{eqnarray} as the octet states, represented by $q$ is a continuum of states. The octet wavefunctions are normalized to the Dirac delta function. 
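Explicitly, the continuum normalization used here may be written (in our notation) as
\begin{equation}
\int d^3x \, O^{*}_{q'l'm'}(\vec{x})\, O_{qlm}(\vec{x}) = \delta(q - q')\,\delta_{ll'}\,\delta_{mm'},
\end{equation}
which is what allows the discrete sum over octet states to be replaced by the integral over $q$ above.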
The RM singlet propagator would be \begin{eqnarray} \nonumber P^{rm}_S(\vec{x},x_0,\vec{r}_i,t_i) = \\ \sum_{nlm} S_{nlm}(\vec{x})S^{*}_{nlm}(\vec{r}_i)e^{-iE_{S}(x_0 - t_i)} \end{eqnarray} The variable $E_{S}$ represents the singlet energy eigenvalues. The center-of-mass (CM) octet propagator would be \begin{eqnarray} \nonumber P^{cm}_{O}(\vec{Y},y_0,\vec{X},x_0) = \\ \int \frac{dE_v}{2\pi} \int \frac{d^3Q}{(2\pi)^3} e^{-i\left ( E_v(y_0 - x_0) + \vec{Q}.(\vec{Y}-\vec{X})\right )}. \end{eqnarray} Here, $E_v$ is the center of mass energy of the virtual, off-shell, octet state. The center-of-mass propagator for the incoming singlet would be \begin{eqnarray} \nonumber P^{cm}_{S}(\vec{X},x_0,\vec{R}_i,t_i) = \\ \int \frac{d^3P_1}{(2\pi)^3} e^{-i \left ( \frac{\vec{P}_1^2}{2m_s}(x_0 - t_i) + \vec{P}_1.(\vec{X}-\vec{R}_i) \right ) }. \end{eqnarray} Similarly, let $P^{cm}_{S}(\vec{R}_f,t_f,\vec{Y},y_0)$ be the center-of-mass propagator for the outgoing singlet. The overall amplitude from $(\vec{r}_i,t_i)$ to $(\vec{r}_f,t_f)$, including the vertex factors, would be \begin{eqnarray} \nonumber G(r_f, t_f, R_f, r_i, t_i, R_i) = \\ \nonumber g^2 C \int dx_0 \int dy_0 \int d^3X \int d^3 Y \int d^3x \int d^3y \\ \nonumber \times \Big \{ P^{cm}_S(\vec{R}_f,t_f,\vec{Y},y_0)P^{rm}_S(\vec{r}_f,t_f,\vec{y},y_0)(\vec{y}.\vec{E}^{a*}_2)\\ \nonumber \times P^{cm}_O(\vec{Y},y_0,\vec{X},x_0)P^{rm}_O(\vec{y},y_0,\vec{x},x_0)(\vec{x}.\vec{E}^a_1)\\ \nonumber \times P^{cm}_S(\vec{X},x_0,\vec{R}_i,t_i)P^{rm}_S(\vec{x},x_0,\vec{r}_i,t_i) \Big \}\\ + {gluons~interchanged~terms,~~} \end{eqnarray} where, \begin{eqnarray} \nonumber C = Tr\left[ \frac{I_3}{\sqrt{N_c}}T^a\sqrt{2}T^b\right] \times Tr\left[ \frac{I_3}{\sqrt{N_c}}\sqrt{2}T^b T^c\right]\\ \nonumber = \frac{\delta^{ac}}{2N_c}. \end{eqnarray} The superscripts "a" and "c" refer to the species of the incoming and outgoing gluons. The $T$ matrix for the $1S$ to $2S$ transition is then given by \begin{eqnarray} \nonumber T(1S \rightarrow 2S) = \int d^3 r_i \int d^3 r_f e^{-i\left ( \vec{P}_2.R_f\right )}\\ S_{2S}^*(r_f)G(r_f, t_f, R_f, r_i, t_i, R_i) S_{1S}(r_i)e^{i\left ( \vec{P}_1.R_i\right )}. \end{eqnarray} The term $\vec{x}.\vec{E}^a_1$ evaluates to $k_{01}(\vec{x}.\hat{\epsilon}_1)e^{-i\left ( \vec{k}_1.\vec{X} + k_{01}x_0 \right )}$, where $\hat{\epsilon}_1$ is the polarization of the incoming gluon. Similarly, for the outgoing gluon, $\vec{y}.\vec{E}^{a*}_2$ evaluates to $k_{02}(\vec{y}.\hat{\epsilon}^*_2)e^{i\left (\vec{k}_2.\vec{Y} + k_{02}y_0 \right )}$. We also normalize the singlet wavefunctions as: \begin{eqnarray} \nonumber S_{n'l'm'}(\vec{x})S^*_{nlm}(\vec{r_i}) = \delta^3(\vec{r}_i - \vec{x})\delta_{n,n'}\delta_{l,l'}\delta_{m,m'}\\ S_{n'l'm'}(\vec{r_f})S^*_{nlm}(\vec{y}) = \delta^3(\vec{y} - \vec{r}_f)\delta_{n,n'}\delta_{l,l'}\delta_{m,m'} \end{eqnarray} where, we have dropped the superscript "b", and taken the wavefunction $O^b_{qlm}(\vec{x}) = O_{qlm}(\vec{x})$. 
With all these, \begin{eqnarray} \nonumber T(1S \rightarrow 2S) = \Bigg [ g^2 C \int \frac{dE_v}{2\pi}\\ \nonumber \sum_{lm} \int dq \langle S_{2S}|(\vec{y}.\hat{\epsilon}^*_2)|O_{qlm} \rangle \langle O_{qlm}|(\vec{x}.\hat{\epsilon}_1)|S_{1S} \rangle \\ \nonumber \times \int d^3X \int d^3Y \int dx_0 \int dy_0\\ \nonumber \times e^{-i \left ( \vec{P}_2.(\vec{R}_f - \vec{Y}) + \frac{\vec{P}_2^2}{2m_{2S}}(t_f - y_0) + E_{1S}(t_f - y_0) \right ) }\\ \nonumber \times e^{-i \left ( \vec{Q}.(\vec{Y}- \vec{X}) + E_v(y_o - x_o) + \frac{q^2}{m_c}(y_o - x_o) \right )}\\ \nonumber \times e^{-i \left ( \vec{P}_1.(\vec{X}- \vec{R}_i) + \frac{\vec{P}_1^2}{2m_{1S}}(x_o - t_i) + E_{2S}(x_o - t_i) \right )}\\ \nonumber \times e^{-i\left ( \vec{k}_1.\vec{X} + k_{01}x_0 \right )} e^{i\left (\vec{k}_2.\vec{Y} + k_{02}y_0 \right )} \Bigg ]\\ + {gluons~interchanged~terms,~~} \end{eqnarray} where, \begin{eqnarray} \nonumber \langle O_{qlm}|(\vec{x}.\hat{\epsilon}_1)|S_{1S} \rangle =\\ \nonumber \int d^3x O^*_{qlm}(\vec{x})(\vec{x}.\hat{\epsilon}_1)S_{1S}(\vec{x}),\\ \nonumber and~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \nonumber \langle S_{2S}|(\vec{y}.\hat{\epsilon}^*_2)|O_{qlm} \rangle =\\ \int d^3y S^*_{2S}(\vec{y})(\vec{y}.\hat{\epsilon}^*_2)O_{qlm}(\vec{y}). \end{eqnarray} The variables $E_{1S}$ and $E_{2S}$ are the energy eigenvalues for the input $J/\psi$ and output $\psi(2S)$ singlet states respectively. Finally, performing all the integrals: \begin{eqnarray} \label{eq:tmatrix} \nonumber T(1S \rightarrow 2S) = g^2 C (2\pi) k_{01}k_{02} \int \frac{dE_v}{2\pi} \sum_{lm}\\ \nonumber \Bigg [ \Big \{ \langle S_{2S}|(\vec{y}.\hat{\epsilon}^*_2)|O_{q1,lm} \rangle \langle O_{q1,lm}|(\vec{x}.\hat{\epsilon}_1)|S_{1S} \rangle \\ \nonumber \times \frac{1}{2}{\sqrt \frac{m_c}{\frac{\vec{P}_1^2}{2m_{1S}} - E_v + k_{01} + E_{1S}} } \Big \} \\ \nonumber + \Big \{ \langle S_{2S}|(\vec{y}.\hat{\epsilon}_1)|O_{q2,lm} \rangle \langle O_{q2,lm}|(\vec{x}.\hat{\epsilon}^*_2)|S_{1S} \rangle \\ \nonumber \times \frac{1}{2}{\sqrt \frac{m_c}{\frac{\vec{P}_1^2}{2m_{1S}} - E_v - k_{02} + E_{1S}} } \Big \} \Bigg ]\\ \nonumber \times (2\pi)^3\delta^3(\vec{P}_1 + \vec{k}_1 - \vec{P}_2 - \vec{k}_2) e^{-i\phi}\\ \nonumber \times (2\pi)\delta( \frac{\vec{P}_1^2}{2m_{1S}} + k_{01} - \frac{\vec{P}_2^2}{2m_{2S}} - k_{02} - \Delta m),\\ \end{eqnarray} where, $\phi$ is an arbitrary phase factor $= \vec{P}_2.\vec{R}_f - \vec{P}_1.\vec{R}_i + (\frac{\vec{P}_2^2}{2m_{2S}} + E_{2S})t_f - (\frac{\vec{P}_1^2}{2m_{1S}} + E_{1S})t_i $, and $\Delta m = E_{2S} - E_{1S} (= m_{2S} - m_{1S})$. The phase factor does not appear in the final expression for the cross-section. Finally, the values of $q1$ and $q2$ appearing in Eq.~\ref{eq:tmatrix} are given by: \begin{eqnarray} \nonumber \frac{q1^2}{2m_o}= \frac{\vec{P}_1^2}{2m_{1S}} - E_v + k_{01} + E_{1S}\\ \nonumber \frac{q2^2}{2m_o}= \frac{\vec{P}_1^2}{2m_{1S}} - E_v - k_{02} + E_{1S}. \end{eqnarray} The values of $E_v$ used in simulation correspond to varying the octet energy eigenvalues, $\frac{q_1^2}{2m_o}$ and $\frac{q_2^2}{2m_o}$ from $0.1$ GeV to infinity (represented by $100$ GeV). In the $J/\psi$ rest frame, $\vec{P}_1 = 0$. Further, one can also ignore $\frac{P_2^2}{2m_{2S}}$, which simplifies the calculations. 
From the above $T$ matrix, $T(1S \rightarrow 2S)$, we extract out the energy and momentum conserving $\delta$ functions to get the $M$ matrix, \begin{eqnarray} \nonumber M(1S \rightarrow 2S) = 2\pi g^2 C k_{01} k_{02} M_c e^{-i\phi}, \end{eqnarray} with \begin{eqnarray} \nonumber M_c = \int \frac{dE_v}{2\pi}\\ \nonumber \Bigg \{ \Bigg [ \sum_{lm} \langle S_{2S}|(\vec{y}.\hat{\epsilon}^*_2)|O_{q1,lm} \rangle \langle O_{q1,lm}|(\vec{x}.\hat{\epsilon}_1)|S_{1S} \rangle \\ \nonumber \times \left ( \frac{1}{2} {\sqrt \frac{m_c}{k_{01} + E_{1S} - E_v} }\right ) \Bigg ]\\ \nonumber + \Bigg [ \sum_{lm}\langle S_{2S}|(\vec{y}.\hat{\epsilon}_1)|O_{q2,lm} \rangle \langle O_{q2,lm}|(\vec{x}.\hat{\epsilon}^*_2)|S_{1S} \rangle \\ \nonumber \times \left ( \frac{1}{2} {\sqrt \frac{m_c}{-k_{02} + E_{1S} - E_v} }\right ) \Bigg ] \Bigg \}.\\ \end{eqnarray} We use the $M$ matrix to calculate the cross-section. The average $1S \rightarrow 2S$ transition cross-section, $\sigma$, after dividing by the number of input gluons, is then given by \begin{eqnarray} \label{eq:sigma1} \nonumber \sigma = \frac{1}{2E^T_{1S}2k_1(1 - v_{1S})}\\ \nonumber \times \int \frac{d^3P_2}{(2\pi)^3 2E^T_{2S}}\int \frac{d^3k_2}{(2\pi)^3 2k_{02}} \\ \nonumber \times C_g \Bigg [ \left ( (2\pi) M_c k_{01}k_{02} \right )^*\left ( (2\pi) M_c k_{01}k_{02} \right )\Bigg ]\\ \nonumber \times (2\pi)^3\delta^3(\vec{P}_2 + \vec{k}_2 - \vec{k}_1) \\ \times (2\pi) \delta \left ( k_{01} - \frac{\vec{P}_2^2}{2m_{2S}} - k_{02} - \Delta m \right), \end{eqnarray} where, $C_g = g^2\frac{1}{(2N_c)^2} = 4\pi \alpha \frac{1}{(2N_c)^2}$. The expression $\delta (k_{01} - \frac{\vec{P}_2^2}{2m_{2S}} - k_{02} - \Delta m)$ in Eq.~\ref{eq:sigma1} gives $k_{02} = k_{01} - \frac{\vec{P}_2^2}{2m_{2S}} - \Delta m \approx k_{01} - \Delta m$. This value of $k_{02}$ makes $M_c$ independent of both $\vec{P}_2$ and $k_2$, and thus can be taken outside the $d^3P_2$ and $d^3k_2$ integrals. It is also understood that $k_2$ = $|\vec{k}_2|$ =$k_{02}$, and similarly $k_1$ = $|\vec{k}_1|$ = $k_{01}$. In the $J/\psi$ rest frame, $v_{1S} = 0$. We decompose $\vec{k}_2$ and $\vec{P}_2$ into parallel and perpendicular components to $\vec{k}_1$, and approximating $E^T_{2S} = m_{2S} + \frac{P_2^2}{2m_{2S}} \approx m_{2S}$, we get \begin{eqnarray} \nonumber \sigma = \frac{C_g(2\pi)}{16m_{1S}{m_{2S}}} k_1 M_c^2 \int d^3k_2 k_2 k_{2\perp}\\ \times \delta \left ( k_1 - \frac{(k_1 - k_{2||})^2 + k_{2\perp}^2}{2m_{2S}} - k_2 - \Delta m \right ). \end{eqnarray} Substituting $k_{2\perp} = k_2 \sin(\alpha)$ and $k_{2||} = k_2 \cos(\alpha)$, with $\alpha$ being the angle between $\vec{k}_1$ and $\vec{k}_2$, and evaluating the $\int dk_2$ integral, we obtain: \begin{eqnarray} \label{eq:sigma} \nonumber \sigma = \frac{C_g (2\pi)^2}{16m_{1S}m_{2S}}k_1 M_c^2\\ \times \int_0^{\pi} \left [\sin^2\alpha {\sqrt \frac{m_{2S}}{2\Delta} k_2^4} \right ] d\alpha, \end{eqnarray} with $\Delta = \frac{(k_1 \cos(\alpha) - m_{2S})^2}{2m_{2S}} - \left ( k_1 - \frac{k_1^2}{2m_{2S}} - \Delta m \right)$, and $k_2$ is now determined in terms of $k_1$ and others, and is equal to $\sqrt{2m_{2S} \Delta} + (k_1 cos(\alpha) - m_{2S})$. One can see that there is a pole when $\Delta \rightarrow 0$. This pole is unphysical, and occurs when $k_1$ is large. In other words, this implies that the above formulation is invalid for very large values of $k_1$. In our simulations, we have limited the value of $k_1$ to $1.0$ GeV. 
Due to the Bose enhancement of the outgoing gluon, we scale the cross-section $\sigma$ in Eq.~\ref{eq:sigma} by the factor $ f_{BE} = (1 + \frac{1}{\exp(k_{02}/T) - 1}) = (1 + \frac{1}{\exp(k_2/T) - 1})$. Thus we finally obtain: \begin{eqnarray} \nonumber \sigma = \frac{C_g (2\pi)^2}{16m_{1S}m_{2S}}k_1 \\ \times M_c^2 \int_0^{\pi} \left [f_{BE}\sin^2\alpha {\sqrt \frac{m_{2S}}{2\Delta} k_2^4} \right ]d\alpha. \end{eqnarray} \subsection{Evaluation of the correlation term} We now evaluate the correlation term $\sum_{l'm'}\langle S_{2S}|(\vec{y}.\hat{\epsilon}^*_2)|O_{ql'm'}\rangle \langle O_{ql'm'}|(\vec{x}.\hat{\epsilon}_1)|S_{1S}\rangle$. Trivially, $\vec{x}.\hat{\epsilon}_1 = |\vec{x}| (\hat{x}. \hat{\epsilon}_1)$, where $\hat{x}$ is a unit vector along $\vec{x}$. Similarly, $\vec{y}.\hat{\epsilon}^*_2 = |\vec{y}| (\hat{y}.\hat{\epsilon}^*_2)$. We can now separate the radial and angular part of the correlation term. In general, \begin{eqnarray} \nonumber \langle O_{ql'm'}|(\vec{x}.\hat{\epsilon}_1)|S_{nlm} \rangle = \\ \nonumber \int dx\,x^2 O^*_q(|\vec{x}|)\,|\vec{x}|\,S_{n}(|\vec{x}|) \\ \times \int d\Omega Y^*_{l'm'}(\theta,\phi)(\hat{x}.\hat{\epsilon}_1) Y_{lm}(\theta,\phi), \end{eqnarray} where $S_n(|\vec{x}|)$ and $O_q(|\vec{x}|)$ are the pure radial part of the singlet and octet wavefunction, respectively. For the input gluon, we average over the $\epsilon_{+}$ and $\epsilon_{-}$ polarization states of the gluon, giving the angular part as: \begin{eqnarray} \label{eq:eps_ave} \nonumber \frac{1}{2}\int d\Omega \Big \{ Y^*_{l'm'}(\theta,\phi)\\ \times \left (\frac{\sin(\theta)e^{i\phi}}{\sqrt 2} + \frac{\sin(\theta)e^{-i\phi}}{\sqrt 2} \right ) Y_{lm}(\theta,\phi) \Big \}, \end{eqnarray} where $Y_{lm}(\theta, \phi)$ and $Y^*_{l'm'}(\theta, \phi)$ are the spherical harmonics. The variables $l$ and $l'$ denote the azimuthal quantum number, while, $m$ and $m'$ denote the magnetic quantum number. The integral $d\Omega$ is over the whole solid angle, defined by the angles $\theta$ and $\phi$. For the output gluon, we sum over the $\epsilon_{+}$ and $\epsilon_{-}$ polarization states of the gluon giving, \begin{math} \nonumber \int d\Omega Y^*_{lm}(\theta,\phi)\left (\frac{\sin(\theta)e^{-i\phi}}{\sqrt 2} + \frac{\sin(\theta)e^{i\phi}}{\sqrt 2} \right ) Y_{l'm'}(\theta,\phi). \end{math} Let us analyze the expression a little bit more. For $\epsilon = \epsilon_+$, the angular part evaluates to \begin{eqnarray} \label{eq:eps_p} \int d\Omega Y^*_{l'm'}(\theta,\phi)(\frac{\sin(\theta)e^{i\phi}}{\sqrt 2}) Y_{lm}(\theta,\phi). \end{eqnarray} The $d\Omega$ integral evaluates to $1$, if $l' = l+1$, and $m' = m+1$, and $0$, otherwise. Similarly, when $\epsilon = \epsilon_{-}$, the $d\Omega$ integral evaluates to $1$, if $l' = l+1$, and $m' = m-1$, and $0$, otherwise. This indicates that when the singlet is in $1S$ state i.e. $l=0$, it will transit to an octet $1P$. On evaluating the other correlation term $\langle S_{2S}|(\vec{y}.\hat{\epsilon}^*_2)|O_q\rangle$, in a similar manner, we find that the octet $1P$ will be converted to singlet $\psi(2S)$. 
Putting all this together, the product of the two correlation function evaluates to: \begin{eqnarray} \nonumber \frac{1}{2}\sum_{l'm'} \Big [\int d\Omega Y^*_{00}(\theta,\phi)\\ \nonumber \times \left (\frac{\sin(\theta)e^{-i\phi}}{\sqrt 2} + \frac{\sin(\theta)e^{i\phi}}{\sqrt 2} \right ) Y_{l'm'}(\theta,\phi)\\ \nonumber \times \int d\Omega Y^*_{l'm'}(\theta,\phi)\\ \times \left (\frac{\sin(\theta)e^{i\phi}}{\sqrt 2} + \frac{\sin(\theta)e^{-i\phi}}{\sqrt 2} \right ) Y_{00}(\theta,\phi) \Big ] \end{eqnarray} The expression is non-zero only for $l' = 1$. This gives the angular part as: \begin{eqnarray} \nonumber \frac{1}{2}\Bigg [ \Bigg ( \int d\Omega \frac{1}{\sqrt{4\pi}}\frac{\sin(\theta)e^{-i\phi}}{\sqrt 2}\frac{3}{\sqrt{8\pi}}\sin(\theta)e^{i\phi} \\ \nonumber \times \int d\Omega \frac{3}{\sqrt{8\pi}}\sin(\theta)e^{-i\phi}\frac{\sin(\theta)e^{i\phi}}{\sqrt 2} \frac{1}{\sqrt{4\pi}} \Bigg )\\ \nonumber + \Bigg ( \int d\Omega \frac{1}{\sqrt{4\pi}}\frac{\sin(\theta)e^{i\phi}}{\sqrt 2} \frac{3}{\sqrt{8\pi}}\sin(\theta)e^{-i\phi} \\ \times \nonumber \int d\Omega \frac{3}{\sqrt{8\pi}}\sin(\theta)e^{i\phi}\frac{\sin(\theta)e^{-i\phi}}{\sqrt 2}\frac{1}{\sqrt{4\pi}} \Bigg ) \Bigg ]\\~~~~~~~~~~~~~~~~~~~=1.~ \end{eqnarray} We are now in a position to analyze the cross-section for $\chi_c(1P)$ and $\eta_c$ particles transition to $\psi(2S)$. \subsection{$\chi(1P)$} From Eq.~\ref{eq:eps_p} and from properties of spherical harmonics, it can be seen that in general, the expression in Eq.~\ref{eq:eps_p} is of the form \begin{eqnarray} \nonumber \delta(l+1-l')\delta(m+1-m')(..) \\ + \delta(l-1-l')\delta(m+1-m')(..), \end{eqnarray} for the $\epsilon_+$ polarization. Similarly, \begin{eqnarray} \nonumber \delta(l+1-l')\delta(m-1-m')(..) \\ + \delta(l-1-l')\delta(m-1-m')(..), \end{eqnarray} for the $\epsilon_{-}$ polarization. This would imply that when the input singlet is in $1P$ state, the octet would be either in $l=0$ or $l=2$ state. This further implies that the output singlet will have to be in either $l=1$ or $l=3$ state, but certainly not $l=0$ state. As a consequence of this, the cross-section for $\chi(1P)$ to $\psi(2S)$, transition would be zero. However, as it may occur through other mechanisms, we expect $\psi(2S)$ production from $\chi(1P)$ to be suppressed. \subsection{$\eta_c$} The particle, $\eta_c$, is a color singlet and spin singlet particle. A chromo-magnetic sector is required to modify the spin state. A chromo-magnetic vertex would be characterized by the operator $\frac{1}{2m_c}\sigma.\vec{B}^a$, where $m_c$ is the charm quark mass, and $\vec{B}^a$ is the chromo-magnetic field. For a real transverse gluon, $B_z^a$ = 0. This gives the vertex operator as $\frac{1}{2m_c}(\sigma_xB_x^a + \sigma_yB_y^a)$. This operator would convert a spin $0$ singlet state to spin $\pm 1$ state for the octet, depending on the gluon polarization. From considerations of conservation of total angular momentum, the octet wavefunction would then be an s-wave (i.e., $l=0$). This can also be seen explicitly by evaluating the angular part of the correlation function $\langle O|\frac{1}{2m_c}\sigma.\vec{B}^a|S \rangle $. This implies that the second vertex involving the outgoing gluon would again be a chromo-magnetic vertex, since a chromo-electric vertex would modify the value of $l$ by $1$. Again from considerations of conservation of angular momentum, a chromo-magnetic vertex would require the spin of the outgoing singlet to be $0$. But the spin of $\psi(2S)$ is $1$. 
Hence, to order $\alpha^2$, a transition from $\eta_c$ to $\psi(2S)$ is not possible. One could, however, argue that if one of the gluons is off-shell, the longitudinal polarization of the virtual gluon can lead to the creation of $\psi(2S)$ with longitudinal polarization. We know from~\cite{bodwin} that the chromo-magnetic vertex is higher in the velocity expansion than the chromo-electric vertex by a factor of the heavy quark velocity $v$. Hence, the amplitude for the process $\eta_c \rightarrow octet \rightarrow \psi(2S)$ would be suppressed by a factor $v^2$, and the cross-section by a factor $v^4$. Due to the above reasons, we ignore the contribution of both $\chi(1P)$ and $\eta_c$ particles to $\psi(2S)$ via gluo-excitation. \subsection{The rate constant $\Gamma_{1S\rightarrow 2S}$} \label{sec:rate_constant} The rate constant, $\Gamma_{1S\rightarrow 2S}$, is obtained by integrating the $1S \rightarrow 2S$ transition cross-section, $\sigma$, with the gluon distribution function, $f_g(E_g,v_{rel},\theta) = \frac{g_d}{e^{\frac{\gamma E_g}{T}(1 - v_{rel} \cos(\theta))} - 1}$, where $g_d = 16$ is the number of gluon degrees of freedom. In this section, we have used $E_g$ in place of $k_1$ (or $k_{01}$) for the incoming gluon energy. In other words, \begin{eqnarray} \label{eq:rateconstant} \nonumber \Gamma_{1S\rightarrow2S} = \\ \int \frac{1}{4\pi^2} E_g^2 \sin(\theta)f_g(E_g,v_{rel},\theta)\,\sigma\,dE_g d\theta \end{eqnarray} \begin{eqnarray} \nonumber = \frac{1}{4\pi^2}\int \frac{ E_g g_dT\,\sigma}{v_{rel}\gamma}\\ \times \ln\Bigg [ \frac{e^{\frac{\gamma E_g}{T}(1 + v_{rel})} - 1} {e^{\frac{2\gamma E_g v_{rel}}{T}} \left (e^{\frac{\gamma E_g}{T} (1- v_{rel})} - 1\right ) } \Bigg ]dE_g. \end{eqnarray} The fraction of the number of $1S$ particles converted to $2S$ is then given by \begin{math} \nonumber \Delta n_{1S\rightarrow 2S} = 1 - \exp\left (-\int_{t_0}^{t_{QGP}} \Gamma_{1S\rightarrow2S}\,dt \right ). \end{math} The variables $t_0$ and $t_{QGP}$ indicate the thermalization time and the lifetime of the QGP. The increment in $\psi(2S)$ yield is then \begin{equation} \label{eq:increment} \frac{N_{J/\psi}}{N_{\psi(2S)}}\Delta n_{1S\rightarrow 2S}, \end{equation} where $\frac{N_{J/\psi}}{N_{\psi(2S)}}$ is the ratio of the number of initial $J/\psi$ to $\psi(2S)$. As an estimate, we have taken $\frac{N_{J/\psi}}{N_{\psi(2S)}}$ to be equal to the ratio of the production cross-sections, $\frac{\sigma^{NN}_{J/\psi}}{\sigma^{NN}_{\psi(2S)} } = \frac{1}{0.3}$~\cite{rituraj}. \section{Dissociation Processes} \label{sec:diss} We now look at modeling the dissociation processes. There can be multiple dissociation processes, such as gluon-induced dissociation, collisional damping, and the Chu and Matsui mechanism of suppression due to color screening. \subsection{Gluon dissociation} This mechanism is very similar to the gluon-induced enhancement derived in this work. The $\psi(2S)$ absorbs a gluon, gets converted to an octet state, and finally dissociates. The derivation of the gluon dissociation cross section, based on pNRQCD, is outlined in \cite{Nendzig}. Since gluon dissociation and gluon-induced enhancement are both derived from the pNRQCD Lagrangian at leading order, this allows an apples-to-apples comparison between the two. This is discussed further in Sec.~\ref{sec:ALICEcomp}. 
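Since the same thermal average as in Eq.~\ref{eq:rateconstant} is applied below to the dissociation rate $\Gamma_{gdiss}$, a minimal numerical sketch of this average may help fix ideas; the cross-section $\sigma(E_g)$, the temperature, the relative velocity and the integration limits used here are illustrative placeholders rather than the values computed in this work.
\begin{verbatim}
# Sketch: thermal average of a cross-section over the Doppler-shifted
# Bose-Einstein gluon distribution, as in Eq. (rateconstant).
# sigma_of_Eg, T, v_rel and the E_g limits are illustrative placeholders.
import numpy as np
from scipy.integrate import quad

g_d = 16.0                     # gluon degrees of freedom
T = 0.170                      # temperature in GeV (assumed)
v_rel = 0.6                    # quarkonium-medium relative velocity (assumed)
gamma = 1.0 / np.sqrt(1.0 - v_rel ** 2)

def sigma_of_Eg(E_g):
    """Placeholder transition cross-section in GeV^-2 (not the computed one)."""
    return 0.1 * E_g ** 2 * np.exp(-E_g / 0.5)

def integrand(E_g):
    # theta integral done analytically: the log term of Eq. (rateconstant)
    log_term = np.log((np.exp(gamma * E_g * (1 + v_rel) / T) - 1.0)
                      / (np.exp(2 * gamma * E_g * v_rel / T)
                         * (np.exp(gamma * E_g * (1 - v_rel) / T) - 1.0)))
    return E_g * g_d * T * sigma_of_Eg(E_g) * log_term / (4 * np.pi ** 2 * v_rel * gamma)

Gamma_1S_2S, _ = quad(integrand, 0.05, 2.0)    # assumed E_g range in GeV
print("Gamma_1S->2S with toy inputs: {:.3e} GeV".format(Gamma_1S_2S))
\end{verbatim}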
The cross section is given by \begin{equation} \begin{split} \sigma_{diss,nl}(E_g) = \frac{\pi^2\alpha_s^u E_g}{N_c^2} \sqrt{\frac{m}{E_g + E_{nl}}} \\ \times \left ( \frac{l|J_{nl}^{q,l-1}|^2 + (l+1)|J_{nl}^{q,l+1}|^2}{2l+1} \right ), \end{split} \end{equation} where $J_{nl}^{ql'}$ can be expressed using the singlet and octet wave functions as: \begin{equation} J_{nl}^{ql'} = \int_0^\infty dr\,r \,g^*_{nl}(r)h_{ql'}(r). \end{equation} In our simulations, we use identical values for all parameters between gluon-induced enhancement and dissociation. The center of mass energy of the incoming $J/\psi$ varies from 3.26 GeV to 25.2 GeV. For the incoming $\psi(2S)$, it varies from 3.83 GeV to 25.27 GeV. These values correspond to a $p_T$ range from 1 GeV to 25 GeV. As in the case of gluon-induced enhancement, the cross section is then averaged over the modified gluon distribution $f_g(E_g,v_{rel},\theta)$ to obtain $\Gamma_{gdiss}$. Along the same lines as Eq.~\ref{eq:rateconstant}, we then obtain \begin{eqnarray} \nonumber \Gamma_{gdiss}= \frac{1}{4\pi^2}\int^{1~GeV}_{E_{bind}} \frac{ E_g g_dT\,\sigma_{gdiss}}{v_{rel}\gamma} \\ \times \ln\Bigg [ \frac{e^{\frac{\gamma E_g}{T}(1 + v_{rel})} - 1} {e^{\frac{2\gamma E_g v_{rel}}{T}} \left (e^{\frac{\gamma E_g}{T} (1- v_{rel})} - 1\right ) } \Bigg ]dE_g. \end{eqnarray} \subsection{Collisional damping} Collisional damping is essentially the decay due to the imaginary part of the potential~\cite{Laine1} between the quark and antiquark, \begin{equation} \label{eq:potential} \begin{split} V(r,T) = \frac{\sigma_c}{m_D}(1 - e^{-m_D\,r}) \\ - \alpha_{eff} \left ( m_D + \frac{e^{-m_D\,r}}{r} \right ) \\ - i\alpha_{eff} T \int_0^\infty \frac{2\,z\,dz}{(1+z^2)^2} \left ( 1 - \frac{\sin(m_D\,r\,z)}{m_D\,r\,z} \right ), \end{split} \end{equation} where $m_D$ is the Debye mass, given by \begin{math} m_D = T\sqrt{4\pi\,\alpha_s^T \left (\frac{N_c}{3} + \frac{N_f}{6} \right ) }. \end{math} The collisional damping rate constant is then given by~\cite{gans1} \begin{equation} \Gamma_{damp} = \int[\psi^\dagger_T \left [ Im(V(r,T))\right ] \psi_T]\,dr, \end{equation} with $\psi_T$ being the singlet wavefunction at temperature $T$. At different values of $p_T$, the singlet particle $\psi(2S)$ will effectively see a modified distribution of quarks and gluons colliding with it. This effect is captured by using an effective temperature $T_{eff}$~\cite{Tefforig}, which then varies with $p_T$: \begin{equation} T_{eff} = T\frac{\sqrt{(1 - v_{rel}^2)}}{1 - v_{rel}\cos(\theta)}, \end{equation} where $\theta$ is the scattering angle. Averaging over $d\Omega = \sin(\theta)d\theta d\phi$, we get \begin{equation} \label{eq:teff} \langle T_{eff}\rangle = \frac{1}{2v_{rel}}T\sqrt{(1 - v_{rel}^2)}\ln\left (\frac{1 + v_{rel}}{1-v_{rel}}\right ). \end{equation} The singlet wavefunction $\psi$ is then determined at temperature $\langle T_{eff}\rangle$, to model the $p_T$ dependence of $\Gamma_{damp}$. The net dissociation rate constant due to collisional damping and gluon-induced dissociation is given by \begin{equation} \Gamma_{total} = \Gamma_{damp} + \Gamma_{gdiss}. \end{equation} \subsection{Color Screening} In the context of color screening, we model the Chu and Matsui mechanism of suppression due to Debye color screening. We follow the formulation outlined in~\cite{Madhu1,Madhu2}. 
To incorporate medium effects, we have used the relative velocity between the $\psi(2S)$ velocity, $\vec{v}_{\psi}$, and medium velocity, $\vec{v}_{med}$, in place of $\vec{v}_{\psi}$ in the color screening equation used in~\cite{Madhu1,Madhu2}, i.e., \begin{equation} \label{eq:escape} |\vec{r}_{\psi} + \vec{v}_{rel}t_F| < r_s, \end{equation} instead of \begin{equation} \nonumber |\vec{r}_{\psi} + \vec{v}_{\psi}t_F| < r_s. \end{equation} Finally, we calculate suppression using the cooling law and pressure profile discussed in ~\cite{Madhu2}. The transverse energy deposited per unit rapidity $\frac{dE_{T}}{dy}$ and overlap area $A_T$ are required to obtained the average pressure of the medium as the function of centrality $N_{part}$ in p$-$Pb collision. We use the experimental value of $\frac{dE_{T}}{d\eta}$ and obtained the $dE_{T}/dy$ using the relation $dE_{T}/dy = 1.09\times dE_{T}/d\eta$. The overlap area, $A_T$, has been calculated using the Monte Carlo Glauber model within the framework of the ROOT software~\cite{ROOT}. The values of $N_{part}$ have been obtained from \cite{ALICEdata}, and $\frac{dE_T}{d\eta}$ from~\cite{etref}. \section{Results and Discussions} \label{sec:Results} \subsection{Comparison with ATLAS data} \label{sec:ATLAScomp} We begin by discussing the effect of modified Bose-Einstein distribution $f_g(E_g,v_{rel},\theta)$ on the gluon density available for exciting $J/\psi(1S)$ to $\psi(2S)$. \begin{figure}[h!] \includegraphics[width = 70mm,height = 70mm]{BE_doppler.eps} \caption{Variation of $BE_{Doppler} = \sin(\theta)\times f_g(E_g,v_{rel},\theta)$, w.r.t. $\theta$, for $E_g = 0.8$ GeV, and various values of $v_{rel}$.} \label{fig:doppler} \end{figure} We depict $BE_{Doppler} = \sin(\theta)\times f_g(E_g,v_{rel},\theta)$, w.r.t. $\theta$, for various values of $v_{rel}$ in Fig.~\ref{fig:doppler}. We have plotted $\sin(\theta)\times f_g(E_g,v_{rel},\theta)$, instead of $f_g(E_g,v_{rel},\theta)$, since the $\sin(\theta)$ term appears in the integrand of the expression for the rate constant, $\Gamma_{1S\rightarrow2S}$ (Eq.~\ref{eq:rateconstant}). In the blue shifted forward region, i.e., $\theta < \frac{\pi}{2}$, there is a substantial increase in the gluon density, as $v_{rel}$ increases. This gives an indication as to why gluo-excitation of $J/\psi$ to $\psi(2S)$ should increase with $p_T$. We shall now discuss the simulation results. The value of $E_{1S}$ and $E_{2S}$ are obtained by solving the Schr\"{o}dinger equation with the potential in Eq.~\ref{eq:potential}. In the potential expression, we use, $N_f = 3$ = number of flavors and $\alpha_s^T = 0.10$. The value of $\sigma_{c}= 0.192~GeV^2$ for singlet. The value of $\alpha_{eff} = \frac{4\alpha}{3}$ for singlet, and we have taken $\alpha = 0.27$. The Schr\"{o}dinger equation has been solved by taking a $10^4$ point logarithmically spaced finite spatial grid of size $9$~fm and solving the resulting matrix eigenvalue equations. We assume the critical temperature, $T_c$, to be $170$ MeV, and the charm quark mass equal to $1.275$ GeV. With these values, we determine $\Gamma_{1S\rightarrow 2S}$. It is to be noted that the potential $V(r,m_D)$ would also modify the mass of $J/\psi$ and $\psi(2S)$ with temperature. 
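For concreteness, the eigenvalue step described above can be sketched as follows; this simplified illustration discretizes the $l=0$ reduced radial equation with only the real part of the potential in Eq.~\ref{eq:potential}, on a coarse uniform grid rather than the $10^4$-point logarithmic grid used in our calculation, and the binding energies $E_{1S}$ and $E_{2S}$ would follow after subtracting the asymptotic value of the potential.
\begin{verbatim}
# Sketch: bound-state energies from the real part of V(r,T) in Eq. (potential),
# via a finite-difference eigenvalue problem for the reduced radial function u(r).
# The full calculation uses a 10^4-point logarithmic grid; a coarse uniform grid
# is used here purely for illustration (GeV-fm units, l = 0).
import numpy as np

hbarc = 0.1973                 # GeV fm
m_c = 1.275                    # charm quark mass, GeV
mu = m_c / 2.0                 # reduced mass, GeV
sigma_c = 0.192                # string tension, GeV^2
alpha_eff = 4.0 * 0.27 / 3.0
T = 0.170                      # GeV (assumed temperature)
N_c, N_f, alpha_s_T = 3, 3, 0.10
m_D = T * np.sqrt(4 * np.pi * alpha_s_T * (N_c / 3.0 + N_f / 6.0))  # Debye mass

def V_real(r_fm):
    r = r_fm / hbarc           # convert fm to GeV^-1
    return (sigma_c / m_D * (1 - np.exp(-m_D * r))
            - alpha_eff * (m_D + np.exp(-m_D * r) / r))

N, r_max = 2000, 9.0           # grid points and box size in fm (illustrative)
r = np.linspace(r_max / N, r_max, N)
h = r[1] - r[0]

# -(hbarc^2 / 2 mu) u'' + V u = E u with Dirichlet boundary conditions.
kin = hbarc ** 2 / (2.0 * mu * h ** 2)
H = np.diag(2.0 * kin + V_real(r)) - kin * np.eye(N, k=1) - kin * np.eye(N, k=-1)
E = np.sort(np.linalg.eigvalsh(H))
print("two lowest S-wave eigenvalues (GeV):", E[:2])
\end{verbatim}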
The centrality and time dependence of the temperature of the QGP medium is taken as~\cite{gans1}, \begin{equation} T(t) = T_c \frac{\left(\frac{dN_{ch}}{d\eta}/\frac{N_{part}}{2}\right)_{bin}^{1/3}} {\left(\frac{dN_{ch}}{d\eta}/\frac{N_{part}}{2}\right)_{bin0}^{1/3}} \left ( \frac{t_{QGP}}{t} \right )^{1/3}. \end{equation} Here, $bin0$ refers to a reference bin, taken as the most central bin. The variable $bin$ corresponds to the index varying from the most central to the most peripheral bin. The simulation has been done for $t_{QGP}$ = $3.0$ fm for the most central bin. The value of $t_0$ used is $0.6$ fm. The values for $N_{part}$ and $\frac{dN_{ch}}{d\eta}$ have been obtained from~\cite{ALICEdata}. The analytical form of the correlation function $\langle 1S|r|O(l=1)\rangle$ has been taken from~\cite{nora1}: \begin{eqnarray} \nonumber |\langle 1S|r|O(l=1)\rangle| = \\ \nonumber \sqrt{\frac{512\pi^2\rho(\rho + 2)^2 a_0^6 \left (1 + \frac{\rho^2}{a_0^2 q^2} \right ) e^{\frac{4\rho}{a_0q} \tan^{-1}(a_0q)}} {\left ( e^{\frac{2\pi \rho}{a_0 q}} - 1 \right ) \left (1 + a_0^2q^2 \right )^6}}\\ \end{eqnarray} where $\rho = \frac{1}{N_c^2 - 1}$ and $a_0$ is the Bohr radius. Similarly, the correlation function $\langle 2S|r|O(l=1)\rangle$ has been inferred from~\cite{nora1}: \begin{eqnarray} \nonumber |\langle 2S|r|O(l=1)\rangle| = \\ \nonumber \sqrt \Bigg \{ \frac{3 \pi^2 2^{12} \rho }{(m_c^2 E^4 (1 + 4a_0^2 q^2)^8 q )} \\ \nonumber \left (2 E_{2S} (2\rho^2 + 5\rho + 3) + q(\rho + 2) \right )^2 \\ \nonumber \left ( (2\,a_0\,q)^2 + 4 \rho^2 \right ) \\ \times e^{\frac{8\rho}{2a_0\,q}\tan^{-1}(2a_0\,q)} \left [e^{\frac{4\pi \rho}{2a_0\,q}} - 1\right ]^{-1} \Bigg \} \end{eqnarray} Figure~\ref{fig:psi_inc} depicts the fractional increase in $\psi(2S)$ yield as a function of $p_T$. We see that this increases with $p_T$. \begin{figure}[h!] \includegraphics[width = 70mm,height = 70mm]{psi_inc_0p5.eps} \caption{The fractional increase in $\psi(2S)$ as a function of $p_T$, for medium velocity equal to $0.5c$.} \label{fig:psi_inc} \end{figure} Apart from the transition of $J/\psi(1S)$ to $\psi(2S)$, there would be multiple factors contributing to the $\psi(2S)$ yield. These include CNM effects, suppression effects due to Debye color screening, gluon-induced dissociation, etc. Absorption due to CNM effects can be expected to be negligible at the LHC~\cite{vogt}. The other CNM effects, namely the Cronin effect and shadowing, being initial state effects, would be expected to be the same between $J/\psi$ and $\psi(2S)$. Hence, we take the $J/\psi$ $R_{pA}$ as a measure of CNM. For ease of reference, the $J/\psi$ $R_{pA}$ in the p$-$Pb collision is also plotted in Fig.~\ref{fig:psi_yield_cs}. In Fig.~\ref{fig:psi_yield_cs}, we compare the $\psi(2S)$ yield after gluo-excitation (red curve) with the ATLAS experimental data~\cite{ATLAS}. We shall later compare with the other ATLAS data~\cite{new_ATLAS} in Figs.~\ref{fig:new_atlas_z},~\ref{fig:new_atlas_dratio} and \ref{fig:new_atlas_rapidity}. In Fig.~\ref{fig:psi_yield_cs}, the $\psi(2S)$ yield includes the $J/\psi(1S)$ yield as a measure of CNM. The $p_T$ dependent simulation data has been obtained by weighting each centrality bin with the corresponding value of $N_{coll}$ and then averaging. The $N_{coll}$ values have been obtained from~\cite{ALICEdata}. The centrality bins are identical to those in ATLAS~\cite{ATLAS, new_ATLAS}. The regions in the plot where the experimental yield of $\psi(2S)$ is close to unity are a probable indication of suppression effects. 
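As a worked illustration of how Eq.~\ref{eq:rateconstant} and Eq.~\ref{eq:increment} are combined with the cooling law above, the following sketch integrates an assumed rate constant over the QGP lifetime; the function \texttt{Gamma\_of\_T} is a placeholder, while $t_0$, $t_{QGP}$, $T_c$ and the $N_{J/\psi}/N_{\psi(2S)}$ ratio take the values quoted in the text for the most central bin.
\begin{verbatim}
# Sketch: fold a placeholder rate constant Gamma(T) with the cooling law
# T(t) = T_c (t_QGP/t)^(1/3) (most central bin, centrality factor = 1)
# to obtain the converted fraction and the yield increment of Eq. (increment).
import numpy as np
from scipy.integrate import quad

hbarc = 0.1973                 # GeV fm, converts a rate in GeV to fm^-1
t0, t_QGP = 0.6, 3.0           # fm: thermalization time and QGP lifetime
T_c = 0.170                    # GeV: critical temperature

def T_of_t(t):
    return T_c * (t_QGP / t) ** (1.0 / 3.0)

def Gamma_of_T(T):
    """Placeholder 1S->2S rate constant in GeV (stands in for Eq. (rateconstant))."""
    return 5.0e-4 * (T / 0.170) ** 3

integral, _ = quad(lambda t: Gamma_of_T(T_of_t(t)) / hbarc, t0, t_QGP)
delta_n = 1.0 - np.exp(-integral)              # fraction of 1S converted to 2S
ratio = 1.0 / 0.3                              # N_J/psi / N_psi(2S), as in the text
print("converted fraction = {:.3e}, psi(2S) yield increment = {:.3e}"
      .format(delta_n, ratio * delta_n))
\end{verbatim}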
To include suppression, we model the mechanism of suppression due to gluon induced dissociation, collisional damping and Debye color screening, described in Sec.~\ref{sec:diss}. There is a very important difference between gluo-excitation and gluo-suppression. In gluo-excitation, the intermediate octet state can emit a gluon and transition to the $\psi(2S)$ bound state even in the hadronic phase after the QGP has ended. In fact, it may be energetically more favorable to end up as a $\psi(2S)$ bound state, rather than completely dissociate into the constituent $c\bar{c}$ pair. However for gluo-dissociation, its unlikely that the excited intermediate octet state would dissociate into the constituent $c\bar{c}$ pair in the hadronic phase. The light quarks and anti-quarks, which have been (color) screening the $c\bar{c}$ pair from each other, disappear with the QGP transitioning to the hadronic phase. The hadronic phase does not contain naked light quarks to bind with $c$ or $\bar{c}$ to form $D$ mesons either. Thus the gluon induced dissociation needs to complete within the QGP lifetime itself. The $\psi(2S)$ binding energy is about $0.05$ GeV in vacuum. But if a gluon of energy $0.05$ GeV interacts with $\psi(2S)$, the intermediate octet state may evolve for about $1/0.05$ GeV $\approx$ $3.95$ fm, before it fully dissociates. But, by this time, the QGP lifetime gets over, and hadronic phase comes into existence. As a result, the excited octet state may no longer dissociate. Rather, it is more likely to emit a gluon, and end up as a bound state. To ensure that the dissociation takes place within the QGP lifetime itself, a much higher gluon energy is required to dissociate $\psi(2S)$. We take the minimum energy of gluon to cause suppression to be $\frac{\gamma}{t_{QGP}}$. This curtails the dissociation of $\psi(2S)$ to a significant extent. This phenomenon need not be significant in central and mid-central Pb-Pb collisions, where QGP lifetimes are much higher. The final $\psi(2S)$ yield (green curve) after including the suppression due to all the mechanisms is depicted in Fig.~\ref{fig:psi_yield_cs}. The $p_T$ dependence of suppression depicted in Fig.~\ref{fig:psi_yield_cs}, is after taking into account the effect of the radial medium velocity, $\vec{v}_{med}$. The medium velocity is taken as $0.5$ based on~\cite{medvel}. The absence of a more precise modeling of the QGP expansion, utilizing a $3+1$-hydrodynamical model, is a limitation of this work. It can be seen here that with suppression included, the experimental data is better captured. The results show that the $\psi(2S)$ suppression is relatively high at low $p_T$, while gluo-excitation of $J/\psi(1S)$ to $\psi(2S)$ is high at high $p_T$. This gets highlighted in Fig.~\ref{fig:en_gdiss_comp}, which is discussed further in Sec.~\ref{sec:ALICEcomp}. A note on the effect of $J/\psi$ suppression is due at this point. We expect the temperature of the QGP, if it is formed, to be not very high. The $J/\psi$ dissociation temperature is about $381$ MeV, which would almost always be above the QGP temperature formed in p$-$Pb collisions. Thus, $J/\psi$ will not undergo any suppression due to Debye color screening. Gluon induced suppression and suppression due to collisional damping for $J/\psi$, is depicted in Fig.~\ref{fig:en_gdiss_comp}, which indicate this to be small as well. \begin{figure}[h!] 
\includegraphics[width = 80mm,height = 80mm]{replace_wrong_fig.eps} \caption{Comparison of the final $\psi(2S)$ yield with the experimental ATLAS data~\cite{ATLAS} in p$-$Pb collision at $5.02$ TeV. The dashed blue line indicates prompt $J/\psi$ yield. $t_{QGP}$ = 3.0 fm for the most central bin.} \label{fig:psi_yield_cs} \end{figure} At this point, it is worthwhile to discuss about what happens to a $p_T$ integrated $\psi(2S)$ yield. Since most of the $\psi(2S)$ production would be expected to be in the low $p_T$ region, the net $\psi(2S)$ yield would be dominated by what occurs in the low $p_T$ region, which is suppression. We discuss this in detail in Sec.~\ref{sec:ALICEcomp}. In peripheral p$-$Pb collisions, where both suppression and gluo-excitation is expected to be low, there would be no enhancement or suppression w.r.t. $J/\psi$, if QGP exists in such a collisions. However, the QGP formation probability seems to be quite small in a peripheral collisions. A question that arises here is whether the enhancement phenomenon would be seen in Pb$-$Pb collision. In a Pb$-$Pb collision, the temperatures of the medium are much higher, and can go upto $400$ or $500$ MeV. At temperatures above $190$ MeV, $\psi(2S)$ cannot exist. Any $\psi(2S)$ present would only dissociate. The gluo-excitation to $\psi(2S)$ cannot happen during this period of the QGP. During the short time, when the temperature of the QGP reduces to below $190$ MeV, this phenomenon may happen, but may get overshadowed by the suppression phenomenon that has been happening throughout the lifetime of the QGP. However, in the extreme peripheral collisions, where the temperatures are much lower during the QGP lifetime, there could be a possibility of gluon induced $\psi(2S)$ enhancement at high $p_T$. We now compare with the ATLAS data \cite{new_ATLAS} in Figs.~\ref{fig:new_atlas_z},~\ref{fig:new_atlas_dratio} and \ref{fig:new_atlas_rapidity}. Integrated $p_T$ yield ($p_T > 8$ GeV) is used to compare with experimental data. The $\psi(2S)$ values around $8$ GeV, play the dominant role as $\psi(2S)$ production decreases significantly with increasing $p_T$~\cite{new_ATLAS}. In Fig.~\ref{fig:new_atlas_z}, we plot the calculated $\frac{\psi(2S)_{zb}}{\sum_{bins} \psi(2S)_{zb}}$ data, where $\psi(2S)_{zb} = \frac{\psi(2S)_{bin}N_{coll}}{Z_{bin}}$. The Z boson data, $Z_{bin}$, obtained from~\cite{Zboson}, is the yield after normalization with $N_{coll}$, and without any Glauber Gribov color fluctuations. \begin{figure}[h!] \includegraphics[width = 80mm,height = 80mm]{new_atlas_z.eps} \caption{Comparison with the ATLAS~\cite{new_ATLAS} experimental $\psi(2S)$ yield, which is normalized by the Z boson yield.} \label{fig:new_atlas_z} \end{figure} Figure~\ref{fig:new_atlas_dratio} compares the $\frac{\psi(2S)}{J/\psi}$ double ratio. Our simulation overestimates the double ratio somewhat, but it is able to capture the trend. \begin{figure}[h!] \includegraphics[width = 80mm,height = 80mm]{new_atlas_dratio.eps} \caption{Comparison with the ATLAS~\cite{new_ATLAS} experimental $\psi(2S)$ double ratio.} \label{fig:new_atlas_dratio} \end{figure} Finally, in Fig.~\ref{fig:new_atlas_rapidity}, we compare our simulation results with the ATLAS rapidity dependence data in \cite{new_ATLAS}. Our result at mid-rapidity is just about touching the error bar. \begin{figure}[h!] 
\includegraphics[width = 80mm,height = 80mm]{new_atlas_rapidity.eps} \caption{Comparison with the ATLAS~\cite{new_ATLAS} experimental $\psi(2S)$ double ratio.} \label{fig:new_atlas_rapidity} \end{figure} \subsection{Comparison with ALICE data} \label{sec:ALICEcomp} Unlike ATLAS data, ALICE data \cite{alicepsi} shows suppression. This suppression is observed at low $p_T$ range. Therefore, in order to compare with the ALICE data, we extend our simulation results to low $p_T$ region. Fig.~\ref{fig:ATLAS_ALICE} show the simulated data compared with both ATLAS~\cite{ATLAS} and ALICE~\cite{alicepsi} data. \begin{figure}[h!] \includegraphics[width = 80mm,height = 80mm]{ALICE_ATLAS_comp.eps} \caption{Comparison of simulated results with both ATLAS~\cite{ATLAS} and ALICE~\cite{alicepsi} data.} \label{fig:ATLAS_ALICE} \end{figure} The ALICE data is the double ratio of $\tfrac{\sigma_{2S}}{\sigma_{1S}}$. In order to keep both the ALICE and ATLAS data on the same footing, we normalize the ATLAS $\psi(2S)$ $R_{pA}$ data by dividing with the baseline $R_{pA}$ for $J/\psi$. This also enables us to ignore CNM effects, as the CNM effects of Cronin effect and shadowing are initial state effects, and expected to be the same in both, $J/\psi$ and $\psi(2S)$. We see that the simulation data is able to capture the trend of both ALICE~\cite{alicepsi} and ATLAS~\cite{ATLAS} data simultaneously. We need to note here that the ALICE data is at forward and backward rapidity, while the ATLAS data is at mid rapidity. Our simulation hydrodynamical model also essentially models the mid rapidity region. Hence, we do not expect our simulation to identically match ALICE data, but only capture the suppression trend. However, it is of significance that our model is able to simultaneously capture the enhancement at ATLAS at high $p_T$, and suppression at ALICE at low $p_T$. We try to understand the above result by first comparing the similar mechanisms of gluon induced dissociation of $\psi(2S)$ and enhancement of $J/\psi$ to $\psi(2S)$ in Fig.~\ref{fig:en_gdiss_comp}. \begin{figure}[h!] \includegraphics[width = 80mm,height = 80mm]{en_gdiss_comp.eps} \caption{Comparison of $\Gamma_{diss}$ due to gluodissociation and collisional damping with $\Gamma_{1S \rightarrow 2S}$. Depicted $\Gamma_{diss}$ due to gluodissociation is averaged over all bins.} \label{fig:en_gdiss_comp} \end{figure} We have used identical values of all the parameters like coupling constant etc. to calculate $\Gamma_{gdiss}$ and $\Gamma_{1S \rightarrow 2S}$. We see that dissociation is higher at low $p_T$, while enhancement is higher at high $p_T$. The binding energy of $\psi(2S)$ in vacuum is about $0.05$ GeV which is much lower than the energy gap between $J/\psi$ and $\psi(2S)$. Inspite of the low binding energy, $\psi(2S)$ dissociation is subdued since much higher gluon energy is required for $\psi(2S)$ to dissociate within the QGP, as discussed earlier in Sec.~\ref{sec:ATLAScomp}. At high $p_T$, when the $\psi(2S)$ absorbs a gluon, the evolution of the meson will happen very slowly due to Lorentz time dilation. Hence, significantly more gluon energy is required to dissociate it faster, while the meson is still within the QGP. As mentioned earlier, the minimum energy of gluon required would be $\frac{\gamma}{t_{QGP}}$, with $\gamma$ being the Lorentz dilation factor. This results in a drastic decrease in $\Gamma_{gdiss}$ at high $p_T$. 
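A short numerical illustration of this cutoff, using the $\psi(2S)$ vacuum mass, the most-central $t_{QGP}$ quoted earlier, and a few representative $p_T$ values (the $p_T$ values themselves are illustrative), is given below.
\begin{verbatim}
# Sketch: minimum gluon energy gamma/t_QGP needed to dissociate psi(2S) within
# the QGP lifetime, compared with its ~0.05 GeV vacuum binding energy.
import numpy as np

hbarc = 0.1973                 # GeV fm
m_psi2S = 3.686                # GeV, psi(2S) mass
t_QGP = 3.0                    # fm, QGP lifetime of the most central bin

for p_T in [1.0, 5.0, 10.0, 20.0]:                  # GeV, illustrative values
    gamma = np.sqrt(1.0 + (p_T / m_psi2S) ** 2)     # transverse Lorentz factor
    E_min = gamma * hbarc / t_QGP                   # GeV
    print("p_T = {:5.1f} GeV: gamma = {:4.2f}, E_min = {:.3f} GeV"
          .format(p_T, gamma, E_min))
# Already for p_T of 10-20 GeV, E_min is several times the 0.05 GeV binding
# energy, which is why Gamma_gdiss falls sharply with p_T.
\end{verbatim}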
We have also observed that the other suppression effects, like collisional damping and Debye color screening, are also higher at low $p_T$. In the case of collisional damping, the value of $T_{eff}$, given by Eq.~\ref{eq:teff}, is lower at high $p_T$, leading to lower suppression at high $p_T$. The suppression due to collisional damping is likely to have similar restrictions as gluon-induced dissociation due to the small QGP lifetime. We plan to quantitatively model such effects in the future. The enhancement of $\psi(2S)$ is further amplified by the factor $\frac{N_{J/\psi}}{N_{\psi(2S)}}$ in Eq.~\ref{eq:increment}. Every fraction of $J/\psi$ converted to $\psi(2S)$ gives rise to a fractional increase in $\psi(2S)$ that is larger by the factor $\frac{N_{J/\psi}}{N_{\psi(2S)}}$. This factor can lead to an even higher enhancement of $\psi(2S)$ than dissociation of $\psi(2S)$. The net $\psi(2S)$ $R_{pA}$ seems to be a balance of the enhancement and dissociation processes, and tilts in favor of whichever is higher. As a final remark, even if the enhancement of $\psi(2S)$ were not to dominate over its suppression, the phenomenon of $\psi(2S)$ enhancement cannot be ignored when capturing the $\psi(2S)$ yield at high $p_T$. This is seen from Fig.~\ref{fig:ATLAS_ALICE}, where the normalized $\psi(2S)$ $R_{pA}$ double ratio (cyan curve), with only dissociation modeled, somewhat underestimates the $\psi(2S)$ $R_{pA}$ double ratio. Figure~\ref{fig:en_gdiss_comp} also shows that $J/\psi$ dissociation is small. This seems to be in agreement with other literature~\cite{medvel}, where CNM effects have been mainly used to analyze the $J/\psi$ $R_{pA}$. Finally, we explore the nature of $\psi(2S)$ modification for $p_T$ integrated data. \begin{figure}[h!] \includegraphics[width = 80mm,height = 80mm]{psi2S_prod_pt.eps} \caption{CMS~\cite{cmsjpsi} data for the $\psi(2S)$ distribution.} \label{fig:psi2S_prod_pt} \end{figure} Figure~\ref{fig:psi2S_prod_pt} shows CMS experimental data on the $\psi(2S)$ distribution as a function of $p_T$~\cite{cmsjpsi}. It can be seen that most of the $\psi(2S)$ is concentrated in the low $p_T$ region. Thus, for any $p_T$ integrated evaluation of the $\psi(2S)$ yield, the modification of $\psi(2S)$ at low $p_T$, which is suppression, is expected to dominate. Figure~\ref{fig:alicerapidity} shows that the $p_T$ integrated value at mid rapidity is suppression and not enhancement. It is to be noted that the experimental data~\cite{alicepsi} indicates suppression and not enhancement at forward and backward rapidity. \begin{figure}[h!] \includegraphics[width = 80mm,height = 80mm]{ALICE_rapidity.eps} \caption{Comparison with ALICE rapidity data for the double ratio $[\psi(2S)/J/\psi]_{pPb}/[\psi(2S)/J/\psi]_{pp}$~\cite{alicepsi}.} \label{fig:alicerapidity} \end{figure} \section{Conclusions} \label{sec:conclusion} In conclusion, we have attempted to explain the transverse momentum dependence of the $\psi(2S)$ suppression data observed in p$-$Pb collisions at LHC energy, over a wide span of transverse momentum. We have found a differential enhancement of $\psi(2S)$ w.r.t. $J/\psi(1S)$ at higher transverse momentum using the gluon-induced $1S$ to $2S$ transition approach, which to a significant extent agrees with the preliminary ATLAS experimental data. We have also included the effect of suppression via the gluo-dissociation and collisional damping mechanisms and the Chu and Matsui mechanism of Debye color screening. The combined result agrees better with the experimental ATLAS data. 
Our simulation results also reproduce the trend in the ALICE results for $\psi(2S)$, for both the $p_T$ and rapidity dependence, where suppression is the dominant phenomenon. We do not see any significant $J/\psi$ dissociation, which is consistent with the fact that CNM effects have been able to explain the $J/\psi$ $R_{pA}$ in the literature. We expect that the enhancement of $J/\psi(1S)$ to $\psi(2S)$, as an increasing function of $p_T$, if confirmed experimentally, can be a definitive piece of evidence for the presence of QGP in p$-$Pb collisions. Even if there is no net enhancement at high $p_T$, $\psi(2S)$ enhancement seems to be required in order to predict the $\psi(2S)$ $R_{pA}$, especially at high $p_T$. After submission of this manuscript, we became aware that the data~\cite{ATLAS} has been superseded by~\cite{new_ATLAS}. We have, however, retained the superseded ATLAS data~\cite{ATLAS}, as there is value in showing that our simulation results are also consistent with it. \section{Acknowledgments} M. Mishra is grateful to the Department of Science and Technology (DST), New Delhi for financial assistance. Captain R. Singh is grateful to BITS - Pilani, Pilani for the financial assistance.
\subsection{Typos} \begin{table}[htbp]\caption{Summary of Notations} \centering \begin{tabular}{r c p{10cm} } \toprule \multicolumn{3}{c}{}\\ \multicolumn{3}{c}{\underline{Bandit Problem Variables}}\\ \multicolumn{3}{c}{}\\ $T$ & $\triangleq$ & Total number of rounds of the sequential decision-making problem. \\ $d$ & $\triangleq$ & Number of arms in Sec.\ref{MAB}; dimension of the action feature vector in Sec.\ref{CB}.\\ $(\mathcal{A}_{t},\mathcal{A})$ & $\triangleq$ & Action set at time $t$; union of all action sets $\mathcal{A}_{t}$.\\ $a_{t}$ & $\triangleq$ & Action picked at time $t$; selected by policy $\pi$, seen as a function of the previous history. \\ $(\epsilon_{t},\sigma^{2})$ & $\triangleq$ & Stochastic feedback noise at time $t$. Sub-Gaussian with pseudo-variance parameter $\sigma^{2}$. If $\sigma^{2}$ depends on the action selected (heteroskedasticity), we use $\sigma^{2}_{a}$ instead. \\ $(r,\theta^{\star})$ & $\triangleq$ & Unknown reward function, maps an action to a scalar reward. Parameterized by the unknown latent state $\theta^{\star}$.\\ $\Delta_{t}(a)$ & $\triangleq$ & Sub-optimality gap of action $a$ at time $t$, i.e., the reward difference with the optimal decision of the clairvoyant policy. \\ $(\Delta_{a},\Delta_{\max})$ & $\triangleq$ & If $\Delta_{t}(a)$ is independent of $t$, we use $\Delta_{a}\equiv \Delta_{t}(a)$. $\Delta_{\max}$ is an upper bound on $\Delta_{t}(a)$ for all actions $a$ and times $t$.\\ $R(T,\pi)$ & $\triangleq$ & Pseudo regret of policy $\pi$ over $T$ rounds. \\ \multicolumn{3}{c}{}\\ \multicolumn{3}{c}{\underline{Censorship Variables}}\\ \multicolumn{3}{c}{}\\ $p_{a}$ & $\triangleq$ & Probability that action $a$ is censored if selected, used in Sec. \ref{MAB}. Notation $p(a)$ is used in Sec.\ref{CB} to emphasize the dependency of $p$ on action $a$. \\ $(\phi_{j},u,p_{j})$ & $\triangleq$ & Parameters of the multi-threshold censorship model. Vector $u$ defines the direction of censorship, $(\phi_{j})_{j\leq k+1}$ define the censorship regions with fixed censorship probability and $(p_{j})_{j\leq k}$ define the probability of being censored for each region $j$.\\ $x_{a_{t}}$ & $\triangleq$ & Random variable indicating whether feedback is censored at round $t$. Follows an i.i.d. Bernoulli distribution of parameter $p(a_{t})$. \\ \multicolumn{3}{c}{}\\ \multicolumn{3}{c}{\underline{Algorithmic and Analysis Variables}}\\ \multicolumn{3}{c}{}\\ $\lambda$ & $\triangleq$ & Regularization tuning parameter. $\lambda_{a}$ is used in the case of heterogeneous action-based regularization. \\ $\Tilde{\Delta}^{\lambda}_{t}(a)$ & $\triangleq$ & High-probability upper bound on the sub-optimality gap, used in UCB algorithms. \\ $\mathbb{V}_{\alpha}(T,\pi)$ & $\triangleq$ & Random cumulative censored potential, seen as a function of the policy $\pi$ and the number of rounds $T$. First introduced in Sec.\ref{MAB} and extended in Sec.\ref{CB}. \\ $\psi_{\alpha}$ & $\triangleq$ & Primitive of the function $x\mapsto x^{-\alpha}$, for a given $\alpha>0$. \\ $N_{a}(t)$ & $\triangleq$ & Total number of times action $a$ is \textit{realized} at the end of round $t$ by policy $\pi$. Used in Sec.\ref{MAB}.\\ $\tau_{a}(t)$ & $\triangleq$ & Total number of times action $a$ is \textit{played} at the end of round $t$ by policy $\pi$. Used in Sec.\ref{MAB}.\\ $\mathbb{W}_{t}^{C}$ & $\triangleq$ & Censored Design Matrix. Linear generalization of $(N_{a}(t))_{a\in[d]}$. Used in Sec.\ref{CB}.\\ $\mathbb{W}_{t}$ & $\triangleq$ & Expected Design Matrix. Linear generalization of $(p_{a}\tau_{a}(t))_{a\in[d]}$. 
Used in Sec.\ref{CB}.\\ $\mathbb{W}(t)$ & $\triangleq$ & Continuous generalization of the expected design matrix $\mathbb{W}_{t}$.\\ \bottomrule \end{tabular} \label{tab:TableOfNotationForMyResearch} \end{table} \subsection{UCB algorithms}\label{UCB-algo} \begin{itemize} \item \textbf{UCB-MAB:} Following \cite{lattimore2020bandit}, the UCB algorithm for the MAB case with homogeneous regularization $\lambda>0$ uses the following optimistic reward estimator at time $t$: \begin{align*} \Tilde{r}^{\lambda}_{t}(a) \triangleq \hat{\theta}^{\lambda}_{t}(a) + \sqrt{\frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t-1)}} + \frac{\lambda\|\theta^{\star}\|_{\infty}}{\lambda+ N_{a}(t-1)}. \end{align*} It is based on the use of the regularized empirical mean to estimate the reward of action $a$ at the end of round $t$: \begin{align*} \hat{\theta}^{\lambda}_{t}(a) &\triangleq \frac{1}{N_{a}(t) + \lambda}\sum_{\tau=1}^{t} (r(a_{\tau})+\epsilon_{\tau}) \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\} \\ &= \frac{N_{a}(t)}{N_{a}(t) + \lambda}\theta_{a}^{\star}+\frac{1}{N_{a}(t) + \lambda}\sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}. \end{align*} The high-confidence property of this algorithm is proven in Lemma \ref{Fail Optim Finite}.\footnote{Typically, an upper bound on $\|\theta^{\star}\|_{\infty}$ for MAB (resp. $\|\theta^{\star}\|_{2}$ for LCB) is used instead of this unknown quantity. We keep $\|\theta^{\star}\|_{\infty}$ (resp. $\|\theta^{\star}\|_{2}$) so as not to overload notations, but our results immediately extend to the use of the latter.} Under a-priori known heteroskedasticity, the reward estimator can be expressed as: \begin{align*} \Tilde{r}^{\lambda}_{t}(a) \triangleq \hat{\theta}^{\lambda}_{t}(a) + \sqrt{\frac{6\sigma_{a}^{2}\log(T)}{\lambda+ N_{a}(t-1)}} + \frac{\lambda\|\theta^{\star}\|_{\infty}}{\lambda+ N_{a}(t-1)}. \end{align*} \item \textbf{UCB for LCB:} Following \cite{NIPS2011_e1d5be1c,lattimore2020bandit}, the UCB algorithm for the LCB case with homogeneous regularization $\lambda>0$ uses the following optimistic reward estimator at time $t$: \begin{align*} \Tilde{r}^{\lambda}_{t}(a) &\triangleq \langle a, \hat{\theta}^{\lambda}_{t-1}\rangle + \beta_{t-1}(\delta)\|a\|_{(\mathbb{W}^{C}_{t-1})^{-1}}, \end{align*} where we introduced the random quantity: \begin{align*} \beta_{t-1}(\delta) \triangleq \sqrt{\sigma^{2} \log \left(\frac{\det(\mathbb{W}^{C}_{t-1})}{\det(\lambda \mathbb{I}_{d})}\right)+2\sigma^{2}\log(\frac{1}{\delta})}+\sqrt{\lambda} \|\theta^{\star}\|_{2}. \end{align*} It is based on the use of the regularized least-squares estimator to estimate the vector $\theta^{\star}$ at the end of round $t$: \begin{align*} \hat{\theta}^{\lambda}_{t} = (\mathbb{W}^{C}_{t})^{-1}\sum_{\tau=1}^{t}(\epsilon_{\tau}+\langle a_{\tau},\theta^{\star}\rangle)x_{a_{\tau}}a_{\tau}. \end{align*} The high-confidence property of this estimator is proven in Lemma \ref{Optimistic Lemma Linear}. 
\end{itemize} \subsection{Proof of Lemma \ref{Potential Reduction Finite}} \PotentialReductionFinite* \begin{proof} At a given round $t\in [T]$, we have under the event $\neg \mathcal{H}_{\text{UCB}}^{\lambda}$ introduced in Lemma \ref{Fail Optim Finite}: \begin{align*} \Delta_{t}(a_{t}) = \max_{a\in\mathcal{A}_{t}}\theta^{\star}_{a} - \theta^{\star}_{a_{t}} \leq 2\sqrt{6\sigma^{2} \frac{\log(T)}{N_{a_{t}}(t-1)+\lambda}} + 2\frac{\lambda \|\theta^{\star}\|_{\infty}}{\lambda + N_{a_{t}}(t-1)}, \end{align*} where the inequality comes from the definition of the UCB algorithm and the conditioning on $\neg \mathcal{H}_{\text{UCB}}^{\lambda}$. We find here the origin of the two different orders of $N_{a}$ ($\nicefrac{1}{2}$ and $1$). Taken independently, those lead to contributions of respectively $\mathcal{O}(\sqrt{d_{\mathit{eff}}T})$ and $\mathcal{O}(d_{\mathit{eff}}\log(T))$. More precisely, we have: \begin{align*} R(T,\pi_{\text{UCB}}|\neg \mathcal{H}_{\text{UCB}}^{\lambda}) &\leq 2 \sqrt{6\sigma^{2}\log(T)} \sum_{t=1}^{T}\sqrt{\frac{1}{N_{a_{t}}(t-1)+\lambda}} + 2\lambda\|\theta^{\star}\|_{\infty} \sum_{t=1}^{T}\frac{1}{N_{a_{t}}(t-1)+\lambda} \\ &= 2 \sqrt{6\sigma^{2}\log(T)}\mathbb{V}_{\frac{1}{2}}(T,\pi_{\text{UCB}}) + 2\lambda\|\theta^{\star}\|_{\infty}\mathbb{V}_{1}(T,\pi_{\text{UCB}}). \end{align*} Therefore, thanks to Lemma \ref{Fail Optim Finite}, we deduce that: \begin{align*} R(T,\pi_{\text{UCB}}) &\leq (1-\mathbb{P}(\mathcal{H}_{\text{UCB}}^{\lambda}))R(T,\pi_{\text{UCB}}|\neg \mathcal{H}_{\text{UCB}}^{\lambda}) + \mathbb{P}(\mathcal{H}_{\text{UCB}}^{\lambda})\Delta_{max}T\\ &\leq 2 \sqrt{6\sigma^{2}\log(T)}\mathbb{V}_{\frac{1}{2}}(T,\pi_{\text{UCB}}) + 2\lambda\|\theta^{\star}\|_{\infty}\mathbb{V}_{1}(T,\pi_{\text{UCB}}) + \frac{2d\Delta_{max}}{T}. \end{align*} Finally, we conclude that: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq 2 \sqrt{6\sigma^{2}\log(T)}\mathbb{E}[\mathbb{V}_{\frac{1}{2}}(T,\pi_{\text{UCB}})] + 2\lambda\|\theta^{\star}\|_{\infty}\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\text{UCB}})] + \frac{2d\Delta_{max}}{T}. \end{align*} \end{proof} \subsection{Statement and Proof of Lemma \ref{Fail Optim Finite}} The main step in this reduction from regret to cumulative censored potential is the study of the \textit{failure of optimism} event, thanks to the following result: \begin{lemma}\label{Fail Optim Finite} For a regularization $\lambda>0$ and $\delta \in ]0,1]$, we introduce the event: \begin{align*} \mathcal{H}_{\text{UCB}}^{\lambda} = \Big\{\exists a \in [d], t\in[T], |\hat{\theta}^{\lambda}_{t}(a) -\theta^{\star}_{a}| > \sqrt{\frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t)}} + \frac{\lambda\|\theta^{\star}\|_{\infty}}{\lambda+ N_{a}(t)}\Big\}. \end{align*} We then have $\mathbb{P}(\mathcal{H}_{\text{UCB}}^{\lambda}) \leq \frac{2d}{T^{2}}$. \end{lemma} \begin{proof} Although this event is similar to the one introduced in the classical UCB proof idea, the subtlety comes from the randomness induced by the censorship as well as the impact of regularization. The main idea is to adopt a worst-case agnostic approach. 
First, let's note that for a given $t \in [T],a \in [d]$, we have: \begin{align*} |\hat{\theta}^{\lambda}_{t}(a) -\theta^{\star}_{a}| &= |\frac{1}{N_{a}(t) +\lambda}\sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\} -\frac{\lambda}{N_{a}(t)+\lambda}\theta^{\star}_{a}| \\ &\leq |\frac{1}{N_{a}(t) +\lambda}\sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}| + \frac{\lambda}{N_{a}(t) +\lambda}\|\theta^{\star}\|_{\infty} . \end{align*} Therefore, for a given $a\in[d], t\in [T]$, by introducing the event $\mathcal{B}_{(t,a)}\triangleq\Big\{|\hat{\theta}^{\lambda}_{t}(a) -\theta^{\star}_{a}| > \sqrt{\frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t)}} + \frac{\lambda\|\theta^{\star}\|_{\infty}}{\lambda+ N_{a}(t)}\Big\}$, we deduce: \begin{align*} \mathcal{B}_{(t,a)} &\subset \Big\{|\frac{1}{N_{a}(t) +\lambda}\sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}| + \frac{\lambda}{N_{a}(t) +\lambda}\|\theta^{\star}\|_{\infty} > \sqrt{\frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t)}} + \frac{\lambda\|\theta^{\star}\|_{\infty}}{\lambda+ N_{a}(t)}\Big\} \\ &\subset \Big\{|\frac{1}{N_{a}(t) +\lambda}\sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}|> \sqrt{\frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t)}} \Big\}. \end{align*} Then, we have: \begin{align*} \mathbb{P}(\mathcal{H}_{\text{UCB}}^{\lambda}) &= \displaystyle \mathbb{P}\Big(\bigcup_{a\in[d]}\bigcup_{t\in[T]} \mathcal{B}_{(t,a)}\Big)\\ &\leq \mathbb{P}\Big(\bigcup_{a\in[d]}\bigcup_{t\in[T]}\Big\{|\frac{1}{N_{a}(t) +\lambda} \sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}|> \sqrt{\frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t)}} \Big\}\Big)\\ &\leq \sum_{a\in[d]} \mathbb{P}\Big(\bigcup_{t\in[T]} \Big\{|\frac{1}{N_{a}(t) +\lambda} \sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}|> \sqrt{\frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t)}} \Big\}\Big) \\ & = \sum_{a\in[d]}\mathbb{P}\Big(\bigcup_{k\in[T]} \bigcup_{t\in[T]} \Big\{|\frac{1}{N_{a}(t) +\lambda} \sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}|^{2}> \frac{6\sigma^{2}\log(T)}{\lambda+ N_{a}(t)}; N_{a}(t)=k\Big\}\Big)\\ & = \sum_{a\in[d]}\sum_{k\in[T]}\mathbb{P}(N_{a}(t)=k)\mathbb{P}\Big(\bigcup_{t\in[T]} \Big\{ |\frac{1}{k +\lambda} \sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}|^{2} >\frac{6\sigma^{2}\log(T)}{k}\Big| N_{a}(t)=k\Big\}\Big)\\ & \leq \sum_{a\in[d]} \sum_{k\in[T]}\mathbb{P}\Big(\bigcup_{t\in[T]}\Big\{|\frac{1}{k +\lambda} \sum_{\tau=1}^{t} \epsilon_{\tau} \mathbf{1}\{a_{\tau}=a, x_{a_{\tau}}=1\}|^{2} >\frac{6\sigma^{2}\log(T)}{\lambda+k}\Big| N_{a}(t)=k\Big\}\Big) \\ &= \sum_{a\in[d]} \sum_{k\in[T]}\mathbb{P}\Big(|\frac{\sum_{l=1}^{k} \epsilon_{l}}{k +\lambda}|^{2} >\frac{6\sigma^{2}\log(T)}{\lambda+k}\Big), \end{align*} where we successively used union bounds over the action set and number of realizations and conditioned over number of realizations $k$. We re-indexed the random sub-Gaussian variables $(\epsilon_{t})$ for last expression thanks to the i.i.d property. 
Then, for a given $k$, using Hoeffding's inequality for sub-Gaussian variables, we have: \begin{align*} \mathbb{P}\Big(|\frac{\sum_{l=1}^{k}\epsilon_{l}}{k+\lambda}|^{2}>\frac{6\sigma^{2}\log(T)}{k+\lambda}\Big) &= \mathbb{P}\Big(|\sum_{l=1}^{k}\epsilon_{l}|>\sqrt{6\sigma^{2}(k+\lambda)\log(T)}\Big) \\ &\leq 2\exp\{-\frac{6\sigma^{2}(k+\lambda)\log(T)}{2k\sigma^{2}}\} \leq \frac{2}{T^{3}}, \end{align*} where we used the fact that $\sum_{l=1}^{k}\epsilon_{l}$ is sub-Gaussian with pseudo-variance parameter $k\sigma^{2}$. Therefore, this yields: \begin{align*} \sum_{a\in[d]} \sum_{k\in[T]} \mathbb{P}\Big(|\frac{\sum_{l=1}^{k}\epsilon_{l}}{k+\lambda}|^{2}>\frac{6\sigma^{2}\log(T)}{k+\lambda}\Big) \leq \frac{2d}{T^{2}}. \end{align*} Finally, we conclude that $\mathbb{P}(\mathcal{H}_{\text{UCB}}^{\lambda}) \leq \frac{2d}{T^{2}}$. \end{proof} \begin{remark}\label{Tails} We note that assuming a tail distribution for the reward noise $\epsilon$ of the form: \begin{align*} \mathbb{P}\left(\epsilon \geq x\right) \leq\exp\Big\{\frac{-x^{1+q}}{2\sigma^{2}}\Big\} \end{align*} for a given $q>0$, as suggested for instance in \cite{ZhouGLM}, would lead to the use of the confidence interval: \begin{align*} \mathcal{H}_{\text{UCB}}^{\lambda,q} = \Big\{\exists a \in [d], t\in[T], |\hat{\theta}^{\lambda}_{t}(a) -\theta^{\star}_{a}| > \Big(6\sigma^{2}\log(T)\Big)^{\frac{1}{1+q}}\Big(\lambda+ N_{a}(t)\Big)^{-\frac{q}{1+q}} + \frac{\lambda\|\theta^{\star}\|_{\infty}}{\lambda+ N_{a}(t)}\Big\}. \end{align*} Indeed, the same reasoning as above would then yield: \begin{align*} \mathbb{P}\Big(|\frac{\sum_{l=1}^{k}\epsilon_{l}}{k+\lambda}|>(6\sigma^{2}\log(T))^{\frac{1}{1+q}}(k+\lambda)^{-\frac{q}{1+q}}\Big) &= \mathbb{P}\Big(|\sum_{l=1}^{k}\epsilon_{l}|>(6\sigma^{2}(k+\lambda)\log(T))^{\frac{1}{1+q}}\Big)\\ &\leq 2\exp\{-\frac{6\sigma^{2}(k+\lambda)\log(T)}{2k\sigma^{2}}\} \leq \frac{2}{T^{3}} \end{align*} and therefore $\mathbb{P}(\mathcal{H}_{\text{UCB}}^{\lambda,q})\leq \frac{2d}{T^{2}}$. For $q=1$, we recover the sub-Gaussian case, which in turn leads to the study of $\mathbb{V}_{1/2}$, as done in Lemma \ref{Potential Reduction Finite}. For general $q>0$, we would then consider $\mathbb{V}_{q/(1+q)}$, which leads to the upper bound $\mathcal{O}(d_{\mathit{eff}}^{q/(1+q)}T^{1/(1+q)})$ through the use of Prop. \ref{Potential Control Finite}. \end{remark} \subsection{Statement and Proof of Lemma \ref{Derando Censo Linear}} \begin{lemma}\label{Derando Censo Linear} For any $\delta \in ]0,1]$, $\lambda>0$ and censorship model, let us introduce the event: \begin{align*} \mathcal{H}^{I}_{\text{CEN}}(\delta) &= \left\{\exists a \in [d], t\in[T],N_{a}(t) < (1-\delta)p_{a}\tau_{a}(t) \quad \text{and} \quad \tau_{a}(t) \geq T_{0}(a) \right\}, \end{align*} where $T_{0}(a)\triangleq 24\log(T)/p_{a}+1$. We then have $\mathbb{P}(\mathcal{H}^{I}_{\text{CEN}}(\delta)) \leq \frac{4d_{\mathit{eff}}}{\delta^{2}}T^{-12\delta^{2}}$. 
\end{lemma} \begin{proof} First, we successively apply two union bounds over the action set and the number of realizations, mirroring the analysis of \cite{stoch_unrest_delay}: \begin{align*} \mathbb{P}(\mathcal{H}^{I}_{\text{CEN}}(\delta)) &\leq \sum_{a\in [d]} \mathbb{P}\Big(\Big\{\exists t\in[T], \tau_{a}(t) \geq T_{0}(a),N_{a}(t) < (1-\delta)p_{a}\tau_{a}(t) \Big\}\Big) \\ &= \sum_{a\in [d]} \mathbb{P}\Big(\bigcup_{k_{a}\in[T_{0}(a),T]} \bigcup_{t\in[T]}\Big\{\tau_{a}(t) \geq T_{0}(a),N_{a}(t) < (1-\delta)p_{a}\tau_{a}(t), \tau_{a}(t) = k_{a} \Big\}\Big) \\ &\leq \sum_{a\in [d]} \sum_{k_{a}\geq T_{0}(a)}\mathbb{P}\Big(\bigcup_{t\in[T]}\Big\{N_{a}(t) < (1-\delta)p_{a}\tau_{a}(t)\Big|\tau_{a}(t) = k_{a}\Big\}\Big). \end{align*} We then use a multiplicative Chernoff inequality for the Binomial distribution to deduce: \begin{align*} \sum_{a\in[d]} \sum_{k_{a}\geq T_{0}(a)} \mathbb{P}\Big(N_{a}(t) &< (1-\delta)p_{a}\tau_{a}(t)\Big|\tau_{a}(t)=k_{a}\Big) \leq \sum_{a\in[d]} \sum_{k_{a}\geq T_{0}(a)} \exp\{-\frac{\delta^{2}k_{a}p_{a}}{2}\}. \end{align*} The novelty of our proof is to leverage an integral comparison to deduce the improved control: \begin{align*} \sum_{a\in[d]} \sum_{k_{a}\geq T_{0}(a)} \exp\{-\frac{\delta^{2}k_{a}p_{a}}{2}\} &\leq 2 \sum_{a\in[d]} \left[-\frac{2}{\delta^{2}p_{a}}\exp\{-\frac{\delta^{2}k_{a}p_{a}}{2}\}\right]^{\tau_{a}(t)}_{T_{0}(a)-1} \\ &\leq \frac{4}{\delta^{2}}d_{\mathit{eff}}\frac{1}{T^{12\delta^{2}}} - \frac{4}{\delta^{2}}\sum_{a\in[d]}\frac{1}{p_{a}}\exp\{-\frac{\delta^{2}\tau_{a}(t)p_{a}}{2}\} \leq \frac{4}{\delta^{2}}d_{\mathit{eff}}\frac{1}{T^{12\delta^{2}}}. \end{align*} Picking for instance $\delta = \frac{1}{2}$ yields $\mathbb{P}(\mathcal{H}^{I}_{\text{CEN}}(\frac{1}{2})) \leq \frac{16d_{\mathit{eff}}}{T^{3}}$. \end{proof} \subsection{Proof of Lemma \ref{Optimization Lemma Finite}} \OptimizationLemmaFinite* \begin{proof} We first introduce the Lagrangian of the problem $\mathcal{L}(\tau_{1},\dots,\tau_{d},\mu):= \sum_{a\in[d]} \frac{1}{p_{a}}\Big(\psi_{\alpha}(p_{a}\tau_{a}+\lambda_{a})-\psi_{\alpha}(\lambda_{a})\Big) + \mu(T-\sum_{a\in[d]}\tau_{a})$. Differentiating with respect to $\tau_{a}$ for all $a\in[d]$ yields the equations: \begin{align*} \frac{1}{(p_{a}\tau_{a}+\lambda_{a})^{\alpha}} - \mu = 0. \end{align*} We then write this equivalently as: \begin{align*} \tau_{a} = \frac{1}{p_{a}}[\mu^{-1/\alpha} - \lambda_{a}]. \end{align*} However, since $(\tau_{a})$ must be nonnegative, it may not always be possible to find a solution of this form. We then verify using the KKT conditions that the solution: \begin{align*} \tau_{a} = \frac{1}{p_{a}}[C - \lambda_{a}]^{+}, \end{align*} where $C$ ensures the total budget constraint $ \sum_{a\in[d]}\tau^{\star}_{a}=T$, is optimal. In particular, whenever $T\geq \max_{a}\lambda_{a}^{0}$, we recover the solution provided in the second part of the Lemma. \end{proof} \subsection{Proof of Prop. \ref{Potential Control Finite}} \PotentialControlFinite* \begin{proof} For a given $\alpha\in]0,1]$, we condition on the event $\mathcal{H}^{I}_{\text{CEN}}(\delta)$ introduced in Lemma \ref{Derando Censo Linear} and consider the cases $\tau_{a}(t)\geq T_{0}(a)$ and $\tau_{a}(t)<T_{0}(a)$. 
This yields for any policy $\pi \in \Pi$: \begin{align*} \mathbb{V}_{\alpha}(T,\pi|\mathcal{H}^{I}_{\text{CEN}}(\delta)) &\leq \frac{\sum_{a\in[d]}T_{0}(a)}{\lambda^{\alpha}} + \sum_{t=1}^{T}\left((1-\delta)p_{a_{t}}\tau_{a_{t}}(t-1)+\lambda\right)^{-\alpha}\\ &\leq \frac{24 d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + \frac{1}{(1-\delta)^{\alpha}}\sum_{t=1}^{T}\left(p_{a_{t}}\tau_{a_{t}}(t-1)+\frac{\lambda}{1-\delta}\right)^{-\alpha}\\ &\leq \frac{24 d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + \frac{1}{(1-\delta)^{\alpha}} \sum_{a\in[d]}\int_{0}^{\tau_{a}(T)}\left(p_{a}u+\frac{\lambda}{1-\delta}\right)^{-\alpha}\partial u\\ &= \frac{24 d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + \frac{1}{(1-\delta)^{\alpha}} \sum_{a\in[d]}\frac{1}{p_{a}}[\psi_{\alpha}(p_{a}\tau_{a}(T)+\frac{\lambda}{1-\delta})-\psi_{\alpha}(\frac{\lambda}{1-\delta})]. \end{align*} We then apply the Lemma \ref{Optimization Lemma Finite} with constant $\Tilde{\lambda} \triangleq \lambda/(1-\delta)$ to deduce: \begin{align*} \max_{\pi \in \Pi} \mathbb{V}_{\alpha}(T,\pi|\neg \mathcal{H}^{I}_{\text{CEN}}(\delta)) \leq \frac{24 d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + \frac{d_{\mathit{eff}}}{(1-\delta)^{\alpha}}\Big[\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\frac{\lambda}{1-\delta}) - \psi_{\alpha}(\frac{\lambda}{1-\delta})\Big]. \end{align*} Then, we conclude thanks to Lemma \ref{Derando Censo Linear} that: \begin{align*} \max_{\pi \in \Pi} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] &\leq \mathbb{P}(\neg \mathcal{H}^{I}_{\text{CEN}}(\delta))\max_{\pi \in \Pi} \mathbb{V}_{\alpha}(T,\pi|\neg \mathcal{H}^{I}_{\text{CEN}}(\delta)) + (1-\mathbb{P}(\neg \mathcal{H}^{I}_{\text{CEN}}(\delta)))\frac{1}{\lambda^{\alpha}} \\ &\leq \frac{1}{(1-\delta)^{\alpha}}d_{\mathit{eff}}\left[\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\frac{\lambda}{1-\delta}) - \psi_{\alpha}(\frac{\lambda}{1-\delta})\right] + \frac{24d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} \\&+ \frac{4}{\delta^{2}}d_{\mathit{eff}}\frac{1}{\lambda^{\alpha}T^{12\delta^{2}}}. \end{align*} In particular, for $\alpha =1$ and $\delta = \frac{1}{2}$, this involves: \begin{align*} \max_{\pi \in \Pi} \mathbb{E}[\mathbb{V}_{1}(T,\pi)] \leq 2d_{\mathit{eff}}\log(\frac{T}{2\lambda}+1) + \frac{24d_{\mathit{eff}}\log(T)+d}{\lambda} + 16d_{\mathit{eff}}\frac{1}{\lambda T^{2}}, \end{align*} and for $\alpha=\frac{1}{2}$ and $\delta = \frac{1}{2}$, this yields: \begin{align*} \max_{\pi \in \Pi} \mathbb{E}[\mathbb{V}_{\frac{1}{2}}(T,\pi)] \leq\sqrt{2}d_{\mathit{eff}}\left[\sqrt{\frac{T}{d_{\mathit{eff}}}+2\lambda} - \sqrt{2\lambda}\right] + \frac{24d_{\mathit{eff}}\log(T)+d}{\sqrt{\lambda}} + 16d_{\mathit{eff}}\frac{1}{\sqrt{\lambda}T^{2}}. \end{align*} \end{proof} \subsection{Proof of Thm. \ref{THM Finite arms}} \THMFinitearms* \begin{proof} We first apply Lemma \ref{Potential Reduction Finite} to deduce: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] &\leq 2 \sqrt{6\sigma^{2}\log(T)}\mathbb{E}[\mathbb{V}_{\frac{1}{2}}(T,\pi_{\text{UCB}})] + 2\lambda\|\theta^{\star}\|_{\infty}\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\text{UCB}})] + \frac{2d\Delta_{max}}{T} \\ &\leq 2 \sqrt{6\sigma^{2}\log(T)}\max_{\pi \in \Pi}\mathbb{E}[\mathbb{V}_{\frac{1}{2}}(T,\pi)] + 2\lambda\|\theta^{\star}\|_{\infty}\max_{\pi \in \Pi}\mathbb{E}[\mathbb{V}_{1}(T,\pi)] + \frac{2d\Delta_{max}}{T}. 
\end{align*} We then apply proposition \ref{Potential Control Finite}, with $\delta =1/2$ in order to deduce: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] &\leq 2 \sqrt{6\sigma^{2}\log(T)}\Big(\sqrt{2}d_{\mathit{eff}}\Big[\sqrt{\frac{T}{d_{\mathit{eff}}}+2\lambda} - \sqrt{2\lambda}\Big] + \frac{24d_{\mathit{eff}}\log(T)+d}{\sqrt{\lambda}} + 16d_{\mathit{eff}}\frac{1}{\sqrt{\lambda}T^{2}}\Big) \\ &+ 2\lambda\|\theta^{\star}\|_{\infty}\Big(2d_{\mathit{eff}}\log\Big(\frac{T}{2\lambda}+1\Big) + \frac{24d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + 16d_{\mathit{eff}}\frac{1}{\lambda T^{2}}\Big) + \frac{2d\Delta_{max}}{T}. \end{align*} By taking $\lambda = o(\log(T))$ and considering only the leading order, we conclude that: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq \Tilde{\mathcal{O}}( \sigma\sqrt{d_{\mathit{eff}}T}). \end{align*} Note that our proof easily allows to get high-probability bounds on regret instead of bounds on its expected value. \end{proof} \begin{remark}\label{Hetero Finite 1} We now extend Thm. \ref{THM Finite arms} to heteroskedastic MAB. In this model, the pseudo-variance of the sub-Gaussian noisy reward is arm-dependent and denoted $\sigma_{a}$. Moreover, the value of $\sigma_{a}$ is known to the designer of the algorithm, that is, it can be used as a parameter for the UCB algorithm. We first apply a slightly modified version of Lemma \ref{Potential Reduction Finite} to deduce: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] &\leq 2 \sqrt{6\log(T)}\mathbb{E}[\Bar{\mathbb{V}}_{\frac{1}{2}}(T,\pi_{\text{UCB}})] + 2\lambda\|\theta^{\star}\|_{\infty}\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\text{UCB}})] + \frac{2d\Delta_{max}}{T}, \end{align*} where for $\alpha>0$ and $\pi \in \Pi$, we introduced the variance-based cumulative potential: \begin{align*} \Bar{\mathbb{V}}_{\alpha}(T,\pi) = \sum_{t=1}^{T}(\frac{N_{a_{t}}(t-1)}{\sigma^{1/\alpha}_{a_{t}}}+\frac{\lambda}{\sigma^{1/\alpha}_{a_{t}}})^{-\alpha}. \end{align*} Thus, heteroskedasticity induces the mapping $\Breve{p_{a}}\equiv p_{a}/\sigma^{1/\alpha}_{a}$ and $\Breve{\lambda}_{a}\equiv \lambda/\sigma^{1/\alpha}_{a}$. Following the proof of Prop. \ref{Potential Control Finite}, we deduce for any $\alpha>0$ and time allocation $(\tau_{a}(T))_{a\in[d]}$: \begin{align*} \mathbb{V}_{\alpha}(T,\pi|\mathcal{H}^{I}_{\text{CEN}}(\delta)) &\leq \frac{24 d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + \frac{1}{(1-\delta)^{\alpha}} \sum_{a\in[d]}\frac{1}{\Breve{p_{a}}}[\psi_{\alpha}(\Breve{p_{a}}\tau_{a}(T)+\frac{\Breve{\lambda}_{a}}{1-\delta})-\psi_{\alpha}(\frac{\Breve{\lambda}_{a}}{1-\delta})]. \end{align*} In order to apply Lemma \ref{Optimization Lemma Finite}, we introduce the notation: \begin{align*} \Breve{d}_{\mathit{eff}} = \sum_{a\in[d]}\frac{\sigma^{1/\alpha}_{a}}{p_{a}}, \quad \Breve{\lambda}_{\mathit{eff}} = \frac{\lambda}{1-\delta} \frac{d_{\mathit{eff}}}{\Breve{d}_{\mathit{eff}}} \quad \text{and}\quad \Breve{\lambda}_{a}^{0} = \frac{\lambda\Breve{d}_{\mathit{eff}}}{1-\delta}(\frac{1}{\sigma^{1/\alpha}_{a}}- \frac{d_{\mathit{eff}}}{\Breve{d}_{\mathit{eff}}}). 
\end{align*} and we deduce that whenever $T\geq \max_{a} \Breve{\lambda}_{a}^{0}$, we have: \begin{align*} \mathbb{V}_{\alpha}(T,\pi|\mathcal{H}^{I}_{\text{CEN}}(\delta)) &\leq \frac{24 d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + \frac{1}{(1-\delta)^{\alpha}}\Big[\Breve{d}_{\mathit{eff}}\psi_{\alpha}(\frac{T}{\Breve{d}_{\mathit{eff}}}+\Breve{\lambda}_{\mathit{eff}})-\sum_{a\in[d]}\frac{\sigma^{1/\alpha}_{a}}{p_{a}}\psi_{\alpha}(\frac{\lambda}{(1-\delta)\sigma^{1/\alpha}_{a}})\Big]. \end{align*} In particular, by considering the case $\alpha=\nicefrac{1}{2}$ and only the leading order, we deduce that: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq \Tilde{\mathcal{O}}\Big(\sqrt{\Breve{d}_{\mathit{eff}}T}\Big), \end{align*} where as affirmed $\Breve{d}_{\mathit{eff}}=\sum_{a\in[d]}\frac{\sigma_{a}^{2}}{p_{a}}$. \end{remark} \subsection{Proof of Prop. \ref{Instance Dep Regret Finite}} \InstanceDepRegretFinite* \begin{proof} As in the proof of Lemma \ref{Potential Control Finite}, for a given round $t\in [T]$, we have under the event $\neg \mathcal{H}_{\text{UCB}}^{\lambda}$ \begin{align*} \Delta_{a} = \max_{\Tilde{a}\in\mathcal{A}}\theta^{\star}_{\Tilde{a}} - \theta^{\star}_{a} \leq 2\sqrt{6\sigma^{2} \frac{\log(T)}{N_{a_{t}}(t-1)+\lambda}} + 2\frac{\lambda \|\theta^{\star}\|_{\infty}}{\lambda + N_{a_{t}}(t-1)}. \end{align*} It is as an inequality of the second degree and thus for any $t\in[T]$, $a\in [d]$: \begin{align*} x_{1}\left(\sqrt{\frac{1}{\lambda+N_{a}(t)}}\right)^{2} + x_{2}\sqrt{\frac{1}{\lambda+N_{a}(t)}} - \Delta_{a}\geq 0, \end{align*} where $x_{1}=2\lambda \|\theta^{\star}\|_{\infty}$ and $x_{2}=2\sqrt{6\sigma^{2}\log(T)}$. Solving it yields: \begin{align*} \sqrt{\frac{1}{\lambda+N_{a}(t)}} \geq \frac{1}{2x_{1}}(-x_{2}+\sqrt{x_{2}^{2}+4\Delta_{a}x_{1}}), \end{align*} or equivalently: \begin{align*} N_{a}(T) &\leq \Big(\frac{4\lambda \|\theta^{\star}\|_{\infty}}{\sqrt{24\sigma^{2}\log(T)+8\Delta_{a}\lambda \|\theta^{\star}\|_{\infty}}-\sqrt{24\sigma^{2}\log(T)}}\Big)^{2}-\lambda \triangleq \Theta(T), \end{align*} where we used the notation $\Theta(T)$ to simplify the presentation.Therefore, under $\neg \mathcal{H}^{I}_{\text{CEN}}(\frac{1}{2})$, we have: \begin{align*} \tau_{a}(t) \leq \max(T_{0}(a),\frac{2}{p_{a}}\Theta(T)). \end{align*} This yields a conditional regret of: \begin{align*} R(T|&\neg(\mathcal{H}^{I}_{\text{CEN}}(\frac{1}{2})\cup \mathcal{H}^{\lambda}_{\text{UCB}})) \leq \sum_{a\in[d], a \neq a^{\star}}\Delta_{a}\tau_{a}(T) = \sum_{a\in[d], a \neq a^{\star}}\frac{2\Delta_{a}}{p_{a}}\max(12\log(T)+\frac{p_{a}}{2},\Theta(T)), \end{align*} where $a^{\star}\triangleq \operatorname{argmax}_{\Tilde{a}\in\mathcal{A}}\theta^{\star}_{\Tilde{a}}$ and an expected regret of: \begin{align*} \mathbb{E}[R(T,\pi_{\mathit{UCB}})] &\leq \sum_{a\in[d], a \neq a^{\star}}\frac{2\Delta_{a}}{p_{a}}\max(12\log(T)+\frac{p_{a}}{2},\Theta(T)) + \frac{d\Delta_{max}}{T} + \frac{16 d_{\mathit{eff}}\Delta_{max}}{T^{2}}. \end{align*} In particular, for the regularization $\lambda = o(\log(T))$, we have the asymptotic: \begin{align*} \Theta(T) = \Big(\frac{4\lambda \|\theta^{\star}\|_{\infty}}{\sqrt{24\sigma^{2}\log(T)+8\Delta_{a}\lambda \|\theta^{\star}\|_{\infty}}-\sqrt{24\sigma^{2}\log(T)}}\Big)^{2} = \frac{24\sigma^{2}\log(T)}{\Delta_{a}^{2}} + \frac{8\lambda \|\theta^{\star}\|_{\infty}}{2\Delta_{a}} + o(1). 
\end{align*} We thus conclude that: \begin{align*} \mathbb{E}[R(T,\pi_{\mathit{UCB}})] &\leq \mathcal{O}\Big(\log(T)\sum_{a\in[d], a \neq a^{\star}}\frac{1}{p_{a}}\max(\frac{\sigma^{2}}{\Delta_{a}},\Delta_{a})\Big). \end{align*} Again, note that our proof easily allows us to obtain high-probability bounds on the regret instead of bounds on its expected value. \end{proof} \begin{remark}\label{Hetero Finite 2} As in the instance-independent case, the previous reasoning immediately extends to a priori known heteroskedasticity and yields the upper bound: \begin{align*} \mathbb{E}[R(T,\pi_{\mathit{UCB}})] &\leq \mathcal{O}\Big(\log(T)\sum_{a\in[d], a \neq a^{\star}}\frac{1}{p_{a}}\max(\frac{\sigma_{a}^{2}}{\Delta_{a}},\Delta_{a})\Big). \end{align*} \end{remark} Next, we provide additional insights into the main result of this section. In particular, we seek to gain intuition about how policies that are adaptive to the realization of the censorship process would perform in expectation against a class of non-adaptive (i.e., offline) policies. In order to precisely derive the asymptotic behavior of such policies, we introduce and study a continuous counterpart of the original discrete policy maximization problem $\max_{\pi \in \Pi}\mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)]$. Lemma \ref{asympt_off} provides the basis for the continuous approach in the case of offline policies by leveraging concentration inequalities for the inverse Binomial distribution. We then extend this approach in the proof of Prop. \ref{Monitoring AG}. This extension enables us to provide an exact expression for the asymptotic gain of a policy class that monitors the censorship at a single point in time, as well as to estimate the gain from fully adaptive policies. \subsection{Proof of Lemma \ref{asympt_off}} \asymptoff* \begin{proof} Given the offline nature of the policy class, we have: \begin{align*} \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] = \max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\mathbb{E}\Big[\sum_{n=1}^{\tau_{a}}\frac{1}{(X^{a}_{n-1}+\lambda)^{\alpha}}\Big] = \max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\sum_{n=1}^{\tau_{a}}\mathbb{E}\Big[\frac{1}{(X^{a}_{n-1}+\lambda)^{\alpha}}\Big], \end{align*} where we have re-indexed $(N_{a}(t))$ by actions, where $(\tau_{a})_{a\in[d]}$ is a time allocation such that $\sum_{a\in[d]}\tau_{a}=T$ and where for a given action $a$, $(X^{a}_{n})_{n\leq \tau_{a}}$ are dependent random variables verifying $X^{a}_{n+1} = X^{a}_{n}+\mathcal{B}(p_{a})$ and $X^{a}_{n}\sim \mathcal{B}(n,p_{a})$.
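As a quick numerical sanity check of this quantity (not used in the argument), one can estimate the exact expected potential of an offline allocation by Monte Carlo and compare it with $d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda)$; the censorship probabilities below are hypothetical and we take $\psi_{\alpha}(x)=\frac{x^{1-\alpha}}{1-\alpha}$ for $\alpha\neq 1$ (and $\log$ for $\alpha=1$), consistently with the integral comparisons used below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def psi(x, alpha):
    """psi_alpha(x) = x^(1-alpha)/(1-alpha), or log(x) when alpha = 1."""
    return np.log(x) if alpha == 1.0 else x ** (1 - alpha) / (1 - alpha)

def mc_potential(p_a, tau_a, lam, alpha, n_mc=2_000):
    """Monte Carlo estimate of sum_{n=1}^{tau_a} E[(X_{n-1}+lam)^(-alpha)],
    with X_n ~ Binomial(n, p_a)."""
    total = 0.0
    for n in range(int(tau_a)):
        x = rng.binomial(n, p_a, size=n_mc)
        total += np.mean((x + lam) ** (-alpha))
    return total

p, alpha, lam, T = np.array([0.9, 0.5, 0.2]), 0.5, 1.0, 3_000
d_eff = np.sum(1.0 / p)
tau = np.round(T / (p * d_eff)).astype(int)   # offline allocation tau_a = T/(p_a d_eff)

exact = sum(mc_potential(p_a, t_a, lam, alpha) for p_a, t_a in zip(p, tau))
approx = d_eff * psi(T / d_eff + lam, alpha)
print(f"Monte Carlo potential: {exact:.1f}   d_eff*psi_alpha(T/d_eff+lam): {approx:.1f}")
\end{verbatim}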
To lower bound this quantity, we fix a time allocation $(\tau_{a})_{a\in[d]}$ and use the fact that $x\mapsto x^{-\alpha}$ is convex together with Jensen's inequality to deduce: \begin{align*} \sum_{n=1}^{\tau_{a}}\mathbb{E}\Big[\frac{1}{(X^{a}_{n-1}+\lambda)^{\alpha}}\Big] &\geq \sum_{n=1}^{\tau_{a}}\frac{1}{(\mathbb{E}[X^{a}_{n-1}]+\lambda)^{\alpha}} = \sum_{n=1}^{\tau_{a}}\frac{1}{(p_{a}(n-1)+\lambda)^{\alpha}} = \frac{1}{\lambda^{\alpha}} + \sum_{n=1}^{\tau_{a}-1}\frac{1}{(p_{a}n+\lambda)^{\alpha}}. \end{align*} We then leverage the fact that, $u \mapsto (p_{a}u+\lambda)^{-\alpha}$ being decreasing, $(p_{a}x+\lambda)^{-\alpha}\geq \int_{x}^{x+1}(p_{a}u+\lambda)^{-\alpha}\partial u$, to deduce: \begin{align*} \sum_{n=1}^{\tau_{a}}\mathbb{E}\Big[\frac{1}{(X^{a}_{n-1}+\lambda)^{\alpha}}\Big] \geq \frac{1}{\lambda^{\alpha}} + \int_{1}^{\tau_{a}}\frac{1}{(p_{a}x+\lambda)^{\alpha}}\partial x = \frac{1}{\lambda^{\alpha}} + \frac{1}{p_{a}}\Big[\psi_{\alpha}(p_{a}\tau_{a}+\lambda)-\psi_{\alpha}(p_{a}+\lambda)\Big], \end{align*} and therefore, for any time allocation $(\tau_{a})_{a\in[d]}$, we have: \begin{align*} \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] \geq \frac{d}{\lambda^{\alpha}} + \sum_{a\in[d]}\frac{1}{p_{a}}\Big[\psi_{\alpha}(p_{a}\tau_{a}+\lambda)-\psi_{\alpha}(p_{a}+\lambda)\Big]. \end{align*} Although the maximum over time allocations is given by Lemma \ref{Optimization Lemma Finite}, we simply use the allocation $(\frac{T}{p_{a}d_{\mathit{eff}}})_{a\in[d]}$ to deduce: \begin{align*} \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] \geq d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda)+\frac{d}{\lambda^{\alpha}} -\sum_{a\in[d]}\frac{1}{p_{a}}\psi_{\alpha}(p_{a}+\lambda). \end{align*} By making the distinction between $\alpha=1$ and $\alpha<1$ to obtain the explicit expression of $\psi_{\alpha}$, we then show that the right-hand side is equivalent to $d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda)$. The proof of the upper bound is more involved. In proving it, let us first assume the following: \begin{claim}\label{concentration Bin} For all $a \in [d]$, $n\geq 1$, there exists a constant $C_{2}^{a}$ such that: \begin{align*} \mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] \leq (1+\frac{C_{2}^{a}}{(n p_{a})^{1/4}})\frac{1}{(p_{a}n+\lambda)^{\alpha}}. \end{align*} \end{claim} Given this result, we deduce: \begin{align*} \sum_{n=1}^{\tau_{a}-1}\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] \leq \sum_{n=1}^{\tau_{a}-1} (1+\frac{C_{2}^{a}}{(np_{a})^{1/4}})\frac{1}{(p_{a}n+\lambda)^{\alpha}}. \end{align*} Therefore, we have: \begin{align*} \max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\sum_{n=1}^{\tau_{a}}\mathbb{E}\Big[\frac{1}{(X^{a}_{n-1}+\lambda)^{\alpha}}\Big] &\leq \frac{d}{\lambda^{\alpha}}+ \max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\sum_{n=1}^{\tau_{a}}\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] \\ &\leq \sum_{a\in[d]}\frac{1}{\lambda^{\alpha}} + \max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\sum_{n=1}^{\tau_{a}}\frac{1}{(p_{a}n+\lambda)^{\alpha}} \\&\quad \quad \quad+ (\max_{a\in[d]}C_{2}^{a})\max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\sum_{n=1}^{\tau_{a}}\frac{1}{(p_{a}n)^{1/4}}\frac{1}{(p_{a}n+\lambda)^{\alpha}}.
\end{align*} We first consider the second maximization problem and note that: \begin{align*} \max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\sum_{n=1}^{\tau_{a}}\frac{1}{(p_{a}n)^{1/4}}\frac{1}{(p_{a}n+\lambda)^{\alpha}} &\leq \lambda^{1/4} \max_{(\tau_{a})_{a\in[d]}}\sum_{a\in[d]}\sum_{n=1}^{\tau_{a}}\frac{1}{(p_{a}n+\lambda)^{\alpha+1/4}} \\ &= \mathcal{O}(d_{\mathit{eff}}\psi_{\alpha+\frac{1}{4}}(\frac{T}{d_{\mathit{eff}}}+\lambda)) \\ &= o(d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda)), \end{align*} where we used an integral comparison and Lemma \ref{Optimization Lemma Finite} to deduce the $\mathcal{O}$ scaling and the fact that $\alpha \in ]0,1]$ to deduce the $o$ scaling. Similarly, we know that the first maximization problem scales as $d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda)$ through another integral comparison and use of Lemma \ref{Optimization Lemma Finite}. Given this, we conclude that the upper bound is equivalent to $d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda)$. Thanks to these two results, we finally affirm that: \begin{align*} \displaystyle \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] \sim d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda). \end{align*} The last step needed is to prove Claim \ref{concentration Bin}. In doing so, we extend Lemma $5.3$ of \cite{thesesarlot} to more general inverse power functions (i.e., $\alpha \neq 1$) with regularization $\lambda>0$. We first introduce a tuning parameter $u\geq 0$ and write: \begin{align*} (\mathbb{E}[X^{a}_{n}]+\lambda)^{\alpha}\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] &= (n p_{a}+\lambda)^{\alpha} \Big(\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\mathbf{1}\{X^{a}_{n}\leq u\mathbb{E}[X^{a}_{n}]\}\Big] \\&\quad \quad \quad+ \mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\mathbf{1}\{X^{a}_{n}> u \mathbb{E}[X^{a}_{n}]\}\Big]\Big) \\ &\leq \frac{(n p_{a}+\lambda)^{\alpha}}{\lambda^{\alpha}}\mathbb{P}(X^{a}_{n}\leq u\cdot n p_{a}) + (\frac{n p_{a}+\lambda}{u\cdot n p_{a}+\lambda})^{\alpha}. \end{align*} Using a Bernstein inequality for Binomial variables, we have for all $\theta>0$ and $n\in \mathbb{N}$: \begin{align*} \mathbb{P}\left(X_{n}^{a} \leq\left(1-\sqrt{2 \theta}-\frac{\theta}{3}\right) n p_{a}\right) \leq e^{-\theta n p_{a}}. \end{align*} Thus, for all $0<\theta \leq \frac{3(\sqrt{5}-\sqrt{3})^{2}}{2}$, by setting $u \equiv (1-\sqrt{2 \theta}-\frac{\theta}{3})\geq 0$, we obtain that: \begin{align*} (\mathbb{E}[X^{a}_{n}]+\lambda)^{\alpha}\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] \leq \frac{(n p_{a}+\lambda)^{\alpha}}{\lambda^{\alpha}}e^{-\theta n p_{a}} + \Big(\frac{n p_{a}+\lambda}{(1-\sqrt{2 \theta}-\frac{\theta}{3})n p_{a}+\lambda}\Big)^{\alpha}.
\end{align*} By taking $\theta = A\frac{\log(n p_{a}+\lambda)}{n p_{a}}$ for another tunable parameter $A$ and for $n$ large enough to ensure $A\frac{\log(n p_{a}+\lambda)}{n p_{a}}\leq \frac{3(\sqrt{5}-\sqrt{3})^{2}}{2}$, this yields: \begin{align*} (\mathbb{E}[X^{a}_{n}]+\lambda)^{\alpha}\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] &\leq \frac{(n p_{a}+\lambda)^{\alpha}}{\lambda^{\alpha}(np_{a}+\lambda)^{A}} + \Big(\frac{n p_{a}+\lambda}{(1-\sqrt{2 A\frac{\log(np_{a}+\lambda)}{n p_{a}}}-A\frac{\log(n p_{a}+\lambda)}{3 n p_{a}})n p_{a}+\lambda}\Big)^{\alpha} \\ &= \frac{(n p_{a}+\lambda)^{\alpha}}{\lambda^{\alpha}(np_{a}+\lambda)^{A}} + \Big(\frac{n p_{a}+\lambda}{n p_{a}+\lambda -\sqrt{2 A n p_{a} \log(n p_{a}+\lambda)}-\frac{A\log(n p_{a}+\lambda)}{3}}\Big)^{\alpha}. \end{align*} For $n$ sufficiently large to ensure $\frac{3\sqrt{2 A n p_{a} \log(np_{a}+\lambda)}+A\log(np_{a}+\lambda)}{3(np_{a}+\lambda)}\leq 1/2$ and given that $\alpha \in ]0,1]$, we then have: \begin{align*} \Big(\frac{1}{1 -\frac{3\sqrt{2 A n p_{a} \log(np_{a}+\lambda)}+A\log(np_{a}+\lambda)}{3(np_{a}+\lambda)}}\Big)^{\alpha} &\leq \Big(1 + 2\frac{3\sqrt{2 A n p_{a} \log(np_{a}+\lambda)}+A\log(np_{a}+\lambda)}{3(np_{a}+\lambda)} \Big)^{\alpha}\\ &\leq 1 + 2\alpha \frac{3\sqrt{2 A n p_{a} \log(np_{a}+\lambda)}+A\log(np_{a}+\lambda)}{3(np_{a}+\lambda)}. \end{align*} To conclude, we take $A\equiv(\alpha+1)$, which ensures that for $n$ sufficiently large, we obtain: \begin{align*} (\mathbb{E}[X^{a}_{n}]+\lambda)^{\alpha}\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] &\leq 1 + 2\alpha \frac{3\sqrt{2 (\alpha+1) n p_{a} \log(np_{a}+\lambda)}+(\alpha+1)\log(np_{a}+\lambda)}{3(np_{a}+\lambda)} \\ &\quad \quad \quad + \frac{1}{\lambda^{\alpha}(np_{a}+\lambda)}. \end{align*} The leading order of the correction term is $\sqrt{\frac{\log(n)}{n}}=o(n^{-1/4})$ and therefore, we conclude that there exists a constant $C_{2}^{a}$, depending on $\lambda, \alpha$ and $p_{a}$, such that for all $n\geq 1$: \begin{align*} (\mathbb{E}[X^{a}_{n}]+\lambda)^{\alpha}\mathbb{E}\Big[\frac{1}{(X^{a}_{n}+\lambda)^{\alpha}}\Big] \leq 1 + \frac{C_{2}^{a}}{(n p_{a})^{1/4}}, \end{align*} where $C_{2}^{a}$ is artificially increased to remove the two lower-bound conditions on $n$. \end{proof} \subsection{Proof of Prop. \ref{Monitoring AG}} \MonitoringAG* Thus, we find that the power of a single monitoring is sufficient to ensure almost the same gain as adaptivity, i.e., constant monitoring. The linear dependency in $T_{0}$ (due to the linear increase of variance in Binomial models) is also surprising. In the non-asymptotic regime, it still holds, but for $\beta$ verifying $0<\beta_{-}\leq \beta\leq \beta_{+}<1$ for given $(\beta_{-},\beta_{+})$. We also observe a more general concavity property of the single-monitoring gain, seen as a function of $T_{0}$, with limits equal to $0$ at the endpoints of the interval. We conjecture that this concavity is likely to turn into a submodular dependency for several monitoring shots. \begin{proof} \textbf{Single Monitoring:} We first prove a slightly extended version of (\ref{One-Shot}) by considering a monitoring at time $T_{0}$ and we recover the results of Prop. \ref{Monitoring AG} by setting $T_{0}\equiv \beta T$, for a given $\beta \in ]0,1[$.
For the first step of the proof, we consider the continuous approximation of $\displaystyle \max_{\pi \in \Pi_{\text{single}}(T_{0})}\mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)]$ given by the optimization problem over continuous variables: \begin{align*} \max_{\tau_{a}(T_{0}),\tau_{a}(T)}& \mathbb{E}\Big[\sum_{a\in[d]}\frac{1}{p_{a}}[\psi_{\alpha}(N_{a}(T))-\psi_{\alpha}(N_{a}(T_{0}))] + \sum_{a\in[d]}\frac{1}{p_{a}}[\psi_{\alpha}(N_{a}(T_{0}))-\psi_{\alpha}(\lambda)]\Big]\\ \tag{$\mathcal{S}\mathcal{M}$} \label{SM} \textrm{s.t.} & \sum_{a\in[d]}\tau_{a}(T_{0}) = T_{0},\\ & \sum_{a\in[d]}\tau_{a}(T) = T, \\ & \forall a \in [d],\quad \tau_{a}(T)\geq \tau_{a}(T_{0}). \end{align*} In \ref{SM}, the single-monitoring player initially commits to an allocation of the first $T_{0}$ rounds through the policy $(\tau_{a}(T_{0}))_{a\in[d]}$, with the resulting gain expressed as the second term of the maximization problem. The player then observes the realization $N_{a}(T_{0})\sim \mathcal{B}(\tau_{a}(T_{0}),p_{a})$ and allocates the remaining $T-T_{0}$ budget through the allocation $(\tau_{a}(T))_{a\in[d]}$, with the resulting gain expressed as the first term of the maximization problem. Therefore, the single-monitoring gain assesses the value of observing the deviation of $N_{a}(T_{0})$ from its expectation $\tau_{a}(T_{0})p_{a}$. In an analogous way, we then introduce the continuous approximation of $\displaystyle \max_{\pi \in \Pi_{\text{off}}}\mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)]$ given by: \begin{align*} \max_{\tau_{a}(T)}& \quad \mathbb{E}\Big[\sum_{a\in[d]}\frac{1}{p_{a}}[\psi_{\alpha}(N_{a}(T))-\psi_{\alpha}(\lambda)]\Big]\\ \tag{$\mathcal{O}\mathcal{F}\mathcal{F}$} \label{OFF} \textrm{s.t.} & \sum_{a\in[d]}\tau_{a}(T) = T. \end{align*} In \ref{OFF}, $(N_{a}(T_{0}))_{a\in[d]}$ is not observed and thus cannot be leveraged by the offline player to adapt the second part of the allocation. In what follows, we use $\mathbf{E}[\mathbf{N}]$ and $\mathbf{V}[\mathbf{N}]$ to denote respectively the mean and variance of $\mathbf{N}$, the empirical discrete distribution over $(N_{a}(T_{0}))_{a\in[d]}$ with associated weights $(1/p_{a}d_{\mathit{eff}})_{a\in[d]}$. Given a realization of $(N_{a}(T_{0}))_{a\in[d]}$, we use Lemma \ref{Optimization Lemma Finite} to deduce the optimal choice of $(\tau_{a}(T))_{a\in[d]}$ in \ref{SM} and the resulting expected conditional gain: \begin{align*} \sum_{a\in[d]}\underbrace{\frac{1}{p_{a}}\Big[ \psi_{\alpha}(\frac{T-T_{0}}{d_{\mathit{eff}}}+\mathbf{E}[\mathbf{N}]) -\psi_{\alpha}(N_{a}(T_{0}))\Big]}_{\textit{Gain between $T_{0}$ and $T$ for arm $a$}} + \sum_{a\in[d]}\underbrace{\frac{1}{p_{a}}\Big[ \psi_{\alpha}(N_{a}(T_{0})) -\psi_{\alpha}(\lambda)\Big]}_{\textit{Gain between $0$ and $T_{0}$ for arm $a$}}, \end{align*} where the formula is valid under the assumption $\forall a \in [d]$, $T-T_{0} \geq d_{\mathit{eff}}(N_{a}(T_{0})-\mathbf{E}[\mathbf{N}])$. Such an assumption encodes the fact that the remaining budget $T-T_{0}$ should be sufficient to correct the observed deviations. Logically, we know that in expectation $\mathbb{E}[N_{a}(T_{0})-\mathbf{E}[\mathbf{N}]] = 0$, that is, no systematic deviation is expected.
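For concreteness, the following minimal sketch (illustrative only; the censorship probabilities are hypothetical) implements the rebalancing behind this conditional gain: given a realization of $N_{a}(T_{0})$, the single-monitoring player spends the remaining budget so that every arm ends, in expectation, at the common level $\frac{T-T_{0}}{d_{\mathit{eff}}}+\mathbf{E}[\mathbf{N}]$, and one can check that this exactly exhausts the budget $T-T_{0}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.9, 0.5, 0.2])              # hypothetical censorship probabilities
d_eff = np.sum(1.0 / p)
T, T0 = 10_000, 4_000

tau0 = T0 / (p * d_eff)                    # first-phase allocation tau_a(T0)
N0 = rng.binomial(tau0.astype(int), p)     # observed censored counts at the monitoring time
EN = np.sum(N0 / p) / d_eff                # weighted mean E[N] of the empirical distribution

# Second phase: bring every arm, in expectation, to the common target (T-T0)/d_eff + E[N].
extra = ((T - T0) / d_eff + EN - N0) / p
tau_final = tau0 + extra

print("second-phase budget used:", round(extra.sum(), 2), "(target:", T - T0, ")")
print("final allocation tau_a(T):", np.round(tau_final, 1))
\end{verbatim}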
For all realizations of the randomness, the following crude deterministic upper bound holds: \begin{align*} d_{\mathit{eff}}(N_{a}(T_{0})-\mathbf{E}[\mathbf{N}]) \leq d_{\mathit{eff}}(1-\frac{1}{d_{\mathit{eff}}p_{a}})\frac{T_{0}}{p_{a}d_{\mathit{eff}}}, \end{align*} which in turn imposes: \begin{align*} \frac{T_{0}}{T} \leq \min_{a\in[d]}\frac{1}{1 + \frac{d_{\mathit{eff}}-\frac{1}{p_{a}}}{p_{a}d_{\mathit{eff}}}}. \end{align*} For instance, in the uniform censorship model, this yields the condition $\frac{T_{0}}{T} \leq \frac{dp}{dp + d-1}$. Nevertheless, this is overly conservative and we can obtain considerably stronger results by considering high-probability concentration results on $N_{a}(T_{0})$. Indeed, thanks to Chernoff bounds for the Binomial distribution, we have for $\delta \equiv T_{0}^{-1/4}$, that with probability at least $1-2d\exp\{-\delta^{2}T_{0}/(3d_{\mathit{eff}})\}$, for all $a$, $(1-\delta)T_{0}/d_{\mathit{eff}} \leq N_{a}(T_{0})\leq (1+\delta)T_{0}/d_{\mathit{eff}}$. In particular, this yields $(1-\delta)T_{0}/d_{\mathit{eff}}\leq \mathbf{E}[\mathbf{N}]\leq (1+\delta)T_{0}/d_{\mathit{eff}}$. Under this event, we have $d_{\mathit{eff}}(N_{a}(T_{0})-\mathbf{E}[\mathbf{N}])\leq 2\delta T_{0}$, which imposes $T\geq (1+2\delta)T_{0} = T_{0} + 2T_{0}^{3/4}$. In particular, for $T_{0}\equiv \beta T$, where $\beta \in ]0,1[$, such a condition is always verified for $T$ large enough. On the other hand, still using Lemma \ref{Optimization Lemma Finite}, we write the conditional expected gain of the offline policy on this same realization of $(N_{a}(T_{0}))_{a\in[d]}$ for \ref{OFF} as: \begin{align*} \sum_{a\in[d]}\underbrace{\frac{1}{p_{a}}\Big[ \psi_{\alpha}(\frac{T-T_{0}}{d_{\mathit{eff}}}+N_{a}(T_{0})) -\psi_{\alpha}(N_{a}(T_{0}))\Big]}_{\textit{Gain between $T_{0}$ and $T$ for arm $a$}} + \sum_{a\in[d]}\underbrace{\frac{1}{p_{a}}\Big[ \psi_{\alpha}(N_{a}(T_{0})) -\psi_{\alpha}(\lambda)\Big]}_{\textit{Gain between $0$ and $T_{0}$ for arm $a$}}, \end{align*} where $\psi_{\alpha}(N_{a}(T_{0}))$ is artificially introduced. The difference between the two comes from the possibility for the monitoring policy to homogenize the realized $N_{a}(T_{0})$ into a uniform $\mathbf{E}[\mathbf{N}]$. The random difference $\mathcal{G}_{\mathit{single}}(T_{0})$, seen as a function of the realization of $(N_{a}(T_{0}))$, is then equal to: \begin{align*} \mathcal{G}_{\mathit{single}}(T_{0}) &\triangleq \sum_{a\in[d]}\frac{1}{p_{a}}[\psi_{\alpha}(\frac{T-T_{0}}{d_{\mathit{eff}}}+\mathbf{E}[\mathbf{N}])-\psi_{\alpha}(\frac{T-T_{0}}{d_{\mathit{eff}}}+N_{a}(T_{0}))] \\ &= d_{\mathit{eff}}\Big[\Bar{\psi}_{\alpha}(\mathbf{E}[\mathbf{N}])-\mathbf{E}[\Bar{\psi}_{\alpha}(\mathbf{N})]\Big], \end{align*} which is exactly the Jensen gap of the concave function $\Bar{\psi}_{\alpha}: x \mapsto \psi_{\alpha}(\frac{T-T_{0}}{d_{\mathit{eff}}}+x)$. The main insight is that this gap is then asymptotically equivalent to: \begin{align*} \Bar{\psi}_{\alpha}(\mathbf{E}[\mathbf{N}])-\mathbf{E}[\Bar{\psi}_{\alpha}(\mathbf{N})]\sim -\frac{\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])}{2}\mathbf{V}[\mathbf{N}], \end{align*} where the RHS is positive, given that $\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])$ is negative. To show this, we use the original proof of Jensen's inequality and introduce the interval $I\triangleq [\min_{a} N_{a}(T_{0}),\max_{a}N_{a}(T_{0})]$ to leverage the mean value theorem.
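Before carrying out this argument, the following small numerical sketch (illustrative only; hypothetical probabilities, $\alpha=\nicefrac{1}{2}$) compares, for a single draw of $(N_{a}(T_{0}))$, the Jensen gap $\Bar{\psi}_{\alpha}(\mathbf{E}[\mathbf{N}])-\mathbf{E}[\Bar{\psi}_{\alpha}(\mathbf{N})]$ with its second-order approximation $-\frac{\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])}{2}\mathbf{V}[\mathbf{N}]$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.9, 0.5, 0.2])                      # hypothetical censorship probabilities
alpha, T, T0 = 0.5, 100_000, 40_000
d_eff = np.sum(1.0 / p)
w = 1.0 / (p * d_eff)                              # weights of the empirical law of N

def psi_bar(x):                                    # psi_alpha((T-T0)/d_eff + x), alpha != 1
    return ((T - T0) / d_eff + x) ** (1 - alpha) / (1 - alpha)

def psi_bar_2(x):                                  # second derivative of psi_bar
    return -alpha * ((T - T0) / d_eff + x) ** (-1 - alpha)

N0 = rng.binomial((T0 / (p * d_eff)).astype(int), p)
EN = np.sum(w * N0)
VN = np.sum(w * (N0 - EN) ** 2)

gap = psi_bar(EN) - np.sum(w * psi_bar(N0))
second_order = -psi_bar_2(EN) * VN / 2
print(f"Jensen gap: {gap:.3e}   second-order approximation: {second_order:.3e}")
\end{verbatim}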
This then yields: \begin{align*} \frac{\min_{y\in I}-\Bar{\psi}^{(2)}_{\alpha}(y)}{-\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])}\leq 2\frac{\Bar{\psi}_{\alpha}(\mathbf{E}[\mathbf{N}])-\mathbf{E}[\Bar{\psi}_{\alpha}(\mathbf{N})]}{-\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])\mathbf{V}[\mathbf{N}]} \leq \frac{\max_{y\in I}-\Bar{\psi}^{(2)}_{\alpha}(y)}{-\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])}. \end{align*} Whenever $T_{0}$ is a constant independent of $T$, by explicitly writing the definition of the upper and lower bounds, we have almost surely as $T\rightarrow+\infty$: \begin{align*} \frac{\min_{y\in I}-\Bar{\psi}^{(2)}_{\alpha}(y)}{-\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])} \rightarrow 1 \quad \text{and} \quad \frac{\max_{y\in I}-\Bar{\psi}^{(2)}_{\alpha}(y)}{-\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])} \rightarrow 1. \end{align*} Difficulties arise when $T_{0}$ is a function of $T$, as in the statement of the result where $T_{0}\equiv\beta T$. By considering the same concentration event as the one introduced above, we have: \begin{align*} \frac{\min_{y\in I}-\Bar{\psi}^{(2)}_{\alpha}(y)}{-\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])} &= \min_{y\in I} \Big(\frac{T-T_{0}+d_{\mathit{eff}}\mathbf{E}[\mathbf{N}]}{T-T_{0}+d_{\mathit{eff}}y}\Big)^{1+\alpha} \geq \Big(\frac{T-T_{0}+(1-\delta)T_{0}}{T-T_{0}+(1+\delta)T_{0}}\Big)^{1+\alpha} \\&= \Big(\frac{T-\delta T_{0}}{T+\delta T_{0}}\Big)^{1+\alpha} = \Big(\frac{T-T_{0}^{3/4}}{T+T_{0}^{3/4}}\Big)^{1+\alpha} \rightarrow 1, \end{align*} and similarly: \begin{align*} \frac{\max_{y\in I}-\Bar{\psi}^{(2)}_{\alpha}(y)}{-\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])} &= \max_{y\in I} \Big(\frac{T-T_{0}+d_{\mathit{eff}}\mathbf{E}[\mathbf{N}]}{T-T_{0}+d_{\mathit{eff}}y}\Big)^{1+\alpha} \leq \Big(\frac{T-T_{0}+(1+\delta)T_{0}}{T-T_{0}+(1-\delta)T_{0}}\Big)^{1+\alpha} \\ &= \Big(\frac{T+\delta T_{0}}{T-\delta T_{0}}\Big)^{1+\alpha} =\Big(\frac{T+ T_{0}^{3/4}}{T- T_{0}^{3/4}}\Big)^{1+\alpha} \rightarrow 1. \end{align*} Thus, thanks to the exponential concentration, we conclude that: \begin{align*} \mathbb{E}[\mathcal{G}_{\mathit{single}}(T_{0})] = d_{\mathit{eff}} \mathbb{E}[\Bar{\psi}_{\alpha}(\mathbf{E}[\mathbf{N}])-\mathbf{E}[\Bar{\psi}_{\alpha}(\mathbf{N})]]\sim -\frac{d_{\mathit{eff}}}{2}\mathbb{E}[\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])\mathbf{V}[\mathbf{N}]]. \end{align*} Next, we affirm that: \begin{align*} \mathbb{E}[\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])\mathbf{V}[\mathbf{N}]] &\overset{a)}{\sim} \mathbb{E}\Big[\Bar{\psi}^{(2)}_{\alpha}(\mathbf{E}[\mathbf{N}])\Big]\mathbb{E}\Big[\mathbf{V}[\mathbf{N}]\Big] \overset{b)}{\sim} \Bar{\psi}^{(2)}_{\alpha}(\mathbb{E}[\mathbf{E}[\mathbf{N}]])\mathbb{E}\Big[\mathbf{V}[\mathbf{N}]\Big], \end{align*} where $a)$ leverages the previous bounds and where we use for $b)$ similar concentration results on the inverse Binomial, as in the proof of Lemma \ref{asympt_off}. We then use the fact that $\mathbb{E}[\mathbf{E}[\mathbf{N}]]= T_{0}/d_{\mathit{eff}}$ to conclude: \begin{align*} \mathbb{E}[\mathcal{G}_{\mathit{single}}(T_{0})] \sim -\frac{d_{\mathit{eff}}}{2}\psi^{(2)}_{\alpha}(\frac{T}{d_{\mathit{eff}}})\mathbb{E}\Big[\mathbf{V}[\mathbf{N}]\Big].\tag{$\mathcal{V}$}\label{OS_Gain} \end{align*} We consider this result to be one of the main insights on adaptivity, as it shows that, at first order, the gain grows linearly in the expected value of the empirical variance of the arm allocation process.
In opposition to the single-monitoring policy, the adaptive policy continuously exploits this variance. Yet, in doing so, it creates a second-order induced variance; we then show that this phenomenon is negligible at first order. To reach a result with explicit dependency on the censorship probabilities $(p_{a})_{a\in[d]}$, we note that: \begin{align*} \mathbb{E}[\mathbf{V}[\mathbf{N}]] = \frac{T_{0}}{d_{\mathit{eff}}^{3}}\sum_{a\in [d]}\frac{1}{p_{a}}\Big[\sum_{b\neq a}\frac{1-p_{b}}{p_{b}}\Big], \end{align*} and therefore: \begin{align*} \mathbb{E}\Big[\mathcal{G}_{\mathit{single}}(T_{0})\Big] &\sim \frac{\alpha}{2d_{\mathit{eff}}^{2}}(\frac{d_{\mathit{eff}}}{T})^{1+\alpha} T_{0}\sum_{a\in [d]}\frac{1}{p_{a}}\Big[\sum_{b\neq a}\frac{1-p_{b}}{p_{b}}\Big] = \gamma_{\alpha}(\mathbf{p})\frac{T_{0}}{T^{1+\alpha}}. \end{align*} In particular, for $T_{0}=\beta T$, this yields $\mathbb{E}\Big[\mathcal{G}_{\mathit{single}}(\beta T)\Big] = \gamma_{\alpha}(\mathbf{p})\frac{\beta}{T^{\alpha}} + o(\frac{1}{T^{\alpha}})$. The second step closely mirrors the proof of Lemma \ref{asympt_off} and consists in justifying the use of the continuous approximation for the two optimization problems (\ref{OFF}) and (\ref{SM}). As in Lemma \ref{asympt_off}, we show that the difference between the continuous and discrete optimizations results in at most a second-order gain of $o(\frac{1}{T^{\alpha}})$, even when maximized as a decoupled quantity. By combining those two results, we finally deduce, as announced: \begin{align*} \max_{\pi \in \Pi_{\text{single}}(\beta T)} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] - \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] \sim \mathbb{E}\Big[\mathcal{G}_{\mathit{single}}(\beta T)\Big] = \gamma_{\alpha}(\mathbf{p})\frac{\beta}{T^{\alpha}} + o(\frac{1}{T^{\alpha}}). \end{align*} \textbf{Complete Adaptivity:} We next tackle the proof of (\ref{Constant}), where the main idea is to show that a formula analogous to (\ref{OS_Gain}) holds for the variance of a suitable random process. First, using the same proof technique as in Sec. 4 of \cite{decayingB}, thanks to the decaying property of the reward as a function of the number of realizations, we show that the optimal adaptive policy is the greedy policy, that is, the policy that picks at time $t$ the action: \begin{align*} a_{t} \triangleq \operatorname{argmax}_{a\in\mathcal{A}_{t}}(N_{a}(t-1)+\lambda)^{-\alpha}, \end{align*} with arbitrary but consistent tie-breaking. In particular, this ensures that for all actions $a,b$ and time $t$, we have $|N_{a}(t)-N_{b}(t)|\leq 1$. We then introduce the offline and adaptive allocations: \begin{align*} \tau_{a}^{off}(T) &\triangleq \frac{T}{p_{a}d_{eff}} \quad \text{and} \quad \tau_{a}^{on}(T) \triangleq \sum_{i=1}^{N_{a}(T)} \Big(\frac{1}{p_{a}}+\xi^{a}_{i}\Big)= \frac{N_{a}(T)}{p_{a}} + S^{a}(N_{a}(T)), \end{align*} where $\frac{1}{p_{a}}+\xi^{a}_{i}$ is the total random number of allocations it takes for action $a$ to be realized at its $i^{th}$ selection, $\xi^{a}_{i}$ being the centered deviation with respect to the expected value $\frac{1}{p_{a}}$. Of key importance in our proof is $S^{a}(N_{a}(T))$, the cumulative deviation defined as $\sum_{i=1}^{N_{a}(T)}\xi^{a}_{i}$. Note that it is well approximated in the large-$T$ regime as a random sum of $N_{a}(T)$ i.i.d. centered geometric variables of parameter $p_{a}$.
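Before turning to the fully adaptive case, the following Monte Carlo sketch (illustrative only; hypothetical probabilities) provides a quick numerical check of the closed-form expression for $\mathbb{E}[\mathbf{V}[\mathbf{N}]]$ and of the resulting constant $\gamma_{\alpha}(\mathbf{p})$ introduced above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.9, 0.5, 0.2])              # hypothetical censorship probabilities
alpha, T0 = 0.5, 50_000
d_eff = np.sum(1.0 / p)
w = 1.0 / (p * d_eff)
tau0 = (T0 / (p * d_eff)).astype(int)

# Monte Carlo estimate of E[V[N]].
emp = []
for _ in range(5_000):
    N0 = rng.binomial(tau0, p)
    EN = np.sum(w * N0)
    emp.append(np.sum(w * (N0 - EN) ** 2))

S = np.sum((1 - p) / p)
closed_form = T0 / d_eff**3 * np.sum(1.0 / p * (S - (1 - p) / p))
gamma = alpha / (2 * d_eff**2) * d_eff ** (1 + alpha) * np.sum(1.0 / p * (S - (1 - p) / p))

print(f"E[V[N]]  Monte Carlo: {np.mean(emp):.1f}   closed form: {closed_form:.1f}")
print(f"gamma_alpha(p) = {gamma:.3f}")
\end{verbatim}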
Given this decomposition and the total budget constraint, we have the simple relation $\tau_{a}^{on}(T)=\tau_{a}^{off}(T) + \frac{1}{d_{eff}}\sum_{b}[\frac{S^{a}(N_{a}(T))}{p_{b}}-\frac{S^{b}(N_{b}(T))}{p_{a}}]$. A relevant quantity to introduce is the random allocation difference $\Delta \tau_{a,b}$ between actions $a$ and $b$ defined by: \begin{align*} \Delta \tau_{a,b} \triangleq \frac{1}{d_{eff}} (\frac{S^{a}(N_{a}(T))}{p_{b}}-\frac{S^{b}(N_{b}(T))}{p_{a}}). \end{align*} Using this notation, we simply have $\tau_{a}^{on} = \tau_{a}^{off} + \sum_{b} \Delta \tau_{a,b}$. We then introduce the random sets $I^{+}\triangleq\{a: \tau_{a}^{on}\geq \tau_{a}^{off}\}=\{a: \sum_{b} \Delta \tau_{a,b} \geq 0\}$ and $I^{-}\triangleq\{a: \tau_{a}^{on}< \tau_{a}^{off}\}=\{a: \sum_{b} \Delta \tau_{a,b} < 0\}$. On the one hand, $I^{+}$ represents the set of actions that are sampled more by the adaptive policy than by the offline policy, i.e., that lead to a gain thanks to the greedy property. On the other hand, $I^{-}$ is the set of actions under-selected by the adaptive policy, leading to a loss, although one inferior in absolute value to the resulting gain from $I^{+}$. As in the proof of the single-monitoring case, we condition on the realization $(N_{a}(T))$ and use a continuous approximation given this conditioning to study the difference in gain. Thus, for $a \in I^{+}$, we have the action gain: \begin{align*} g_{a} &\triangleq \frac{1}{p_{a}}[\psi_{\alpha}(N_{a}(T))-\psi_{\alpha}(N_{a}(T)-p_{a}\sum_{b}\Delta \tau_{a,b})] \\ &\approx \frac{1}{p_{a}}\Big[p_{a}\Big(\sum_{b} \Delta \tau_{a,b}\Big)\psi_{\alpha}^{(1)}(N_{a}(T))-\frac{(p_{a}\sum_{b} \Delta \tau_{a,b})^{2}}{2}\psi_{\alpha}^{(2)}(N_{a}(T))\Big]. \end{align*} On the other hand, for $a \in I^{-}$, we have the action loss, still under the continuous approximation: \begin{align*} l_{a}&\triangleq \frac{1}{p_{a}}[\psi_{\alpha}(N_{a}(T)+p_{a}\sum_{b}\Delta \tau_{a,b})-\psi_{\alpha}(N_{a}(T))] \\ &\approx \frac{1}{p_{a}}\Big[p_{a}\Big(\sum_{b} \Delta \tau_{a,b}\Big)\psi_{\alpha}^{(1)}(N_{a}(T))+\frac{(p_{a}\sum_{b} \Delta \tau_{a,b})^{2}}{2}\psi_{\alpha}^{(2)}(N_{a}(T))\Big]. \end{align*} By introducing $\mathcal{G}_{\text{adapt}} \triangleq \sum_{a\in I^{+}} g_{a}-\sum_{a\in I^{-}} l_{a}$, the adaptive analogue of $\mathcal{G}_{\text{single}}$, and combining the previous two results, we deduce: \begin{align*} \mathcal{G}_{\text{adapt}} &= \sum_{a\in I^{+}}\frac{1}{p_{a}}\Big[p_{a}\Big(\sum_{b} \Delta \tau_{a,b}\Big)\psi_{\alpha}^{(1)}(N_{a}(T))-\frac{(p_{a}\sum_{b} \Delta \tau_{a,b})^{2}}{2}\psi_{\alpha}^{(2)}(N_{a}(T))\Big] \\&\quad \quad - \sum_{a\in I^{-}} \frac{1}{p_{a}}\Big[p_{a}\Big(\sum_{b} \Delta \tau_{a,b}\Big)\psi_{\alpha}^{(1)}(N_{a}(T))+\frac{(p_{a}\sum_{b} \Delta \tau_{a,b})^{2}}{2}\psi_{\alpha}^{(2)}(N_{a}(T))\Big]\\ &= \sum_{a\in I^{+}} \Big(\sum_{b} \Delta \tau_{a,b}\Big)\psi_{\alpha}^{(1)}(N_{a}(T)) - \sum_{a\in I^{-}} \Big(\sum_{b} \Delta \tau_{a,b}\Big)\psi_{\alpha}^{(1)}(N_{a}(T)) \\ &\quad \quad - \frac{1}{2}\sum_{a\in[d]}p_{a}\Big(\sum_{b} \Delta \tau_{a,b}\Big)^{2}\psi_{\alpha}^{(2)}(N_{a}(T)). \end{align*} We then leverage the Taylor expansion $\psi_{\alpha}^{(1)}(N_{a}(T)) = \psi_{\alpha}^{(1)}(\Bar{N}(T)) + \psi_{\alpha}^{(2)}(\bar{N}(T))(N_{a}(T)-\bar{N}(T))$, where $\Bar{N}(T)\triangleq \sum_{a\in [d]}N_{a}(T)/d$. We know that the second term is asymptotically negligible given that $\alpha>0$ and that the difference between $\bar{N}(T)$ and $N_{a}(T)$ remains bounded with exponentially high probability, thanks to the greedy policy property.
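The following short simulation sketch (illustrative only; hypothetical probabilities) implements the greedy adaptive policy described above and shows that the censored counts indeed stay within $1$ of each other while the realized allocations $\tau_{a}^{on}$ fluctuate around $\tau_{a}^{off}=T/(p_{a}d_{\mathit{eff}})$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
p = np.array([0.9, 0.5, 0.2])              # hypothetical censorship probabilities
alpha, lam, T = 0.5, 1.0, 30_000
d_eff = np.sum(1.0 / p)

N = np.zeros(len(p))                       # censored counts N_a(t)
tau_on = np.zeros(len(p))                  # realized adaptive allocations tau_a^on
for t in range(T):
    a = int(np.argmax((N + lam) ** (-alpha)))   # greedy arm (consistent tie-breaking by index)
    tau_on[a] += 1
    N[a] += rng.random() < p[a]                 # censored observation of the chosen arm

print("max_a N_a - min_a N_a =", N.max() - N.min())   # stays <= 1 for the greedy policy
print("tau^on :", tau_on)
print("tau^off:", np.round(T / (p * d_eff), 1))
\end{verbatim}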
We combine this result with the fact that by definition $\sum_{a\in I^{+}} \sum_{b}\Delta \tau_{a,b} - \sum_{a\in I^{-}} \sum_{b}\Delta \tau_{a,b}=0$ to deduce that at first order: \begin{align*} \mathcal{G}_{\text{adapt}} = - \frac{1}{2}\psi_{\alpha}^{(2)}(\bar{N}(T))\sum_{a\in[d]}p_{a}\Big(\sum_{b} \Delta \tau_{a,b}\Big)^{2} \tag{$\mathcal{L}$}\label{AD_Gain} \end{align*} We see formula (\ref{AD_Gain}) as the adaptive analogue of (\ref{OS_Gain}). Indeed, it involves the product of the second derivative $\frac{1}{2}\psi_{\alpha}^{(2)}(\bar{N}(T))$, evaluated at a quantity concentrating around $T/d_{\mathit{eff}}$, with a variance term associated with the adaptive action allocation process. We remark that for any $a\in[d]$: \begin{align*} \mathbb{E}\Big[(\sum_{b} \Delta \tau_{a,b})^{2}\Big] &= \frac{1}{d_{\mathit{eff}}^{2}}\mathbb{E}\Big[([\sum_{ b\neq a}\frac{1}{p_{b}}]S^{a}(N_{a}(T))-\frac{1}{p_{a}}\sum_{ b\neq a}S^{b}(N_{b}(T)))^{2}\Big] \\ &= \frac{1}{d_{\mathit{eff}}^{2}}\left[(d_{\mathit{eff}}-\frac{1}{p_{a}})^{2}\mathbb{V}[S^{a}(N_{a}(T)) ]+\sum_{ b\neq a }\frac{1}{p_{a}^{2}}\mathbb{V}[S^{b}(N_{b}(T)) ]\right] \end{align*} and therefore, by summing: \begin{align*} \sum_{a\in[d]}p_{a}\mathbb{E}\Big[(\sum_{b} \Delta \tau_{a,b})^{2}\Big] &= \frac{1}{d_{\mathit{eff}}^{2}}\sum_{a\in[d]}p_{a}\Big[(\sum_{ b\neq a}\frac{1}{p_{b}})^{2}\mathbb{V}[S^{a}(N_{a}(T)) ]+\sum_{ b\neq a }\frac{1}{p_{a}^{2}}\mathbb{V}[S^{b}(N_{b}(T)) ]\Big]\\ &= \frac{1}{d_{\mathit{eff}}^{2}}\sum_{a\in[d]} \Big[p_{a}(d_{\mathit{eff}}-\frac{1}{p_{a}})^{2}+d_{\mathit{eff}}-\frac{1}{p_{a}}\Big]\mathbb{V}[S^{a}(N_{a}(T)) ]\\ &= \sum_{a\in[d]} p_{a}\frac{d_{\mathit{eff}}-\frac{1}{p_{a}}}{d_{\mathit{eff}}}\mathbb{V}[S^{a}(N_{a}(T))]. \end{align*} To obtain the leading order of $\mathbb{V}[S^{a}(N_{a}(T))]$, we use Wald's second equation and the fact that $S^{a}(N_{a}(T))$ is approximated by a sum of geometric random variables of parameter $p_{a}$, modulo an asymptotically negligible summing constraint due to the fixed total budget $T$. This yields $\mathbb{V}[S^{a}(N_{a}(T))] \sim \frac{1-p_{a}}{p_{a}^{2}}\mathbb{E}[N_{a}(T)]\sim\frac{1-p_{a}}{p_{a}^{2}}\mathbb{E}[\bar{N}(T)]$, where the last equivalence leverages again the fact that the difference between the two quantities remains bounded with exponentially high probability. We conclude, after further algebraic calculations, that: \begin{align*} \mathbb{E}[\mathcal{G}_{\text{adapt}}] &\sim -\mathbb{E}[\frac{\psi_{\alpha}^{(2)}(\bar{N}(T))}{2}]\sum_{a\in[d]} p_{a}\frac{d_{\mathit{eff}}-\frac{1}{p_{a}}}{d_{\mathit{eff}}}\frac{1-p_{a}}{p_{a}^{2}}\mathbb{E}[\bar{N}(T)] \\ &\sim \frac{\alpha}{2}(\frac{d_{\mathit{eff}}}{T})^{1+\alpha} \sum_{a\in [d]}\frac{1}{p_{a}}\Big[\sum_{b\neq a}\frac{1-p_{b}}{p_{b}}\Big]\frac{T}{d_{\mathit{eff}}^{2}} \\ &\sim \gamma_{\alpha}(\mathbf{p})\frac{1}{T^{\alpha}}. \end{align*} By justifying again that the continuous gain approximation leads to terms of order $o(\frac{1}{T^{\alpha}})$, as done in the proof of Lemma \ref{asympt_off}, we conclude that: \begin{align*} \max_{\pi \in \Pi_{\text{adapt}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] - \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] = \gamma_{\alpha}(\mathbf{p})\frac{1}{T^{\alpha}} + o(\frac{1}{T^{\alpha}}).
\end{align*} \end{proof} \subsection{Proof of Lemma \ref{Potential Reduction Linear}} \PotentialReductionLinear* \begin{proof} We have under the event $\neg \mathcal{H}_{\text{UCB}}^{II}(\delta)$ introduced in Lemma \ref{Optimistic Lemma Linear}, and thanks to H\"older's inequality: \begin{align*} \Delta_{t}(a)\triangleq \max_{\Tilde{a}\in\mathcal{A}_{t}}\langle\theta^{\star},\Tilde{a} \rangle - \langle \theta^{\star},a_{t} \rangle \leq 2\beta_{\delta}(t-1) \|a_{t}\|_{(\mathbb{W}^{C}(t-1))^{-1}} . \end{align*} Therefore, the conditional regret is upper-bounded by: \begin{align*} R(T|\neg \mathcal{H}_{\text{UCB}}^{II}(\delta)) \leq 2\beta_{\delta}(T)\sum_{t=1}^{T}\|a_{t}\|_{(\mathbb{W}^{C}(t-1))^{-1}} = 2\beta_{\delta}(T)\Tilde{\mathbb{V}}_{\frac{1}{2}}(T,\pi), \end{align*} where we introduced $\Tilde{\mathbb{V}}_{\frac{1}{2}}(T,\pi) \triangleq \sum_{t=1}^{T}\|a_{t}\|_{(\mathbb{W}^{C}(t-1))^{-1}}$. The Cauchy-Schwarz inequality then yields $\Tilde{\mathbb{V}}_{\frac{1}{2}}(T,\pi) \leq \sqrt{T}\sqrt{\mathbb{V}_{1}(T,\pi)}$. We then introduce a deterministic upper bound $\Tilde{\beta}_{\delta}(T)$ on $\beta_{\delta}(T)$: \begin{align*} \beta_{\delta}(T) &= \sqrt{\sigma^{2} \log \left(\frac{\det(\mathbb{W}^{C}_{T})}{\det(\lambda \mathbb{I}_{d})}\right)+2\sigma^{2}\log(\frac{1}{\delta})}+\sqrt{\lambda} \|\theta^{\star}\|_{2} \\ &\leq \underbrace{\sqrt{\sigma^{2}d \log (1+\frac{T}{d\lambda})+2\sigma^{2}\log(\frac{1}{\delta})}+\sqrt{\lambda} \|\theta^{\star}\|_{2}}_{ \triangleq \Tilde{\beta}_{\delta}(T)} \\ &= \Theta(\sqrt{d\log(T)}). \end{align*} Using the concavity of the square root and Jensen's inequality, we have $\mathbb{E}[\sqrt{\mathbb{V}_{1}(T,\pi)}] \leq \sqrt{\mathbb{E}[\mathbb{V}_{1}(T,\pi)]}$. Finally, thanks to Lemma \ref{Optimistic Lemma Linear}, we conclude that: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq 2\Tilde{\beta}_{\delta}(T) \sqrt{T\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\textit{UCB}})]} + \delta T\Delta_{max}. \end{align*} \end{proof} \subsection{Statement and Proof of Lemma \ref{Optimistic Lemma Linear}} Analogous to Lemma \ref{Fail Optim Finite} for the MAB case, one key step in the proof is the introduction of the failure-of-optimism event. Nevertheless, note the difference in the choice of norm. \begin{lemma} \label{Optimistic Lemma Linear} For any $\delta \in ]0,1]$, uniform regularization $\lambda>0$ and censored action generating process $(\mathbb{W}^{C}_{t})_{t\leq T}$, let us introduce the event: \begin{align*} \mathcal{H}_{\text{UCB}}^{II}(\delta) \triangleq \Big\{\exists t \geq 0, \|\hat{\theta}^{\lambda}_{t}-\theta^{\star}\|_{\mathbb{W}^{C}_{t}} > \underbrace{\sqrt{\sigma^{2} \log \left(\frac{\det(\mathbb{W}^{C}_{t})}{\det(\lambda \mathbb{I}_{d})}\right)+2\sigma^{2}\log(\frac{1}{\delta})}+\sqrt{\lambda} \|\theta^{\star}\|_{2}}_{\triangleq\beta_{\delta}(t)}\Big\}. \end{align*} We then have $\mathbb{P}(\mathcal{H}_{\text{UCB}}^{II}(\delta))\leq \delta$. \end{lemma} \begin{proof} The proof closely mirrors the self-normalized bound for vector-valued martingales of Thm.~1 from \cite{NIPS2011_e1d5be1c}. The main subtlety is to apply the results to the censored measurable vectors $(x_{a_{t}}a_{t})$ instead of, as is classical, $(a_{t})$. This yields that with probability $1-\delta$, for all $t\geq 0$: \begin{align*} \|\sum_{n=1}^{t}\epsilon_{n}x_{a_{n}}a_{n}\|^{2}_{(\mathbb{W}_{t}^{C})^{-1}} \leq \sigma^{2}\log\frac{\det(\mathbb{W}_{t}^{C})}{\det(\lambda \mathbb{I}_{d})} + 2\sigma^{2}\log(\frac{1}{\delta}).
\end{align*} Thus, still on this event, for any $t\geq 0$ and action $a\in \mathbb{R}^{d}$, we have by definition of $\hat{\theta}^{\lambda}_{t}$ (Sec.\ref{UCB-algo}): \begin{align*} \langle a,\hat{\theta}^{\lambda}_{t}\rangle - \langle a,\theta^{\star} \rangle = \langle a, (\mathbb{W}_{t}^{C})^{-1}\sum_{n=1}^{t}\epsilon_{n}x_{a_{n}}a_{n}\rangle - \lambda \langle a, (\mathbb{W}_{t}^{C})^{-1}\theta^{\star}\rangle, \end{align*} and therefore, thanks to the Cauchy-Schwarz inequality: \begin{align*} |\langle a,\hat{\theta}^{\lambda}_{t}\rangle - \langle a,\theta^{\star} \rangle| \leq \|a\|_{(\mathbb{W}_{t}^{C})^{-1}}\Big(\|\sum_{n=1}^{t}\epsilon_{n}x_{a_{n}}a_{n}\|_{(\mathbb{W}_{t}^{C})^{-1}} + \lambda^{1/2}\|\theta^{\star}\|_{2}\Big). \end{align*} Using the previous result, for all $a\in \mathbb{B}_{d}, t\geq 0$, with probability $1-\delta$, we have: \begin{align*} |\langle a,\hat{\theta}^{\lambda}_{t}\rangle - \langle a,\theta^{\star} \rangle| \leq \sigma\sqrt{\log\Big(\frac{\det(\mathbb{W}_{t}^{C})}{\det(\lambda \mathbb{I}_{d})}\Big) + 2\log(\frac{1}{\delta})}+\lambda^{1/2}\|\theta^{\star}\|_{2}. \end{align*} To conclude, we classically plug in the value $a=\mathbb{W}_{t}^{C}(\hat{\theta}^{\lambda}_{t}-\theta^{\star})$ and divide both sides by $\|\hat{\theta}^{\lambda}_{t}-\theta^{\star}\|_{\mathbb{W}_{t}^{C}}$ to get that for all $t\geq 0$, with probability $1-\delta$, we have: \begin{align*} \|\hat{\theta}^{\lambda}_{t}-\theta^{\star}\|_{\mathbb{W}^{C}_{t}} \leq \sigma\sqrt{ \log \Big(\frac{\det(\mathbb{W}^{C}_{t})}{\det(\lambda \mathbb{I}_{d})}\Big)+2\log(\frac{1}{\delta})}+\lambda^{1/2} \|\theta^{\star}\|_{2}, \end{align*} and therefore, by definition, $\mathbb{P}(\mathcal{H}_{\text{UCB}}^{II}(\delta))\leq \delta$. \end{proof} \subsection{Proof of Prop. \ref{Potential Control Linear}} \PotentialControlLinear* \begin{proof} First, we use Lemma \ref{CEN Lemma Linear} to deduce that under the complement event $\neg\mathcal{H}_{\text{CEN}}^{II}(\delta)$: \begin{align*} \mathbb{V}_{\alpha}(T,\pi|\mathcal{H}_{\text{CEN}}^{II}(\delta)) = \sum_{t=1}^{T}\operatorname{Tr}((\mathbb{W}^{C}_{t-1})^{-\alpha}a_{t}a_{t}^{\top}) \leq c_{\delta}^{\alpha}\sum_{t=1}^{T}\operatorname{Tr}(\mathbb{W}_{t-1}^{-\alpha}a_{t}a_{t}^{\top}). \end{align*} For all $t\geq 1$, we then use the fact that $\mathbb{W}_{t}\preceq (1+\frac{1}{\lambda})\mathbb{W}_{t-1}$ to deduce $\operatorname{Tr}(\mathbb{W}_{t-1}^{-\alpha}a_{t}a_{t}^{\top}) \leq (1+\frac{1}{\lambda})^{\alpha}\operatorname{Tr}(\mathbb{W}_{t}^{-\alpha}a_{t}a_{t}^{\top})$. The last and most important step is the integral comparison: \begin{align*} \sum_{t=1}^{T}\operatorname{Tr}(\mathbb{W}_{t}^{-\alpha}a_{t}a_{t}^{\top}) \leq \int_{0}^{T} \operatorname{Tr}(\mathbb{W}(t)^{-\alpha}a(t)a(t)^{\top})\partial t = \operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-\alpha}a(t)a(t)^{\top}\partial t \Big). \end{align*} In the previous result, the continuous extension $(a(t),\mathbb{W}(t))_{t\leq T}$ of $(a_{t},\mathbb{W}_{t})_{t\in [T]}$ for a given policy $\pi$ is defined for any time $t\geq 1$ as: \begin{align*} a(t)\triangleq a_{\lfloor t\rfloor} \quad \text{and} \quad \mathbb{W}(t)\triangleq \int_{u=1}^{t}p_{a(u)}a(u)a(u)^{\top}\partial u = \mathbb{W}_{\lfloor t\rfloor} + (t-\lfloor t\rfloor)p(a_{\lceil t\rceil})a_{\lceil t\rceil}a_{\lceil t\rceil}^{\top}. \end{align*} This yields the result: \begin{align*} \mathbb{V}_{\alpha}(T,\pi|\mathcal{H}_{\text{CEN}}^{II}(\delta)) \leq c_{\delta}^{\alpha}(1+\frac{1}{\lambda})^{\alpha}\operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-\alpha}a(t)a(t)^{\top}\partial t \Big).
\end{align*} Finally, we conclude thanks to Lemma \ref{CEN Lemma Linear} that: \begin{align*} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] \leq \frac{\delta}{\lambda^{\alpha}} + C(\delta)^{\alpha} \operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-\alpha}a(t)a(t)^{\top}\partial t\Big). \end{align*} \end{proof} \begin{remark}\label{tour de force} The main \textit{tour de force} of the continuous approximation we employ is to relax the maximization problem by considering the class of continuous deterministic integrable policies, which is considerably more tractable from an analysis perspective. On the one hand, it allows us to obtain closed-form solutions for the maximization problem, whereas the discrete approach can only deal with approximations and upper bounds. On the other hand, it clearly reveals the underlying matrix function the discrete approach is approximating and hence allows us to leverage powerful integration results. We leverage this idea again in the context of Sec.~\ref{CB} to tackle the impact of censorship. To illustrate the abovementioned points, we remark that for the simpler case of the classical uncensored environment, we obtain for $\alpha>0, \alpha \neq 1$: \begin{align*} \sum_{t=1}^{T}\|a_{t}\|^{2}_{\mathbb{W}_{t-1}^{-\alpha}} &\leq \Big(\frac{\lambda+1}{\lambda}\Big)^{\alpha}\frac{\operatorname{Tr}\Big(\int_{0}^{T} \partial\mathbb{W}(t)^{1-\alpha}\Big)}{1-\alpha} = \Big(\frac{\lambda+1}{\lambda}\Big)^{\alpha}\frac{\operatorname{Tr}(\mathbb{W}^{1-\alpha}_{T}-\mathbb{W}^{1-\alpha}_{0})}{1-\alpha}. \end{align*} For $\alpha < 1$, we then have thanks to Lemma \ref{Optimization Lemma Finite} the worst-case bound $\operatorname{Tr}(\mathbb{W}_{T}^{1-\alpha}) \leq d^{\alpha}(d\lambda + T)^{1-\alpha}$ and hence: \begin{align*} \sum_{t=1}^{T}\|a_{t}\|^{2}_{\mathbb{W}_{t-1}^{-\alpha}} \leq \Big(\frac{\lambda+1}{\lambda}\Big)^{\alpha} \frac{d^{\alpha}(d\lambda + T)^{1-\alpha}-d\lambda^{1-\alpha}}{1-\alpha}. \end{align*} On the other hand, for $\alpha > 1$, we deduce: \begin{align*} \sum_{t=1}^{T}\|a_{t}\|^{2}_{\mathbb{W}_{t-1}^{-\alpha}} \leq \Big(\frac{\lambda+1}{\lambda}\Big)^{\alpha}\frac{d\lambda^{1-\alpha}}{\alpha-1}. \end{align*} Finally, for $\alpha =1$, we use the formula $\operatorname{Tr}(\log(A))=\log(\det A)$ to deduce: \begin{align*} \sum_{t=1}^{T}\|a_{t}\|^{2}_{\mathbb{W}_{t-1}^{-1}} &\leq \frac{\lambda+1}{\lambda}\int_{0}^{T}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = \frac{\lambda+1}{\lambda}\operatorname{Tr}(\log\mathbb{W}_{T}-\log\mathbb{W}_{0}) = \frac{\lambda+1}{\lambda}\log\frac{\det\mathbb{W}_{T}}{\det\mathbb{W}_{0}}\\ &\leq \frac{\lambda+1}{\lambda}\, d\log(1+\frac{T}{\lambda d}), \end{align*} where we used again Lemma \ref{Optimization Lemma Finite} to obtain the last (worst-case) upper bound. In doing so, we recover and extend the recent results of \cite{carpentier2020elliptical} in a more natural way.\footnote{Yet, we conjecture that the preliminary use of the Cauchy-Schwarz inequality in the case $\alpha > 1$ to affirm $\sum_{t=1}^{T}\|a_{t}\|_{\mathbb{W}_{t-1}^{-\alpha}}\leq \sqrt{T\sum_{t=1}^{T}\|a_{t}\|^{2}_{\mathbb{W}_{t-1}^{-\alpha}}}$ is suboptimal as it imposes a $\mathcal{O}(\sqrt{T})$ scaling.} Note that the rank-$1$ assumption is not needed in the continuous relaxation and therefore our results still hold whenever $a(t)a(t)^{\top}$ is replaced by any positive semi-definite matrix $H(t)$.
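As a purely numerical illustration of these closed-form bounds (a sketch, not used anywhere in the analysis; actions are drawn uniformly at random on the unit sphere), one may compare the realized uncensored potential with the worst-case bounds for $\alpha=\nicefrac{1}{2}$ and $\alpha=1$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d, T, lam = 5, 2_000, 1.0

def potential(alpha):
    """sum_t ||a_t||^2_{W_{t-1}^{-alpha}} for uniformly random unit actions."""
    W = lam * np.eye(d)
    total = 0.0
    for _ in range(T):
        a = rng.normal(size=d)
        a /= np.linalg.norm(a)
        vals, vecs = np.linalg.eigh(W)             # W^{-alpha} via eigendecomposition
        total += a @ ((vecs * vals ** (-alpha)) @ vecs.T) @ a
        W += np.outer(a, a)
    return total

alpha = 0.5
bound_half = ((lam + 1) / lam) ** alpha \
    * (d ** alpha * (d * lam + T) ** (1 - alpha) - d * lam ** (1 - alpha)) / (1 - alpha)
bound_one = (lam + 1) / lam * d * np.log(1 + T / (lam * d))
print(f"alpha=1/2: potential {potential(0.5):8.1f}   worst-case bound {bound_half:8.1f}")
print(f"alpha=1  : potential {potential(1.0):8.1f}   worst-case bound {bound_one:8.1f}")
\end{verbatim}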
\end{remark} \subsection{Statement of Lemma \ref{CEN Lemma Linear}} In order to prove the previous property on $\mathbb{V}_{\alpha}$, a key step mirroring the MAB case is the use of a high-confidence lower bound on the censorship process, proven using anytime matrix martingale inequalities: \begin{lemma}(\cite{adpt_cofond_Russo}) \label{CEN Lemma Linear} For any $\delta \in ]0,1]$, $\lambda>0$ and policy $\pi$, let us introduce the event: \begin{align*} \mathcal{H}_{\text{CEN}}^{II}(\delta) \triangleq \Big\{\exists t\geq 0, \mathbb{W}^{C}_{t} \prec \frac{1}{c_{\delta}} \mathbb{W}_{t}\Big\}, \end{align*} where $c_{\delta}\triangleq 8\max(\frac{\log(d/\delta)}{\lambda},1)$. We then have $\mathbb{P}(\mathcal{H}_{\text{CEN}}^{II}(\delta))\leq \delta$. \end{lemma} Note that picking $\delta \sim d/T^{2}$ as in the MAB case would lead to a constant $c_{\delta}=\Theta(\log(T))$, that is, a worsening confidence interval, unless we manage to control the initialization. One interesting technical question for future work would be to allow an initialization condition as in Lemma \ref{Derando Censo Linear} ensuring that $\mathbb{W}(T_{0})$ counterbalances $\log(d/\delta)$. \subsection{Proof of Thm. \ref{THM Linear arms}} \THMLineararms* \begin{proof} Analogously to the MAB case, we use Lemma \ref{Potential Reduction Linear} to deduce: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq 2\Tilde{\beta}_{\delta}(T) \sqrt{T\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\textit{UCB}})]} + \delta T\Delta_{max}, \end{align*} where we have: \begin{align*} \Tilde{\beta}_{\delta}(T) = \sqrt{\sigma^{2}d \log (1+\frac{T}{d\lambda})+2\sigma^{2}\log(\frac{1}{\delta})}+\sqrt{\lambda} \|\theta^{\star}\|_{2}. \end{align*} We then pick $\delta = \frac{d}{T^{2}}$, which yields: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq 2\Big(\sqrt{\sigma^{2}d \log (1+\frac{T}{d\lambda})+2\sigma^{2}\log(\frac{T^{2}}{d})}+\sqrt{\lambda} \|\theta^{\star}\|_{2}\Big) \sqrt{T\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\textit{UCB}})]} + \frac{d \Delta_{max}}{T}. \end{align*} We then apply Prop. \ref{Potential Control Linear} with $\alpha=1$ and $\delta = \frac{d}{T^{2}}$ to deduce: \begin{align*} \mathbb{E}[\mathbb{V}_{1}(T,\pi)] &\leq \frac{d}{\lambda T^{2}} + 8\frac{\lambda+1}{\lambda}\max(\frac{2\log(T)}{\lambda},1) \operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-1}a(t)a(t)^{\top}\partial t\Big) \\ &\leq \frac{d}{\lambda T^{2}} + 8\frac{\lambda+1}{\lambda}\max(\frac{2\log(T)}{\lambda},1) \max_{\pi \in \Pi}\operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-1}a(t)a(t)^{\top}\partial t\Big). \end{align*} By applying Thm. \ref{THM Linear Optim MTM}, we deduce the two possibilities: \begin{itemize} \item \textbf{Case 1: Single region $i_{l}$.} The effective dimension corresponding to these dynamics is $d/p_{i_{l}}$, with the following equality for $T\geq t_{l-1}$: \begin{align*} \max_{\pi \in \Pi}\operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-1}a(t)a(t)^{\top}\partial t\Big) = \frac{1}{p_{i_{l}}}\log\det(\mathbb{W}(T))+ \sum_{n=1}^{l-1} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}), \end{align*} where, for $T\geq t_{l-1}$, $\mathbb{W}(T) = p_{i_{l}}(T-t_{l-1})\mathbb{W}_{i_{l}} + \mathbb{W}(t_{l-1})$. Explicit formulas for $(t_{n},\mathbb{W}(t_{n}))$ are given for all $n\leq l$ in Cor. \ref{Path Formula}.
We then note that: \begin{align*} \frac{1}{p_{i_{l}}}\log\det(\mathbb{W}(T)) &= \frac{1}{p_{i_{l}}}\log\det(p_{i_{l}}(T-t_{l-1})\mathbb{W}_{i_{l}} + \mathbb{W}(t_{l-1})) \\ &= d_{\mathit{eff}}\log(T) + \frac{1}{p_{i_{l}}}\log\det(p_{i_{l}}(1-\frac{t_{l-1}}{T})\mathbb{W}_{i_{l}} + \frac{1}{T} \mathbb{W}(t_{l-1})). \end{align*} For $T\geq t_{l-1}$, we then write this in the form: \begin{align*} \max_{\pi \in \Pi}\operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-1}a(t)a(t)^{\top}\partial t\Big) = d_{\mathit{eff}}\log(T) + f(T), \end{align*} where $f(T)=o(\log(T))$. \item \textbf{Case 2: Bi-region $(i_{l+1},i_{l})$.} Similarly, for $T\geq t_{l}$, we have: \begin{align*} \max_{\pi \in \Pi}\operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-1}a(t)a(t)^{\top}\partial t\Big) &= d_{\mathit{eff}}\log(1+\frac{T-t_{l}}{t_{l}+\lambda^{\star}}) + \sum_{n=1}^{l} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}) \\ &= d_{\mathit{eff}}\log(T)+ d_{\mathit{eff}}\log(\frac{1}{T}+\frac{1-\frac{t_{l}}{T}}{t_{l}+\lambda^{\star}}) \\ &\quad \quad + \sum_{n=1}^{l} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}) \\ &= d_{\mathit{eff}}\log(T)+ f(T), \end{align*} where $f(T)=o(\log(T))$. \end{itemize} Therefore, for given $d_{\mathit{eff}}$, $f$ and $t_{0}$, we know that the following holds for all $T\geq t_{0}$: \begin{align*} \mathbb{E}[\mathbb{V}_{1}(T,\pi)] &\leq \frac{d}{\lambda T^{2}} + 8\frac{\lambda+1}{\lambda}\max(\frac{2\log(T)}{\lambda},1) \operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-1}a(t)a(t)^{\top}\partial t\Big) \\ &\leq \frac{d}{\lambda T^{2}} + 8\frac{\lambda+1}{\lambda}\max(\frac{2\log(T)}{\lambda},1) (d_{\mathit{eff}}\log(T)+ f(T)). \end{align*} Putting the pieces together yields for $T\geq t_{0}$: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] &\leq 2\Big(\sqrt{\sigma^{2}d \log (1+\frac{T}{d\lambda})+2\sigma^{2}\log(\frac{T^{2}}{d})}+\sqrt{\lambda} \|\theta^{\star}\|_{2}\Big) \sqrt{T}\Big(\frac{d}{\lambda T^{2}} \\&+ 8\frac{\lambda+1}{\lambda}\max(\frac{2\log(T)}{\lambda},1) (d_{\mathit{eff}}\log(T)+ f(T))\Big)^{1/2} + \frac{d \Delta_{max}}{T}. \end{align*} By imposing a regularization of order $\lambda = o(\log(T))$ and considering only the leading order, this yields: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq \Tilde{\mathcal{O}}(\sqrt{(d+4)\sigma^{2}}\sqrt{d_{\mathit{eff}}}\sqrt{T}). \end{align*} Finally, working in the large-$d$ regime, we conclude that: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq \Tilde{\mathcal{O}}(\sigma\sqrt{d\cdot d_{\mathit{eff}}}\sqrt{T}). \end{align*} Again, we note that our proof easily allows us to obtain high-probability bounds on the regret instead of bounds on its expected value. \end{proof} \subsection{Extension to Generalized Linear Contextual Bandits} \label{gen linear} In what follows, we provide a sketch of the extension of our results to Generalized Linear Contextual Bandits (GLCB) but defer the complete treatment to future work. In this model, the reward of a given action $a$ is assumed to be of the form: \begin{align*} r(a) = \mu(\langle a,\theta^{\star}\rangle), \end{align*} for a given real-valued function $\mu$ that is strictly increasing and continuously differentiable. Notable instances of such a problem include the Logistic bandit and the Poisson bandit.
Of particular importance in the dimensionality study of the problem are the constants: \begin{align*} L_{\mu}=\sup _{a \in \cup\mathcal{A}_{t}} \mu^{(1)}(\langle a, \theta^{\star}\rangle) \quad \text{and} \quad \kappa=\inf _{a \in \cup\mathcal{A}_{t}} \mu^{(1)}(\langle a, \theta^{\star}\rangle). \end{align*} An important requirement of GLCB is the assumption $\kappa>0$, needed to ensure identifiability of $\theta^{\star}$ and asymptotic normality. Given this, the suitable definition of the pseudo-regret considered is: \begin{align*} R(T,\pi) \triangleq \sum_{t=1}^{T} \max_{a\in\mathcal{A}_{t}}\mu(\langle a, \theta^{\star}\rangle) - \mu(\langle a_{t}, \theta^{\star}\rangle). \end{align*} Note that this regret can be easily mapped to the one studied above thanks to the fact that $L_{\mu}$ is a Lipschitz constant for $\mu$: for all $a,\Tilde{a} \in \cup\mathcal{A}_{t}$, $|\mu(\langle a,\theta^{\star}\rangle)-\mu(\langle \Tilde{a},\theta^{\star}\rangle)|\leq L_{\mu}|\langle a,\theta^{\star}\rangle-\langle \Tilde{a},\theta^{\star}\rangle|$. Mirroring the proof of \cite{li2017provably}, we use a Maximum Likelihood Estimator (MLE) instead of a least-squares estimator for $\theta^{\star}$. More precisely, we define $\hat{\theta}^{\mathit{MLE}}_{t}$ as the solution of the equation: \begin{align*} \sum_{n=1}^{t}\big(\epsilon_{n}+\mu(\langle a_{n},\theta^{\star}\rangle)-\mu(\langle a_{n},\theta\rangle)\big)a_{n} = 0. \end{align*} A minor difference between the approach of \cite{li2017provably} and what precedes is the use of a period of initial random sampling (i.e., \textit{exploration}) instead of regularization to ensure invertibility of the design matrix $\mathbb{W}^{C}_{t}$. More precisely, the initial sampling ensures that with high probability, $\lambda_{\min}(\mathbb{W}^{C}_{t})>0$ in a finite time $T_{\text{init}}$. For this to be possible, we require the assumption that there exists $\sigma_{0}^{2}>0$ such that for all $t\geq 1$, we have $\lambda_{min}\left(\mathbf{E}_{a \in \mathcal{A}_{t}}\left[a a^{\top}\right]\right) \geq \sigma_{0}^{2}$, where the expectation $\mathbf{E}$ is associated with a uniform sampling of actions. Under the same assumption, the impact of censorship on this initialization step is at worst an increase of the sampling time to $\Tilde{T}_{\text{init}}\triangleq T_{\text{init}}/p_{\min}$, which is still constant. Following Lemma $9$ of \cite{li2017provably}, we then consider the censored high-probability confidence set for any $\delta \in [\frac{1}{T},1]$: \begin{align*} \mathcal{H}_{\text{UCB}}^{III}(\delta) \triangleq \Big\{\exists t \geq 0, \|\hat{\theta}^{\mathit{MLE}}_{t}-\theta^{\star}\|_{\mathbb{W}^{C}_{t}} > \frac{\sigma}{\kappa} \sqrt{\frac{d}{2} \log (1+2 \frac{t}{d})+\log (1 / \delta)} \quad \text{and} \quad \lambda_{\min}(\mathbb{W}^{C}_{t})>1 \Big\}, \end{align*} and a direct extension of their results allows us to conclude $\mathbb{P}(\mathcal{H}_{\text{UCB}}^{III}(\delta))\leq \delta$. Note that the constant $\kappa$ appears when upper bounding, in the Loewner order, the Fisher information matrix of the MLE by the matrix $\mathbb{W}^{C}_{t}$.
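To make the preceding discussion concrete, the following minimal sketch (not the exact algorithm analyzed here; the logistic link, the parameters and the initialization are illustrative assumptions) computes a censored MLE by Newton-Raphson on the score equation, keeping only uncensored observations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
d, n, p_obs = 4, 500, 0.6                         # hypothetical dimension, sample size, censorship level
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

mu = lambda z: 1.0 / (1.0 + np.exp(-z))           # logistic link
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)     # unit-norm actions
x = rng.random(n) < p_obs                         # censorship indicators x_{a_n}
r = rng.binomial(1, mu(A @ theta_star))           # Bernoulli rewards under the logistic model

theta = np.zeros(d)
for _ in range(25):                               # Newton-Raphson on the censored score equation
    z = A @ theta
    score = A.T @ (x * (r - mu(z)))
    hess = (A * (x * mu(z) * (1 - mu(z)))[:, None]).T @ A + 1e-6 * np.eye(d)
    theta += np.linalg.solve(hess, score)

print("||theta_MLE - theta_star||_2 =", np.linalg.norm(theta - theta_star))
\end{verbatim}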
Post-initialization, the conditional regret is then upper bounded by: \begin{align*} R(T,\pi_{\text{UCB}}|\neg \mathcal{H}_{\text{UCB}}^{III}(\delta)) &\leq \Tilde{T}_{\text{init}}\Delta_{max} + \sum^{T}_{t=T_{\text{init}}}L_{\mu} \frac{\sigma}{\kappa} \sqrt{\frac{d}{2} \log (1+2 \frac{t}{d})+\log (1 / \delta)}\|a_{t}\|_{(\mathbb{W}^{C}_{t})^{-1}} \\ &\leq \Tilde{T}_{\text{init}}\Delta_{max} + L_{\mu} \frac{\sigma}{\kappa} \sqrt{\frac{d}{2} \log (1+2 T / d)+\log (1 / \delta)}\sqrt{T\mathbb{V}_{1}(\pi_{\text{UCB}},T)}. \end{align*} Combining these elements and taking $\delta = \frac{1}{T}$, we conclude that: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})]&\leq \Tilde{\mathcal{O}}\Big(\frac{L_{\mu}}{\kappa}\sqrt{d}\sqrt{T\mathbb{E}[\mathbb{V}_{1}(\pi_{\text{UCB}},T)]}\Big) \leq \Tilde{\mathcal{O}}\Big(L_{\mu}\frac{\sqrt{d\cdot d_{\mathit{eff}}}}{\kappa}\sqrt{T}\Big), \end{align*} where we used Thm. \ref{THM Linear Optim MTM} to control $\mathbb{E}[\mathbb{V}_{1}(\pi_{\text{UCB}},T)]$ as done in the proof of Thm.~\ref{THM Linear arms}. \subsection{Supplementary Notations}\label{sup_not} Without loss of generality (i.e., up to an orthogonal transformation), we can assume that $u\equiv e_{d}$, the $d^{th}$ basis vector. Given this, for two regions $i<j$, we introduce the notations: \begin{align*} l(i,j) \triangleq \frac{\sin^{2}(\rho_{i})}{\sin^{2}(\rho_{j})} \quad &\textit{and} \quad u(i,j) \triangleq \frac{\cos^{2}(\rho_{i})}{\cos^{2}(\rho_{j})} \\ r^{\star}(i,j) \triangleq \frac{(d-1) u(i,j)+l(i,j)}{d} \quad &\textit{and} \quad r^{\dagger}(i,j) \triangleq \frac{1}{r^{\star}(j,i)} = \frac{dl(i,j)u(i,j)}{u(i,j)+(d-1) l(i,j)} \\ \mathbb{W}_{i} \triangleq \begin{pmatrix} \frac{\cos^{2}(\rho_{i})}{d-1}\mathbb{I}_{d-1} & (0) \\ (0) & \sin^{2}(\rho_{i}) \end{pmatrix}\quad &\textit{and} \quad\mathbb{W}(i,j) = \begin{pmatrix} \cos^{2}(\rho_{j})(u(i,j)-\frac{p_{i}}{p_{j}})\mathbb{I}_{d-1} & (0) \\ (0) & \sin^{2}(\rho_{j})(\frac{p_{i}}{p_{j}}-l(i,j)) \end{pmatrix}. \end{align*} Whenever $i$ and $j$ are clear from context, we use $u$ (resp. $l$) as an abbreviation for $u(i,j)$ (resp. $l(i,j)$). \subsection{Proof of Thm. \ref{THM Linear Optim MTM}} \THMLinearOptimMTM* \begin{algorithm} \SetAlgoLined \KwInit{Set current region $S\gets k$} \While(\tcc*[f]{Lemma \ref{One-step Transient Analysis},Fig.\ref{reach_stat}}){a region is reachable from region $S$}{ \textbf{play} region $S$ optimal policy \textbf{until} first reachable region $i^{\star}$ is reached\; \eIf(\tcc*[f]{Lemma \ref{Dual Reachability Analysis}, Fig.\ref{reach}}){region $i^{\star}$ is dual reachable from region $S$}{ Bi-region $(i^{\star},S)$ effective dimension (case 2)\tcc*{Lemma \ref{Bi-Region Effective Dimension}} \textbf{play} Bi-region $(i^{\star},S)$ optimal policy\; \textbf{End}\;} {Update current region $S\gets i^{\star}$\tcc*{Lemma \ref{Dual Reachability Analysis}, Fig.\ref{switch}} } } Single region $S$ effective dimension (case 1)\tcc*{Lemma \ref{One-step Transient Analysis}} \textbf{play} region $S$ optimal policy\; \caption{Algorithmic description of the dynamics of $\mathbb{W}(t)$} \label{Dyn_MT} \end{algorithm} \begin{proof} We first summarize the dynamics of the optimal policy of (\ref{optim_prob}) through an algorithmic description in Alg. \ref{Dyn_MT}.
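Before detailing the two regimes, the following small numerical sketch (hypothetical angles and censorship probabilities, using the notations of the previous subsection) tabulates the reachability test $p_{i}/p_{j}<r^{\star}(i,j)$ and the dual-reachability test $p_{i}/p_{j}>r^{\dagger}(i,j)$ that drive Alg.~\ref{Dyn_MT}:
\begin{verbatim}
import numpy as np

d = 5
rho = np.array([0.1, 0.4, 0.8, 1.2])       # hypothetical region angles rho_1 < ... < rho_k
p = np.array([0.9, 0.6, 0.3, 0.1])         # hypothetical censorship probabilities per region

def ratios(i, j):
    """Return (r_star(i,j), r_dagger(i,j)) for regions i < j, as in the previous subsection."""
    u = np.cos(rho[i]) ** 2 / np.cos(rho[j]) ** 2
    l = np.sin(rho[i]) ** 2 / np.sin(rho[j]) ** 2
    r_star = ((d - 1) * u + l) / d
    r_dag = d * l * u / (u + (d - 1) * l)
    return r_star, r_dag

for j in range(len(p)):
    for i in range(j):
        r_s, r_d = ratios(i, j)
        reachable = p[i] / p[j] < r_s
        dual = reachable and (p[i] / p[j] > r_d)
        print(f"i={i}, j={j}: p_i/p_j={p[i]/p[j]:.2f}, r*={r_s:.2f}, "
              f"r_dag={r_d:.2f}, reachable={reachable}, dual={dual}")
\end{verbatim}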
Two key notions of our analysis are the concepts of reachability and dual reachability of a region $i$ from a base region $j$, as described in Lemmas \ref{One-step Transient Analysis} and \ref{Dual Reachability Analysis} and schematized in Fig.\ref{reach_stat}, \ref{reach} and \ref{switch}. Formally, they can be written as two independent necessary constraints on the ratio $p_{i}/p_{j}$: $p_{i}/p_{j}<r^{\star}(i,j)$ for reachability and $p_{i}/p_{j}>r^{\dagger}(i,j)$ for dual reachability. The categorization result provided in the statement of Thm. \ref{THM Linear Optim MTM} follows from the two possible termination conditions of the algorithm. To ensure termination, we use as an algorithmic invariant the fact that the set of reachable regions is finite and strictly decreasing for inclusion. Hence, the while loop will terminate either because a dual reachable region is reached or because no more regions are reachable. In order not to overload the presentation, the time aspect is absent from the algorithmic description but is extensively covered in Lemmas \ref{One-step Transient Analysis}, \ref{Dual Reachability Analysis}, \ref{Bi-Region Effective Dimension} and Cor. \ref{Path Formula}, as well as in what follows. One of our main findings is that the dynamics of the optimal policy of (\ref{optim_prob}) are described through $\mathbb{W}(t)$ by two qualitatively different regimes. We emphasize that our continuous approach to analyzing the cumulative censored potential is key to obtaining these results. \paragraph{Transient Regime:} The \textbf{while} loop in the algorithmic description results in a so-called transient regime. More precisely, there exists a decreasing sequence of censorship regions $\{i_{1}=k,\dots,i_{l}\}$ of length $l \in [k+1]$ and an associated time sequence $\{t_{0}\triangleq 0,t_{1},\dots,t_{l}\}$ such that whenever $t_{j}\leq t \leq t_{j+1}$ for a given index $j\leq l-1$, the evolution of $\mathbb{W}(t)$ is given by: \begin{align*} \mathbb{W}(t) &= p_{i_{j+1}}(t-t_{j})\mathbb{W}_{i_{j+1}} + \mathbb{W}(t_{j}) = p_{i_{j+1}}(t-t_{j})\mathbb{W}_{i_{j+1}} + \sum_{n=1}^{j} p_{i_{n}}(t_{n}-t_{n-1})\mathbb{W}_{i_{n}} + \lambda \mathbb{I}_{d}. \end{align*} This result follows from a simple induction with repeated use of Lemma \ref{One-step Transient Analysis}, which gives the exact sequence of censorship regions. Moreover, a closed-form formula for the time sequence is provided in Cor. \ref{Path Formula}. We interpret this transient step as an adversarial self-correction of the initial misspecification of censorship at an extra cost. This characterization of the transient regime highlights an important consequence of using classical algorithms in censored environments. \paragraph{Steady State Regime:} After the transient regime, the dynamics of $\mathbb{W}(t)$ enter a steady-state regime, where one of the following two cases necessarily arises: \begin{itemize} \item \textbf{Case 1: Single region $i_{l}$.} This case arises when the \textbf{while} loop ends because no other regions are reachable. Equivalently, the last element of the time sequence $t_{l}$ is equal to $+\infty$, and for all $t\geq t_{l-1}$ we have the single-region evolution, thanks to Lemma \ref{One-step Transient Analysis}: \begin{align*} \mathbb{W}(t) &= p_{i_{l}}(t-t_{l-1})\mathbb{W}_{i_{l}} + \mathbb{W}(t_{l-1}) = p_{i_{l}}(t-t_{l-1})\mathbb{W}_{i_{l}} + \sum_{n=1}^{l-1} p_{i_{n}}(t_{n}-t_{n-1})\mathbb{W}_{i_{n}} + \lambda \mathbb{I}_{d}.
\end{align*} The effective dimension corresponding to this dynamic is $d/p_{i_{l}}$, with the following equality for $T\geq t_{l-1}$: \begin{align*} \int_{0}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = \frac{1}{p_{i_{l}}}\log\det(\mathbb{W}(T))+ \sum_{n=1}^{l-1} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}), \end{align*} where the closed-form formula for $\mathbb{W}(t_{n})$ is provided in Cor. \ref{Path Formula} for all $n\leq l-1$. \item \textbf{Case 2: Bi-region $(i_{l+1},i_{l})$.} This case arises when the \textbf{while} loop ends because a dual reachable region $i_{l+1}$ is reached from region $i_{l}$, with $i_{l+1}<i_{l}$. For all $t\geq t_{l}$, Lemma \ref{Dual Reachability Analysis} yields the evolution: \begin{align*} \mathbb{W}(t) &\propto p_{i_{l+1}}(t+\lambda^{\star})\begin{pmatrix} \cos^{2}(\phi_{i_{l}})(u(i_{l+1},i_{l})-\frac{p_{i_{l+1}}}{p_{i_{l}}})\mathbb{I}_{d-1} & (0) \\ (0) &\sin^{2}(\phi_{i_{l}})(\frac{p_{i_{l+1}}}{p_{i_{l}}}-l(i_{l+1},i_{l})) \end{pmatrix}, \end{align*} where $\lambda^{\star}$ and the proportionality factor are specified in the proof. The corresponding effective dimension is given by (\ref{bi_reg}) and the following equality holds for all $T\geq t_{l}$ thanks to Lemma \ref{Bi-Region Effective Dimension}: \begin{align*} \int_{0}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = d_{\mathit{eff}}\log(1+\frac{T-t_{l}}{t_{l}+\lambda^{\star}}) + \sum_{n=1}^{l} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}), \end{align*} where the closed-form formula for $\mathbb{W}(t_{n})$ is provided in Cor. \ref{Path Formula} for all $n\leq l$. \end{itemize} \end{proof} \begin{remark} Fig.\ref{reach_stat} and \ref{switch} provide further insights on formula (\ref{bi_reg}) for $d_{\mathit{eff}}$. Throughout the proof and as illustrated on Fig.\ref{reach_stat}, we see that for (\ref{bi_reg}) to arise, $\frac{p_{i}}{p_{j}}$ must belong to a certain interval $J\triangleq ]\max(1,r^{\dagger}(i,j)),r^{\star}(i,j)[$. As $r^{\star}(i,j)< u(i,j)$ and $r^{\dagger}(i,j)> l(i,j)$, we see (\ref{bi_reg}) as a weighted average of the relative distance of $\frac{p_{i}}{p_{j}}$ to $u(i,j)$ and $l(i,j)$. Fig.\ref{deff} provides a sketch of the variations of $d_{\mathit{eff}}$ as $\frac{p_{i}}{p_{j}}$ evolves in this interval. \end{remark} \begin{figure}[h] \centering \includegraphics[width=0.8 \textwidth]{img/deff.png} \caption{Sketch plot of normalized effective dimension $p_{j}d_{\mathit{eff}}$ with respect to $\frac{p_{i}}{p_{j}}$. We recover the uniform and local hardness conditions mentioned in the discussion of Thm. \ref{THM Linear Optim MTM}, as well as the existence of a \textit{minimum effective dimension} for a certain value of $\frac{p_{i}}{p_{j}}$. The necessary conditions of reachability and dual reachability (Lemma \ref{Dual Reachability Analysis} and \ref{One-step Transient Analysis}) verified by $\frac{p_{i}}{p_{j}}$ impose that it belongs to the orange segment.} \label{deff} \end{figure} \subsection{Statement and Proof of Lemma \ref{One-step Transient Analysis}} \begin{lemma}[Reachability Analysis]\label{One-step Transient Analysis} Let's assume we start at a given time $t_{1}$ in transient censored region $j$, with a matrix \begin{align*} \mathbb{W}(t_{1}) = \begin{pmatrix} \lambda_{a}\mathbb{I}_{d-1} & (0) \\ (0) & \lambda_{b} \end{pmatrix}, \end{align*} where $\lambda_{a}\geq \lambda_{b}$.
We introduce $I_{j}\triangleq \{i; i <j \quad \textit{and} \quad \frac{p_{i}}{p_{j}} < r^{\star}(i,j)\}$, the set of reachable regions from region $j$, and affirm that we have the two possible cases: \begin{itemize} \item If $I_{j} = \varnothing$, i.e. no region is reachable from region $j$, we switch to a steady state regime with single region $j$ effective dimension $d_{\mathit{eff}}=d/p_{j}$. \item Otherwise, the next region added to the transient sequence is $i^{\star}\triangleq \operatorname{argmin}_{i\in I_{j}}\mu^{\star}(i,j,\lambda_{a},\lambda_{b})$, at time $t_{2}\triangleq t_{1}+\frac{1}{p_{j}}\mu^{\star}(i^{\star},j,\lambda_{a},\lambda_{b})$ and we have: \begin{align*} \mathbb{W}(t_{2}) = \frac{(d-1)\sin^{2}(\phi_{j})\lambda_{a}-\cos^{2}(\phi_{j})\lambda_{b}}{d\cos^{2}(\phi_{j})\sin^{2}(\phi_{j})(r^{\star}(i^{\star},j)-\frac{p_{i^{\star}}}{p_{j}})}\mathbb{W}(i^{\star},j). \end{align*} \end{itemize} \end{lemma} \begin{figure}[h] \centering \includegraphics[width=0.9 \textwidth]{img/reach.png} \caption{Illustration of the set of reachable regions from a base region $k$, as a function of $\frac{p_{i}}{p_{k}}$. Black dots and lines correspond to censorship regions defined by \ref{MT_model}. In this figure, we see that a region is reachable if and only if the black dot is below the red reachability line. As time increases, the green line rotates with region $k$ as pivot and asymptotically approaches the red line. Hence, the first reachable region is the one first \textit{reached} by the green line.} \label{reach_stat} \end{figure} \begin{proof} First, we note that the initial starting point is recovered for $t_{1}=0$, base censored state $k$ and $\lambda_{a}=\lambda_{b}=\lambda$, but this lemma allows us to go beyond the first step in the study of the behavior of the system. We know that the temporal evolution for the normalized budget $\mu \triangleq p_{j}(t-t_{1})$ is of the form: \begin{align*} \mathbb{W}(t) = \begin{pmatrix} (\mu\frac{\cos^{2}(\phi_{j})}{d-1}+\lambda_{a})\mathbb{I}_{d-1} & (0) \\ (0) & \mu\sin^{2}(\phi_{j})+\lambda_{b} \end{pmatrix} = \mu \mathbb{W}_{j} + \mathbb{W}(t_{1}). \end{align*} We recall that the set of actions associated with region $j$ is $\{a\in \mathbb{B}_{d}, \sin(\phi_{j}) \leq \langle a,e_{d}\rangle <\sin(\phi_{j+1})\}$. Therefore, the use of the Kiefer-Wolfowitz theorem \cite{lattimore2020bandit} combined with the fact that $\lambda_{a}\geq \lambda_{b}$ yields that the optimal policy while evolving in region $j$ only plays the unit action vector $v_{j}\equiv (\cos(\phi_{j})/(d-1)^{1/2},\dots, \cos(\phi_{j})/(d-1)^{1/2},\sin(\phi_{j}))$. By noting that $v_{j}v_{j}^{\top}=\mathbb{W}_{j}$, we obtain the formula announced. Reachability of a given state $i<j$ from state $j$ after time $t_{1}$ is then defined as: \begin{align*} \exists t \geq t_{1}, \quad \frac{1}{p_{i}}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{i}) &= \frac{1}{p_{j}}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{j}). \end{align*} We interpret this as a classical first-order optimality condition for convex maximization problems, where the matrix $\mathbb{W}_{j}$ is weighted by the censorship probability representing the speed of increase in region $j$. We then rewrite this condition as: \begin{align*} \exists \mu \geq 0, \quad \frac{1+f(\mu)\cos^{2}(\phi_{i})}{1+f(\mu)\cos^{2}(\phi_{j})} =\frac{p_{i}}{p_{j}} \quad \textit{where} \quad f(\mu) \triangleq \frac{\mu\sin^{2}(\phi_{j})+\lambda_{b}}{\mu\frac{\cos^{2}(\phi_{j})}{d-1}+\lambda_{a}}-1.
\end{align*} We know that $f$ is increasing in $\mu$ and the LHS of the equation above is decreasing in $f(\mu)$ as $i< j$. Hence, the reachability condition can then be stated by looking at the limit of $f$ at $+\infty$. By using the fact that $\lim_{\mu \rightarrow +\infty} f(\mu) = \frac{d\sin^{2}(\phi_{j})-1}{\cos^{2}(\phi_{j})}$, we deduce that the reachability condition is equivalent to looking at the position of $\frac{p_{i}}{p_{j}}$ with respect to: \begin{align*} r^{\star}(i,j) \triangleq \frac{1+ud[\sin^{2}(\phi_{j})-\frac{1}{d}]}{d\sin^{2}(\phi_{j})} = \frac{(d-1)u+l}{d} = \frac{1}{d}\operatorname{Tr}(\mathbb{W}_{j}^{-1}\mathbb{W}_{i}). \end{align*} On the one hand, if $\frac{p_{i}}{p_{j}} \geq r^{\star}(i,j)$, the state is never reachable in finite time. On the other hand, whenever $\frac{p_{i}}{p_{j}}< r^{\star}(i,j)$, the state is reachable by investing a budget $\mu^{\star}(i,j,\lambda_{a},\lambda_{b})$ such that: \begin{align*} f(\mu^{\star}(i,j,\lambda_{a},\lambda_{b})) = \frac{1}{\cos^{2}(\phi_{j})}\frac{\frac{p_{i}}{p_{j}}-1}{u-\frac{p_{i}}{p_{j}}}, \end{align*} which in turn yields: \begin{align*} \mu^{\star}(i,j,\lambda_{a},\lambda_{b}) &= \frac{d-1}{d\sin^{2}(\phi_{j})\cos^{2}(\phi_{j})}\frac{(\sin^{2}(\phi_{j})\lambda_{a}+\cos^{2}(\phi_{j})\lambda_{b})\frac{p_{i}}{p_{j}} - (\sin^{2}(\phi_{i})\lambda_{a}+\cos^{2}(\phi_{i})\lambda_{b})}{r^{\star}(i,j)-\frac{p_{i}}{p_{j}}}. \end{align*} In particular, at $t_{1}=0$ whenever $\lambda_{b}=\lambda_{a}=\lambda$ and $j=k$, this gives: \begin{align*} \mu^{\star}(i,k,\lambda,\lambda) = \frac{(d-1)\lambda}{d\sin^{2}(\phi_{k})\cos^{2}(\phi_{k})} \frac{\frac{p_{i}}{p_{k}}-1}{r^{\star}(i,k)-\frac{p_{i}}{p_{k}}}. \end{align*} The first reachable region from region $j$ is then defined as $i^{\star}\triangleq \operatorname{argmin}_{i\in I}\mu^{\star}(i,j,\lambda_{a},\lambda_{b})$, where $I\triangleq \{i; i <j \quad \textit{and} \quad \frac{p_{i}}{p_{j}} < r^{\star}(i,j)\}$. Note that at the moment $t_{2}\triangleq t_{1}+\frac{1}{p_{j}}\mu^{\star}(i^{\star},j,\lambda_{a},\lambda_{b})$ when this region is reached, we have: \begin{align*} \mathbb{W}(t_{2}) = \frac{(d-1)\sin^{2}(\phi_{j})\lambda_{a}-\cos^{2}(\phi_{j})\lambda_{b}}{d\cos^{2}(\phi_{j})\sin^{2}(\phi_{j})(r^{\star}(i^{\star},j)-\frac{p_{i^{\star}}}{p_{j}})}\mathbb{W}(i^{\star},j). \end{align*} On the other hand, whenever the set $I$ is empty, by definition, the process reaches the case $1$ steady-state regime and only plays the optimal policy of region $j$ for the remaining budget. To be fully general, we note that two or more regions can be reached simultaneously. In this case, the optimal policy tie-breaks by taking the region with maximal index, i.e. higher censorship, as further described in Lemma \ref{Dual Reachability Analysis}. \end{proof} \subsection{Statement and Proof of Cor.
\ref{Path Formula}} More generally, this allows us to deduce the next technical corollary: \begin{corollary}\label{Path Formula} For a sequence of censored regions $\{i_{1}=k,\dots,i_{l},i_{l+1},\dots\}$, we have for the $l^{th}$ region of the transient sequence, with starting time $t_{l-1}$ and ending time $t_{l}$: \begin{align*} \mathbb{W}(t_{l}) &= \lambda \mathbb{I}_{d}+\sum_{n=1}^{l}\mu^{\star}(i_{n+1},i_{n},\lambda^{\mathbb{W}(t_{n-1})}_{a},\lambda_{b}^{\mathbb{W}(t_{n-1})})\mathbb{W}_{i_{n}}\\ &=\frac{\lambda\frac{(d-1)\sin^{2}(\phi_{k})-\cos^{2}(\phi_{k})}{\cos^{2}(\phi_{i_{l}})\sin^{2}(\phi_{i_{l}})}\displaystyle\prod_{n=1}^{l-1} \Big(r^{\dagger}(i_{n+1},i_{n})-\frac{p_{i_{n+1}}}{p_{i_{n}}}\Big)}{d^{l}\displaystyle\prod_{n=1}^{l}\Big( r^{\star}(i_{n+1},i_{n})-\frac{p_{i_{n+1}}}{p_{i_{n}}}\Big)\displaystyle\prod_{n=1}^{l-1} \Big(u(i_{n+1},i_{n})+dl(i_{n+1},i_{n})\Big)}\mathbb{W}(i_{l+1},i_{l}), \end{align*} where $t_{l}$ is characterized by: \begin{align*} t_{l} = \sum_{n=1}^{l}\frac{1}{p_{i_{n}}}\mu^{\star}(i_{n+1},i_{n},\lambda^{\mathbb{W}(t_{n-1})}_{a},\lambda_{b}^{\mathbb{W}(t_{n-1})}), \end{align*} and where $\lambda^{\mathbb{W}(t_{n})}_{a}$ and $\lambda_{b}^{\mathbb{W}(t_{n})}$ refer respectively to the upper and lower coefficient of the diagonal matrix $\mathbb{W}(t_{n})$. \end{corollary} \begin{proof} We leverage a simple induction argument, using for $l \geq 1$ the formula given within the proof of Lemma \ref{One-step Transient Analysis}: \begin{align*} t_{l} &= t_{l-1} + \frac{1}{p_{i_{l}}}\mu^{\star}(i_{l+1},i_{l},\lambda^{\mathbb{W}(t_{l-1})}_{a},\lambda_{b}^{\mathbb{W}(t_{l-1})}) \\ \mathbb{W}(t_{l}) &= \frac{(d-1)\sin^{2}(\phi_{i_{l}})\lambda^{\mathbb{W}(t_{l-1})}_{a}-\cos^{2}(\phi_{i_{l}})\lambda^{\mathbb{W}(t_{l-1})}_{b}}{d\cos^{2}(\phi_{i_{l}})\sin^{2}(\phi_{i_{l}})(r^{\star}(i_{l+1},i_{l})-\frac{p_{i_{l+1}}}{p_{i_{l}}})}\mathbb{W}(i_{l+1},i_{l}), \end{align*} and the initialization conditions $t_{0}=0$ and $\mathbb{W}(0)=\lambda \mathbb{I}_{d}$. \end{proof} \subsection{Statement and Proof of Lemma \ref{Dual Reachability Analysis}} \begin{lemma}[Dual Reachability Analysis]\label{Dual Reachability Analysis} Let's assume we are currently playing transient region $j$ and we reach the region $i$ at time $t_{l}$. We then have the following two possible cases: \begin{itemize} \item If $\frac{p_{i}}{p_{j}} > r^{\dagger}(i,j)$, we say that region $i$ is dual reachable from region $j$, leading to a steady state regime with bi-region $(i,j)$ effective dimension. In such a case, for $t\geq t_{l}$, the potential increase is of the form: \begin{align*} \mathbb{W}(t) = \frac{1}{p_{i}/p_{j} + \frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)}}\frac{u-l}{u+(d-1)l}\frac{1}{p_{i}/p_{j}-r^{\dagger}(i,j)}p_{i}(t+\lambda^{\star})\mathbb{W}(i,j). \end{align*} \item Otherwise, we switch from base region $j$ to base region $i$ and continue in the transient regime. \end{itemize} \end{lemma} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{img/reach_stat.png} \caption{Sketch plot of reachability and dual reachability conditions from base region $k$ associated with the black dot (Lemma \ref{Dual Reachability Analysis} and \ref{One-step Transient Analysis}) as a function of $\frac{p_{i}}{p_{j}}$. For a region $i$ to be reachable, $\frac{p_{i}}{p_{j}}$ has to be below the red line. For a region $i$ to be dual reachable, $\frac{p_{i}}{p_{j}}$ has to be above the blue line.
Hence, the red dot here is a censorship region that is both reachable and dual reachable, whereas the purple dot is a reachable but not dual reachable region. Orange lines represent the functions $u(i,k)$ and $l(i,k)$ introduced above in Sec.\ref{sup_not}.} \label{reach} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9 \textwidth]{img/switch.png} \caption{Sketch plot of the evolution of the reachability and dual reachability conditions after a region $j$ is reached from region $k$ but is not dual reachable (Else condition in Alg. \ref{Dyn_MT}). The dotted red (resp. blue) line is the reachability (resp. dual reachability) condition for the previous region $k$ and the solid red (resp. blue) line is the reachability (resp. dual reachability) condition for the new region $j$. Instead of starting from the horizontal line at $t=0$ to find the next reachable state, the rotation with region $j$ as pivot is initialized at the green line associated with $t=\frac{\mu^{\star}_{j}}{p_{k}}$. Note that the $y$-axis is not normalized here.} \label{switch} \end{figure} \begin{proof} Using the previous section, we know that $\mathbb{W}(t_{l}) \propto \mathbb{W}(i,j)$ where we recall that the matrix $\mathbb{W}(i,j)$ has the strong property that the gains in regions $i$ and $j$ are equal, i.e.: \begin{align*} \frac{1}{p_{i}}\operatorname{Tr}(\mathbb{W}(i,j)^{-1}\mathbb{W}_{i}) &= \frac{1}{p_{j}}\operatorname{Tr}(\mathbb{W}(i,j)^{-1}\mathbb{W}_{j}). \end{align*} One of the main results we show in the multi-threshold censorship model is that for $t\geq t_{l}$, we have: \begin{align*} \mathbb{W}(t) - \mathbb{W}(t_{l}) \propto (t-t_{l}) \mathbb{W}(i,j), \end{align*} which implies in particular that for $t\geq t_{l}, \mathbb{W}(t)\propto \mathbb{W}(i,j)$. This is possible thanks to the fact that the optimal policy produces a combination of $p_{i}\mathbb{W}_{i}$ and $p_{j}\mathbb{W}_{j}$ proportional to $\mathbb{W}(i,j)$ so that optimality of both regions $i$ and $j$ is maintained while maximal first-order gain is simultaneously ensured. The proportionality condition is then written as the existence of $\mu_{i},\mu_{j}>0$ such that $p_{i}\mu_{i}\mathbb{W}_{i}+p_{j}\mu_{j}\mathbb{W}_{j}\propto \mathbb{W}(i,j)$ or equivalently as: \begin{align*} \exists \mu_{i},\mu_{j}>0, \quad \frac{\frac{1}{d-1}[p_{i}\mu_{i}\cos^{2}(\phi_{i})+p_{j}\mu_{j}\cos^{2}(\phi_{j})]}{p_{i}\mu_{i}\sin^{2}(\phi_{i})+p_{j}\mu_{j}\sin^{2}(\phi_{j})} = \frac{\cos^{2}(\phi_{j})(u(i,j)-\frac{p_{i}}{p_{j}})}{\sin^{2}(\phi_{j})(\frac{p_{i}}{p_{j}}-l(i,j))} \triangleq R, \end{align*} where $\mu_{i}$ and $\mu_{j}$ are the infinitesimal time increases in regions $i$ and $j$. It leads in turn to the ratio equality: \begin{align*} \frac{p_{i}\mu_{i}}{p_{j}\mu_{j}} = \frac{\sin^{2}(\phi_{j})(d-1)R - \cos^{2}(\phi_{j})}{\cos^{2}(\phi_{i})-\sin^{2}(\phi_{i})(d-1)R} = \frac{(d-1)u+l - d\frac{p_{i}}{p_{j}}}{(u+(d-1)l)\frac{p_{i}}{p_{j}} - dlu} = \frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-\frac{p_{i}}{p_{j}}}{\frac{p_{i}}{p_{j}} - r^{\dagger}(i,j)}. \end{align*} Thus, we see that bi-region stationarity is possible if and only if $\frac{p_{i}}{p_{j}} > r^{\dagger}(i,j)$ where we introduced the dual reachability condition: \begin{align*} r^{\dagger}(i,j) \triangleq \frac{dl(i,j)u(i,j)}{u(i,j)+(d-1) l(i,j)} = \Big(\frac{\frac{d-1}{u(i,j)}+\frac{1}{l(i,j)}}{d}\Big)^{-1} = \Big(\frac{1}{d}\operatorname{Tr}(\mathbb{W}_{i}^{-1}\mathbb{W}_{j})\Big)^{-1} = \frac{1}{r^{\star}(j,i)}.
\end{align*} Hence, the use of the term dual reachability comes from the fact that region $i$ is dual reachable from region $j$ if and only if region $j$ is reachable from region $i$. In such a case, further algebraic calculations lead to the instantaneous potential increase $\partial W$ for the infinitesimal time $\partial t\triangleq \mu_{i}+\mu_{j}$: \begin{align*} \partial W(\partial t) &\triangleq p_{j}\mu_{j}\mathbb{W}_{j} + p_{i}\mu_{i}\mathbb{W}_{i} = \frac{u-l}{u+(d-1)l}\frac{1}{\frac{p_{i}}{p_{j}}-r^{\dagger}(i,j)}p_{j}\mu_{j}\mathbb{W}(i,j). \end{align*} We then note that: \begin{align*} \frac{\mu_{i}+\mu_{j}}{\mu_{j}} = 1 + \frac{1}{\frac{p_{i}}{p_{j}}}\frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-\frac{p_{i}}{p_{j}}}{\frac{p_{i}}{p_{j}} - r^{\dagger}(i,j)}. \end{align*} Therefore, we conclude that: \begin{align*} \partial W(\partial t) &= \frac{1}{p_{i}/p_{j} + \frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)}}\frac{u-l}{u+(d-1)l}\frac{1}{p_{i}/p_{j}-r^{\dagger}(i,j)}p_{i}(\mu_{j}+\mu_{i})\mathbb{W}(i,j) \\ &= \frac{1}{p_{i}/p_{j} + \frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)}}\frac{u-l}{u+(d-1)l}\frac{1}{p_{i}/p_{j}-r^{\dagger}(i,j)}p_{i}\partial t\mathbb{W}(i,j). \end{align*} We then introduce $\lambda^{\star}$ defined such that: \begin{align*} (t_{l}+\lambda^{\star})\mathbb{W}(i,j) \triangleq \frac{1}{p_{i}}\frac{(u+(d-1)l)(p_{i}/p_{j}-r^{\dagger}(i,j))}{u-l}\Big(p_{i}/p_{j} + \frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)}\Big)\mathbb{W}(t_{l}). \end{align*} Given the previous two results, we conclude that for all $t\geq t_{l}$: \begin{align*} \mathbb{W}(t) = \frac{1}{p_{i}/p_{j} + \frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)}}\frac{u-l}{u+(d-1)l}\frac{1}{p_{i}/p_{j}-r^{\dagger}(i,j)}p_{i}(t+\lambda^{\star})\mathbb{W}(i,j). \end{align*} Note that entering the bi-region stationary regime prevents new regions from becoming reachable. Indeed, going back to the initial definition of reachability, region $n$ is said to be reachable from region $j$ after time $t_{l}$ if and only if: \begin{align*} \exists t\geq t_{l}, \quad \frac{1}{p_{n}}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{n}) &= \frac{1}{p_{j}}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{j}). \end{align*} Yet, using the previous result on the evolution of $\mathbb{W}(t)$, we know that the ratio of those two quantities remains constant for any $t\geq t_{l}$, i.e. no new region can be reached. Moreover, using the optimality criterion of Lemma \ref{One-step Transient Analysis}, when several regions are reached simultaneously, the tie-breaking is performed by considering the most censored region, i.e. the one with the highest $i$ index. If the chosen region is not dual reachable, then the next one is considered. In the case where none of them is dual reachable, the base region becomes the maximally censored region and we immediately reiterate the procedure described in Lemma \ref{Dual Reachability Analysis}. \end{proof} \subsection{Statement and Proof of Lemma \ref{Bi-Region Effective Dimension}} \begin{lemma}[Bi-Region Effective Dimension]\label{Bi-Region Effective Dimension} Let's assume we reach a bi-region $(i,j)$ steady state regime at time $t_{l}\leq T$.
Then, we have: \begin{align*} \int_{t_{l}}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = d_{\mathit{eff}}\log(1+\frac{T-t_{l}}{t_{l}+\lambda^{\star}}) \sim d_{\mathit{eff}}\log(T), \end{align*} where $d_{\mathit{eff}}=\frac{1}{p_{j}}\left[(d-1) \frac{1-l(i,j)}{p_{i}/p_{j}-l(i,j)}+\frac{u(i,j)-1}{u(i,j)-p_{i}/p_{j}}\right]$ and $\lambda^{\star}$ is given in the proof of Lemma \ref{Dual Reachability Analysis}. Moreover, we have the cumulative transient potential: \begin{align*} \int_{0}^{t_{l}}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t &= \sum_{n=1}^{l} \frac{1}{p_{i_{n}}}\int_{t_{n-1}}^{t_{n}}\partial\log\det(\mathbb{W}(t)) = \sum_{n=1}^{l} \frac{1}{p_{i_{n}}} \log\frac{\det(\mathbb{W}(t_{n}))}{\det(\mathbb{W}(t_{n-1}))} \\ &= \sum_{n=1}^{l} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}). \end{align*} \end{lemma} \begin{proof} For $t\geq t_{l}$, we have the infinitesimal two-step increase $\partial G$ during the infinitesimal time $\partial t\triangleq \mu_{i}+\mu_{j}$: \begin{align*} \partial G(\partial t) &\triangleq \mu_{i}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{i}) + \mu_{j}\operatorname{Tr}((\mathbb{W}(t)+\mu_{i}p_{i}\mathbb{W}_{i})^{-1}\mathbb{W}_{j})\\ &= \mu_{i}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{i}) + \mu_{j}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{j}) +o(\partial t)\\ &= \frac{p_{i}\mu_{i}+p_{j}\mu_{j}}{p_{j}}\operatorname{Tr}(\mathbb{W}(t)^{-1}\mathbb{W}_{j}) +o(\partial t), \end{align*} where we used the property of $\mathbb{W}(i,j)$. Invoking lemma $\ref{Dual Reachability Analysis}$, we know the evolution of $\mathbb{W}(t)$ for $t\geq t_{l}$: \begin{align*} \mathbb{W}(t)=\frac{1}{1 + \frac{1}{p_{i}/p_{j}}\frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)}}\frac{u-l}{u+(d-1)l}\frac{1}{p_{i}/p_{j}-r^{\dagger}(i,j)}p_{j}(t+\lambda^{\star})\mathbb{W}(i,j), \end{align*} as well as the relations between $\mu_{i}$ and $\mu_{j}$: \begin{align*} \begin{cases} \frac{p_{i}\mu_{i}+p_{j}\mu_{j}}{p_{j}} &= \mu_{j}(1+\frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)})\\ \frac{\mu_{i}+\mu_{j}}{\mu_{j}} &= 1 + \frac{1}{p_{i}/p_{j}}\frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{\frac{p_{i}}{p_{j}} - r^{\dagger}(i,j)}. \end{cases} \end{align*} We invoke the fact that $\operatorname{Tr}(\mathbb{W}(i,j)^{-1}\mathbb{W}_{j})=\frac{1}{u-p_{i}/p_{j}}+\frac{1}{p_{i}/p_{j}-l}$ to conclude that: \begin{align*} \partial G(\partial t) &=\frac{1}{p_{j}}\frac{[(d-1)l+u-d]\frac{p_{i}}{p_{j}}-[d l u-((d-1)u+l)]}{(u-\frac{p_{i}}{p_{j}})(\frac{p_{i}}{p_{j}}-l)}\frac{(1 + \frac{1}{p_{i}/p_{j}}\frac{d}{u+(d-1)l}\frac{r^{\star}(i,j)-p_{i}/p_{j}}{p_{i}/p_{j} - r^{\dagger}(i,j)})\mu_{j}}{t+\lambda^{\star}} \\ &= \frac{1}{p_{j}}\left[(d-1) \frac{1-l}{\frac{p_{i}}{p_{j}}-l}+\frac{u-1}{u-\frac{p_{i}}{p_{j}}}\right] \frac{\partial t}{t+\lambda^{\star}} \\ &= d_{\mathit{eff}}\frac{\partial t}{t+\lambda^{\star}}. \end{align*} Given that $\partial t$ is an infinitesimal time increase, we have in the steady state regime: \begin{align*} \int_{t_{l}}^{T} \partial G = d_{\mathit{eff}}\int_{t_{l}}^{T}\frac{\partial t}{t+\lambda^{\star}} = d_{\mathit{eff}}\log(\frac{T+\lambda^{\star}}{t_{l}+\lambda^{\star}}) = d_{\mathit{eff}}\log(1+\frac{T-t_{l}}{t_{l}+\lambda^{\star}}). 
\end{align*} We finally note that the cumulative potential coming from the transient period is equal to: \begin{align*} \int_{0}^{t_{l}} \partial G &= \sum_{n=1}^{l} \frac{1}{p_{i_{n}}}\int_{t_{n-1}}^{t_{n}}\partial\log\det(\mathbb{W}(t)) = \sum_{n=1}^{l} \frac{1}{p_{i_{n}}} \log\frac{\det(\mathbb{W}(t_{n}))}{\det(\mathbb{W}(t_{n-1}))} \\ &= \sum_{n=1}^{l} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}), \end{align*} where the closed-form expression of $\mathbb{W}(t_{n})$ is given in Corollary \ref{Path Formula}. \end{proof} \subsection{Special case: Single-threshold model} \begin{corollary}\label{single_thres} For the single threshold model with two regions $0$ and $1$ and associated censorship probabilities $p_{0}> p_{1}$, our main theorem yields: \begin{itemize} \item If $\frac{p_{0}}{p_{1}} < \frac{d-1}{d\cos^{2}(\phi_{1})}$, then we reach the bi-region steady state regime and have the effective dimension: \begin{align*} d_{\mathit{eff}} = \frac{d-1}{p_{0}}+\frac{1}{p_{0}}\frac{\sin^{2}(\phi_{1})}{\frac{p_{1}}{p_{0}}-\cos^{2}(\phi_{1})} \in [\frac{d}{p_{0}},\frac{d}{p_{1}}]. \end{align*} \item Otherwise, we are, from $t=0$, in the single-region steady state regime and have the effective dimension $d_{\mathit{eff}} = d/p_{1}$. \end{itemize} \end{corollary} \begin{proof} Using Lemma \ref{Dual Reachability Analysis} in the case of the single threshold model, we note that if region $0$ is reachable, it is necessarily dual reachable given that $r^{\dagger}(0,1)=0$; hence, we always have $p_{0}/p_{1}> r^{\dagger}(0,1)$. Thanks to the results of Lemma \ref{One-step Transient Analysis}, we also note that the reachability condition $\frac{p_{0}}{p_{1}} < r^{\star}(0,1) = \frac{d-1}{d\cos^{2}(\phi_{1})}$ corresponds to the condition of the first case, and that if region $0$ is reachable, it is done in a time: \begin{align*} t_{1} = \frac{1}{p_{1}}\frac{(d-1)\lambda}{d\sin^{2}(\phi_{1})\cos^{2}(\phi_{1})}\frac{\frac{p_{0}}{p_{1}}-1}{\frac{d-1}{d\cos^{2}(\phi_{1})}-\frac{p_{0}}{p_{1}}}. \end{align*} The expression of $d_{\mathit{eff}}$ then follows from Lemma \ref{Bi-Region Effective Dimension}. \end{proof} \section{Introduction} \input{text/intro} \section{Problem Setup and Background}\label{set up} \input{text/pb_setting} \section{Multi-Armed Bandits}\label{MAB} \input{text/finite_arms} \section{Contextual Bandit} \label{CB} \input{text/contextual} \section{Concluding Remarks} \input{text/ccl} \section*{Acknowledgments and Disclosure of Funding} This research project is supported by the AFOSR FA9550-19-1-0263 “Building attack resilience into complex networks” Grant. The authors would like to thank Prem Talwai and the anonymous reviewers for providing insightful comments and suggestions. \bibliographystyle{plain} \subsection{Related Work} Within the extensive bandits literature, well-surveyed in \citep{lattimore2020bandit,MAL-068}, our work is most closely related to stochastic delayed bandits. Initially, this line of work focused on the joint evolution of actions and information in settings where the reception of the latter is delayed~\cite{dudik2011efficient}. Of particular interest is the packet loss model recently introduced in \cite{stoch_unrest_delay}, which provides the regret bound $\mathcal{O}(\frac{1}{p}R_{T})$ where $R_{T}$ is the uncensored regret and $p$ the censorship probability. Analogous results have been shown in the context of Combinatorial Multi-Armed Bandits with probabilistically triggered arms; see for example, \cite{JMLR:v17:14-298} and \cite{NIPS2017_a8e864d0}.
Our work provides a systematic approach to study more general censorship models, and sheds light on how the impact of coupled feedback and censorship realizations on the expected regret can be evaluated in terms of the \emph{effective dimension} of the problem. Importantly, we also tackle the contextual bandit problems, where relatively few results are available on the regret under missing or censored feedback. A notable exception is the work of \cite{pmlr-v119-vernade20a}, who focus on a different information structure and obtain a scaling of $1/p$ (see Remark \ref{unif_models}). A related contribution by \cite{theshold_pot} provides both a potential-based analysis of the Upper Confidence Bound algorithm (UCB) for multi-armed bandits and an algorithmic variant leveraging the Kaplan-Meier estimator, although their censorship setting is different than ours. In particular, our results are applicable to settings when delay is significantly large (possibly infinite). This is in contrast to prior results on bandits with delayed information structure which assume either that the delay is \textit{constant}, \textit{upper bounded}, has a \textit{finite mean}, or simply provide regret guarantees that are \textit{linear in the cumulative delays} up to time $T$ \citep{dudik2011efficient,joulani2013online,queue_delay,ZhouGLM,pike2018bandits}. Under such assumptions on delay, one usually gets a second order additive dependency of the regret in terms of delay parameters, which practically says that delay is benign for bandits. On the other hand, we show that censorship leads to a first order multiplicative dependency on regret and we provide a complete characterization of this dependency for a wide range of bandits and censorship models. Moreover, the abovementioned works primarily focus on modifying well-known bandit algorithms to account for delays, or propose new delay-robust algorithms which may be difficult to implement in practice; a notable exception includes~\cite{wu2022thompson} but it focuses on Thompson Sampling. In our work, we instead focus on estimating the performance loss due to censorship and derive insights on the behavior of well-known UCB class of algorithms~\citep{li2010contextual,pmlr-v15-chu11a,NIPS2011_e1d5be1c}. These algorithms are widely used in practice; moreover, their theoretical study has been shown to be useful for analysis of broader class of algorithms (notably Thompson Sampling~\cite{agrawal2012analysis,tsVanRoy} and Information-Directed Sampling~\citep{NIPS2014_301ad0e3,pmlr-v75-kirschner18a}). On a somewhat related note, the literature on non-stochastic multi-armed bandit problems with delays~\citep{NIPS2010_7bb06076,pmlr-v49-cesa-bianchi16,NEURIPS2020_33c5f5bf} also tackles multiplicative dependency, although in a different setting than ours. Another related line of work is Partial Monitoring \citep{JMLR:v11:audibert10a,partialmonitor} which deals with generic categorization of learnability, rather than a fine-grained analysis of dimensionality in relation to censorship, which is our current focus. Our work contributes to the Generalized Linear Contextual Bandits literature \cite{NIPS2010_c2626d85,li2017provably} in two ways: firstly, through the use of these models in a sequential decision-making framework on which the impact of censorship is assessed in Sec. \ref{CB}. Secondly, by showing that our multi-threshold censorship model \ref{MT_model} induces, at first order, a non-linear structure that closely mirrors such models. 
Our results provide new tools to study this structure. It is useful to note that the notion of \textit{effective dimension} has been well-studied in the statistical learning and kernels literature~\citep{GPBandit,6138914} (where it is defined for a Gram matrix $K_n$ and regularization $\lambda$ as $d^n_{\text{eff}}(\lambda) = \text{tr}(K_n(K_n + \lambda\mathbb{I}_{d})^{-1})$). Our work shows that an analogous quantity governs the regret bound of bandit problems in censored settings. Finally, there is a rich literature on classical missing and censored data problems~\cite{little2002statistical,review_missing}. Although conditional on the choice of a given action the missing data/censorship process we study is an instance of missing-completely-at-random (MCAR), the online action generating process adds a significant difficulty to the problem: whereas MCAR is typically studied under a well-defined distributional assumption (e.g. i.i.d. generation of action), our problem needs to deal with adaptive (hence non i.i.d.) data generation process with respect to the filtration of past information. In particular, the structure of missing data set results from strong endogenous dependencies with past realization of the censorship (see Sec. \ref{set up}). \subsection{Summary of Results} In Sec. \ref{MAB}, we consider Multi-Armed Bandit (MAB) models and prove that the regret scales as $\Tilde{\mathcal{O}}(d_{\mathit{eff}}\sqrt{T})$ (Thm. \ref{THM Finite arms}), where $d_{\mathit{eff}}$ is the effective dimension with value $\sum_{a\in[d]}\frac{1}{p_{a}}$. In doing so, we recover and generalize related results from~\citep{stoch_unrest_delay,JMLR:v17:14-298} to more complex regularized settings and noise models. In particular, we prove that the effective dimension results from characterizing the so-called censored cumulative potential $\mathbb{V}_{\alpha}$. Interestingly, we also show that the adaptive nature of censorship on $\mathbb{V}_{\alpha}$ plays only a second order role (Prop. \ref{Monitoring AG}), that is, impact of censorship can be treated in an \textit{offline} manner at first order. Importantly, our study of MAB under censorship instantiates an analysis framework which extends to Linear Contextual Bandits (LCB) (Sec. \ref{CB}). Our main result provides that regret is still governed by the effective dimension, but now with a dependency of $\Tilde{\mathcal{O}}(\sigma\sqrt{d\cdot d_{\mathit{eff}}}\sqrt{T})$ (Thm. \ref{THM Linear arms}). To the best of our knowledge, these regret bounds provide the first theoretical characterization in LCB with censorship, and contribute to the literature by evaluating the impact of censorship on the performance of UCB-type algorithms. Our second main contribution is identifying the effective dimension for a broad class of multi-threshold models \ref{MT_model} as well as a precise understanding of the dynamic behavior induced by these models (Thm. \ref{THM Linear Optim MTM}). In particular, we find that censorship introduces a two-phase behavior: a transient phase during which the initial censoring misspecification is self-corrected at an additional cost; followed by a stationary phase that reflects the inherent slowdown of learning governed by the effective dimension. In extending our analysis from MAB to LCB, we also develop a continuous generalization of the widely used Elliptical Potential Inequality (Prop. \ref{Potential Control Linear}), which we believe is also of independent interest. Finally, our results (Thm. \ref{THM Finite arms} and Prop. 
\ref{Instance Dep Regret Finite} for MAB and Thm. \ref{THM Linear arms} for LCB) suggest that the UCB class of algorithms is indeed a reliable method for stochastic bandits problems under censorship. \subsection{Project Review} \paragraph{Summary:} In the past months, we have started to study sequential decision-making processes under uncertainty, colloquially known as multi-armed bandits, under a broad class of censored environments. Our work is motivated by two types of censorship models: (i) Repeated interaction between a principal and one or more agents where the reception of feedback associated with the principal’s decision (or recommendation) is conditional on the agents' willingness to follow that decision. (ii) Classic and frequent missing data problems where exogenous system faults/failure leads to loss of information and/or delays in arrival of that information. Our goal is to develop improvements to algorithms designed for classical uncensored environments and estimate the performance loss due to censorship. We also aim to arrive at worst-case guarantees on the performance of resulting strategies. This effort contributes to a systematic study of sequential decision making problems in strategic and adversarial environments.  \paragraph{Motivation:} A typical application is dynamic decision-making in logistics systems where an operator (principal) aims to maximize a cumulative reward metric (e.g. timeliness or fuel usage efficiency) by recommending routes to drivers (agents). At a given time (stage), the principal can only revise estimates on specific routes based on the data from agents who follow its recommendation to take those routes. The choice model of the agents endogenizes the censorship process. Additionally, censorship can also arise due to unreliable or insecure communication between principal and agents. The “optimality” in this decision-making problem depends on how fast the underlying latent condition of the network that governs the stage-wise rewards can be learned. The challenge arises from the fact that the data generating process is mediated by agents’ behavior and the data available is incomplete due to censorship. The question then is to develop efficient algorithms that account for censorship and estimate the performance loss (relative to no censorship benchmark).  \paragraph{Key Challenge:} Maximizing the cumulative metric of interest in this environment involves balancing between exploring, i.e., learning the unknown payoff relevant parameter, and exploiting the optimal decisions given the current state of knowledge. This duality generates a complex and interdependent sequence of decisions used to both learn and evaluate a posteriori performance. Our ongoing effort is to characterize the impact of censorship on three core features of the problem: the cumulative reward maximization, the learning of the uncertain latent state, and the data generating process. The extensive literature on decision making under uncertainty does not tackle the complex dynamics arising from impact of censorship and the problem is still poorly understood, even for the simplest form of censoring.  \paragraph{Results:} \begin{itemize} \item Our main contribution is the introduction and evaluation for a broad class of censorship models of the effective dimension of the problem, a natural measure of its statistical complexity. 
Intuitively, such effective dimension extends the classical dimension in finite, linear and contextual bandits and governs the magnitude of the approximation error in learning and the sub-optimality of the reward gain w.r.t. a clairvoyant player. Interestingly, we show that this effective dimension allows us to maintain the structure of the original problem, while embedding it in a bigger space, and thus to naturally produce results comparable with uncensored settings. \item Technically, this involves introducing a new analysis framework culminating in a generalized ellipsoid potential inequality seen through a continuous lens, which we believe is of independent interest due to its wide use by the bandit community. Our proof ideas involve an innovative mix of tools from optimal design, statistical learning and stochastic process theory. Moreover, the worst-case analysis we adopt allows us to precisely understand an adversary’s optimal strategy for negatively influencing learning in the presence of censorship. \item Importantly, we find that such a strategy involves two components whose properties depend on the characteristics of the censorship: (1) an initial transient phase of rapid decay that exploits the misspecification in latent state (e.g. unawareness of censorship), capturing the initial difficulty in the learning task from the principal’s perspective; (2) a follow-up stationary phase of slow decay, reflecting the inherent slowdown of learning in the steady state of the censored data process. \end{itemize} Potential future directions of interest include extending our approach to a more general class of realistic censoring, with particular emphasis on time-dependent processes. Formulating the problem from a Bayesian viewpoint would likely enable algorithms that extend basic Thompson Sampling (a widely used tool) to censored environments, and would allow us to gain probabilistic insights on the origin of the effective dimension. We believe that our new perspective can be exploited to design robust algorithms operating efficiently in unknown (and possibly adversarial) environments.  \subsection{Motivations and Justifications} In the past months, we have started to study sequential decision-making processes under uncertainty, colloquially known as multi-armed bandits, under a broad class of censored environments. Our work is motivated by two types of censorship models: \paragraph{Setting 1:}Repeated interaction (dynamic decision-making) between a principal and one or more agents where the reception of feedback associated with the principal’s decision (or recommendation) is conditional on the agents' willingness to follow that decision. Use cases: Recommender systems, A/B testing, Logistics systems. \paragraph{Setting 2:} Classic and frequent missing data problems in dynamical settings where exogenous system faults/failure leads to loss of information and/or delays in arrival of that information. Use cases: Signal processing and control theory, time series analysis, Healthcare, Adaptive Experiments, A/B testing, Transportation, Industry/Economics, Privacy. \subsection{Effective Dimension and Regret Bounds} The main result of this section is that censorship effectively enlarges the dimension of the problem. We define the effective dimension as $d_{\mathit{eff}}\triangleq \sum_{a\in [d]}\frac{1}{p_{a}}$ and our result (Thm. \ref{THM Finite arms}) shows that, at first order, the regret is guaranteed to be the same as the uncensored problem with $d_{\mathit{eff}}$ arms instead of $d$.
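As a purely illustrative example (the probabilities below are arbitrary), take $d=3$ arms with observation probabilities $p=(1,\tfrac{1}{2},\tfrac{1}{4})$: then $d_{\mathit{eff}}=1+2+4=7$, so at first order the censored problem behaves like an uncensored bandit with $7$ arms, and the bound of Thm. \ref{THM Finite arms} reads $\Tilde{\mathcal{O}}(\sigma\sqrt{7T})$ instead of the uncensored $\Tilde{\mathcal{O}}(\sigma\sqrt{3T})$.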
\begin{restatable}{theorem}{THMFinitearms} \label{THM Finite arms} Under censorship, the UCB algorithm with regularization $\lambda$ has an instance-independent expected regret of: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] &\leq \Tilde{\mathcal{O}}(\sigma \sqrt{d_{\mathit{eff}}T}). \end{align*} \end{restatable} Furthermore, we obtain analogous regret guarantees for instance-dependent cases where, at first order, the uncensored dimension $\sum_{a\neq a^{\star}}\frac{\sigma^{2}}{\Delta_{a}}$ enlarges to $\sum_{a\neq a^{\star}}\frac{\sigma^{2}}{p_{a}\Delta_{a}}$: \begin{restatable}{proposition}{InstanceDepRegretFinite} \label{Instance Dep Regret Finite} For a fixed action set $\mathcal{A}_{t} \equiv[d]$ and for a-priori known action gap $\Delta_{a}\triangleq \max_{\Tilde{a}}\theta^{\star}_{\Tilde{a}}-\theta^{\star}_{a}$, the UCB algorithm with regularization $\lambda$ has the instance-dependent expected regret: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] &\leq \mathcal{O}\Big(\log(T)\sum_{a\neq a^{\star}}\frac{1}{p_{a}}\max(\frac{\sigma^{2}}{\Delta_{a}},\Delta_{a})\Big). \end{align*} \end{restatable} On the one hand, a preliminary understanding of censorship posits an increase of the average ``\textit{regret per information gain}'' \cite{pmlr-v75-kirschner18a} (as it takes longer on average to get the same amount of information) but does not change the underlying complexity of the problem. On the other hand, our results (Thm. \ref{THM Finite arms} and Prop. \ref{Instance Dep Regret Finite}) indicate that the censored problem is equivalent at first order to a higher-dimensional problem but explored with the same \textit{regret per information gain}. The abovementioned results extend to a-priori known heteroskedasticity (see Rem. \ref{Hetero Finite 1} and \ref{Hetero Finite 2} in App. \ref{Proof MAB}). For this general setting, the effective dimension for the instance-independent (resp. dependent) case is given by $ \sum_{a}\frac{\sigma_{a}^{2}}{p_{a}}$ (resp. $\sum_{a\neq a^{\star}}\frac{\sigma_{a}^{2}}{p_{a}\Delta_{a}}$), where $\sigma_{a}^{2}$ is the variance proxy of arm $a$. Although the scaling in $\sum_{a}\frac{1}{\Delta_{a}p_{a}}$ was already mentioned in \cite{stoch_unrest_delay} for the unregularized setting with homogeneous variance $\sigma^{2}$ and proven to be optimal, our results generalize these findings. \subsection{Cumulative Censored Potential} We now provide a proof sketch of Thm. \ref{THM Finite arms}, and in doing so, we instantiate an analysis framework that will be extended in Sec. \ref{CB}. This proof consists in the successive elimination of the noise induced by the feedback and censorship. This leads to regret guarantees on a resulting deterministic quantity by characterizing worst-case learning conditions. The first step of the proof is a variant of the classical reduction of the UCB regret to another quantity we refer to as the \emph{expected cumulative censored potential}. Before stating it, we define, at the end of a round $t\in [T]$, the random number of times an arm $a$ has been \textit{pulled} as $\tau_{a}(t) \triangleq \sum_{l=1}^{t} \mathbf{1}\{a_{l}=a\}$. Similarly, the number of times an action $a$ has been \textit{realized} at the end of round $t$ is denoted $N_{a}(t) \triangleq \sum_{l=1}^{t} \mathbf{1}{\{a_{l}=a, x_{a_{l}}=1\}}$.
We then have: \begin{restatable}{lemma}{PotentialReductionFinite} \label{Potential Reduction Finite} Given a uniform regularization $\lambda>0$, the UCB algorithm verifies: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq 2 \sqrt{6\sigma^{2}\log(T)}\mathbb{E}[\mathbb{V}_{\frac{1}{2}}(T,\pi_{\text{UCB}})] + 2\lambda\|\theta^{\star}\|_{\infty}\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\text{UCB}})] + \frac{2d\Delta_{max}}{T}, \end{align*} where, for any $\alpha>0$ and $\pi \in \Pi$, the cumulative potential under censorship is given by: \begin{align*} \mathbb{V}_{\alpha}(T,\pi) = \sum_{t=1}^{T}(N_{a_{t}}(t-1)+\lambda)^{-\alpha}. \end{align*} \end{restatable} Without censorship, the cumulative potential captures the average rate of decay of uncertainty on the reward of different arms and is closely linked to the divergence between the true reward distribution and the empirical distribution of observed rewards \cite{pmlr-v119-shekhar20a}. Introducing censorship transforms the classical deterministic decay rate into a stochastic one. For a typical reward distribution, the rate of decay is proportional to a term in $n^{-\alpha}$ or can be upper bounded by such a term (see e.g. \cite{pmlr-v119-shekhar20a}), where $n$ is the number of \textit{observed} rewards. Therefore, a higher $\alpha$ corresponds to faster learning. In contrast to the classical non-regularized analysis or to the LCB case of Sec. \ref{CB}, we observe two different orders of $\alpha$ ($\nicefrac{1}{2}$ and $1$) coming from the use of the $L_{\infty}$-norm instead of the $L_{2}$-norm. Taken independently, they lead to respective contributions of $\mathcal{O}(\sqrt{d_{\mathit{eff}}T})$ and $\mathcal{O}(d_{\mathit{eff}}\log(T))$. Note that by working with a general $\alpha$, our analysis naturally extends beyond sub-Gaussian noise to more general assumptions about the Laplace transform of noise (e.g., lighter or heavier tails), as discussed in Rem.\ref{Tails}. To further study $\mathbb{V}_{\alpha}$, we introduce the following property: \begin{restatable}{proposition}{PotentialControlFinite} \label{Potential Control Finite} For all $\alpha >0$, $\delta \in ]0,1]$ and given $\psi_{\alpha}$ a primitive of $x\mapsto x^{-\alpha}$, we have: \begin{align*} \max_{\pi \in \Pi} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] &\leq \frac{d_{\mathit{eff}}}{(1-\delta)^{\alpha}}\left[\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\frac{\lambda}{1-\delta}) - \psi_{\alpha}(\frac{\lambda}{1-\delta})\right] + \frac{24d_{\mathit{eff}}\log(T)+d}{\lambda^{\alpha}} + \frac{4d_{\mathit{eff}}}{\lambda^{\alpha}\delta^{2}T^{12\delta^{2}}}. \end{align*} \end{restatable} The proof of this proposition involves two steps: firstly, we remove the stochastic dependence induced by the censorship through concentration properties (see App. \ref{Proof MAB}), and we then solve the resulting policy maximization problem (Lemma \ref{Optimization Lemma Finite}). In the first step, we consider, for a given $\delta \in ]0,1]$, the event: \begin{align*} \mathcal{H}_{CEN}(\delta) &= \left\{\exists a \in [d], t\in[T],N_{a}(t) < (1-\delta)p_{a}\tau_{a}(t) \quad \text{and} \quad \tau_{a}(t) \geq T_{0}(a) \right\}, \end{align*} where $T_{0}(a)\triangleq 24\log(T)/p_{a}+1$, and claim that $\mathbb{P}(\mathcal{H}_{CEN}(\delta)) \leq \frac{4d_{\mathit{eff}}}{\delta^{2}}T^{-12\delta^{2}}$, improving a result of \cite{stoch_unrest_delay}. Here $\mathcal{H}_{CEN}$ denotes the event where there is a significant gap between the realized and expected number of observed rewards.
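Before turning to its complement, the following small simulation sketch (purely illustrative, with hypothetical parameter values and a single arm pulled at every round under Bernoulli censorship) illustrates how rarely $\mathcal{H}_{CEN}(\delta)$ is triggered once $\tau_{a}(t)\geq T_{0}(a)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, p_a, delta = 10_000, 0.3, 0.5          # hypothetical horizon and parameters
T0 = int(24 * np.log(T) / p_a) + 1        # T_0(a) from the definition above

def violates_once():
    # One trajectory where arm a is pulled at every round: tau_a(t) = t and
    # N_a(t) counts the Bernoulli(p_a) feedback realizations.
    x = rng.random(T) < p_a
    N = np.cumsum(x)
    t = np.arange(1, T + 1)
    return bool(np.any((N < (1 - delta) * p_a * t) & (t >= T0)))

# Empirical frequency of the H_CEN-type violation over 1000 runs; the stated
# bound 4 d_eff / (delta^2 T^{12 delta^2}) is vanishingly small at these values.
print(np.mean([violates_once() for _ in range(1000)]))
\end{verbatim}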
We consider its complement in our analysis of the principal order of regret. This allows us to lower bound for each action, the realized number of reward observations by a multiple of the number of times that action was selected, thus eliminating the randomness induced by censoring. Our second step makes use of the following lemma (also known as a \textit{water-filling process} in information theory \cite{ITbook}): \begin{restatable}{lemma}{OptimizationLemmaFinite} \label{Optimization Lemma Finite} For $\psi_{\alpha}$ a primitive of $x\mapsto x^{-\alpha}$ where $\alpha \in ]0,1]$, regularization $(\lambda_{a})_{a\in[d]}\in (\mathbb{R}_{>0})^{d}$ and censorship vector $(p_{a})_{a\in [d]}$, the solution of the optimization problem: \begin{align*} \max_{\tau_{1}\dots,\tau_{d}\geq 0} \quad & \sum_{a\in[d]} \frac{1}{p_{a}}\Big(\psi_{\alpha}(p_{a}\tau_{a}+\lambda_{a})-\psi_{\alpha}(\lambda_{a})\Big) \quad \textrm{s.t.} \quad \sum_{a\in[d]}\tau_{a}=T \end{align*} is given by $\tau^{\star}_{a}=\frac{1}{p_{a}}[C-\lambda_{a}]^{+}$, where $C$ ensures the total budget constraint $ \sum_{a\in[d]}\tau^{\star}_{a}=T$. In particular, with $\lambda_{\text{eff}}\triangleq \frac{1}{d_{\mathit{eff}}}\sum_{a\in[d]}\frac{\lambda_{a}}{p_{a}}$ and $\lambda_{a}^{0}\triangleq d_{\mathit{eff}}(\lambda_{a}-\lambda_{\text{eff}})$, the optimal solution is given by $\tau^{\star}_{a}\triangleq \frac{1}{p_{a}d_{\mathit{eff}}}(T-\lambda_{a}^{0})$ for $T\geq \displaystyle\max_{a} \lambda_{a}^{0}$ and the optimal value is $d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda_{\text{eff}}) - \sum_{a\in [d]}\frac{1}{p_{a}}\psi_{\alpha}(\lambda_{a})$. \end{restatable} For unregularized algorithms, this framework can be easily applied to provide instances-dependent guarantees by adding constraints of type $\tau_{a} \leq f(\Delta_{a})$ within Lemma \ref{Optimization Lemma Finite}. Optimal guarantees under regularization such as the ones given in Prop. \ref{Instance Dep Regret Finite} require however to consider both orders of $\mathbb{V}_{\alpha}$ ($\nicefrac{1}{2}$ and $1$) simultaneously and not independently, leading to slight variations as shown in the proof of Prop. \ref{Instance Dep Regret Finite}. Next, we further discuss the properties of $\mathbb{V}_{\alpha}$ given its importance in our analysis. \subsection{Evaluating Adaptivity Gain} It is well known that adaptivity is a key feature of sequential decision problems: optimal policies use feedback from previous decisions to decide the next action to take based on the data, and in comparison non-adaptive policies can be quite suboptimal. Somewhat interestingly, the main result of this section is that adaptivity in the context of censoring does not provide a significant advantage to the decision maker. More precisely, being able to observe which decisions have been censored and adapting to this information does not bring more than a second order gain. In proving this result, we quantify and gain insight into the expected performance of policies that are adaptive to the realization of the censorship process, in comparison to a class of non-adaptive (i.e., offline) policies. In fact, through the introduction of $\mathcal{H}_{CEN}(\delta)$ and for any $\alpha \in [0,1]$, $\delta \in ]0,1]$, we showed in Prop. 
\ref{Potential Control Finite} the upper bound $\frac{d_{\mathit{eff}}}{(1-\delta)^{\alpha}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}} + \frac{\lambda}{1-\delta})$ for the learning complexity $\max \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)]$ where the maximum is taken over the class of adaptive policies $\Pi_{\mathit{adapt}}$, i.e., measurable with respect to the censorship. Note that the exact value of such maximum is notoriously difficult to study due to the adaptive nature of censorship induced by the decision-making process. Next, we introduce $\Pi_{\mathit{off}}$, the class of policies that are not adaptive with respect to the censorship and we prove that : \begin{restatable}{lemma}{asymptoff}\label{asympt_off} For $\alpha \in ]0,1]$ and $\lambda>0$, we have $\displaystyle \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] \sim d_{\mathit{eff}}\psi_{\alpha}(\frac{T}{d_{\mathit{eff}}}+\lambda)$. \end{restatable} In other words, restricting attention to offline policies is sufficient to obtain the correct scaling. The next step to complete our claim is the asymptotic expansion: \begin{restatable}{proposition}{MonitoringAG}\label{Monitoring AG} For $\alpha \in ]0,1]$, by denoting $\displaystyle \gamma_{\alpha}(\mathbf{p}) \triangleq \frac{\alpha}{2d_{\mathit{eff}}^{1-\alpha}}\sum_{a\in [d]}\frac{1}{p_{a}}\Big(\sum_{\Tilde{a}\neq a}\frac{1-p_{\Tilde{a}}}{p_{\Tilde{a}}}\Big)$, we have: \begin{align*} \max_{\pi \in \Pi_{\text{adapt}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] - \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] = \gamma_{\alpha}(\mathbf{p})\frac{1}{T^{\alpha}} + o(\frac{1}{T^{\alpha}}). \tag{$\star $}\label{Constant} \end{align*} Moreover, if for a given $\beta \in ]0,1[$, we introduce $\Pi_{\text{single}}(\beta T)$ the policy class whose censorship information set has a single updating at time $\lfloor\beta T\rfloor$, we have: \begin{align*} \max_{\pi \in \Pi_{\text{single}}(\beta T)} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] - \max_{\pi \in \Pi_{\text{off}}} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] = \gamma_{\alpha}(\mathbf{p})\frac{\beta}{T^{\alpha}} + o(\frac{1}{T^{\alpha}}). \tag{$\star \star$}\label{One-Shot} \end{align*} \end{restatable} Thus, $\gamma_{\alpha}(\mathbf{p})$ can be viewed as an adaptivity gain resulting from the continuous correction of the cumulative variance induced by the action selection process. Essentially, it is closely related to the Jensen Gap of an appropriate random variable and the proof involves the study of the Taylor expansion of the potential function $\psi_{\alpha}$. (\ref{One-Shot}) tells us that a single observation of the censorship realization is sufficient to obtain a near-optimal \textit{gain in adaptivity}. We present a proof sketch of Prop. \ref{Monitoring AG} in App. \ref{Proof MAB}. This shows that censorship in MAB can be treated in an \textit{offline} manner at first order. \subsection{Multi-threshold Models and Regret Bounds} To address the abovementioned challenge, we now introduce a simple \emph{multi-threshold} censorship model, which enables a precise regret analysis. 
In particular, we consider that feedback is censored according to the following action-dependent probability: \begin{align*} p:a \in \mathbb{B}_{d} \mapsto \sum_{j=0}^{k} \mathbf{1}\{\sin(\phi_{j}) \leq \langle a,u\rangle <\sin(\phi_{j+1}) \}p_{j}, \tag{$\mathcal{MT}$}\label{MT_model} \end{align*} where $(\phi_{j})_{j\leq k+1}$ is an increasing sequence verifying $\phi_{0}=-\frac{\pi}{2}$, $\phi_{k+1}=\frac{\pi}{2}$ and $u\in \mathbb{R}^{d}$ is a unit vector. We assume that $(p_{j})_{j\leq k}$ is decreasing, i.e. the censorship is increasing with $j$ in direction $u$. Henceforth, we refer to the interval $[\sin(\phi_{j}),\sin(\phi_{j+1})[$ as \emph{region $j$}. Note that simple models such as uniform censorship are subsumed by this family (for $k$ equal to $0$). \begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=0.4\textwidth]{img/fig3.png} \end{center} \caption{Example of a multi-threshold model for $k=2$ (Green). Logistic censorship model (Red).} \label{fig:MT} \end{wrapfigure} The two main features of the multi-threshold model are: the \emph{radial} aspect (the censorship probability depends on the action through a scalar product with a given vector) and the \emph{monotonicity} (the censorship is monotone in the value of this scalar product). Note that \ref{MT_model} can be seen as a piecewise constant approximation of any Generalized Linear Model (GLM) \cite{mccullagh1989generalized}. Thus, the simplicity of this censorship model is not an inherently limiting factor on the generality of our subsequent results. Moreover, \ref{MT_model} admits a natural behavioral interpretation: Such a distribution can be seen as induced by a population model of heterogeneous random-utility maximizing agents. A single threshold model (i.e. $k$ equal to $1$) corresponds to a given agent type, and the multi-threshold model naturally results from the aggregate responses of a heterogeneous population~\cite{DynamicDiscreteChoice}. We now state the main result of this section: \begin{restatable}{theorem}{THMLineararms}\label{THM Linear arms} For a given multi-threshold censorship model \ref{MT_model}, there exists $d_{\mathit{eff}}$ such that the UCB algorithm with regularization $\lambda$ has an instance-independent expected regret of: \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] &\leq\Tilde{\mathcal{O}}(\sigma\sqrt{d\cdot d_{\mathit{eff}}}\sqrt{T}). \end{align*} \end{restatable} Importantly, note the mapping from the original dimension $d$ to the enlarged $\sqrt{d \cdot d_{\mathit{eff}}}$, in contrast to the previous dilation $d\mapsto d_{\mathit{eff}}$ for the case of MAB problems. An extension to Generalized Linear Contextual Bandits is provided in App. \ref{gen linear} where we show that the dimension is governed by $\sqrt{d \cdot d_{\mathit{eff}}}/\kappa$, with $\kappa$ corresponding to a minimum of the derivative of the link function (encompassing the smoothness of the GLM at its maximum)~\citep{li2017provably,NIPS2010_c2626d85}. We conjecture that this result still holds if we relax the monotonicity property of \ref{MT_model}, although it would require some modifications in the proofs of Sec. \ref{Proof Multi}. On the other hand, we believe that the radial property is necessary, considering the related literature on GLMs (further discussed in App. \ref{gen linear}) where it appears prominently.
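To make \ref{MT_model} concrete, here is a minimal Python sketch with illustrative thresholds and probabilities only (the numerical values are hypothetical), taking $u$ to be the last basis vector without loss of generality:
\begin{verbatim}
import numpy as np

# Hypothetical instance of the multi-threshold model (MT) with k = 2, d = 3,
# direction u = e_d and decreasing feedback probabilities p_0 > p_1 > p_2.
phi = np.array([-np.pi / 2, np.pi / 6, np.pi / 3, np.pi / 2])  # phi_0,...,phi_{k+1}
p_region = np.array([0.9, 0.5, 0.2])

def feedback_probability(a):
    # p(a) = p_j on the region  sin(phi_j) <= <a, e_d> < sin(phi_{j+1}).
    s = a[-1]
    j = np.searchsorted(np.sin(phi), s, side="right") - 1
    return p_region[np.clip(j, 0, len(p_region) - 1)]

def sample_feedback(a, rng):
    # x_a ~ Bernoulli(p(a)): the reward of action a is observed iff x_a = 1.
    return rng.random() < feedback_probability(a)

rng = np.random.default_rng(0)
a = np.array([0.6, 0.0, 0.8])          # unit-norm action, lies in region 1
print(feedback_probability(a), sample_feedback(a, rng))
\end{verbatim}
A logistic censorship curve, as sketched in Fig. \ref{fig:MT}, would simply be approximated by refining the grid of thresholds.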
\subsection{Generalized Cumulative Censored Potential} Analogous to the MAB case, we now introduce for LCB the random matrices corresponding to the effective realization $\mathbb{W}^{C}_{t}\triangleq \lambda\mathbb{I}_{d} + \sum_{n=1}^{t} x_{a_{n}}a_{n}a_{n}^{\top}$ and the expected realization $\mathbb{W}_{t} \triangleq \lambda\mathbb{I}_{d} + \sum_{n=1}^{t} p(a_{n})a_{n}a_{n}^{\top}$. We also introduce the continuous counterpart of $\mathbb{W}_{t}$ defined as $\mathbb{W}(t)\triangleq \lambda\mathbb{I}_{d}+\int_{u=0}^{t}p(a(u))a(u)a(u)^{\top}\partial u$, where $(a(u))_{u\leq T}$ is an integrable deterministic path.\footnote{In this section, the generic notation $X(t)$ is used for continuous time quantities and $X_{t}$ for discrete time.} We emphasise that the use of this continuous counterpart is key in enabling our next results. As in the MAB case, we bound the regret, although now using a generalization of $\mathbb{V}_{\alpha}$: \begin{restatable}{lemma}{PotentialReductionLinear} \label{Potential Reduction Linear} For all $\delta \in ]0,1]$, there exists a constant $\Tilde{\beta_{\delta}}(T)=\Theta(\sqrt{d\log(T)})$ such that \begin{align*} \mathbb{E}[R(T,\pi_{\text{UCB}})] \leq 2\Tilde{\beta_{\delta}}(T) \sqrt{T\mathbb{E}[\mathbb{V}_{1}(T,\pi_{\textit{UCB}})]} + \delta T\Delta_{max}, \end{align*} where, for $\alpha>0$ and $\pi \in \Pi$, the linear extension of the cumulative censored potential is given by: \begin{align*} \mathbb{V}_{\alpha}(T,\pi) \triangleq \sum_{t=1}^{T}\|a_{t}\|^{2}_{(\mathbb{W}^{C}_{t-1})^{-\alpha}} = \sum_{t=1}^{T}\operatorname{Tr}((\mathbb{W}^{C}_{t-1})^{-\alpha}a_{t}a_{t}^{\top}). \end{align*} \end{restatable} The proof idea is analogous to (albeit more involved than) the finite action case (see App. \ref{Proof LCB}). In order to get a handle on $\mathbb{V}_{\alpha}$, we again leverage a two-step approach: first we eliminate the randomness due to censorship (here, we utilize matrix martingale inequalities) and then optimize the resulting deterministic quantity seen through a continuous lens. The first step requires the following result: \begin{restatable}{proposition}{PotentialControlLinear} \label{Potential Control Linear} For any $\delta \in ]0,1]$, $\lambda>0$, $\alpha>0$ and policy $\pi \in \Pi$, we have: \begin{align*} \mathbb{E}[\mathbb{V}_{\alpha}(T,\pi)] \leq \frac{\delta}{\lambda^{\alpha}} + C(\delta)^{\alpha} \operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-\alpha}a(t)a(t)^{\top}\partial t\Big), \end{align*} where $C(\delta)\triangleq 8(\lambda+1)\max(\log(d/\delta)/\lambda,1)/\lambda$. \end{restatable} The key idea of this result is to observe that the telescopic sum on which the classical Elliptical Potential lemma \citep{NIPS2011_e1d5be1c,adpt_cofond_Russo,carpentier2020elliptical} heavily relies is, in fact, the discrete approximation of an integral over a matrix path. This critical methodological contribution is further discussed in Rem. \ref{rem 1} and \ref{tour de force}. \begin{remark}\label{rem 1} One way to fully appreciate the generality of this result is to consider the simpler case of a classical uncensored environment for which we obtain for $\alpha>0, \alpha \neq 1$: \begin{align*} \sum_{t=1}^{T}\|a_{t}\|^{2}_{\mathbb{W}_{t-1}^{-\alpha}} &\leq \Big(\frac{\lambda+1}{\lambda}\Big)^{\alpha}\frac{\operatorname{Tr}\Big(\int_{0}^{T} \partial\mathbb{W}(t)^{1-\alpha}\Big)}{1-\alpha} = \Big(\frac{\lambda+1}{\lambda}\Big)^{\alpha}\frac{\operatorname{Tr}(\mathbb{W}^{1-\alpha}_{T}-\mathbb{W}^{1-\alpha}_{0})}{1-\alpha}.
\end{align*} For $\alpha =1$, a similar reasoning is applied using the formula $\operatorname{Tr}(\log(A))=\log(\det A)$: \begin{align*} \sum_{t=1}^{T}\|a_{t}\|^{2}_{\mathbb{W}_{t-1}^{-1}} &\leq \frac{\lambda+1}{\lambda}\int_{0}^{T}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = \frac{\lambda+1}{\lambda}\operatorname{Tr}(\log\mathbb{W}_{T}-\log\mathbb{W}_{0}) = \frac{\lambda+1}{\lambda}\log\frac{\det\mathbb{W}_{T}}{\det\mathbb{W}_{0}}. \end{align*} A deeper study of the eigenvalues of $\mathbb{W}^{1-\alpha}_{T}$ then yields the worst-case upper bound $d^{\alpha}(d\lambda + T)^{1-\alpha}/(1-\alpha)$ for $\alpha < 1$ and $d\lambda^{1-\alpha}/(\alpha-1)$ for $\alpha>1$, recovering more naturally and extending the results of \citep{carpentier2020elliptical}. Thus, analogous to the \textit{water filling process} highlighted in the MAB case in Lemma \ref{Optimization Lemma Finite}, we now consider a \textit{spectral water-filling} process \cite{ITbook} optimizing over the eigenvalues of $\psi_{\alpha}(\mathbb{W}_{T})$ with a slight abuse of notation ($\mathbb{W}^{1-\alpha}_{T}$ and $\log\mathbb{W}_{T}$ in this discussion). \end{remark} Following Rem. \ref{rem 1}, for the general censored case the challenge now becomes to identify a suitable matrix operator on which the aforementioned spectral maximization can be performed. By applying Lemma \ref{Potential Reduction Linear}, we henceforth focus on the case of $\alpha=1$ for which Prop. \ref{Potential Control Linear} implies that for any policy: \begin{align*} \operatorname{Tr}\Big(\int_{0}^{T}\mathbb{W}(t)^{-1}a(t)a(t)^{\top}\partial t\Big) = \int_{0}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t. \end{align*} Next, we focus on maximizing this integral over the policy class $\Pi$ and again recover the notion of effective dimension. \subsection{Effective Dimension in Linear Settings} We now highlight immediate properties of the effective dimension, and then present its general study for the multi-threshold model \ref{MT_model}. \begin{lemma}\label{unif_models} Let us consider a uniform censorship model $p:a\mapsto \Bar{p}$. By leveraging the case of equality in the Arithmetic-Geometric inequality applied to the eigenvalues of $\mathbb{W}_{T}$, we then simply deduce the associated effective dimension $d_{\mathit{eff}}\triangleq d/\Bar{p}$: \begin{align*} \max_{\pi \in \Pi}\int_{0}^{T}\frac{1}{\Bar{p}}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t &= d_{\mathit{eff}}\log(1+\frac{T}{\lambda d_{\mathit{eff}}}). \end{align*} \end{lemma} In fact, the logarithmic scaling of this quantity persists while moving beyond the uniform censorship assumption. This also highlights the importance of the leading dimension factor, crudely upper bounded by $d/p_{\mathit{min}}$ in the next lemma: \begin{restatable}{lemma}{logscaling}\label{log_scaling} For \textit{any} censorship function $p$, by introducing lower and upper bounds $(p_{\mathit{min}},p_{\mathit{max}})$ of $p$, we have: \begin{align*} \frac{d}{p_{\mathit{max}}}\log(1+\frac{p_{\mathit{min}}T}{d\lambda}) \leq \max_{\pi \in \Pi} \int_{0}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t \leq \frac{d}{p_{\mathit{min}}}\log(1+\frac{p_{\mathit{max}}T}{d\lambda}).
\end{align*} \end{restatable} Related problems in the Generalized Linear Models literature \cite{ZhouGLM,li2017provably,NIPS2010_c2626d85} are implicitly solved in the spirit of Lemma \ref{log_scaling}, where a minimum of the derivative of the link function plays the role of $p_{\mathit{min}}$ above. However, when the function $p$ varies with action $a$, a more careful analysis is required to derive useful dimensional bounds. Our next major result addresses this gap in the literature by improving the bounds provided in Lemma \ref{log_scaling}: \begin{restatable}{theorem}{THMLinearOptimMTM} \label{THM Linear Optim MTM} For a multi-threshold censorship model \ref{MT_model}, we have: \begin{align*} \max_{\pi \in \Pi} \int_{0}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = d_{\mathit{eff}}\log(T) + o(\log(T)), \tag{$\mathcal{P}$}\label{optim_prob} \end{align*} where $d_{\mathit{eff}}$ is the effective dimension. Furthermore, $d_{\mathit{eff}}$ is characterized by two cases: \begin{itemize} \item \textbf{Case 1:} Single-region $j$ effective dimension, $d_{\mathit{eff}} = \frac{d}{p_{j}}$. \item \textbf{Case 2:} Bi-region $(i,j)$ effective dimension, with $i<j$: \begin{align*} d_{\mathit{eff}}&=\frac{1}{p_{j}}\left[(d-1) \frac{1-l(i,j)}{\frac{p_{i}}{p_{j}}-l(i,j)}+\frac{u(i,j)-1}{u(i,j)-\frac{p_{i}}{p_{j}}}\right] < \frac{d}{p_{j}}, \tag{$\mathcal{D}$}\label{bi_reg} \end{align*} where $l(i,j) \triangleq \frac{\sin^{2}(\phi_{i})}{\sin^{2}(\phi_{j})}$ and $u(i,j) \triangleq \frac{\cos^{2}(\phi_{i})}{\cos^{2}(\phi_{j})}$. \end{itemize} \end{restatable} The implications of these cases are further discussed in Fig.\ref{deff} in App. \ref{Proof Multi}. Notice that a necessary condition for the bi-region $(i,j)$ effective dimension to arise is the constraint on $\frac{p_{i}}{p_{j}}$: \begin{align*} \max(1,\underbrace{\frac{d l(i,j)u(i,j)}{u(i,j)+(d-1) l(i,j)}}_{\triangleq s^{\star}(i,j)}) < \frac{p_{i}}{p_{j}} < \underbrace{\frac{(d-1) u(i,j)+l(i,j)}{d}}_{\triangleq r^{\star}(i,j)}. \end{align*} In the limit $\frac{p_{i}}{p_{j}}\rightarrow r^{\star}(i,j)$, $d_{\mathit{eff}}$ goes again to $d/p_{j}$. We interpret this limiting case as \textit{locally hard} in the sense that censorship in region $j$ is sufficiently important in comparison to all other regions to impose a maximal effective dimension on the problem, irrespective of the values of $p_{i}$, matching Lemma \ref{log_scaling}. On the other hand, for the other limiting case (under additional mild assumptions), we find that $d_{\mathit{eff}}$ also goes to $d/p_{j}$, but now for a \textit{uniformly hard} reason: that is, censorship is approximately constant and equal to $p_{j}$, recovering Lemma \ref{unif_models}. Finally, in between these two extremes lies the \textit{minimum effective dimension} for a given value of $\frac{p_{i}}{p_{j}}$. \subsection{Temporal dynamics of $\mathbb{W}(t)$}\label{temp dyn} The proof of Thm. \ref{THM Linear Optim MTM} requires the characterization of the dynamics of the optimal policy of (\ref{optim_prob}). Importantly, we discover that the evolution of $\mathbb{W}(t)$ is described by two qualitatively different regimes as outlined next. It turns out that our continuous approach to analyzing the cumulative censored potential is an important tool for obtaining this result.
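As a quick numerical illustration of the expression \ref{bi_reg} and of its feasibility constraint (a sketch of ours; the parameter values below are purely illustrative and the fallback branch is a simplification), one may evaluate $d_{\mathit{eff}}$ in Python and check that the bi-region value indeed stays below $d/p_{j}$:
\begin{verbatim}
import numpy as np

def bi_region_deff(d, phi_i, phi_j, p_i, p_j):
    """Evaluate the bi-region effective dimension (D); as an illustrative
    fallback, return the single-region value d / p_j when the
    feasibility constraint on p_i / p_j does not hold."""
    l = np.sin(phi_i) ** 2 / np.sin(phi_j) ** 2
    u = np.cos(phi_i) ** 2 / np.cos(phi_j) ** 2
    r = p_i / p_j
    s_star = d * l * u / (u + (d - 1) * l)
    r_star = ((d - 1) * u + l) / d
    if max(1.0, s_star) < r < r_star:
        return ((d - 1) * (1 - l) / (r - l) + (u - 1) / (u - r)) / p_j
    return d / p_j

# illustrative parameters: d = 5, region angles pi/6 and pi/3
print(bi_region_deff(5, np.pi / 6, np.pi / 3, p_i=0.8, p_j=0.4))  # 9.0
print(5 / 0.4)                                                    # 12.5
\end{verbatim}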
\paragraph{Transient Regime:} There exists a decreasing sequence of censorship regions $\{i_{1}=k,\dots,i_{l}\}$ of length $l \in [k+1]$ and an associated time sequence $\{t_{0}\triangleq 0,t_{1},\dots,t_{l}\}$ such that whenever $t_{j}\leq t \leq t_{j+1}$ for a given index $j\leq l-1$, the evolution of $\mathbb{W}(t)$ is given by: \begin{align*} \mathbb{W}(t) &= p_{i_{j+1}}(t-t_{j})\mathbb{W}_{i_{j+1}} + \mathbb{W}(t_{j}) = p_{i_{j+1}}(t-t_{j})\mathbb{W}_{i_{j+1}} + \sum_{n=1}^{j} p_{i_{n}}(t_{n}-t_{n-1})\mathbb{W}_{i_{n}} + \lambda \mathbb{I}_{d}, \end{align*} where $\mathbb{W}_{i}$ denotes the $d\times d$ diagonal matrix $ \text{diag}(\frac{\cos^{2}(\phi_{i})}{d-1},\dots,\frac{\cos^{2}(\phi_{i})}{d-1},\sin^{2}(\phi_{i}))$. Interestingly, the initial misspecification of censorship is self-corrected during this transient step, but at an extra cost. This characterization of the transient regime highlights an important consequence of using classical algorithms in censored environments. \paragraph{Steady State Regime:} After the transient regime, the dynamics of $\mathbb{W}(t)$ enter a steady-state regime, where one of the two following cases necessarily arises:\footnote{These cases are fully characterized in terms of the parameters of the censorship model in Lemmas \ref{One-step Transient Analysis}, \ref{Dual Reachability Analysis}, \ref{Bi-Region Effective Dimension} and Cor. \ref{Path Formula}.} \begin{itemize} \item \textbf{Case 1: Single region $i_{l}$.} This case arises when the last element of the time sequence $t_{l}$ is equal to $+\infty$ and we have the single-region evolution for all $t\geq t_{l-1}$: \begin{align*} \mathbb{W}(t) &= p_{i_{l}}(t-t_{l-1})\mathbb{W}_{i_{l}} + \mathbb{W}(t_{l-1}) = p_{i_{l}}(t-t_{l-1})\mathbb{W}_{i_{l}} + \sum_{n=1}^{l-1} p_{i_{n}}(t_{n}-t_{n-1})\mathbb{W}_{i_{n}} + \lambda \mathbb{I}_{d}. \end{align*} The effective dimension corresponding to this dynamics is $d/p_{i_{l}}$, with the following equality for $T\geq t_{l-1}$: \begin{align*} \int_{0}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = \frac{1}{p_{i_{l}}}\log\det(\mathbb{W}(T))+ \sum_{n=1}^{l-1} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}). \end{align*} \item \textbf{Case 2: Bi-region $(i_{l+1},i_{l})$.} This case arises when the steady-state dynamics of $\mathbb{W}(t)$ span the two regions $(i_{l+1},i_{l})$ with $i_{l+1}<i_{l}$. For all $t\geq t_{l}$, we have the evolution: \begin{align*} \mathbb{W}(t) &\propto p_{i_{l+1}}(t+\lambda^{\star})\begin{pmatrix} \cos^{2}(\phi_{i_{l}})(u(i_{l+1},i_{l})-\frac{p_{i_{l+1}}}{p_{i_{l}}})\mathbb{I}_{d-1} & (0) \\ (0) &\sin^{2}(\phi_{i_{l}})(\frac{p_{i_{l+1}}}{p_{i_{l}}}-l(i_{l+1},i_{l})) \end{pmatrix}, \end{align*} where $\lambda^{\star}$ and the proportionality factor are specified in SI. The corresponding effective dimension is given by (\ref{bi_reg}) and the following equality holds for all $T\geq t_{l}$: \begin{align*} \int_{0}^{T}\frac{1}{p(a(t))}\frac{\partial\log\det(\mathbb{W}(t))}{\partial t}\partial t = d_{\mathit{eff}}\log(1+\frac{T-t_{l}}{t_{l}+\lambda^{\star}}) + \sum_{n=1}^{l} (\frac{1}{p_{i_{n}}}-\frac{1}{p_{i_{n+1}}})\log\det\mathbb{W}(t_{n}). \end{align*} \end{itemize} For further discussions on the transient and steady-state regimes, we refer to Figs. \ref{reach_stat}, \ref{reach} and \ref{switch} in App. \ref{Proof Multi}. \subsection{Notations} The transpose of a vector $u$ is denoted by $u^{\top}$, the classical Euclidean inner product by $\langle .,.\rangle$ and the trace operator by $\operatorname{Tr}$.
For a positive semi-definite matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{d \times d}$ and for any vector $u \in \mathbb{R}^{d}$, the notation $\|u\|_{\Sigma}$ refers to $\sqrt{u^{\top} \boldsymbol{\Sigma} u}$. We use the notation $\mathbb{I}_{d}$ to denote the $d\times d$ identity matrix. $\mathbb{B}_{d}$ is the unit ball in $\mathbb{R}^{d}$. $[n]$ is the set of integers $\{1,2, \cdots, n\}$. For a given function $f$, we denote by $f^{(i)}$ the $i^{th}$ derivative of $f$. To avoid confusion with the dimension $d$, we use $\partial x$ instead of $dx$ to denote an infinitesimal increase of $x$. We use the asymptotic notations $\sim$, $\mathcal{O}$, $\Theta$ and $\Tilde{\mathcal{O}}$ ($\mathcal{O}$ up to logarithmic factors). Finally, for an event $\mathcal{H}$, we use $\neg \mathcal{H}$ to denote its complement. \subsection{Sequential Online learning: Problem Formulation and Algorithms} \begin{itemize} \item \textbf{Classical Multi-Arm setting/contextual bandits}: Our setup involves a principal interacting with an environment over $T$ rounds. The environment depends on an unknown latent parameter $\theta^{\star} \in \Theta \subset \mathbb{R}^{d}$, drawn by Nature. At each round $1,\dots,T$, the principal receives from an oblivious adversary a set of actions $\mathcal{A}_{t}$, included in a bounded subset $\mathcal{A}\subset \mathbb{R}^{d}$. From such a set, the principal selects an action $a_{t} \in \mathcal{A}_{t}$, according to a strategy $\pi$. This action is associated with a reward drawn from an action-dependent distribution and denoted $y_{a_{t}}\in\mathbb{R}$, modeled in what follows, unless stated otherwise, as $y\triangleq\langle a,\theta^{\star}\rangle + \epsilon$, where $\epsilon$ is a (conditionally) independent sub-Gaussian noise of pseudo-variance $\sigma^{2}$. Uncertainty regarding the latent parameter induces uncertainty about the true optimal action at each step $a^{\star}_{t} \triangleq \arg \max_{a\in \mathcal{A}_{t}} \mathbb{E}[y_{a}]$. Optimal behavior can be expressed in the formalism of Bellman equations, but it is generally intractable, which leads to the study of approximately optimal policies. A surprisingly rich notion to consider is then the \textit{regret}, the difference in gain between the principal and a clairvoyant agent. More precisely, we introduce the notion of pseudo-regret: \begin{align*} R(T)\triangleq \sum_{t=1}^{T} \mathbb{E}[y_{a^{\star}_{t}}-y_{a_{t}}] = \sum_{t=1}^{T} \langle a^{\star}_{t} - a_{t},\theta^{\star}\rangle, \end{align*} where the expectation is with respect to the reward noise only. The performances of different policies are then compared in terms of the scaling of the regret with respect to the main quantities of the problem: $T$ the number of rounds and a measure of the \textbf{dimension} of the problem (e.g. the number of arms, the ambient dimension $d$ or properties of the reward function). One can potentially see this regret through a Bayesian lens by introducing a prior over the possible $\theta^{\star}$ and averaging the regret over instances. \item \textbf{Censorship and Information Structure}: The uncensored bandit setting is governed by the filtration $(\mathcal{F}^{NC}_{t})_{t\leq T}$ where $\mathcal{F}^{NC}_{t} \subset \mathcal{F}$ is the sigma algebra generated by $(a_{1},y_{a_{1}},\dots,a_{t-1},y_{a_{t-1}})$. Such actions are in turn generated by a possibly randomized policy $\pi\triangleq(\pi_{t})_{t\leq T}$ that is $\mathcal{F}^{NC}_{t}$-adapted.
In other words, $\pi_{t}$ can be seen as a probability distribution over actions conditioned on $\mathcal{F}^{NC}_{t}$. One of the key features of our model is that the reward will not always be observed by the principal. More precisely, it will be observed according to the outcome of a draw from an action-dependent Bernoulli distribution. In such a case, we shall refer to this action as being realized. Such randomness is used to model the choice of the agent in setting $1$ or the potential loss of information in setting $2$. Due to the online setting where the principal endogenously learns about the environment while acting, the design of the information structure is paramount. More precisely, for a given action $a_{t}$, we denote by $x_{a_{t}}$ the binary random variable taking value $1$ if and only if the action was realized. The complete information structure can be defined as the filtration $(\mathcal{F}^{C}_{t})_{t\leq T}$ where $\mathcal{F}^{C}_{t} \subset \mathcal{F}$ is the sigma algebra generated by $(a_{1},y_{a_{1}},x_{a_{1}},\dots,a_{t-1},y_{a_{t-1}},x_{a_{t-1}})$. Let us denote by $\Pi_{adapt}$ the set of policies adapted to this filtration and refer to them as adaptive policies. Yet, we shall first focus on a more realistic setting where the reward is observed only upon realization and where the absence of realization, i.e. censorship, does not convey any information. In other words, we shall mainly focus on the filtration $(\mathcal{F}_{t})_{t\leq T}$ where $\mathcal{F}_{t} \subset \mathcal{F}$ is the sigma algebra generated recursively by: \begin{align*} \mathcal{F}_{t+1}=\begin{cases} \mathcal{F}_{t}, & \text{if}\ x_{a_{t}}=0 \\ \sigma(\mathcal{F}_{t} \cup \sigma(a_{t},y_{a_{t}})), & \text{otherwise} \end{cases} \end{align*} and we shall denote by $\Pi_{off}\subset \Pi_{adapt}$ the subset of policies adapted to this filtration (a minimal simulation of this censored feedback loop is sketched right after this list). \item \textbf{Optimistic algorithms}: In order to study the impact of censorship on the multi-arm problem, this work shall consider the class of high-probability index algorithms based on the \textit{optimism under uncertainty} principle. Beyond the fact that they are widely used in practice, their theoretical study has proven to be intimately linked to that of a larger class of algorithms (notably Thompson Sampling and Information Directed Sampling). More generally, extensions of our techniques can also apply to recent works on non-stationary environments or linear reinforcement learning. The regret analysis of such algorithms relies on two steps: study the average ``regret per information gain'' achieved by the algorithm at each step and multiply it by the (worst case) total information, i.e. the maximal uncertainty complexity associated with the problem. \end{itemize}
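The following minimal Python sketch (purely illustrative; the censorship function, the uniform placeholder policy and all constants are our own choices rather than part of the formal setup) simulates the censored feedback loop described above, where the available history grows only when the realization indicator $x_{a_{t}}$ equals one:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, T, sigma = 3, 1000, 0.1
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

def realization_prob(a):      # an arbitrary action-dependent probability
    return 0.9 if a @ np.ones(d) / np.sqrt(d) < 0.3 else 0.3

history = []                  # information available under (F_t)
for t in range(T):
    arms = rng.normal(size=(5, d))
    arms /= np.linalg.norm(arms, axis=1, keepdims=True)
    a = arms[rng.integers(5)]               # placeholder uniform policy
    y = a @ theta_star + sigma * rng.normal()
    x = rng.random() < realization_prob(a)  # Bernoulli realization draw
    if x:                                   # reward observed only if realized
        history.append((a, y))
print(len(history), "of", T, "rewards were actually observed")
\end{verbatim}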
\section{Introduction} The Standard Model has been confirmed by the discovery of the Higgs scalar and other precision measurements. However, it still has various mysteries. One of them is the mystery of the flavor structure. Why are there three generations? Why are quark and lepton masses hierarchical? Which mechanism determines their mixing angles? Indeed, the Yukawa sector contains most of the free parameters of the Standard Model. Discrete flavor symmetries would be important to understand fermion masses and mixing angles \cite{Altarelli:2010gt,Ishimori:2010au,King:2013eh}. For example, the mixing matrix in the lepton sector, the PMNS matrix, can be approximated by the tri-bimaximal mixing matrix in the limit $\theta_{13}=0$ \cite{Harrison:2002er}. In field-theoretical model building, one starts with a large flavor symmetry. Then, one assumes that the flavor symmetry breaks properly into $Z_3$ and $Z_2$ subsymmetries in the charged lepton or the neutrino masses, such that the tri-bimaximal mixing can be realized. Superstring theory is a promising candidate for a unified theory of all of the interactions including gravity and all of the matter fields and Higgs field(s) (see for a review \cite{Ibanez}). It is found that superstring theory on a six-dimensional compact space leads to interesting flavor structures. In particular, certain types of four-dimensional superstring models with rather simple six-dimensional compact spaces such as tori and orbifolds lead to definite discrete flavor symmetries. For example, intersecting D-brane models and magnetized D-brane models are among the interesting model-building approaches in superstring theory \cite{Bachas:1995ik,Berkooz:1996km,Blumenhagen:2000wh,Aldazabal:2000dg,Angelantonj:2000hi, Ibanez:2001nd} (see for a review \cite{Blumenhagen:2006ci,Ibanez} and references therein). These intersecting/magnetized D-brane models can lead to discrete flavor symmetries such as $D_4$, $\Delta(27)$, $\Delta(54)$ \cite{Abe:2009vi,Abe:2009uz,BerasaluceGonzalez:2012vb}.\footnote{See also \cite{Higaki:2005ie}.} Similar discrete flavor symmetries can be derived in heterotic string theory on orbifolds \cite{Kobayashi:2004ya}.\footnote{See for recent works on other discrete stringy symmetries, e.g. \cite{BerasaluceGonzalez:2011wy,Ibanez:2012wg,BerasaluceGonzalez:2012zn, Anastasopoulos:2012zu,Honecker:2013hda,Bizet:2013gf,Nilles:2013lda,Bizet:2013wha}.} In these models, we can calculate explicitly Yukawa couplings and higher order couplings \cite{Hamidi:1986vh,Cvetic:2003ch,Cremades:2004wa}. However, such discrete flavor symmetries may be broken by non-perturbative effects. {}From such a viewpoint, anomalies of discrete symmetries \cite{Araki:2008ek,Araki:2007ss,Nilles:2013lda,Bizet:2013wha,Honecker:2013hda} are important because anomalous symmetries may be broken by non-perturbative effects. Even anomaly-free U(1) gauge symmetries can be broken when axions couple with U(1) gauge bosons and the latter become massive. Furthermore, as concrete non-perturbative effects, D-brane instanton effects have been studied \cite{Blumenhagen:2006xt} (see also for a review \cite{Blumenhagen:2009qh} and references therein). {}From the viewpoint of flavor physics, one of the important points is that D-brane instanton effects can generate right-handed Majorana neutrino masses \cite{Ibanez:2006da,Ibanez:2007rs,Cvetic:2007ku}.
Then, it is also important to investigate patterns of right-handed Majorana neutrino mass matrices derived by D-brane instanton effects and to study whether such effects break some or all of the discrete flavor symmetries and which symmetries remain unbroken. In this paper, we study the flavor structure in intersecting D-brane models as well as magnetized D-brane models. We study anomalies of discrete flavor symmetries derived in intersecting D-brane models. We also study right-handed Majorana neutrino mass matrices, which can be generated by D-brane instanton effects. We show which types of Majorana mass matrices can be derived and which flavor symmetries remain unbroken even with right-handed Majorana neutrino mass matrices generated by D-brane instanton effects. This paper is organized as follows. In section 2, we review briefly the discrete flavor symmetries derived from intersecting D-brane models as well as magnetized D-brane models. In section 3, we study anomalies of these discrete flavor symmetries\red{.} In section 4, we study right-handed Majorana masses generated by D-brane instanton effects. Section 5 is devoted to the conclusion and discussion. \red{In Appendix A, we present the computation in which a non-vanishing Wilson line phase is integrated over.} \section{Discrete flavor symmetries} In this section, we review briefly discrete flavor symmetries appearing in intersecting D-brane models as well as magnetized D-brane models \cite{Abe:2009vi,BerasaluceGonzalez:2012vb}. For concreteness, we consider IIA D6-brane models on $T^6=T^2_1\times T^2_2 \times T^2_3$, where each D6-brane wraps a one-cycle of each $T^2$ of $T^6=T^2_1\times T^2_2 \times T^2_3$. That is, our setup is as follows. We consider $N_a$ stacks of D6-branes, which lead to a $U(N_a)$ gauge symmetry, and they have winding numbers $(n_a^i,m_a^i)$ along the $x_i$ and $y_i$ directions on $T^2_i$, where we use orthogonal coordinates $(x_i,y_i)$ on $T^2_i$. When we denote the basis of one-cycles on $T^2_i$ by $[a_i]$ and $[b_i]$, which correspond to the $x_i$ and $y_i$ directions, the three-cycle, along which this set of D6-branes winds, is represented by \begin{eqnarray} [\Pi_a] = \prod^3_{i=1} (n^i_a [a_i] + m^i_a[b_i]). \end{eqnarray} Here, we consider two sets of D-branes, one consisting of $N_a$ stacks of D6-branes and the other of $N_b$ stacks of D6-branes. These lead to $U(N_a)\times U(N_b)$ gauge groups. Suppose that these two stacks of D6-branes intersect each other on $T^2_i$. Their intersecting number on $T^2_i$ is obtained by \begin{eqnarray} I_{ab}^{(i)}= (n^i_a m_b^i - m_a^in_b^i), \end{eqnarray} and their total intersecting number on $T^6$ is obtained by \begin{eqnarray} [\Pi_a]\cdot[\Pi_b] = I_{ab} = \prod_{i=1}^3 I_{ab}^{(i)}. \end{eqnarray} Then, chiral matter fields with bi-fundamental representations $(N_a,\bar N_b)_{(1,-1)}$ under $U(N_a)\times U(N_b)$ appear at the intersecting points on $T^2_i$, where the index $(1,-1)$ denotes $U(1)^2$ charges inside $U(N_a)$ and $U(N_b)$. There appear $I_{ab}$ families of bi-fundamental matter fields. When $I_{ab}$ is negative, there appear $|I_{ab}|$ families of matter fields with the conjugate representation $(\bar N_a, N_b)_{(-1,1)}$. The total flavor symmetry is a direct product of the flavor symmetries appearing on each of the $T^2_i$. Thus, we concentrate on the flavor symmetry realized on one of the $T^2_i$. Then, we denote $I_{ab}^{(i)}=g$.
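As a simple illustration (the winding numbers here are chosen arbitrarily), a pair of branes with $(n_a^i,m_a^i)=(1,0)$ and $(n_b^i,m_b^i)=(1,3)$ on $T^2_i$ gives \begin{eqnarray} I_{ab}^{(i)}= n^i_a m_b^i - m_a^i n_b^i = 1\cdot 3 - 0\cdot 1 =3, \end{eqnarray} i.e. $g=3$ on this torus, with the three families localized at the three intersecting points.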
These modes on $T^2_i$ have definite $Z_g$ charges and the $Z_g$ transformation is represented by \begin{eqnarray} Z = \left( \begin{array}{ccccc} 1 & & & & \\ & \rho & & & \\ & & \rho^2 & & \\ & & & \ddots & \\ & & & & \rho^{g-1} \end{array} \right), \end{eqnarray} where $\rho = e^{2\pi i/g}$. In addition, there is a cyclic permutation symmetry $Z_g^{(C)}$ among these modes, i.e. \begin{eqnarray} C= \left( \begin{array}{cccccc} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ & & & & \ddots & \\ 1 & & & & \cdots & 0 \end{array} \right). \end{eqnarray} Furthermore, these elements do not commute with each other, \begin{eqnarray} CZ = \rho Z C. \end{eqnarray} Thus, this flavor symmetry includes another $Z_g'$ symmetry, which is represented by \begin{eqnarray} Z' = \left( \begin{array}{ccc} \rho & & \\ & \ddots & \\ & & \rho \end{array} \right). \end{eqnarray} Then, these would generate the non-Abelian flavor symmetry, $(Z_g \times Z_g') \rtimes Z_g^{(C)}$. For example, when $g=2$ and $g=3$, the symmetries correspond to $D_4$ and $\Delta(27)$. In addition, when the whole D-brane system has the $Z_2$ reflection symmetry $P$ between the $i$-th mode and the $(g-i)$-th mode, the $\Delta(27)$ symmetry for $g=3$ is enhanced into $\Delta(54)$ \cite{Abe:2009vi}. Similarly, we can discuss models with more than two sets of D-branes. For example, suppose that we add $N_c$ stacks of D-branes to the above system, and that their intersecting numbers satisfy $G.C.D.(I_{ab}^{(i)},I_{ac}^{(i)},I_{bc}^{(i)})=d$. Then, this model has the discrete flavor symmetry $(Z_d \times Z_d') \rtimes Z_d^{(C)}$. The above result is applicable to intersecting D-brane models on orientifolds through a simple extension. Also, we can extend our discussions to orbifold cases \cite{Abe:2009vi,Abe:2009uz,BerasaluceGonzalez:2012vb}. Since magnetized D-brane models are T-dual to intersecting D-brane models, the magnetized D-brane models also have the same discrete flavor symmetries. For example, we start with $(N_a + N_b)$ stacks of D9-branes on $T^6$. Then, we introduce the magnetic flux on $T_i^2$ along the $U(1)_a$ and $U(1)_b$ directions in $U(N_a + N_b)$ as \begin{eqnarray} F^{(i)} = 2\pi \left( \begin{array}{cccccc} M_a^{(i)} & & & & & \\ & \ddots & & & & \\ & & M_a^{(i)} & & & \\ & & & M_b^{(i)} & & \\ & & & & \ddots & \\ & & & & & M_b^{(i)} \\ \end{array} \right), \end{eqnarray} where $M_a^{(i)}$ and $M_b^{(i)}$ are integers. This magnetic flux background breaks the gauge group $U(N_a + N_b)$ into $U(N_a) \times U(N_b)$. The gaugino fields in the off-diagonal part correspond to the $(N_a,\bar N_b)$ bi-fundamental matter fields under the unbroken $U(N_a) \times U(N_b)$ gauge symmetry. Zero-modes with such a representation appear in this model, and the number of zero-modes on $T^2_i$ is equal to $M_a^{(i)}-M_b^{(i)}$. When we denote $M_a^{(i)}-M_b^{(i)}=g$, this magnetized D-brane model leads to the same discrete flavor symmetry, $(Z_g \times Z_g')\rtimes Z_g^{(C)}$, as the above intersecting D-brane model. \section{Discrete anomalies} In this section, we study anomalies of discrete flavor symmetries. \subsection{$U(1)$ anomalies} Before studying anomalies of discrete flavor symmetries, it is useful to review anomalies of $U(1)$ gauge symmetries. In this subsection, we give a brief review of $U(1)$ anomalies \cite{Aldazabal:2000dg,Cvetic:2001nr} (see also \cite{Blumenhagen:2006ci,Ibanez}). First of all, we consider the torus compactification. A D6-brane carries a charge of the RR 7-form $C_7$.
The total charge should vanish in a compact space. That leads to the following tadpole cancellation condition, \begin{eqnarray} \sum_a N_a [\Pi_a] =0. \end{eqnarray} The $SU(N_a)^3$ anomaly coefficient is calculated in the intersecting D-brane models by \begin{eqnarray} A_a = \sum_b I_{ab} N_b, \end{eqnarray} because there are $I_{ab}$ matter fields with $(N_a,\bar N_b)_{(1,-1)}$ for $I_{ab} >0$ and $|I_{ab}|$ matter fields with $(\bar N_a,N_b)_{(-1,1)}$ for $I_{ab} <0$. However, the tadpole cancellation condition leads to \begin{eqnarray} [\Pi_a]\cdot \sum_b N_b [\Pi_b] = 0. \end{eqnarray} That implies that $A_a = 0$, that is, the theory is anomaly-free. The $U(1)_a \times SU(N_b)^2$ mixed anomaly coefficient is obtained by \begin{eqnarray} A_{ab} = N_a I_{ab}. \end{eqnarray} This anomaly is not always vanishing. However, this anomaly can always be canceled by the Green-Schwarz mechanism, where an axion shifts under the $U(1)$ gauge transformation and the anomalous U(1) gauge boson becomes massive. The $U(1)$-gravity$^2$ anomaly coefficient is obtained by \begin{eqnarray} A_{a-{\rm grav}} = N_a \sum_b I_{ab}N_b. \end{eqnarray} This anomaly always vanishes when the tadpole cancellation condition is satisfied. Next, we review anomalies for the orientifold compactification. That is, we introduce $O6$-branes along the direction $\prod_i [a_i]$. The system must be symmetric under the $Z_2$ reflection, $y_i \rightarrow -y_i$. In this case, we have to introduce mirror $D6_{a'}$-branes with the winding numbers $(n_a^i,-m_a^i)$ corresponding to $(n_a^i,m_a^i)$. The O6-brane carries $(-4)$ times the RR charge of a D6-brane. Then, the RR-tadpole cancellation condition requires \begin{eqnarray} \sum_a N_a([\Pi_a] + [\Pi_{a'}]) - 4 [\Pi_{O6}] =0. \end{eqnarray} Taking the intersection product of this condition with $[\Pi_b]$ and using $[\Pi_b]\cdot[\Pi_b]=0$, one also finds \begin{eqnarray} \sum_{a \neq b}N_a [\Pi_b]\cdot ([\Pi_a] + [\Pi_{a'}]) + N_b [\Pi_b]\cdot [\Pi_{b'}] - 4[\Pi_b]\cdot [\Pi_{O6}] =0. \end{eqnarray} In addition to $I_{ab}$ families of $(N_a,\bar N_b)_{(1,-1)}$ matter fields, there appear $I_{ab'}$ families of $(N_a,N_b)_{(1,1)}$ matter fields. Moreover, there appear matter fields with symmetric and antisymmetric representations under $U(N_a)$ with charge 2. Their numbers are obtained by \begin{eqnarray} & & \#_{a, {\rm asymm}} = \frac12 ([\Pi_a] \cdot [\Pi_{a'}]- [\Pi_a]\cdot [\Pi_{O6}]) +[\Pi_a]\cdot [\Pi_{O6}], \\ & & \#_{a, {\rm symm}} = \frac12 ([\Pi_a] \cdot [\Pi_{a'}]- [\Pi_a]\cdot [\Pi_{O6}]). \end{eqnarray} In this case, we can show that the $SU(N_a)^3$ anomaly coefficient always vanishes when the RR-tadpole cancellation condition is satisfied, similarly to the torus compactification. Also, the $U(1)_a-SU(N_b)^2$ anomaly coefficient is not always vanishing, but such an anomaly can be canceled by the Green-Schwarz mechanism. Finally, the $U(1)_a-$gravity$^2$ anomaly coefficient is obtained by \begin{eqnarray} A_{a-{\rm grav}} &=& \sum_{b \neq a} N_a N_b ([\Pi_{a}]\cdot [\Pi_b] + [\Pi_a]\cdot [\Pi_{b'}]) +2 \frac{N_a(N_a-1)}{2} \#_{a ,{\rm asymm}} \nonumber \\ & & +2 \frac{N_a(N_a+1)}{2} \#_{a ,{\rm symm}} \nonumber \\ &=& 3N_a[\Pi_a]\cdot[\Pi_{O6}]. \end{eqnarray} This does not always vanish, but such an anomaly can be canceled by the Green-Schwarz mechanism.
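To illustrate how the mirror branes enter these formulas (again with arbitrarily chosen winding numbers), take on some $T^2_i$ the branes $(n_a^i,m_a^i)=(1,1)$ and $(n_b^i,m_b^i)=(1,-2)$, whose mirrors are $(n_{a'}^i,m_{a'}^i)=(1,-1)$ and $(n_{b'}^i,m_{b'}^i)=(1,2)$; then \begin{eqnarray} I_{ab}^{(i)} = 1\cdot(-2)-1\cdot 1 = -3, \qquad I_{ab'}^{(i)} = 1\cdot 2 - 1\cdot 1 = 1, \end{eqnarray} so that on this torus the $ab$ sector contributes a factor $-3$ and the $ab'$ sector a factor $+1$ to the corresponding total intersecting numbers.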
\subsection{Discrete anomalies} In the gauge theory with the gauge group $G$ and the Abelian discrete symmetry $Z_N$, the $Z_N-G^2$ mixed anomaly coefficient is calculated by \cite{Ibanez:1991hv,Banks:1991xj,Araki:2008ek,Ishimori:2010au}, \begin{eqnarray} A_{Z_N-G^2} =\sum_m q^{(m)} T_2({\bf R}^{(m)}), \end{eqnarray} where the summation over $m$ is taken over fermions with $Z_N$ charges $q^{(m)}$ and the representation ${\bf R}^{(m)}$ under $G$. Here, $T_2({\bf R}^{(m)})$ denotes the Dynkin index and we use the normalization such that $T_2=1/2$ for the fundamental representation of $SU(N)$. When the following condition is satisfied \cite{Ibanez:1991hv,Banks:1991xj,Araki:2008ek,Ishimori:2010au}, \begin{eqnarray} \sum_m q^{(m)} T_2({\bf R}^{(m)}) =0 ~~({\rm mod}~~N/2), \end{eqnarray} the $Z_N$ symmetry is anomaly-free. Similarly, we can calculate the $Z_N$-gravity$^2$ anomaly coefficient by ${\rm Tr}\, q^{(m)}$. If ${\rm Tr}\, q^{(m)} = 0$ (mod $N/2$), $Z_N$ is anomaly-free. For example, a $Z_2$ symmetry is always anomaly-free. Each generator of non-Abelian discrete symmetries corresponds to an Abelian symmetry. Thus, if each Abelian generator of the non-Abelian discrete flavor symmetry satisfies the above anomaly-free condition, the total non-Abelian symmetry is anomaly-free. When some discrete Abelian symmetries are anomalous, the total non-Abelian discrete symmetry is broken, and the subgroup, which does not include anomalous generators, remains unbroken. In the non-Abelian discrete symmetry, there appear multiplets and each generator is represented by a matrix, $M$. When $\det M=1$, the corresponding Abelian discrete symmetry is always anomaly-free. Only multiplets with $\det M \neq 1$ can contribute to anomalies. Since we have $\det Z' = 1$, the corresponding $Z_g'$ symmetry is always anomaly-free. On the other hand, we find $\det Z = \det C= 1$ for $g=$ odd and $\det Z = \det C= -1$ for $g=$ even. That means that the discrete flavor symmetry $(Z_g \times Z_g') \rtimes Z_g^{(C)}$ is always anomaly-free for $g=$ odd, but $Z_g$ and $Z_g^{(C)}$ can be anomalous for $g=$ even. In particular, only their $Z_2$ parts can be anomalous. One has to check the anomaly-free condition for such $Z_2$ parts of $Z_g$ and $Z_g^{(C)}$. For example, the $\Delta(27)$ flavor symmetry for $g=3$ is always anomaly-free. However, the $Z_2$ subgroups of $D_4$ for $g=2$ corresponding to the following elements, \begin{eqnarray} \left( \begin{array} {cc} 1 & 0 \\ 0 & -1 \end{array} \right),\qquad \left( \begin{array} {cc} 0 & 1 \\ 1 & 0 \end{array} \right), \end{eqnarray} can be anomalous. First, we discuss the torus compactification. For simplicity, we concentrate on the flavor symmetry appearing on the first torus $T^2_1$ and we assume that all of the intersecting numbers on $T^2_1$, $I_{ab}^{(1)}$, are even. Thus, the total flavor symmetry includes the $Z_2$ symmetry as well as $Z_2^{(C)}$, which can be anomalous. Also, we assume that there appears a trivial symmetry from the other tori $T^2_2 \times T^2_3$. Now, let us examine the $Z_2-SU(N_a)^2$ anomaly. There are $I_{ab}$ bi-fundamental matter fields with the representation $(N_a,\bar N_b)$. Half of the $I_{ab}$ matter fields have even $Z_2$ charge and the others have odd $Z_2$ charge. The anomaly coefficient of the $Z_2-SU(N_a)^2$ anomaly can be written as \begin{eqnarray} \sum_b \frac{I_{ab}}{2} N_b \frac12. \end{eqnarray} It vanishes because of the tadpole cancellation condition, $\sum_b I_{ab}N_b=0$. Thus, this $Z_2$ symmetry is anomaly-free on the torus compactification.
Since only this $Z_2$ symmetry can be anomalous and the others are always anomaly-free, the non-Abelian flavor symmetries $(Z_g \times Z_g') \rtimes Z_g^{(C)}$ are always anomaly-free in the torus compactification. Next, we study the orientifold compactification. Similarly, we can calculate the $Z_2-SU(N_a)^2$ anomaly coefficient, \begin{eqnarray} & & \sum_{b \neq a} \left( \frac{I_{ab}}{2} N_b + \frac{I_{ab'}}{2} N_{b'}\right) \frac12 + \frac{N_a-2}{4} \#_{a, {\rm asymm}} + \frac{N_a+2}{4} \#_{a, {\rm symm}} \nonumber \\ &=& {\frac{[\Pi_a]\cdot[\Pi_{O6}]}{2}}. \end{eqnarray} That is not always vanishing, but it is proportional to the $U(1)_a$-grav$^2$ anomaly. Thus, this anomaly could be canceled when one requires the axion shift under the $Z_2$ transformation, which is related to the axion shift under $U(1)_a$. In addition, when the D6$_a$-branes are parallel to the O6-branes, the $Z_2-SU(N_a)^2$ anomaly coefficient always vanishes. \section{Majorana neutrino masses} In the previous section, we have studied anomalies of discrete flavor symmetries. Certain symmetries are anomaly-free. For example, the $\Delta(27)$ flavor symmetry is anomaly-free. Anomalous symmetries can be broken by non-perturbative effects. There is no guarantee that anomaly-free symmetries are not broken by stringy non-perturbative effects. In this section, we consider D-brane instanton effects as concrete non-perturbative effects. We study which form of the right-handed Majorana neutrino mass matrix can be generated by D-brane instanton effects. Indeed, following \cite{Blumenhagen:2006xt,Blumenhagen:2009qh,Ibanez:2006da}, we study the sneutrino mass matrix assuming that the neutrino mass matrix has the same form and supersymmetry breaking effects are small. \subsection{Neutrino mass matrix} Here, we study right-handed Majorana neutrino masses, which can be generated by D-brane instanton effects. We assume that $g$ families of right-handed neutrinos $\nu_R^a$ appear at intersections between the D6$_c$-branes and D6$_d$-branes, and that their intersecting numbers are equal to $I_{cd}^{(i)} = g$ for the $i$-th $T^2$ and $I_{cd}^{(j)} =1$ for the other tori. For the moment, let us concentrate on the three-generation model, $I_{cd}=3$, which can be obtained by $(I^{(1)}_{cd},I^{(2)}_{cd},I^{(3)}_{cd}) = (\underline{3,1,1})$, where the underline denotes all the possible permutations. We consider a D2-brane instanton, which wraps a one-cycle of each $T^2$ of $T^6=T^2\times T^2\times T^2$. We call it the $D2_M$-brane. It intersects the D6$_{c}$-brane and the D6$_{d}$-brane. At these intersecting points, zero-modes $\alpha_i$ and $\gamma_j$ appear and their numbers are obtained by $I_{Mc}$ and $I_{dM}$. Only if there are two zero-modes for each of $\alpha_i$ and $\gamma_j$ can the neutrino masses be generated by the D2-brane instanton effect \cite{Blumenhagen:2006xt,Blumenhagen:2009qh,Ibanez:2006da}, \begin{eqnarray} & & M\int d^2\alpha d^2 \gamma ~e^{-d^{ij}_a \alpha_i \nu_R^a \gamma_j} = Mc_{ab}, \nonumber \\ & & c_{ab} = \nu^a_R \nu^b_R (\varepsilon_{ij} \varepsilon_{k \ell}d^{ik}_a d^{j \ell}_b), \end{eqnarray} where the mass scale $M$ would be determined by the string scale $M_{st}$ and the instanton world volume $V$ as $M = M_{st}e^{-V}$. Here, $d^{ij}_a$ is the 3-point coupling coefficient among $\alpha_i$, $\nu_R^a$ and $\gamma_j$ \cite{Cvetic:2003ch}, which we show explicitly in the next subsection.
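To make the origin of this expression transparent (we display only the zero-mode algebra, treating $\nu_R^a$ as commuting background fields and dropping overall numerical factors, which can be absorbed into $M$), note that for the Grassmann bilinear $B \triangleq d^{ij}_a \alpha_i \nu_R^a \gamma_j$ the exponential terminates, $e^{-B}=1-B+\frac12 B^2$, and only the $B^2$ term saturates the four zero-modes. Using $\alpha_i\alpha_j = \varepsilon_{ij}\,\alpha_1\alpha_2$ and $\gamma_k\gamma_\ell = \varepsilon_{k\ell}\,\gamma_1\gamma_2$, one finds \begin{eqnarray} \int d^2\alpha d^2 \gamma ~e^{-d^{ij}_a \alpha_i \nu_R^a \gamma_j} \propto \varepsilon_{ij} \varepsilon_{k \ell}\, d^{ik}_a d^{j \ell}_b\, \nu^a_R \nu^b_R, \end{eqnarray} which is precisely the structure of $c_{ab}$ above; the lower-order terms of the expansion do not saturate the zero-modes and integrate to zero.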
The 3-point coupling coefficient $d_a^{ij}$ can be written as $d_a^{ij}=d_{a1}^{ij}d_{a2}^{ij}d_{a3}^{ij}$, where $d_{ak}^{ij}$ for $k=1,2,3$ is the contribution from the $k$-th torus. In addition, when $\alpha_i$, $\gamma_j$, or $\nu^a$ are localized at a single intersecting point on the $k$-th torus, we omit the indexes such as $d_{ak}^{j}$, $d_{ak}^{i}$, or $d_{k}^{ij}$. We have to take into account all of the possible $D2_M$-brane configurations, which can generate the above neutrino mass terms. One can obtain two zero-modes of $\alpha_i$ and $\gamma_j$ for the $D2_M$-brane set corresponding to an Sp(2) or U(2) gauge group with the intersecting numbers $|I_{Mc}|=|I_{Md}|=1$ \cite{Ibanez:2007rs} or for a single $D2_M$-brane with the intersecting numbers $|I_{Mc}|=|I_{Md}|=2$. When the $D2_M$-brane set corresponds to the Sp(2) or U(2) brane, the zero-modes, $\alpha_i$ and $\gamma_j$, are doublets and gauge invariance allows only certain couplings, say between $\alpha_i$ and $\gamma_i$, but not between $\alpha_i$ and $\gamma_j$ for $i \neq j$. When $I_{Mc}=I_{dM}=1$, the following form of the Majorana mass is generated, \begin{eqnarray} \int d^2\alpha d^2 \gamma ~e^{-d^{11}_a \alpha_1 \nu_R^a \gamma_1- d^{22}_a \alpha_2 \nu_R^a \gamma_2} = \nu^a_R \nu^b_R d^{11}_a d^{22}_b. \end{eqnarray} More explicitly, the following form of the mass matrix is obtained \cite{Ibanez:2007rs}, \begin{eqnarray} M c_{ab} = \left( \begin{array}{ccc} d_{1}^{11} d_{1}^{22} & d_{1}^{11}d_{2}^{22} &d_{1}^{11}d_{3}^{22} \\ d_{2}^{11} d_{1}^{22} & d_{2}^{11}d_{2}^{22} &d_{2}^{11}d_{3}^{22} \\ d_{3}^{11} d_{1}^{22} & d_{3}^{11}d_{2}^{22} &d_{3}^{11}d_{3}^{22} \\ \end{array} \right). \end{eqnarray} This Majorana mass matrix has rank one. However, we have to take into account all of the $D2_M$-brane configurations, that is, the positions of the $D2_M$-brane sets. Thus, we integrate over the position of the $D2_M$-brane sets. Such an integration over the $D2_M$-brane position would recover the cyclic permutation symmetry, $Z_{g=3}^{(C)}$. Then, we would obtain the following form of the Majorana neutrino mass matrix, \begin{eqnarray}\label{eq:n-mass-33} M = \left( \begin{array}{ccc} A & B & B\\ B & A & B\\ B & B & A\\ \end{array} \right). \end{eqnarray} We will show this form by an explicit calculation in the next subsection. As a result, there remains the cyclic permutation symmetry, $Z_{g=3}^{(C)}$, unbroken, but the $Z_{g=3}$ and $Z_{g=3}'$ symmetries are broken by D-brane instanton effects, which generate the Majorana neutrino masses. This form also has the $Z_2$ reflection symmetry $P$. Thus, if the full D-brane system has the $Z_2$ reflection symmetry, the symmetry is enhanced into $S_3$. Similarly, we can study a single $D2_M$-brane with the intersecting numbers $|I_{Mc}|=|I_{Md}|=2$. There are two types of $D2_M$-brane instanton configurations leading to $|I_{Mc}|=|I_{Md}|=2$. In one type, we have the configuration with $|I_{Mc}^{(j)}| = |I_{Md}^{(k)}| =2$ for $j \neq k$, and in the other type we have the configuration with $|I_{Mc}^{(j)}| = |I_{Md}^{(j)}| =2$. In the first case with $|I_{Mc}^{(j)}| = |I_{Md}^{(k)}| =2$ for $j \neq k$, let us set e.g. $j=1$ and $k=2$. Then, the Yukawa coupling $d^{ij}_a$ can be written as $d^{ij}_a=d^i_{a1}d^j_{a2}d_{a3}$. Also we assume that $I_{cd}^{(1)}=3$ and $I_{cd}^{(2)}=I_{cd}^{(3)}=1$. Then, the neutrino mass can be written as \begin{equation} \varepsilon_{ij} \varepsilon_{k\ell} d_{a}^{ik} d_{b}^{j\ell}=\varepsilon_{ij} \varepsilon_{k\ell} d_{a1}^{i}d_{2}^k d_{3} d_{b1}^{j}d_{2}^\ell d_{3}.
\end{equation} However, this vanishes identically \cite{Ibanez:2006da}. We obtain the same result for $|I_{Mc}^{(j)}| = |I_{Md}^{(k)}| =2$ with $j \neq k$, when $(I^{(1)}_{cd},I^{(2)}_{cd},I^{(3)}_{cd}) = (\underline{3,1,1})$. On the other hand, if a single $D2_M$-brane configuration with $|I_{Mc}^{(j)}| = |I_{Md}^{(j)}| =2$ is possible, we obtain the non-vanishing neutrino mass matrix $Mc_{ab}$. Then, when we integrate over the position of the $D2_M$-brane instanton, we would obtain the same results as Eq.(\ref{eq:n-mass-33}). Thus, the cyclic permutation symmetry $Z_{g=3}^{(C)}$ is recovered. This result can be extended to models with $g$ flavors of neutrinos. When we take into account all of the possible D-brane instanton configurations, we would realize the neutrino mass matrix $Mc_{ab}$ with the cyclic permutation symmetry $Z_g^{(C)}$, i.e. \begin{eqnarray} c_{ab}=c_{a'b'} {\rm~~~for~~~}a'=a+1,~b'=b+1. \end{eqnarray} Also the mass matrix is symmetric, i.e. $c_{ab} = c_{ba}$. For example, we obtain \begin{eqnarray} c_{ab}=\left( \begin{array}{cc} A & B \\ B & A \end{array} \right), \end{eqnarray} for $g=2$ and \begin{eqnarray} c_{ab}=\left( \begin{array}{cccc} A & B & B' & B\\ B & A & B & B'\\ B' & B & A & B \\ B & B' & B & A \end{array} \right), \end{eqnarray} for $g=4$. It is found that the D-brane instantons break $Z_g'$ into $Z_2$ if $g$ is even. Otherwise, the $Z_g'$ symmetry as well as the $Z_g$ symmetry is completely broken. However, the cyclic permutation symmetry remains.\footnote{These forms also have the $Z_2$ reflection symmetry.} We have studied the neutrino mass matrix by assuming that the neutrino and sneutrino have the same mass matrix and supersymmetry breaking effects are small \cite{Blumenhagen:2006xt,Blumenhagen:2009qh,Ibanez:2006da}. The important point in deriving our result is the cyclic permutation symmetry. Thus, we would obtain the same result if the D-brane instantons do not break such a symmetry but supersymmetry is broken. \subsection{Explicit computation} Here, we discuss the Majorana neutrino mass matrix by computing explicitly the three-generation model. We consider the D2-brane instanton corresponding to the Sp(2) or U(2) gauge symmetry. Suppose that the D6$_c$- and D6$_d$-branes have the intersecting number $I_{cd}=3$, and at the three intersecting points there appear three generations of right-handed neutrinos. We set $(I^{(1)}_{cd},I^{(2)}_{cd},I^{(3)}_{cd})=(3,1,1)$, and $I_{Mc}=I_{dM}=1$. Because the right-handed neutrinos are localized at different points from each other on the first torus, only the first torus is important for the flavor symmetry. Thus, we concentrate on the first torus for the computations of Yukawa couplings and Majorana masses. We also omit the index corresponding to the $k$-th torus. In the following computations, we set the Wilson line moduli to zero because they do not affect the flavor structure (\red{see} Appendix A for more details). There are three generations of $\nu_a$ and we label their flavor index as $a=0,1,2$. Also, there are two zero-modes, $\alpha_i$ and $\gamma_j$ $(i,j=1,2)$, but note that these indexes $i,j$ correspond to the doublets under $Sp(2)$ or $U(2)$ and the intersecting numbers $I_{cM}$ and $I_{Md}$ are equal to one, $I_{cM}=I_{Md}=1$. Suppose that there are three fields $\phi_a$, $\chi_{i'}$ and $\chi_{j'}$ with the ``flavor numbers'' $a=0,\cdots,I_{cd}-1$, $i'=0,\cdots,I_{dM}-1$, and $j'=0,\cdots,I_{Mc}-1$, where $I_{cd}$, $I_{dM}$, and $I_{Mc}$ are the corresponding intersecting numbers on the torus.
In this case, the 3-point couplings $d_a^{i'j'}$ among three fields can be calculated by \cite{Cvetic:2003ch} \begin{equation}d_{a}^{i'j'}=C \sum_{\ell \in {Z}} {\rm exp}\left( \frac{-A_{ai'j'}(\ell)}{2\pi \alpha'} \right), \end{equation} where $C$ is a flavor-independent constant due to quantum contributions and \begin{equation} A_{ai'j'}(\ell)=\frac{1}{2} A |I_{cd} I_{dM} I_{Mc}|\left( \frac{a}{I_{cd}} + \frac{i'}{I_{dM}}+ \frac{j'}{I_{Mc}}+\frac{\varepsilon}{I_{dM}I_{Mc}} +\ell \right)^2, \end{equation} and $A$ denotes the area of the first torus. Here, $\varepsilon$ denotes the position of $D2_M$-brane on the first torus and we normalize $\varepsilon$ such that $\varepsilon$ varies $[0,1]$ on the torus. Note that this coupling corresponds to the contribution on the first torus, which determines the flavor structure, but we have omitted the index corresponding to the first torus. By using the $\vartheta$-function, \begin{equation} \vartheta \left[ \begin{array}{c} a \\ b \\ \end{array}\right] (\nu,\tau) = \sum_{\ell \in {Z}} {\rm exp} \left[ \pi i (a+\ell)^{2} \tau + 2 \pi i (a+\ell)(\nu + b) \right], \end{equation} we can write \begin{equation} d_{a}^{i'j'} = C \vartheta \left[ \begin{array}{c} \frac{a}{I_{cd}} + \frac{i'}{I_{dM}} + \frac{j'}{I_{Mc}}+\frac{\varepsilon}{I_{dM} I_{Mc}} \\ 0 \\ \end{array} \right] \left( 0,\frac{iA\red{|}I_{cd} I_{dM} I_{Mc}|}{4\pi^{2}\alpha'} \right) . \label{eq:3pcoupling} \end{equation} Our model corresponds to $a=0,1,2$, $I_{cd}=3$, $i'=j'=0$, $I_{dM}=I_{Mc}=1$. In the above model, the 3-point couplings among $\nu_a$, $\alpha_i$, and $\gamma_j$ are written by \begin{equation} d_{a}^{ij} = \delta_{ij} \vartheta \left[ \begin{array}{c} -\frac{a}{3} + \varepsilon \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{4\pi^{2}\alpha'} \right). \end{equation} Recall again that the indexes $i$ and $j$ of $\alpha_i$ and $\gamma_j$ are doublet indexes under Sp(2) or U(2). Using this, the matrix $c_{ab}$ is written by the integration of the position $\varepsilon$ over $[0,1]$, \begin{eqnarray}c_{ab} & = & \int_{0}^{1} d\varepsilon \vartheta \left[ \begin{array}{c} -\frac{a}{3} + \varepsilon \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{4\pi^{2}\alpha'} \right) \vartheta \left[ \begin{array}{c} -\frac{b}{3} + \varepsilon \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{4\pi^{2}\alpha'} \right) \nonumber \\ & = & \int_{0}^{1} d\varepsilon \sum_{m=1}^{2} \vartheta \left[ \begin{array}{c} -\frac{a}{6} - \frac{b}{6} + \varepsilon + \frac{m}{2}\\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) \\ & & \times \vartheta \left[ \begin{array}{c} -\frac{a}{6} +\frac{b}{6} +\frac{m}{2} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right). \nonumber \end{eqnarray} We obtain \begin{eqnarray} && \int_{0}^{1} d\varepsilon \vartheta \left[ \begin{array}{c} -\frac{a}{3} + \varepsilon + \frac{m}{2}\\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) \nonumber \\ & & = \int_{0}^{1} d\varepsilon \sum_{l \in {Z}} {\rm exp}\left[\pi i (-a/3 + \varepsilon +m/2 +\ell)^{2}\left(\frac{3 i A}{2 \pi^{2} \alpha'} \right) \right] \nonumber \\ & &= \int_{-\infty}^{\infty} dx {\rm exp} \left[-\frac{3A}{2\pi \alpha'} ( x - a/3 +m/2 )^{2} \right]\\ & & = \sqrt{\frac{2\pi^{2}\alpha'}{3A}}. \nonumber \end{eqnarray} Using it, the matrix elements $c_{ab}$ can be computed as follows. 
It is found that the diagonal elements $c_{aa}$ do not depend on $a$ and they are written as \begin{equation} c_{aa} = \sqrt{\frac{2\pi^{2}\alpha'}{3A}} \left(\vartheta \left[ \begin{array}{c} 0 \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) + \vartheta \left[ \begin{array}{c} \frac{1}{2} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) \right). \end{equation} Similarly, the off-diagonal elements are written as \begin{equation} c_{01} = \sqrt{\frac{2\pi^{2}\alpha'}{3A}} \left(\vartheta \left[ \begin{array}{c} \frac{1}{6} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) + \vartheta \left[ \begin{array}{c} \frac{2}{3} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) \right), \end{equation} \begin{equation} c_{02} = \sqrt{\frac{2\pi^{2}\alpha'}{3A}} \left(\vartheta \left[ \begin{array}{c} \frac{1}{3} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) + \vartheta \left[ \begin{array}{c} \frac{5}{6} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) \right), \end{equation} \begin{equation} c_{12} = \sqrt{\frac{2\pi^{2}\alpha'}{3A}} \left(\vartheta \left[ \begin{array}{c} \frac{1}{6} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) + \vartheta \left[ \begin{array}{c} \frac{2}{3} \\ 0 \\ \end{array} \right] \left( 0,\frac{3iA}{2\pi^{2}\alpha'} \right) \right). \end{equation} However, we have the following formulas for the $\vartheta$-function \begin{equation}\vartheta \left[ \begin{array}{c} a \\ b \\ \end{array} \right] ( \nu ,\tau) = \vartheta \left[ \begin{array}{c} a +1 \\ b \\ \end{array} \right] (\nu,\tau), \end{equation} \begin{equation}\vartheta \left[ \begin{array}{c} -a \\ 0 \\ \end{array} \right] ( 0 ,\tau) = \vartheta \left[ \begin{array}{c} a \\ 0 \\ \end{array} \right] (0,\tau). \end{equation} Then, we see that all of the off-diagonal elements are the same, \begin{eqnarray} c_{01}=c_{12}=c_{20}. \end{eqnarray} That is, we can realize the form (\ref{eq:n-mass-33}) by explicit calculations. Figure~\ref{fig:B/A} shows the ratio $B/A=c_{12}/c_{aa}$ in (\ref{eq:n-mass-33}) by varying the area $3A/2\pi^2\alpha'$. \begin{figure}[thbp] \begin{center} \epsfig{file=Seigeltheta.eps,scale=0.6} \end{center} \caption{$B/A$ vs. $3A/2\pi^2 \alpha'$} \label{fig:B/A} \end{figure} \subsection{Phenomenological implication} Here we discuss the phenomenological implications of our result. The Majorana mass matrix with the form (\ref{eq:n-mass-33}) can be diagonalized by the following matrix, \begin{eqnarray}\label{eq:mixing} \left( \begin{array}{ccc} \sqrt{2/3}c & 1/\sqrt{3} & -\sqrt{2/3}s \\ -1/\sqrt{6}c-1/\sqrt{2}s & 1/\sqrt{3} & 1/\sqrt{6}s-1/\sqrt{2}c \\ -1/\sqrt{6}c+1/\sqrt{2}s & 1/\sqrt{3} & 1/\sqrt{6}s +1/\sqrt{2}c \\ \end{array} \right), \end{eqnarray} where $c=\cos \theta$ and $s= \sin \theta$, and the eigenvalues are $A-B$, $A+2B$ and $A-B$. That is, two eigenvalues are degenerate. This is because the mass matrix (\ref{eq:n-mass-33}) has the additional $Z_2$ reflection symmetry $P$ and the symmetry is enhanced into $S_3$. At any rate, this form of the mixing matrix is interesting, although the mass eigenvalues may not be completely realistic.
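As a quick numerical cross-check of this diagonalization (a sketch of ours, with arbitrary illustrative values of $A$, $B$ and $\theta$), the following Python snippet verifies that the matrix (\ref{eq:mixing}) is orthogonal and brings the form (\ref{eq:n-mass-33}) to ${\rm diag}(A-B,\,A+2B,\,A-B)$ for any $\theta$:
\begin{verbatim}
import numpy as np

A, B, theta = 1.0, 0.3, 0.2          # illustrative values only
M = np.array([[A, B, B], [B, A, B], [B, B, A]])

c, s = np.cos(theta), np.sin(theta)
U = np.array([
    [np.sqrt(2/3)*c,               1/np.sqrt(3), -np.sqrt(2/3)*s],
    [-c/np.sqrt(6) - s/np.sqrt(2), 1/np.sqrt(3),  s/np.sqrt(6) - c/np.sqrt(2)],
    [-c/np.sqrt(6) + s/np.sqrt(2), 1/np.sqrt(3),  s/np.sqrt(6) + c/np.sqrt(2)],
])

print(np.allclose(U.T @ U, np.eye(3)))   # True: the mixing matrix is orthogonal
print(np.round(U.T @ M @ U, 10))         # diag(A-B, A+2B, A-B) for any theta
\end{verbatim}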
Suppose that the Dirac neutrino Yukawa couplings and the charged lepton mass matrix are almost diagonal.\footnote{The $\Delta(27)$ flavor symmetry as well as the $\Delta(54)$ flavor symmetry may be useful to realize such a form.} Then, the lepton mixing matrix is obtained as the above matrix (\ref{eq:mixing}). That is the trimaximal matrix. When $s=0$, the above matrix becomes the tri-bimaximal mixing matrix. In field-theoretical model building, the tri-bimaximal mixing matrix can be obtained as follows \cite{Altarelli:2010gt,Ishimori:2010au,King:2013eh}. We start with a larger flavor symmetry and break it by vacuum expectation values of scalar fields. However, one assumes that $Z_3$ and $Z_2$ subsymmetries remain in the charged lepton or neutrino mass terms. Then, the tri-bimaximal mixing matrix can be realized. In our string theory, such a $Z_3$ symmetry is realized by the geometrical symmetry of the cyclic permutation $Z_3^{(C)}$, which cannot be broken by the D-brane instanton effects, although other symmetries are broken. We may need some corrections to realize the experimental values of neutrino masses.\footnote{To resolve the degeneracy between the two mass eigenvalues, it may be important to break the $Z_2$ reflection symmetry $P$. The full D-brane system, i.e. the full Lagrangian of the low-energy effective field theory, may not have such a $Z_2$ symmetry and the above degeneracy may be resolved by radiative corrections.} At least, the above results show that we can realize non-trivial mixing in the lepton sector even if our above assumption on the Dirac masses is not realized. \section{Conclusion and discussion} We have studied the flavor structure in intersecting D-brane models. We have discussed the anomalies of flavor symmetries. Certain symmetries are anomaly-free, and the anomaly coefficients of discrete symmetries have a specific feature. We have studied the Majorana neutrino masses, which can be generated by D-brane instanton effects. It is found that the mass matrix form with the cyclic permutation symmetry can be realized by integrating over the position of the D-brane instanton. That would lead to an interesting form of the mixing angles. It is interesting to apply our results to more concrete models. We will study numerical analyses elsewhere. In some models, there appear more than one pair of Higgs fields. Their masses would be generated by D-brane instanton effects. It would be important to study the form of such a Higgs mass matrix. Also, some of the Yukawa couplings may be generated by D-brane instanton effects. Thus, it would be important to extend our analysis to the Higgs mass matrix and Yukawa matrices. \subsection*{Acknowledgement} The work of Y. H. is supported in part by the Grant-in-Aid for Japan Society for the Promotion of Science (JSPS) Fellows No.25$\cdot$1107. The work of T.K. is supported in part by the Grants-in-Aid for Scientific Research No. 25400252 from the Ministry of Education, Culture, Sports, Science and Technology of Japan.
\section{Introduction} Two main questions of theoretical physics requiring the knowledge of the structure of spacetime at a fundamental level are the nature of singularities appearing in classical general relativity and the ultraviolet divergences of field theory. It is expected that a quantum theory of gravity can provide an answer to such questions, as we have learned from simpler quantum physical systems whose behavior improves as compared to that of their classical analogues. Not only is it compulsory to find how these questions can be answered, but also to grasp what new concepts, if any, are needed in the theoretical framework that resolves the origin of these issues. One candidate quantum gravity theory, loop quantum gravity (LQG) \cite{Ashtekar-etal(2004), Rovelli200712,Thiemann200812}, which is a non-perturbative, background-independent approach to quantizing general relativity, is natural to consider when dealing with the nature of spacetime. In particular, the implementation of the loop quantum gravity program for cosmological models, which is known as loop quantum cosmology (LQC) \cite{Bojowald:2006da,GAMM,Ashtekar:2011ni,CorichiAshtekarPetkov}, has led to the replacement of the big-bang singularity with a quantum bounce for homogeneous and isotropic models (see, for instance, the seminal works \cite{Bojowald:2001xe,Ashtekar:2006uz,Ashtekar:2006wn,Ashtekar:2006es,Vandersloot:2006ws}). Anisotropic models \cite{Chiou:2007sp,Chiou:2007mg,MartinBenito:2008wx, MartinBenito:2009qu, Ashtekar:2009vc, Ashtekar:2009um,WilsonEwing:2010rh,Singh:2011gp, Fujio:2012zz, Liu:2012xp,Corichi:2012hy, Corichi:2015ala,Singh:2013ava,Modesto:2005zm,Ashtekar:2005qt,Bohmer:2007wi,Campiglia:2007pb,Campiglia:2007pr,Chiou:2008eg,Chiou:2008nm,Modesto:2008im,Gambini:2008dy,Cortez:2012ina,Gambini:2013hna,Gambini:2013ooa,Joe:2014tca,Corichi:2015xia}, as well as inhomogeneous models \cite{Bojowald:2006qu,MenaMarugan:2009dp,Garay:2010sk,MartinBenito:2010up,MartinBenito:2010dz,Olmedo:2011zz,Brizuela:2011ps,MartindeBlas:2012zz,Martin-Benito:2013jqa,Tarrio:2013ija,Fernandez-Mendez:2014raa,Fernandez-Mendez:2014aea,Gomar:2014faa,MartIN-Benito:2015pca}, have also been studied. It has been argued in some cases \cite{Ashtekar:2011ni} that it is convenient to use effective models capturing their essential quantum aspects, mainly when the full quantum dynamics is unknown. The effective approach has been tested by applying the effective dynamics to cases where the quantum evolution is fully known, with the astonishing result that the effective dynamics matches quite well, even in the deep quantum regime, with the full quantum dynamics of LQC \cite{Ashtekar:2006wn,Bentivegna:2008bg,Ashtekar:2006es,Diener:2014mia}. Clearly it is crucial to determine whether and when an effective description is pertinent without relying on the full quantum solution. Motivated by the success of LQC in the study of homogeneous cosmologies, the study of the Schwarzschild black hole interior by using LQC techniques was put forward in \cite{Modesto:2005zm,Ashtekar:2005qt}, exploiting the fact that the interior Schwarzschild geometry is a particular homogeneous Kantowski-Sachs model. Their results indicated that the quantum Einstein equations were not singular. However, the question of what replaces the classical singularity was not answered.
Further developments using an effective approach were carried out in \cite{Bohmer:2007wi,Campiglia:2007pb,Campiglia:2007pr,Chiou:2008eg,Chiou:2008nm,Modesto:2008im,Gambini:2008dy,Cortez:2012ina,Gambini:2013hna,Gambini:2013ooa,Joe:2014tca,Corichi:2015xia} (for a recent review see \cite{Olmedo:2016ddn}). Interestingly, \cite{Bohmer:2007wi} argued that there is a connection between the black hole and a Nariai universe, whereas \cite{Corichi:2015xia} found the presence of a white hole instead; the difference between these two works lies in whether a pair of parameters in the quantization are scale factor dependent or constants, respectively. In particular, \cite{Bohmer:2007wi} gets quantum corrections both at the singularity and at the horizon, as opposed to \cite{Corichi:2015xia}, whose corrections are limited to the would-be singularity. Actually, such a difference already appears in cosmological models in regard to the inadequacy of the so-called $\mu_{0}=constant$ prescription \cite{Ashtekar:2011ni,CorichiAshtekarPetkov}; in order to correctly describe the classical regime, such a parameter should be scale factor dependent. Yet \cite{Corichi:2015xia} argues that such an analysis for cosmological models does not hold for Schwarzschild, since it would alter the notion of the classical horizon. To us this is an unsettled issue which requires further study and more information, for instance, the behavior of effective quantities, like geometric scalars in the effective Raychaudhuri equation, or the effective Kretschmann and curvature scalars (see \cite{Cortez:2012ina,Joe:2014tca} for first steps in this direction). The present work is aimed at filling the gap between the description using the loop quantum model and that using an effective dynamics for the Kantowski-Sachs model representing the Schwarzschild black hole interior. We will derive the effective Hamiltonian constraint via the path integral approach, starting from the quantum Hamiltonian in the so-called improved dynamics scheme, with the quantization parameters depending on the scale factors, and consider the transition amplitude between two basis states labelled with different values of a time parameter. After performing the usual partition of the time interval we get the effective action, $S_{{\rm{eff}}}$, as the argument of an imaginary exponential that is to be integrated upon according to Feynman's prescription. It is from $S_{\rm{eff}}$ that the effective Hamiltonian $H_{\rm{eff}}$ will be extracted. Thereafter we will analyze, in an analytical manner, the effective Hamiltonian theory associated with $H_{\rm{eff}}$ and its impact on the behavior of relevant scalars. More precisely, we will prove that the effective expansion scalar, its time derivative and the shear are bounded. Moreover, it is demonstrated that every scalar polynomial invariant, in particular the Ricci and Kretschmann scalars, is bounded in the effective approach. This paper is organized as follows. Section \ref{ClassicalKS} is devoted to the classical setting of the theory. We start by recalling that the Schwarzschild interior geometry can be described by a Kantowski-Sachs model. Thereafter, we recast the model in connection variables and perform a qualitative canonical analysis of the classical dynamics, identifying the singular behavior of curvature invariants. Next, in Section \ref{QKS}, within the framework of the improved dynamics prescription, we get the effective Hamiltonian constraint by using the path integral approach. This is done along the lines of \cite{Liu:2012xp} for Bianchi I.
The effective loop quantum black hole interior geometry is analyzed in Section \ref{EffectiveKS}, where it is shown that classically divergent quantities are actually bounded in the effective approach. In Section \ref{sec-disc} we discuss and summarize our main results. Throughout this work we will denote by $\mu$ the so-called improved dynamics prescription for homogeneous models, which has previously been denoted in the literature by $\bar{\mu}^{\prime}$ (see, for instance, \cite{Chiou:2008nm}). \section{Classical Theory} \label{ClassicalKS} \subsection{The interior geometry in connection variables} As is well known, a Schwarzschild black hole of mass $M$ (i.e., a spherically symmetric vacuum solution to general relativity) is described by the metric \begin{equation} ds^{2}=-\left( 1-\frac{2GM}{r}\right) dT^{2}+{\left( 1-\frac {2GM}{r}\right)^{-1} }dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \label{metrica schwarschild} \end{equation} in Schwarzschild coordinates $T\in {\mathbb{R}}$, $r\in \mathbb{R}^{+}$, $0\leq \theta \leq \pi$ and $0\leq \phi \leq 2\pi$. At $r=0$ there is a true singularity (the Kretschmann scalar blows up as $r\to 0$), which is covered by an event horizon located at the so-called Schwarzschild radius, $r_{s}=2GM$ (where the Schwarzschild coordinates become singular). The exterior (i.e., $r>r_{s}$) spacelike $(\partial / \partial r)^{a}$ and timelike $(\partial / \partial T)^{a}$ vector fields become, respectively, timelike and spacelike vector fields in the black hole interior (i.e., $0<r<r_{s}$). For the sake of clarity, let us then rename the interior spatial coordinate $T$ by $x$, and the interior time coordinate $r$ by $t$. Thus, the Schwarzschild interior metric can be written as follows \begin{equation} ds^{2}=-\left( \frac{2GM}{t}-1\right)^{-1}dt^{2}+\left( \frac{2GM} {t}-1\right) dx^{2}+t^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right),\label{schwarzschild interior} \end{equation} where $0<t<2GM$ and $x\in \mathbb{R}$. The singularity now corresponds to an {\emph{initial}} singularity at time $t=0$, resembling a cosmological singularity. In fact, the Schwarzschild interior solution (\ref{schwarzschild interior}) belongs to the class of Kantowski-Sachs cosmological models \cite{Kantowski:1966te} with homogeneous spatial sections $\Sigma\approx\mathbb{R}\times S^{2}$, i.e., Kantowski-Sachs models with symmetry group $\mathbb{R}\times SO(3)$, which are described by metrics of the form \begin{equation} ds^{2}=-N^{2}dt^{2}+X^{2}dx^{2}+Y^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right).\label{KS general metric} \end{equation} Here, the metric coefficients $X$, $Y$ and the lapse function $N$ depend on the coordinate time $t$ only. Since the Schwarzschild interior geometry can be understood as a Kantowski-Sachs spacetime with symmetry group $\mathbb{R}\times SO(3)$, let us consider the Kantowski-Sachs symmetry reduction of canonical general relativity in connection variables to get the connection description for the black hole interior. Following the procedure for the study of homogeneous models \cite{ashbolew}, let us introduce an auxiliary metric $\mathring{q}_{ab}$ on the $3$-manifold $\mathbb{R}\times S^{2}$, with compatible triad $\mathring{e}^{a}_{i}$ and co-triad $\mathring{\omega}^{i}_{a}$ that are left-invariant under the action of the Killing fields of $\Sigma$. They carry the symmetry information but ignore the particularities of the minisuperspace model.
The usual choice for the auxiliary metric{\footnote{For a different choice see \cite{Corichi:2015xia}.}}, which we will also consider here, is \cite{Ashtekar:2005qt,Modesto:2005zm,Bohmer:2007wi,Chiou:2008eg}: \begin{equation} \label{fid-metric} \mathring{q}_{ab}dy^{a}dy^{b}=dx^{2}+d\theta^{2}+\sin^{2}\theta d\phi^{2}. \end{equation} The determinant of the fiducial metric (\ref{fid-metric}) is given by $\mathring{q}=\sin^{2}\theta$, so that the densitized triad $\mathring{E}^{a}_{i}=\sqrt{\mathring{q}}\: \mathring{e}^{a}_{i}$ reads $\mathring{E}^{a}_{i}=\sin\theta \mathring{e}^{a}_{i}$. The compatible densitized triad $\mathring{E}^{a}\partial_{a}=\mathring{E}^{a}_{i}\tau^{i}\partial_{a}$, which takes values in the dual of $su(2)$, and its corresponding $su(2)$-valued co-triad $\mathring{\omega}_{a}dy^{a}=\mathring{\omega}^{i}_{a}\tau_{i}dy^{a}$, are explicitly given by \begin{equation} \label{fiducial-cotriad-triad} \mathring{\omega}_{a}dy^{a}=\tau_{3}dx + \tau_{2} d\theta - \tau_{1}\sin \theta d\phi,\quad \mathring{E}^{a}\partial_{a}=\tau_{3}\sin\theta\partial_{x}+\tau_{2}\sin\theta\partial_{\theta}-\tau_{1}\partial_{\phi}, \end{equation} where $\tau_i$ are the standard generators of $SU(2)$, satisfying $[\tau_{i},\tau_{j}]=\epsilon_{ij}^{\:\:\: k}\tau_{k}$. Note that integrals over $\mathbb{R}\times S^{2}$ involving spatially homogeneous quantities will generally diverge, given the non-compact character of the $x$-direction. To circumvent this feature, which, for instance, is an obstacle to properly calculating the Poisson brackets, one restricts $x$ to an interval of finite length $L$, w.r.t. the fiducial metric, and then performs all integrations over a finite-sized cell ${\cal{V}}_{0}=[0,L]\times S^{2}$ of fiducial volume $V_{0}=4\pi L$. Now, by imposing the Kantowski-Sachs symmetry group $\mathbb{R}\times SO(3)$ in the full theory, one gets that the symmetric connection $A=A^{i}_{a}\tau_{i}dy^{a}$ and triad $E=E_{i}^{a}\tau^{i}\partial_{a}$ can be written, after gauge fixing of the Gauss constraint, as follows \begin{equation} \label{AE-in-fidu} A =L^{-1}\,c \left( \mathring{\omega}_{x}dx\right)+b\left( \mathring{\omega}_{\theta}d\theta + \mathring{\omega}_{\phi}d\phi\right)+\Gamma, \quad E=p_{c}\left(\mathring{E}^{x}\partial_{x}\right)+L^{-1}p_{b}\left(\mathring{E}^{\theta}\partial_{\theta}-\mathring{E}^{\phi}\partial_{\phi}\right). \end{equation} Here, $\Gamma=\Gamma^{i}_{a}\tau_{i}dy^{a}=\cos\theta\tau_{3}d\phi$ is the spin-connection compatible with the densitized triad $E$. The coefficients $b$, $c$, $p_{b}$ and $p_{c}$, which are functions of time only, capture the non-trivial information about the symmetry reduced model. From Eqs.(\ref{fiducial-cotriad-triad})-(\ref{AE-in-fidu}), it follows that the Kantowski-Sachs connection and triad are explicitly given by \begin{equation} A = L^{-1}\, c\tau_{3}dx+b\tau_{2}d\theta-b\tau_{1}\sin\theta d\phi+\tau_{3} \cos\theta d\phi, \label{A reducido} \end{equation} \begin{equation} E= p_{c}\tau_{3}\sin\theta\,\partial_{x}+L^{-1}p_{b}\tau_{2}\sin \theta\,\partial_{\theta}-L^{-1}p_{b}\tau_{1}\,\partial_{\phi}.
\label{E reducido} \end{equation} The phase space resulting from the symmetry reduction and gauge fixing processes is the symplectic space ${\mathbf{\Gamma}}=[(b,p_{b},c,p_{c}),\Omega]$, with symplectic form \cite{Ashtekar:2005qt,Chiou:2008eg} \begin{equation} \label{reduced-symps} \Omega=\frac{1}{8\pi G\gamma}\int_{{\cal{V}}_0}d^{3}y\: \left(dA^{i}_{a}\wedge dE^{a}_{i}\right)=\frac{1}{2G\gamma}\left(dc\wedge dp_{c}+2db\wedge dp_{b} \right), \end{equation} where $\gamma$ is the so-called Barbero-Immirzi parameter. The only non vanishing Poisson brackets defined by the reduced symplectic form (\ref{reduced-symps}) are \begin{equation} \label{poisson-brackets} \{b,p_{b}\}=G\gamma, \quad \{c,p_{c}\}=2G\gamma. \end{equation} Let us remark that, in fact, we will not consider the whole of the phase space $\bf{\Gamma}$. Indeed, as a part of the gauge-fixing procedure, $p_b$ can be chosen to be a strictly positive function \cite{Chiou:2008nm}, $p_{b}>0$. Besides, since distinct signs of $p_c$ correspond to regions with triads of opposite orientations \cite{Ashtekar:2005qt,Chiou:2008nm}, then $p_{c}$ can be chosen to be strictly positive as well. Recall that a $3$-metric $q_{ab}$ is related with its compatible densitized triad $E^{a}_{i}$ by $qq^{ab}=E^{a}_{i}E^{b}_{i}$. Thus, from Eqs. (\ref{KS general metric}) and (\ref{E reducido}) it follows that $X^{2}=p_{b}^{2}/(L^{2}p_{c})$ and $Y^{2}=p_{c}$; i.e., in terms of the triad variables, the Kantowski-Sachs metric reads \begin{equation} ds^{2}=-N(t)^{2}dt^{2}+\frac{p_{b}^{2}}{L^{2}p_{c}}dx^{2}+p_{c}\left( d\theta^{2}+\sin^{2}\theta d\phi^{2}\right) . \label{metrica cinematica} \end{equation} Thus, with respect to the metric (\ref{metrica cinematica}), the length of the interval $[0,L]$ in the $x$-direction, the area of $S^{2}$ and the volume of the cell ${\cal{V}}=[0,L]\times S^{2}$, are respectively given by \begin{equation} \label{leng-are-vol} l=p_{b}/\sqrt{p_{c}},\quad A_{S^{2}}=4\pi p_{c} , \quad V=4\pi p_{b}\sqrt{p_{c}} \end{equation} Now, in terms of the reduced canonical variables, $(b,p_{b})$ and $(c,p_{c})$, the Hamiltonian constraint takes the form \cite{Ashtekar:2005qt} \begin{equation} {\cal{C}}_{\rm{Ham}}=16\pi G\,H_{{\rm{class}}}=-\frac{8\pi N}{\gamma^{2}}\left[ 2bc\sqrt{p_{c}}+(b^{2}+\gamma^{2} )\frac{p_{b}}{\sqrt{p_{c}}}\right] , \label{class-H} \end{equation} which defines $H_{{\rm{class}}}$. By choosing the lapse function equal to one, from Eqs.(\ref{poisson-brackets}) and (\ref{class-H}) we obtain that the dynamics is dictated by \begin{eqnarray} \dot{b} & = & \{b,H_{{\rm{class}}}\} = -\frac{1}{2\gamma\sqrt{p_{c}}}\:\left(b^{2}+\gamma^{2}\right),\label{bdot} \\ \dot{c} & =& \{c,H_{{\rm{class}}}\} =\frac{1}{2\, \gamma\,p_{c}^{3/2}}\:\left(b^{2}p_{b}-2bcp_{c}+\gamma^{2}p_{b}\right),\label{cdot}\\ \dot{p}_{b} & =& \{p_{b},H_{{\rm{class}}}\} =\frac{1}{\gamma{\sqrt{p_{c}}}}\:\left(bp_{b}+cp_{c}\right),\label{pbdot}\\ \dot{p}_{c} & =&\{p_{c},H_{{\rm{class}}}\} =\frac{1}{\gamma}\:\left(2b\sqrt{p_{c}}\right). 
\label{pcdot} \end{eqnarray} A direct calculation shows that the Ricci and Kretschmann scalars of the metric (\ref{metrica cinematica}) are, respectively, \begin{equation} R=\frac{2p_{c}\ddot{p}_{b}+p_{b}\left( 2+\ddot{p}_{c}\right) }{p_{b}p_{c}}, \label{ricciscalarkinematic} \end{equation} \begin{align} K =R_{abcd}R^{abcd} & =\frac{1}{2\text{$p_{b}$}^{2}\text{$p_{c}$}^{4}}\left[ 4\text{$p_{c}$}^{2}\left( 3\dot{p}_{b}^{2}\dot{p}_{c}^{2}-4\text{$p_{c}$} \dot{p}_{b}\dot{p}_{c}\text{$\ddot{p}_{b}$}+2\text{$p_{c}$}^{2}\text{$\ddot {p}_{b}$}^{2}\right) \right. \label{Kretschmannscalarkinematic}\\ & +4\text{$p_{b}p_{c}$}\left( \text{$p_{c}\ddot{p}_{b}$}\left( 3\dot{p} _{c}^{2}-2\text{$p_{c}$}\ddot{p}_{c}\right) +\dot{p}_{b}\left( -4\dot{p} _{c}^{3}+2\text{$p_{c}$}\dot{p}_{c}\ddot{p}_{c}\right) \right) \nonumber\\ & \left. +\text{$p_{b}$}^{2}\left( 7\dot{p}_{c}^{4}+2\text{$p_{c}$}\dot {p}_{c}^{2}\left( 2-5\ddot{p}_{c}\right) +\text{$p_{c}$}^{2}\left( 8+6\ddot{p}_{c}^{2}\right) \right) \right]. \nonumber \end{align} It is not difficult to see, by using Eqs.(\ref{pbdot})-(\ref{pcdot}), that $\ddot{p}_{b} =bc/\gamma^{2}$ and $\ddot{p}_{c} =(b/\gamma)^{2}-1$. Substituting the latter expressions into Eqs.(\ref{ricciscalarkinematic})-(\ref{Kretschmannscalarkinematic}), as well as by imposing the constraint (\ref{class-H}) and employing the dynamical equations (\ref{bdot})-(\ref{pcdot}), we get that on the constraint surface \begin{equation} \label{class-RK} R=0, \qquad K=\frac{12}{\gamma^{4}}\left( \frac{b^{2}+\gamma^{2}}{p_{c}}\right)^{2}. \end{equation} Solving the equations (\ref{bdot}) and (\ref{pcdot}), (see Eq. (\ref{pcb}) below) one gets $(b^{2}+\gamma^{2})^{2}=a_{0}/p_{c}$, with $a_{0}$ being a constant depending on initial conditions and $\gamma$. Hence the Kretschmann scalar goes as $1/p_{c}^{3}$. Explicitly, \begin{equation} \label{class-RK-2} K=\frac{12a_{0}}{\gamma^{4}p_{c}^{3}}. \end{equation} Thus, the Kretschmann scalar blows up as $p_{c}$ tends to zero, corresponding to the classical singularity. Let us now examine the solutions to the system (\ref{class-H})-(\ref{pcdot}), and let us inspect the behavior of the expansion scalar and shear. \subsection{Solutions, expansion scalar and shear} \label{Classical Qualitative Analysis} To start, notice that $cp_{c}$ is a constant on the constraint surface. Indeed, from Eqs.(\ref{cdot}) and (\ref{pcdot}) it follows that \begin{equation} \{cp_{c},H_{{\rm{class}}}\}=-\frac{\gamma}{16\pi}{\cal{C}}_{\rm{Ham}}. \end{equation} Let us denote the constant $cp_{c}$ by $\gamma K_{c}$; i.e., \begin{equation} \label{Kc} cp_{c}=\gamma K_{c}. \end{equation} Since the sign flipping $K_{c}\to -K_{c}$ is associated to the time reversal $t\to -t$ \cite{Chiou:2008nm}, the two regions, $K_{c}>0$ and $K_{c}<0$, are causally disconnected. Let us consider the region $K_{c}>0$, as in \cite{Chiou:2008nm} (for the sake of completeness, we will also discuss the opposite choice, $K_{c}<0$, at the end of the present section). Provided that $p_{c}> 0$, we have that $c$ must be a strictly positive function of time $t$. Since ${\cal{C}}_{\rm{Ham}}=0$ implies that $b$ and $c$ must have opposite signs (see Eq. (\ref{class-H})), we then conclude that $b<0$. On the other hand, viewed as a quadratic equation in $b$, the constraint (\ref{class-H}) has discriminant $D=\gamma^{2}(K^{2}_{c}-p^{2}_{b}),$ where we have used (\ref{Kc}). Thus, to keep $b$ real, $D$ must be non-negative, which implies that $p_{b}$ is bounded from above \begin{equation} p_{b}\leq K_{c}. 
\label{cotapbclasica} \end{equation} Now, note that Eqs.(\ref{bdot}) and (\ref{pcdot}) are actually decoupled from the rest of Hamilton's equations. Thus, we have that \begin{equation} \frac{dp_{c}}{db}=-\frac{4bp_{c}}{\left(b^{2}+\gamma^{2}\right)}, \label{dpc/db} \end{equation} whose solution is given by \begin{equation} p_{c}=p_{c0}\left( \frac{b_{0}^{2}+\gamma^{2}}{b^{2}+\gamma^{2}}\right) ^{2}. \label{pcb} \end{equation} Here, $p_{c0}$ and $b_{0}$ stand for initial conditions at $t=t_{0}$. Since $\dot{b}<0$ (cf. Eq.(\ref{bdot})), we have that $b$ is a monotonically decreasing function of time $t$ and, by virtue of Eq.(\ref{pcb}), so is $p_c$. Now, substituting Eq.(\ref{pcb}) in Eq.(\ref{bdot}), we get that \begin{equation} \label{b-dot-sol} \dot{b}=-\alpha_{0}\left(b^{2}+\gamma^{2}\right)^{2}, \qquad \alpha_{0}=\left[2\gamma \sqrt{p_{c0}}\,(b_{0}^{2}+\gamma^{2})\,\right]^{-1}. \end{equation} Hence, \begin{equation} \label{b-sol} g(b)=-2\gamma^{3}\alpha_{0}(t-t_{0})+g(b_{0}),\qquad g(s)=\frac{\gamma s}{(s^{2}+\gamma^{2})}+\arctan\left(\frac{s}{\gamma}\right). \end{equation} Clearly, $g<0$ for all $b<0$. Moreover, a straightforward calculation shows that $dg/dt=-2\gamma^{3}\alpha_{0}$; i.e., $g$ is a monotonically decreasing function of time. Note, in addition, that $-\pi /2 < g$. Thus, the relationship (\ref{b-sol}) makes sense (i.e., it is a well-defined relationship providing $b(t)$ at each given $t$ value) only if $2\gamma^{3}\alpha_{0}\Delta t<\pi/2+g(b_{0})$, where $\Delta t =(t-t_{0})$. Hence, as $\Delta t$ approaches the maximal value $\Delta t_{f}$, \begin{equation} \label{time singularity} \Delta t _{f}=\frac{1}{2\gamma^{3}\alpha_{0}}\left[\frac{\pi}{2}+g(b_{0})\right], \end{equation} the solution $b(t)$ will tend to $-\infty$. Then, from Eq.(\ref{pcb}) it follows that the solution $p_{c}(t)$ will tend to zero as $\Delta t$ approaches $\Delta t _{f}$. By substituting Eq.(\ref{pcb}) into $c=\gamma K_{c}/p_{c}$ [see Eq.(\ref{Kc})], we get that \begin{equation} c=\frac{\gamma K_{c}}{p_{c0}}\left( \frac{b^{2}+\gamma^{2}}{b_{0}^{2}+\gamma^{2}}\right) ^{2}. \label{cb} \end{equation} Hence, the solution $c$ must tend to infinity as $\Delta t \to \Delta t _{f}$. By inserting Eq.(\ref{Kc}) and Eq.(\ref{pcb}) into the constraint equation ${\cal{C}}_{\rm{Ham}}=0$ [see Eq.(\ref{class-H})], we obtain that \begin{equation} \label{pbb} p_{b}=-2\gamma K_{c}\frac{b}{(b^{2}+\gamma^{2})}. \end{equation} Thus, in the limit when $\Delta t$ tends to $\Delta t_{f}$, the solution $p_b$ goes to zero as $1/\vert b \vert$. (Note that $p_{b}/p_{c}$ diverges as $\vert b \vert^{3}$ when $\Delta t \to \Delta t_{f}$). Relations (\ref{pcb}), (\ref{b-sol}), (\ref{cb}) and (\ref{pbb}) provide the solution to the system (\ref{class-H})-(\ref{pcdot}). Once the solution $b(t)$ is obtained from Eq.(\ref{b-sol}), the remaining solution functions, namely $p_{c}(t)$, $c(t)$ and $p_{b}(t)$, are determined by substituting $b(t)$ into equations (\ref{pcb}), (\ref{cb}) and (\ref{pbb}), respectively.
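The classical collapse encoded in Eqs.(\ref{b-dot-sol})-(\ref{time singularity}) can also be illustrated numerically. The following Python sketch is only an illustration (the values $\gamma=1$, $b_{0}=-0.5$, $p_{c0}=1$, $t_{0}=0$ are hypothetical): it integrates Eq.(\ref{b-dot-sol}), verifies the implicit solution (\ref{b-sol}) along the trajectory, and displays $p_{c}$ tending to zero as $t$ approaches $t_{0}+\Delta t_{f}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0                      # Immirzi parameter (hypothetical value, illustration only)
b0, pc0, t0 = -0.5, 1.0, 0.0     # hypothetical initial data in the K_c > 0 branch (b < 0)

alpha0 = 1.0 / (2.0 * gamma * np.sqrt(pc0) * (b0**2 + gamma**2))
g = lambda s: gamma * s / (s**2 + gamma**2) + np.arctan(s / gamma)

# Finite proper-time interval after which b -> -infinity and p_c -> 0
dt_f = (np.pi / 2.0 + g(b0)) / (2.0 * gamma**3 * alpha0)

# Integrate db/dt = -alpha0 (b^2 + gamma^2)^2 up to just before dt_f
sol = solve_ivp(lambda t, b: -alpha0 * (b**2 + gamma**2)**2,
                (t0, 0.99 * dt_f), [b0], rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(t0, 0.99 * dt_f, 5)
b = sol.sol(t)[0]
pc = pc0 * ((b0**2 + gamma**2) / (b**2 + gamma**2))**2

# The implicit solution g(b) = -2 gamma^3 alpha0 (t - t0) + g(b0) should hold
residual = g(b) + 2.0 * gamma**3 * alpha0 * (t - t0) - g(b0)
print("max |residual| of the implicit solution:", np.max(np.abs(residual)))
print("p_c along the trajectory (tends to 0 as t -> dt_f):", pc)
\end{verbatim}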
The time domain of the solution functions is $t\in [t_{0},t_{0}+\Delta t]$, with $\Delta t \leq \Delta t_{f}$; given initial data $(b_{0},p_{b0},c_{0},p_{c0})$ at $t=t_0$, with $b_{0}\in \mathbb{R}^{-}$, $p_{b0}\in (0,K_{c})$, and $c_{0},p_{c0}\in \mathbb{R}^{+}$, the solution will tend to the `endpoint' \begin{equation} (b\rightarrow-\infty,p_{b}\to 0 ,c\to \infty,p_{c}\to0), \label{endpt} \end{equation} as $t$ approaches $t_{f}=t_{0}+\Delta t_{f}$. From (\ref{leng-are-vol}), (\ref{pcb}) and (\ref{pbb}), it follows that the length $l$, the area $A_{S^{2}}$ and the cell volume $V$ will behave as $l\sim \vert b \vert$, $A_{S^{2}}\sim 1/b^{4}$ and $V\sim 1/\vert b \vert^{3}$ as $t\to t_{f}$. Let us now consider the congruence of timelike geodesics defined by comoving observers in the Kantowski-Sachs spacetime (\ref{metrica cinematica}), with $N=1$; that is, the vector field associated with the congruence is $\xi^{a}=(\partial/\partial t)^{a}$. Thus, the expansion scalar $\theta$ corresponds to $\dot{V}/V$, where $V=4\pi p_{b}\sqrt{p_{c}}$ is the congruence's cross-sectional volume. A simple calculation shows that \begin{equation} \label{scalar-factor-r} \theta=\frac{\dot{p}_{b}}{p_{b}}+\frac{\dot{p}_{c}}{2p_{c}}. \end{equation} By using Eqs.(\ref{bdot}), (\ref{pcdot}), (\ref{pcb}) and (\ref{pbb}), as well as calculating $\dot{p}_b$ by employing Eq.(\ref{pbb}), it is not difficult to see that \begin{equation} \label{class-exp-scalar} \theta=\alpha_{0}\left(\frac{b^{2}+\gamma^{2}}{b}\right)\left(3b^{2}-\gamma^{2}\right). \end{equation} Clearly, the expansion scalar is a monotonically decreasing function of time (recall that $b<0$ and that it is a monotonically decreasing function of $t$). What is more, irrespective of the initial condition $b_0$, $\theta\to -\infty$ as $t$ tends to the maximal value $t_{f}$ (i.e., the volume shrinks to zero as $t\to t_{f}$, invariably). At finite proper time $\Delta t_{f}$, the congruence of timelike geodesics develops a caustic, and the cell volume becomes zero; in fact, the geodesics of the congruence turn out to be inextendible (i.e., incomplete). Note, however, that depending upon the initial condition $b_0$, there may be a stage where the volume actually increases. Indeed, observe that $\theta$ is strictly positive for $-\gamma/\sqrt{3}<b<0$, it is zero at $b=-\gamma/\sqrt{3}$, and it is strictly negative for $b<-\gamma/\sqrt{3}$. Thus, if the initial condition $b_{0}$ is in $(-\gamma/\sqrt{3},0)$, we will have that the volume will increase up to a maximum value $V_{{\rm{max}}}$, at $b=-\gamma/\sqrt{3}$, and afterwards it will monotonically decrease down to zero volume at $t=t_{f}$ (the time at which $p_{c}$ vanishes and the Kretschmann scalar blows up). A direct calculation shows that the time derivative of the expansion scalar (\ref{class-exp-scalar}) is given by \begin{equation} \label{class-theta-dot} \dot{\theta} =-\alpha_{0}^{2}\left(\frac{b^{2}+\gamma^{2}}{b}\right)^{2} \left(9b^{4}+2\gamma^{2}b^{2}+\gamma^{4}\right), \end{equation} where we have used (\ref{b-dot-sol}). Since $\dot{\theta}$ is strictly negative, the expansion scalar $\theta$ is a monotonically decreasing function of $t$ (as we have already pointed out).
The shear, which is given by (see for instance \cite{Joe:2014tca}) \begin{equation} \label{shear-r} \sigma^{2}=\frac{1}{2}\sigma_{ab}\sigma^{ab} =\frac{1}{3}\left( \frac{\dot{p}_{b}}{p_{b}}-\frac{\dot {p}_{c}}{p_{c}}\right)^2 , \end{equation} reads explicitly as follows \begin{equation} \label{class-shear} \sigma^{2}=\frac{\alpha_{0}^{2}}{3}\left(\frac{b^{2}+\gamma^{2}}{b}\right)^{2}\left(3b^{2}+\gamma^{2}\right)^{2}. \end{equation} From Eqs.(\ref{class-exp-scalar}), (\ref{class-theta-dot}) and (\ref{class-shear}), it is a simple exercise to see that $\dot{\theta}=-(1/3)\theta^{2}-2\sigma^{2}$, which is nothing but Raychaudhuri's equation. (Recall that the congruence is hypersurface orthogonal, so that there is no rotational term. In addition, the term $R_{ab}\xi^{a}\xi^{b}=R_{00}$ is identically zero on shell). Let us remark that by considering $K_{c}<0$, one gets that $c<0$ (since $p_{c}$ is strictly positive) and that $b>0$ (since $b$ and $c$ must have opposite signs). Exactly as above, it is shown that $p_{b}\leq \vert K_{c} \vert$. The expressions for $p_{c}$, $c$ and $p_{b}$ [respectively, Eqs. (\ref{pcb}), (\ref{cb}) and (\ref{pbb})] will be the same ones, though now with $K_{c}<0$ and $b>0$. Of course, equation (\ref{b-dot-sol}) governing the dynamics of $b$ is the same one, and so is its solution (\ref{b-sol}); but now with $g>0$, since $b>0$. As $\dot{b}<0$, $b$ is a monotonically decreasing function of $t$, which implies that $g$ decreases in time. The important difference between the conventions $K_{c}>0$ and $K_{c}<0$ is conceptual rather than technical. Recall that associated with the sign of $K_{c}$ is a time reversal, so it is natural to write the solution to Eq.(\ref{b-dot-sol}) as \begin{equation} \label{b-sol-neg-k} g(b)=2\gamma^{3}\alpha_{0}(t_{0}-t)+g(b_{0}),\qquad g(s)=\frac{\gamma s}{(s^{2}+\gamma^{2})}+\arctan\left(\frac{s}{\gamma}\right), \end{equation} with $t_{0}>t$ (for instance, $t_0$ would denote the `present time', whereas $t$ stands for an earlier time). Clearly, $0<g<\pi/2$ for all $b\in \mathbb{R}^{+}$, and it is a monotonically decreasing function of $t$. Equation (\ref{b-sol-neg-k}) implies that $b(t)$ will tend to infinity as $t\to 0$; so that $p_{c}\to 0$, $c\to -\infty$ and $p_{b}\to 0$ as $t$ approaches zero. In particular, we have that as $t\to 0$, the Kretschmann scalar will diverge as $b^{12}$ [cf. Eqs. (\ref{class-RK-2}) and (\ref{pcb})], whereas the cell volume will collapse to zero as $V\sim 1/b^{3}$. The explicit expression for the expansion scalar of a congruence of timelike geodesics constructed from comoving observers (i.e., with associated vector field $\xi^{a}=(\partial /\partial t )^{a}$) is also given by Eq.(\ref{class-exp-scalar}). Note that $\theta>0$ for $0<t<t_{*}$, where $t_{*}$ is `the cosmological time' at which $b(t_{*})=\gamma/\sqrt{3}$, $\theta$ is zero at $t_{*}$, and it is strictly negative for $t>t_{*}$. The expansion scalar is, in fact, a monotonically decreasing function of time. Note, in addition, that $\theta\to \infty$ at $t=0$. The shear and the time derivative of $\theta$ have, of course, exactly the same expressions as above.
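The algebraic consistency of the on-shell expressions (\ref{class-exp-scalar}), (\ref{class-theta-dot}) and (\ref{class-shear}) with Raychaudhuri's equation can be verified symbolically. The following Python sketch is only a consistency check, with $b$ treated as a formal positive symbol (the same algebra holds for $b<0$):
\begin{verbatim}
import sympy as sp

b, gamma, alpha0 = sp.symbols('b gamma alpha_0', positive=True)

theta  = alpha0*(b**2 + gamma**2)/b*(3*b**2 - gamma**2)                # (class-exp-scalar)
sigma2 = alpha0**2/3*((b**2 + gamma**2)/b)**2*(3*b**2 + gamma**2)**2   # (class-shear)
bdot   = -alpha0*(b**2 + gamma**2)**2                                  # (b-dot-sol)

# theta-dot via the chain rule; it reproduces (class-theta-dot)
thetadot = sp.simplify(sp.diff(theta, b)*bdot)
print(sp.factor(thetadot))

# Raychaudhuri's equation (no rotation, R_00 = 0 on shell): the combination vanishes
print(sp.simplify(thetadot + sp.Rational(1, 3)*theta**2 + 2*sigma2))   # -> 0
\end{verbatim}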
Now, by considering the congruence of `past-directed comoving world lines', whose associated vector field is $\xi^{a}=-(\partial /\partial t )^{a}$, one gets that the backward in time (BT) expansion scalar is given by \begin{equation} \label{expansion} \theta_{\rm{BT}}(\tau')=-\alpha_{0}\left(\frac{b^{2}+\gamma^{2}}{b}\right)\left(3b^{2}-\gamma^{2}\right), \end{equation} where $b$ is evaluated at $(t_{0}-\tau')$ and the parameter $\tau'$ runs from zero to $t_{0}$ (so that $t=0$ corresponds to the limit $\tau'\to t_{0}$). Thus, $\theta_{\rm{BT}}(\tau')\to -\infty$ within a finite `proper time' $t_{0}$; that is to say, the volume shrinks to zero as we approach the `initial singularity'. \section{Quantum Schwarzschild interior} \label{QKS} In this section we implement a path integral quantization of the Schwarzschild black hole interior. To do so we make use of its Kantowski-Sachs form as well as the similarity of the latter with the Bianchi I model, both being anisotropic homogeneous models. In particular, rather than starting from scratch with the LQC techniques (see e.g. \cite{Ashtekar:2011ni, Ashtekar:2009vc,Ashtekar:2005qt}) we perform a sequence of transformations in phase space that ultimately allow us, first, to identify adequate holonomy-type variables for the KS model at the hamiltonian level and, second, to introduce its path integral quantization. We follow closely the analysis for Bianchi I in \cite{Liu:2012xp}. In this way an effective action, and hence an effective hamiltonian, can be identified from the transition amplitude of the quantum KS model. This effective hamiltonian will be used in the next section to analyze the effective geometry for the Schwarzschild interior. Let us notice that an effective Hamiltonian was proposed in \cite{Bohmer:2007wi,Chiou:2008eg} motivated by several previous results. Essentially it was defined by the heuristic replacements $b\rightarrow \sin(\mu_bb)/\mu_b$ and $c\rightarrow \sin(\mu_cc)/\mu_c$ in (\ref{class-H}), where the use of $\mu_b,\mu_c$ follows from their appearance in a form of the hamiltonian constraint in which curvature terms are expressed by holonomies along elementary squares of length related to them \cite{Ashtekar:2011ni}. Our approach, which was described above, is different from this simple replacement; however, we will regain the effective hamiltonian of \cite{Chiou:2008eg}. Now various criteria turn out to be necessary in the holonomy version of the construction \cite{Ashtekar:2011ni}. They include (i) that the area of the elementary squares used in the holonomies should not be less than the minimum area gap $\Delta$ found in the spectrum of the area operator of the full theory, (ii) that physical quantities must be independent of a fiducial metric introduced along the analysis, as well as (iii) the avoidance of large quantum gravity effects in classical regimes \cite{Ashtekar:2006uz}. Such criteria led to proposing the following form of the $\mu$'s \cite{Bohmer:2007wi, Chiou:2008eg, Ashtekar:2009vc} \footnote{The two other possibilities that were explored do not satisfy all these criteria, yielding inconsistent physics. See for example \cite{Joe:2014tca} and references therein.} \begin{equation} \mu_{b}:=\sqrt{\frac{\Delta}{p_{c}}}\quad\text{and}\quad\mu_{c}:=\frac {\sqrt{\Delta p_{c}}}{p_{b}}.\label{mu} \end{equation} Let us begin with the hamiltonian description and consider the classical constraint (\ref{class-H}).
It is convenient to define first the following set of canonical variables \cite{Liu:2012xp} \begin{equation} \lambda_{b}:=\frac{p_{b}}{\sqrt{G\hbar}},\quad\lambda_{c}:=\sqrt{\frac{p_{c}}{G\hbar}},\label{muprimvar}\\ \varphi_{b}:=\frac{b}{\sqrt{G\hbar}},\quad\varphi_{c}:=c\sqrt{\frac{p_{c}}{G\hbar}}, \end{equation} whose Poisson brackets take the form \begin{equation} \label{PBphilambda} \hbar\{\varphi_{l},\lambda_{j}\}=\gamma\delta_{l,j}\qquad l,j=b,c. \end{equation} Let us observe that the following variables \begin{equation} k_{b}:=\frac{\varphi_{b}}{\lambda_{c}}=\frac{b\mu_{b}}{\sqrt{\Delta}},\quad k_{c}:=\frac{\varphi_{c}}{\lambda_{b}}=\frac{c\mu_{c}}{\sqrt{\Delta}},\label{muprimvar2} \end{equation} have exponential forms \begin{eqnarray} \label{U} U_b := \text{ e}^{\text{i}\sqrt{\Delta}k_{b}} = \text{ e}^{\text{i}b\mu_b}, \quad U_c :=\text{ e}^{\text{i}\sqrt{\Delta}k_{c}} = \text{ e}^{\text{i}c\mu_c}, \end{eqnarray} that are amenable to the application of LQC techniques. In the sequel the following identity will be of much help \begin{equation} \frac{\sin(\sqrt{\Delta}k_{l})}{\sqrt{\Delta}}=\frac{U_{l}-(U_{l})^{\ast} }{2\text{i}\sqrt{\Delta}}, \quad l=b,c.\label{representacion seno} \end{equation} Using the set of variables $\lambda_b,\lambda_c, k_b,k_c$, given in eqs. (\ref{mu}), (\ref{muprimvar}) and (\ref{muprimvar2}), in the classical constraint (\ref{class-H}) yields the following form \begin{equation} \label{Hmulk} H_{\mu}=-\frac{\hbar}{2\gamma^{2}}\left[ 2(\lambda_bk_b)(\lambda_ck_c)+\frac{\lambda_{c}}{\lambda_{b}}(\lambda_bk_b)^{2}+\frac{\gamma^{2}}{G\hbar}\frac{\lambda_{b}}{\lambda_{c}}\right]. \end{equation} Next we consider a small argument approximation for the $(\lambda_bk_b)$ and $(\lambda_c k_c)$ factors in (\ref{Hmulk}) through the following relations \begin{eqnarray} (\lambda_b k_b) &\approx& \frac{1}{\sqrt{\Delta}} \left[ \lambda_b\sin(\sqrt{\Delta}k_b) \right] :=\Phi_b, \label{Phib}\\ (\lambda_c k_c) &\approx& \frac{1}{\sqrt{\Delta}} \left[ \lambda_c\sin(\sqrt{\Delta}k_c) \right] :=\Phi_c, \label{Phic} \end{eqnarray} where the $\sin(\sqrt{\Delta}k_b)$ and $\sin(\sqrt{\Delta}k_c)$ will be understood according to (\ref{representacion seno}), so that our elementary variables will be $\lambda_l,U_l, l=b,c$. Their Poisson brackets can be obtained upon combination of (\ref{PBphilambda}), (\ref{muprimvar2}) and (\ref{U}). The result is \begin{eqnarray} \label{PBlambdaU} \left\{ {\lambda}_b, {U}_b\right\}&=&\frac{\ell_0}{i\hbar} \frac{{U}_b}{{\lambda}_c}, \nonumber\\ \left\{ {\lambda}_c, {U}_c\right\} &=& \frac{\ell_0}{i\hbar} \frac{{U}_c}{{\lambda}_b} , \end{eqnarray} with $\ell_0:=\gamma\sqrt{\Delta}$. Let us now proceed to quantization. Our elementary quantum observables will be $\widehat{\lambda}_{b},\widehat {\lambda}_{c},\hat{U_b}$ and $\hat{U_c}$. The Hilbert space of this system will be $\mathcal{H}^{(2)}_{\mathrm Poly}= \mathcal{H}_{\mathrm Poly}\otimes\mathcal{H}_{\mathrm Poly}$ with $\mathcal{H}_{\mathrm Poly}=L^{2}(\mathbb{R}_{\text{Bohr}},\text{d}\mu_{H})$, where $\mathbb{R}_{\text{Bohr}}$ is the Bohr compactification of the real line and $\text{d}\mu_{H}$ is its Haar measure \cite{Velhinho:2007gg}. We use a basis of eigenkets $|\vec{\lambda}\rangle:=|\lambda_{b},\lambda_{c}\rangle$ of the operators $\hat{\lambda}_{b}$ and $\hat{\lambda}_{c}$. These basis states satisfy \begin{equation} \langle\vec{\lambda}^{\prime}|\vec{\lambda}\rangle=\delta_{\vec{\lambda}^{\prime},\vec{\lambda}}, \end{equation} where $\delta_{\lambda^{\prime},\lambda}$ is a Kronecker delta.
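The classical bracket relations (\ref{PBphilambda}) and (\ref{PBlambdaU}), which are the input for the quantization below, can be checked symbolically from the fundamental brackets (\ref{poisson-brackets}). The following Python sketch is only a consistency check:
\begin{verbatim}
import sympy as sp

b, c, pb, pc, G, hbar, gamma, Delta = sp.symbols('b c p_b p_c G hbar gamma Delta',
                                                 positive=True)

def PB(f, g):
    # Poisson bracket of the reduced phase space: {b,p_b}=G*gamma, {c,p_c}=2*G*gamma
    return (G*gamma  *(sp.diff(f, b)*sp.diff(g, pb) - sp.diff(f, pb)*sp.diff(g, b))
          + 2*G*gamma*(sp.diff(f, c)*sp.diff(g, pc) - sp.diff(f, pc)*sp.diff(g, c)))

lam_b = pb/sp.sqrt(G*hbar)
lam_c = sp.sqrt(pc/(G*hbar))
phi_b = b/sp.sqrt(G*hbar)
phi_c = c*sp.sqrt(pc/(G*hbar))

# hbar {phi_l, lambda_j} = gamma delta_{lj}   -> prints gamma, gamma, 0, 0
print(sp.simplify(hbar*PB(phi_b, lam_b)), sp.simplify(hbar*PB(phi_c, lam_c)),
      sp.simplify(hbar*PB(phi_b, lam_c)), sp.simplify(hbar*PB(phi_c, lam_b)))

# {lambda_b, U_b} = (l0/(i hbar)) U_b/lambda_c, with l0 = gamma*sqrt(Delta)  -> prints 0
U_b = sp.exp(sp.I*sp.sqrt(Delta)*phi_b/lam_c)
l0  = gamma*sp.sqrt(Delta)
print(sp.simplify(PB(lam_b, U_b) - l0/(sp.I*hbar)*U_b/lam_c))
\end{verbatim}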
To represent the $\hat{U}$'s we make use of the commutation relations \begin{eqnarray} \left[ \hat{\lambda}_b, \hat{U}_b\right] &=& \ell_0 \frac{\hat{U}_b}{\hat{\lambda}_c}, \\ \left[ \hat{\lambda}_c, \hat{U}_c\right] &=& \ell_0 \frac{\hat{U}_c}{\hat{\lambda}_b} , \end{eqnarray} which follow from the application of Dirac's prescription to the Poisson brackets (\ref{PBlambdaU}). They lead to \begin{equation} \hat{U}_{b}|\vec{\lambda}\rangle=|\lambda_{b}+\ell_{0}/\lambda_{c},\lambda_{c}\rangle,\\ \qquad\hat{U}_{c}|\vec{\lambda}\rangle=|\lambda_{b},\lambda_{c}+\ell_{0}/\lambda_{b}\rangle. \end{equation} Now we proceed to implement the quantum version of (\ref{Phib}) and (\ref{Phic}) as \cite{Ashtekar:2010gz} \begin{equation} \hat{\Phi}_{b}:=\frac{1}{\sqrt{\Delta}}\left[ \sqrt{\hat{\lambda}_{b}}\,\widehat{\sin{\sqrt{\Delta}k_{b}}}\,\sqrt{\hat{\lambda}_{b}}\right] \quad\text{and}\quad\hat{\Phi}_{c}:=\frac{1}{\sqrt{\Delta}}\left[ \sqrt{\hat{\lambda}_{c}}\,\widehat{\sin{\sqrt{\Delta}k_{c}}}\,\sqrt{\hat{\lambda}_{c}}\right] ,\label{phi} \end{equation} whose action on the basis is \begin{align} \hat{\Phi}_{b}|\vec{\lambda}\rangle & =-\frac{\text{i}\sqrt{\lambda_{b}}}{2\sqrt{\Delta}}\left[ \sqrt{\lambda_{b}+\ell_{0}/\lambda_{c}}|\lambda_{b}+\ell_{0}/\lambda_{c},\lambda_{c}\rangle-\sqrt{\lambda_{b}-\ell_{0}/\lambda_{c}}|\lambda_{b}-\ell_{0}/\lambda_{c},\lambda_{c}\rangle\right] ,\label{phiactua}\\ \hat{\Phi}_{c}|\vec{\lambda}\rangle & =-\frac{\text{i}\sqrt{\lambda_{c}}}{2\sqrt{\Delta}}\left[ \sqrt{\lambda_{c}+\ell_{0}/\lambda_{b}}|\lambda_{b},\lambda_{c}+\ell_{0}/\lambda_{b}\rangle-\sqrt{\lambda_{c}-\ell_{0}/\lambda_{b}}|\lambda_{b},\lambda_{c}-\ell_{0}/\lambda_{b}\rangle\right] . \label{phiactua2} \end{align} Hence the quantum version of the hamiltonian constraint (\ref{Hmulk}), using (\ref{Phib}) and (\ref{Phic}) first at the classical level and then their quantum version (\ref{phi}), becomes \begin{equation} \hat{H}_{\mu}=-\frac{\hbar}{2\gamma^{2}}\left[ \hat{\Phi}_{b}\hat{\Phi}_{c}+\hat{\Phi}_{c}\hat{\Phi}_{b}+(\hat{\Phi}_{b})^{2}\frac{\hat{\lambda}_{c}}{2\hat{\lambda}_{b}}+\frac{\hat{\lambda}_{c}}{2\hat{\lambda}_{b}}(\hat{\Phi}_{b})^{2}+\frac{\gamma^{2}}{G\hbar}\frac{\hat{\lambda}_{b}}{\hat{\lambda}_{c}}\right] , \label{Hmuprima} \end{equation} for which a symmetric ordering has been introduced. Its building blocks are defined to act upon the eigenbasis $|\vec\lambda\rangle$, according to (\ref{phiactua}) and (\ref{phiactua2}), while the $\hat{\lambda}$ factors act diagonally. This hamiltonian will now be used to obtain Feynman's formula for the propagator to go from a state $|\vec{\lambda}_{i};\tau_{i}\rangle$ at proper time $\tau_i$ to $|\vec{\lambda}_{f};\tau_{f}\rangle$ at time $\tau_f > \tau_i$. It takes the form \begin{equation} \langle \vec{\lambda}_{f};\tau_{f} |\vec{\lambda}_{i};\tau_{i}\rangle=\langle \vec{\lambda}_{f}|\text{ e}^{-\text{i}\Delta\tau\hat{H}_{\mu}/\hbar}|\vec{\lambda}_{i}\rangle, \qquad \Delta \tau= \tau_f-\tau_i. \label{aprima1} \end{equation} To explicitly calculate such a propagator we consider, as usual, a partition of the time interval $\Delta \tau$ and split, accordingly, the time evolution operator as \begin{equation} \label{Nexp} \text{ e}^{-\text{i}\Delta\tau\hat{H}_{\mu}/\hbar}=\prod_{n=0}^{N-1}\text{ e}^{-\text{i}\epsilon\hat{H}_{\mu}/\hbar},\quad\text{where}\quad N\epsilon=\Delta\tau.
\end{equation} Then using the resolution of the identity \begin{equation} \hat{\mathbb{I}}=\sum_{\vec{\lambda}_{n}}|\vec{\lambda}_{n}\rangle\langle \vec{\lambda}_{n}|, \end{equation} together with (\ref{Nexp}), allows us to rewrite (\ref{aprima1}) as \begin{equation} \langle \vec{\lambda}_{f};\tau_{f} |\vec{\lambda}_{i};\tau_{i}\rangle=\sum_{\vec{\lambda}_{N-1},...,\vec{\lambda}_{1}}\prod_{n=0}^{N-1}\langle \vec{\lambda}_{n+1}|\text{e}^{-\text{i}\epsilon\hat{H}_{\mu}/\hbar}|\vec{\lambda}_{n}\rangle, \label{aprima} \end{equation} where $\vec{\lambda}_{f}=\vec{\lambda}_{N}$ and $\vec{\lambda}_{i}=\vec{\lambda}_{0}$. Next we consider that, for small $\epsilon$, \begin{equation} \langle\vec{\lambda}_{n+1}|\text{e}^{-\text{i}\epsilon\hat{H}_{\mu}/\hbar}|\vec{\lambda}_{n}\rangle=\delta_{\vec{\lambda}_{n+1},\vec{\lambda}_{n}}-\text{i}\frac{\epsilon}{\hbar}\langle\vec{\lambda}_{n+1}|\hat{H}_{\mu}|\vec{\lambda}_{n}\rangle+\mathcal{O}(\epsilon^{2}). \label{aprimepartial} \end{equation} The matrix elements $\langle\vec{\lambda}_{n+1}|\hat{H}_{\mu}|\vec{\lambda}_{n}\rangle$ can be calculated using (\ref{phiactua}) and (\ref{Hmuprima}): \begin{align} \label{LHL} \langle\vec{\lambda}_{n+1}|\hat{H}_{\mu}|\vec{\lambda}_{n}\rangle =\frac{\hbar}{8\gamma^{2}\Delta}&\left\{\sqrt{\lambda_{b,n+1}\lambda_{b,n}}\sqrt{\lambda_{c,n}\lambda_{c,n+1}}\left(P_n +Q_n\right)+\frac{4\gamma^{2}\Delta}{G\hbar}\frac{\lambda_{b,n}}{\lambda_{c,n}}\delta_{\vec{\lambda}_{n},\vec{\lambda}_{n+1}}\right. \nonumber\\ &\hspace{1cm} \left. +\sqrt{\lambda_{b,n}\lambda_{b,n+1}}\frac{\lambda_{b,n}+\lambda_{b,n+1}}{2}\left( \frac{\lambda_{c,n}}{2\lambda_{b,n}}+\frac{\lambda_{c,n+1}}{2\lambda_{b,n+1}}\right)R_n \right\}, \end{align} where \begin{align} P_n&=\left( \delta_{\lambda_{b,n+1},\lambda_{b,n}+\ell_{0}/\lambda_{c,n+1}}-\delta_{\lambda_{b,n+1},\lambda_{b,n}-\ell_{0}/\lambda_{c,n+1}}\right) \left( \delta_{\lambda_{c,n+1},\lambda_{c,n}+\ell_{0}/\lambda_{b,n}}-\delta_{\lambda_{c,n+1},\lambda_{c,n}-\ell_{0}/\lambda_{b,n}}\right),\nonumber\\ Q_n&=\left( \delta_{\lambda_{b,n+1},\lambda_{b,n}+\ell_{0}/\lambda_{c,n}}-\delta_{\lambda_{b,n+1},\lambda_{b,n}-\ell_{0}/\lambda_{c,n}}\right) \left( \delta_{\lambda_{c,n+1},\lambda_{c,n}+\ell_{0}/\lambda_{b,n+1}}-\delta_{\lambda_{c,n+1},\lambda_{c,n}-\ell_{0}/\lambda_{b,n+1}}\right), \nonumber\\ R_n&=\left( \delta_{\lambda_{b,n+1},\lambda_{b,n}+2\ell_{0}/\lambda_{c,n+1}}-2\delta_{\lambda_{b,n+1},\lambda_{b,n}}+\delta_{\lambda_{b,n+1},\lambda_{b,n}-2\ell_{0}/\lambda_{c,n+1}}\right)\delta_{\lambda_{c,n+1},\lambda_{c,n}}. \end{align} At this point we can see from (\ref{phiactua})-(\ref{phiactua2}), and hence from the matrix elements of $\hat{H}_{\mu}$ given by Eq. (\ref{Hmuprima}), that states supported on a regular (equally spaced) $\vec{\lambda}$-lattice do not fit into our quantum KS model. This is a difficulty that also appears in the Bianchi I models and thus, to proceed further, we can use the approximation proposed in \cite{Liu:2012xp} for that case. It consists of exploiting the fact that we are looking for a continuous yet quantum effective approximation \cite{Varadarajan:1999it,Ashtekar:2002vh}. Hence, effectively, one can replace, at leading order, the Kronecker deltas in (\ref{LHL}) by Dirac deltas. This implies that one is approximating, at leading order, a description on ${\cal H}_{\mathrm Poly}^{(2)}$ by one on ${\cal H}_{\mathrm Sch}^{(2)}={\cal H}_{\mathrm Sch}\otimes {\cal H}_{\mathrm Sch}$, with ${\cal H}_{\mathrm Sch}=L^2(\mathbb{R},dx)$, so that $\vec{\lambda}$ is now a continuous variable.
Within this approximation it is useful to adopt the following integral form of Dirac's delta \begin{equation} \delta(\lambda_{n+1}-\lambda_{n})=\frac{1}{2\pi\gamma}\int_{\mathbb{R}}\text{d}\varphi_{n+1}\text{ e}^{-\text{i}\varphi_{n+1}(\lambda_{n+1}-\lambda_{n})/\gamma}. \end{equation} Then, eq. (\ref{aprimepartial}) can be expressed as \begin{align} \langle\vec{\lambda}_{n+1}|\text{e}^{-\text{i}\epsilon\hat{H}_{\mu}/\hbar}|\vec{\lambda}_{n}\rangle=&\left( \frac{1}{2\pi\gamma}\right) ^{2}\int d\vec{\varphi}_{n+1}\text{ e}^{-\text{i}\vec{\varphi}_{n+1}\cdot(\vec{\lambda}_{n+1}-\vec{\lambda}_{n})/\gamma}\nonumber\\ &\hspace{1.5cm}\times\left\{ 1+\text{i}\frac{\epsilon}{2\gamma^{2}\Delta}\left[M_n+N_n + L_n+\frac{\gamma^{2}\Delta}{G\hbar}\frac{\lambda_{b,n}}{\lambda_{c,n}}\right] \right\} +\mathcal{O}(\epsilon^{2}), \end{align} where \begin{align} M_n&=\sqrt{\lambda_{b,n+1}\lambda_{b,n}}\sqrt{\lambda_{c,n}\lambda_{c,n+1}}\sin(\sqrt{\Delta}\varphi_{b,n+1}/\lambda_{c,n+1})\sin(\sqrt{\Delta}\varphi_{c,n+1}/\lambda_{b,n}),\\ N_n&=\sqrt{\lambda_{b,n+1}\lambda_{b,n}}\sqrt{\lambda_{c,n}\lambda_{c,n+1}}\sin(\sqrt{\Delta}\varphi_{b,n+1}/\lambda_{c,n})\sin(\sqrt{\Delta}\varphi_{c,n+1}/\lambda_{b,n+1}),\\ L_n&=\sqrt{\lambda_{b,n}\lambda_{b,n+1}}\frac{\lambda_{b,n}+\lambda_{b,n+1}}{2}\left( \frac{\lambda_{c,n}}{2\lambda_{b,n}}+\frac{\lambda_{c,n+1}}{2\lambda_{b,n+1}}\right)\sin(\sqrt{\Delta}\varphi_{b,n+1}/\lambda_{c,n+1})^{2}. \end{align} Here $\vec{\varphi}=(\varphi_{b},\varphi_{c})$. This last expression allows us to rewrite the propagator in the form \begin{equation} \label{TA} \langle \vec{\lambda}_{f};\tau_{f} |\vec{\lambda}_{i};\tau_{i}\rangle =\left( \frac{1}{2\pi\gamma}\right) ^{2N} \int \text{d}\vec{\lambda}_{N-1}...\text{d}\vec{\lambda}_{1}\int\text{d}\vec{\varphi}_{N}...\text{d}\vec{\varphi}_{1}\text{ e}^{\text{i}S_{\mu}^{N}/\hbar}+\mathcal{O}(\epsilon^{2}), \end{equation} where \begin{align}\label{SNprime} S_{\mu}^{N} & =\epsilon\sum_{n=0}^{N-1}-\frac{\hbar}{\gamma}\vec{\varphi}_{n+1}\cdot\frac{\vec{\lambda}_{n+1}-\vec{\lambda}_{n}}{\epsilon}+\frac{\hbar}{2\gamma^{2}\Delta}\left[ \sqrt{\lambda_{b,n+1}\lambda_{b,n}}\sqrt{\lambda_{c,n}\lambda_{c,n+1}}\right. \nonumber \\ & \times\left( \sin(\sqrt{\Delta}\varphi_{b,n+1}/\lambda_{c,n+1})\sin(\sqrt{\Delta}\varphi_{c,n+1}/\lambda_{b,n})+\sin(\sqrt{\Delta}\varphi_{b,n+1}/\lambda_{c,n})\sin(\sqrt{\Delta}\varphi_{c,n+1}/\lambda_{b,n+1})\right) \nonumber\\ & \left. +\sqrt{\lambda_{b,n}\lambda_{b,n+1}}\frac{\lambda_{b,n}+\lambda_{b,n+1}}{2}\left( \frac{\lambda_{c,n}}{2\lambda_{b,n}}+\frac{\lambda_{c,n+1}}{2\lambda_{b,n+1}}\right) \sin(\sqrt{\Delta}\varphi_{b,n+1}/\lambda_{c,n+1})^{2}+\frac{\gamma^{2}\Delta}{G\hbar}\frac{\lambda_{b,n}}{\lambda_{c,n}}\right] . \end{align} Now we take the limit $N\rightarrow\infty$ and Eq. (\ref{SNprime}) takes the form \begin{align} S_{\mu}= & \lim_{N\rightarrow\infty}S_{\mu}^{N}= \int_{\tau_{i}}^{\tau_{f}}\text{d}\tau \left\{ -\frac{\hbar}{\gamma}\vec{\varphi}\cdot\dot{\vec{\lambda}}\nonumber\label{Sprime} \right.\\ & \left. +\frac{\hbar}{2\gamma^{2}\Delta}\left[ \lambda_{b}\lambda_{c}\sin(\sqrt{\Delta}\varphi_{b}/\lambda_{c})\left( 2\sin(\sqrt{\Delta}\varphi_{c}/\lambda_{b})+\sin(\sqrt{\Delta}\varphi_{b}/\lambda_{c})\right) +\frac{\gamma^{2}\Delta}{G\hbar}\frac{\lambda_{b}}{\lambda_{c}}\right] \right\}.
\end{align} Therefore we can see that the effective hamiltonian is \begin{equation} H_{\mu}^{\text{eff}}=-\frac{\hbar}{2\gamma^{2}\Delta}\left[ \lambda_{b}\lambda_{c}\sin(\sqrt{\Delta}\varphi_{b}/\lambda_{c})\left( 2\sin(\sqrt{\Delta}\varphi_{c}/\lambda_{b})+\sin(\sqrt{\Delta}\varphi_{b}/\lambda_{c})\right) +\frac{\gamma^{2}\Delta}{G\hbar}\frac{\lambda_{b}}{\lambda_{c}}\right] . \end{equation} Using (\ref{muprimvar}) to return to the original variables $(b,c,p_{b},p_{c})$ we finally get \begin{equation} H_{\mu}^{\text{eff}}=-\frac{1}{2G\gamma^{2}}\left[ 2\sqrt{p_{c}}\frac{\sin \mu_{b}b}{\mu_{b}}\frac{\sin\mu_{c}c}{\mu_{c}}+\frac{p_{b}}{\sqrt{p_{c}}}\left( \frac{\sin\mu_{b}b}{\mu_{b}}\right) ^{2}+\gamma^{2}\frac{p_{b}}{\sqrt{p_{c}}}\right] . \label{eff ham} \end{equation} Let us emphasize that the classical hamiltonian (\ref{class-H}) is recovered by taking the small argument limit $\vert \mu_{l}l\vert \ll 1$ ($l=b,c$) in the effective hamiltonian (\ref{eff ham}); i.e., the classical model, namely, the classical hamiltonian and equations of motion, are recovered from the effective one in the regime $\vert \mu_{l}l\vert \ll 1$. The hamiltonian (\ref{eff ham}) is the key piece defining and governing the effective quantum geometry. Effective states $(b,c,p_{b},p_{c})$ lie in the constraint surface $H_{\mu}^{\text{eff}}=0$, ``evolving'' along the gauge integral curves of the hamiltonian vector field generated by $H_{\mu}^{\text{eff}}$. In the next section we will focus on analyzing how geometrical quantities behave in the effective quantum scenario provided by $H_{\mu}^{\text{eff}}$. \section{Effective loop quantum dynamics} \label{EffectiveKS} To investigate the effective geometry, let us begin by considering the Hamilton equations associated with the hamiltonian (\ref{eff ham}), $\dot{\zeta}=\{\zeta,H_{\mu}^{\text{eff}}\}$, with the Poisson brackets (\ref{poisson-brackets}), and the effective scalar constraint $H_{\mu}^{\text{eff}}=0$, \begin{equation} \dot{b}=\frac{-\gamma^{2}\mu_{b}^{2}-\sin\left(b\mu_{b}\right) \left[\sin\left(b\mu_{b}\right) -2c\mu_{c}\cos\left( c\mu_{c}\right) +2\sin\left( c\mu_{c}\right) \right] }{2\gamma \sqrt{\Delta}\mu_{b}}, \label{beffdot} \end{equation} \begin{equation} \dot{c}=\frac{\gamma^{2}\mu_{b}^{2}+2b\mu_{b}\cos\left( b\mu_{b}\right) \left[ \sin\left( b\mu_{b}\right) +\sin\left( c\mu_{c}\right) \right] -\sin\left( b\mu_{b}\right) \left[ \sin\left( b\mu_{b}\right)+2c\mu_{c}\cos\left( c\mu_{c}\right) +2\sin\left( c\mu_{c}\right) \right] }{2\gamma \sqrt{\Delta}\mu_{c}}, \label{ceffdot} \end{equation} \begin{equation} \dot{p}_{b}=\frac{\sqrt{\Delta}\cos\left( b\mu_{b}\right) \left[ \sin\left( b\mu_{b}\right) +\sin\left( c\mu_{c}\right) \right] }{\gamma\mu_{b}\mu_{c}}, \label{pbeffdot} \end{equation} \begin{equation} \dot{p}_{c}=\frac{2\sqrt{\Delta}\cos\left( c\mu_{c}\right) \sin\left( b\mu_{b}\right) }{\gamma\mu_{b}^{2}}. \label{pceffdot} \end{equation} At this point two remarks are in order. First, given that both $p_{b}$ and $p_{c}$ are strictly positive quantities, the first term in (\ref{eff ham}) must be strictly negative in order to satisfy the constraint. Second, since $\gamma \mu_{b}\mu_{c}=\gamma \Delta /p_{b}$ and $\gamma \mu_{b}^{2}=\gamma \Delta /p_{c}$ [cf. Eq. (\ref{mu})], Eq.(\ref{pbeffdot}) and Eq.(\ref{pceffdot}) can be written in the form $d(\ln p_{i})/dt=f_{i}(b\mu_{b},c\mu_{c})$, $i=b,c$. A solution to the effective model is a sufficiently smooth{\footnote{Let $f=(l,p_{l})$, $l=b,c$, be a solution to Eqs.
(\ref{beffdot})-(\ref{pceffdot}), and let us suppose that $f$ is, at least, of class $C^{1}$. Thus, it follows from Eqs. (\ref{beffdot})-(\ref{pceffdot}) that $f$ is, in fact, a $C^{\infty}$ function.}}, real solution to Eqs. (\ref{beffdot})-(\ref{pceffdot}) which, in addition, satisfies the scalar constraint (\ref{eff ham}). Let us refer to solutions of the effective model as effective solutions. Since the dynamics is pure gauge, each point in the constraint surface is an appropriate initial condition for effective solutions. Now, to fix notation, let $\chi_{0}$ be the initial condition $(b_{0},c_{0},p_{b0},p_{c0})$ at $t=t_0$ (the reference initial time) for the effective solution $\chi=(b,c,p_{b},p_{c})$. From Eq. (\ref{eff ham}), it follows that effective solutions $\chi$ must satisfy, in particular, that \begin{equation} \sin(\mu_{b}b)=-\sin(\mu_{c}c)\pm \sqrt{\sin^{2}(\mu_{c}c)-\frac{\Delta \gamma^2}{p_{c}}}. \end{equation} Since effective solutions $\chi$ are real ones, the discriminant must necessarily be nonnegative, so that $\sin^{2}(\mu_{c}c)\geq \Delta \gamma^2/p_{c}$. Thus, in particular, we have that $p_{c}$ is bounded from below\footnote{This bound is consistent with that found in Ref. \cite{Chiou:2008nm}, where $p_{c}\geq\Delta\gamma^{2}/3.$} by $\Delta\gamma^{2}$; i.e., \begin{equation} p_{c}\geq\Delta\gamma^{2}. \label{cota pc} \end{equation} This expression implies that the area of $S^{2}$ [cf. Eq. (\ref{leng-are-vol})] cannot be less than $4\pi\Delta\gamma^{2}$ in the effective geometry. By using inequality (\ref{cota pc}) in the relations defining $\mu_{b}$ and $\mu_{c}$ [cf. Eq. (\ref{mu})], we get that \begin{equation} \label{mub-muc-fbounds} \frac{\Delta \gamma}{p_{c}}\leq \mu_{b}\leq \frac{1}{\gamma},\qquad \frac{\Delta \gamma}{p_{b}}\leq \mu_{c}\leq \frac{p_{c}}{\gamma p_{b}}. \end{equation} Thus, in particular, \begin{equation} \label{cota mub} \gamma\mu_{b}\leq1. \end{equation} Using again that the first term in (\ref{eff ham}) must be strictly negative, $\sin(\mu_{b}b)$ and $\sin(\mu_{c}c)$ must have opposite constant signs, \begin{equation} \sin(\mu_{l}l)>0,\quad \sin(\mu_{l'}l')<0, \label{sinmubsinmuc} \end{equation} with $l$ being equal to $b$ or $c$, and $l'$ being the complement of $l$; that is, for $l=b$ ($l=c$), $l'=c$ ($l'=b$). Strict inequalities (\ref{sinmubsinmuc}), and the continuity of the functions $\mu_{b}b$ and $\mu_{c}c$, imply that $2n_{0}\pi<\mu_{l}l <(2n_{0}+1)\pi$ and $(2m_{0}-1)\pi<\mu_{l'}l' <2m_{0}\pi$, for some $n_{0},m_{0}\in \mathbb{Z}$ fixed and determined by the effective solution $\chi$; in fact, by its corresponding initial condition $\chi_0$. Indeed, given an initial condition $\chi_{0}$, we will have that $\sin(\mu_{l}l)_{0}>0$ and that $\sin(\mu_{l'}l')_{0}<0$, so that $n_{0}$ is the greatest integer $n$ satisfying that $n<(\mu_{l}l)_{0}/2\pi$, whereas $m_{0}$ is the least integer $m$ satisfying that $(\mu_{l'}l')_{0}/2\pi<m$. By continuity, $\mu_{l}l$ and $\mu_{l'}l'$ must remain, respectively, in $\big{(}2n_{0}\pi,(2n_{0}+1)\pi\big{)}$ and in $\big{(}(2m_{0}-1)\pi,2m_{0}\pi\big{)}$; otherwise, the constraint would be violated. We then have disjoint sectors, and there are as many of them as distinct pairs $(n_{0},m_{0})$ defined by the initial conditions. Although we will perform our analysis by considering a generic sector, it is worth remarking that it is only within the $(0,0)$-sector that the regime $\mu_{d} \vert d\vert \ll 1$ can be consistently treated.
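The constraint surface structure just described, and in particular the bound (\ref{cota pc}), can also be explored numerically by evolving the gauge flow generated by $H_{\mu}^{\text{eff}}$. The following Python sketch is only an illustration and not part of the proofs below: it uses hypothetical values $G=\hbar=\gamma=\Delta=1$ and on-shell initial data in the $(0,0)$-sector, builds Hamilton's equations directly from (\ref{eff ham}) with the brackets (\ref{poisson-brackets}), and monitors the constraint and $p_{c}$ along the evolution.
\begin{verbatim}
import numpy as np
import sympy as sp
from scipy.integrate import solve_ivp

b, c, pb, pc = sp.symbols('b c p_b p_c', real=True)
G, gamma, Delta = 1.0, 1.0, 1.0            # hypothetical illustrative values

mu_b = sp.sqrt(Delta/pc)
mu_c = sp.sqrt(Delta*pc)/pb
H = -(1/(2*G*gamma**2))*(2*sp.sqrt(pc)*sp.sin(mu_b*b)/mu_b*sp.sin(mu_c*c)/mu_c
     + pb/sp.sqrt(pc)*(sp.sin(mu_b*b)/mu_b)**2 + gamma**2*pb/sp.sqrt(pc))

# Hamilton's equations from {b,p_b} = G*gamma and {c,p_c} = 2*G*gamma
eqs = [G*gamma*sp.diff(H, pb), 2*G*gamma*sp.diff(H, pc),
       -G*gamma*sp.diff(H, b), -2*G*gamma*sp.diff(H, c)]
rhs  = sp.lambdify((b, c, pb, pc), eqs, 'numpy')
Hnum = sp.lambdify((b, c, pb, pc), H, 'numpy')

# On-shell initial data in the (0,0)-sector: fix (c0, p_b0, p_c0) and solve H = 0 for b0
pc0, pb0, xc = 100.0, 50.0, 0.5            # xc = initial value of mu_c*c (small)
mub0, muc0 = np.sqrt(Delta/pc0), np.sqrt(Delta*pc0)/pb0
c0 = xc/muc0
b0 = np.arcsin(-np.sin(xc) + np.sqrt(np.sin(xc)**2 - Delta*gamma**2/pc0))/mub0

sol = solve_ivp(lambda t, y: rhs(*y), (0.0, 25.0), [b0, c0, pb0, pc0],
                rtol=1e-9, atol=1e-10)
bs, cs, pbs, pcs = sol.y
print("max |H_eff| along the flow:", np.max(np.abs(Hnum(bs, cs, pbs, pcs))))
print("min p_c =", pcs.min(), " (bound Delta*gamma^2 =", Delta*gamma**2, ")")
\end{verbatim}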
Let us introduce a more symmetric notation through $$ N_{l}= \begin{cases} (2n_{0}+1) ,& \text{if } n_{0}\geq 0\\ 2\vert n_{0}\vert , & \text{if } n_{0}<0 \end{cases}, \qquad N_{l'}= \begin{cases} 2m_{0} ,& \text{if } m_{0}\geq 1\\ (2\vert m_{0}\vert +1) , & \text{if } m_{0}\leq 0 \end{cases} $$ In terms of $N_{d}$ (with $d$ being $b$ or $c$) we have that $\mu_{d}\vert d \vert$ is confined to be in $\big( \pi(N_{d}-1),\pi N_{d}\big)$. Explicitly, given an effective solution $\chi$, the quantities $\mu_{b}\vert b \vert$ and $\mu_{c}\vert c \vert$ are bounded by $\pi(N_{b}-1)<\mu_{b}\vert b \vert <\pi N_{b}$ and by $ \pi(N_{c}-1)<\mu_{c}\vert c \vert <\pi N_{c}$, where $N_{b}$ and $N_{c}$ are (strictly) positive fixed integers determined by the initial condition $\chi_{0}$. Now, inequalities $\pi(N_{d}-1)<\mu_{d}\vert d \vert$ and $\mu_{d}\vert d \vert <\pi N_{d}$ imply that $0<\vert\sin(\mu_{d}d)\vert \leq 1$ and that $0\leq \vert\cos(\mu_{d}d)\vert < 1$. Thus, for any given nonvanishing phase-space function $g$, we will have the strict inequality \begin{equation} \label{cos-stbound} \vert g\cos(\mu_{d}d)\vert< \vert g \vert. \end{equation} In particular, we have that the strict inequality $\vert\sin(\mu_{d}d)\cos(\mu_{d'}d')\vert<1$ must be satisfied. Now, let us consider $\vert \dot{p}_{b}\vert$ and $\vert \dot{p}_{c}\vert$. From Eqs. (\ref{pbeffdot})-(\ref{pceffdot}) (written in terms of the explicit expressions for $\gamma \mu_{b}\mu_{c}$ and $\gamma \mu_{b}^{2}$), it follows by using the triangle inequality, the boundedness of the sine function and Eq.(\ref{cos-stbound}) that \begin{equation} \label{dpb-dpc-bounds} \left\vert \dot{p}_{b}\right\vert < \left(\frac{3}{2\gamma \sqrt{\Delta}}\right)p_{b}, \qquad \left\vert \dot{p}_{c}\right\vert < \left(\frac{2}{\gamma \sqrt{\Delta}}\right)p_{c}. \end{equation} To get the first inequality we have used, in addition, the relationship $\sin(2b\mu_{b})=2\cos(b\mu_{b})\sin(b\mu_{b})$ in Eq. (\ref{pbeffdot}). Similar calculations employing relations (\ref{cota pc})-(\ref{mub-muc-fbounds}) in Eqs. (\ref{beffdot}) and (\ref{ceffdot}) show that $\vert \dot{b}\vert$ and $\vert \dot{c}\vert$ are bounded from above by \begin{equation} \vert \dot{b}\vert < \frac{1}{2\sqrt{\Delta}}+\left(\frac{3+2\mu_{c}\vert c \vert}{2\gamma^{2}\Delta^{3/2}}\right)p_{c}\leq \left(\frac{2+\mu_{c}\vert c\vert}{\gamma^{2}\Delta^{3/2}}\right)p_{c},\quad \vert \dot{c}\vert <\left(\frac{4+2\mu_{c}\vert c \vert+3\mu_{b}\vert b \vert}{2\gamma^{2}\Delta^{3/2}}\right) p_{b}. \end{equation} Since $\mu_{d}\vert d\vert< \pi N_{d}$, we obtain that \begin{equation} \label{db-dc-bounds} \vert \dot{b}\vert < \left(\frac{2+\pi N_{c}}{\gamma^{2}\Delta^{3/2}}\right)p_{c} ,\quad \vert \dot{c}\vert < \left(\frac{4+ 2\pi N_{c}+3\pi N_{b}} {2\gamma^{2}\Delta^{3/2}}\right) p_{b}. \end{equation} Inequalities (\ref{dpb-dpc-bounds}) and (\ref{db-dc-bounds}) imply that $\vert \dot{\chi} \vert$ is bounded from above by $F_{(N_{b},N_{c})}\vert \chi \vert$, where $$F_{(N_{b},N_{c})}^{2}=\max\left\{\frac{9}{4\gamma^{2}\Delta}+\frac{(4+2\pi N_{c}+3\pi N_{b})^{2}}{4\gamma^{4}\Delta^{3}}\, , \,\frac{4}{\gamma^{2}\Delta}+\frac{(2+\pi N_{c})^{2}}{\gamma^{4}\Delta^{3}}\right\}.$$ All effective solutions in the sector labelled by $(N_{b},N_{c})$ turn out to be defined for all $t\in \mathbb{R}$. In addition, let us remark that effective solutions are bounded by the exponential function. Indeed, recall that Eq.(\ref{pbeffdot}) and Eq.(\ref{pceffdot}) can be written in the form $d(\ln p_{d})/dt=f_{d}(b\mu_{b},c\mu_{c})$.
Thus, combining Eqs. (\ref{pceffdot}) and (\ref{cota pc}), employing the boundedness of the sine and Eq. (\ref{cos-stbound}), it is not difficult to see that \begin{equation} \label{pc-bounds} \Delta\gamma^{2}\, \leq \, p_{c} \, <\, p_{c0}\:e^{ 2\vert t-t_{0} \vert/ \gamma{\sqrt{\Delta}}}. \end{equation} Similarly, from Eq. (\ref{pbeffdot}) it follows that \begin{equation} \label{pb-bounds} p_{b0}\:e^{-3\vert t-t_{0}\vert/\gamma\sqrt{4\Delta}}< p_{b}<p_{b0}\:e^{3\vert t-t_{0}\vert/\gamma\sqrt{4\Delta}}. \end{equation} Since $\pi(N_{d}-1)<\mu_{d}\vert d \vert <\pi N_{d}$, using inequalities (\ref{cota pc}) and (\ref{pc-bounds})-(\ref{pb-bounds}), we get that \begin{equation} \label{b-mod-bound} \gamma \pi (N_{b}-1)<\vert b \vert < \frac{\pi N_{b}}{(\mu_{b})_{0}}e^{\vert t-t_{0} \vert/ \gamma{\sqrt{\Delta}}}, \end{equation} \begin{equation} \label{c-mod-bound} \frac{\pi (N_{c}-1)}{(\mu_{c})_{0}}e^{-5\vert t-t_{0}\vert/\gamma\sqrt{4\Delta}}<\vert c \vert < \frac{\pi N_{c}\,p_{b0}}{\Delta \gamma}e^{3\vert t-t_{0}\vert/\gamma\sqrt{4\Delta}}. \end{equation} Thus, in contrast to the classical model, $b$ and $c$ are finite quantities at every time: there is no finite proper time $t_f$ at which $b$ and $c$ become infinite. Besides, {\emph{for all}} $t\in \mathbb{R}$, $p_{c}$ is bounded from below by a positive number, namely $\Delta \gamma^{2}$, and $p_{b}$ is a strictly positive quantity as well. Note, in addition, that Eqs. (\ref{pc-bounds})-(\ref{pb-bounds}) prevent the metric (\ref{metrica cinematica}) from having a coordinate singularity in the effective approach. Let us now focus on the behavior of geometrical and invariant quantities. From Eqs. (\ref{pbeffdot})-(\ref{pceffdot}), using the explicit expressions for $\gamma \mu_{b}\mu_{c}$ and $\gamma \mu_{b}^{2}$, it immediately follows that \begin{equation} \label{dtheta-b} \left\vert \theta_{\rm{eff}}\right\vert =\left\vert\frac{\dot{p}_{b}}{p_{b}}+\frac{\dot{p}_{c}}{2p_{c}}\right\vert = \frac{1}{\gamma\sqrt{\Delta}} \left\vert \sin(b\mu_{b}+c\mu_{c})+\frac{1}{2}\sin(2b\mu_{b})\right\vert \leq \frac{3}{\gamma\sqrt{4\Delta}}, \end{equation} \begin{equation} \label{dshear-b} \left\vert \sigma_{\rm{eff}}\right\vert = \frac{1}{\sqrt{3}}\left\vert\frac{\dot{p}_{b}}{p_{b}}-\frac{\dot{p}_{c}}{p_{c}}\right\vert =\frac{1}{\gamma\sqrt{3\Delta}} \left\vert \sin(c\mu_{c}-b\mu_{b})-\cos(c\mu_{c})\sin(b\mu_{b})+\frac{1}{2}\sin(2b\mu_{b})\right\vert < \frac{5}{\gamma\sqrt{12\Delta}}, \end{equation} where we have used Eq. (\ref{cos-stbound}) to get the strict inequality in the last term of Eq. (\ref{dshear-b}). The boundedness of the expansion scalar ensures, in particular, that the volume of a cell will remain different from zero at any finite proper time. Indeed, let $V_{r}=(4\pi p_{b}\sqrt{p_{c}})\,\vert_{t_{r}}$ be the volume of the cell ${\cal{V}}=[0,L]\times S^{2}$ at an arbitrary reference proper time $t_{r}$ of comoving observers in the effective Kantowski-Sachs geometry (with $N=1$), and let $t$ be any other finite proper time. It is a simple matter to see that $\vert \theta_{\rm{eff}}\vert\leq 3/\gamma\sqrt{4\Delta}$ implies that \begin{equation} \label{two-bounds} V_{r}e^{-3\vert t-t_{r}\vert/\gamma\sqrt{4\Delta}}\leq V\leq V_{r}e^{3\vert t-t_{r}\vert/\gamma\sqrt{4\Delta}}, \end{equation} where $V$ is the volume of the cell $\cal{V}$ at time $t$.
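Before proceeding, note that the purely trigonometric content of Eqs. (\ref{dtheta-b})-(\ref{dshear-b}) can be checked directly. The following sketch is only an illustration, not part of the original derivation: it scans the two angles $b\mu_{b}$ and $c\mu_{c}$ over one period and confirms the numerical factors $3/2$ and $5/2$ behind the quoted bounds.
\begin{verbatim}
import numpy as np

angles = np.linspace(-np.pi, np.pi, 1001)
A, B = np.meshgrid(angles, angles)          # A = b*mu_b, B = c*mu_c

expansion = np.abs(np.sin(A + B) + 0.5*np.sin(2.0*A))
shear     = np.abs(np.sin(B - A) - np.cos(B)*np.sin(A) + 0.5*np.sin(2.0*A))

print(expansion.max())  # approx 1.50, saturating the factor 3/2 in Eq. (dtheta-b)
print(shear.max())      # approx 2.24, strictly below the factor 5/2 in Eq. (dshear-b)
\end{verbatim}
After multiplying by the prefactors $1/(\gamma\sqrt{\Delta})$ and $1/(\gamma\sqrt{3\Delta})$, these maxima reproduce the bounds $3/\gamma\sqrt{4\Delta}$ and $5/\gamma\sqrt{12\Delta}$ quoted above (the shear bound is not saturated).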
The volume $V$ is a well-defined, strictly positive quantity at any finite proper time $t\in \mathbb{R}$ and, consequently, the congruence of timelike geodesics defined by comoving observers [i.e., the integral curves of the vector field $\xi^{a}=(\partial/\partial t)^{a}$] will not develop a caustic (at finite proper times). Let us now demonstrate that the effective Ricci and Kretschmann scalars, $R_{\rm{eff}}$ and $K_{\rm{eff}}$, are in fact well-behaved, finite quantities. Since $R_{\rm{eff}}$ and $K_{\rm{eff}}$ contain second order terms in the time derivatives of $p_{b}$ and $p_{c}$, we shall first calculate $\{\dot{p}_{d},H_{\mu}^{\rm{eff}}\}$. By using Eqs. (\ref{pbeffdot})-(\ref{pceffdot}), as well as Eqs. (\ref{beffdot})-(\ref{ceffdot}), a straightforward calculation shows that \begin{eqnarray} \label{pbeffdotdot} \ddot{p}_{b}= \frac{p_{b}}{2\gamma^{2}\Delta}\Bigg{(} &\gamma^{2}&\mu_{b}^{2}\sin(2b\mu_{b})\left[\sin(b\mu_{b})\cos(c\mu_{c})+\frac{1}{4}\sin(2c\mu_{c})\right]\nonumber \\ \, &+&\sin(2c\mu_{c})\left[\sin(c\mu_{c})\cos(b\mu_{b})+\frac{1}{4}\sin(2b\mu_{b})\sin^{2}(b\mu_{b})\right]\nonumber \\ \, &+& \cos(c\mu_{c})\left[\sin(2c\mu_{c})\cos(b\mu_{b})+\cos(c\mu_{c})\sin^{2}(b\mu_{b})\sin(2b\mu_{b})\right](\mu_{b}b-\mu_{c}c)\Bigg{)}, \end{eqnarray} \begin{eqnarray} \label{pceffdotdot} \ddot{p}_{c}=-\cos(b\mu_{b}&-&c\mu_{c})+\frac{p_{c}}{\gamma^{2}\Delta}\Bigg{(}2\sin^{2}(b\mu_{b})\left[1+\cos^{2}(c\mu_{c})\right]-\frac{1}{2}\sin(2b\mu_{b})\sin(2c\mu_{c})\nonumber \\ \, &-& \sin^{2}(b\mu_{b})\cos(b\mu_{b}+c\mu_{c}) +\sin(2b\mu_{b})\left[1+\sin(b\mu_{b})\sin(c\mu_{c})\right](\mu_{c}c-\mu_{b}b)\Bigg{)}. \end{eqnarray} Employing the triangle inequality, condition (\ref{cos-stbound}), and the boundedness of $\mu_{b}$ [cf. (\ref{cota mub})] as well as of the sine function, we get that \begin{equation} \left\vert\ddot{p}_{b}\right\vert< \frac{1}{\gamma^{2}\Delta}\left(\frac{5}{4}+\big{[}\mu_{b}\vert b\vert +\mu_{c}\vert c\vert\big{]}\right)p_{b},\quad \left\vert\ddot{p}_{c}\right\vert < 1+\frac{1}{\gamma^{2}\Delta}\left(\frac{11}{2}+2\big{[}\mu_{b}\vert b\vert +\mu_{c}\vert c\vert\big{]}\right)p_{c}. \end{equation} Using that $\mu_{d}\vert d \vert < \pi N_{d}$, we arrive at \begin{equation} \label{pb-pb-ddot-bounds} \left\vert\ddot{p}_{b}\right\vert< \left(\frac{5+{4\pi [N_{b}+ N_{c}]}}{4\gamma^{2}\Delta}\right)p_{b}, \quad \left\vert\ddot{p}_{c}\right\vert< 1+\left(\frac{11+{4\pi [N_{b}+ N_{c}]}}{2\gamma^{2}\Delta}\right)p_{c}\leq \left(\frac{13+{4\pi [N_{b}+ N_{c}]}}{2\gamma^{2}\Delta}\right)p_{c}, \end{equation} where the last inequality in the second expression follows from $1\leq p_{c}/(\gamma^{2}\Delta)$. In order to simplify notation, let us introduce the quotients $x:=\dot{p_{b}}/p_{b}$, $y:=\dot{p_{c}}/p_{c}$, $v:=\ddot{p_{b}}/p_{b}$ and $w:=\ddot{p_{c}}/p_{c}$. So, inequalities (\ref{dpb-dpc-bounds}) and (\ref{pb-pb-ddot-bounds}) read as follows: \begin{equation} \label{quotients-bounds} \vert x \vert < \frac{3}{2\gamma\sqrt{\Delta}},\quad \vert y \vert < \frac{2}{\gamma\sqrt{\Delta}}, \quad \vert v \vert < \left(\frac{5+{4\pi [N_{b}+ N_{c}]}}{4\gamma^{2}\Delta}\right) ,\quad \vert w \vert < \left(\frac{13+{4\pi [N_{b}+ N_{c}]}}{2\gamma^{2}\Delta}\right). \end{equation} The Ricci scalar, which is given by $R_{\rm{eff}}=2v+w+(2/p_{c})$ [cf. Eq. (\ref{ricciscalarkinematic})], is thus bounded by \begin{equation} \vert R_{\rm{eff}}\vert \leq 2 \vert v \vert + \vert w \vert+\frac{2}{p_{c}}. \end{equation} By using Eq. (\ref{cota pc}) and Eq.
(\ref{quotients-bounds}) we have that the Ricci scalar [in the sector labelled by $(N_{b},N_{c})$] is bounded by \begin{equation} \label{Rbound} \vert R_{\rm{eff}}\vert <\left(\frac{11+{4\pi [N_{b}+ N_{c}]}}{\gamma^{2}\Delta}\right). \end{equation} Let us now focus on the Kretschmann scalar, $K_{\rm{eff}}$. From Eq. (\ref{Kretschmannscalarkinematic}), it is easy to see that in terms of the quotients $x$, $y$, $v$ and $w$, $K_{\rm{eff}}$ is given by \begin{eqnarray} K_{\rm{eff}}=&4&v^{2}+3w^{2}-4vw-8vxy+4wyx+6vy^2\nonumber \\ \,&-&5wy^{2} +6x^{2}y^{2}-8xy^{3}+\frac{7}{2}y^{4}+\frac{2}{p_{c}}y^{2}+\frac{4}{p_{c}^{2}}. \end{eqnarray} Clearly, the effective Kretschmann scalar turns out to be a bounded quantity. The explicit bound is obtained by using the inequalities (\ref{cota pc}) and (\ref{quotients-bounds}), as well as the triangle inequality. A straightforward calculation shows that \begin{equation} \label{Kbound} \vert K_{\rm{eff}} \vert < \frac{\xi}{\gamma^{4}\Delta^{2}}, \qquad \xi=4\left(6[N_{b}+N_{c}]^{2}\pi^{2}+59[N_{b}+N_{c}] \pi+160\right)+\frac{23}{2}.\end{equation} In addition, since $\dot{\theta}_{\rm{eff}}=v-x^{2}+(w-y^{2})/2$, we get for effective solutions that \begin{equation} \left \vert \dot{\theta}_{{\rm{eff}}}\right\vert\leq \vert v \vert+x^{2}+\frac{1}{2}\vert w \vert +\frac{1}{2}y^{2}<\frac{1}{\gamma^{2}\Delta}\left(\frac{35}{4}+ 2(N_{b}+N_{c})\pi \right). \end{equation} This, together with Eqs. (\ref{dtheta-b})-(\ref{dshear-b}), proves that $(R_{00})_{\rm{eff}}$ is a bounded quantity as well. In general, we have that any quantity of the form \begin{equation} \label{g-form} \Lambda:=\sum_{j=1}^{N}C_{j}\left(\ddot{p}_{b}\right)^{n_{j}}\left(\ddot{p}_{c}\right)^{m_{j}}\left(\dot{p}_{b}\right)^{r_{j}}\left(\dot{p}_{c}\right)^{s_{j}}\left(p_{b}\right)^{\alpha_{j}}\left(p_{c}\right)^{\beta_{j}}, \end{equation} where $n_{j}$, $m_{j}$, $r_{j}$ and $s_{j}$ are nonnegative integers, and $\alpha_{j}$ and $\beta_{j}$ are any two real numbers, is a bounded quantity on shell. Indeed, from inequalities (\ref{dpb-dpc-bounds}) and (\ref{pb-pb-ddot-bounds}) it follows that \begin{equation} \label{general-b} \vert \Lambda_{\rm{eff}} \vert < \sum_{j=1}^{N}\vert C_{j}\vert\left(\frac{A_{bc}}{4\gamma^{2}\Delta}\right)^{n_{j}}\left(\frac{B_{bc}}{2\gamma^{2}\Delta}\right)^{m_{j}} \left(\frac{3}{2\gamma\sqrt{\Delta}}\right)^{r_{j}} \left(\frac{2}{\gamma\sqrt{\Delta}}\right)^{s_{j}} \left(p_{b}\right)^{n_{j}+r_{j}+\alpha_{j}}\left(p_{c}\right)^{m_{j}+s_{j}+\beta_{j}}, \end{equation} where $A_{bc}:=5+{4\pi [N_{b}+ N_{c}]}$ and $B_{bc}:=13+{4\pi [N_{b}+ N_{c}]}$. Then, we have that in the effective approach of the KS model, any effective quantity $\Lambda_{\rm{eff}}$ of the form (\ref{g-form}) will be bounded by (\ref{general-b}). Since any scalar polynomial invariant $P$ associated to the metric (\ref{metrica cinematica}), with the lapse function set to the unit constant function, takes the form (\ref{g-form}), as is actually the case for the Ricci and Kretschmann scalars, we can assert that in the effective geometry of the KS model $P_{\rm{eff}}$ will be bounded everywhere, even though its classical counterpart is not (i.e., even if $P_{\rm{class}}$ diverges in some regime). \section{Discussion} \label{sec-disc} The quest for the fundamental nature of spacetime may shed light on long-standing problems such as the singularities appearing in classical general relativity and the ultraviolet divergences of field theories.
Hence quantum gravity theories that endow spacetime with a quantum character acquire particular interest. Loop quantum gravity, in particular, has yielded homogeneous cosmological models in which the classical singularity is replaced by a quantum bounce, and thus the Schwarzschild interior, which classically amounts to a homogeneous Kantowski-Sachs model, is amenable to a similar treatment. Indeed, the loop quantization of the Schwarzschild interior showed that the would-be classical singularity is actually traversable, and later on some heuristic effective models confirmed the same result but also added possible replacements for the singularity, like another black hole, a Nariai universe or a white hole. However, connecting the quantum treatment with the effective model was left open. In this paper we have advanced a proposal that links the loop quantum description of the Schwarzschild interior with an effective model that is based on a path integral scheme. Specifically, we have built a transition amplitude between two loop quantum states of the Kantowski-Sachs model as a path integral, Eq. (\ref{aprima}), consisting of an imaginary exponential of an action in phase space from which the heuristic effective Hamiltonian constraint descends, Eqs. (\ref{TA}), (\ref{Sprime}) and (\ref{eff ham}). Although this strategy was originally used for homogeneous isotropic, as well as some anisotropic, models, the particular case of Kantowski-Sachs had not been dealt with before. Armed with the effective constraint, we embarked on the study of the ensuing dynamics, which happened to lead to rather simple analytic bounds for the basic phase space variables, Eqs. (\ref{pc-bounds})-(\ref{c-mod-bound}), and their time derivatives. In particular, the expansion and shear turn out to be bounded too, as is the volume, Eqs. (\ref{dtheta-b})-(\ref{two-bounds}). Similarly, by considering the second order time derivatives of the basic phase space variables, according to the effective dynamics, we get that both the effective Ricci and Kretschmann scalars, Eqs. (\ref{Rbound}) and (\ref{Kbound}), are bounded. This bounded character actually holds for any product of the form (\ref{g-form}) containing second order and first order time derivatives as well as powers of the variables $p_b,p_c$. It is a remarkable fact that analytic results were obtained from the effective dynamics which, although simple in appearance, could only be treated numerically in previous works. There are several interesting points which can be further explored along the lines we have followed in the present work. One of them concerns our analysis, which was performed for a generic sector labeled by $(N_l,N_{l'})$, as indicated by the detailed form of the effective Hamiltonian constraint. Since it is the regime $\mu_{d} \vert d\vert \ll 1$ from which the classical behavior can be recovered through a semiclassical approximation, this selects only the $(0,0)$-sector. Thus, it is natural to ask about the physical relevance of the other sectors. Another thing we have not done in the present work is an analysis of the would-be classical horizon. Since we have adopted the improved quantization for the Kantowski-Sachs model, it is expected that our results will differ from the recent ones of \cite{Corichi:2015xia}, which adopts an effective dynamics preserving the classical horizon definition. Further work is required to clarify other possible physical differences.
Indeed, for example, recent phenomenological results on black hole evaporation \cite{Barrau:2016qri,Barrau:2015ana} require connecting interior effective descriptions like the one studied presently with the one corresponding to the exterior. It would be interesting to combine our path integral analysis of the Schwarzschild interior with a coupling to a scalar field along the lines of \cite{Hartle:1976tp} and to extend it to the exterior region in order to investigate further quantum gravity corrections to the black hole emission (see, e.g., \cite{Tecotl:2015cya}, which applies a polymer path integral to a mechanical model to study the problematic aspects of the black hole semiclassical approximation). Finally, important consequences of the features we have found here may play a role in the geodesic analysis in regard to completeness and perhaps complement recent results for the cosmological case \cite{Joe:2014tca,Saini:2016vgo}. \section*{Acknowledgements} This work was partially supported by CONACyT Grant No. 237351 ``Implicaciones f\'isicas de la estructura del espacio tiempo" and DGAPA-UNAM Grant No. IN113115 ``Teor\'ia de campos en fondos curvos, gravedad cu\'antica y holograf\'ia".
1,314,259,992,677
arxiv
\section{Introduction}\label{1} Fast radio bursts (FRBs) are bright, millisecond-duration radio pulses that are generated by extragalactic sources (in most cases), according to the dispersion measures (DM) and the redshifts of the host galaxies of localized FRBs \citep{Lorimer07, Thornton13, Chatterjee17}. Great progress has been made in observations and theories since the first reported FRB in 2007 \citep{Cordes19,Petroff19}. More than one hundred FRBs have been verified, 20 (91) of which are reported as repeating (apparently non-repeating) FRBs \citep{Petroff16, CHIME19a, Kumar19, CHIME19b, Fonseca20}. The corresponding FRB detection rate is $\sim 10^3-10^4 \,\rm{day^{-1}\,sky^{-1}}$ \citep{Thornton13,Spitler14,Keane15,Rane16,Oppermann16,Champion16,Scholz16,Lawrence17,Patel18,Connor19}. The physical origin of FRBs is still unclear, though many theoretical models have been proposed to solve this challenge \citep{Cordes19,Petroff19}, including the mergers of compact objects \citep{Yamasaki18}, collapse of supermassive neutron stars \citep{Falcke14}, energetic flares coming from magnetars \citep{Kulkarni14,Connor16,Cordes16a,Popov16,Margalit18} and interactions between superconducting cosmic strings \citep{Yu14,Thompson17}. Recently, bright millisecond-timescale radio bursts from the magnetar SGR 1935 + 2154 have been detected by CHIME/FRB \citep{Scholz20} and STARE2 \citep{Bochenek20a}. This phenomenon suggests that some FRBs may be associated with the strong magnetic activity generated by magnetars \citep{Katz16,Margalit20,Andersen20,Lin20,Bochenek20b,Lyutikov20}, especially the repeating FRBs. The repeaters and apparently non-repeaters (hereafter referred to as non-repeaters) may have different physical origins. Thus, it is important to classify the FRBs based on the observed properties. Clearly, it is natural to divide FRBs into two groups, repeating and non-repeating samples \citep{Petroff19}. Both pulse width and radio luminosity are important characteristics for FRBs and their radiative properties, and these are frequently used as FRB sorting criteria \citep{Oppermann16,Ravi19,Qiu20,Fonseca20}. \citet{Petroff19} presented a histogram of FRB pulse widths; however, further statistical tests were not pursued. Recent results from the CHIME/FRB collaboration show that the distributions of the two categories are not the same, with $\sim 5\sigma$ and $\sim 4\sigma$ significance, based on an analysis of their own data on 18 repeating FRBs and 12 non-repeating ones \citep{Fonseca20}. The ASKAP group found that the distribution of pulse width may not be bimodal \citep{Qiu20}. However, considering that the ASKAP sample is small and Bayesian methods are more suitable for large samples, the question remains open. Meanwhile, much research on the radio luminosity of FRBs has been carried out, mainly focusing on the luminosity function \citep{Kumar17, Luo18, Luo20, Hashimoto20} and the radio spectrum \citep{Spitler16, Macquart19, Katz20}. Although the above analyses are impressive, statistics combining more data from other telescopes with quantitative statistical tests are still needed. Here, we collect all detected FRB data, including the information on pulse width and radio luminosity, to examine their distributions for the repeating and non-repeating samples and to check whether the two groups of samples have the same or different origins. In section 2, we organize the data and check their consistency.
In section 3, we exploit the Anderson-Darling (A-D) test to see whether the above data conform to a Gaussian distribution and use a Mann-Whitney-Wilcoxon (M-W-W) test to check whether the two distributions share the same origin. Finally, in section 4 we summarize our results, and discuss possible mechanisms for the repeating and non-repeating FRBs. \section{FRB Observation Data}\label{2} Our data of FRBs are taken from the database of FRB Catalogue\footnote{http://www.frbcat.org/} (FRBCAT) \citep{Petroff16}, and those for repeating FRBs come from FRBCAT and published papers \citep{Petroff16, CHIME19a, Kumar19, CHIME19b,Fonseca20}. According to FRBCAT, there is a significant gap between $\sim35\,\rm{ms}$ and $\sim300\,\rm{ms}$ in pulse width. However, we notice that the measurements with pulse width larger than $\sim300\,\rm{ms}$ all come from one radio facility, i.e., the radio telescope BSA LPI of the Pushchino Radio Astronomy Observatory, for which the time interval between samples is $100\,\rm{ms}$ \citep{Fedorova19}. Considering that there may exist some observational bias effects in these data, we only include the pulses with a width shorter than $\sim35\,\rm{ms}$ (80 non-repeaters). For non-repeating FRBs, the possibility of repetition cannot be rejected; in particular, repeating signals from FRB171019 were observed in 2019 \citep{Kumar19}. However, our analysis is based on the current observational data, and the phenomenon of repeating signals in non-repeaters is difficult to predict at present. Therefore, we treat the apparently non-repeating sources as real non-repeaters under current circumstances. For repeating FRBs, two or more observations are included. Thus, we calculate the average pulse width and radio luminosity of each source as representative of its pulse width and radio luminosity, respectively. Since the pulse width in the FRBCAT is the observed width, which is easily affected by dispersion \citep{Ravi19}, to study the pulse width more accurately we introduce the intrinsic width, which is estimated by Eq. (1) \citep{Connor20, Qiu20}. \begin{equation} \begin{split} t_i = \sqrt {t_{obs}^2 - t_{DM}^2 - t_s^2} \end{split} \end{equation} In the above formula, $t_i$ ($t_{obs}$) is the intrinsic width (observed width), with $t_s$ being the sampling time that depends on the instrument, and $t_{DM}$ is the dispersion smearing timescale as calculated in the following, \begin{equation} \begin{split} t_{DM} = 8.3 \times 10^{-3} \rm{DM {\Delta {\nu_{MHz}}\over\nu_{GHz}^3} \quad ms} \end{split} \end{equation} where $\rm DM$ is the dispersion measure, $\Delta {\nu_{\rm MHz}}$ is the channel bandwidth in units of MHz and $\nu_{\rm GHz}$ is the central frequency in units of GHz. Therefore, the pulse width in the following text refers to the intrinsic width. In Table 1, we summarize the properties of the 20 repeating FRBs by listing the observed width, intrinsic width, flux density, fluence and luminosity distance. In Figure 1, we plot a histogram of the pulse width for the repeating and non-repeating FRBs. \begin{figure} \centering \includegraphics[width=7.8cm]{1.eps} \caption{Upper panel: histogram of repeating and non-repeating FRBs with pulse width $<35 \, \rm {ms}$. The solid (dashed) line is the fitted curve for non-repeating (repeating) FRBs. The filled histogram is for non-repeating FRBs and the cross-hatched histogram is for repeating FRBs.
Middle and bottom panels: the residuals for the non-repeaters and repeaters between the source data and fitted curve, shown as solid points, where the dashed lines refer to the cases of residual=0.} \label{fig1} \end{figure} \section{FRB Classifications and Statistical Tests}\label{3} \subsection{Samples of all FRBs} A Gaussian distribution occurs naturally in many astronomical data sets \citep{Press86, Mackay03}, sometimes occurring in data with logarithmic sampling. For completeness, we first investigate the Gaussian property. Regarding the total statistical properties of all FRBs, if we assume that the pulse widths of these data follow the Gaussian distribution, we can apply the A-D test \citep{Ivezi19} to examine whether the assumption is correct. The results of A-D test are shown in Table 2. For a 0.05 significance level, the statistic (0.547) is smaller than the critical value (0.760), which indicates that the Gaussian property of the data on pulse width is probable. We further use the A-D test to examine whether the radio luminosity of FRBs conforms to a Gaussian distribution, and the result shows that its statistic (0.475) is smaller than the critical value (0.759, as seen in Table 2), which means that the Gaussian property of the radio luminosity is likely. Therefore, using a Gaussian distribution to fit for the pulse width and radio luminosity is acceptable for all data but unknown for the separate data. Next we will discuss the distributions for the two samples of repeaters and non-repeaters. \begin{table*} \centering \caption{Parameters of 20 repeating FRBs. $^a$} \resizebox{\textwidth}{!}{ \begin{tabular}{@{}lccccccc@{}} \hline \noalign{\smallskip} \bf No.&\bf Sources &\bf Observed Width$^b$ &\bf Intrinsic Width$^c$&\bf Flux Density$^d$&\bf Fluence$^e$&\bf Distance$^f$& \bf Refs. \\ \noalign{\smallskip} \bf & &\bf (ms) &\bf (ms)&\bf (Jy)&\bf (Jy ms)&\bf (Gpc) & \\ \hline \noalign{\smallskip} 1&FRB121102 & 4.82 &4.78& 0.25 &0.372 & 1.61 & (1)\\ 2&FRB180814.J0422+73 & 23.45(22.57)$^g$ &23.43&...$^h$ & 22.57 & 0.39 & (2)(3)\\ 3&FRB171019 & 4.62 &4.08& ...$^{hi}$ & 101.54 & 1.89 & (4)\\ 4&FRB180916.J0158+65 & 5.27 &5.16& 2.08 & 1.62 & 0.58 & (5)\\ 5&FRB181030.J1054+73 & 1.01 &0.10& 3.15& 4.75 & 0.24 & (5)\\ 6&FRB181128.J0456+63 & 5.90 &5.80& 0.40& 3.45 & 1.14 & (5)\\ 7&FRB181119.J12+65 & 3.49 &3.33& 0.43& 1.77 & 1.42 & (5)\\ 8&FRB190116.J1249+27 & 2.75 &2.53&0.35 & 1.80 & 1.90 & (5)\\ 9&FRB181017.J1705+68 & 16.8 &16.73&0.40 & 8.50 & 6.97 & (5)\\ 10&FRB190209.J0937+77 & 6.55 &6.46&0.50 & 1.25 & 1.66 & (5)\\ 11&FRB190222.J2052+69 & 2.71 &2.48&1.65 & 5.45 & 1.64 & (5)\\ 12&FRB190208.J1855+46 & 1.11 &0.14& 0.50 & 1.70& 2.35 & (6)\\ 13&FRB180908.J1232+74 & 3.83 &3.70& 2.90 & 0.50 & 0.62 & (6)\\ 14&FRB190604.J1435+53 & 2.10 &1.78& 0.75 & 8.30& 2.42 & (6)\\ 15&FRB190212.J18+81 & 3.10 &2.93& 0.75 & 2.75& 1.05 & (6)\\ 16&FRB190303.J1353+48 & 3.20 &3.04& 0.47 & 2.67& 0.77 & (6)\\ 17&FRB190417.J1939+59 & 4.50 &4.20& 0.53 & 3.10& 7.40 & (6)\\ 18&FRB190117.J2207+17 & 2.74 &2.53& 1.00 & 6.36& 1.49 & (6)\\ 19&FRB190213.J02+20 & 7.00 &6.90& 0.50 & 1.80& 2.91 & (6)\\ 20&FRB190907.J08+46 & 2.18 &1.92& 0.30 & 2.03& 1.07 & (6)\\ \hline \end{tabular} } \label{tab1} \begin{flushleft} $^a$ The highest radio luminosity is $1.2\times 10^{44}\,\rm{erg/s}$ from FRB190523, and the faintest for extragalactic conditions is $6.2\times 10^{38}\,\rm{erg/s}$ from FRB141113. $^b$ The average observed width. $^c$ The average intrinsic width calculated by eq.(1). $^d$ The average flux density. 
$^e$ The average fluence. $^f$ The distance is from FRBCAT \citep{Wright06} based on $\rm {\Lambda CDM}$ with cosmological parameters: $\rm{H_0=69.6\,km/s/Mpc}$, $\rm{\Omega_M=0.286}$ and $\rm{\Omega_{vac}=0.714}$, where $\rm{H_0}$ is the Hubble constant, $\rm{\Omega_M}$ is the matter fraction of the universe and $\rm{\Omega_{vac}}$ is the dark energy fraction of the universe. $^g$ 23.45 is from CHIME \citep{CHIME19a}, while 22.57 is from FRBCAT. In our statistics we use the data from CHIME. $^h$ The parameters are not given in FRBCAT or in the references. $^i$ The flux density is not given, but the fluence is given, so we use the fluence to estimate the luminosity. \\ Refs. : (1) Spitler et al. (2016); (2) CHIME/FRB Collaboration et al. (2019a); (3) FRBCAT; (4) Kumar et al. (2019); (5) CHIME/FRB Collaboration et al. (2019b); (6) Fonseca et al. (2020). \end{flushleft} \end{table*} \subsection{Samples of repeating and non-repeating FRBs} To obtain the statistical test results for FRB classifications, we apply the A-D test and the M-W-W test. Here, due to the small size of the repeater sample (20 repeating FRBs), a Kolmogorov-Smirnov (K-S) test is not an effective tool \citep{Ivezi19, Yang19}. For example, for a sample size of 10, the error of a K-S test can reach 7\%\footnote{https://en.wikipedia.org/wiki/Kolmogorov-Smirnov\_test}. Thus, we adopt the A-D test to check whether the sample distribution is consistent with a single Gaussian, and then use the M-W-W test to examine whether the two samples share the same origin even though the sample sizes are different. In our case, the two groups of samples for repeating and non-repeating FRBs are tested, based on the two characteristic quantities, the pulse width and radio luminosity. On the one hand, according to the data in Table 1, there are 20 repeaters, the pulse widths of which range from $\sim0.1\,\rm{ms}$ to $\sim23\,\rm{ms}$. On the other hand, for the non-repeating FRB data, the pulse widths range from $\sim0.05\,\rm{ms}$ to $\sim34\,\rm{ms}$. The mean pulse width of repeating (non-repeating) sources is $\sim5.10\,\rm{ms}$ ($\sim3.35\,\rm{ms}$). We use the A-D test to check the Gaussian property of both groups of samples, the results of which are shown in Table 2. The results show that the distribution of pulse width for the repeating sample is not Gaussian in either logarithmic or linear coordinates, while the non-repeating sample follows a Gaussian distribution when expressed logarithmically. Furthermore, we check whether the distribution of repeating FRB pulse widths follows a $\chi$-square-type function in linear coordinates, which can be written as \begin{equation} \begin{split} f_{k}(x)&={1\over2^{{k/2}}\Gamma({k/2})}x^{k/2-1}e^{-x/2}\\ &=Ax^{k/2-1}e^{-x/2} \end{split} \end{equation} where $A$ is referred to as the fitting coefficient. Here we employ Eq. (3) to fit the pulse width data of the repeaters, obtaining the best fitting parameters $A=72.978$ and $k=1.626$. The goodness of the $\chi$-square fit is calculated by Eq. (4) and the result is 0.80. In Eq. (4), $R^2$ is the goodness of fit, RSS is the residual sum of squares, TSS is the total sum of squares, the $y_i$ are the real data, the $\hat y$ are the values on the fitted curve and $\overline y$ is the mean value of the real data. As shown in Figure 1, the histogram bars all lie on the fitted curve or close to it on both sides. Thus, we conclude that the distribution of pulse widths of repeating FRBs could follow a $\chi$-square function.
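For reference, the $\chi$-square-type fit of Eq. (3) can be reproduced with standard tools. The sketch below is illustrative only: it uses the 20 intrinsic widths listed in Table 1, but since the binning adopted for Figure 1 is not specified here, the fitted parameters and the goodness of fit defined in Eq. (4) below will not coincide exactly with $A=72.978$, $k=1.626$ and $R^2=0.80$.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# intrinsic widths (ms) of the 20 repeaters, fourth column of Table 1
w = np.array([4.78, 23.43, 4.08, 5.16, 0.10, 5.80, 3.33, 2.53, 16.73, 6.46,
              2.48, 0.14, 3.70, 1.78, 2.93, 3.04, 4.20, 2.53, 6.90, 1.92])

def chi2_profile(x, A, k):
    # Eq. (3): A * x^(k/2-1) * exp(-x/2)
    return A * x**(k/2.0 - 1.0) * np.exp(-x/2.0)

# bin the sample (illustrative binning, not the one used for Figure 1)
counts, edges = np.histogram(w, bins=8, range=(0.0, 24.0))
centers = 0.5*(edges[:-1] + edges[1:])

popt, _ = curve_fit(chi2_profile, centers, counts, p0=(10.0, 1.6), maxfev=10000)
fit = chi2_profile(centers, *popt)

# goodness of fit, Eq. (4): R^2 = 1 - RSS/TSS
rss = np.sum((counts - fit)**2)
tss = np.sum((counts - counts.mean())**2)
print("A, k =", popt, "  R^2 =", 1.0 - rss/tss)
\end{verbatim}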
\begin{equation} \begin{split} R^2&=1-RSS/TSS\\ &=1-\sum\limits_{i=1}^{n}(y_i-\hat y)^2/\sum\limits_{i=1}^{n}(y_i-\overline y)^2 \end{split} \end{equation} In Figure 2, we draw the cumulative distribution function (CDF) for the two groups of samples. Although the two CDFs are close to each other, they appear to belong to different distributions. To evaluate the reliability of the above inference, we apply an M-W-W test. The resulting p-value of this M-W-W test is 0.0065, which is less than 0.05, indicating that the distributions of the two groups are different, as shown in Table 3. As a next step, we use the radio luminosity as a statistical variable to classify the two groups of samples. The radio luminosity is estimated by Eq. (5), where $S$ is the flux density and $D$ is the luminosity distance. \begin{figure} \centering \includegraphics[width=7.8cm]{2.eps} \caption{Cumulative distribution function (CDF) of pulse width. The dashed line is for repeating FRBs, the solid line is for non-repeating FRBs.} \label{fig2} \end{figure} \begin{figure} \centering \includegraphics[width=7.8cm]{3.eps} \caption{Histogram of repeating and non-repeating FRBs for radio luminosity expressed logarithmically. The solid line is the fitted curve of non-repeating FRBs. The dashed line is the fitted curve of repeating FRBs. The cross-hatched histogram is for repeating FRBs and the empty one for non-repeating FRBs. Middle and bottom panels: the residuals for the non-repeaters and repeaters between the source data and fitted curve, shown as solid points, where the dashed lines refer to the cases of residual=0.} \label{fig3} \end{figure} \begin{figure} \centering \includegraphics[width=7.8cm]{4.eps} \caption{Cumulative distribution function (CDF) of radio luminosity. The dashed line is for repeating FRBs and the solid one shows the non-repeating FRBs.} \label{fig4} \end{figure} \begin{equation} \begin{split} L_{\rm{radio}}\sim SD^2 \end{split} \end{equation} The radio luminosities of repeaters and non-repeaters range from $\sim10^{39}\,\rm{erg/s}$ to $\sim10^{42}\,\rm{erg/s}$ and from $\sim10^{38}\,\rm{erg/s}$ to $\sim10^{44}\,\rm{erg/s}$, respectively. We plot their histograms in Figure 3. The mean radio luminosity of repeating (non-repeating) sources is $\sim2.6\times 10^{41}\,\rm{erg/s}$ ($\sim6.2\times 10^{42}\,\rm{erg/s}$). Then, we test the Gaussian property of the repeating and non-repeating groups using the A-D test, the results of which are shown in Table 2: for the repeating FRBs the statistic (0.333) is smaller than the critical value (0.692), and for the non-repeating FRBs the statistic (0.592) is also smaller than the critical value (0.752), both at the 0.05 significance level. Hence, these results indicate that both groups conform to Gaussian distributions. In Figure 4, we draw the CDFs for the two groups, finding that the two curves are well-separated. Using the M-W-W test, we find that the two distributions are different, with a p-value of $7.905\times 10^{-7}$, much smaller than 0.05, as shown in Table 3. In short, from the viewpoint of statistical tests, the two groups of samples very likely have different distributions.
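The testing procedure itself can be illustrated with the routines available in scipy. The snippet below is a sketch of the workflow rather than the exact pipeline behind Tables 2 and 3, and its numbers will not reproduce those tables: the non-repeater array is a synthetic stand-in (the 80 observed values must be taken from FRBCAT), and the normalisation conventions of the published analysis are not specified.
\begin{verbatim}
import numpy as np
from scipy.stats import anderson, mannwhitneyu

# repeater intrinsic widths (ms) from Table 1
rep_w = np.array([4.78, 23.43, 4.08, 5.16, 0.10, 5.80, 3.33, 2.53, 16.73, 6.46,
                  2.48, 0.14, 3.70, 1.78, 2.93, 3.04, 4.20, 2.53, 6.90, 1.92])

# A-D test of Gaussianity for the repeater widths on a logarithmic scale
ad = anderson(np.log10(rep_w), dist='norm')
print("A-D statistic:", ad.statistic,
      "critical value at 5%:", ad.critical_values[2])

# M-W-W test between repeaters and non-repeaters.  The array below is a
# synthetic placeholder (log-normal centred near the quoted ~3.35 ms median)
# only so that the snippet runs end to end; it is NOT observational data.
rng = np.random.default_rng(0)
nonrep_w = rng.lognormal(mean=np.log(3.35), sigma=1.0, size=80)

stat, pval = mannwhitneyu(rep_w, nonrep_w, alternative='two-sided')
print("M-W-W p-value:", pval)
\end{verbatim}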
\begin{table} \centering \caption{A-D test to check the Gaussian property.$^a$} \begin{tabular}{@{}lccc@{}} \hline \noalign{\smallskip} \bf Sample &\bf Statistic $^b$ &\bf Critical values ($\alpha=0.05$)$^c$ \\ \hline \noalign{\smallskip} \bf Pulse Width & &\\ All Data & 0.547 & 0.760 \\ Non-repeating & 0.692 & 0.754 \\ Repeating & 1.430 & 0.692 \\ \bf Luminosity & &\\ All Data& 0.475 & 0.759 \\ Non-repeating & 0.592 & 0.752 \\ Repeating & 0.333 & 0.692 \\ \hline \end{tabular} \label{tab2} \begin{flushleft} $^a$ Tested on a logarithmic scale.\\ $^b$ The statistic of the A-D test. The Gaussian hypothesis is not rejected only when the statistic is smaller than the critical value;\\ $^c$ The significance level of the A-D test is taken to be 0.05.\\ \end{flushleft} \end{table} \begin{table} \centering \caption{M-W-W test to check different distributions.} \begin{tabular}{@{}lcccccc@{}} \hline \noalign{\smallskip} \bf Characteristic &&&&&\bf P-values \\ \hline \noalign{\smallskip} Pulse Width &&&&& 0.0065 \\ Radio Luminosity &&&&& $7.905\times10^{-7}$\\ Radio Luminosity in CHIME Data &&&&& $8.79 \times 10^{-6}$\\ Adjusted Radio Luminosity &&&&& $2.47 \times 10^{-9}$\\ \hline \end{tabular} \label{tab3} \end{table} However, we notice that the mean radio luminosity of non-repeaters is different for the CHIME data with a central frequency of 600 MHz ($1.77 \times 10^{43} \,\rm{erg/s}$) and for the other data centered around 1.4 GHz ($2.59 \times 10^{42} \,\rm{erg/s}$) \citep{Luo20}. If we assume that the FRBs detected by CHIME and by the other radio telescopes originate from the same phenomenon, then the two mean luminosities should be similar once the difference in observing frequency is accounted for. In other words, the different mean luminosities obtained with different instruments may be caused by the observing frequency bands or the facility calibration. First, we simply use the CHIME data \citep{Fonseca20}, with 18 repeaters and 12 non-repeaters, to test the former conclusion, namely whether the two samples have the same distribution. With the M-W-W test, the p-value is $8.79 \times 10^{-6}$, which is consistent with the former conclusion that they statistically follow different origins. Second, we shift the CHIME luminosity values down by one order of magnitude, which results in the same mean value as that of the other telescopes, as shown in Figure 5. The two samples of adjusted non-repeaters and repeaters are then tested by applying the M-W-W test, as shown in Table 3, and the result leads to the same conclusion as before, that the repeaters and non-repeaters belong to different distributions. \begin{figure} \centering \includegraphics[width=7.8cm]{5.eps} \caption{ Histogram of repeating and non-repeating FRBs for radio luminosity on a logarithmic scale, after the adjustment of data. The solid (dashed) line stands for the fitted curve of non-repeating (repeating) FRBs. The cross-hatched histogram is for repeating FRBs and the empty histogram means the non-repeating ones. Middle and bottom panels: residuals of non-repeaters and repeaters between source data and the fitted curve, shown as solid points. Dashed lines refer to residual=0.} \label{fig5} \end{figure} \section{Conclusions and Discussions}\label{4} In this paper, we employ statistical methods to analyze the distributions of FRB properties based on the statistical variables of FRB pulse width and radio luminosity, in an attempt to classify the two sample groups, the repeating (20 samples) and non-repeating (80 samples) bursts.
Because of the small FRB sample sets at present, we avoid the K-S test and employ the M-W-W test, due to the larger uncertainties of the former. We find that the two groups of samples have different origins at the 0.05 significance level. We think that the statistical classification turns out to be an effective guide to understanding FRB origins. Firstly, taking the pulse width as a statistical variable, the distribution of the non-repeating group conforms to a Gaussian, but a Gaussian property for the repeating group is not clear (a $\chi$-square distribution seems to be better) when the A-D test is applied. Furthermore, using the M-W-W test, we find that the distributions of the two groups, repeating and non-repeating, are different. Secondly, in terms of radio luminosity, by adopting the A-D test, we find that the distributions of the repeating and non-repeating groups are both Gaussian. A further M-W-W test shows that the distributions of the two groups are significantly different. In addition, for a more complete conclusion, we notice that the CHIME data give an FRB luminosity for non-repeaters that is, on average, one order of magnitude higher than that of the other telescopes. Again, we adjust the CHIME data by reducing them by one order of magnitude to test whether the modified CHIME repeater data have a different origin from the non-repeaters, and we obtain the same conclusion as for the unadjusted data from the M-W-W test: non-repeaters and repeaters have different origins. Therefore, the statistical difference between the two samples indicates that they may have different physical origins, or that the repeating and non-repeating phenomena originate in very different physical processes. There are some points concerning our data and conclusions which need to be clarified. 1) The pulse width data from FRBCAT are the observed values obtained by the search code \citep{Petroff16}, not the intrinsic widths. Moreover, according to papers published by CHIME \citep{CHIME19b} and Kumar et al. (2019), the data they present have been fitted with Gaussian profiles, which means that these data are observed pulse widths. Since the main purpose of this paper is not to discuss the DM-t relationship, we directly use Eq. (1) to estimate the intrinsic width. Thus, for these data, we do not apply further processing to remove the additional intrinsic structure effects. 2) The pulse width and flux density data are not corrected precisely for frequency, because the spectral index of each FRB is difficult to determine, in particular due to the lack of simultaneous observations of the same source at different frequencies. 3) The distance we use is from FRBCAT \citep{Wright06} based on $\rm {\Lambda CDM}$ with cosmological parameters: $\rm{H_0=69.6\,km/s/Mpc}$, $\rm{\Omega_M=0.286}$ and $\rm{\Omega_{vac}=0.714}$. Some papers on the luminosity function, such as the work by Luo et al. \citep{Luo20}, use $\rm {\Lambda CDM}$ with cosmological parameters: $\rm{H_0=67.8\,km/s/Mpc}$, $\rm{\Omega_M=0.308}$ and $\rm{\Omega_{vac}=0.692}$. 4) The goodness of fit of the repeaters' pulse width curve is only 0.80, which is not very high. This indicates that even though the distribution of the repeaters' pulse widths is not Gaussian, the $\chi$-square form may not be the best description either. This may be due to the limited amount of observational data. To explain the physical origins of repeating and non-repeating FRB mechanisms, many FRB models have been proposed.
Now that we find two ``congenital'' origins related to different physical processes, the two camps of models will be briefly summarized. In general, FRB models usually involve the physical processes of compact objects [white dwarf (WD), neutron star (NS), black hole (BH)], or the medium around them. Repeating FRB models usually relate to the interaction of compact objects with companion stars or the surrounding medium, including super giant pulses from young NSs or pulsars \citep{Cordes16a, Connor16}, extreme activities in the magnetospheres of magnetars \citep{Kulkarni14, Wang20, Lyutikov20}, accretion of compact objects \citep{Gu16}, NS interaction with comets or asteroids \citep{Dai16}, and the maser phenomenon in the surrounding medium of magnetars \citep{Yu20}. For the non-repeating FRBs, many models involve one-off explosive events such as mergers of compact object binaries or collisions between a compact object and another astronomical object, for example, BH-NS \citep{Zhang16}, NS-NS \citep{Yamasaki18} and WD-BH \citep{Li18} mergers, or comets and asteroids hitting the surface of an NS \citep{Geng15}. Because our statistical results support the classification of FRBs into repeating and non-repeating categories, our work puts certain constraints on the different models. For example, a model needs to be discussed from the perspective of repeating versus non-repeating behavior, especially since the luminosity difference between repeaters and non-repeaters is about 1.5 orders of magnitude. This difference may indicate that the non-repeating sources come from a one-time catastrophic energy release or a violent outburst with a long energy storage period. Furthermore, compared with the latest results from CHIME \citep{Fonseca20} and Luo et al. \citep{Luo20}, all data in FRBCAT have been considered and the M-W-W test has been newly employed in this work. However, we have to note that different observing central frequencies may cause changes in the pulse width and radio luminosity, which may be the reason why the former studies did not consider all the data. Finally, long-term observations of the repeating sources will test the different models and give us a better understanding of their burst mechanisms. We hope that, with further observations, the same FRB can be observed in different frequency bands simultaneously. In this way, we can more accurately determine the spectral index of FRBs, thereby constraining their luminosity function. Many more FRBs are expected to be published soon from CHIME, ASKAP, and FAST \citep{Li18, Li19, Zhu20}, the higher sensitivity of which could provide valuable information regarding their still mysterious origin. \section*{Acknowledgments} This work is supported by the National Natural Science Foundation of China (Grant No.U1938117, No. 11988101, No. U1731238 and No. 11703003), the International Partnership Program of Chinese Academy of Sciences grant No. 114A11KYSB20160008, the National Key R\&D Program of China No. 2016YFA0400702, and the Guizhou Provincial Science and Technology Foundation (Grant No. [2020]1Y019). We especially thank the anonymous referee for the critical comments and suggestions, which have significantly improved the quality of the paper. \section*{Data Availability} The data underlying this article are available in the references below: (1) Spitler, et al. (2016); (2) CHIME/FRB Collaboration, et al. (2019a); (3) Kumar et al. (2019); (4) CHIME/FRB Collaboration, et al. (2019b); (5) Fonseca, et al. (2020).
Some data of FRBs are taken from the database of FRB Catalogue (FRBCAT), available at http://www.frbcat.org/.
1,314,259,992,678
arxiv
\section{Introduction} The notion of a $Q$-function associated to a pair $\{S,A\}$ consisting of a symmetric operator $S$ and a selfadjoint extension $A$ of $S$ in a Hilbert or Pontryagin space was introduced by M.G.~Krein and H.~Langer in \cite{KL73,KL77}. A $Q$-function contains the spectral information of the selfadjoint extensions of the underlying symmetric operator and therefore these functions play a very important role in the spectral and perturbation theory of selfadjoint operators. $Q$-functions appear also naturally in the description of the resolvents of the selfadjoint extensions of a symmetric operator with the help of Krein's formula and they can be used to construct functional models for selfadjoint operators. In the theory of boundary triplets associated to symmetric operators $Q$-functions can be interpreted as so-called Weyl functions, cf. \cite{BGP08,DHMS06,DM91,DM95,GG91}. A prominent example for a $Q$-function is the classical Titchmarsh-Weyl coefficient in the theory of singular Sturm-Liouville operators. The main objective of this paper is to extend the concept of $Q$-functions in such a way that the Dirichlet-to-Neumann map in the theory of elliptic differential equations can be identified as a generalized $Q$-function. In the abstract part of the paper we introduce the notion of generalized $Q$-functions and we show that these functions have properties similar to those of classical $Q$-functions. Besides a symmetric operator $S$ and a selfadjoint extension $A$, also an operator $T$ whose closure coincides with $S^*$ is used. Some of the ideas here parallel \cite{BL07}, where a more abstract approach with isometric and unitary relations in Krein spaces was used. The main result in the abstract part is Theorem~\ref{qthmgen1}, which states that an operator function is a generalized $Q$-function if and only if it coincides up to a possibly unbounded constant on a dense subspace with the restriction of a Nevanlinna function with an invertible imaginary part and a certain asymptotic behaviour. Section~\ref{ellops} and Section~\ref{cellops} deal with second order elliptic operators on bounded and unbounded domains, and with the coupling of such operators. Suppose first that the domain $\Omega\subset\dR^n$, $n>1$, is bounded with a smooth boundary $\partial\Omega$. Let $A_D$ and $A_N$ be the selfadjoint realizations of a formally symmetric uniformly elliptic differential expression \begin{equation}\label{cl1} \cL=-\sum_{j,k=1}^n \frac{\partial}{\partial x_j} \,a_{jk} \frac{\partial }{\partial x_k}+ a \end{equation} in $L^2(\Omega)$ defined on $H^2(\Omega)$ and subject to Dirichlet and Neumann boundary conditions, respectively. If $T$ denotes the realization of $\cL$ on $H^2(\Omega)$, then the closure of $T$ in $L^2(\Omega)$ coincides with the maximal operator associated to $\cL$ in $L^2(\Omega)$, and $A_D$ and $A_N$ are both selfadjoint restrictions of $T$. For a function $f\in H^2(\Omega)$ denote the trace and the trace of the conormal derivative by $f|_{\partial\Omega}$ and $\tfrac{\partial f}{\partial\nu}|_{\partial\Omega}$, respectively. Then for each $\lambda\in\rho(A_D)$ the Dirichlet-to-Neumann map \begin{equation}\label{dnmapintro} Q(\lambda)(f_\lambda|_{\partial\Omega}):= -\frac{\partial f_\lambda}{\partial\nu}\Bigl|_{\partial\Omega},\qquad \text{where}\quad T f_\lambda=\lambda f_\lambda, \end{equation} is well-defined and will be regarded as an operator in $L^2(\partial\Omega)$ defined on $H^{3/2}(\partial\Omega)$ with values in $H^{1/2}(\partial\Omega)$.
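As a simple illustration of \eqref{dnmapintro}, included here only for orientation and not treated in detail below, let $\Omega$ be the unit disc and $\cL=-\Delta$, that is, $a_{jk}=\delta_{jk}$ and $a=0$. For $\lambda\in\rho(A_D)$ the solutions of $Tf_\lambda=\lambda f_\lambda$ are spanned by $f_\lambda(r,\theta)=J_{|n|}(\sqrt\lambda\, r)e^{in\theta}$, $n\in\mathbb{Z}$, with $J_{|n|}$ the Bessel functions of the first kind, and hence the Dirichlet-to-Neumann map acts diagonally on the Fourier modes of $L^2(\partial\Omega)$, \begin{equation*} Q(\lambda)e^{in\theta}=-\sqrt{\lambda}\,\frac{J_{|n|}'(\sqrt{\lambda})}{J_{|n|}(\sqrt{\lambda})}\,e^{in\theta},\qquad n\in\mathbb{Z}; \end{equation*} the right hand side does not depend on the choice of the branch of $\sqrt\lambda$.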
The minus sign in \eqref{dnmapintro} is used for technical reasons. It turns out that the operator function $\lambda\mapsto Q(\lambda)$ is a generalized $Q$-function in the sense of Definition~\ref{defq}, and an explicit variant of Krein's formula for the resolvents of $A_D$ and $A_N$ is obtained in Theorem~\ref{bigthm1}, see also \cite{BL07,BGW09,GM08,GM08-2,P08,PR09,Post07} for more general problems. In particular, in the case $n=2$ the difference of these resolvents is a trace class operator and we obtain the trace formula \begin{equation}\label{traceformi} {\mathrm{tr}}\bigl((A_D-\lambda)^{-1}-(A_N-\lambda)^{-1}\bigr) ={\mathrm{tr}}\left(\overline{Q(\lambda)^{-1}}\,\frac{d}{d\lambda}\,\widetilde Q(\lambda)\right) \end{equation} for $\lambda\in\rho(A_D)\cap\rho(A_N)$. Here $\overline{Q(\lambda)^{-1}}$ is the closure of $Q(\lambda)^{-1}$ in $L^2(\partial\Omega)$ and $\widetilde Q$ is a Nevanlinna function which differs from the Dirichlet-to-Neumann map by a symmetric constant. Trace formulas for canonical differential expressions and in more abstract situations for the finite-dimensional case can be found in, e.g., \cite{AG03,AG05,BMN08}. In Section~\ref{cellops} we consider a so-called coupling of elliptic operators. Such couplings are of great interest in problems of mathematical physics, e.g., in the description of quantum networks; for more details and further references we refer the reader to the recent works \cite{EK04,EK04-2,MPP07,MPR07,P07}. Suppose that $\dR^n$, $n>1$, is decomposed into a bounded domain $\Omega$ with smooth boundary $\cC$ and the unbounded domain $\Omega^\prime=\dR^n\backslash\overline\Omega$. The orthogonal sum of the selfadjoint Dirichlet operators $A_D$ and $A_D^\prime$ associated to $\cL$ in $L^2(\Omega)$ and $L^2(\Omega^\prime)$, respectively, is regarded as a selfadjoint diagonal block operator matrix in $L^2(\dR^n)$. The resolvent of $A_D\oplus A_D^\prime$ is then compared with the resolvent of the usual selfadjoint realization $\widetilde A$ of $\cL$ in $L^2(\dR^n)$ defined on $H^2(\dR^n)$. In order to express this difference in the Krein-type formula \begin{equation}\label{kreinintro} \bigl((A_D\oplus A_D^\prime)-\lambda\bigr)^{-1}-(\widetilde A-\lambda)^{-1}= \Gamma(\lambda) Q(\lambda)^{-1}\Gamma(\bar\lambda)^* \end{equation} with a generalized $Q$-function, an analogue of the Dirichlet-to-Neumann map is constructed which measures the jump of the conormal derivatives of $L^2(\Omega)$- and $L^2(\Omega^\prime)$-solutions of $\cL u=\lambda u$ on the boundary $\cC$, see \eqref{qcoup}. The operator $\Gamma(\lambda):L^2(\cC)\rightarrow L^2(\dR^n)$ in \eqref{kreinintro} is closely connected with the generalized $Q$-function and is here identified with a Poisson-type operator solving a certain Dirichlet problem. As a consequence of the representation \eqref{kreinintro} we also obtain a trace formula of the type \eqref{traceformi} in the coupled case. \section{Generalized $Q$-functions}\label{genq} In this section we introduce the notion of generalized $Q$-functions associated to symmetric operators in Hilbert spaces. The class of generalized $Q$-functions is characterized in Theorem~\ref{qthmgen1}, where it turns out that generalized $Q$-functions are closely connected with operator-valued Nevanlinna or Riesz-Herglotz functions.
We also note in advance that for the case of finite deficiency indices of the underlying symmetric operator the concept of generalized $Q$-functions coincides with the classical notion of (ordinary) $Q$-functions studied by M.G.~Krein and H.~Langer in \cite{KL73,KL77}, see also \cite{K47,K49}. Let $\cH$ be a separable Hilbert space and let $S$ be a densely defined closed symmetric operator with equal (in general infinite) deficiency indices $$n_\pm(S)=\dim\ker(S^*\mp i)\leq \infty$$ in $\cH$. It is well known that under this assumption $S$ admits selfadjoint extensions in $\cH$. In the following let $A$ be a fixed selfadjoint extension of $S$ in $\cH$, so that, $S\subset A=A^*\subset S^*$. Furthermore, let $T$ be a linear operator in $\cH$ such that $A\subset T\subset S^*$ and $\overline T=S^*$ holds, i.e., the domain ${\mathrm{dom\,}} T$ of $T$ is a core of ${\mathrm{dom\,}} S^*$ (see \cite{K76}), ${\mathrm{dom\,}} T$ contains ${\mathrm{dom\,}} A$ and $Af=Tf$ holds for all $f\in{\mathrm{dom\,}} A$. For $\lambda\in\dC$ belonging to the resolvent set $\rho(A)$ of the selfadjoint operator $A$ define the defect spaces $\cN_\lambda(T)=\ker(T-\lambda)$ and $\cN_\lambda(S^*)=\ker(S^*-\lambda)$. Then the decompositions \begin{equation}\label{decoall} {\mathrm{dom\,}} S^*={\mathrm{dom\,}} A\,\dot +\,\cN_\lambda(S^*)\quad\text{and}\quad {\mathrm{dom\,}} T={\mathrm{dom\,}} A\,\dot +\,\cN_\lambda(T) \end{equation} hold for all $\lambda\in\rho(A)$ and the closure $\overline{\cN_\lambda(T)}$ of $\cN_\lambda(T)$ in $\cH$ coincides with $\cN_\lambda(S^*)$. Recall that the symmetric operator $S$ is said to be {\it simple} if there exists no nontrivial subspace ${\mathcal D}$ in ${\mathrm{dom\,}} S$ such that $S$ restricted to ${\mathcal D}$ is a selfadjoint operator in the Hilbert space $\overline{\mathcal D}$. It is important to note that $S$ is simple if and only if \begin{equation}\label{cspan} \cH={\rm \overline{span}\, }\bigl\{\cN_\lambda(S^*):\lambda\in\dC\backslash\dR\bigr\} \end{equation} holds, cf. \cite{K49}. Here ${\rm \overline{span}\, }$ denotes the closed linear span. As $\overline{\cN_\lambda(T)}=\cN_\lambda(S^*)$ it is clear that the right hand side in \eqref{cspan} coincides with \begin{equation*} {\rm \overline{span}\, }\bigl\{\cN_\lambda(T):\lambda\in\dC\backslash\dR\bigr\}. \end{equation*} Fix some $\lambda_0\in\rho(A)$, let ${\mathcal G}$ be a Hilbert space with the same dimension as $\cN_{\lambda_0}(T)$ and let $\Gamma_{\lambda_0}$ be a densely defined bounded operator from ${\mathcal G}$ into $\cH$ such that ${\mathrm{ran\,}}\Gamma_{\lambda_0}=\cN_{\lambda_0}(T)$ and $\ker\Gamma_{\lambda_0}=\{0\}$ holds. The domain ${\mathrm{dom\,}}\Gamma_{\lambda_0}$ of $\Gamma_{\lambda_0}$ will be denoted by ${\mathcal G}_0$. Observe that the closure $\overline\Gamma_{\lambda_0}$ of the operator $\Gamma_{\lambda_0}$ is the bounded extension of $\Gamma_{\lambda_0}$ which is defined on $\overline{\mathcal G}_0={\mathcal G}$.
We write $\overline\Gamma_{\lambda_0}\in\cL({\mathcal G},\cH)$, where $\cL({\mathcal G},\cH)$ is the space of bounded linear operators defined on ${\mathcal G}$ with values in $\cH$. \begin{lemma}\label{gamlem} The operator function $\lambda\mapsto\Gamma(\lambda):=(I+(\lambda-\lambda_0)(A-\lambda)^{-1})\Gamma_{\lambda_0}$ satisfies $\Gamma(\lambda_0)=\Gamma_{\lambda_0}$, \begin{equation*} \Gamma(\lambda)=\bigl(I+(\lambda-\mu)(A-\lambda)^{-1}\bigr)\Gamma(\mu),\qquad \lambda,\mu\in\rho(A), \end{equation*} and $\Gamma(\lambda)$ is a bounded operator from ${\mathcal G}$ into $\cH$ which maps ${\mathrm{dom\,}} \Gamma(\lambda)={\mathcal G}_0$ bijectively onto $\cN_\lambda(T)$ for all $\lambda\in\rho(A)$. Moreover, $\lambda\mapsto\Gamma(\lambda)g$ is holomorphic on $\rho(A)$ for every $g\in{\mathcal G}_0$. \end{lemma} \begin{proof} Let us show that ${\mathrm{ran\,}}\Gamma(\lambda)=\cN_\lambda(T)$. The other assertions in the lemma are obvious or follow from a straightforward calculation. Since $T$ is an extension of $A$ we have $(T-\lambda)(A-\lambda)^{-1}=I$ for $\lambda\in\rho(A)$ and therefore \begin{equation*} (T-\lambda)\Gamma(\lambda)h=(T-\lambda)\bigl(I+(\lambda-\lambda_0)(A-\lambda)^{-1}\bigr)\Gamma_{\lambda_0} h =(T-\lambda_0)\Gamma_{\lambda_0} h=0 \end{equation*} shows that ${\mathrm{ran\,}}\Gamma(\lambda)\subset\cN_\lambda(T)$ holds. Now let $f_\lambda\in\cN_\lambda(T)$. Then it follows as above that \begin{equation*} f_{\lambda_0}:=\bigl(I+(\lambda_0-\lambda)(A-\lambda_0)^{-1}\bigr) f_\lambda \end{equation*} is an element in $\cN_{\lambda_0}(T)$ and hence there exists $h\in{\mathcal G}_0$ such that $f_{\lambda_0}=\Gamma_{\lambda_0}h$. Now a simple calculation shows $f_\lambda=\Gamma(\lambda)h$, thus ${\mathrm{ran\,}}\Gamma(\lambda)=\cN_\lambda(T)$. \end{proof} In the following definition the concept of generalized $Q$-functions is introduced. \begin{definition}\label{defq} Let $S$, $A$, $T$, and $\Gamma(\cdot)$ be as above. An operator function $Q$ defined on $\rho(A)$ whose values $Q(\lambda)$ are linear operators in ${\mathcal G}$ with ${\mathrm{dom\,}} Q(\lambda)={\mathcal G}_0$ for all $\lambda\in\rho(A)$ is said to be a {\em generalized $Q$-function} of the triple $\{S,A,T\}$ if \begin{equation}\label{q} Q(\lambda)-Q(\mu)^*=(\lambda-\bar\mu)\Gamma(\mu)^*\Gamma(\lambda) \end{equation} holds for all $\lambda,\mu\in\rho(A)$. If, in addition, ${\mathcal G}_0={\mathcal G}$ and $T=S^*$, then $Q$ is called an {\em ordinary $Q$-function} of $\{S,A\}$. \end{definition} We note that the values $Q(\lambda)$, $\lambda\in\rho(A)$, of a generalized $Q$-function can be unbounded non-closed operators. The adjoint $Q(\mu)^*$ in \eqref{q} is well defined since ${\mathrm{dom\,}} Q(\mu)$ is dense in ${\mathcal G}$ and by setting $\lambda=\bar\mu$ in \eqref{q} it follows that $Q(\mu)\subset Q(\bar\mu)^*$.
Hence the identity \eqref{q} holds on ${\mathcal G}_0$, the operators $Q(\lambda)$ are closable in ${\mathcal G}$ and symmetric for $\lambda\in\rho(A)\cap\dR$. The real and imaginary parts of the operators $Q(\lambda)$ are defined as usual: \begin{equation*} {\rm Re\,} Q(\lambda)=\frac{1}{2}\bigl(Q(\lambda)+Q(\lambda)^*\bigr)\quad\text{and}\quad {\rm Im\,} Q(\lambda)=\frac{1}{2i}\bigl(Q(\lambda)-Q(\lambda)^*\bigr). \end{equation*} Since $({\rm Re\,} Q(\lambda)h,h)$ and $({\rm Im\,} Q(\lambda)h,h)$ are real for all $h\in{\mathcal G}_0$ the operators ${\rm Re\,} Q(\lambda)$ and ${\rm Im\,} Q(\lambda)$ are symmetric. \begin{remark} We note that the concept of generalized $Q$-functions is closely connected with the theory of boundary triplets and associated Weyl functions. The Weyl function of an ordinary or generalized boundary triplet (see \cite{BGP08,DM91,DM95,GG91}) is also a generalized $Q$-function, but the converse is not true. The class of generalized $Q$-functions studied here coincides with the class of Weyl functions of so-called quasi boundary triplets introduced in \cite{BL07}. Furthermore, we note that generalized $Q$-functions are not a subclass of the Weyl families associated to boundary relations, see \cite{DHMS06} and Theorem~\ref{qthmgen1}. \end{remark} The concept of generalized $Q$-functions differs from the classical notion of ordinary $Q$-functions only in the case $n_\pm(S)=\infty$. \begin{proposition} Let $Q$ be a generalized $Q$-function of the triple $\{S,A,T\}$ and assume, in addition, that the deficiency indices $n_\pm(S)$ are finite. Then $T=S^*$ and $Q$ is an ordinary $Q$-function of the pair $\{S,A\}$. \end{proposition} \begin{proof} If the deficiency indices of the closed operator $S$ are finite, then $T$ is a finite dimensional extension of $S$ and hence also $T$ is closed. Therefore $T=\overline T=S^*$. Moreover, in this case also $\dim{\mathcal G}=\dim\cN_{\lambda_0}(T)$ is finite and hence ${\mathcal G}_0={\mathrm{dom\,}}\Gamma(\lambda)={\mathrm{dom\,}} Q(\lambda)={\mathcal G}$, $\lambda\in\dC\backslash\dR$. \end{proof} The representation of a generalized $Q$-function with the help of the resolvent of $A$ in the next proposition is formally the same as for ordinary $Q$-functions, see \cite{KL73,KL77,LT77}. \begin{proposition}\label{formq} Let $Q$ be a generalized $Q$-function of the triple $\{S,A,T\}$ and let $\lambda_0\in\rho(A)$. Then $Q$ can be written as the sum of the possibly unbounded operator ${\rm Re\,} Q(\lambda_0)$ and a bounded holomorphic operator function, \begin{equation}\label{qa} Q(\lambda)={\rm Re\,} Q(\lambda_0)+\Gamma_{\lambda_0}^*\bigl((\lambda-{\rm Re\,}\lambda_0)+(\lambda-\lambda_0)(\lambda-\bar\lambda_0) (A-\lambda)^{-1}\bigr)\Gamma_{\lambda_0}, \end{equation} and, in particular, any two generalized $Q$-functions of $\{S,A\}$ differ by a constant. \end{proposition} \begin{proof} Let $h\in{\mathcal G}_0$ and set $\mu=\lambda_0$ in \eqref{q}. Making use of the definition of $\Gamma(\lambda)$ in Lemma~\ref{gamlem} we obtain \begin{equation*} Q(\lambda)h=Q(\lambda_0)^*h+(\lambda-\bar\lambda_0)\Gamma_{\lambda_0}^* \bigl(I+(\lambda-\lambda_0)(A-\lambda)^{-1}\bigr)\Gamma_{\lambda_0}h.
\end{equation*} As $Q(\lambda_0)h-Q(\lambda_0)^*h=(\lambda_0-\bar\lambda_0)\Gamma_{\lambda_0}^*\Gamma_{\lambda_0}h$ we see that the above formula can be rewritten as \begin{equation*} Q(\lambda)h=Q(\lambda_0)h+(\lambda-\lambda_0)\Gamma_{\lambda_0}^*\Gamma_{\lambda_0}h+ \Gamma_{\lambda_0}^* (\lambda-\lambda_0)(\lambda-\bar\lambda_0)(A-\lambda)^{-1}\Gamma_{\lambda_0}h. \end{equation*} The representation \eqref{qa} follows by inserting $Q(\lambda_0)h={\rm Re\,} Q(\lambda_0)h+i {\rm Im\,} Q(\lambda_0)h$ and ${\rm Im\,} Q(\lambda_0)h= {\rm Im\,}\lambda_0 \Gamma_{\lambda_0}^*\Gamma_{\lambda_0}h$ into this expression. \end{proof} Generalized $Q$-functions are closely connected with the class of Nevanlinna functions, cf. Theorem~\ref{qthmgen1} below. Let $\cL({\mathcal G})$ be the space of everywhere defined bounded linear operators in ${\mathcal G}$. Recall that an $\cL({\mathcal G})$-valued operator function $\widetilde Q$ which is holomorphic on $\dC\backslash\dR$ and satisfies \begin{equation}\label{imqpos} \frac{{\rm Im\,} \widetilde Q(\lambda)}{{\rm Im\,} \lambda} \geq 0\qquad\text{and}\qquad \widetilde Q(\bar\lambda)=\widetilde Q(\lambda)^* \end{equation} for $\lambda\in\dC\backslash\dR$ is said to be an $\cL({\mathcal G})$-valued {\it Nevanlinna function}. We note that $\widetilde Q$ is an $\cL({\mathcal G})$-valued Nevanlinna function if and only if $\widetilde Q$ admits an integral representation of the form \begin{equation}\label{intrepq} \widetilde Q(\lambda)=\alpha+\lambda \beta+\int_\dR\left(\frac{1}{t-\lambda}-\frac{t}{1+t^2}\right)d\Sigma(t), \qquad\lambda\in\dC\backslash\dR, \end{equation} where $\alpha=\alpha^*\in\cL({\mathcal G})$, $0\leq\beta=\beta^*\in\cL({\mathcal G})$ and $t\mapsto\Sigma(t)\in\cL({\mathcal G})$ is a selfadjoint nondecreasing $\cL({\mathcal G})$-valued function on $\dR$ such that \begin{equation*} \int_\dR \frac{1}{1+t^2}\,d\Sigma(t)\in\cL({\mathcal G}). \end{equation*} It is well known that Nevanlinna functions can be represented with the help of selfadjoint operators or relations in Hilbert spaces in a form very similar to \eqref{qa}. Such operator and functional models for Nevanlinna functions can be found in, e.g., \cite{ABDS90,BHS08,B78,BDS93,DM95,GT00,HSW98,LT77,MM03}. In the next theorem we characterize the class of generalized $Q$-functions. Roughly speaking, it turns out that, up to a symmetric constant, a generalized $Q$-function is the restriction to ${\mathrm{dom\,}} Q(\lambda)$ of an $\cL({\mathcal G})$-valued Nevanlinna function $\widetilde Q$ whose imaginary part is injective on ${\mathrm{dom\,}} Q(\lambda)$, and $\widetilde Q$ satisfies certain limit properties at $\infty$.
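The following classical one-dimensional example may serve as an illustration of Definition~\ref{defq}; it is only meant for orientation, the particular normalization of $\Gamma(\lambda)$ chosen below is one possible convention, and the example is not used in the sequel. Let $\cH=L^2(0,\infty)$, let
\begin{equation*}
Sf=-f^{\prime\prime},\qquad {\mathrm{dom\,}} S=\bigl\{f\in H^2(0,\infty): f(0)=f^\prime(0)=0\bigr\},
\end{equation*}
and let $A$ be the Dirichlet realization of $-\tfrac{d^2}{dx^2}$, ${\mathrm{dom\,}} A=\{f\in H^2(0,\infty):f(0)=0\}$, so that $\rho(A)=\dC\backslash[0,\infty)$. Here $n_\pm(S)=1$ and hence $T=S^*$ and ${\mathcal G}={\mathcal G}_0=\dC$. For $\lambda\in\rho(A)$ the defect space $\cN_\lambda(S^*)$ is spanned by $e_\lambda(x)=e^{i\sqrt{\lambda}x}$, where the branch of the square root with ${\rm Im\,}\sqrt{\lambda}>0$ is chosen. With the choice $\Gamma(\lambda)c:=c\,e_\lambda$, $c\in\dC$, which is consistent with Lemma~\ref{gamlem} since functions in ${\mathrm{dom\,}} A$ vanish at $0$, a short computation gives
\begin{equation*}
(\lambda-\bar\mu)\Gamma(\mu)^*\Gamma(\lambda)=(\lambda-\bar\mu)(e_\lambda,e_\mu)_{L^2(0,\infty)} =i\sqrt{\lambda}-\overline{i\sqrt{\mu}},\qquad\lambda,\mu\in\rho(A).
\end{equation*}
Hence $Q(\lambda)=i\sqrt{\lambda}$ satisfies \eqref{q} and is an ordinary $Q$-function of the pair $\{S,A\}$; it coincides with the classical Titchmarsh--Weyl function of $-\tfrac{d^2}{dx^2}$ on $(0,\infty)$ with Dirichlet boundary condition and may be viewed as a one-dimensional analogue of the Dirichlet-to-Neumann map studied in Section~\ref{ellops} below.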
\begin{theorem}\label{qthmgen1} Let ${\mathcal G}_0$ be a dense subspace of ${\mathcal G}$, $\lambda_0\in\dC\backslash\dR$, and let $Q$ be a function defined on $\dC\backslash\dR$ whose values $Q(\lambda)$ are linear operators in ${\mathcal G}$ with ${\mathrm{dom\,}} Q(\lambda)={\mathcal G}_0$, $\lambda\in\dC\backslash\dR$. Then the following assertions are equivalent: \begin{enumerate} \item [{\rm (i)}] $Q$ is a generalized $Q$-function of a triple $\{S,A,T\}$, where $S$ is a simple symmetric operator in some separable Hilbert space $\cH$, $A$ is a selfadjoint extension of $S$ in $\cH$ and $A\subset T\subset S^*$ with $\overline T=S^*$; \item [{\rm (ii)}] There exists a unique $\cL({\mathcal G})$-valued Nevanlinna function $\widetilde Q$ with the properties {\rm ($\alpha$), ($\beta$)} and {\rm ($\gamma$)}: \begin{enumerate} \item [{\rm ($\alpha$)}] The relations \begin{equation*} Q(\lambda)h-{\rm Re\,} Q(\lambda_0)h=\widetilde Q(\lambda)h \end{equation*} and \begin{equation*} Q(\lambda)^*h-{\rm Re\,} Q(\lambda_0)h=\widetilde Q(\lambda)^*h \end{equation*} hold for all $h\in{\mathcal G}_0$ and $\lambda\in\dC\backslash\dR$; \item [{\rm ($\beta$)}] ${\rm Im\,} \widetilde Q(\lambda)h=0$ for some $h\in{\mathcal G}_0$ and $\lambda\in\dC\backslash\dR$ implies $h=0$; \item [{\rm ($\gamma$)}] The conditions \begin{equation*}\label{conds} \lim_{\eta\rightarrow +\infty} \frac{1}{\eta}(\widetilde Q(i\eta)k,k)=0\quad \text{and}\quad \lim_{\eta\rightarrow +\infty} \eta\,{\rm Im\,} (\widetilde Q(i\eta) k,k)=\infty \end{equation*} are valid for all $k\in{\mathcal G}$, $k\not=0$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} We start by showing that (i) implies (ii). For this, let $Q$ be a generalized $Q$-function of the triple $\{S,A,T\}$ and suppose that $S$ is simple. Let $\Gamma_{\lambda_0}$ be a bounded operator defined on ${\mathrm{dom\,}} Q(\lambda)={\mathcal G}_0$ such that ${\mathrm{ran\,}}\Gamma_{\lambda_0}=\cN_{\lambda_0}(T)$ and $\ker\Gamma_{\lambda_0}=\{0\}$. According to Proposition~\ref{formq} for each $\lambda\in\dC\backslash\dR$ \begin{equation*} Q(\lambda)-{\rm Re\,} Q(\lambda_0)=\Gamma_{\lambda_0}^*\bigl((\lambda-{\rm Re\,}\lambda_0)+(\lambda-\lambda_0)(\lambda-\bar\lambda_0) (A-\lambda)^{-1}\bigr)\Gamma_{\lambda_0} \end{equation*} is a bounded operator in ${\mathcal G}$ defined on the dense subspace ${\mathcal G}_0$ and hence admits a unique bounded extension onto ${\mathcal G}$ which is given by \begin{equation}\label{qtilde} \widetilde Q(\lambda):=\Gamma_{\lambda_0}^*\bigl((\lambda-{\rm Re\,}\lambda_0)+(\lambda-\lambda_0)(\lambda-\bar\lambda_0) (A-\lambda)^{-1}\bigr)\overline\Gamma_{\lambda_0}, \end{equation} where $\overline\Gamma_{\lambda_0}\in\cL({\mathcal G},\cH)$ is the closure of $\Gamma_{\lambda_0}$.
Obviously we have \begin{equation*} Q(\lambda)h-{\rm Re\,} Q(\lambda_0)h=\widetilde Q(\lambda)h \end{equation*} for all $h\in{\mathcal G}_0$ and $\lambda\in\dC\backslash\dR$, which is the first relation in ($\alpha$). Recall that for a generalized $Q$-function $Q(\bar\lambda)^*$ is an extension of $Q(\lambda)$. This implies ${\rm Re\,} Q(\lambda_0)\subset ({\rm Re\,} Q(\lambda_0))^*$, \begin{equation*} Q(\lambda)^*-{\rm Re\,} Q(\lambda_0) \subset\bigl(Q(\lambda)-{\rm Re\,} Q(\lambda_0)\bigr)^*=\widetilde Q(\lambda)^* \end{equation*} and therefore also $Q(\lambda)^*h-{\rm Re\,} Q(\lambda_0)h=\widetilde Q(\lambda)^*h$ is true for all $h\in{\mathcal G}_0$ and $\lambda\in\dC\backslash\dR$. Hence we have shown ($\alpha$). Clearly $\widetilde Q$ in \eqref{qtilde} is a holomorphic $\cL({\mathcal G})$-valued function on $\dC\backslash\dR$. Denote by $\overline{\Gamma(\lambda)}$ the closure of $\Gamma(\lambda)=(I+(\lambda-\lambda_0)(A-\lambda)^{-1})\Gamma_{\lambda_0}$. Then \begin{equation*} \overline{\Gamma(\lambda)}=\bigl(I+(\lambda-\lambda_0)(A-\lambda)^{-1}\bigr)\overline \Gamma_{\lambda_0}, \qquad\lambda\in\dC\backslash\dR, \end{equation*} and it is not difficult to see that \eqref{q} extends to \begin{equation*} \widetilde Q(\lambda)-\widetilde Q(\mu)^*=(\lambda-\bar\mu)\Gamma(\mu)^*\overline{\Gamma(\lambda)}. \end{equation*} Hence \begin{equation*} \bigl({\rm Im\,}\widetilde Q(\lambda)k,k\bigr)= ({\rm Im\,} \lambda) \bigl(\Gamma(\lambda)^*\overline{\Gamma(\lambda)}k,k\bigr) = ({\rm Im\,} \lambda) \Vert \overline{\Gamma(\lambda)}k\Vert^2 \end{equation*} holds for all $k\in{\mathcal G}$ and this implies that $\widetilde Q$ is a Nevanlinna function, cf. \eqref{imqpos}. Furthermore, for $h\in{\mathcal G}_0$ we have \begin{equation*} {\rm Im\,}\widetilde Q(\lambda) h=({\rm Im\,}\lambda) \Gamma(\lambda)^*\Gamma(\lambda) h \end{equation*} and from the property $\ker\Gamma(\lambda)=\{0\}$, cf. Lemma~\ref{gamlem}, we conclude that ${\rm Im\,}\widetilde Q(\lambda) h=0$ for $h\in{\mathcal G}_0$ implies $h=0$, i.e., condition ($\beta$) holds. The same arguments as in \cite[Theorem 2.4, Corollaries 2.5 and 2.6]{LT77} together with the assumption that $S$ is a densely defined closed simple symmetric operator show that $\widetilde Q$ satisfies the conditions in ($\gamma$). \vskip 0.3cm\noindent Let us now verify the converse direction. If $\widetilde Q$ is an $\cL({\mathcal G})$-valued Nevanlinna function, $\lambda_0\in\dC\backslash\dR$ and the first condition in ($\gamma$) holds, then it is well known that there exists a Hilbert space $\cH$, a selfadjoint operator $A$ in $\cH$ and a mapping $\widetilde\Gamma\in\cL({\mathcal G},\cH)$ such that the representation \begin{equation}\label{qtilderep} \widetilde Q(\lambda)={\rm Re\,}\widetilde Q(\lambda_0)+\widetilde\Gamma^*\bigl((\lambda-{\rm Re\,}\lambda_0)+ (\lambda-\lambda_0)(\lambda-\overline\lambda_0)(A-\lambda)^{-1}\bigr)\widetilde\Gamma \end{equation} is valid for all $\lambda\in\dC\backslash\dR$, see, e.g., \cite{HSW98,LT77}.
Furthermore, the space $\cH$ can be chosen minimal, i.e., \begin{equation}\label{min} \cH={\rm \overline{span}\, }\bigl\{\bigl(I+(\lambda-\lambda_0)(A-\lambda)^{-1}\bigr)\widetilde\Gamma k: k\in{\mathcal G},\,\lambda\in\dC\backslash\dR\bigr\}. \end{equation} We define the mapping $\Gamma_{\lambda_0}$ to be the restriction of $\widetilde\Gamma$ to ${\mathcal G}_0$. As $\widetilde\Gamma$ is bounded the closure $\overline\Gamma_{\lambda_0}$ of $\Gamma_{\lambda_0}$ coincides with $\widetilde\Gamma$. We claim that $\Gamma_{\lambda_0}$ is injective. In fact, if $\Gamma_{\lambda_0}h=0$ for some $h\in{\mathcal G}_0$ then $\widetilde\Gamma h=0$ and by \eqref{qtilderep} we have $\widetilde Q(\lambda) h={\rm Re\,}\widetilde Q(\lambda_0)h$. Therefore ${\rm Im\,}\widetilde Q(\lambda)h=0$ and by assumption ($\beta$) this implies $h=0$. Define the operator $S$ by \begin{equation*} Sf=Af,\quad {\mathrm{dom\,}} S=\bigl\{f\in{\mathrm{dom\,}} A: ((A-\bar\lambda_0)f,\Gamma_{\lambda_0}h)=0\,\,\text{for all}\,\, h\in{\mathcal G}_0\bigr\}. \end{equation*} Then $S$ is a closed symmetric operator and the identities ${\mathrm{ran\,}}(S-\bar\lambda_0)=({\mathrm{ran\,}} \Gamma_{\lambda_0})^\bot$ and $\ker(S^*-\lambda_0)=\overline{{\mathrm{ran\,}}\Gamma_{\lambda_0}}$ hold. Let \begin{equation}\label{gamlam} \Gamma(\lambda)=(I+(\lambda-\lambda_0)(A-\lambda)^{-1})\Gamma_{\lambda_0},\qquad \lambda\in\dC\backslash\dR. \end{equation} It is not difficult to check that ${\mathrm{ran\,}}(S-\bar\lambda)=({\mathrm{ran\,}}\Gamma(\lambda))^\bot$ is true for all $\lambda\in\dC\backslash\dR$ and the conditions in ($\gamma$) together with \eqref{min} now yield in the same way as in \cite[Theorem 2.4, Corollaries 2.5 and 2.6]{LT77} that $S$ is densely defined and simple. Note that ${\mathrm{dom\,}} A\cap{\mathrm{ran\,}} \Gamma_{\lambda_0}=\{0\}$ since $\lambda_0\in\rho(A)$ and ${\mathrm{ran\,}}\Gamma_{\lambda_0}\subset\cN_{\lambda_0}(S^*)$. Let us define a linear operator $T$ in $\cH$ on ${\mathrm{dom\,}} T:={\mathrm{dom\,}} A\,\dot+\,{\mathrm{ran\,}}\Gamma_{\lambda_0}$ by \begin{equation*} T(f+f_{\lambda_0}):=Af + \lambda_0f_{\lambda_0},\qquad f\in{\mathrm{dom\,}} A,\,\, f_{\lambda_0}\in{\mathrm{ran\,}}\Gamma_{\lambda_0}. \end{equation*} Obviously $T$ is an extension of $A$ and since $\cN_{\lambda_0}(T)={\mathrm{ran\,}}\Gamma_{\lambda_0}$ and ${\mathrm{ran\,}}\Gamma_{\lambda_0}$ is dense in $\cN_{\lambda_0}(S^*)$ we obtain from ${\mathrm{dom\,}} S^*={\mathrm{dom\,}} A\,\dot+\,\cN_{\lambda_0}(S^*)$, cf. \eqref{decoall}, that $T\subset S^*$ and $\overline T=S^*$ holds. According to condition ($\alpha$) the Nevanlinna function $\widetilde Q$ and the function $Q$ are related by \begin{equation*} Q(\lambda)h=\widetilde Q(\lambda)h+{\rm Re\,} Q(\lambda_0)h\quad\text{and}\quad Q(\lambda)^*h=\widetilde Q(\lambda)^*h+{\rm Re\,} Q(\lambda_0)h \end{equation*} for all $h\in{\mathcal G}_0$ and $\lambda\in\dC\backslash\dR$. It remains to show that $Q$ satisfies \eqref{q}. Observe first that for $\lambda,\mu\in\dC\backslash\dR$ we have \begin{equation}\label{qcheck} Q(\lambda)h-Q(\mu)^*h=\widetilde Q(\lambda)h-\widetilde Q(\mu)^*h. \end{equation} Denote the closures of the operators $\Gamma(\lambda)$, $\lambda\in\dC\backslash\dR$, in \eqref{gamlam} by $\widetilde\Gamma(\lambda)$.
Then \begin{equation*} \widetilde\Gamma(\lambda)=\overline{\Gamma(\lambda)}=\bigl(I+(\lambda-\lambda_0)(A-\lambda)^{-1}\bigr) \overline \Gamma_{\lambda_0} =\bigl(I+(\lambda-\lambda_0)(A-\lambda)^{-1}\bigr)\widetilde\Gamma \end{equation*} and it follows from \eqref{qtilderep} with a straightforward calculation that \begin{equation}\label{qtildegam} \widetilde Q(\lambda)-\widetilde Q(\mu)^*=(\lambda-\bar\mu)\widetilde\Gamma(\mu)^*\widetilde\Gamma(\lambda), \qquad\lambda,\mu\in\dC\backslash\dR, \end{equation} holds. As $\widetilde\Gamma(\mu)^*=\overline{\Gamma(\mu)}^{\,*}=\Gamma(\mu)^*$ we conclude \begin{equation*} Q(\lambda)h-Q(\mu)^*h=(\lambda-\bar\mu)\Gamma(\mu)^*\Gamma(\lambda)h,\qquad h\in{\mathcal G}_0, \end{equation*} from \eqref{qcheck}. Therefore $Q$ is a generalized $Q$-function of the triple $\{S,A,T\}$. \end{proof} \begin{remark} The definition of a generalized $Q$-function can be extended to the case that $A$ is a selfadjoint relation, $S$ is a non-densely defined symmetric operator or relation and $T$ is a linear relation which is dense in the relation $S^*$. We refer to \cite{LT77} for ordinary $Q$-functions in this more general situation. In this case the condition {\rm ($\gamma$)} in Theorem~\ref{qthmgen1} can be dropped. \end{remark} For ordinary $Q$-functions Theorem~\ref{qthmgen1} reads as follows, cf. \cite[Theorem 2.2 and Theorem 2.4]{LT77}. \begin{theorem}\label{qthmord} An $\cL({\mathcal G})$-valued Nevanlinna function $\widetilde Q$ is an ordinary $Q$-function of some pair $\{S,A\}$, where $S$ is a densely defined closed simple symmetric operator in some Hilbert space $\cH$ and $A$ is a selfadjoint extension of $S$ in $\cH$, if and only if condition ($\gamma$) in Theorem~\ref{qthmgen1} and $0\in\rho({\rm Im\,} \widetilde Q(\lambda))$ hold for some, and hence for all, $\lambda\in\dC\backslash\dR$. \end{theorem} \begin{corollary}\label{derivcor} Let $Q$ be a generalized $Q$-function of $\{S,A,T\}$ and let $\widetilde Q$ be the $\cL({\mathcal G})$-valued Nevanlinna function in Theorem~\ref{qthmgen1}. Then for all $\lambda\in\dC\backslash\dR$ and $h\in{\mathcal G}_0$ we have \begin{equation*} \frac{d}{d\lambda}\,Q(\lambda) h=\frac{d}{d\lambda}\,\widetilde Q(\lambda)h=\Gamma(\bar\lambda)^*\Gamma(\lambda)h. \end{equation*} \end{corollary} \begin{proof} It follows from \eqref{qtildegam} that \begin{equation*} \frac{d}{d\lambda}\,\widetilde Q(\lambda)= \lim_{\bar\mu\rightarrow\lambda}\,\frac{\widetilde Q(\lambda)-\widetilde Q(\mu)^*}{\lambda-\bar\mu} =\widetilde\Gamma(\bar\lambda)^*\widetilde\Gamma(\lambda) \end{equation*} holds. Hence condition ($\alpha$) in Theorem~\ref{qthmgen1} and $\widetilde\Gamma(\lambda)=\overline{\Gamma(\lambda)}$ imply \begin{equation*} \frac{d}{d\lambda}\,Q(\lambda)h =\lim_{\bar\mu\rightarrow\lambda}\,\frac{Q(\lambda)h- Q(\mu)^*h}{\lambda-\bar\mu} =\lim_{\bar\mu\rightarrow\lambda}\,\frac{\widetilde Q(\lambda)h-\widetilde Q(\mu)^*h}{\lambda-\bar\mu} =\Gamma(\bar\lambda)^*\Gamma(\lambda)h \end{equation*} for $h\in{\mathcal G}_0$. \end{proof} \section{Elliptic operators and the Dirichlet-to-Neumann map}\label{ellops} Let $\Omega\subset\dR^n$ be a bounded or unbounded domain with compact $C^\infty$-boundary $\partial\Omega$.
Let $\cL$ be the ``formally selfadjoint'' uniformly elliptic second order differential expression \begin{equation}\label{cl} (\cL f)(x):=-\sum_{j,k=1}^n \left( \frac{\partial}{\partial x_j} a_{jk} \frac{\partial f}{\partial x_k}\right)(x)+ a(x)f(x), \end{equation} $x\in\Omega$, with bounded infinitely differentiable coefficients $a_{jk}\in C^\infty(\overline\Omega)$ satisfying $a_{jk}(x)=\overline{a_{kj}(x)}$ for all $x\in\overline\Omega$ and $j,k=1,\dots,n$, the function $a\in L^\infty(\Omega)$ is real valued and \begin{equation}\label{elliptic} \sum_{j,k=1}^n a_{jk}(x)\xi_j\xi_k\geq C\sum_{k=1}^n\xi_k^2 \end{equation} holds for some $C>0$, all $\xi=(\xi_1,\dots,\xi_n)^\top\in\dR^n$ and $x\in\overline\Omega$. We note that the assumptions on the domain $\Omega$ and the coefficients of $\cL$ can be relaxed but it is not our aim to treat the most general setting here. We refer the reader to, e.g., \cite{G85,LM72,M,W} for possible generalizations. In the following we consider the selfadjoint realizations of $\cL$ in $L^2(\Omega)$ subject to Dirichlet and Neumann boundary conditions. For a function $f$ in the Sobolev space $H^2(\Omega)$ we denote the trace by $f\vert_{\partial\Omega}$ and the trace of the conormal derivative is defined by \begin{equation*} \frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}:=\sum_{j,k=1}^n a_{jk} n_j \frac{\partial f}{\partial x_k} \Bigl|_{\partial\Omega}; \end{equation*} here $n(x)=(n_1(x),\dots, n_n(x))^\top$ is the unit normal vector at the point $x\in\partial\Omega$ pointing out of $\Omega$. Recall that the mapping $C^\infty(\overline\Omega)\ni f\mapsto\bigl\{f|_{\partial\Omega}, \tfrac{\partial f}{\partial\nu}\bigl|_{\partial\Omega}\bigr\}$ extends by continuity to a continuous surjective mapping \begin{equation}\label{tracemap} H^2(\Omega)\ni f\mapsto \left\{f|_{\partial\Omega},\frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}\right\} \in H^{3/2}(\partial\Omega)\times H^{1/2}(\partial\Omega). \end{equation} The kernel of this map is \begin{equation*} H^2_0(\Omega)=\left\{f\in H^2(\Omega): f\vert_{\partial\Omega}=\frac{\partial f}{\partial\nu} \Bigl|_{\partial\Omega}=0\right\} \end{equation*} which coincides with the closure of $C_0^\infty(\Omega)$ in $H^2(\Omega)$. We refer the reader to the monographs \cite{LM72,M,W} for more details. In the following the scalar products in $L^2(\Omega)$ and $L^2(\partial\Omega)$ are denoted by $(\cdot,\cdot)_\Omega$ and $(\cdot,\cdot)_{\partial\Omega}$, respectively. Then Green's identity \begin{equation}\label{greenid} (\cL f,g)_\Omega-(f,\cL g)_\Omega= \left(f\vert_{\partial\Omega},\frac{\partial g}{\partial\nu}\Bigl|_{\partial\Omega}\right)_{\partial\Omega} -\left(\frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega},g\vert_{\partial\Omega} \right)_{\partial\Omega} \end{equation} holds for all functions $f,g\in H^2(\Omega)$. We note that \eqref{greenid} is even true for $f\in H^2(\Omega)$ and $g$ belonging to the domain of the maximal operator associated to $\cL$ in $L^2(\Omega)$ if the $(\cdot,\cdot)_{\partial\Omega}$ scalar product in $L^2(\partial\Omega)$ is extended by continuity to $H^{3/2}(\partial\Omega)\times H^{-3/2}(\partial\Omega)$ and $H^{1/2}(\partial\Omega)\times H^{-1/2}(\partial\Omega)$, respectively, see \cite{LM72,W}. However, we shall make use of \eqref{greenid} only for the case $f,g\in H^2(\Omega)$.
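For orientation we briefly record the simplest special case; it serves only as an illustration and is not used in the arguments below. If $a_{jk}=\delta_{jk}$ and $a=0$, then $\cL=-\Delta$ is the negative Laplacian, the conormal derivative reduces to the usual outward normal derivative,
\begin{equation*}
\frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}=\sum_{k=1}^n n_k\,\frac{\partial f}{\partial x_k}\Bigl|_{\partial\Omega},
\end{equation*}
and \eqref{greenid} becomes the classical second Green identity
\begin{equation*}
(-\Delta f,g)_\Omega-(f,-\Delta g)_\Omega= \left(f\vert_{\partial\Omega},\frac{\partial g}{\partial\nu}\Bigl|_{\partial\Omega}\right)_{\partial\Omega} -\left(\frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega},g\vert_{\partial\Omega}\right)_{\partial\Omega}, \qquad f,g\in H^2(\Omega).
\end{equation*}
In this case the operators $A_D$ and $A_N$ in \eqref{adan} below are the usual Dirichlet and Neumann realizations of $-\Delta$ in $L^2(\Omega)$.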
It is well known that the realizations $A_D$ and $A_N$ of $\cL$ subject to Dirichlet and Neumann boundary conditions defined by \begin{equation}\label{adan} \begin{split} A_D f&=\cL f,\quad {\mathrm{dom\,}} A_D=\bigl\{f\in H^2(\Omega): f\vert_{\partial\Omega}=0\bigr\},\\ A_N f&=\cL f,\quad {\mathrm{dom\,}} A_N=\Bigl\{f\in H^2(\Omega): \frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}=0\Bigr\}, \end{split} \end{equation} are selfadjoint operators in $L^2(\Omega)$. The following statement is known and can be found in, e.g., \cite{LM72}. It can be proved with similar methods as Theorem~\ref{opscoup} in the next section. \begin{proposition}\label{opprop} Let $\cL$ be the elliptic differential expression in \eqref{cl}. Then the operator \begin{equation}\label{minop} Sf=\cL f,\qquad {\mathrm{dom\,}} S=H^2_0(\Omega), \end{equation} is a densely defined closed symmetric operator in $L^2(\Omega)$ with infinite deficiency indices $n_\pm(S)$ and the adjoint $S^*$ of $S$ coincides with the maximal operator associated to $\cL$, \begin{equation*} S^*f=\cL f,\qquad {\mathrm{dom\,}} S^*=\bigl\{f\in L^2(\Omega):\cL f\in L^2(\Omega)\bigr\}. \end{equation*} The operator \begin{equation*} Tf=\cL f,\qquad {\mathrm{dom\,}} T=H^2(\Omega), \end{equation*} is not closed as an operator in $L^2(\Omega)$ and $T$ satisfies $\overline T=S^*$ and $T^*=S$. Furthermore, the selfadjoint operators $A_D$ and $A_N$ in \eqref{adan} are extensions of $S$ and restrictions of $T$. \end{proposition} In order to define a mapping $\Gamma_{\lambda_0}$ for the definition of a generalized $Q$-function associated to the triple $\{S,A_D,T\}$ we make use of the decomposition \eqref{decoall} in the present situation. More precisely, for all points $\lambda$ in the resolvent set $\rho(A_D)$ of the selfadjoint Dirichlet operator $A_D$ we have the direct sum decomposition of ${\mathrm{dom\,}} T=H^2(\Omega)$: \begin{equation}\label{h2deco} H^2(\Omega)={\mathrm{dom\,}} A_D\,\dot +\,\cN_\lambda(T)=\bigl\{f\in H^2(\Omega):f\vert_{\partial\Omega}=0\bigr\}\,\dot +\, \cN_\lambda(T), \end{equation} where \begin{equation*} \cN_\lambda(T)=\ker(T-\lambda)=\bigl\{f_\lambda\in H^2(\Omega): \cL f_\lambda=\lambda f_\lambda \bigr\}. \end{equation*} Let now $\varphi$ be a function in $H^{3/2}(\partial\Omega)$ and let ${\lambda_0}\in\rho(A_D)$. Then it follows from \eqref{tracemap} and \eqref{h2deco} that there exists a unique function $f_{\lambda_0}\in H^2(\Omega)$ which solves the equation $\cL f_{\lambda_0}=\lambda_0 f_{\lambda_0}$, i.e., $f_{\lambda_0}\in\cN_{\lambda_0}(T)$, and satisfies $f_{\lambda_0}\vert_{\partial\Omega}=\varphi$. We shall denote the mapping that assigns $f_{\lambda_0}$ to $\varphi$ by $\Gamma_{\lambda_0}$, \begin{equation}\label{Gammalambda0} H^{3/2}(\partial\Omega)\ni \varphi\mapsto \Gamma_{\lambda_0}\varphi :=f_{\lambda_0}\in\cN_{\lambda_0}(T), \end{equation} and we regard $\Gamma_{\lambda_0}$ as an operator from $L^2(\partial\Omega)$ into $L^2(\Omega)$ with ${\mathrm{dom\,}} \Gamma_{\lambda_0}=H^{3/2}(\partial\Omega)$ and ${\mathrm{ran\,}}\Gamma_{\lambda_0}=\cN_{\lambda_0}(T)$. \begin{proposition}\label{Gammalambda0prop} Let $\lambda_0\in\rho(A_D)$, let $\Gamma_{\lambda_0}$ be as in \eqref{Gammalambda0} and let $\lambda\in\rho(A_D)$. 
Then the following holds: \begin{enumerate} \item [{\rm (i)}] $\Gamma_{\lambda_0}$ is a bounded operator from $L^2(\partial\Omega)$ in $L^2(\Omega)$ with dense domain $H^{3/2}(\partial\Omega)$; \item [{\rm (ii)}] The operator $\Gamma(\lambda)=(I+(\lambda-\lambda_0)(A_D-\lambda)^{-1})\Gamma_{\lambda_0}$ is given by \begin{equation*} \Gamma(\lambda) \varphi=f_\lambda,\quad\text{where}\quad f_\lambda\in\cN_\lambda(T)\,\,\,\,\text{and}\,\,\,\, f_\lambda\vert_{\partial\Omega}=\varphi; \end{equation*} \item [{\rm (iii)}] The mapping $\Gamma(\bar\lambda)^*:L^2(\Omega)\rightarrow L^2(\partial\Omega)$ satisfies \begin{equation*} \Gamma(\bar\lambda)^*(A_D-\lambda)f=-\frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega},\qquad f\in{\mathrm{dom\,}} A_D. \end{equation*} \end{enumerate} \end{proposition} \begin{proof} Statement (i) will be a consequence of (iii). We prove assertion (ii). Recall that by Lemma~\ref{gamlem} the range of the operator $\Gamma(\lambda)$, $\lambda\in\rho(A_D)$, is $\cN_\lambda(T)$. Let $\varphi\in {\mathrm{dom\,}}\Gamma(\lambda)=H^{3/2}(\partial\Omega)$ and choose elements $f_\lambda\in\cN_\lambda(T)$ and $f_{\lambda_0}\in\cN_{\lambda_0}(T)$ such that \begin{equation*} f_\lambda\vert_{\partial\Omega}=\varphi=f_{\lambda_0}\vert_{\partial\Omega} \end{equation*} holds. According to \eqref{h2deco} the functions $f_\lambda$ and $f_{\lambda_0}$ are unique. Then $\Gamma_{\lambda_0}\varphi=f_{\lambda_0}$ and hence we obtain \begin{equation*} \Gamma(\lambda)\varphi=\Gamma_{\lambda_0}\varphi+(\lambda-\lambda_0)(A_D-\lambda)^{-1}\Gamma_{\lambda_0} \varphi =f_{\lambda_0}+(\lambda-\lambda_0)(A_D-\lambda)^{-1}\Gamma_{\lambda_0} \varphi. \end{equation*} Since $(\lambda-\lambda_0)(A_D-\lambda)^{-1}\Gamma_{\lambda_0} \varphi$ belongs to ${\mathrm{dom\,}} A_D$ it is clear that the trace of this element vanishes. Therefore, the traces of the functions $\Gamma(\lambda)\varphi \in\cN_\lambda(T)$ and $f_{\lambda_0}$ coincide, \begin{equation*} (\Gamma(\lambda) \varphi)\vert_{\partial\Omega}=f_{\lambda_0}\vert_{\partial\Omega}=\varphi=f_\lambda\vert_{\partial\Omega}. \end{equation*} Thus we have that the traces of $\Gamma(\lambda)\varphi \in\cN_\lambda(T)$ and $f_\lambda\in\cN_\lambda(T)$ coincide and from \eqref{h2deco} we conclude $\Gamma(\lambda)\varphi =f_\lambda$. \vskip 0.3cm\noindent (iii) Let $\varphi\in H^{3/2}(\partial\Omega)$ and choose the unique function $g_{\bar\lambda}\in\cN_{\bar\lambda}(T)$ with the property $g_{\bar\lambda}\vert_{\partial\Omega}=\varphi$. Hence we have $\Gamma(\bar\lambda)\varphi=g_{\bar\lambda}$ and for $f\in{\mathrm{dom\,}} A_D$ it follows \begin{equation*} \bigl(\Gamma(\bar\lambda)\varphi,(A_D-\lambda)f\bigr)_\Omega= (g_{\bar\lambda},A_D f)_\Omega-(\bar\lambda g_{\bar\lambda},f)_\Omega= (g_{\bar\lambda},A_D f)_\Omega-(T g_{\bar\lambda},f)_\Omega. \end{equation*} Making use of Green's identity \eqref{greenid} we find \begin{equation*} (g_{\bar\lambda},A_D f)_\Omega-(T g_{\bar\lambda},f)_\Omega= \left(\frac{\partial g_{\bar\lambda}}{\partial\nu}\Bigl|_{\partial\Omega},f\vert_{\partial\Omega}\right)_{\partial\Omega} -\left(g_{\bar\lambda}\vert_{\partial\Omega}, \frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}\right)_{\partial\Omega} \end{equation*} and since the trace of $f\in{\mathrm{dom\,}} A_D$ vanishes the first summand on the right hand side is zero. 
Therefore \begin{equation*} \bigl(\Gamma(\bar\lambda)\varphi,(A_D-\lambda)f\bigr)_\Omega=-\left(g_{\bar\lambda}\vert_{\partial\Omega}, \frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}\right)_{\partial\Omega}=\left(\varphi,- \frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}\right)_{\partial\Omega} \end{equation*} holds for all $\varphi\in{\mathrm{dom\,}}\Gamma(\bar\lambda)=H^{3/2}(\partial\Omega)$. This gives $(A_D-\lambda)f\in{\mathrm{dom\,}}\Gamma(\bar\lambda)^*$ and \begin{equation*} \Gamma(\bar\lambda)^*(A_D-\lambda)f=-\frac{\partial f}{\partial\nu}\Bigl|_{\partial\Omega}. \end{equation*} Moreover, as $\lambda\in\rho(A_D)$ and $f\in{\mathrm{dom\,}} A_D$ was arbitrary we see that $\Gamma(\bar\lambda)^*$ is defined on the whole space $L^2(\Omega)$. This together with the fact that $\Gamma(\bar\lambda)^*$ is closed implies \begin{equation*} \Gamma(\bar\lambda)^*\in\cL\bigl(L^2(\Omega),L^2(\partial\Omega)\bigr) \end{equation*} for $\lambda\in\rho(A_D)$ and, in particular, $\Gamma(\bar\lambda)\subset\overline{\Gamma(\bar\lambda)}=\Gamma(\bar\lambda)^{**}$ is bounded. Inserting $\lambda_0=\bar\lambda$ this yields assertion (i). \end{proof} In the study of elliptic differential operators the so-called Dirichlet-to-Neumann map plays an important role; we mention only \cite{AP04,BMNW08,GLMZ05,GMZ07,GMZ07-2,GM08,GM08-2,G68,M04,MPP07,MPR07,P07,P08,PR09,Post07,V52}. Roughly speaking, this operator maps the Dirichlet boundary value $f_\lambda\vert_{\partial\Omega}$ of an $H^2(\Omega)$-solution of the equation $\cL u=\lambda u$ onto the Neumann boundary value $\tfrac{\partial f_\lambda}{\partial\nu}|_{\partial\Omega}$ of this solution. In the following definition a minus sign also appears; it is needed so that $Q$ itself, and not $-Q$, becomes a generalized $Q$-function in Theorem~\ref{bigthm1}. \begin{definition}\label{dirneu} Let $\lambda\in\rho(A_D)$ and assign to $\varphi\in H^{3/2}(\partial\Omega)$ the unique function $f_\lambda\in\cN_\lambda(T)$ such that $f_\lambda\vert_{\partial\Omega}=\varphi$, see \eqref{tracemap} and \eqref{h2deco}. The operator $Q(\lambda)$ in $L^2(\partial\Omega)$ defined by \begin{equation}\label{dnmap} Q(\lambda) \varphi=Q(\lambda)(f_\lambda|_{\partial\Omega}):=-\frac{\partial f_\lambda}{\partial\nu}\Bigl|_{\partial\Omega}, \qquad \varphi\in {\mathrm{dom\,}} Q(\lambda)=H^{3/2}(\partial\Omega), \end{equation} is called the {\em Dirichlet-to-Neumann map} associated to $\cL$. \end{definition} Note that by \eqref{tracemap} the range of the Dirichlet-to-Neumann map $Q(\lambda)$, $\lambda\in\rho(A_D)$, lies in $H^{1/2}(\partial\Omega)$. We remark that the Dirichlet-to-Neumann map can be extended, e.g., to an operator from $H^1(\partial\Omega)$ into $L^2(\partial\Omega)$ if instead of $H^2(\Omega)$ the operator $T$ is defined on a suitable subspace of $H^{3/2}(\Omega)$, cf. \cite{AP04,BF62,B65,BL07,G71,LM72}. However, for our purposes this is not necessary since $A_D$ and $A_N$ are defined on subspaces of $H^2(\Omega)$. In the next theorem we show that the Dirichlet-to-Neumann map is a generalized $Q$-function and we illustrate the usefulness of this object in the representation of the difference of the resolvents of the Dirichlet and Neumann operators $A_D$ and $A_N$ in \eqref{adan}. Similar Krein type resolvent formulas can also be found in \cite{BL07,BGW09,GM08,GM08-2,P08,PR09,Post07}.
The fact that the difference of the resolvents belongs to some von Neumann-Schatten class depending on the dimension of the space is well known and goes back to M.S.~Birman, cf. \cite{B62}. \begin{theorem}\label{bigthm1} Let $\cL$ be the elliptic differential expression in \eqref{cl} and let $A_D$ and $A_N$ be the selfadjoint realizations of $\cL$ in \eqref{adan}. Denote by $S$ the minimal operator associated to $\cL$ and let $T=\cL\upharpoonright H^2(\Omega)$ be as in Proposition~\ref{opprop}. Define $\Gamma(\lambda)$ as in Proposition~\ref{Gammalambda0prop} and let $Q(\lambda)$, $\lambda\in\rho(A_D)$, be the Dirichlet-to-Neumann map. Then the following holds: \begin{enumerate} \item [{\rm (i)}] $Q$ is a generalized $Q$-function of the triple $\{S,A_D,T\}$; \item [{\rm (ii)}] The operator $Q(\lambda)$ is injective for all $\lambda\in\rho(A_D)\cap\rho(A_N)$ and the resolvent formula \begin{equation}\label{resform} (A_D-\lambda)^{-1}-(A_N-\lambda)^{-1}=\Gamma(\lambda) Q(\lambda)^{-1} \Gamma(\bar\lambda)^* \end{equation} holds; \item [{\rm (iii)}] For $p\in\dN$ and $2p+1>n$ the difference of the resolvents in \eqref{resform} belongs to the von Neumann-Schatten class ${\mathfrak S}_p(L^2(\Omega))$. \end{enumerate} \end{theorem} \begin{proof} In order to prove assertion (i) we have to check the relation \begin{equation}\label{qrel1} Q(\lambda)-Q(\mu)^*=(\lambda-\bar\mu)\Gamma(\mu)^*\Gamma(\lambda),\qquad\lambda,\mu\in\rho(A_D), \end{equation} on ${\mathrm{dom\,}} Q(\lambda)\cap{\mathrm{dom\,}} Q(\mu)^*$. For this it will first be shown that ${\mathrm{dom\,}} Q(\lambda)=H^{3/2}(\partial\Omega)$ is a subset of ${\mathrm{dom\,}} Q(\mu)^*$ and that $Q(\mu)^*$ is an extension of $Q(\bar\mu)$. Let $\psi\in H^{3/2}(\partial\Omega)$ and choose the unique function $f_{\bar\mu}\in\cN_{\bar\mu}(T)$ such that $f_{\bar\mu}|_{\partial\Omega}=\psi$. For an arbitrary $\varphi\in{\mathrm{dom\,}} Q(\mu)=H^{3/2}(\partial\Omega)$ let $f_\mu\in\cN_\mu(T)$ be the unique function that satisfies $f_\mu|_{\partial\Omega}=\varphi$. By the definition of the Dirichlet-to-Neumann map we have \begin{equation*} Q(\mu)\varphi=-\frac{\partial f_\mu}{\partial\nu}\Bigl|_{\partial\Omega}\quad\text{and}\quad Q(\bar\mu)\psi=-\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_{\partial\Omega} \end{equation*} and hence Green's identity \eqref{greenid} shows \begin{equation*} \begin{split} (Q(\mu)\varphi,\psi)_{\partial\Omega}& =\left(-\frac{\partial f_\mu}{\partial\nu}\Bigl|_{\partial\Omega},f_{\bar\mu}|_{\partial\Omega} \right)_{\partial\Omega}\\ &= \left(f_\mu|_{\partial\Omega},\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_{\partial\Omega} \right)_{\partial\Omega}- \left(\frac{\partial f_\mu}{\partial\nu}\Bigl|_{\partial\Omega},f_{\bar\mu}|_{\partial\Omega} \right)_{\partial\Omega} + \left(\varphi,-\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_{\partial\Omega} \right)_{\partial\Omega}\\ &=(Tf_\mu,f_{\bar\mu})_\Omega-(f_\mu,T f_{\bar\mu})_\Omega+\left(\varphi,-\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_{\partial\Omega} \right)_{\partial\Omega}. \end{split} \end{equation*} Since $f_\mu\in\cN_\mu(T)$ and $f_{\bar\mu}\in\cN_{\bar\mu}(T)$ it is clear that $(Tf_\mu,f_{\bar\mu})_\Omega=(f_\mu,T f_{\bar\mu})_\Omega$ holds and therefore we obtain \begin{equation*} (Q(\mu)\varphi,\psi)_{\partial\Omega}=\left(\varphi,-\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_{\partial\Omega} \right)_{\partial\Omega} \end{equation*} for all $\varphi\in{\mathrm{dom\,}} Q(\mu)$.
Thus $\psi\in{\mathrm{dom\,}} Q(\mu)^*$ and \begin{equation*} Q(\mu)^*\psi=-\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_{\partial\Omega}=Q(\bar\mu)\psi. \end{equation*} Next we prove the relation \eqref{qrel1}. Let $\varphi,\psi\in H^{3/2}(\partial\Omega)$ and choose the functions $f_\lambda\in\cN_\lambda(T)$ and $g_\mu\in\cN_\mu(T)$ such that $f_\lambda\vert_{\partial\Omega}=\varphi$ and $g_\mu\vert_{\partial\Omega}=\psi$. Hence we have \begin{equation*} Q(\lambda)\varphi=-\frac{\partial f_\lambda}{\partial\nu}\Bigl|_{\partial\Omega},\quad Q(\mu)\psi=-\frac{\partial g_\mu}{\partial\nu}\Bigl|_{\partial\Omega},\quad \Gamma(\lambda)\varphi=f_\lambda\quad\text{and}\quad\Gamma(\mu)\psi=g_\mu. \end{equation*} Note that $\varphi\in H^{3/2}(\partial\Omega)$ belongs to ${\mathrm{dom\,}} Q(\mu)^*$ by the above considerations. With the help of Green's identity \eqref{greenid} we find \begin{equation*} \begin{split} \bigl((Q(\lambda)&-Q(\mu)^*)\varphi,\psi\bigr)_{\partial\Omega}=-\left(\frac{\partial f_\lambda}{\partial\nu}\Bigl|_{\partial\Omega}, g_\mu\vert_{\partial\Omega}\right)_{\partial\Omega}+\left(f_\lambda\vert_{\partial\Omega}, \frac{\partial g_\mu}{\partial\nu}\Bigl|_{\partial\Omega}\right)_{\partial\Omega}\\ &=(T f_\lambda,g_\mu)_\Omega - (f_\lambda, Tg_\mu)_\Omega=(\lambda-\bar\mu)(f_\lambda,g_\mu)_\Omega\\ &=(\lambda-\bar\mu)(\Gamma(\lambda)\varphi,\Gamma(\mu)\psi)_\Omega =\bigl((\lambda-\bar\mu)\Gamma(\mu)^*\Gamma(\lambda)\varphi,\psi\bigr)_{\partial\Omega}. \end{split} \end{equation*} This holds for all $\psi$ in the dense subset $H^{3/2}(\partial\Omega)$ of $L^2(\partial\Omega)$ and therefore \eqref{qrel1} is valid on ${\mathrm{dom\,}} Q(\lambda)={\mathrm{dom\,}}\Gamma(\lambda)=H^{3/2}(\partial\Omega)$, i.e., the Dirichlet-to-Neumann map is a generalized $Q$-function of the triple $\{S,A_D,T\}$. \vskip 0.3cm\noindent (ii) Let $\lambda\in\rho(A_D)\cap\rho(A_N)$ and suppose that we have $Q(\lambda)\varphi=0$ for some $\varphi\in H^{3/2}(\partial\Omega)$. There exists a unique $f_\lambda\in\cN_\lambda(T)$ such that $f_\lambda\vert_{\partial\Omega}=\varphi$ and for this $f_\lambda$ by assumption we have $\tfrac{\partial f_\lambda}{\partial\nu}|_{\partial\Omega}=0$. Hence $f_\lambda\in{\mathrm{dom\,}} A_N\cap\cN_\lambda(T)$ and from $\lambda\in\rho(A_N)$ we conclude $f_\lambda=0$, that is, $\varphi=f_\lambda|_{\partial\Omega}=0$. Therefore $Q(\lambda)^{-1}$, $\lambda\in\rho(A_D)\cap\rho(A_N)$, exists and, roughly speaking, $Q(\lambda)^{-1}$ maps the negative Neumann boundary values of $H^2(\Omega)$-solutions of $\cL u=\lambda u$ onto their Dirichlet boundary values. Let us prove the formula \eqref{resform} for the difference of the resolvents of $A_D$ and $A_N$. Observe first that the right hand side in \eqref{resform} is well defined. In fact, by Proposition~\ref{Gammalambda0prop}~(iii) and \eqref{tracemap} the range of $\Gamma(\bar\lambda)^*$ lies in $H^{1/2}(\partial\Omega)$ and it follows from the surjectivity of the mapping in \eqref{tracemap} that $Q(\lambda)^{-1}$ is defined on the whole space $H^{1/2}(\partial\Omega)$ and maps $H^{1/2}(\partial\Omega)$ onto $H^{3/2}(\partial\Omega)$, the domain of $\Gamma(\lambda)$. Let now $f\in L^2(\Omega)$. We claim that the function \begin{equation}\label{function} g=(A_D-\lambda)^{-1}f-\Gamma(\lambda)Q(\lambda)^{-1}\Gamma(\bar\lambda)^* f \end{equation} belongs to ${\mathrm{dom\,}} A_N$.
It is clear that $g$ is in $H^2(\Omega)$ since $(A_D-\lambda)^{-1}f\in{\mathrm{dom\,}} A_D$ and the second term on the right hand side belongs to $\cN_\lambda(T)$, the range of $\Gamma(\lambda)$. In order to verify $\tfrac{\partial g}{\partial \nu}|_{\partial\Omega}=0$ we choose $f_D\in{\mathrm{dom\,}} A_D$ such that $f=(A_D-\lambda)f_D$, so that \eqref{function} becomes \begin{equation}\label{function2} g=f_D-\Gamma(\lambda)Q(\lambda)^{-1}\Gamma(\bar\lambda)^* (A_D-\lambda)f_D = f_D+\Gamma(\lambda)Q(\lambda)^{-1}\frac{\partial f_D}{\partial\nu}\Bigl|_{\partial\Omega}, \end{equation} where we have used Proposition~\ref{Gammalambda0prop}~(iii). Let $f_\lambda:=\Gamma(\lambda)Q(\lambda)^{-1}\tfrac{\partial f_D}{\partial\nu}|_{\partial\Omega}$. Then $f_\lambda\in\cN_\lambda(T)$ and the trace of $f_\lambda$ is given by \begin{equation*} f_\lambda|_{\partial\Omega}=Q(\lambda)^{-1}\frac{\partial f_D}{\partial\nu}\Bigl|_{\partial\Omega}. \end{equation*} Hence $Q(\lambda)f_\lambda|_{\partial\Omega}=\tfrac{\partial f_D}{\partial\nu}|_{\partial\Omega}$, but on the other hand, by the definition of the Dirichlet-to-Neumann map $Q(\lambda)f_\lambda|_{\partial\Omega}=-\tfrac{\partial f_\lambda}{\partial\nu}|_{\partial\Omega}$. Therefore, the sum of the Neumann boundary value of the function $f_\lambda$ and the Neumann boundary value of $f_D$ is zero and we conclude from \eqref{function2} \begin{equation*} \frac{\partial g}{\partial\nu}\Bigl|_{\partial\Omega}= \frac{\partial f_D}{\partial\nu}\Bigl|_{\partial\Omega}+\frac{\partial f_\lambda}{\partial\nu}\Bigl|_{\partial\Omega}=0. \end{equation*} We have shown that $g$ in \eqref{function} belongs to ${\mathrm{dom\,}} A_N$. As $T$ is an extension of $A_N$ and $A_D$, and ${\mathrm{ran\,}}\Gamma(\lambda)=\ker(T-\lambda)$ we obtain \begin{equation*} (A_N-\lambda) g=(T-\lambda)(A_D-\lambda)^{-1}f-(T-\lambda)\Gamma(\lambda)Q(\lambda)^{-1}\Gamma(\bar\lambda)^* f =f. \end{equation*} Together with \eqref{function} we find \begin{equation*} (A_N-\lambda)^{-1}f=(A_D-\lambda)^{-1}f-\Gamma(\lambda)Q(\lambda)^{-1}\Gamma(\bar\lambda)^* f \end{equation*} for all $\lambda\in\rho(A_D)\cap\rho(A_N)$ and $f\in L^2(\Omega)$, and therefore the resolvent formula \eqref{resform} is valid. \vskip 0.3cm\noindent Up to some small modifications assertion (iii) was proved in \cite{B62}. \end{proof} We mention that for $\lambda,\lambda_0\in\rho(A_D)$ the Dirichlet-to-Neumann map is connected with the resolvent of $A_D$ via \begin{equation*} Q(\lambda)={\rm Re\,} Q(\lambda_0)+\Gamma_{\lambda_0}^*\bigl((\lambda-{\rm Re\,}\lambda_0)+ (\lambda-\lambda_0)(\lambda-\bar\lambda_0)(A_D-\lambda)^{-1}\bigr)\Gamma_{\lambda_0}. \end{equation*} This follows from the fact that $Q$ is a generalized $Q$-function and Proposition~\ref{formq}. The following two corollaries collect some properties of the Dirichlet-to-Neumann map and its inverse. \begin{corollary}\label{prop1} For $\lambda,\lambda_0\in\rho(A_D)$ the Dirichlet-to-Neumann map $Q(\lambda)$ has the following properties.
\begin{enumerate} \item [{\rm (i)}] $Q(\lambda)$ is a non-closed unbounded operator in $L^2(\partial\Omega)$ defined on $H^{3/2}(\partial\Omega)$ with ${\mathrm{ran\,}} Q(\lambda)\subset H^{1/2}(\partial\Omega)$; \item [{\rm (ii)}] $Q(\lambda)-{\rm Re\,} Q(\lambda_0)$ is a non-closed bounded operator in $L^2(\partial\Omega)$ defined on $H^{3/2}(\partial\Omega)$; \item [{\rm (iii)}] the closure $\widetilde Q(\lambda)$ of the operator $Q(\lambda)-{\rm Re\,} Q(\lambda_0)$ in $L^2(\partial\Omega)$ satisfies $$\frac{d}{d\lambda}\,\widetilde Q(\lambda)=\Gamma(\bar\lambda)^*\overline{\Gamma(\lambda)}$$ and $\widetilde Q$ is a $\cL(L^2(\partial\Omega))$-valued Nevanlinna function. \end{enumerate} \end{corollary} \begin{proof} Besides the statement that $Q(\lambda)$ is a non-closed unbounded operator the assertions follow from the fact that $Q$ is a generalized $Q$-function and the results in Section~\ref{genq}. In Corollary~\ref{prop2} it will turn out that $\overline{Q(\lambda)^{-1}}$ is a compact operator and that $Q(\lambda)^{-1}$ is not closed. This implies that $\overline{Q(\lambda)}$ and $Q(\lambda)$ are unbounded and that $Q(\lambda)$ is not closed. \end{proof} \begin{corollary}\label{prop2} For $\lambda\in\rho(A_D)\cap\rho(A_N)$ the inverse $Q(\lambda)^{-1}$ of the Dirichlet-to-Neumann map $Q(\lambda)$ has the following properties. \begin{enumerate} \item [{\rm (i)}] $Q(\lambda)^{-1}$ is a non-closed bounded operator in $L^2(\partial\Omega)$ defined on $H^{1/2}(\partial\Omega)$ with ${\mathrm{ran\,}} Q(\lambda)^{-1}=H^{3/2}(\partial\Omega)$; \item [{\rm (ii)}] the closure $\overline{Q(\lambda)^{-1}}$ is a compact operator in $L^2(\partial\Omega)$; \item [{\rm (iii)}] the function $\lambda\mapsto -\overline{Q(\lambda)^{-1}}$ is a $\cL(L^2(\partial\Omega))$-valued Nevanlinna function. \end{enumerate} \end{corollary} \begin{proof} It is clear that (i) is an immediate consequence of (ii). Statement (iii) follows from Theorem~\ref{qthmgen1} and general properties of the Nevanlinna class. Assertion (ii) is essentially a consequence of the classical results in \cite{LM72}, see also \cite[Theorem~2.1]{G71}. Namely, for $\lambda\in\rho(A_D)\cap\rho(A_N)$ the operator $Q(\lambda):H^{3/2}(\partial\Omega)\rightarrow H^{1/2}(\partial\Omega)$ is an isomorphism and can be extended to an isomorphism $\widehat Q(\lambda):H^{1}(\partial\Omega)\rightarrow L^2(\partial\Omega)$ which acts as in \eqref{dnmap}. Therefore $Q(\lambda)^{-1}\subset \widehat Q(\lambda)^{-1}$ is a densely defined operator in $L^2(\partial\Omega)$ which is bounded as an operator in $H^1(\partial\Omega)$ and hence also bounded when considered as an operator in $L^2(\partial\Omega)$. Its closure $\overline{Q(\lambda)^{-1}}$ in $L^2(\partial\Omega)$ is a bounded everywhere defined operator in $L^2(\partial\Omega)$ with values in $H^1(\partial\Omega)$ and coincides with $\widehat Q(\lambda)^{-1}$. As $H^1(\partial\Omega)$ is compactly embedded in $L^2(\partial\Omega)$ it follows that $\overline{Q(\lambda)^{-1}}$ is a compact operator in $L^2(\partial\Omega)$. \end{proof} The next corollary is a simple consequence of Theorem~\ref{bigthm1} for the case that the difference of the resolvents is a trace class operator. \begin{corollary} Let the assumptions be as in Theorem~\ref{bigthm1}, let $\widetilde Q$ be the Nevanlinna function from Corollary~\ref{prop1} and suppose, in addition, $n=2$. 
Then \begin{equation}\label{traceresform} {\mathrm{tr}}\bigl((A_D-\lambda)^{-1}-(A_N-\lambda)^{-1}\bigr)={\mathrm{tr}}\left(\overline{Q(\lambda)^{-1}}\,\, \frac{d}{d\lambda}\,\widetilde Q(\lambda)\right) \end{equation} holds for all $\lambda\in\rho(A_D)\cap\rho(A_N)$. \begin{proof} The resolvent formula \eqref{resform} can be written in the form \begin{equation}\label{resform234} (A_D-\lambda)^{-1}-(A_N-\lambda)^{-1}=\overline{\Gamma(\lambda)}\,\overline{Q(\lambda)^{-1}}\,\Gamma(\bar\lambda)^*, \end{equation} where the closures $\overline{\Gamma(\lambda)}$ and $\overline{Q(\lambda)^{-1}}$ are everywhere defined bounded operators, cf. Corollary~\ref{prop2}~(ii). In the case $n=2$ it follows from Theorem~\ref{bigthm1}~(iii) that \eqref{resform234} is a trace class operator and from Corollaries~\ref{derivcor}, \ref{prop1}~(iii) and well known properties of the trace of bounded operators (see \cite{GK69}) we conclude \eqref{traceresform}. \end{proof} \section{Coupling of elliptic differential operators}\label{cellops} In this section we study the uniformly elliptic second order differential expression $\cL$ from \eqref{cl} on two different domains and a coupling of the associated Dirichlet operators. More precisely, let $\Omega\subset\dR^n$ be a simply connected bounded domain with $C^\infty$-boundary $\cC:=\partial\Omega$ and let $\Omega^\prime=\dR^n\backslash\overline\Omega$ be the complement of the closure of $\Omega$ in $\dR^n$. Clearly, $\Omega^\prime$ is an unbounded domain with the compact $C^\infty$-boundary $\partial\Omega^\prime=\cC$. Let again $\cL$ be given by \begin{equation}\label{cl2} \cL h=-\sum_{j,k=1}^n \frac{\partial}{\partial x_j}\, a_{jk} \frac{\partial h}{\partial x_k} + ah \end{equation} with bounded coefficients $a_{jk}\in C^\infty(\dR^n)$ satisfying $a_{jk}(x)=\overline{a_{kj}(x)}$ for all $x\in\dR^n$ and $j,k=1,\dots,n$, the function $a\in L^\infty(\dR^n)$ is real valued and suppose that $\cL$ is uniformly elliptic, cf. \eqref{elliptic}. The restriction of $\cL$ to functions $f$ defined on $\Omega$ or functions $f^\prime$ defined on $\Omega^\prime$ will be denoted by $\cL_\Omega$ and $\cL_{\Omega^\prime}$, respectively. Then it is clear that the differential expressions $\cL_\Omega$ and $\cL_{\Omega^\prime}$ are of the type as in Section~\ref{ellops}. In the following we will usually denote functions defined on $\dR^n$ by $h$ or $k$, and we denote functions defined on $\Omega$ or $\Omega^\prime$ by $f,g$ or $f^\prime,g^\prime$, respectively. The scalar products of $L^2(\Omega)$ and $L^2(\Omega^\prime)$ are indexed with $\Omega$ and $\Omega^\prime$, respectively, whereas the scalar product of $L^2(\dR^n)$ is just denoted by $(\cdot,\cdot)$. For the trace of a function $f\in H^2(\Omega)$ and $f^\prime\in H^2(\Omega^\prime)$ we write $f|_\cC$ and $f^\prime|_\cC$, and the traces of the conormal derivatives are \begin{equation}\label{cono} \frac{\partial f}{\partial\nu}\Bigl|_\cC=\sum_{j,k=1}^n a_{jk}n_j\,\frac{\partial f}{\partial x_k}\Bigl|_\cC\quad\text{and}\quad \frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC=\sum_{j,k=1}^n a_{jk}n^\prime_j\, \frac{\partial f^\prime}{\partial x_k}\Bigl|_\cC; \end{equation} here $n(x)=(n_1(x),\dots,n_n(x))^\top$ and $n^\prime(x)=-n(x)$ are the unit normal vectors at the point $x\in\cC=\partial\Omega=\partial\Omega^\prime$ pointing out of $\Omega$ and $\Omega^\prime$, respectively.
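For later use we note a simple consequence of these conventions; it follows directly from the trace theorem and is used repeatedly below. If $h\in H^2(\dR^n)$ is decomposed as $h=f\oplus f^\prime$ with $f\in H^2(\Omega)$ and $f^\prime\in H^2(\Omega^\prime)$, then the boundary values of $f$ and $f^\prime$ and of their first order derivatives on $\cC$ coincide, and since $n^\prime=-n$ the conormal derivatives in \eqref{cono} satisfy
\begin{equation*}
f|_\cC=f^\prime|_\cC\qquad\text{and}\qquad \frac{\partial f}{\partial\nu}\Bigl|_\cC=-\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC .
\end{equation*}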
Note also that the coefficients $a_{jk}$ in \eqref{cono} are the restrictions of the coefficients in \eqref{cl2} onto $\Omega$ and $\Omega^\prime$, respectively. The Dirichlet operators \begin{equation*} \begin{split} A_\Omega f&=\cL_\Omega f,\qquad\,\,\,\, \,\,{\mathrm{dom\,}} A_\Omega=\bigl\{f\in H^2(\Omega):f|_\cC=0\bigr\},\\ A_{\Omega^\prime}f^\prime&=\cL_{\Omega^\prime}f^\prime,\qquad {\mathrm{dom\,}} A_{\Omega^\prime}=\bigl\{f^\prime\in H^2(\Omega^\prime):f^\prime|_\cC=0\bigr\}, \end{split} \end{equation*} are selfadjoint operators in $L^2(\Omega)$ and $L^2(\Omega^\prime)$, respectively. Hence the orthogonal sum \begin{equation}\label{aschro} A=\begin{pmatrix} A_\Omega & 0 \\ 0 & A_{\Omega^\prime}\end{pmatrix},\qquad {\mathrm{dom\,}} A={\mathrm{dom\,}} A_\Omega\oplus {\mathrm{dom\,}} A_{\Omega^\prime}, \end{equation} is a selfadjoint operator in $L^2(\dR^n)=L^2(\Omega)\oplus L^2(\Omega^\prime)$. Observe that \begin{equation}\label{aschroe} \begin{split} A(f\oplus f^\prime)&=\cL (f\oplus f^\prime)=\cL_\Omega f\oplus \cL_{\Omega^\prime}f^\prime,\\ {\mathrm{dom\,}} A&=\bigl\{f\oplus f^\prime \in H^2(\Omega)\oplus H^2(\Omega^\prime):f|_\cC=0=f^\prime|_\cC\bigr\}, \end{split} \end{equation} and that $A$ is not a usual second order elliptic differential operator on $\dR^n$ since for a function $f\oplus f^\prime\in {\mathrm{dom\,}} A$ the traces of the conormal derivatives $\tfrac{\partial f}{\partial\nu}|_\cC$ and $-\tfrac{\partial f^\prime}{\partial\nu^\prime}|_\cC$ at the boundary $\cC$ of the domains $\Omega$ and $\Omega^\prime$ in general do not coincide. Besides the operator $A$ we consider the usual selfadjoint operator associated to $\cL$ in $L^2(\dR^n)$ defined by \begin{equation}\label{atildeschroe} \widetilde A h=\cL h,\qquad h\in{\mathrm{dom\,}}\widetilde A=H^2(\dR^n), \end{equation} and our aim is to prove a formula for the difference of the resolvents of $\widetilde A$ and $A$ with the help of a generalized $Q$-function in a similar form as in the previous section. The following theorem indicates how $S$ and $T$ in the triple $\{S,A,T\}$ for the definition of a generalized $Q$-function can be chosen. \begin{theorem}\label{opscoup} The operator \begin{equation} S h=\cL h,\quad {\mathrm{dom\,}} S=\bigl\{h=f\oplus f^\prime \in H^2(\dR^n):f|_\cC=0=f^\prime|_\cC\bigr\}, \end{equation} is a densely defined closed symmetric operator in $L^2(\dR^n)$ with infinite deficiency indices $n_\pm(S)$. The operator \begin{equation} \begin{split} T(f\oplus f^\prime)&=\cL(f\oplus f^\prime),\\ {\mathrm{dom\,}} T&=\bigl\{f\oplus f^\prime \in H^2(\Omega)\oplus H^2(\Omega^\prime):f|_\cC=f^\prime|_\cC\bigr\}, \end{split} \end{equation} is not closed as an operator in $L^2(\dR^n)$ and $T$ satisfies $\overline T=S^*$ and $T^*=S$. Furthermore, the selfadjoint operators $A$ and $\widetilde A$ in \eqref{aschro}, \eqref{aschroe} and \eqref{atildeschroe} are extensions of $S$ and restrictions of $T$. \end{theorem} \begin{proof} The operator $S$ is a restriction of the selfadjoint operator $A$ and hence $S$ is symmetric. The fact that ${\mathrm{dom\,}} S$ is dense follows, e.g., from the fact that $H_0^2(\Omega)$ and $H^2_0(\Omega^\prime)$ are dense subspaces of $L^2(\Omega)$ and $L^2(\Omega^\prime)$, respectively, cf. 
Proposition~\ref{opprop}, and $$H_0^2(\Omega)\oplus H^2_0(\Omega^\prime)\subset{\mathrm{dom\,}} S.$$ Since for any function $h\in H^2(\dR^n)$ decomposed as $h=f\oplus f^\prime$, where $f\in H^2(\Omega)$, $f^\prime\in H^2(\Omega^\prime)$, we have $f\vert_\cC=f^\prime\vert_\cC\in H^{3/2}(\cC)$, it follows that $\widetilde A$ is an extension of $S$ and a restriction of the operator $T$. Moreover, $S\subset A\subset T$ is obvious. Let us verify that $S=T^*$ holds. In particular this implies that $S$ is closed and that $\overline T=S^*$ is true. We start with the inclusion $S\subset T^*$. Let $h=f\oplus f^\prime\in{\mathrm{dom\,}} S$ and $k= g \oplus g^\prime\in{\mathrm{dom\,}} T$, where $f,g\in H^2(\Omega)$ and $f^\prime , g^\prime \in H^2(\Omega^\prime)$. First of all we have \begin{equation*} (T k,h)-(k,Sh)=(\cL_\Omega g,f)_\Omega-(g,\cL_\Omega f)_\Omega+ (\cL_{\Omega^\prime} g^\prime ,f^\prime)_{\Omega^\prime}-(g^\prime,\cL_{\Omega^\prime} f^\prime)_{\Omega^\prime} \end{equation*} and Green's identity \eqref{greenid} shows that this is equal to \begin{equation*} \left(g\vert_\cC,\frac{\partial f}{\partial\nu}\Bigl|_\cC\right)_\cC- \biggl(\frac{\partial g}{\partial\nu}\Bigl|_\cC,f\vert_\cC\biggr)_\cC+ \left(g^\prime \vert_\cC,\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC- \left(\frac{\partial g^\prime}{\partial\nu^\prime}\Bigl|_\cC,f^\prime\vert_\cC\right)_\cC. \end{equation*} Since $h=f\oplus f^\prime\in{\mathrm{dom\,}} S$ we have \begin{equation*} f\vert_\cC=f^\prime\vert_\cC=0\qquad\text{and}\qquad\frac{\partial f}{\partial\nu}\Bigl|_\cC= -\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC, \end{equation*} and for $k=g\oplus g^\prime\in{\mathrm{dom\,}} T$ by definition $g\vert_\cC=g^\prime\vert_\cC$ holds. Hence we conclude \begin{equation*} (T k,h)-(k,Sh)=0 \end{equation*} and therefore every $h\in{\mathrm{dom\,}} S$ belongs to ${\mathrm{dom\,}} T^*$ and $T^*h=Sh$, i.e., $S\subset T^*$. Let us now prove the converse inclusion $T^*\subset S$. For this it is sufficient to check that every function $h\in{\mathrm{dom\,}} T^*$ belongs to ${\mathrm{dom\,}} S$. From the fact that $T$ is an extension of the selfadjoint operators $A$ and $\widetilde A$ we conclude \begin{equation*} T^*\subset A^*=A\subset T\qquad\text{and}\qquad T^*\subset\widetilde A^*=\widetilde A\subset T, \end{equation*} so that $T^*$ is a restriction of $A$ and $\widetilde A$. Hence every function $h$ in ${\mathrm{dom\,}} T^*$ belongs also to ${\mathrm{dom\,}} A$ and ${\mathrm{dom\,}} \widetilde A$. Thus $h=f\oplus f^\prime\in H^2(\dR^n)$ and $f\in H^2(\Omega)$ and $f^\prime\in H^2(\Omega^\prime)$ satisfy $f\vert_\cC=f^\prime\vert_\cC=0$. Therefore ${\mathrm{dom\,}} T^*\subset {\mathrm{dom\,}} S$ and we have shown $T^*=S$. Next it will be verified that $T$ is not closed. The arguments are similar to those in \cite[Proof of Proposition 4.5]{BKSZ08} and could also be formulated in terms of unitary relations between Krein spaces, cf. \cite{DHMS06}. Assume that $T$ is closed, i.e., $T=\overline T$, and consider the subspace \begin{equation*} {\mathcal M}=\left\{\left[\begin{matrix}f\oplus f^\prime \\ T(f\oplus f^\prime) \\ f\vert_\cC\\ \tfrac{\partial f}{\partial \nu}|_\cC + \tfrac{\partial f^\prime}{\partial \nu^\prime}|_\cC\end{matrix}\right]: f\oplus f^\prime \in {\mathrm{dom\,}} T \right\}\subset L^2(\dR^n)\oplus L^2(\dR^n)\oplus L^2(\cC) \oplus L^2(\cC).
\end{equation*} Observe that by \eqref{tracemap} and the definition of $T$ the mapping \begin{equation}\label{tracemapt} {\mathrm{dom\,}} T\ni f \oplus f^\prime\,\,\mapsto\,\,\left\{f|_\cC,\frac{\partial f}{\partial\nu}\Bigl|_\cC+\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC\right\} \in H^{3/2}(\cC)\,\times\, H^{1/2}(\cC) \end{equation} is onto. Setting $\cN=L^2(\dR^n)\oplus L^2(\dR^n)\oplus \{0\}\oplus \{0\}$ it is clear that the sum of the subspaces ${\mathcal M}$ and $\cN$ is \begin{equation}\label{cmcn} {\mathcal M}+\cN=L^2(\dR^n)\oplus L^2(\dR^n)\oplus \bigl( H^{3/2}(\cC)\,\times\, H^{1/2}(\cC)\bigr). \end{equation} We will calculate the orthogonal complements of ${\mathcal M}$ and $\cN$ in $L^2(\dR^n)\oplus L^2(\dR^n)\oplus L^2(\cC) \oplus L^2(\cC)$ and show that ${\mathcal M}^\bot +\cN^\bot$ is closed. First of all we have \begin{equation}\label{nbot} \cN^\bot=\{0\}\oplus \{0\}\oplus L^2(\cC)\oplus L^2(\cC) \end{equation} and in order to determine ${\mathcal M}^\bot$ suppose that \begin{equation}\label{mort} \left[\begin{matrix} l \oplus l^\prime \\ g\oplus g^\prime \\ \varphi \\ \psi\end{matrix}\right]\in{\mathcal M}^\bot,\qquad g,l \in L^2(\Omega), \,\, g^\prime,l^\prime\in L^2(\Omega^\prime),\,\,\varphi,\psi\in L^2(\cC), \end{equation} is an element in $L^2(\dR^n)\oplus L^2(\dR^n)\oplus L^2(\cC) \oplus L^2(\cC)$ which is orthogonal to ${\mathcal M}$. Then we have \begin{equation}\label{polk} \bigl(T(f\oplus f^\prime),g\oplus g^\prime\bigr)+\bigl(f \oplus f^\prime, l \oplus l^\prime \bigr)=-\bigl(f\vert_\cC,\varphi\bigr)_\cC- \left(\frac{\partial f}{\partial \nu}\Bigl|_\cC + \frac{\partial f^\prime}{\partial \nu^\prime}\Bigl|_\cC,\psi\right)_\cC \end{equation} for all $f\oplus f^\prime\in {\mathrm{dom\,}} T$. In particular, for $f\oplus f^\prime\in {\mathrm{dom\,}} S$ we have \begin{equation*} \frac{\partial f}{\partial \nu}\Bigl|_\cC = - \frac{\partial f^\prime}{\partial \nu^\prime}\Bigl|_\cC\quad\text{and}\quad f\vert_\cC=f^\prime\vert_\cC=0, \end{equation*} so that \eqref{polk} becomes \begin{equation*} \bigl(T(f\oplus f^\prime),g\oplus g^\prime\bigr)=\bigl(S(f\oplus f^\prime),g\oplus g^\prime\bigr)=-\bigl(f \oplus f^\prime, l \oplus l^\prime\bigr) \end{equation*} and hence $g\oplus g^\prime\in {\mathrm{dom\,}} S^*$ and $S^*(g\oplus g^\prime)=- l \oplus l^\prime$. But we have assumed that $T$ is closed and hence from $S=T^*$ we conclude $S^*=T^{**}=\overline T=T$, so that \begin{equation}\label{hkdomt} g\oplus g^\prime \in {\mathrm{dom\,}} T\qquad \text{and}\quad T(g\oplus g^\prime)=- l \oplus l^\prime.
\end{equation} From Green's identity we then obtain \begin{equation*} \begin{split} &\bigl(T(f\oplus f^\prime),g\oplus g^\prime\bigr)-\bigl(f \oplus f^\prime, T( g \oplus g^\prime)\bigr)\\ &\qquad=(\cL_\Omega f,g)_\Omega-(f,\cL_\Omega g)_\Omega+(\cL_{\Omega^\prime} f^\prime,g^\prime)_{\Omega^\prime} -(f^\prime,\cL_{\Omega^\prime}g^\prime)_{\Omega^\prime}\\ &\qquad=\left(f|_\cC,\frac{\partial g}{\partial\nu}\Bigl|_\cC\right)_\cC- \left(\frac{\partial f}{\partial\nu}\Bigl|_\cC,g|_\cC\right)_\cC +\left(f^\prime|_\cC,\frac{\partial g^\prime}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC -\left(\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC,g^\prime|_\cC\right)_\cC\\ &\qquad=\left(f|_\cC,\frac{\partial g}{\partial\nu}\Bigl|_\cC+\frac{\partial g^\prime}{\partial\nu^\prime} \Bigl|_\cC\right)_\cC- \left(\frac{\partial f}{\partial\nu}\Bigl|_\cC+\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC,g|_\cC\right)_\cC, \end{split} \end{equation*} where we have used that $f\oplus f^\prime,\,g\oplus g^\prime\in{\mathrm{dom\,}} T$ satisfy $f|_\cC=f^\prime|_\cC$ and $g|_\cC=g^\prime|_\cC$. Inserting \eqref{hkdomt} in \eqref{polk} and comparing this with the above relation shows that the identity \begin{equation}\label{compare} \left(f|_\cC,\frac{\partial g}{\partial\nu}\Bigl|_\cC+\frac{\partial g^\prime}{\partial\nu^\prime}\Bigl|_\cC+\,\varphi\right)_\cC =\left(\frac{\partial f}{\partial\nu}\Bigl|_\cC+\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC,g|_\cC-\psi\right)_\cC \end{equation} holds for all $f\oplus f^\prime\in {\mathrm{dom\,}} T$. As the mapping \eqref{tracemapt} is surjective and $H^{3/2}(\cC) \times H^{1/2}(\cC)$ is dense in $L^2(\cC)\oplus L^2(\cC)$ we conclude from \eqref{compare} that \begin{equation*} \varphi=-\left(\frac{\partial g}{\partial\nu}\Bigl|_\cC+\frac{\partial g^\prime}{\partial\nu^\prime}\Bigl|_\cC\right) \qquad\text{and}\qquad \psi=g|_\cC \end{equation*} hold. Hence we have seen that the element \eqref{mort} in ${\mathcal M}^\bot$ is of the form \begin{equation}\label{mort2} \left[\begin{matrix} - T(g\oplus g^\prime) \\ g\oplus g^\prime \\ -\tfrac{\partial g}{\partial \nu}|_\cC - \tfrac{\partial g^\prime}{\partial \nu^\prime}|_\cC \\ g\vert_\cC \end{matrix}\right] \end{equation} for some $g\oplus g^\prime\in {\mathrm{dom\,}} T$. It is not difficult to check that conversely an element as in \eqref{mort2} belongs to ${\mathcal M}^\bot$. Therefore the orthogonal complement of ${\mathcal M}$ is given by \begin{equation*} {\mathcal M}^\bot=\left\{\left[\begin{matrix} - T(g\oplus g^\prime) \\ g\oplus g^\prime \\ -\tfrac{\partial g}{\partial \nu}\bigl|_\cC - \tfrac{\partial g^\prime}{\partial \nu^\prime}\bigl|_\cC \\ g\vert_\cC \end{matrix}\right]: g\oplus g^\prime \in {\mathrm{dom\,}} T \right\} \subset L^2(\dR^n)\oplus L^2(\dR^n)\oplus L^2(\cC)\oplus L^2(\cC) \end{equation*} and together with \eqref{nbot} we find that the sum of ${\mathcal M}^\bot$ and $\cN^\bot$ is \begin{equation*} {\mathcal M}^\bot +\cN^\bot =\left\{\left[\begin{matrix} - T(g\oplus g^\prime)\\ g\oplus g^\prime \end{matrix}\right]:g\oplus g^\prime\in{\mathrm{dom\,}} T \right\}\oplus L^2(\cC)\oplus L^2(\cC).
\end{equation*} The assumption that $T$ is closed implies that ${\mathcal M}^\bot +\cN^\bot$ is a closed subspace of $L^2(\dR^n)\oplus L^2(\dR^n)\oplus L^2(\cC)\oplus L^2(\cC)$. But then according to \cite[IV Theorem 4.8]{K76} also ${\mathcal M}+\cN$ is a closed subspace of $L^2(\dR^n)\oplus L^2(\dR^n)\oplus L^2(\cC)\oplus L^2(\cC)$, which is a contradiction to \eqref{cmcn}. Thus $T$ cannot be closed. \end{proof} The following lemma will be useful later in this section. \begin{lemma}\label{usefullemma} Let $S$ and $T$ be as in Theorem~\ref{opscoup} and let $\widetilde A$ be the selfadjoint realization of $\cL$ in $L^2(\dR^n)$ defined on $H^2(\dR^n)$. For a function $f\oplus f^\prime\in {\mathrm{dom\,}} T$, where $f\in H^2(\Omega)$ and $f^\prime\in H^2(\Omega^\prime)$, we have \begin{equation*} f\oplus f^\prime\in {\mathrm{dom\,}} \widetilde A\qquad\text{if and only if}\qquad \frac{\partial f}{\partial\nu}\Bigl|_\cC= -\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC. \end{equation*} \end{lemma} \begin{proof} For a function $f\oplus f^\prime\in {\mathrm{dom\,}}\widetilde A=H^2(\dR^n)$ it is clear that $\tfrac{\partial f}{\partial\nu}|_\cC= -\tfrac{\partial f^\prime}{\partial\nu^\prime}|_\cC$ holds. Conversely, let $f\oplus f^\prime\in{\mathrm{dom\,}} T$ and assume \begin{equation} \frac{\partial f}{\partial\nu}\Bigl|_\cC= -\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC. \end{equation} Then also $f|_\cC=f^\prime|_\cC$ and since every $g\oplus g^\prime\in{\mathrm{dom\,}} \widetilde A$ satisfies \begin{equation*} g|_\cC=g^\prime|_\cC\qquad\text{and}\qquad \frac{\partial g}{\partial\nu}\Bigl|_\cC= -\frac{\partial g^\prime}{\partial\nu^\prime}\Bigl|_\cC \end{equation*} Green's identity implies \begin{equation*} \begin{split} &\qquad\qquad\bigl(\widetilde A (g\oplus g^\prime),f\oplus f^\prime\bigr)-\bigl(g\oplus g^\prime,T(f\oplus f^\prime)\bigr)\\ &=\left(g|_\cC,\frac{\partial f}{\partial\nu}\Bigl|_\cC\right)_\cC- \left(\frac{\partial g}{\partial\nu}\Bigl|_\cC,f|_\cC\right)_\cC+ \left(g^\prime|_\cC,\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC- \left(\frac{\partial g^\prime}{\partial\nu^\prime}\Bigl|_\cC,f^\prime|_\cC\right)_\cC=0. \end{split} \end{equation*} Therefore $f\oplus f^\prime\in{\mathrm{dom\,}} \widetilde A^*={\mathrm{dom\,}}\widetilde A$. \end{proof} Next we define a mapping $\Gamma_{\lambda_0}$ which satisfies the assumptions in the definition of a generalized $Q$-function. For this let $A$ be the selfadjoint operator in $L^2(\dR^n)$ in \eqref{aschro} and \eqref{aschroe} which is the orthogonal sum of the Dirichlet operators $A_\Omega$ and $A_{\Omega^\prime}$ in $L^2(\Omega)$ and $L^2(\Omega^\prime)$, respectively. For $\lambda\in\rho(A)$ the domain of the operator $T$ in Theorem~\ref{opscoup} can be decomposed as \begin{equation}\label{deco3} \begin{split} {\mathrm{dom\,}} T&={\mathrm{dom\,}} A\,\dot+\,\cN_\lambda(T)\\ &=\bigl\{f\oplus f^\prime\in H^2(\Omega)\oplus H^2(\Omega^\prime): f|_\cC=f^\prime|_\cC=0\bigr\}\, \dot+\, \cN_\lambda(T), \end{split} \end{equation} cf. \eqref{decoall}. Let us fix some $\lambda_0\in\rho(A)$. The decomposition \eqref{deco3} and the surjectivity of the map \begin{equation}\label{tracemaptt} {\mathrm{dom\,}} T\ni f \oplus f^\prime\,\,\mapsto\,\,\left\{f|_\cC,\frac{\partial f}{\partial\nu}\Bigl|_\cC+\frac{\partial f^\prime} {\partial\nu^\prime}\Bigl|_\cC\right\}\in H^{3/2}(\cC)\,\times\, H^{1/2}(\cC), \end{equation} cf. 
\eqref{tracemap}, \eqref{tracemapt} imply that for a given function $\varphi\in H^{3/2}(\cC)$ there exists a unique function $f_{\lambda_0}\oplus f^\prime_{\lambda_0}\in \cN_{\lambda_0}(T)$ such that $f_{\lambda_0}|_\cC=f^\prime_{\lambda_0}|_\cC=\varphi$. Let $\Gamma_{\lambda_0}$ be the mapping that assigns $f_{\lambda_0}\oplus f^\prime_{\lambda_0}$ to $\varphi$, \begin{equation}\label{gammalambda0} H^{3/2}(\cC)\ni \varphi\mapsto \Gamma_{\lambda_0}\varphi:=f_{\lambda_0}\oplus f^\prime_{\lambda_0}. \end{equation} Similarly as in the previous section $\Gamma_{\lambda_0}$ will be regarded as an operator from $L^2(\cC)$ to $L^2(\dR^n)$ with ${\mathrm{dom\,}}\Gamma_{\lambda_0}=H^{3/2}(\cC)$ and ${\mathrm{ran\,}}\Gamma_{\lambda_0}=\cN_{\lambda_0}(T)$. Observe that the function $\Gamma_{\lambda_0}\varphi=f_{\lambda_0}\oplus f^\prime_{\lambda_0}$ consists of an $H^2(\Omega)$-solution $f_{\lambda_0}$ of $\cL_\Omega u=\lambda_0 u$ and an $H^2(\Omega^\prime)$-solution $f^\prime_{\lambda_0}$ of $\cL_{\Omega^\prime} u^\prime=\lambda_0 u^\prime$ satisfying the boundary conditions $\varphi=f_{\lambda_0}|_\cC=f^\prime_{\lambda_0}|_\cC$. The following proposition parallels Proposition~\ref{Gammalambda0prop}. \begin{proposition}\label{Gammalambda0prop2} Let $\lambda_0\in\rho(A)$, let $\Gamma_{\lambda_0}$ be as in \eqref{gammalambda0} and let $\lambda\in\rho(A)$. Then the following holds: \begin{enumerate} \item [{\rm (i)}] $\Gamma_{\lambda_0}$ is a bounded operator from $L^2(\cC)$ in $L^2(\dR^n)$ with dense domain $H^{3/2}(\cC)$; \item [{\rm (ii)}] The operator $\Gamma(\lambda)=(I+(\lambda-\lambda_0)(A-\lambda)^{-1})\Gamma_{\lambda_0}$ is given by \begin{equation*} \Gamma(\lambda) \varphi =f_\lambda\oplus f^\prime_\lambda ,\quad\text{where}\quad f_\lambda\oplus f^\prime_\lambda\in\cN_\lambda(T)\,\,\,\,\text{and}\,\,\,\, f_\lambda\vert_\cC=\varphi=f^\prime_\lambda\vert_\cC; \end{equation*} \item [{\rm (iii)}] The mapping $\Gamma(\bar\lambda)^*:L^2(\dR^n)\rightarrow L^2(\cC)$ satisfies \begin{equation*} \Gamma(\bar\lambda)^*(A-\lambda)h=-\frac{\partial f}{\partial\nu}\Bigl|_\cC - \frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC,\qquad h=f\oplus f^\prime\in{\mathrm{dom\,}} A. \end{equation*} \end{enumerate} \end{proposition} \begin{proof} We start with the proof (ii). Let $\varphi\in H^{3/2}(\cC)$ and choose the unique elements $f_\lambda\oplus f^\prime_\lambda\in\cN_\lambda(T)$ and $f_{\lambda_0}\oplus f^\prime_{\lambda_0}\in\cN_{\lambda_0}(T)$ such that \begin{equation*} f_\lambda|_\cC=f^\prime_\lambda|_\cC=\varphi=f_{\lambda_0}|_\cC=f^\prime_{\lambda_0}|_\cC \end{equation*} holds. By definition $\Gamma_{\lambda_0}\varphi=f_{\lambda_0}\oplus f^\prime_{\lambda_0}$ and therefore \begin{equation*} \begin{split} \Gamma(\lambda)\varphi&=\Gamma_{\lambda_0}\varphi+(\lambda-\lambda_0)(A-\lambda)^{-1}\Gamma_{\lambda_0}\varphi\\ &=f_{\lambda_0}\oplus f^\prime_{\lambda_0}+(\lambda-\lambda_0)(A-\lambda)^{-1}\Gamma_{\lambda_0}\varphi. \end{split} \end{equation*} Since $(\lambda-\lambda_0)(A-\lambda)^{-1}\Gamma_{\lambda_0}\varphi$ is a function belonging to ${\mathrm{dom\,}} A$ we have \begin{equation*} \bigl((\lambda-\lambda_0)(A-\lambda)^{-1}\Gamma_{\lambda_0}\varphi\bigr)\bigl|_\cC=0, \end{equation*} cf. \eqref{aschroe}. 
This implies \begin{equation*} (\Gamma(\lambda)\varphi)|_\cC= (\Gamma_{\lambda_0}\varphi)|_\cC=\bigl(f_{\lambda_0}\oplus f^\prime_{\lambda_0}\bigr)|_\cC=f_{\lambda_0}|_\cC=f^\prime_{\lambda_0}|_\cC=\varphi \end{equation*} and since ${\mathrm{ran\,}}\Gamma(\lambda)=\cN_\lambda(T)$, see Lemma~\ref{gamlem}, and $f_\lambda\oplus f_\lambda^\prime$ is the unique function in $\cN_\lambda(T)$ with $f_\lambda|_\cC=f^\prime_\lambda|_\cC=\varphi$ we conclude $\Gamma(\lambda)\varphi=f_\lambda\oplus f_\lambda^\prime$. \vskip 0.3cm\noindent Next we verify (iii). Observe that then $\Gamma(\bar\lambda)^*$, $\lambda\in\rho(A)$, is a closed operator which is defined on the whole space, i.e., $\Gamma(\bar\lambda)^*$ is bounded and hence assertion (i) follows by setting $\lambda_0=\bar\lambda$. Let $\varphi\in H^{3/2}(\cC)$ and choose the unique function $f_{\bar\lambda}\oplus f^\prime_{\bar\lambda}\in\cN_{\bar\lambda}(T)$ such that \begin{equation}\label{asdf} f_{\bar\lambda}\vert_\cC=f^\prime_{\bar\lambda}\vert_\cC=\varphi \end{equation} holds. Then $\Gamma(\bar\lambda)\varphi=f_{\bar\lambda}\oplus f^\prime_{\bar\lambda}$ and for each $h=f\oplus f^\prime\in{\mathrm{dom\,}} A$, where $f\in H^2(\Omega)$, $f^\prime\in H^2(\Omega^\prime)$, we have \begin{equation*} \begin{split} \bigl(\Gamma(\bar\lambda)\varphi,(A-\lambda)h\bigr)&=\bigl(f_{\bar\lambda}\oplus f^\prime_{\bar\lambda}, A(f\oplus f^\prime)\bigr) -\bigl(T(f_{\bar\lambda}\oplus f^\prime_{\bar\lambda}),f\oplus f^\prime\bigr)\\ &=(f_{\bar\lambda},\cL_\Omega f)_\Omega-(\cL_\Omega f_{\bar\lambda},f)_\Omega+ (f^\prime_{\bar\lambda},\cL_{\Omega^\prime} f^\prime)_{\Omega^\prime}-(\cL_{\Omega^\prime} f^\prime_{\bar\lambda},f^\prime)_{\Omega^\prime}. \end{split} \end{equation*} With the help of Green's identity this can be rewritten as \begin{equation*} \left(\frac{\partial f_{\bar\lambda}}{\partial\nu}\Bigl|_\cC,f|_\cC\right)_\cC- \left(f_{\bar\lambda}|_\cC,\frac{\partial f}{\partial\nu}\Bigl|_\cC\right)_\cC +\left(\frac{\partial f^\prime_{\bar\lambda}}{\partial\nu^\prime}\Bigl|_\cC,f^\prime|_\cC\right)_\cC- \left(f^\prime_{\bar\lambda}|_\cC,\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC. \end{equation*} Since for $h=f\oplus f^\prime\in{\mathrm{dom\,}} A$ we have $f|_\cC=f^\prime|_\cC=0$ we conclude from the above calculation and \eqref{asdf} that \begin{equation*} \bigl(\Gamma(\bar\lambda)\varphi,(A-\lambda)h\bigr)=- \left(\varphi,\frac{\partial f}{\partial\nu}\Bigl|_\cC+\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC \end{equation*} holds for every $\varphi\in H^{3/2}(\cC)={\mathrm{dom\,}}\Gamma(\bar\lambda)$. Hence $(A-\lambda)h\in{\mathrm{dom\,}}\Gamma(\bar\lambda)^*$ and \begin{equation*} \Gamma(\bar\lambda)^*(A-\lambda)h=-\frac{\partial f}{\partial\nu}\Bigl|_\cC-\frac{\partial f^\prime}{\partial\nu^\prime}\Bigl|_\cC, \qquad h=f\oplus f^\prime\in{\mathrm{dom\,}} A. \end{equation*} Furthermore, for $\lambda\in\rho(A)$ we have ${\mathrm{ran\,}}(A-\lambda)=L^2(\dR^n)$, so that $\Gamma(\bar\lambda)^*$ is a bounded operator defined on $L^2(\dR^n)$. \end{proof} Next we define a function $Q$ in a similar way as the Dirichlet-to-Neumann map in Definition~\ref{dirneu}. For this we make use of the decomposition \eqref{deco3}. Namely, for $\lambda\in\rho(A)$ and $\varphi\in H^{3/2}(\cC)$ there exists a unique function $f_\lambda\oplus f^\prime_\lambda\in\cN_\lambda(T)$ such that $f_\lambda\vert_\cC=f^\prime_\lambda\vert_\cC=\varphi$. 
The operator $Q(\lambda)$ in $L^2(\cC)$ is now defined by \begin{equation}\label{qcoup} Q(\lambda)\varphi:=-\frac{\partial f_\lambda}{\partial\nu}\Bigl|_\cC- \frac{\partial f_\lambda^\prime}{\partial\nu^\prime}\Bigl|_\cC, \qquad\varphi\in {\mathrm{dom\,}} Q(\lambda)=H^{3/2}(\cC). \end{equation} Observe that ${\mathrm{ran\,}} Q(\lambda)\subset H^{1/2}(\cC)$ holds. Roughly speaking, up to a minus sign $Q(\lambda)$ maps the Dirichlet boundary value of the $H^2$-solutions of $\cL_\Omega u=\lambda u$ and $\cL_{\Omega^\prime}u^\prime=\lambda u^\prime$, $u|_\cC=u^\prime|_\cC$, onto the sum of the Neumann boundary values of these solutions. We mention that in the analysis of so-called intermediate Hamiltonians a modified form of such a Dirichlet-to-Neumann map has been used in \cite{MPP07}. In the following theorem it turns out that $Q$ can be interpreted as a generalized $Q$-function and the difference of the resolvents of $A$ and $\widetilde A$ is expressed with the help of $Q$. \begin{theorem}\label{bigthm2} Let $\cL$ be the elliptic differential expression in \eqref{cl2} and let $A$ and $\widetilde A$ be the selfadjoint realizations of $\cL$ in \eqref{aschro}-\eqref{aschroe} and \eqref{atildeschroe}, respectively. Let $S$ and $T$ be the operators in Theorem~\ref{opscoup}, define $\Gamma(\lambda)$ as in Proposition~\ref{Gammalambda0prop2} and let $Q(\lambda)$, $\lambda\in\rho(A)$, be as in \eqref{qcoup}. Then the following holds: \begin{enumerate} \item [{\rm (i)}] $Q$ is a generalized $Q$-function of the triple $\{S,A,T\}$; \item [{\rm (ii)}] The operator $Q(\lambda)$ is injective for all $\lambda\in\rho(A)\cap\rho(\widetilde A)$ and the resolvent formula \begin{equation}\label{resform2} (A-\lambda)^{-1}-(\widetilde A-\lambda)^{-1}=\Gamma(\lambda) Q(\lambda)^{-1} \Gamma(\bar\lambda)^* \end{equation} holds; \item [{\rm (iii)}] For $p\in\dN$ and $2p+1>n$ the difference of the resolvents in \eqref{resform2} belongs to the von Neumann-Schatten class ${\mathfrak S}_p(L^2(\dR^n))$. \end{enumerate} \end{theorem} \begin{proof} Let us prove assertion (i). Before the defining relation \eqref{q} for a generalized $Q$-function is verified we show that the operator $Q(\mu)^*$ is an extension of $Q(\bar\mu)$, $\mu\in\rho(A)$. For this let $\psi\in H^{3/2}(\cC)$ and choose the unique element $f_{\bar\mu}\oplus f^\prime_{\bar\mu}\in\cN_{\bar\mu}(T)$ with the property $f_{\bar\mu}|_\cC=f^\prime_{\bar\mu}|_\cC=\psi$. For $\varphi\in H^{3/2}(\cC)$ let $f_{\mu}\oplus f^\prime_{\mu}\in\cN_{\mu}(T)$ be such that $f_{\mu}|_\cC=f^\prime_{\mu}|_\cC=\varphi$ holds. By the definition of $Q$ in \eqref{qcoup} we have \begin{equation*} Q(\mu)\varphi=-\frac{\partial f_\mu}{\partial\nu}\Bigl|_\cC-\frac{\partial f^\prime_\mu}{\partial\nu^\prime}\Bigl|_\cC \quad\text{and}\quad Q(\bar\mu)\psi=-\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_\cC-\frac{\partial f^\prime_{\bar\mu}}{\partial\nu^\prime}\Bigl|_\cC. 
\end{equation*} This gives \begin{equation}\label{qadj} (Q(\mu)\varphi,\psi)=-\left(\frac{\partial f_\mu}{\partial\nu}\Bigl|_\cC,f_{\bar\mu}|_\cC\right)_\cC -\left(\frac{\partial f^\prime_\mu}{\partial\nu^\prime}\Bigl|_\cC,f^\prime_{\bar\mu}|_\cC\right)_\cC \end{equation} and since \begin{equation*} \begin{split} \left(f_\mu|_\cC,\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_\cC\right)_\cC- \left(\frac{\partial f_\mu}{\partial\nu}\Bigl|_\cC,f_{\bar\mu}|_\cC\right)_\cC&= (\cL_\Omega f_\mu,f_{\bar\mu})_\Omega-(f_\mu,\cL_\Omega f_{\bar\mu})_\Omega=0,\\ \left(f^\prime_\mu|_\cC,\frac{\partial f^\prime_{\bar\mu}}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC- \left(\frac{\partial f^\prime_\mu}{\partial\nu^\prime}\Bigl|_\cC,f^\prime_{\bar\mu}|_\cC\right)_\cC &=(\cL_{\Omega^\prime} f^\prime_\mu,f^\prime_{\bar\mu})_{\Omega^\prime}-(f^\prime_\mu,\cL_{\Omega^\prime} f^\prime_{\bar\mu})_{\Omega^\prime}=0 \end{split} \end{equation*} we can rewrite \eqref{qadj} in the form \begin{equation*} (Q(\mu)\varphi,\psi)=-\left(f_\mu|_\cC,\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_\cC\right)_\cC- \left(f^\prime_\mu|_\cC,\frac{\partial f^\prime_{\bar\mu}}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC =-\left(\varphi,\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_\cC+\frac{\partial f^\prime_{\bar\mu}}{\partial\nu^\prime}\Bigl|_\cC\right)_\cC. \end{equation*} This is true for every $\varphi\in{\mathrm{dom\,}} Q(\mu)$ and hence we conclude $\psi\in{\mathrm{dom\,}} Q(\mu)^*$ and \begin{equation*} Q(\mu)^*\psi=-\frac{\partial f_{\bar\mu}}{\partial\nu}\Bigl|_\cC-\frac{\partial f^\prime_{\bar\mu}}{\partial\nu^\prime}\Bigl|_\cC =Q(\bar\mu)\psi. \end{equation*} Let $\Gamma(\cdot)$ be as in Proposition~\ref{Gammalambda0prop2}. We prove now that \begin{equation}\label{qrel2} Q(\lambda)-Q(\mu)^*=(\lambda-\bar\mu)\Gamma(\mu)^*\Gamma(\lambda),\qquad \lambda,\mu\in\rho(A) \end{equation} holds on ${\mathrm{dom\,}}\Gamma(\lambda)=H^{3/2}(\cC)$. For this let $\varphi,\psi\in H^{3/2}(\cC)$ and choose the unique elements $f_\lambda\oplus f^\prime_\lambda\in\cN_\lambda(T)$, $f_\mu\oplus f^\prime_\mu\in\cN_\mu(T)$ with the properties \begin{equation}\label{bts} f_\lambda|_\cC=f^\prime_\lambda|_\cC=\varphi\quad\text{and}\quad f_\mu|_\cC=f^\prime_\mu|_\cC=\psi. \end{equation} Then according to Proposition~\ref{Gammalambda0prop2}~(ii) $\Gamma(\lambda)\varphi=f_\lambda\oplus f^\prime_\lambda$ and $\Gamma(\mu)\psi=f_\mu\oplus f^\prime_\mu$ and by the definition of $Q(\cdot)$ in \eqref{qcoup} we have \begin{equation*} Q(\lambda)\varphi=-\frac{\partial f_\lambda}{\partial\nu}\Bigl|_\cC - \frac{\partial f^\prime_\lambda}{\partial\nu^\prime}\Bigl|_\cC \quad\text{and}\quad Q(\mu)\psi=-\frac{\partial f_\mu}{\partial\nu}\Bigl|_\cC - \frac{\partial f^\prime_\mu}{\partial\nu^\prime}\Bigl|_\cC. 
\end{equation*} Therefore \begin{equation*} \bigl((Q(\lambda)-Q(\mu)^*)\varphi,\psi\bigr)_\cC=-\left(\frac{\partial f_\lambda}{\partial\nu}\Bigl|_\cC + \frac{\partial f^\prime_\lambda}{\partial\nu^\prime}\Bigl|_\cC,\psi \right)_\cC+ \left(\varphi,\frac{\partial f_\mu}{\partial\nu}\Bigl|_\cC + \frac{\partial f^\prime_\mu}{\partial\nu^\prime}\Bigl|_\cC \right)_\cC \end{equation*} and inserting \eqref{bts} gives \begin{equation*} -\left(\frac{\partial f_\lambda}{\partial\nu}\Bigl|_\cC,f_\mu|_\cC\right)_\cC - \left(\frac{\partial f^\prime_\lambda}{\partial\nu^\prime}\Bigl|_\cC,f^\prime_\mu|_\cC \right)_\cC+ \left(f_\lambda|_\cC,\frac{\partial f_\mu}{\partial\nu}\Bigl|_\cC\right)_\cC + \left(f_\lambda^\prime|_\cC,\frac{\partial f^\prime_\mu}{\partial\nu^\prime}\Bigl|_\cC \right)_\cC. \end{equation*} Making use of Green's identity the above relations then become \begin{equation*} \begin{split} &\bigl((Q(\lambda)-Q(\mu)^*)\varphi,\psi\bigr)_\cC\\ &\qquad\quad=(\cL_\Omega f_\lambda,f_\mu)_\Omega-(f_\lambda,\cL_\Omega f_\mu)_\Omega +(\cL_{\Omega^\prime}f^\prime_\lambda,f^\prime_\mu)_{\Omega^\prime}-(f^\prime_\lambda,\cL_{\Omega^\prime}f^\prime_\mu)_{\Omega^\prime}\\ &\qquad\quad=(\lambda-\bar\mu)\bigl( (f_\lambda,f_\mu)_\Omega+(f^\prime_\lambda,f^\prime_\mu)_{\Omega^\prime} \bigr) =(\lambda-\bar\mu)\bigl(f_\lambda\oplus f^\prime_\lambda,f_\mu\oplus f^\prime_\mu\bigr)\\ &\qquad\quad=(\lambda-\bar\mu)(\Gamma(\lambda)\varphi,\Gamma(\mu)\psi)=\bigl((\lambda-\bar\mu)\Gamma(\mu)^*\Gamma(\lambda)\varphi,\psi\bigr)_\cC. \end{split} \end{equation*} Since this is true for any $\psi\in H^{3/2}(\cC)$ we conclude that \eqref{qrel2} holds on $H^{3/2}(\cC)$. Thus $Q$ in \eqref{qcoup} is a generalized $Q$-function for the triple $\{S,A,T\}$. \vskip 0.3cm\noindent (ii) We check first that $\ker Q(\lambda)=\{0\}$ holds for $\lambda\in\rho(A)\cap\rho(\widetilde A)$. Assume that $Q(\lambda)\varphi=0$ for some $\varphi\in H^{3/2}(\cC)$ and let $f_\lambda\oplus f_\lambda^\prime\in\cN_\lambda(T)$ be the unique element with the property $f_\lambda|_\cC=f^\prime_\lambda|_\cC=\varphi$. Then the definition of $Q$ and the assumption $Q(\lambda)\varphi=0$ imply \begin{equation*} \frac{\partial f_\lambda}{\partial\nu}\Bigl|_\cC=-\frac{\partial f^\prime_\lambda}{\partial\nu^\prime}\Bigl|_\cC. \end{equation*} According to Lemma~\ref{usefullemma} this yields $f_\lambda\oplus f_\lambda^\prime\in{\mathrm{dom\,}}\widetilde A\cap\cN_\lambda(T)$. But as $\lambda\in\rho(\widetilde A)$ we conclude $f_\lambda=0$ and $f^\prime_\lambda=0$, and hence $\varphi=0$. Now we prove the formula \eqref{resform2} for the difference of the resolvents of $A$ and $\widetilde A$. By the above argument $Q(\lambda)^{-1}$ exists for $\lambda\in\rho(A)\cap\rho(\widetilde A)$. Furthermore, \eqref{tracemaptt} implies ${\mathrm{ran\,}} Q(\lambda)=H^{1/2}(\cC)$ and it follows from Proposition~\ref{Gammalambda0prop2} that the right hand side in \eqref{resform2} is well defined. Let $h\in L^2(\dR^n)$ and define the function $k$ as \begin{equation}\label{k} k=(A-\lambda)^{-1}h-\Gamma(\lambda)Q(\lambda)^{-1}\Gamma(\bar\lambda)^*h. \end{equation} We show $k\in{\mathrm{dom\,}}\widetilde A$. First of all it is clear that $k\in{\mathrm{dom\,}} T$ since $(A-\lambda)^{-1}h\in{\mathrm{dom\,}} A\subset{\mathrm{dom\,}} T$ and $\Gamma(\lambda)$ maps into $\cN_\lambda(T)$. Therefore $k=g\oplus g^\prime$, where $g\in H^2(\Omega)$, $g^\prime\in H^2(\Omega^\prime)$, and $g|_\cC=g^\prime|_\cC$. 
According to Lemma~\ref{usefullemma} for $k\in{\mathrm{dom\,}} \widetilde A$ it is sufficient to check \begin{equation}\label{gbc} \frac{\partial g}{\partial\nu}\Bigl|_\cC+\frac{\partial g^\prime}{\partial\nu^\prime}\Bigl|_\cC=0. \end{equation} We proceed in a similar way as in the proof of Theorem~\ref{bigthm1}. Let $h_A=f_A\oplus f_A^\prime\in{\mathrm{dom\,}} A$ be such that $h=(A-\lambda)h_A$. Making use of Proposition~\ref{Gammalambda0prop2}~(iii) we obtain \begin{equation}\label{k2} k=h_A+\Gamma(\lambda)Q(\lambda)^{-1}\left(\frac{\partial f_A}{\partial\nu}\Bigl|_\cC+\frac{\partial f_A^\prime}{\partial\nu^\prime}\Bigl|_\cC\right) \end{equation} from \eqref{k}. Let \begin{equation*} \cN_\lambda(T)\ni f_\lambda\oplus f_\lambda^\prime:=\Gamma(\lambda)Q(\lambda)^{-1}\left(\frac{\partial f_A}{\partial\nu}\Bigl|_\cC+\frac{\partial f_A^\prime}{\partial\nu^\prime}\Bigl|_\cC\right). \end{equation*} Then by Proposition~\ref{Gammalambda0prop2}~(ii) we have \begin{equation*} f_\lambda|_\cC=f^\prime_\lambda|_\cC=Q(\lambda)^{-1}\left(\frac{\partial f_A}{\partial\nu}\Bigl|_\cC+\frac{\partial f_A^\prime}{\partial\nu^\prime}\Bigl|_\cC\right). \end{equation*} This together with the definition of $Q(\lambda)$ in \eqref{qcoup} implies \begin{equation*} \frac{\partial f_A}{\partial\nu}\Bigl|_\cC+\frac{\partial f_A^\prime}{\partial\nu^\prime}\Bigl|_\cC= Q(\lambda)(f_\lambda|_\cC)=Q(\lambda)(f^\prime_\lambda|_\cC) = -\frac{\partial f_\lambda}{\partial\nu}\Bigl|_\cC-\frac{\partial f_\lambda^\prime}{\partial\nu^\prime}\Bigl|_\cC. \end{equation*} Hence we conclude that the function $k=g\oplus g^\prime$ in \eqref{k2} fulfils \eqref{gbc}, i.e., $k\in{\mathrm{dom\,}}\widetilde A$. From \eqref{k} and $A,\widetilde A\subset T$ we obtain \begin{equation*} (\widetilde A-\lambda)k=(T-\lambda)(A-\lambda)^{-1}h-(T-\lambda)\Gamma(\lambda)Q(\lambda)^{-1}\Gamma(\bar\lambda)^*h=h \end{equation*} and now $k=(\widetilde A-\lambda)^{-1}h$ and \eqref{k} imply \eqref{resform2}. \end{proof} The following corollaries can be proved in the same way as Corollary~\ref{prop1} and Corollary~\ref{prop2}. \begin{corollary}\label{prop12} For $\lambda,\lambda_0\in\rho(A)$ the following holds. \begin{enumerate} \item [{\rm (i)}] $Q(\lambda)$ is a non-closed unbounded operator in $L^2(\cC)$ defined on $H^{3/2}(\cC)$ with ${\mathrm{ran\,}} Q(\lambda)\subset H^{1/2}(\cC)$; \item [{\rm (ii)}] $Q(\lambda)-{\rm Re\,} Q(\lambda_0)$ is a non-closed bounded operator in $L^2(\cC)$ defined on $H^{3/2}(\cC)$; \item [{\rm (iii)}] the closure $\widetilde Q(\lambda)$ of the operator $Q(\lambda)-{\rm Re\,} Q(\lambda_0)$ in $L^2(\cC)$ satisfies $$\frac{d}{d\lambda}\,\widetilde Q(\lambda)=\Gamma(\bar\lambda)^*\overline{\Gamma(\lambda)}$$ and $\widetilde Q$ is a $\cL(L^2(\cC))$-valued Nevanlinna function. \end{enumerate} \end{corollary} \begin{corollary}\label{prop22} For $\lambda\in\rho(A)\cap\rho(\widetilde A)$ the following holds. \begin{enumerate} \item [{\rm (i)}] $Q(\lambda)^{-1}$ is a non-closed bounded operator in $L^2(\cC)$ defined on $H^{1/2}(\cC)$ with ${\mathrm{ran\,}} Q(\lambda)^{-1}=H^{3/2}(\cC)$; \item [{\rm (ii)}] the closure $\overline{Q(\lambda)^{-1}}$ is a compact operator in $L^2(\cC)$; \item [{\rm (iii)}] the function $\lambda\mapsto -\overline{Q(\lambda)^{-1}}$ is a $\cL(L^2(\cC))$-valued Nevanlinna function. \end{enumerate} \end{corollary} As a corollary of Theorem~\ref{bigthm2} we obtain a trace formula for the difference of the resolvents of $A$ and $\widetilde A$. 
\begin{corollary} Let the assumptions be as in Theorem~\ref{bigthm2}, let $\widetilde Q$ be the Nevanlinna function from Corollary~\ref{prop12} and suppose, in addition, $n=2$. Then \begin{equation*} {\mathrm{tr}}\bigl((A-\lambda)^{-1}-(\widetilde A-\lambda)^{-1}\bigr)={\mathrm{tr}}\left(\overline{Q(\lambda)^{-1}} \frac{d}{d\lambda}\,\widetilde Q(\lambda)\right) \end{equation*} holds for all $\lambda\in\rho(A)\cap\rho(\widetilde A)$. \end{corollary}
1,314,259,992,679
arxiv
\section{Acknowledgement} We are grateful to the National Center for High-performance Computing for computer time and facilities, and Google Research, MediaTek, MOST 107-2634-F-007-007 for their support. This research is also supported in part by the Ministry of Science and Technology of Taiwan (MOST 107-2633-E-002-001), National Taiwan University, Intel Corporation, and Delta Electronics. \section{Search Architecture} \begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{figures/big_diagram.eps} \caption{\textbf{Network architecture for CIFAR-10 and ImageNet.} Our final network structure is fully specified by defining the Dense Cell topology. The number of Dense Cell repetitions $C$ and the growth rate $G$ are different for the CIFAR-10 and ImageNet architectures. Note that in the ImageNet architecture we set the stride of the initial convolution to 2 and the pool size in global pooling to 7 due to the scale of the input image.} \label{fig.bigdiagram} \end{figure} In Fig.\ref{fig.bigdiagram} we illustrate the overall architectures. We repeat an identical ``cell'' (Dense Cell) numerous times following the connecting rules of CondenseNet \cite{huang2017condensenet}. We take inspiration from CondenseNet, which optimizes both classification accuracy and inference speed for mobile devices. The feature maps are directly connected even when they have different resolutions, and the growth rate is doubled whenever the size of the feature maps is reduced. The fully dense connection strategy encourages feature re-use, and the exponentially increasing growth rate reduces computational costs. These characteristics are beneficial when deploying models on energy-constrained devices. As we conduct our search on CIFAR-10, transferring the searched model to ImageNet requires more stride-2 pooling layers and Dense Cells, since the input images (224 x 224) are much larger than those of CIFAR-10 (32 x 32). Finally, a global average pooling layer is appended to the last Dense Cell to obtain the final output. The overall architecture (e.g., how many cells are connected, the initial output feature map size, the growth rate) is fixed before searching; the only component we search for is the cell structure. This idea follows the heuristic of searching for a ``block'', similar to \cite{zoph2017learning,liu2017progressive}. Each cell to be searched consists of multiple layers of two types - normalization (Norm) and convolutional (Conv) layers. We progressively add layers following the Norm-Conv-Norm-Conv order (Fig.\ref{fig.searchspace}(a)-Right). The operations available for Norm (yellow boxes) and Conv (green boxes) layers are shown in the left and right column below, respectively: \begin{multicols}{2} \begin{enumerate} \item Batch Normalization + Relu \item Batch Normalization \item No op (Identity) \end{enumerate} \hfill\linebreak \begin{enumerate} \item 1x1 Convolution \item 3x3 Convolution \item 1x1 Group Convolution \item 3x3 Group Convolution \item 1x1 Learned Group Convolution \item 3x3 Depth-wise Convolution \end{enumerate} \end{multicols} \section{Search Space} \begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth]{figures/searchspace_ppp-net.eps} \end{center} \caption{\textbf{Search Space Design.} Panel (a): We show the cell structure of our DPP-Net. Panel (b): cells of efficient CNNs. BN, DW, LG, G stand for Batch Norm, Depth-wise, Learned Group, and Group, respectively. 
All the group convolutions are implicitly followed by a channel shuffle operation.} \label{fig.searchspace} \end{figure} Our search space covers well-designed efficient operations (e.g., Depth-wise Convolution \cite{chollet2016xception}, Learned Group Convolution \cite{huang2017condensenet}) to take advantage of empirical knowledge about designing efficient CNNs. This not only ensures the robustness and efficiency of our searched architectures but also reduces the training time of the searched models, and therefore the search time as well. Finally, the blocks of other efficient CNNs, e.g., MobileNet \cite{howard2017mobilenets} and ShuffleNet \cite{zhang2017shufflenet}, are also shown in Fig.\ref{fig.searchspace}(b) for a more thorough comparison. We now measure the complexity of our search space to get an intuition for the size of the search problem. For an $\ell$-layer cell, the total number of possible cell structures is $O_{0} \times O_{1} \times \cdots \times O_{i} \times \cdots \times O_{\ell-1}$, where $O_{i} = \left | Norm \right |$ if $i \text{ mod } 2 = 0$ and $O_{i} = \left | Conv \right |$ otherwise. As shown above, the number of operations in the Norm set is 3 and the number of operations in the Conv set is 6. Therefore, a 3-layer cell structure has $3 \times 6 \times 3 = 54$ possibilities, and a 4-layer cell has $54 \times 6 = 324$ possible structures. As the number of layers increases, it is hardly pragmatic to train all the architectures. This search space is small compared to the search space of \cite{zoph2017learning,liu2017progressive} because we discard operations that are rarely used in modern mobile CNNs and do not need to search for which layer to connect to. Nevertheless, this search space is still versatile enough to cover a wide variety of possible mobile models. \section{Search Algorithm} \begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth]{figures/solution_diagram_old.eps} \end{center} \caption{\textbf{Flow Diagram of Our Search Algorithm}. We adopt the Sequential Model-Based Optimization algorithm (\cite{hutter2011sequential}) to search efficiently with the following three steps: \textbf{(1) Train and Mutation}, \textbf{(2) Update and Inference}, and \textbf{(3) Model Selection}. Note that $\ell$ is the number of layers in a cell, $K$ is the number of models to train, and ${K}'$ is the number of models after \textit{Mutation}.} \label{fig.diagram} \end{figure} \subsection{Overview} Many architecture search approaches search directly over the complete search space, which requires significant computing power. Inspired by \cite{liu2017progressive}, which progressively searches architectures from a small search space to a large one, we adopt the Sequential Model-Based Optimization algorithm (\cite{hutter2011sequential}) to navigate the search space efficiently. Our search algorithm consists of the following three main steps (Fig.~\ref{fig.diagram}). \begin{enumerate} \item{\textbf{Train and Mutation}.} In this stage, we train $K$ $\ell$-layer models and acquire their accuracies after $N$ epochs. Meanwhile, for each $\ell$-layer model, we mutate it and acquire $(\ell+1)$-layer models by exploring all possible combinations. Assuming that we have $K$ models before mutation, the number of models ${K}'$ after the mutation process is the following. 
\begin{equation}\label{eq.mutation} {K}' = \begin{dcases} K \times \left | Norm \right |, & \text{if } \ell \text{ mod } 2 = 0 \\ K \times \left | Conv \right |, & \text{otherwise} \end{dcases} \end{equation} \item{\textbf{Update and Inference}.} In the \textbf{Train and Mutation} step, the algorithm generates a large number of candidate models that are usually beyond our ability to evaluate. We use a surrogate function to predict the networks' accuracies from the given architectures. The surrogate function is updated with the evaluation accuracies (output) and the architectures (inputs) of the $K$ $\ell$-layer models from the \textbf{Train and Mutation} step. After the surrogate function is updated, we predict the accuracies of the mutated $(\ell+1)$-layer models. Using a surrogate function avoids the time-consuming training needed to obtain the true accuracy of a network, at the cost of only a slight regression error. \item{\textbf{Model Selection}.} There are two ways to select $(\ell+1)$-layer models. \\ \textit{PNAS Method.} \cite{liu2017progressive} adopted the SMBO algorithm to search for block architectures of increasing complexity. During the search process, SMBO simply selects the top $K$ performing models based on predicted accuracies. This approach does not consider the heterogeneity of real-world portable devices, which are only equipped with a limited power supply. \textit{Our Method.} Our method considers not only the accuracy of the models but also the device-aware characteristics. Those characteristics include QoS (Quality of Service) and hardware requirements (e.g., memory size), which are critical metrics to be considered on mobile and embedded devices. Given the device we are searching on, multiple hard constraints $\mu$ and soft constraints $\xi$ are set. A hard constraint $\mu$ is considered to be the minimal requirement of the model. A model that does not meet the hard constraints will be removed from the candidate list. On the other hand, a soft constraint $\xi$ is treated as one of the objectives to be optimized, which will eventually be handled using Pareto Optimality selection. \end{enumerate} \begin{figure}[h!] \begin{center} \includegraphics[width=0.9\linewidth]{figures/pareto_group.eps} \end{center} \caption{\textbf{A symbolic figure for Pareto Optimality.} Panel (a) illustrates an example of a two-objective Pareto front. Every box represents a feasible choice. In this case, our goal is to minimize both objectives. Since box C is dominated by both box A and box B, it is not on the Pareto front, while box A and box B both lie on the front because neither of them dominates the other. Panel (b) demonstrates that when the number of objectives is more than two, the Pareto front becomes more complicated.} \label{fig.pareto} \end{figure} \subsection{Pareto Optimality} Since we are optimizing the problem using multiple objectives, no single solution will optimize each objective simultaneously and compromises will have to be made. We treat neural network architecture search as a multi-objective optimization problem and use Pareto Optimality over a set of pre-defined objectives to select models. Using Pareto Optimization, it is likely that there exist a number of optimal solutions. A solution is said to be Pareto optimal if none of the objectives can be improved without worsening some of the other objectives, and the solutions that achieve Pareto optimality are said to form the Pareto front. 
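To make the selection step above concrete, the following is a minimal, hypothetical Python sketch (not the authors' released code) of hard-constraint filtering followed by non-dominated filtering; the candidate names, objective values, and the assumption that all objectives are "lower is better" are illustrative only.

\begin{verbatim}
# Minimal sketch of hard-constraint filtering + Pareto-front extraction.
# Objectives are assumed to be "lower is better" (e.g., error rate, latency).

def pareto_front(candidates, hard_constraints=None):
    """candidates: list of dicts, each with an 'objectives' tuple.
    hard_constraints: optional tuple of per-objective upper bounds (mu)."""
    if hard_constraints is not None:
        candidates = [c for c in candidates
                      if all(v <= b for v, b in zip(c["objectives"], hard_constraints))]

    def dominates(a, b):
        # a dominates b if it is no worse in every objective
        # and strictly better in at least one.
        return (all(x <= y for x, y in zip(a, b)) and
                any(x < y for x, y in zip(a, b)))

    return [c for c in candidates
            if not any(dominates(o["objectives"], c["objectives"])
                       for o in candidates if o is not c)]

# Toy example with (error rate, inference time) objectives:
models = [{"name": "A", "objectives": (0.047, 0.006)},
          {"name": "B", "objectives": (0.050, 0.005)},
          {"name": "C", "objectives": (0.058, 0.009)}]
print([m["name"] for m in pareto_front(models)])  # ['A', 'B']; C is dominated by both
\end{verbatim}

The toy data mirrors the situation in Fig.\ref{fig.pareto}(a): A and B lie on the front because neither dominates the other, while C is dominated and removed.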
\subsection{Surrogate Function} To accurately predict the classification accuracy of an architecture, a surrogate function is used. The surrogate function must be able to learn efficiently from a few data points and handle variable-sized inputs (models with different numbers of layers). Hence, we choose a Recurrent Neural Network (RNN); the last hidden state of the RNN is followed by a fully connected layer with a sigmoid nonlinearity to regress accuracy. We choose an RNN as the surrogate function because of its high sample efficiency and its ability to handle inputs of different lengths. The input to the RNN is the one-hot encoding of our cell structure, and each structure has its own embedding. \begin{figure}[h] \begin{minipage}{0.5\textwidth} \includegraphics[width=\textwidth]{figures/surrogate.eps}\label{fig.surrogate} \end{minipage}\hspace{.01\textwidth} \begin{minipage}{0.4\textwidth} \caption{The architecture diagram of our Recurrent Neural Network (RNN). The dashed block indicates that we progressively search for architectures with more layers.} \end{minipage} \end{figure} \section{Conclusions} Our proposed \textit{DPP-Net} is the first device-aware neural architecture search approach outperforming state-of-the-art handcrafted mobile CNNs. Experimental results on CIFAR-10 demonstrate the effectiveness of Pareto-optimal networks found by DPP-Net, for three different devices: (1) a workstation with NVIDIA Titan X GPU, (2) NVIDIA Jetson TX1 embedded system, and (3) mobile phone with ARM Cortex-A53. Compared to CondenseNet and NASNet (Mobile), DPP-Net achieves better performances: higher accuracy \& shorter inference time on these various devices. Additional experimental results also show that models found by DPP-Net achieve state-of-the-art performance on ImageNet. \section{Experiments and Results} \subsection{Experimental Details} We conduct our search on the CIFAR-10 dataset with standard augmentation; the training set consists of 50,000 images and the testing set consists of 10,000 images. After the search is done, we use the cell structure to form a larger model and train it on the ImageNet \cite{deng2009imagenet} classification task to see how well the search performs. For the surrogate function, we use a standard LSTM with layer normalization \cite{ba2016layer}; the hidden state size and the embedding size are both set to 128. The bias in the fully connected layer is initialized to 2, and the embeddings use a random uniform initializer in the range 0 to 1. To train the surrogate function, we use the Adam Optimizer \cite{kingma2014adam} with learning rate 0.008. During the search, the numbers of repeated blocks $C_1, C_2, C_3$ are set to 14, 14, 14 and $G_1, G_2, G_3$ are set to 8, 16, 32 for CIFAR-10, and the search ends at $\ell$ = 4. Each sampled architecture is trained for 10 epochs with batch size 256 using Stochastic Gradient Descent and Nesterov momentum weight 0.9. The learning rate is set to 0.1 with cosine decay \cite{loshchilov2016sgdr}. At each iteration of the search algorithm, our number of models to train, $K$, is set to 128. After searching is done, we train the final models on ImageNet with batch size 256 for 120 epochs; the numbers of repeated blocks $C_1, C_2, C_3, C_4, C_5$ are set to 4, 6, 8, 10, 8 and $G_1, G_2, G_3, G_4, G_5$ are set to 8, 16, 32, 64, 128. The detailed settings of the devices we search on are shown in Table.\ref{tb.hwsettings}. 
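As a rough illustration of the surrogate predictor described above, the sketch below builds an LSTM-based accuracy regressor with the stated hyper-parameters (embedding and hidden size 128, fully connected bias initialized to 2, uniform embedding initialization, Adam with learning rate 0.008). It is a simplified sketch rather than the authors' implementation: a plain \texttt{nn.LSTM} stands in for the layer-normalized LSTM, and the vocabulary size of 9 (3 Norm + 6 Conv operations) is our assumption.

\begin{verbatim}
import torch
import torch.nn as nn

class SurrogateRNN(nn.Module):
    """Embeds a variable-length sequence of operation tokens (one per layer
    in the cell) and regresses accuracy from the last hidden state."""
    def __init__(self, vocab_size=9, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The paper uses an LSTM with layer normalization; a plain nn.LSTM
        # is used here for brevity.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)
        nn.init.constant_(self.fc.bias, 2.0)          # FC bias initialized to 2
        nn.init.uniform_(self.embed.weight, 0.0, 1.0)  # uniform init in [0, 1]

    def forward(self, tokens):                         # tokens: (batch, seq_len)
        x = self.embed(tokens)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.fc(h_n[-1])).squeeze(-1)

model = SurrogateRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.008)
pred = model(torch.tensor([[0, 4, 2]]))  # a hypothetical 3-layer cell encoding
\end{verbatim}

The regressor is then fit on the (architecture, evaluation accuracy) pairs collected in the \textbf{Train and Mutation} step before predicting the accuracies of the mutated candidates.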
When searching models on WS and ES, we consider 4 objectives, evaluation error rate, number of parameters, FLOPs, and actual inference time on different computing devices. While on Mobile Phone, we consider an additional metric, memory usage, as our $5^{th}$ objective. \begin{table}[h] \caption{\textbf{Hardware Specifications and Numbers of Objectives.} For WS, 64 GB is the CPU memory and 12 GB is the GPU memory. In ES, memory space is shared among CPU and GPU} \label{tb.hwsettings} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline & \textbf{Workstation (WS)} & \textbf{Embedded System (ES)} & \textbf{Mobile Phone (M)} \\ \hline Instance & Desktop PC & NVIDIA Jetson TX1 & Xiaomi Redmi Note 4 \\ \hline CPU & Intel i5-7600 & ARM Cortex57 & ARM Cortex53 \\ \hline Cores & 4 & 4 & 8 \\ \hline GHz & 3.5 & 1.9 & 2.0 \\ \hline CUDA & Titan X (Pascal) & Maxwell 256 & - \\ \hline Memory & 64 GB / 12 GB & 4 GB & 3 GB \\ \hline Objectives & 4 & 4 & 5 \\ \hline \end{tabular}} \end{table} \subsection{Results on CIFAR-10} We first provide the results about the Pareto-optimal candidates (each trained for 10 epochs) found during the search process, and then demonstrate the evaluations of final models (trained for 300 epochs). \begin{figure}[h] \centering \subfigure[]{\label{fig.params}\includegraphics[width=0.325\textwidth]{figures/params_acc.eps}} \subfigure[]{\label{fig.flops}\includegraphics[width=0.325\textwidth]{figures/flops_acc.eps}} \subfigure[]{\label{fig.dur}\includegraphics[width=0.325\textwidth]{figures/dur_parm_flops.eps}} \caption{\textbf{Pareto-optimal candidates on WS (trained with 10 epochs) evaluated with Cifar10 dataset}. (a) is the scatter plot between error rate (Y-axis) and the number of parameters (X-axis), whereas (b) stands for error rate v.s. FLOPs. (c) is the number of parameters (left Y-axis) and FLOPs (right Y-axis) v.s. actual inference time (X-axis), where the dot represents params v.s. inference time and the cross is FLOPs v.s. inference time. Each model is color-coded: green (DPP-Net-PNAS), yellow (DPP-Net-WS), and cyan (DPP-Net-Panacea). Notice that each candidate here represents a neural architecture that achieves Pareto optimality. Finally, CondenseNet (red dots) is included for comparison.} \label{fig.paretofront} \end{figure} Fig.\ref{fig.paretofront} shows the candidates extracted from the Pareto front during the search process. In Fig.\ref{fig.paretofront}(a,b), no clear pattern (or association) is observed between the error rate and the number of parameters (or FLOPs). Similarly, from Fig.\ref{fig.paretofront}(c), the inference time couldn't be simply associated with the device-agnostic objectives (FLOPs and number of parameters). As we will show later in Table.\ref{tb.cifar} and Fig.\ref{fig.sort_time}, not surprisingly, inference time is device-dependent since, in addition to modeling, the hardware implementation also affects the inference time. For a better comparison and also to showcase our DPP-Net, we evaluate and plot the performance of CondenseNet (reproduce 10 epochs performance), which is also included in our search space but not on the Pareto front. \begin{table}[t] \caption{\textbf{Cifar10 Classification Results.} Missing values are the metrics not reported in their original papers. Pareto front visualizations of our searched networks can also be found in Fig.~\ref{fig.paretofront}. 
The standard deviation of the metrics of DPP-Net-Panacea are calculated across 10 runs} \label{tb.cifar} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{l|lll|llll} \hline & \multicolumn{3}{c|}{\textit{Device-agnostic metrics}} & \multicolumn{4}{c}{\textit{Device-aware metrics}} \\ \hline \textbf{Model from previous works} & Error rate & Params & FLOPs & Time-WS & Time-ES & Time-M & Mem-M \\ \hline Real et al. \cite{real2017large} & 5.4 & 5.4M & - & - & - & - & - \\ NASNet-B \cite{zoph2017learning} & 3.73 & 2.6M & - & - & - & - & - \\ PNASNet-1 \cite{liu2017progressive}& 4.01 & 1.6M & - & - & - & - & - \\ \hline DenseNet-BC (k=12) \cite{huang2017densely} & 4.51 & 0.80M & - & - & - & 0.273 & 79MB \\ CondenseNet-86 \cite{huang2017condensenet} & 5.0 & 0.52M & 65.8M & 0.009 & 0.090 & 0.149 & 113MB \\ \hline & \multicolumn{3}{c|}{\textit{Device-agnostic metrics}} & \multicolumn{4}{c}{\textit{Device-aware metrics}} \\ \hline \textbf{Model from DPP-Net} & Error rate & Params & FLOPs & Time-WS & Time-ES & Time-M & Mem-M \\ \hline DPP-Net-PNAS & \textbf{4.36} & 11.39M & 1364M & 0.013 & 0.062 & 0.912 & 213MB \\ \hline DPP-Net-WS & 4.78 & 1.00M & 137M & \textbf{0.006} & 0.075 & 0.210 & 129MB \\ DPP-Net-ES & 4.93 & 2.04M & 270M & 0.007 & \textbf{0.044} & 0.381 & 100MB \\ DPP-Net-M & 5.84 & \textbf{0.45M} & \textbf{59.27M} & 0.008 & 0.065 & \textbf{0.145} & \textbf{58MB} \\ \hline DPP-Net-Panacea & 4.62 $\pm$ 0.23 & 0.52M & 63.5M & 0.009 $\pm$ 7.4e-5 & 0.082 $\pm$ 0.011 & 0.149 $\pm$ 0.017 & 104MB \end{tabular}} \end{table} During the searching process, the surrogate function was updated several times. The best regression error (on the validation set) is around 12\%. At the first glance, this number is a bit large in terms of predicting the true accuracy. However, it is important to clarify that the purpose of using the surrogate function is to suggest what kind of models may have a relatively good accuracy instead of exactly how accurate the models are. For the search time, we use 4 GTX 1080 GPUs and search for two days (around 48 hours). After searching process is done, we select two architectures (from others on the Pareto front) for detailed evaluation: DPP-Net-\textit{Device} and DPP-Net-Panacea. DPP-Net-\textit{Device} has a small error rate and the shortest inference time when running on certain \textit{Device} (WS or ES), whereas DPP-Net-Panacea also has a small error rate and performs relatively well on every objective (but longer inference time than DPP-Net-\textit{Device}). These two best models, in terms of Pareto Optimality, are trained for 300 epochs and the evaluation metrics are reported in Table.\ref{tb.cifar} (bottom half). We also include the results of the neural architecture searched by DPP-Net with PNAS \cite{liu2017progressive} criterion: the highest classification accuracy among all the candidates. Furthermore, for the completeness and comprehensive study, in the top half of Table.\ref{tb.cifar}, we include the results from the best models of previous NAS works \cite{zoph2017learning,real2017large,liu2017progressive}, as well as the current state-of-the-art handcrafted mobile CNN models (bottom half) \cite{huang2017condensenet,huang2017densely}. The architectures of these models are shown in Fig.\ref{fig.mobile_archis}. DPP-Net-PNAS results in finding models with possible large number of parameters and very slow inference time. 
Our results are compared with state-of-the-art handcrafted mobile CNNs (second group) and models designed using architecture search methods (first group). Our DPP-Net clearly strikes a better trade-off among multiple objectives. \begin{figure}[h] \begin{center} \includegraphics[width=1\textwidth]{figures/pnas_overall.eps} \end{center} \caption{\textbf{The dense cell topology found by our search.}} \label{fig.mobile_archis} \end{figure} \subsection{Results on ImageNet} We further transfer our searched architecture to test the performance on the ImageNet classification task. The cell structures searched using the CIFAR-10 dataset are directly used for ImageNet with only a slight modification of the number of repeated Dense Cells. The hyper-parameters for training DPP-Net on ImageNet are nearly identical to those for training DPP-Net on CIFAR-10, except for the group lasso regularizer weight, which we set to 1e-5. This regularization induces group-level sparsity for Learned Group Convolution as suggested in \cite{huang2017condensenet}. The results of ImageNet training are shown in Table.\ref{tb.imagenet}. DPP-Net-Panacea performs better than CondenseNet-74 in nearly every aspect. Moreover, DPP-Net-Panacea outperforms NASNet (Mobile), a state-of-the-art mobile CNN designed by an architecture search method \cite{zoph2017learning}, in every metric. We further argue that the sophisticated architecture makes NASNet (Mobile) impractical on mobile devices, although it has a relatively small number of parameters compared to traditional CNNs. These results again show the versatility and robustness of our device-aware search method. \begin{table}[h] \caption{\textbf{ImageNet Classification Results.} Time-M and Mem-M are the inference time and memory usage of the corresponding model on our mobile phone using ONNX and Caffe2. Due to operations not supported by this framework, we cannot measure the inference time and memory usage of NASNet (Mobile) on our mobile phone} \label{tb.imagenet} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{l|lllllll} \textbf{Model} &\textbf{Top-1} &\textbf{Top-5} &\textbf{Params} &\textbf{FLOPs} &\textbf{Time-ES} &\textbf{Time-M} &\textbf{Mem-M} \\ \hline Densenet-121 \cite{huang2017densely} & 25.02 & 7.71 & - & - & 0.084 & 1.611 & 466MB \\ Densenet-169 \cite{huang2017densely} & 23.80 & 6.85 & - & - & 0.142 & 1.944 & 489MB \\ Densenet-201 \cite{huang2017densely} & 22.58 & 6.34 & - & - & 0.168 & 2.435 & 528MB \\ \hline ShuffleNet 1x (g=8) & 32.4 & - & 5.4M & 140M & 0.051 & 0.458 & 243MB \\ MobileNetV2 & 28.3 & - & 1.6M & - & 0.032 & 0.777 & 270MB \\ Condensenet-74 (G=4)\cite{huang2017condensenet}& 26.2 & 8.30 & 4.8M & 529M & 0.072 & 0.694 & 238MB \\ \hline NASNet (Mobile) & 26.0 & 8.4 & 5.3M & 564M & 0.244 & - & - \\ \hline DPP-Net-PNAS & 24.16 & 7.13 & 77.16M & 9276M & 0.218 & 5.421 & 708MB \\ DPP-Net-Panacea & 25.98 & 8.21 & 4.8M & 523M & 0.069 & 0.676 & 238MB \\ \end{tabular}} \end{table} \subsection{Device Performance Study} Our main idea is that models searched on one device do not necessarily guarantee good performance on other devices when it comes to device-related metrics, such as actual inference time. A small number of parameters or FLOPs does not always indicate fast inference time; this is due to existing issues in hardware optimization and software implementations (e.g., the implementation of depth-wise convolution is inefficient and group convolution cannot reach its theoretical speedup). A minimal sketch of how a single forward pass can be timed is shown below. 
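The snippet below is a simplified, hypothetical illustration (not the measurement code used in the paper, which ran on PyTorch 0.3.0 and Caffe2) of how per-device forward-pass latency can be measured with a modern PyTorch; the placeholder network, input shape, and repetition counts are assumptions.

\begin{verbatim}
import time
import torch

def forward_latency(model, input_shape=(1, 3, 32, 32),
                    n_warmup=10, n_runs=100, device="cpu"):
    """Average wall-clock time of a single forward pass, in seconds."""
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(n_warmup):           # warm-up runs are not timed
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()        # flush queued GPU kernels
        start = time.time()
        for _ in range(n_runs):
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()
    return (time.time() - start) / n_runs

# Example: latency of a small placeholder network on CPU.
net = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                          torch.nn.ReLU(),
                          torch.nn.AdaptiveAvgPool2d(1),
                          torch.nn.Flatten(),
                          torch.nn.Linear(8, 10))
print(forward_latency(net, device="cpu"))
\end{verbatim}

Repeating such a measurement for every candidate on each target device is cheap, since only the forward pass is executed.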
To prove that inference time is device-dependent, we measured the inference time of all 4-layer models (measuring only the network forward time can be done very efficiently) on 3 devices and plotted them in Fig.\ref{fig.sort_time}. For the WS and ES environments, we test our models on PyTorch 0.3.0 \cite{paszke2017automatic} built with Python 3.5, CUDA-8.0, and CUDNN-6.0; as for M, we follow the instructions from the PyTorch official guide and port the models to Caffe2 for deployment. The X-axis in Fig.\ref{fig.sort_time} indexes all the 4-layer cell structures, sorted by their inference time on WS (green line/bottom line) in ascending order. The red line and the blue line are the inference times on ES and M, respectively. \begin{figure}[h] \begin{center} \includegraphics[width=1\linewidth]{figures/sort_time.eps} \end{center} \caption{\textbf{Model inference time on different devices}. We show that the inference time is highly device-related. The X-axis is the index of all 4-layer models, sorted by inference time on WS in ascending order.} \label{fig.sort_time} \end{figure} The plot shows that even on similar devices with identical software settings (WS v.s. ES), the inference time can be sensitive to the particular device. Moreover, inference time on M is significantly disparate from that of WS. Therefore, we conclude that only searching models on an actual device can ensure the robustness of the searched results. \section{Introduction} Deep Neural Networks (DNNs) have demonstrated impressive performance on many machine-learning tasks such as image recognition \cite{krizhevsky2012imagenet}, speech recognition \cite{hannun2014deep}, and language modeling \cite{sutskever2014sequence}. Despite the great successes achieved by DNNs, crafting neural architectures is usually a manual, time-consuming process that requires profound domain knowledge. Recently, automatic neural architecture search (NAS) has drawn lots of attention from both industry and academia \cite{negrinho2017deeparchitect,zoph2016neural}. Approaches for NAS can mainly be categorized into two branches: those based on Reinforcement Learning (RL) \cite{pham2018efficient,zoph2016neural,baker2016designing,zoph2017learning,zhong2017practical} and those based on Genetic Algorithms (GA) \cite{real2017large,xie2017genetic,liu2017hierarchical,real2018regularized}. There are also works not based on RL or GA, such as \cite{liu2017progressive}, achieving comparable performance by using other efficient search algorithms. However, most of the works mentioned above focus on optimizing one single objective (e.g., accuracy), and other objectives have been largely ignored, especially those related to devices (e.g., latency). On the other hand, while designing complex, sophisticated architectures has already been treated more like an art than a science, searching for neural architectures optimized for multiple objectives poses an even more significant challenge. To this end, new architectures leveraging novel operations \cite{howard2017mobilenets,zhang2017shufflenet,huang2017condensenet} have been developed to achieve higher computing efficiency than conventional convolution. Not surprisingly, designing such architectures requires, again, profound domain knowledge and much effort. Therefore, how to automatically search for network architectures jointly considering high accuracy and other objectives (e.g., inference time, model size, etc., to conform to device-related constraints) remains a critical yet less addressed question. 
To the best of our knowledge, there is one previous work \cite{kimnemo} that searches network architectures by considering both accuracy and inference time. Nevertheless, the computational power required during training by their algorithm is very significant, and their search space is naively small. \begin{figure}[t!] \begin{minipage}{0.55\textwidth} \includegraphics[width=\textwidth]{figures/concept.eps} \end{minipage}\hspace{.01\textwidth} \begin{minipage}{0.4\textwidth} \caption{Different devices have different Pareto optima. An optimal point on Device A's Pareto front may not lie on Device B's Pareto front. Given multiple device-related (e.g., inference time and memory usage) and device-agnostic (e.g., accuracy and model size) objectives, our DPP-Net can efficiently find various network architectures on the Pareto front for the corresponding device.} \end{minipage} \label{fig.concept} \end{figure} We propose \textit{DPP-Net}: \textit{D}evice-aware \textit{P}rogressive Search for \textit{P}areto-optimal Neural Architectures given multiple device-related (e.g., inference time and memory usage) and device-agnostic (e.g., accuracy and model size) objectives. It is an efficient search algorithm that finds various network architectures on the Pareto front (Fig.\ref{fig.concept}) of the multi-objective space, exploring the trade-offs among these objectives. In this way, a deep learning practitioner can select the best architecture for a specific use case. We define our search space by taking inspiration from state-of-the-art handcrafted mobile CNNs, which makes it more compact and efficient compared to usual NAS search spaces. For search efficiency, we have also adopted the progressive search strategy used in \cite{liu2017progressive} to speed up the search process. Experimental results on CIFAR-10 demonstrate that DPP-Net can find various Pareto-optimal networks on three devices: (1) a workstation with Titan X GPU, (2) NVIDIA Jetson TX1 embedded system, and (3) a mobile phone with ARM Cortex-A53. Most importantly, DPP-Net achieves better performances in both (a) higher accuracy and (b) shorter inference time, compared to the state-of-the-art CondenseNet on all three devices. Finally, our searched DPP-Net achieves considerably good performance on ImageNet as well. \section{Related Work} Recent advancements in neural architecture search can be classified into three basic categories: Reinforcement Learning (RL) based approaches, Genetic Algorithm (GA) based ones, and methods that involve optimization techniques other than those two. In addition to architecture search techniques, we will also focus on those methods that work on multiple objectives. \paragraph{RL-based approach.} Seminal work by \cite{zoph2016neural} proposed ``Neural Architecture Search (NAS)'', using the REINFORCE algorithm \cite{Williams:1992:SSG:139611.139614} to learn a ``controller'' RNN that generates a sequence of actions representing the architecture of a CNN. Classification accuracies of the generated CNN models on a validation dataset are used as rewards for the controller. NASNet \cite{zoph2017learning} further improves NAS by replacing REINFORCE with proximal policy optimization (PPO) \cite{schulman2017proximal} and by searching for the architecture of a ``block'' which is repeatedly concatenated to form a complete model. This technique has not only reduced the search space but also managed to incorporate empirical knowledge when designing a CNN. 
Other works in the field include the approach of \cite{cai2018efficient}, which searches model architectures by manipulating the depth and the width of the layers using policy gradients, and the methods proposed by \cite{baker2016designing} and \cite{zhong2017practical}, which search network architectures using Q-learning. A concurrent work \cite{pham2018efficient} proposed a model that forces all child networks to share weights, which largely reduces the computational costs needed to search in a space as defined by \cite{zoph2017learning}. \paragraph{GA-based approach.} Besides RL-based methods, Genetic Algorithm based methods \cite{real2017large,xie2017genetic,liu2017hierarchical} are also popular in architecture search research. One of the recent works in this field \cite{real2018regularized} achieves state-of-the-art performance on the CIFAR-10 image classification task, surpassing RL-based methods. \paragraph{Other approaches} Methods using either RL-based or GA-based algorithms usually require a significant amount of computational power and are therefore infeasible in certain situations. Many approaches have been proposed to specifically address this issue with search strategies that cannot be categorized into the previous two families. \cite{negrinho2017deeparchitect} use Monte Carlo Tree Search (MCTS) to search through the space of CNN architectures in a shallow-to-deep manner and randomly select which branch to expand at each node. A Sequential Model-Based Optimization (SMBO) method \cite{hutter2011sequential} that learns a predictive model is further adopted to guide the decision making of node expansion. \cite{liu2017progressive} also use SMBO as the search algorithm and have achieved comparable performance to NASNet using significantly fewer computational resources while operating on the same search space. \cite{baker2018accelerating} proposed to predict performance to reduce the effort of searching model architectures. A concurrent work \cite{brock2017smash} proposed to train a network to predict the weights of another network and combines this method with random search to find good candidate models. Despite the small amount of resources required per search, the resulting models can hardly compete with state-of-the-art approaches. \paragraph{Architecture search with multiple objectives.} All the previously mentioned works focus on searching for models that achieve the highest performance (e.g., classification accuracy) regardless of the model complexity. \cite{kimnemo} proposed to treat neural network architecture search as a multi-objective optimization task and adopt an evolutionary algorithm to search for models with two objectives: run-time speed and classification accuracy. However, the performances of the searched models are not comparable to handcrafted small CNNs, and the number of GPUs they require is enormous. \paragraph{Handcrafted models with multiple objectives.} The machine learning and computer vision communities are rich in handcrafted neural architectures. Here we will list some of the most recent work that involves multiple objectives. MobileNet \cite{howard2017mobilenets} and ShuffleNet \cite{zhang2017shufflenet} utilize depth-wise convolution and largely reduce the computational resources required while remaining comparably accurate. However, the real-world implementation of depth-wise convolution in most deep learning frameworks has not reached the theoretical efficiency and results in much inferior inference time. 
CondenseNet \cite{huang2017condensenet} uses a variant of group convolution \cite{krizhevsky2012imagenet} to achieve state-of-the-art computational efficiency.
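To make the notion of Pareto dominance used throughout this paper concrete, the following minimal Python sketch filters a set of evaluated architectures down to their Pareto front; the architecture names and objective values are purely hypothetical and serve only to illustrate the dominance test, not our actual search procedure.
\begin{verbatim}
# Illustrative sketch only: keep the architectures that are not dominated by
# any other candidate.  Every objective is assumed to be minimized (e.g.
# top-1 error in % and inference time in ms); the values below are made up.
def pareto_front(candidates):
    front = []
    for name, obj in candidates:
        dominated = any(
            all(o <= o_self for o_self, o in zip(obj, other)) and other != obj
            for _, other in candidates
        )
        if not dominated:
            front.append((name, obj))
    return front

if __name__ == "__main__":
    archs = [("net-A", (5.2, 40.0)),   # (error %, latency ms)
             ("net-B", (4.8, 55.0)),
             ("net-C", (5.5, 38.0)),
             ("net-D", (5.4, 60.0))]   # dominated by net-A
    print(pareto_front(archs))         # net-A, net-B and net-C remain
\end{verbatim}
A practitioner can then pick any point on the returned front, trading accuracy against latency for the target device.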
\section{Conclusion} In this work, we have developed, to the best of our knowledge, one of the first fully spatiotemporal cardiac surface tracking methods for 4DE data, which was then used to generate dense displacements using a regularized radial basis function framework. Our method was also able to account for the cyclical nature of the cardiac motion. Strains calculated from these displacements were then used to analyze the changes in global and local deformation of the LV. We also proposed an unsupervised neural network-based feature generation method for motion tracking. \par We obtained very good tracking accuracy with our method and features on the Leuven synthetic dataset. Even though we only tracked endocardial and epicardial surfaces, we obtained comparable results to the FFD method, which accounts for motion throughout the myocardium. We were also able to detect local regional changes in strain patterns and demonstrate expected strain profiles for different samples based on the location and extent of infarction. \par In the future, the tracking method can be expanded to model dense myocardial displacements as well. This would have the advantage of not requiring perfect segmentations. A rough region-of-interest would suffice as long as it contains the LV. Since doing this would make the problem more ill-posed, the motion model would also have to be expanded to incorporate more constraints. The neural network-based feature generation strategy can be expanded further to develop supervised or transfer learning based methods applicable to \textit{in vivo} data as well. For instance, the Siamese neural network based approach we explored in our previous work can be extensively leveraged with more training data \citep{parajuli2017flow}. \section{Acknowledgement} We are immensely thankful for the efforts of many past and present members of Dr. Albert Sinusas's group who were involved in the image acquisitions. We would also like to thank Dr. Hemant Tagare and Dr. Lawrence Staib of the Image Processing and Analysis Group at Yale for many fruitful discussions. This work was supported in part by the National Institutes of Health (NIH) grant numbers R01HL121226 and T32HL098069.
\section{Methods} \begin{figure} \centering \includegraphics[scale=.32]{images/system_pipeline} \caption{Overall method outline.} \label{fig:system_pipeline} \end{figure} \par Many point matching methods that model rigid or non-rigid deformations try to achieve one-to-one or symmetric matches \citep{belongie2000shape, chui2003new}. Even within the free-form deformation framework, diffeomorphic transformations, which rely on a similar concept, are a popular choice. The resulting displacement fields from such mappings are more realistic and robust to noise and artifacts. However, matching algorithms that impose one-to-one correspondence only at a frame-to-frame level cannot guarantee that a composition of those one-to-one correspondences is also similarly one-to-one. Thus, in the work presented below, we develop an approach that (i) incorporates global one-to-one correspondence for all points being tracked and (ii) accounts for the cyclical nature of cardiac motion. \begin{figure} \centering \includegraphics[scale=.28]{images/dsea_seg_mask_and_points.png} \caption{Preprocessing steps for \textit{in vivo} data to get endocardial and epicardial surface points.} \label{fig:preprocess_surfaces} \end{figure} \subsection{Constrained Flow Optimization} \par First, point clouds are obtained by uniformly sampling the endocardial and epicardial surfaces (see figure \ref{fig:preprocess_surfaces}). The sequence of point clouds, through the cardiac cycle, is then set up as nodes in a graph with directed edges between points and their match candidates in the next time frame. Each node is endowed with a feature vector representing local appearance characteristics and the match candidates are chosen based on feature distances between nodes and their spatial neighbors. This is illustrated in Figure \ref{fig:graphical_structure}. \begin{figure} \centering \includegraphics[scale = .43]{images/flow_network_diagram.png} \caption{Nodes, edges and other relationships in the network.
The point sets are sampled from the myocardial surface sequence at each frame.} \label{fig:graphical_structure} \end{figure} \begin{table} \centering \caption{Notations} \begin{tabular}{r l} \hline $T$ & $\text{Number of frames} $ \\ $N(t)$ & $\text{Number of points per frame}, $ \\ &\qquad \qquad $t \in [1:T] $ \\ $x^t_i$ & $i^{th} \text{ point of frame } t,$ \\ & \qquad \qquad $i \in [1:N(t)]$ \\ $e^t_{ij}$ & $\text{Edge from point } i \text{ in frame } t$ \\ & $\text{ to point } j \text{ in frame } t+1$ \\ & \qquad \qquad $i \in [1:N(t)]$ \\ & \qquad \qquad $j \in [1:N(t+1)]$ \\ $w^t_{ij}$ & $\text{Weight associated with } e^t_{ij}$ \\ $f^t_{ij}$ & $\text{Flow through } e^t_{ij}$ \\ $ \eta (t, i)$ & $\text{Indices of points in the}$ \\ & $\text{neighborhood of } x^t_i \text{ in frame } t+1$ \\ \hline \end{tabular} \end{table} \par The edges capture particle (tissue) motion possibilities, and their weights capture the likelihood of the motion. We have $T$ time frames in total with $N(t)$ ($t \in [1:T]$) points per frame. Each node is defined as $x^t_i$ ($i \in [1:N(t)]$), where an edge $e^t_{ij}$ exists between $x^t_i$ at time $t$ and its neighbor $x^{t+1}_j$ (based on feature distances) at time $t+1$ ($i \in [1:N(t)]$ and $j \in [1:N(t+1)]$). The flow through an edge $e^t_{ij}$ in this network is captured by the binary-valued variable $f^t_{ij}$, and the corresponding edge weight is $w^t_{ij}$. $f^t_{ij} = 1$ implies that the points $x^t_{i}$ and $x^{t+1}_j$ are a match. \par We would like to solve for flow $f$ that is proportional to the edge weights $w$. This amounts to maximizing the inner product $w'f$, subject to the following constraints at each node $x^t_i$ ($\eta(t, i)$ indexes neighbors of $x^t_i$ in frame $t + 1$): \begin{enumerate} \item Flows are non-negative: \begin{equation} \forall t, i, j \qquad f^t_{ij} \geq 0. \end{equation} \item Sum of outgoing flows is less than or equal to one ($C_{out}$, see Figure \ref{fig:node_constraints}a). \begin{equation} \label{eqn:cout_sum} \forall t, i \qquad \sum_{j\in \eta(t, i)} f^t_{ij} \leqslant 1 \end{equation} \item Sum of outgoing and incoming flows is equal ($C_{bal}$, see Figure \ref{fig:node_constraints}b). \begin{equation} \forall t, i \qquad \sum_{h:i\in \eta(t-1, h)} f^{t-1}_{hi} = \sum_{j\in \eta(t, i)} f^{t}_{ij} \end{equation} \end{enumerate} \begin{figure} \centering \subfloat[A node $x^t_i$ and outgoing flows $f^t_{ij}$, which must sum to at most 1.]{\includegraphics[scale=.43]{images/cout_constraint.png}} \qquad \subfloat[A node $x^t_i$ with outgoing flows $f^t_{ij}$ and incoming flows $f^{t-1}_{hi}$. These must be in balance.]{\includegraphics[scale=.43]{images/cbal_constraint.png}} \caption{A node and its different edges/flows visualized.} \label{fig:node_constraints} \end{figure} \par $C_{out}$ ensures that total flow is preserved from one frame to another. By itself, it would result in complete trajectories that traverse through the best possible path but without any consideration for spatial consistency with other trajectories. $C_{bal}$ is enforced so that incoming and outgoing flows through nodes are equal. This is helpful in preventing many-to-one correspondences, which create an uncharacteristic stretching and shrinking in the displacement fields. One-to-many correspondences are also avoided for the same reason.
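As a concrete illustration of how the candidate edges and the $C_{out}$/$C_{bal}$ matrices described above can be assembled, the following Python sketch builds edges from feature-space nearest neighbors and fills sparse constraint matrices. The function and variable names are our own and purely illustrative (they are not taken from any released implementation), and the sketch omits the source node and loop edges introduced later.
\begin{verbatim}
import numpy as np
from scipy.sparse import lil_matrix

# Hypothetical sketch: candidate edges are the NK nearest neighbours in
# feature space; C_out / C_bal get one row per (frame, point) node.
def build_edges(points, feats, nk=3):
    """points[t]: (N(t), 3) array; feats[t]: (N(t), d) array; returns (t, i, j) edges."""
    edges = []
    for t in range(len(points) - 1):
        for i in range(len(points[t])):
            d = np.linalg.norm(feats[t + 1] - feats[t][i], axis=1)
            for j in np.argsort(d)[:nk]:
                edges.append((t, i, int(j)))
    return edges

def constraint_matrices(edges, n_per_frame):
    node_id = {}
    for t, n in enumerate(n_per_frame):
        for i in range(n):
            node_id[(t, i)] = len(node_id)
    C_out = lil_matrix((len(node_id), len(edges)))
    C_bal = lil_matrix((len(node_id), len(edges)))
    for k, (t, i, j) in enumerate(edges):
        C_out[node_id[(t, i)], k] = 1        # outgoing flow at x^t_i
        C_bal[node_id[(t, i)], k] = 1        # outgoing flow counts as +1
        C_bal[node_id[(t + 1, j)], k] = -1   # incoming flow counts as -1
    return C_out.tocsr(), C_bal.tocsr()
\end{verbatim}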
\par Even though we would ideally like to solve an integer programming (IP) optimization to get a binary-valued solution for $f$, we start with a linear programming (LP) relaxation, and solve the following optimization instead (all variables are considered in their vectorized forms here for simplicity): \begin{equation} \label{eqn:strict_optim} \begin{aligned} &\text{Maximize} && w' f\\ &\text{subject to} && f \geqslant 0, && C_{out} f \leqslant 1, && C_{bal} f \leqslant 0 \\ \end{aligned} \end{equation} \par Despite the relaxation of an IP into an LP, we still obtain a binary-valued solution. This is of great consequence because solving IPs directly is an NP-hard problem. Due to the special nature of the constraint matrices that define our LP, our solutions remain integer-valued. This builds upon work proving that an LP whose constraint matrix is totally unimodular and whose right-hand side is integral has an integral optimal solution \citep{hoffman2010integral}. A unimodular matrix is a square integer matrix with determinant $+1$ or $-1$, which implies that its inverse is also an integer matrix; a totally unimodular matrix is one in which every square submatrix has determinant $0$, $+1$ or $-1$. \citet{berclaz2011multiple} show that such flow balance constraint matrices satisfy total unimodularity. This allows us to obtain fully connected non-intersecting trajectories throughout the cardiac cycle. \par It is instructive to think of the effects of the constraints in terms of equivalent graph-based matching/tracking methods. For instance, imposing only $C_{out}$ results in shortest path tracking for each point. In addition to $C_{out}$, if we also impose $C_{in}$, such that the sum of incoming flows at a node is also constrained ($C_{in} f \leq 1$), we achieve a maximum bipartite match, with one-to-one correspondence, between point sets in consecutive time frames. Finally, if $C_{out}$ and $C_{bal}$ are both enforced, we obtain a complete maximum bipartite match, which results in one-to-one correspondence that extends through the entire cardiac cycle ($C_{in}$ gets implicitly imposed in this case). The effects of these constraints on the resulting trajectories are illustrated in figure \ref{fig:constraint_outcomes}(a)-\ref{fig:constraint_outcomes}(c). \begin{figure} \centering \subfloat[$C_{out}$ only: outgoing flows sum to $\leq 1$, leading to shortest paths tracking for individual points.]{\includegraphics[scale = .32]{images/test_pts_cout_only.png}} \qquad \subfloat[$C_{out}$ and $C_{in}$: outgoing and incoming flows sum to $\leq 1$ at all nodes, leading to frame-to-frame maximum bipartite matching.]{\includegraphics[scale = .32]{images/test_pts_cout_and_cin.png}} \\ \subfloat[$C_{out}$ and $C_{bal}$: outgoing and incoming flows sum to $\leq 1$ and are also equal at all nodes, leading to total maximum bipartite matching.]{\includegraphics[scale = .32]{images/test_pts_cout_and_cbal.png}} \qquad \subfloat[$C_{out}$, $C_{bal}$ and $C_{loop}$: additional equality (balance) constraints enforce a flow balance between the first and last frames, leading to closed-loop behavior as well.]{\includegraphics[scale = .32]{images/test_pts_trking_with_closed_loop.png}} \caption{Outcomes of applying different constraints on 1D+t point sets, with points stacked vertically in space and horizontally in time.} \label{fig:constraint_outcomes} \end{figure} \par Since our constraint matrices are very sparse, the LP can be solved very efficiently.
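For concreteness, the relaxed LP above can be written down in a few lines. The sketch below uses SciPy's \texttt{linprog} purely as a stand-in for the CVX/MOSEK setup described next, with \texttt{w}, \texttt{C\_out} and \texttt{C\_bal} assumed to come from the edge and constraint-matrix construction sketched earlier.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import vstack

# Illustrative sketch: maximize w'f subject to f >= 0, C_out f <= 1,
# C_bal f <= 0, using SciPy instead of CVX/MOSEK.
def solve_flow(w, C_out, C_bal):
    A_ub = vstack([C_out, C_bal])
    b_ub = np.concatenate([np.ones(C_out.shape[0]), np.zeros(C_bal.shape[0])])
    res = linprog(c=-np.asarray(w, dtype=float),  # linprog minimizes, so negate
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=(0, None),               # f >= 0
                  method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    # with a totally unimodular constraint matrix the optimum is (near-)binary
    return np.rint(res.x).astype(int)
\end{verbatim}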
We used the CVX package (in MATLAB) for specifying the LP and other optimizations in this work \citep{cvx, gb08}. CVX was used in conjunction with the MOSEK solver for solving the LP, which uses the interior-point method \citep{mosek}. \subsubsection{Constraint Matrices} \begin{figure} \centering \includegraphics[scale = .46]{images/cout_balance_edges.png} \caption{Incoming and outgoing edges at a node.} \label{fig:out_balance_fig} \end{figure} At a node associated with $x_i^t$, let $e_1, e_2, e_3$ be the incoming edges and let $e_{11}, e_{12}, e_{13}$ be the outgoing edges (in vectorized form). Fig \ref{fig:out_balance_fig} illustrates this relationship, and the equation corresponding to a row of the $C_{bal}$ matrix takes the following form: \begin{equation} \begin{bmatrix} -1 & -1 & -1 & \ldots & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ \vdots \\ f_{11} \\ f_{12} \\ f_{13} \end{bmatrix} \leq 0. \label{eqn:cbal_eqn} \end{equation} Similarly, $C_{out}$ has $1$'s where $C_{bal}$ also has $1$ and zeros elsewhere. Conversely, $C_{in}$ has $1$'s where $C_{bal}$ has $-1$. \par Fig \ref{fig:constraint_outcomes} displays how imposing $C_{out}$ alone, $C_{out}$ with $C_{in}$, and $C_{out}$ with $C_{bal}$ affects tracking outcomes in a toy $1D+t$ problem. The points are stacked vertically for each time step (total of $5$). We can clearly see how just imposing $C_{out}$ leads to the qualitatively worst result in Fig \ref{fig:constraint_outcomes}(a). This is because there is nothing preventing two trajectories from merging and occupying the same nodes. \par We can also notice in figure \ref{fig:constraint_outcomes}(c) that, as $C_{bal}$ is imposed strictly, one-to-one point correspondence leads to non-overlapping complete trajectories. However, nodes in the top areas in figure \ref{fig:constraint_outcomes}(c) have no trajectories passing through. In an earlier implementation of this algorithm, we attempted to remedy situations like this by relaxing this strict balance constraint \citep{parajuli2017flow}. In addition to that, we also added spatiotemporal smoothness constraints. However, in this work, we impose $C_{bal}$ strictly and develop a different regularization strategy suited to cardiac motion. We discuss this strategy next. \subsubsection{Imposing Loop Constraints} \begin{figure} \centering \includegraphics[scale=.46]{images/periodicity_loops.png} \caption{A simple flow network displaying the additional loop edges between the last frame and the first (not all shown), which help us obtain closed-loop trajectories. The source node and edges are also shown.} \label{fig:closed_looped} \end{figure} A simple extension to the algorithm described so far encourages trajectories to end up in close proximity to their start location. This is done by adding one more set of edges (and constraints) between the points in the last frame and the first frame (see figure \ref{fig:closed_looped}). First, we impose the following constraint between the flow from the source node $x_{src}$ (let the flow be $f^{x_{src}}_i$) and the flow from the first frame to the second ($f^1_{ij}$): \begin{equation} \label{eqn:loop_constraint_1} \forall i \in [1:N(1)], \qquad f^{x_{src}}_i = \sum_{j \in \eta(1, i)} f^1_{ij}. \end{equation} Here, since each node in frame $1$ is connected directly to $x_{src}$, we are saying that if there is a flow into that node from the source, there has to be a flow out of the node going into frame $2$ as well.
Next, we impose a balance between flow from the last frame to the first frame ($f^T_{hi}$ via the loop edges) and flow from the first to the second ($f^1_{ij}$): \begin{equation} \label{eqn:loop_constraint_2} \forall i \in [1:N(1)], \qquad \sum_{h : i \in \eta(T, h)} f^T_{hi} = \sum_{j \in \eta(1, i)} f^1_{ij}. \end{equation} This is the key flow balance constraint that encourages trajectories to form closed loops. Since edges always exist between nodes that are close spatially, and we already have a mechanism for obtaining non-overlapping complete trajectories, this helps us ensure that our trajectories are roughly closed-looped. Finally, to make sure that the total flow leaving the source node is also equal to the total flow leaving the last frame (and going to the first implicitly), we also impose the following constraint: \begin{equation} \label{eqn:loop_constraint_3} \sum^{N(1)}_{i=1} f^{x_{src}}_i = \sum^{N(1)}_{i=1} \sum_{h : i \in \eta(T, h)} f^T_{hi}. \end{equation} \par We shall call these constraints $C_{loop}$; they are applied in the same way as $C_{bal}$ and are incorporated within it during the optimization (see equation \ref{eqn:strict_optim}). In figure \ref{fig:constraint_outcomes}(d), we can see how the loop constraint helps us recover trajectories whose end points lie in close proximity to their start points, as expected from the periodicity of the LV motion/displacement. \subsubsection{Outlier Handling} Because we are maximizing the flow through the network, our algorithm always solves for the maximum number of possible trajectories, even if they are of very low quality. Figure \ref{fig:bad_tracking_conceptual} displays such an example. The ideal match would have been point $1$ to point $4$ and point $3$ to point $6$, but instead the opposite has happened. We tackle this by probabilistic thresholding. All edges with weights below a certain threshold $P_{th}$ are omitted. An example is shown in figure \ref{fig:degenerate_example} with randomly generated points for illustration. \begin{figure} \centering \includegraphics[scale=.32]{images/degenerate_balanced_and_closed.png} \caption{An unlikely but valid scenario where the balance and closed-loop constraints are satisfied but the result is qualitatively poor.} \label{fig:bad_tracking_conceptual} \end{figure} \begin{figure} \centering \subfloat[Trajectories computed from random $1D+t$ data.]{\includegraphics[scale = .28]{images/degenerate_no_thresh.png}} \qquad \subfloat[Trajectories computed from random $1D+t$ data after thresholding.]{\includegraphics[scale = .28]{images/degenerate_thresh.png}} \caption{Outcome of applying thresholding (of $P_{th} = .3$) on the edge weights.} \label{fig:degenerate_example} \end{figure} \subsubsection{Edge Weight Calculation} Each edge $e^t_{ij}$ has the following weight: \begin{equation} \label{eqn:edge_wt} w^t_{ij} = \exp\left( \frac{-\|x^t_i - x^{t+1}_{j}\|^2}{2\sigma^2_x} \right) \exp\left( \frac{-\| F(x^t_i) - F(x^{t+1}_{j}) \|^2}{2\sigma^2_{f}} \right) \end{equation} \par $F$ can be any shape or appearance-based feature associated with $x^t_i$ and $x^{t+1}_j$. $\sigma_x$ and $\sigma_{f}$ are normalization constants and are calculated using the standard deviations of Euclidean and feature distances for each image frame.
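A small sketch of this edge-weight computation is given below; the variable names are our own, hypothetical choices, and for brevity the sketch uses global rather than per-frame standard deviations.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the edge weights in equation (eqn:edge_wt): a product
# of Gaussian kernels on the spatial and feature distances of each edge.
def edge_weights(edges, points, feats):
    d_x = np.array([np.linalg.norm(points[t][i] - points[t + 1][j])
                    for t, i, j in edges])
    d_f = np.array([np.linalg.norm(feats[t][i] - feats[t + 1][j])
                    for t, i, j in edges])
    sigma_x, sigma_f = d_x.std(), d_f.std()   # per-frame in the actual method
    return (np.exp(-d_x ** 2 / (2 * sigma_x ** 2)) *
            np.exp(-d_f ** 2 / (2 * sigma_f ** 2)))
\end{verbatim}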
\subsection{Neural Network Based Appearance Features} \par In contrast to our previous work \citep{parajuli2017flow}, we use a convolutional autoencoder to learn appearance features because we do not have ground truth trajectories for \textit{in vivo} data. Convolutional autoencoders are known to produce state-of-the-art solutions in unsupervised learning problems. The structure of the network is shown in figure \ref{fig:unsupervised_network} and has the standard encoder-decoder format. We use Euclidean distance to quantify the similarity between two learned embeddings. The learned embedding $F$ is used to calculate edge weights in equation \ref{eqn:edge_wt}. \begin{figure} \centering \includegraphics[scale = .28]{images/autoencoder_embedding.png} \caption{Autoencoder network: a low-dimensional embedding of image patches that captures image statistics is learned.} \label{fig:unsupervised_network} \end{figure} \subsection{Dense Field Generation} \par We use radial basis functions (RBFs) as interpolants and calculate dense displacement fields, which is necessary for generating dense motion trajectories and Lagrangian strains, which help us assess myocardial function. This is based on our previous work in \citet{parajuli2015sparsity} and that of \citet{compas2014radial}. \par We regularize our motion field ($U$) by imposing sparsity on the weights associated with the basis functions representing the motion field, and by adding a term that accounts for tissue incompressibility. We do so under the assumption that the cardiac tissue is roughly incompressible and that, therefore, the motion vector field should be roughly divergence-free (i.e., $\nabla \cdot U = 0$) \citep{song1991computation}. Finally, we also mildly penalize the norm of the spatial derivatives ($\nabla U$) to discourage jumps and discontinuities in the motion fields. We use compactly supported basis functions, which, unlike other popular choices such as Gaussian and thin-plate-spline RBFs, result in a sparse basis matrix. Sparse matrices are more conducive to numerical optimization \citep{compas2014radial, wendland1995piecewise}. Once dense displacements are obtained, Lagrangian strain is calculated using the method described in \citet{yan2007boundary}. \section{Introduction} \par Cardiovascular diseases (CVDs) were the leading cause of death in the world in 2013 according to the AHA, as stated in the Heart Disease and Stroke Statistics 2017 update \citep{benjamin2017heart}. Among CVDs, ischemic heart diseases, which occur as a result of the formation of atherosclerotic plaques in the coronary arteries, are the most common. This results in a narrowing of the arteries that can lead to an inadequate supply of blood to the left ventricle (myocardial ischemia). A sudden blockage of the arteries, for instance due to plaque rupture, can lead to irreversible tissue damage (myocardial infarction) and ultimately heart failure, which can be fatal. \par Analysis of the left ventricle (LV), which is the main pumping chamber of the heart, can provide invaluable insights into cardiac health. An ischemic event in the ventricle manifests as a reduction of the contractility of the LV wall muscle (myocardium). However, these wall motion abnormalities can be localized to a specific area of the LV and are therefore difficult to detect. Global measures of left ventricular function, such as ejection fraction (EF), are often not sensitive enough to detect these changes and do not provide important information on the location of the dysfunctional tissue.
The 2013 ACC/AHA guideline on heart failure states that approximately $50\%$ of heart failure cases present with preserved ejection fraction \citep{yancy20132013}. A significant proportion of these cases are ischemic diseases with wall motion abnormalities. Therefore, it is crucial that a more local and informative measure be developed and adopted. \par Visual wall motion scoring is a popular clinical technique for assessing such local deformation behavior and has been shown to be more predictive of clinical outcomes than EF \citep{galasko2001prospective, eek2010strain}. Stress imaging based wall motion scoring is also of interest and has been shown to be of great utility in identifying and stratifying risk factors associated with mortality \citep{yao2003practical}. However, visual wall motion scoring is prone to a high level of uncertainty because it is a semi-quantitative, subjective metric; hence, interobserver variability has been shown to be high in a study involving multiple imaging modalities \citep{hoffmann2006analysis}. In this context, Lagrangian strain analysis has emerged as a viable method for wall motion quantification and can assist in the detection and diagnosis of disease, as well as in tracking the therapy, recovery, and management process \citep{pellerin2003tissue, gotte2006myocardial}. \par Among the popular imaging modalities that could be useful for assessing myocardial strain, such as magnetic resonance imaging (MRI, e.g., \citet{le2017sparse, papademetris2002estimation}), computed tomography (CT, e.g., \citet{cury2008comprehensive, sugeng2006quantitative}) and echocardiography (echo, e.g., \citet{compas2014radial, heyde2013elastic}), echo is of special interest because of its affordability, portability, and higher frame rates compared to MR and CT. However, the trade-off is that ultrasound imaging is prone to artifacts such as bone shadows, attenuation, signal dropouts, and incomplete geometry due to an unfavorable imaging angle or location. These issues call for robust image analysis algorithms and processing pipelines. \par In this work we present a novel method aimed at improving upon prior efforts to quantify LV motion \citep{ledesma2005spatio, de2012temporal, compas2014radial} by capturing the myocardial dynamics with a fully spatiotemporal model. A spatiotemporal viewpoint is consistent with the manner in which clinical readings of echo are performed - as a movie rather than as still frames. Most cardiac motion analysis methods perform frame-to-frame displacement estimation and combine the series of deformations to obtain Lagrangian displacements. Uncertainties and ambiguities that arise at each step in time get compounded and propagated as frame-to-frame estimates are aggregated. Therefore, significant drift can occur while tracking voxels through the cardiac cycle, particularly past the systolic phase. Another aspect of cardiac motion that is ignored by most methods is the periodicity of the deformation estimates over the cardiac cycle. \par Therefore, in this work, we propose a method where the motion model accounts for global spatiotemporal consistency and correspondence as well as periodicity. We build a graphical network where myocardial surface points are set up as nodes, and each node is connected via edges to a few nodes that are its candidate matches in the next time frame. The edges are associated with weights that capture the likelihood of a particular match.
The flow $f$ through the network - a binary variable that captures whether or not a particular match amongst the candidates was chosen - is then solved for via optimization, subject to a variety of constraints. \par We previously reported a preliminary version of flow network tracking (FNT) in \citet{parajuli2017flow}. In that work, we introduced the graphical network model for motion analysis. Here, we expand that model further to get binary-valued flow solutions in order to obtain non-overlapping and complete motion trajectories through the entire cardiac cycle. Instead of defining edge relationships by a nearest neighbor search using spatial distance, we do this by feature distance. Furthermore, by introducing additional constraints in the optimization, we are now able to encourage trajectories to form closed loops and thereby model the periodic aspect of cardiac motion. Also, in our previous effort, we used a supervised learning based Siamese network for feature learning. While that performed well with synthetic data, it was not easy to use a similar strategy with \textit{in vivo} data due to a lack of training samples. Therefore, in this work, we use an unsupervised method involving convolutional autoencoders to derive features. \par We validate the application of our FNT shape tracking method on 8 synthetic 3D+t ultrasound image sequences developed by \citet{alessandrini2015pipeline} and on 8 open-chested canines imaged at baseline, after coronary occlusion, and with dobutamine stress (24 studies in total). Validation is done by comparing with strains obtained from implanted sonomicrometric crystals in the LV. Sonomicrometry derived strains were available for 7 baseline canine studies and 5 canine studies during ischemia and dobutamine stress. We perform a correlation analysis to compare echocardiography based strains and crystal based strains. \subsection{\textit{In vivo} Data} \subsubsection{Echocardiography Data} We applied our method and explored physiological variations in the heart by analyzing \textit{in vivo} canine 4D (3D+t) echocardiography (4DE) data from 8 canine studies. The imaging was done on anesthetized open chested animals with a transducer suspended in a water bath. The animals were imaged in a baseline condition (BL), an ischemic condition (high occlusion, HO) induced by occluding the left anterior descending artery (LAD), and a stress condition (HODOB) induced by infusing dobutamine at a low dosage ($5\,\mu g/kg/min$) in the presence of LAD ischemia. These conditions were tested due to our interest in ultimately developing an automated analysis of rest-stress images. Echocardiographic images were available for all 8 studies in all conditions, but sonomicrometry data were available for only 7 studies at BL and 5 studies during HO and HODOB. \par A Philips iE33 ultrasound system (Philips Medical Systems, Andover, MA), with the X7-2 phased array transducer and a hardware attachment that provided RF data, was used for acquisition. The imaging frame rate ranged from 50 to 60 fps, which typically gave us 20-30 volumes per 4D image sequence. All experiments were conducted in compliance with the Institutional Animal Care and Use Committee policies. \begin{table} \centering \caption{Different physiological conditions of imaging (\textit{in vivo} data).} \begin{tabular}{|p{2.7cm}|p{8cm}|} \hline \textbf{Condition} & \textbf{Description} \\ \hline BL & Baseline. \\ \hline HO & High LAD occlusion. \\ \hline HODOB & High LAD occlusion with low dobutamine stress.
\\ \hline \end{tabular} \label{table:different_conditions} \end{table} \begin{figure} \centering \includegraphics[scale=.29]{results/bmode_images_and_segmentation.png} \caption{Example of \textit{in vivo} images and segmentation contours in one image sequence. $I_1$, $I_2$ and $I_3$ are three images in the systolic cycle.} \label{fig:in_vivo_example} \end{figure} \par Once these images were acquired, they were segmented using a semi-automated scheme. Endocardial and epicardial surfaces were manually traced for the first frame of all images (see figure \ref{fig:in_vivo_example}). Then we used a dictionary learning based level set algorithm to propagate these surfaces through the cardiac cycle \citep{huang2014contour}. The FNT algorithm was then applied to these data with some adjustments. Since the extent of the LV captured by imaging differs between sequences in the long axis, $Z_{fr}$ is set as $Z_{fr} = \max(25, \text{total number of z slices available})$. $\theta_{fr}$ is then set as $\theta_{fr} = Z_{fr}/1.3$ since $1.3$ was the ratio of the best ($Z_{fr}$, $\theta_{fr}$) combination ($40, 30$) for the synthetic data. We used the unsupervised learning derived feature here as well, by training an autoencoder with \textit{in vivo} data. Other parameters were the same - $NK = 3$, $P_{th} = .5$. A radial basis function based interpolation method was used after FNT tracking, and Lagrangian strains were calculated based on the techniques outlined in \citet{yan2007boundary}. \par We compared these strains with the ones obtained from sonomicrometric crystals implanted close to the mid-anterior LV wall during the same imaging studies as described above. We focused on analyzing whether the trends were consistent, using correlation analysis. We describe the sonomicrometric crystal processing and calculations next. \subsubsection{Sonomicrometer Data} We used sonomicrometric transducer crystals (crystals), a recording instrument, and the processing software \textit{SonoSoft} and \textit{SonoView} (Sonometrics Corporation, London, Ontario, Canada) to acquire and process the crystal signals. We implanted 19 crystals in the heart: 16 in the targeted areas of the left ventricle, 2 at the base in anterior and posterior locations, and 1 at the true apex as a reference for the cardiac axis. Three additional crystals were placed at the edges of the transducer surface to align the crystals in echocardiographic LV coordinates. \par The 16 crystals were arranged in such a way that they formed three adjacent cuboidal arrays. The three cubes were roughly located in (i) the ischemic region of the LV (ISC), caused by the aforementioned LAD occlusion, (ii) the remote region (away from the ischemic area), and (iii) the border region between the two, as shown in figure \ref{fig:crystal_placement} (see also table \ref{table:cube_names}). \begin{figure} \centering \subfloat[Crystals arranged in 3 cuboidal lattices.]{\includegraphics[scale = .17]{results/all_crystals.png}} \qquad \qquad \qquad \subfloat[Crystal alignment in the LV.]{\includegraphics[scale = .35]{results/crystals_in_heart.png}} \caption{Crystals and their relative positions in the LV.} \label{fig:crystal_placement} \end{figure} \begin{table} \centering \caption{Cubical areas of crystal placement.} \begin{tabular}{|c|c|} \hline \textbf{Area} & \textbf{Description} \\ \hline \textbf{ISC} & Ischemic area. \\ \hline \textbf{BOR} & Borderline area between ischemic and remote. \\ \hline \textbf{REM} & Remote area.
\\ \hline \end{tabular} \label{table:cube_names} \end{table} \par We adapted the 2D sonomicrometry-based strain calculation method outlined in \citet{waldman1985transmural} for 3D. We calculated radial, circumferential and longitudinal strains using the apical and basal crystals that help define the cardiac geometry. We could then compare these strains with echocardiography (echo) based strains after aligning them in the LV coordinate system using rigid registration. \subsubsection{Changes across Physiological Conditions} \par We continued the rest of our analysis using just the FNT method. First, we explored how strains change from BL to HO and then to HODOB. We were primarily interested in exploring whether the patterns of change differed between the ischemic and non-ischemic areas. The ischemic (ISC) and non-ischemic areas were defined by the location of the crystals that defined the 3 cubic regions ISC, BOR and REM (see figure \ref{fig:crystal_placement}). Figures \ref{fig:dsea16_BL_comp_w_crystal}, \ref{fig:dsea16_HO_comp_w_crystal} and \ref{fig:dsea16_HODOB_comp_w_crystal} show strains for the BL, HO and HODOB conditions, respectively, for one representative dataset. \begin{figure} \centering \subfloat[ISC]{\includegraphics[scale=.4]{results/dsea16_bl_echo_crys_strains_isc.png}} \\ \subfloat[BOR]{\includegraphics[scale=.4]{results/dsea16_bl_echo_crys_strains_bor.png}} \\ \subfloat[REM]{\includegraphics[scale=.4]{results/dsea16_bl_echo_crys_strains_rem.png}} \caption{FNT and crystal strains in the BL condition across the 3 cubic regions (ISC, BOR and REM, top to bottom) for one dataset. Radial (red), circumferential (cyan) and longitudinal (green) strains from left to right.} \label{fig:dsea16_BL_comp_w_crystal} \end{figure} \begin{table} \centering \caption{Median of peak \textbf{radial} strains for data across BL, HO and HODOB, also broken down by regions - ISC, BOR and REM. See figure \ref{fig:physio_response} for pictorial representation.} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Method} & \multicolumn{3}{|c|}{\textbf{BL} $(\%)$} & \multicolumn{3}{|c|}{\textbf{HO} $(\%)$} & \multicolumn{3}{|c|}{\textbf{HODOB} $(\%)$} \\ \hline \textbf{Crys} & \multicolumn{3}{|c|}{12.3 $\pm$ 9.7} & \multicolumn{3}{|c|}{11.6 $\pm$ 7.1} & \multicolumn{3}{|c|}{30.2 $\pm$ 15.0} \\ \hline \textbf{Echo} & \multicolumn{3}{|c|}{13.7 $\pm$ 7.3} & \multicolumn{3}{|c|}{11.5 $\pm$ 4.4} & \multicolumn{3}{|c|}{22.0 $\pm$ 16.1} \\ \hline & \textbf{ISC} & \textbf{BOR} & \textbf{REM} & \textbf{ISC} & \textbf{BOR} & \textbf{REM} & \textbf{ISC} & \textbf{BOR} & \textbf{REM} \\ \hline \textbf{Crys} & 12.3 & 11.8 & 20.6 & 8.9 & 11.6 & 16.2 & 29.5 & 23.8 & 34.6 \\ \hline \textbf{Echo} & 16.0 & 15.7 & 13.0 & 12.0 & 11.3 & 12.0 & 22.0 & 25.0 & 19.5 \\ \hline \end{tabular} \label{table:physio_response_vals_rad} \end{table} \par There was a decrease in overall strain magnitudes, across all regions, going from BL (figure \ref{fig:dsea16_BL_comp_w_crystal}) to HO (figure \ref{fig:dsea16_HO_comp_w_crystal}). There was also recovery in all regions going from HO (figure \ref{fig:dsea16_HO_comp_w_crystal}) to HODOB (figure \ref{fig:dsea16_HODOB_comp_w_crystal}).
\begin{figure} \centering \subfloat[ISC]{\includegraphics[scale=.4]{results/dsea16_ho_echo_crys_strains_isc.png}} \\ \subfloat[BOR]{\includegraphics[scale=.4]{results/dsea16_ho_echo_crys_strains_bor.png}} \\ \subfloat[REM]{\includegraphics[scale=.4]{results/dsea16_ho_echo_crys_strains_rem.png}} \caption{FNT and crystal strains in the HO condition across the 3 cubic regions (ISC, BOR and REM, top to bottom) for one dataset. Radial (red), circumferential (cyan) and longitudinal (green) strains from left to right.} \label{fig:dsea16_HO_comp_w_crystal} \end{figure} \begin{figure} \centering \subfloat[ISC]{\includegraphics[scale=.4]{results/dsea16_hodob_echo_crys_strains_isc.png}} \\ \subfloat[BOR]{\includegraphics[scale=.4]{results/dsea16_hodob_echo_crys_strains_bor.png}} \\ \subfloat[REM]{\includegraphics[scale=.4]{results/dsea16_hodob_echo_crys_strains_rem.png}} \caption{FNT and crystal strains in the HODOB condition across the 3 cubic regions (ISC, BOR and REM, top to bottom) for one dataset. Radial (red), circumferential (cyan) and longitudinal (green) strains from left to right.} \label{fig:dsea16_HODOB_comp_w_crystal} \end{figure} \par Figure \ref{fig:physio_response} shows the changes in peak radial, circumferential and longitudinal strains from BL to HO to HODOB (for 8 BL datasets and 5 HO and HODOB datasets). Median values across different groups are shown, along with the IQR. The trend of a decrease in peak strains from BL to HO and a subsequent increase from HO to HODOB was strongly shown by the radial strains. The corresponding radial strains can be found in table \ref{table:physio_response_vals_rad} as well. The change from HO to HODOB is more substantial for both crystal and FNT-based strains. There is a subtle difference in the magnitude of the decrease from BL to HO, and of the increase from HO to HODOB, of radial strains across the ISC, BOR and REM regions for the crystal-based strains. Such a difference was not found for the echo-based strains. \begin{figure} \centering \includegraphics[height=14cm, width=14cm]{results/boxplots_peak_vals_physiological_summary.png} \caption{Peak strain bar graphs (with median and IQR) for radial (top), circumferential (middle) and longitudinal (bottom) strains at BL, HO and HODOB - shown across ISC, BOR and REM regions for echo and crystal-based strains.} \label{fig:physio_response} \end{figure} \begin{table} \centering \caption{Median of peak \textbf{circumferential} strains for data across BL, HO and HODOB, also broken down by regions - ISC, BOR and REM.
See figure \ref{fig:physio_response} for pictorial representation.} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Method} & \multicolumn{3}{|c|}{\textbf{BL $(\%)$}} & \multicolumn{3}{|c|}{\textbf{HO $(\%)$}} & \multicolumn{3}{|c|}{\textbf{HODOB $(\%)$}} \\ \hline \textbf{Crys} & \multicolumn{3}{|c|}{-11.2 $\pm$ 2.3} & \multicolumn{3}{|c|}{-10.2 $\pm$ 7.1} & \multicolumn{3}{|c|}{-16.4 $\pm$ 3.7} \\ \hline \textbf{Echo} & \multicolumn{3}{|c|}{-7.0 $\pm$ 2.8} & \multicolumn{3}{|c|}{-8.3 $\pm$ 3.4} & \multicolumn{3}{|c|}{-10.8 $\pm$ 3.1} \\ \hline & \textbf{ISC} & \textbf{BOR} & \textbf{REM} & \textbf{ISC} & \textbf{BOR} & \textbf{REM} & \textbf{ISC} & \textbf{BOR} & \textbf{REM} \\ \hline \textbf{Crys} & -12.8 & -10.0 & -10.7 & -12.1 & -12.1 & -10.4 & -19.3 & -14.5 & -16.4 \\ \hline \textbf{Echo} & -7.4 & -6.8 & -7.0 & -8.3 & -8.7 & -7.4 & -10.8 & -9.6 & -12.1 \\ \hline \end{tabular} \label{table:physio_response_vals_circ} \end{table} \par Circumferential strain values across the regions are reported in table \ref{table:physio_response_vals_circ}. Both crystal-based and echo-based (FNT) circumferential strains did not display changes in absolute magnitudes across ISC, BOR and REM (see figure \ref{fig:physio_response} and table \ref{table:physio_response_vals_circ}). Changes in longitudinal strains are also shown in figure \ref{fig:physio_response} and values reported in table \ref{table:physio_response_vals_long}. The crystal-based strains show decreases in strain magnitudes from BL to HO and increases from HO to HODOB. The echo strain magnitudes were very small and therefore not very informative since the IQRs were fairly high. Just like radial strains, the crystal-based longitudinal strains also point to the existence of a subtle difference in the magnitude of decrease from BL to HO, and increase from HO to HODOB, across ISC, BOR and REM regions. \begin{table} \centering \caption{Median of peak \textbf{longitudinal} strains for data across BL, HO and HODOB, also broken down by regions - ISC, BOR and REM. See figure \ref{fig:physio_response} for pictorial representation.} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Method} & \multicolumn{3}{|c|}{\textbf{BL $(\%)$}} & \multicolumn{3}{|c|}{\textbf{HO $(\%)$}} & \multicolumn{3}{|c|}{\textbf{HODOB $(\%)$}} \\ \hline \textbf{Crys} & \multicolumn{3}{|c|}{-9.1 $\pm$ 2.93} & \multicolumn{3}{|c|}{-3.2 $\pm$ 2.9} & \multicolumn{3}{|c|}{-11.6 $\pm$ 6.9} \\ \hline \textbf{Echo} & \multicolumn{3}{|c|}{-5.1 $\pm$ 3.2} & \multicolumn{3}{|c|}{-3.0 $\pm$ 5.2 } & \multicolumn{3}{|c|}{-4.2 $\pm$ 3.0} \\ \hline & \textbf{ISC} & \textbf{REM} & \textbf{BOR} & \textbf{ISC} & \textbf{REM} & \textbf{BOR} & \textbf{ISC} & \textbf{REM} & \textbf{BOR} \\ \hline \textbf{Crys} & -8.1 & -9.8 & -9.4 & -2.6 & -5.4 & -4.9 & -11.3 & -14.1 & -13.3 \\ \hline \textbf{Echo} & -5.1 & -4.9 & -5.7 & -3.0 & -2.6 & -5.8 & -3.9 & -4.7 & -3.3 \\ \hline \end{tabular} \label{table:physio_response_vals_long} \end{table} \par The results suggest that the overall pattern of changes in the echo-based strain magnitudes, aggregated over the three functional regions (ischemic, border and normal), are consistent with our expectations. In BL condition, we expected normal heart function and strain values. During HO, which induces ischemia, we expected an overall decrease in the heart function and strain magnitudes (primarily in the ischemic region). From HO to HODOB, we expected a recovery of function (primarily in the non-ischemic regions). 
However, the echo-based strain results for the individual functional regions (ischemic, border and normal) were not distinctive enough. We expected there to be a greater decrease in strain magnitudes from BL to HO and a smaller increase in strain magnitudes from HO to HODOB in ischemic areas (and vice versa for non-ischemic areas). While we were able to observe this to some extent with the crystal-based strains, that was not the case for the echo-based strains. \par A source of uncertainty in this crystal-based analysis was the challenge involved in registering the crystals with the LV. While the positions of the transducer were available from the reference crystals in the transducer, there still remained the task of rotational alignment, which required some manual intervention. Furthermore, the transducer position crystals, which served as references, were themselves subject to noise and uncertainties. The ischemia that we induced was also possibly not severe enough to cause highly localized functional differences. Overall, even though the analysis of echo did not display highly localized sensitivity, the crystal strains did, which is a highly encouraging sign. \par \section{Experiments and Results} \input{results} \input{in_vivo} \input{conclusion} \clearpage \section*{References} \section{Related Work} \subsection{Speckle/Image-based Tracking Methods} \subsubsection{Non-rigid Registration} Non-rigid registration methods typically consist of a model where the motion (displacement) field is parametrized by smooth functions such as B-splines. \citet{ledesma2005spatio} applied such a model to 3D ultrasound sequences in a frame-to-frame manner. \citet{heyde2012three} proposed a 3D deformation model where the LV image is transformed from Cartesian coordinates to an anatomical, LV-shaped coordinate system. However, most methods of this class do not use a fully spatiotemporal motion model. The optimizations are highly non-convex and are prone to get stuck in local minima that can yield non-optimal solutions. \par Some work has been done towards addressing the spatiotemporal alignment issue. \citet{ledesma2005spatio} proposed a 3D+t B-spline spatiotemporal model that parameterized the Lagrangian motion of a point at end diastole (ED) through the cardiac cycle. However, their model does not explicitly capture any notion of global spatiotemporal correspondence and consistency. This is because their cost function accounts for the dissimilarity with the ED frame but not with any other frames, including the neighboring frames. Also, this method does not capture large deformations effectively. \citet{de2012temporal} proposed a 3D+t diffeomorphic map-based registration method, where a B-spline parameterization over the velocity field is used. While this explicitly models the notion of spatiotemporal correspondence, the concern with velocity-based parametrization, in general, is that it is prone to error accumulation as Lagrangian displacements are calculated by integrating the velocities. \subsubsection{Block Matching and Optical Flow} Block matching involves taking an image patch in one frame and searching for its best match within a spatial window in the next time frame. \citet{langeland2005experimental} implemented this on 2D echo RF (radio frequency) images. \citet{lubinski1999speckle} also implemented this on RF images and further refined displacement estimation in the beam direction using zero-crossings of the phase of the complex correlation function.
Optical flow methods assume that the intensity of a point in a moving image is consistent across time and that motion is responsible for temporal intensity variation. \citet{song1991computation} applied this to model cardiac motion in 3D CT images. These methods can be time-consuming due to a large search space and also lack a regularization term. \subsection{Shape Matching/Tracking Methods} Shape-based methods try to match shape/image descriptors derived from a point set. Pre-processing is necessary to generate the points either by simple edge/feature detection algorithms or by a more sophisticated segmentation algorithm. Post-processing is also generally required for smoothing and dense field generation as the solutions are sparse \citep{papademetris2002estimation}. \subsubsection{Frame-to-frame Matching} \citet{chui2003new} proposed a point matching algorithm that modeled deformation using non-rigid thin plate spline parameterization and used it to align point sets derived from brain imaging. The correspondences that map point sets are fuzzy (non-binary) initially and are refined iteratively to obtain one-to-one binary correspondences. \citet{belongie2000shape} introduced the shape context feature, which is more global than the local curvature, and used it to match point sets. They solve a weighted bipartite graph matching problem using the Hungarian algorithm to obtain one-to-one correspondences. \subsubsection{Temporal Tracking} We previously proposed a method that tracks individual points on myocardial surfaces through time \citep{parajuli2016integrated}. Points on the myocardial surfaces form nodes in a graph and edges exist between points and their spatial neighbors in the next time frame. The motion of an individual point is then posed as the shortest path through this graph. Our current work improves this by modeling the motion of all points on the surface together as opposed to individually. \citet{berclaz2011multiple} used a flow network structure to build a fully spatiotemporal model for an object tracking problem. We expand upon their work, as will be seen below, by providing a probabilistic mechanism of outlier handling and by accounting for periodic motion. Furthermore, while their work handles the uncertainty in the nodes of the graph, we handle uncertainty in the edges to solve for correspondences. \subsection{Post hoc Regularization Models} Methods lacking inherent regularization, or producing a sparse set of displacements as our method does, rely on post hoc regularization of the initial tracking results to produce smooth displacement fields. \citet{papademetris2002estimation} first estimated initial correspondences between myocardial surfaces using a shape matching approach. The initial estimation was then regularized by using a biomechanically inspired finite element method approach. \par \citet{compas2014radial} proposed the use of radial basis functions to generate smooth and dense displacements from the integration of sparse sets of shape and speckle tracking displacements. We expanded this strategy to impose further smoothness and biomechanical constraints on the displacement fields in \citet{parajuli2015sparsity}. \citet{lu2017learning} learned how to regularize noisy motion by training a neural network to filter noisy 4D Lagrangian displacement vector fields. \subsection{Synthetic Data} \subsubsection{Data Description} \par We used 8 synthetic 3D+t ultrasound image sequences developed by \citet{alessandrini2015pipeline}.
The dataset consisted of 3 categories of image sequences. The first consisted of just one normal sequence (normal). The second consisted of ischemic sequences with ischemia in the distal and proximal left anterior descending artery (LADDIST and LADPROX), right circumflex artery (RCA) and left circumflex artery (LCX). The third consisted of dilated myocardium sequences - one synchronous (SYNC) and two dyssynchronous, induced by left bundle branch block (LBBB and LBBBSMALL). Examples of the synthetic data - 2D slices from one image sequence at end diastole and corresponding contours - are given in figure \ref{fig:leuven_example}. \begin{figure} \centering \subfloat[Short-axis view]{\includegraphics[scale=.4]{results/leuven_normal_fr_1_view1.png}} \qquad \qquad \subfloat[Long-axis view]{\includegraphics[scale=.26]{results/leuven_normal_fr_1_view2.png}} \caption{Synthetic data image example with endocardial and epicardial contours (normal data).} \label{fig:leuven_example} \end{figure} \par We report tracking errors on 2250 myocardial mesh points, for which the positions through the sequence were provided as ground truth. These points were evenly distributed in the endocardium, epicardium and in the mid-wall. We calculated distances between ground truth mesh points and mesh points from our tracking algorithm by propagating the known first frame mesh points through time. The errors are summarized using median and interquartile range (IQR). Overall errors, aggregated over all time frames, are displayed separately from end diastolic (last frame) and ES errors because the number of data points is different. Analyzing errors at ES and ED is important as many clinical readings are made at these time points. \subsubsection{Parameter Selection for FNT} \par We vary $Z_{fr}$, $\theta_{fr}$ and $NK$, which control the level of sampling of our surface masks and graph neighbor assignment (see table \ref{table:param_consider} for description). We adopted a cylindrical sampling strategy where we sample uniformly along the $z$ axis and along the circumference (see figure \ref{fig:angular_sampling}). We present the median tracking errors (MTE) under different combination of these parameters on the normal data in table \ref{table:changing_z_theta}. The combination of $Z_{fr} = 40$, $\theta_{fr} = 30$ and $NK = 3$ provided the lowest overall MTE on the normal data. \begin{table} \centering \caption{Parameters that were tuned for the FNT algorithm.} \begin{tabular}{|p{2cm}|p{7cm}|} \hline \textbf{Name} & \textbf{Description} \\ \hline $NK$ & Number of nearest neighbors (by feature distance) in consideration for next frame. \\ \hline $Z_{fr}$ & Number of slices sampled in the long ($z$) axis per frame. \\ \hline $\theta _{fr}$ & Angular sampling along the circumference. (roughly along the short axis.)\\ \hline $P_{th}$ & Probabilistic threshold for outlier edges removal. \\ \hline \end{tabular} \label{table:param_consider} \end{table} \begin{figure} \centering \subfloat[Along the circumference of the surfaces, $\theta_{fr}$ points are uniformly sampled in terms of angle.]{\includegraphics[scale=.28]{images/angular_sampling}} \qquad \qquad \subfloat[Along the long (z) axis of the surface, $Z_{fr}$ points are uniformly sampled. 
]{\includegraphics[scale=.28]{images/slice_sampling}} \caption{Sampling scheme.} \label{fig:angular_sampling} \end{figure} \begin{table} \centering \caption{Result of changing $Z_{fr}$, $\theta_{fr}$ and $NK$ on the normal data and median square errors (MTE).} \begin{tabular}{|c|c|c|c|c|c|} \hline $\mathbf{Z_{fr}}$ & $\mathbf{\theta_{fr}}$ & $\mathbf{NK}$ & \textbf{Overall/mm} & \textbf{ES/mm} & \textbf{ED/mm}\\ \hline 30 & 15 & 3 & 0.96 $\pm$ 0.74 & 1.13 $\pm$ 0.75 & 1.01 $\pm$ 0.82 \\ \hline 30 & 30 & 3 & 0.94 $\pm$ 0.71 & 1.25 $\pm$ 0.87 & 0.77 $\pm$ 0.56 \\ \hline 30 & 40 & 3 & 0.95 $\pm$ 0.73 & 1.30 $\pm$ 0.87 & 0.82 $\pm$ 0.53 \\ \hline 30 & 15 & 5 & 0.90 $\pm$ 0.71 & \textbf{1.09 $\pm$ 0.68} & 0.90 $\pm$ 0.79 \\ \hline 30 & 30 & 5 & 0.97 $\pm$ 0.73 & 1.24 $\pm$ 0.83 & 0.98 $\pm$ 0.77 \\ \hline 30 & 40 & 5 & 0.93 $\pm$ 0.70 & 1.20 $\pm$ 0.80 & 0.90 $\pm$ 0.65 \\ \hline \hline % 40 & 15 & 3 & 0.90 $\pm$ 0.68 & 1.17 $\pm$ 0.76 & 0.78 $\pm$ 0.66 \\ \hline 40 & 30 & 3 & \textbf{0.86 $\pm$ 0.65} & 1.09 $\pm$ 0.73 & 0.82 $\pm$ 0.58 \\ \hline 40 & 40 & 3 & 0.90 $\pm$ 0.66 & 1.21 $\pm$ 0.73 & 0.77 $\pm$ 0.53 \\ \hline 40 & 15 & 5 & 0.91 $\pm$ 0.66 & 1.11 $\pm$ 0.71 & 0.87 $\pm$ 0.64 \\ \hline 40 & 30 & 5 & 0.95 $\pm$ 0.74 & 1.24 $\pm$ 0.75 & 0.87 $\pm$ 0.75 \\ \hline 40 & 40 & 5 & 0.95 $\pm$ 0.71 & 1.16 $\pm$ 0.73 & 0.92 $\pm$ 0.79 \\ \hline \hline % 50 & 15 & 3 & 0.96 $\pm$ 0.75 & 1.15 $\pm$ 0.72 & 1.02 $\pm$ 0.84 \\ \hline 50 & 30 & 3 & 0.93 $\pm$ 0.74 & 1.21 $\pm$ 0.79 & 0.86 $\pm$ 0.78 \\ \hline 50 & 40 & 3 & 0.91 $\pm$ 0.72 & 1.09 $\pm$ 0.72 & 0.80 $\pm$ 0.72 \\ \hline 50 & 15 & 5 & 0.90 $\pm$ 0.67 & 1.13 $\pm$ 0.82 & 0.86 $\pm$ 0.59 \\ \hline 50 & 30 & 5 & 0.88 $\pm$ 0.73 & 1.31 $\pm$ 0.75 & \textbf{0.75 $\pm$ 0.67} \\ \hline 50 & 40 & 5 & 0.91 $\pm$ 0.75 & 1.34 $\pm$ 0.80 & 0.83 $\pm$ 0.66 \\ \hline \end{tabular} \label{table:changing_z_theta} \end{table} \par We examined how performance changed as we changed $P_{th}$ (see table \ref{table:param_consider} for description). Each edge in our flow network is associated with a probabilistic weight that represents how likely that edge transition is. Dropping edges whose weights are below $P_{th}$ from consideration is a method of handling outliers. The results are shown in table \ref{table:changing_k_p}. MTE values were typically the lowest for $P_{th} = .5$. With high $P_{th}$, low-quality edges get removed and the resulting trajectories are better probabilistically. \begin{table} \centering \caption{Outcome of changing $P_{th}$ on the normal data and MTE.} \begin{tabular}{|c|c|c|c|} \hline $\mathbf{P_{th}}$ & \textbf{Overall/mm} & \textbf{ES/mm} & \textbf{ED/mm} \\ \hline 0.1 & 0.92 $\pm$ 0.75 & 1.27 $\pm$ 0.79 & 0.83 $\pm$ 0.55 \\ \hline 0.3 & 0.90 $\pm$ 0.70 & 1.24 $\pm$ 0.74 & \textbf{0.80 $\pm$ 0.49} \\ \hline 0.5 & \textbf{0.87 $\pm$ 0.63} & 1.23 $\pm$ 0.74 & 0.81 $\pm$ 0.51 \\ \hline \end{tabular} \label{table:changing_k_p} \end{table} \subsubsection{Effect of Different Constraints} Next, we see how the algorithm performs under different constraints. We start by applying $C_{out}$ only, then $C_{in}$, $C_{bal}$ and $C_{loop}$ in an incremental fashion. $C_{out}$ does not enforce any one-to-one correspondence constraint. It is equivalent to tracking all points independently using a shortest path formulation. $C_{out}$ and $C_{in}$ together enforce one-to-one correspondence at a frame-to-frame level. $C_{out}$ and $C_{bal}$ together enforce one-to-one correspondence throughout the cardiac cycle. 
Finally, applying $C_{loop}$ in addition to $C_{out}$ and $C_{bal}$ also enforces a balance between the first and last frames, thereby encouraging trajectories to start and end at nearby positions. The findings are summarized in table \ref{table:different_constraint_setting} and figure \ref{fig:mse_boxplot_w_wo_loop}. \par Not surprisingly, there is an incremental improvement with each additional constraint. The improvement from adding the loop constraint is especially significant. This validates our intuition that accounting for the cyclical nature of cardiac motion is necessary. \begin{figure} \centering \subfloat[Overall MTE for all data.]{\includegraphics[scale=.30]{results/Leuven_overall_mse_diff_constraints.png}} \qquad \subfloat[MTE for ES and ED for all data.]{\includegraphics[scale=.30]{results/Leuven_ED_ES_mse_diff_constraints.png}} \caption{MTE for all data, for different constraint settings. $C_{in}$, $C_{bal}$ and $C_{loop}$ were added incrementally.} \label{fig:mse_boxplot_w_wo_loop} \end{figure} \begin{table} \centering \caption{Effect of changing the constraints applied to the optimization on MTE.} \begin{tabular}{|c|c|c|c|} \hline \textbf{Constraint configuration} & \textbf{Overall/mm} & \textbf{ES/mm} & \textbf{ED/mm} \\ \hline $C_{out}$ only & 1.22 $\pm$ 1.15 & 1.50 $\pm$ 1.19 & 1.24 $\pm$ 1.25 \\ \hline $C_{out}$ and $C_{in}$ & 1.17 $\pm$ 1.04 & 1.39 $\pm$ 1.16 & 1.19 $\pm$ 1.02 \\ \hline $C_{out}$ and $C_{bal}$ & 1.09 $\pm$ 0.92 & 1.24 $\pm$ 0.93 & 1.23 $\pm$ 1.03 \\ \hline $C_{out}$, $C_{bal}$ and $C_{loop}$ & \textbf{0.84 $\pm$ 0.68} & \textbf{1.13 $\pm$ 0.73} & \textbf{0.72 $\pm$ 0.56} \\ \hline \end{tabular} \label{table:different_constraint_setting} \end{table} \subsubsection{Comparing Features} \par We also compared the performance of the learned (unsupervised) feature against other features (and/or metrics). A comparison to the shape context feature \citep{belongie2000shape}, a gradient histogram feature and an intensity cross-correlation metric-based approach is shown in table \ref{table:shpvslrnd} and figure \ref{fig:mse_boxplot_w_sh_ctxt_and_lrnd_feats}. \begin{table} \centering \caption{MTE for different feature generation methods.} \begin{tabular}{|c|c|c|c|} \hline \textbf{Feature} & \textbf{Overall/mm} & \textbf{ES/mm} & \textbf{ED/mm} \\ \hline Shape context & 1.19 $\pm$ 0.99 & 1.41 $\pm$ 1.15 & 1.19 $\pm$ 0.91 \\ \hline Gradient histograms & 1.20 $\pm$ 1.03 & 1.47 $\pm$ 1.16 & 1.18 $\pm$ 1.00 \\ \hline Intensity Cross-correlation & 0.98 $\pm$ 0.81 & 1.24 $\pm$ 0.91 & 0.86 $\pm$ 0.72 \\ \hline Learned feature (Autoencoder) & \textbf{0.84 $\pm$ 0.68} & \textbf{1.13 $\pm$ 0.73} & \textbf{0.72 $\pm$ 0.56} \\ \hline \end{tabular} \label{table:shpvslrnd} \end{table} \par The learned feature using an autoencoder provides better tracking results in comparison to the other features generated using shape and image information. Interestingly, the cross-correlation of intensity patches also performed relatively well. This is likely because speckle de-correlation is not substantial across time in this synthetic dataset. \begin{figure} \centering \subfloat[Overall MTE for all data.]{\includegraphics[scale=.32]{results/Leuven_overall_feat_comp.png}} \qquad \subfloat[MTE for ES and ED for all data.]{\includegraphics[scale=.32]{results/Leuven_ED_ES_feat_comp.png}} \caption{MTE for all data, comparing different features.
The same tracking method (FNT) was used for all of these.} \label{fig:mse_boxplot_w_sh_ctxt_and_lrnd_feats} \end{figure} \subsubsection{Comparing Methods} \par We compared the FNT method against other point tracking methods. Table \ref{table:fnt_vs_others} and figure \ref{fig:mse_boxplot_w_fnt_vs_others} summarize the findings. For shape context matching \citep{belongie2000shape}, we had to use a lower sampling rate ($Z_{fr} = 20$, $\theta_{fr} = 10$) because the algorithm took over an hour to run per frame and therefore was not tractable at a higher sampling rate. The Dynamic Shape Tracking (DST) algorithm was run with the same settings as the FNT algorithm ($Z_{fr} = 40$, $\theta_{fr} = 30$ and $NK=3$). A free-form deformation (FFD, \citet{rueckert1999nonrigid}) implementation available from the Bioimagesuite package \citep{bioimagesuite} was also applied to our data for reference. FFD was applied by registering all frames to the next frame in the sequence (FFD fr-to-fr) and also by registering all frames to the first frame (FFD fr-to-ED). \begin{table} \centering \caption{MTE for different tracking methods.} \begin{tabular}{|c|c|c|c|} \hline \textbf{Method} & \textbf{Overall/mm} & \textbf{ES/mm} & \textbf{ED/mm} \\ \hline Shape Context Matching & 1.60 $\pm$ 1.22 & 2.03 $\pm$ 1.35 & 1.52 $\pm$ 1.01 \\ \hline DST & 1.22 $\pm$ 1.11 & 1.54 $\pm$ 1.27 & 1.21 $\pm$ 1.12 \\ \hline FFD fr-to-fr & 1.30 $\pm$ 1.05 & 1.41 $\pm$ 0.94 & 1.64 $\pm$ 1.28 \\ \hline FNT & 0.84 $\pm$ 0.68 & \textbf{1.13 $\pm$ 0.73} & 0.72 $\pm$ 0.56 \\ \hline FFD fr-to-ED & \textbf{0.83 $\pm$ 0.80} & 1.37 $\pm$ 1.08 & \textbf{0.56 $\pm$ 0.50} \\ \hline \end{tabular} \label{table:fnt_vs_others} \end{table} \begin{figure} \centering \subfloat[Overall MTE for all data.]{\includegraphics[scale=.34]{results/Leuven_overall_mse_fnt_vs_others.png}} \qquad \subfloat[MTE for ES and ED for all data.]{\includegraphics[scale=.34]{results/Leuven_ED_ES_mse_fnt_vs_others.png}} \caption{MTE for all data, comparing different methods.} \label{fig:mse_boxplot_w_fnt_vs_others} \end{figure} \par We first note that shape context matching and FFD applied frame-to-frame seem to provide the worst results overall. Since these methods did not have any temporal aspect to them, their tracking degrades substantially post systole, which is evident from the high ED errors. The DST method performs only slightly better overall, as it does not track all points together, but it provides improved tracking post systole and therefore results in lower ED errors. The frame-to-ED FFD, where images are registered directly to the first frame, and FNT seem to provide similar results overall. However, FNT is better at ES. This is because the deformation from ED to ES is large, and the FFD algorithm was most likely not able to account for it. Good performance at ES is crucial since peak strains typically occur around ES and are widely evaluated and reported clinically as a measure of function. \subsubsection{Regional Strain Analysis} Since we are ultimately interested in detecting regional (within the LV) changes in strain values in order to isolate areas with injury, we also test whether the strain values we calculate can help us do this. Radial strain curves for the normal dataset, obtained using the FNT method and ground truth positions, are shown for basal, mid and apical areas of the LV in figure \ref{fig:leuven_strain_normal_curves}.
Basal and mid areas are divided into 6 sectors - anterior (Ant), antero-septal (Ant-Sept), infero-septal (Inf-Sept), inferior (Inf), infero-lateral (Inf-Lat) and antero-lateral (Ant-Lat) sectors. Apical area is divided into 4 sectors - anterior (Ant), septal (Sept), inferior (Inf) and lateral (Lat) sectors. For the normal data, we can see that strain values do not differ significantly across different sectors. Compared to the ground truth, our method seems to underestimate peak radial strain values as evident from figure \ref{fig:leuven_strain_normal_curves}. However, the broader trends seem to be consistent with the ground truth strain values. \begin{figure} \centering \subfloat[Normal (FNT)]{\includegraphics[width=12cm, height=3.5cm]{results/strains/leuven_normal_rad_strains.png}} \\ \subfloat[Normal (Ground Truth)]{\includegraphics[width=12cm, height=3.5cm]{results/strains/leuven_normal_rad_strains_gt.png}} \\ \caption{Radial strain curves in the basal, mid and apical area of the LV for the normal Leuven data (our method and ground truth). Curves indicating mean strains for anterior (Ant) antero-septal (Ant-Sept), infero-septal (Inf-Sept), inferior (Inf), infero-lateral (Inf-Lat) and antero-lateral (Ant-Lat) regions are shown.} \label{fig:leuven_strain_normal_curves} \end{figure} \par In figure \ref{fig:leuven_strain_ladprox_curves}, radial strain curves for the LADPROX data are shown (FNT and ground truth position based). There appears to be a significant reduction of strain values around the infero-septal and infero-lateral sectors in the basal and mid regions. In the apical region, the anterior and lateral region strains are also lower. Figure \ref{fig:leuven_strain_rca_curves} shows radial strain curves for the RCA data (FNT and ground truth position based). Infero-lateral and inferior strain values are reduced in basal and mid regions. There appears to be no significant changes in the apical region. Similar to the normal data, there also appears to be an underestimation of strains for both the ischemic data. However, there is also a broad agreement on the strain trends. \begin{figure} \centering \subfloat[LADPROX (FNT)]{\includegraphics[width=12cm, height=3.5cm]{results/strains/leuven_ladprox_rad_strains.png}} \\ \subfloat[LADPROX (Ground Truth)]{\includegraphics[width=12cm, height=3.5cm]{results/strains/leuven_ladprox_rad_strains_gt.png}} \\ \caption{Radial strain curves in the basal, mid and apical area of the LV for the LADPROX Leuven data (our method and ground truth). Curves indicating mean strains for anterior (Ant) antero-septal (Ant-Sept), infero-septal (Inf-Sept), inferior (Inf), infero-lateral (Inf-Lat) and antero-lateral (Ant-Lat) regions are shown.} \label{fig:leuven_strain_ladprox_curves} \end{figure} \begin{figure} \centering \subfloat[RCA (FNT)]{\includegraphics[width=12cm, height=3.5cm]{results/strains/leuven_rca_rad_strains.png}} \\ \subfloat[RCA (Ground Truth)]{\includegraphics[width=12cm, height=3.5cm]{results/strains/leuven_rca_rad_strains_gt.png}} \\ \caption{Radial strain curves in the basal, mid and apical area of the LV for the RCA Leuven data (our method and ground truth). 
Curves indicating mean strains for anterior (Ant), antero-septal (Ant-Sept), infero-septal (Inf-Sept), inferior (Inf), infero-lateral (Inf-Lat) and antero-lateral (Ant-Lat) regions are shown.} \label{fig:leuven_strain_rca_curves} \end{figure} \par It should be clear from figures \ref{fig:leuven_strain_normal_curves}, \ref{fig:leuven_strain_ladprox_curves} and \ref{fig:leuven_strain_rca_curves} that we are able to discern changes in strains across different regions of the LV. For instance, from normal to RCA, there is barely any change in the strain values in the apical area, whereas there is a significant reduction in inferior and infero-lateral strain values in both the basal and mid regions. From normal to LADPROX, there is a significant change in the apical strains. The observation that different injury profiles lead to different regional strain patterns, as demonstrated here, is of great value. \par To quantify this more rigorously, we directly compared strains obtained from FNT and the ground truth positions by analyzing the differences in strain values and cross-correlations. Table \ref{table:leuven_strain_abs_diff_by_group} summarizes the absolute differences in radial, circumferential and longitudinal strain values for the normal, ischemic (LADPROX, LADDIST and RCA) and dilated (LBBB, LBBBSMALL and SYNC) data groups. While the median absolute differences in strain values are within a reasonable range for the normal and ischemic data, they are rather large for the dilated data group. This is perhaps indicative of the fact that we are not able to account for their complex motion patterns, which are different from those of the normal and ischemic groups. \begin{table} \centering \caption{Median absolute difference of Lagrangian strains: Comparing FNT and ground truth.} \begin{tabular}{|c|c|c|c|} \hline \textbf{Data group} & $\mathbf{Radial \quad (\%)}$ & $\mathbf{Circumferential \quad (\%)}$ & $\mathbf{Longitudinal \quad (\%)}$ \\ \hline \textbf{Normal} & 2.19 $\pm$ 2.94 & 1.37 $\pm$ 2.02 & 0.16 $\pm$ 0.21 \\ \hline \textbf{Ischemic} & 2.39 $\pm$ 3.58 & 1.55 $\pm$ 2.24 & 0.18 $\pm$ 0.27 \\ \hline \textbf{Dilated} & 6.93 $\pm$ 9.69 & 3.79 $\pm$ 6.83 & 0.65 $\pm$ 1.12 \\ \hline \hline \textbf{Overall} & 3.42 $\pm$ 5.85 & 2.12 $\pm$ 3.43 & 0.27 $\pm$ 0.53 \\ \hline \end{tabular} \label{table:leuven_strain_abs_diff_by_group} \end{table} \par To understand whether these differences in strains were random or systematic, we also summarized median differences (not absolute) in table \ref{table:leuven_strain_diff_by_group}. This helps us identify the direction in which the algorithm is biased. The first thing of note is that, overall, there is no substantial bias in the circumferential and longitudinal strains. There is bias in all strain types for the dilated data group. Radial strains seem to be underestimated for all data groups, which is consistent with our earlier observations of the strain curves.
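\par The comparison statistics reported in this subsection (median absolute differences, median signed differences and per-sector correlations of the strain curves) amount to a few lines of computation. A minimal sketch, with hypothetical array names, is:
\begin{verbatim}
import numpy as np

def strain_agreement(est_curves, gt_curves):
    """est_curves, gt_curves: (n_sectors, n_frames) arrays of strain values (%).

    Returns the median absolute difference, the median signed difference
    (bias) and the median per-sector Pearson correlation."""
    diff = est_curves - gt_curves
    corrs = [np.corrcoef(e, g)[0, 1] for e, g in zip(est_curves, gt_curves)]
    return (float(np.median(np.abs(diff))),
            float(np.median(diff)),
            float(np.median(corrs)))
\end{verbatim}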
\begin{table} \centering \caption{Median difference in Lagrangian strains: Comparing FNT and ground truth.} \begin{tabular}{|c|c|c|c|} \hline \textbf{Data group} & $\mathbf{Radial \quad (\%)}$ & $\mathbf{Circumferential \quad (\%)}$ & $\mathbf{Longitudinal \quad (\%)}$ \\ \hline \textbf{Normal} & 2.13 $\pm$ 3.56 & 0.00 $\pm$ 2.75 & 0.06 $\pm$ 0.33 \\ \hline \textbf{Ischemic} & 1.99 $\pm$ 4.19 & 0.00 $\pm$ 3.10 & 0.04 $\pm$ 0.36 \\ \hline \textbf{Dilated} & 5.18 $\pm$ 10.79 & -1.33 $\pm$ 7.69 & -0.03 $\pm$ 1.29 \\ \hline \hline \textbf{Overall} & 2.79 $\pm$ 6.13 & -0.20 $\pm$ 4.54 & 0.02 $\pm$ 0.54 \\ \hline \end{tabular} \label{table:leuven_strain_diff_by_group} \end{table} \par A reason behind this systematic bias in the radial strains is our use of segmented surfaces. Since points were constrained to move between surfaces, the maximum radial displacement was constrained. At a very small spatial scale, if we assume surfaces are flat (planes in 3D) and consecutive surfaces are parallel, the maximum possible radial motion is fixed - the projection of the normal vector between the two surfaces along the radial direction. However, because point correspondences are not perfect during optimization, there is noise in the displacement vectors. As these noisy vectors are regularized and smoothed, the final radial displacements are lower in aggregate than the ground truth. Such constraints do not exist circumferentially or longitudinally as long as points are sampled densely and regularly. \par Finally, we compare how well the trends agree with ground truth for the three strain types in table \ref{table:leuven_strain_correlations_by_group} by looking at the summary of correlation values of individual sector strain curves. Again, the dilated data seem to have the worst correlations, which is consistent with the findings so far. Longitudinal strains are slightly worse and noisier overall. This is partly due to the fact that we use a sparse set of displacements located on the myocardial surfaces. Longitudinal motion is hard to quantify towards the basal and apical regions in this setting. Overall, the results with synthetic data were good both in terms of point tracking and strain analysis. \begin{table} \centering \caption{Median correlations of Lagrangian strains: Comparing FNT and ground truth.} \begin{tabular}{|c|c|c|c|} \hline \textbf{Data group} & $\mathbf{Radial}$ & $\mathbf{Circumferential}$ & $\mathbf{Longitudinal}$ \\ \hline \textbf{Normal} & 0.99 $\pm$ 0.02 & 0.96 $\pm$ 0.05 & 0.98 $\pm$ 0.04 \\ \hline \textbf{Ischemic} & 0.98 $\pm$ 0.04 & 0.96 $\pm$ 0.07 & 0.96 $\pm$ 0.15 \\ \hline \textbf{Dilated} & 0.87 $\pm$ 0.16 & 0.74 $\pm$ 0.35 & 0.42 $\pm$ 0.43 \\ \hline \hline \textbf{Overall} & 0.96 $\pm$ 0.10 & 0.93 $\pm$ 0.17 & 0.90 $\pm$ 0.56 \\ \hline \end{tabular} \label{table:leuven_strain_correlations_by_group} \end{table} \par In figure \ref{fig:strain_maps_example} we also display radial strain maps at ES calculated using our method and the ground truth for 3 ischemic datasets. Areas with injuries have low strains and this can be seen in the strain maps. Again, this is to illustrate that we could reliably localize injuries since the maps from our method and ground truth appear fairly similar. 
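\par For completeness, the radial, circumferential and longitudinal Lagrangian strains analyzed above can, in principle, be estimated from a small neighbourhood of tracked points in the standard way. The following is a generic sketch (not necessarily the exact strain pipeline used in this work) of one such computation:
\begin{verbatim}
import numpy as np

def directional_lagrangian_strain(ref_pts, def_pts, direction):
    """Green-Lagrange strain of a small neighbourhood along a unit direction.

    ref_pts, def_pts : (N, 3) arrays of reference / deformed point positions.
    direction        : (3,) vector, e.g. the local radial direction.
    """
    X = ref_pts - ref_pts.mean(axis=0)         # centred reference coordinates
    Y = def_pts - def_pts.mean(axis=0)         # centred deformed coordinates
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares fit  Y ~ X A
    F = A.T                                    # deformation gradient, y = F x
    E = 0.5 * (F.T @ F - np.eye(3))            # Green-Lagrange strain tensor
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    return float(d @ E @ d)                    # directional strain (x100 for %)
\end{verbatim}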
\begin{figure} \centering \includegraphics[scale=.3]{results/leuven_strain_maps.png} \caption{Epicardial surfaces displaying radial strains for three different types of ischemia, induced by occlusion of the proximal left anterior descending artery (LADPROX), right coronary artery (RCA) and left circumflex artery (LCX).} \label{fig:strain_maps_example} \end{figure} \subsection{Data-driven feature generation and metric} \subsubsection{Motivation} \par In any feature-based tracking method, the quality of the tracking is highly dependent on the quality of the features. The features have to be both representative and discriminative. The features should capture the underlying image statistics while at the same time being unique and salient, so that different tissue classes, locations and orientations result in different features (if translational and rotational invariance is not desired). Similarly, the metric used to calculate the distance between features is also of importance. We typically have no a priori information regarding which feature components are important and should therefore be weighted heavily. \par For these reasons, we turn to learning-based methods. This allows us to sidestep the tedious task of feature engineering and metric composition. We will explain how supervised, unsupervised and semi-supervised methods can be used and discuss the implications of each of these strategies. \subsubsection{Supervised method - Siamese Neural Network} From a set of simulated images and ground truth motion trajectories, we sampled pairs of similar image patches that are centered at consecutive points in the same trajectory and dissimilar image patches that are centered at points that are close but not in the same trajectory. Then we train a neural network to maximize the representation distance between dissimilar patches and minimize it for similar patches. \par The network learns the weights $W$ that parametrize a function $G_W$ on the input signals (see Figure \ref{fig:siamese_network}) such that $E_W(x_1, x_2) = ||G_W(x_1) - G_W(x_2)||$ is minimized when $x_1$ and $x_2$ are similar and maximized when they are dissimilar \cite{chopra2005learning} ($x_1$ is short for $I(x^t_i)$, the image patch centered at $x^t_i$, and similarly $x_2$ is short for $I(x^{t+1}_j)$). \begin{figure} \centering \includegraphics[scale = .32]{images/siamese_network.png} \caption{Siamese network: light gray - convolutional layer, dark gray - fully connected layer. ReLU activations used between layers. Input image patches are of size $11 \times 11 \times 11$.} \label{fig:siamese_network} \end{figure} \par A binary label $y$ is assigned to each pair of patches $x_1$ and $x_2$. Similar pairs of patches are assigned the label $y = 1$ and dissimilar pairs are assigned $y = 0$. Learning is done by minimizing the following hinged contrastive loss function, which is more robust than a generic contrastive loss \cite{hadsell2006dimensionality}: \begin{equation} L(W, y, x_1, x_2) = \frac{1}{2} y E_W^2 + \frac{1}{2} (1 - y) \max(1- E_W, 0)^2 \end{equation} \par A test pair of image patches will result in a value $E_W \in [0, 1]$. For an edge $e^t_{ij}$ between $x^t_{i}$ and $x^{t+1}_{j}$, if we let the corresponding image patches be $I^t(x^t_{i})$ and $I^{t+1}(x^{t+1}_{j})$: \begin{equation} E_W\left( x^t_{i}, x^{t+1}_{j}\right) = ||G_W\left(I^t(x^t_{i}) \right) - G_W\left(I^{t+1}(x^{t+1}_{j})\right) || \end{equation} \par This quantity is used to set edge weights on our flow network (see Equation \ref{eqn:edge_wt}).
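\par For concreteness, a minimal PyTorch-style sketch of this objective is given below. The encoder shown is only an illustrative placeholder and does not reproduce the exact architecture; the loss follows the equation above with margin $1$:
\begin{verbatim}
import torch

class PatchEncoder(torch.nn.Module):
    """Illustrative 3D patch encoder G_W (layer sizes are placeholders)."""
    def __init__(self, emb_dim=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv3d(1, 8, kernel_size=3), torch.nn.ReLU(),
            torch.nn.Conv3d(8, 16, kernel_size=3), torch.nn.ReLU(),
            torch.nn.Flatten(),
            torch.nn.Linear(16 * 7 * 7 * 7, emb_dim))

    def forward(self, x):                      # x: (batch, 1, 11, 11, 11)
        return self.net(x)

def hinged_contrastive_loss(g1, g2, y, margin=1.0):
    """g1, g2 = G_W(x1), G_W(x2); y = 1 for similar pairs, 0 for dissimilar."""
    e_w = torch.norm(g1 - g2, dim=1)           # E_W(x1, x2)
    loss = 0.5 * y * e_w ** 2 \
         + 0.5 * (1 - y) * torch.clamp(margin - e_w, min=0) ** 2
    return loss.mean()

# One training step on a batch of labelled patch pairs (x1, x2, y):
#   g1, g2 = encoder(x1), encoder(x2)   # same encoder => shared weights W
#   hinged_contrastive_loss(g1, g2, y).backward(); optimizer.step()
\end{verbatim}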
Key hyper-parameters, such as the number of convolution kernels and the number of nodes in the fully connected layer, were tuned via leave-one-out cross-validation. Batch-normalization and dropout layers are used following all layers (except the final one). Approximately $100,000$ patches were used for training. \subsubsection{Transfer learning - using segmentation embedding} It is rather difficult to collect labeled data of the nature we used to construct the Siamese neural network. There are no ground truth trajectories available for real-life data. It is nearly impossible to manually mark where an image voxel has traveled across the cardiac cycle because of noise, inherent deformation and other uncertainties. Therefore, an unsupervised method can be of great value when no such labeled information is available. However, even though an unsupervised method might learn an embedding that best captures the variance of the data, the `axes' of these variations might not be the most relevant for the task at hand. Therefore, we strike a compromise and train a neural network for the task of tissue segmentation with the hypothesis that this will allow us to learn a more discriminative embedding. Figure \ref{fig:semi_supervised_network} illustrates the design of such a network. \par Because the network is learning how to do semantic segmentation (and not just classification), it is also positionally aware of where the object is. Equation \ref{eqn:mahalanobis_wt} is used to calculate the distance between the learned representations. $F\left(x^t_{i}\right)$ represents the learned embedding of the image patch centered at $x^t_{i}$, and $C_f$ is the covariance matrix of the feature differences. \begin{equation} \label{eqn:mahalanobis_wt} E_W\left( x^t_{i}, x^{t+1}_{j}\right) = \sqrt{\left(F\left(x^t_{i}\right) - F\left(x^{t+1}_{j}\right)\right)^{\mathsf{T}} C_f ^{-1} \left(F\left(x^t_{i}\right) - F\left(x^{t+1}_{j}\right)\right)} \end{equation} \begin{figure} \centering \includegraphics[scale = .32]{images/semi_supervised_semantic_embedding.png} \caption{Segmentation network: a low-dimensional embedding of image patches is learned that captures image statistics while also being positionally aware.} \label{fig:semi_supervised_network} \end{figure}
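\par Assuming the standard Mahalanobis form above, a minimal sketch of the corresponding edge-weight computation (illustrative variable names; the covariance estimate shown is only one possible choice) is:
\begin{verbatim}
import numpy as np

def mahalanobis_edge_weight(f_i, f_j, cov_inv):
    """Distance between learned embeddings F(x_i^t) and F(x_j^{t+1})."""
    d = np.asarray(f_i, float) - np.asarray(f_j, float)
    return float(np.sqrt(d @ cov_inv @ d))

# cov_inv can be estimated from embedding differences of training pairs, e.g.
#   diffs = F_a - F_b                       # (n_pairs, dim) difference vectors
#   cov = np.cov(diffs, rowvar=False) + 1e-6 * np.eye(diffs.shape[1])
#   cov_inv = np.linalg.inv(cov)
\end{verbatim}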
\section{Introduction} Teichm\"uller\ theory and the theory of cluster varieties \cite{FG09} are deeply connected, producing fruitful applications on both sides. Given a marked surface $\Sigma$, we have two kinds of dual cluster varieties, called the \emph{cluster $K_2$-variety} $\A_\Sigma$ and the \emph{cluster Poisson variety} $\X_\Sigma$ \cite{FG09} defined by certain quivers associated with ideal triangulations of $\Sigma$. The positive structure on these spaces allows us to consider their sets of semifield-valued points, for example the positive parts $\A_\Sigma(\mathbb{R}_{>0})$ and $\X_\Sigma(\mathbb{R}_{>0})$, which are real-analytic manifolds. On the other hand, there are two extensions of the usual Teichm\"uller\ space: the \emph{decorated Teichm\"uller\ space} $\mathcal{T}^a(\Sigma)$ introduced by Penner \cite{Penner} and the \emph{enhanced Teichm\"uller\ space} $\mathcal{T}^x(\Sigma)$ studied by Chekhov--Fock--Goncharov \cite{CF,FG07}. It is known \cite{FG07,FST} that we have canonical isomorphisms \begin{align*} \mathcal{T}^a(\Sigma) \cong \A_\Sigma(\mathbb{R}_{>0}), \quad \mathcal{T}^x(\Sigma) \cong \X^\mathrm{uf}_\Sigma(\mathbb{R}_{>0}), \end{align*} which are equivariant under the natural actions of the mapping class group $MC(\Sigma)$. These isomorphisms are provided by special coordinate functions on these Teichm\"uller\ spaces, called the $\lambda$-lengths and the cross ratios, respectively. Here, $\X^\mathrm{uf}_\Sigma$ stands for the cluster Poisson variety \underline{without frozen coordinates} -- there is no natural way to define the cross ratio coordinates on $\mathcal{T}^x(\Sigma)$ associated to the boundary edges. Supplementing the frozen coordinates on boundary intervals is the main theme of this paper. The varieties $\A_\Sigma$ and $\X^\mathrm{uf}_\Sigma$ are birationally isomorphic to certain moduli spaces $\A_{SL_2,\Sigma}$ and $\X_{PGL_2,\Sigma}$ of local systems on $\Sigma$ \cite{FG06}. Here, the moduli space $\X_{PGL_2,\Sigma}$ also lacks frozen coordinates. After a decade, in their seminal paper \cite{GS19}, Goncharov--Shen introduced a new moduli space $\P_{PGL_2,\Sigma}$ closely related to $\X_{PGL_2,\Sigma}$, but with additional data called \emph{pinnings}. The data of pinnings allows one to define frozen coordinates as well, and thus provides a birational isomorphism $\X_\Sigma \cong \P_{PGL_2,\Sigma}$. \subsection{Teichm\"uller\ space with pinnings} In this paper, we introduce a variant of the Teichm\"uller\ space corresponding to $\P_{PGL_2,\Sigma}$, which we call the \emph{Teichm\"uller\ space with pinnings} $\mathcal{T}^p(\Sigma)$. Although it should be nothing but a certain ``real locus'' of the moduli space $\P_{PGL_2,\Sigma}$, what we elaborate in this paper is its description purely in terms of hyperbolic geometry. Mimicking \cite[Lemma-Definition 3.7]{GS19} in our setting, we introduce the notion of pinnings in four equivalent ways, and define $\mathcal{T}^p(\Sigma)$ to be the Teichm\"uller\ space of marked hyperbolic structures equipped with such data on each boundary interval (\cref{dfn:T^p}). Then we define cross ratio coordinates on $\mathcal{T}^p(\Sigma)$, and show that they combine to give an $MC(\Sigma)$-equivariant isomorphism (\cref{cor:Teich_X-variety}) \begin{align*} \mathcal{T}^p(\Sigma) \xrightarrow{\sim} \X_\Sigma(\mathbb{R}_{>0}). \end{align*} We also describe the \emph{gluing map} \cite{GS19} in terms of the hyperbolic structures (\cref{prop:amalgamation}).
It also clarifies the appearance of spiralling geodesics in the enhanced Teichm\"uller\ space in relation to Thurston's completeness criterion. The decorations induce pinnings. Hence we get an \emph{(extended) ensemble map} \begin{align}\label{eq:ensemble_map} p_\Sigma: \mathcal{T}^a(\Sigma) \to \mathcal{T}^p(\Sigma). \end{align} The coordinate expression of the map $p_\Sigma$ (\cref{prop:ensemble}) is exactly the one known in cluster theory, enhanced by Goncharov--Shen \cite[Section 18]{GS19}. It expresses the cross ratios as Laurent monomials of $\lambda$-lengths. If $\Sigma$ has no interior marked points (\emph{i.e.}, punctures), it turns out that $p_\Sigma$ is invertible. Then we obtain an inverse formula expressing the $\lambda$-lengths in terms of the cross ratios, which seems to be well known to specialists but is new in the literature: \begin{introthm}[\cref{thm:A to X}] Assume that $\Sigma$ has no punctures. Then for each edge $\alpha \in e(\triangle)$ of an ideal triangulation, we have the inverse formula \begin{align*} A_\alpha = \prod_{\beta \in e(\triangle)} (X^\triangle_{\beta})^{q_{\alpha\beta}}. \end{align*} Here $q_{\alpha\beta}:=-\mathsf{a}_{\beta}(\alpha_\mathbb{B})$, and $\mathsf{a}_{\beta}(\alpha_\mathbb{B}) \in \frac{1}{2}\mathbb{Z}_{\geq 0}$ denotes half the geometric intersection number between the curves $\beta$ and the positive $\mathbb{B}$-shift $\alpha_\mathbb{B}$ (\cref{def:shift_ideal}) of the ideal arc $\alpha$. \end{introthm} As a consequence, we can compute the Poisson brackets of $\lambda$-lengths. We see that the Poisson algebra $C^\infty(\mathcal{T}^p(\Sigma))$ is a classical analogue of Muller's skein algebra \cite{Muller}. We also investigate the \emph{Wilson lines} introduced in \cite{IO20} in terms of hyperbolic geometry. We obtain the transition formulae between the $\lambda$-length/cross ratio coordinates and the matrix coefficients of Wilson lines (\cref{thm:LR-formula,prop:Wilson_lambda}). \subsection{Lamination space with pinnings} For $\mathbb{A}=\mathbb{Z},\mathbb{Q}$ or $\mathbb{R}$, let $\mathbb{A}^T=(\mathbb{A},\max,+)$ denote the (max-plus) tropical semifield. Then we can consider the sets $\A_\Sigma(\mathbb{A}^T)$ and $\X^\mathrm{uf}_\Sigma(\mathbb{A}^T)$ of tropical points, which are known to be canonically isomorphic to certain spaces of measured laminations \cite{FG07}. Here $\X^\mathrm{uf}_\Sigma(\mathbb{A}^T)$ also lacks the frozen coordinates. We introduce the space $\mathcal{L}^p(\Sigma,\mathbb{Q})$ of \emph{rational $\P$-laminations}, and show that a natural extension of the shear coordinates gives an $MC(\Sigma)$-equivariant piecewise-linear isomorphism \begin{align*} \mathcal{L}^p(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \X_\Sigma(\mathbb{Q}^T). \end{align*} Its $\mathfrak{sl}_3$-version has already appeared in the work \cite{IK22}. We introduce a gluing map (\cref{dfn:gluing_lamination}) purely in terms of laminations, and prove that it is a tropical analogue of the Goncharov--Shen gluing map (\cref{prop:amalgamation}). Combining the results on the Teichm\"uller\ and lamination sides, we can form a ``$\P$-version'' of the Thurston compactification $\overline{\mathcal{T}^p(\Sigma)}:=\mathcal{T}^p(\Sigma) \cup \mathbb{S} \mathcal{L}^p(\Sigma,\mathbb{R})$ (\cref{dfn:Thurston_P}). Here $\mathcal{L}^p(\Sigma,\mathbb{R})$ is the completion of the space $\mathcal{L}^p(\Sigma,\mathbb{Q})$ with respect to the shear coordinates.
Then we obtain the following: \begin{introthm}[\cref{thm:gluing_Thurston}] The gluing maps on the Teichm\"uller\ and lamination spaces combine to give a continuous map \begin{align*} \overline{q}_{\Sigma,\Sigma'}: \overline{\mathcal{T}^p(\Sigma)} \to \overline{\mathcal{T}^p(\Sigma')} \end{align*} between the Thurston compactifications. \end{introthm} \subsection{Ensemble compatibility of duality maps} Fock--Goncharov's duality conjecture is one of the most fascinating conjectures in the theory of cluster varieties. See \cite{FG09,GHKK}; see also \cite{Qin21} for a recent review of this subject. It asks for a construction of \emph{duality maps} \begin{align} &\mathbb{I}_\X: \X_{\Sigma}(\mathbb{Z}^T) \to \mathcal{O}(\A_{\Sigma}), \label{eq:duality_X}\\ &\mathbb{I}_\A: \A_{\Sigma}(\mathbb{Z}^T) \to \mathcal{O}(\X_{\Sigma}) \label{eq:duality_A} \end{align} that parametrize linear bases of the function algebras of cluster varieties, satisfying certain axioms formulated in \cite[Section 4]{FG09}. A topological construction of the duality map \eqref{eq:duality_X}, nowadays called the \emph{bracelet basis}, was first given by Fock--Goncharov \cite{FG06,FG07} for a general marked surface, and further studied by Musiker--Schiffler--Williams \cite{MSW} in the absence of punctures. A duality map in the direction \eqref{eq:duality_A} was also constructed by Fock--Goncharov \cite{FG06,FG07}, and further enhanced by Goncharov--Shen \cite{GS15} in the ``$\P$-type'' setting. Here the work of Goncharov--Shen gives a basis of the function ring of the moduli space $\P_{SL_2,\Sigma}$ (written as $\mathrm{Loc}_{SL_2,S}$ \emph{loc.~cit.}) parametrized by the space $\mathcal{L}^a(\Sigma,\mathbb{Z}) \supset \A_\Sigma(\mathbb{Z}^T)$ (whose elements are called \emph{$PGL_2$-laminations} \emph{loc.~cit.}). Essentially as a restriction of their construction, we obtain: \begin{introthm}[\cref{thm:X_basis}] Assume that $\Sigma$ is unpunctured, having at least two marked points. Then the functions $\mathbb{I}_\A(L)$, where $L$ runs over all the integral $\A$-laminations, form a linear basis of the function algebra $\mathcal{O}(\X_{\Sigma})$. \end{introthm} In this paper, we describe the functions $\mathbb{I}_\A(L)$ by assembling the trace functions along loops and certain matrix coefficients of Wilson lines along arcs. We give a proof of this theorem based on the description of $\mathcal{O}(\X_{\Sigma})$ as the classical limit of the \emph{congruent subalgebra} of the reduced stated skein algebra \cite{IKar}. A proof similar to that of \cite[Theorem 10.14]{GS15} will also be possible, with the restriction to the representations of $PGL_2$. We then turn our attention to the compatibility of the duality maps \eqref{eq:duality_X} and \eqref{eq:duality_A} under the ensemble map \eqref{eq:ensemble_map}. While such a compatibility has already been formulated in \cite[Conjecture 4.1.3]{FG09}, the importance of extending the ensemble map to the frozen variables seems to have been recognized only later. Indeed, our ensemble map \eqref{eq:ensemble_map} is an extended version according to the choice made in \cite{GS19}.
Our compatibility statement is the following, which is the main theorem of this paper: \begin{introthm}[Ensemble compatibility of duality maps: \cref{prop:duality_compatible}] For any unpunctured marked surface $\Sigma$, the following diagram commutes: \begin{equation}\label{introeq:duality_compatible} \begin{tikzcd} \A_{\Sigma}(\mathbb{Z}^T) \ar[d,"\check{p}_\Sigma^{\mathsf{T}}"'] \ar[rr,"\mathbb{I}_\A"] && \mathcal{O}(\X_{\Sigma}) \ar[d,"p_\Sigma^\ast"] \\ \X_{\Sigma}(\mathbb{Z}^T) \ar[rr,"\mathbb{I}_\X"'] && \mathcal{O}(\A_{\Sigma}), \end{tikzcd} \end{equation} where we use the Langlands dual ensemble map $ \check{p}_\Sigma^{\mathsf{T}}: \mathcal{L}^a(\Sigma,\mathbb{Z}) \to \mathcal{L}^p(\Sigma,\mathbb{Z})$ \eqref{eq:dual_ensemble} on the tropical side, and $\mathbb{I}_\X$ denotes the bracelet basis (\cref{def:skein_lift_X}). \end{introthm} Here it is remarkable that the non-trivial Langlands duality comes into play in order to get the commutative diagram \eqref{introeq:duality_compatible}, even though the exchange matrix is skew-symmetric. Actually, it concerns the extension of the ensemble map to the frozen coordinates, and the Langlands dual comes from the algebraic consistency of the coordinate expressions of $\mathbb{I}_\A$ and $\mathbb{I}_\X$. See \cref{rem:duality_constraint}. At the end of the paper, we also investigate the amalgamation of the bracelet bases $\mathbb{I}_\X$. See \cref{thm:amal_bracelet} and \cref{rem:amal_weak}. \bigskip \subsection*{Organization of the paper} At the end of this section, we summarize our notation for marked surfaces. In \cref{sec:Teich}, we investigate the Teichm\"uller\ space with pinnings $\mathcal{T}^p(\Sigma)$. This section is partially intended to be an introduction to cluster varieties for hyperbolic geometers. Basic definitions on cluster varieties in the surface case are summarized in \cref{app:cluster}. Conversely, those who are familiar with cluster varieties may safely skip this section, quickly picking up the algebraic results such as \cref{thm:A to X,prop:Wilson_lambda}. We investigate the lamination space with pinnings $\mathcal{L}^p(\Sigma,\mathbb{Q})$ in \cref{sec:lamination} as a tropical counterpart of the previous section, although most constructions are logically independent. The contents of \cref{sec:duality} are of a cluster-algebraic nature. Here we choose to work inside the algebra $C^\infty(\mathcal{T}^p(\Sigma))$ containing $\mathcal{O}(\X_\Sigma)$ to avoid issues with the square roots of cluster coordinates. \subsection*{Acknowledgements} The author is grateful to Shunsuke Kano for insightful discussions on the definition of the lamination space $\mathcal{L}^p(\Sigma,\mathbb{Q})$ and the gluing of $\P$-laminations at several stages of this work. The author also thanks Wataru Yuasa and Hiroaki Karuo for valuable discussions on the stated skein algebras. The author is supported by JSPS KAKENHI Grant Number~JP20K22304. \subsection*{Marked surfaces} A marked surface $(\Sigma,\mathbb{M})$ is a compact oriented surface $\Sigma$ together with a fixed non-empty finite set $\mathbb{M} \subset \Sigma$ of \emph{marked points}. When the choice of $\mathbb{M}$ is clear from the context, we simply denote a marked surface by $\Sigma$. A marked point is called a \emph{puncture} if it lies in the interior of $\Sigma$, and a \emph{special point} otherwise. Let $\mathbb{M}_\circ=\mathbb{M}_\circ(\Sigma)$ (resp. $\mathbb{M}_\partial=\mathbb{M}_\partial(\Sigma)$) denote the set of punctures (resp.
special points), so that $\mathbb{M}=\mathbb{M}_\circ \sqcup \mathbb{M}_\partial$. We say that $\Sigma$ is \emph{unpunctured} if $\mathbb{M}_\circ=\emptyset$. Let $\Sigma^*:=\Sigma \setminus \mathbb{M}$. We always assume the following conditions: \begin{enumerate} \item[(S1)] Each boundary component (if any) has at least one marked point. \item[(S2)] $-2\chi(\Sigma^*)+|\mathbb{M}_\partial| >0$. \end{enumerate} We call a connected component of the punctured boundary $\partial^\ast \Sigma:=\partial\Sigma\setminus \mathbb{M}_\partial$ a \emph{boundary interval}. The set of boundary intervals is denoted by $\mathbb{B}=\mathbb{B}(\Sigma)$. Note that $|\mathbb{B}|=|\mathbb{M}_\partial|$. By convention, we endow each boundary interval $\alpha \in \mathbb{B}$ with the orientation induced from $\partial\Sigma$. Let $m^+_\alpha$ (resp. $m^-_\alpha$) denote its initial (resp. terminal) marked point. An \emph{ideal arc} in $(\Sigma,\mathbb{M})$ is the isotopy class of an immersed arc in $\Sigma$ with endpoints in $\mathbb{M}$ having no self-intersections except for its endpoints, and not contractible in $\Sigma^\ast$. An \emph{ideal triangulation} is a triangulation $\triangle$ of $\Sigma$ whose set of $0$-cells (vertices) coincides with $\mathbb{M}$, and whose $1$-cells (edges) are ideal arcs. In this paper, we always consider ideal triangulations without \emph{self-folded triangles}, in which two of the sides of a triangle are identified. The conditions (S1) and (S2) ensure the existence of such an ideal triangulation. See, for instance, \cite[Lemma 2.13]{FST}. For an ideal triangulation $\triangle$, denote the set of edges (resp. interior edges, triangles) by $e(\triangle)$ (resp. $e_{\interior}(\triangle)$, $t(\triangle)$). Since the boundary intervals belong to any ideal triangulation, we always have $e(\triangle)=e_{\interior}(\triangle) \sqcup \mathbb{B}$. By a computation with the Euler characteristic, we get \begin{align*} &|e(\triangle)|=-3\chi(\Sigma^*)+2|\mathbb{M}_\partial|, \quad |e_{\interior}(\triangle)|=-3\chi(\Sigma^*)+|\mathbb{M}_\partial|, \\ &|t(\triangle)|=-2\chi(\Sigma^*)+|\mathbb{M}_\partial|. \end{align*} Since the main contribution of this paper is on the structures associated with special points/boundary intervals, we do not discuss in much detail the structures around punctures, such as tagged arcs and tagged triangulations. The interested reader is referred to \cite{FST} and \cite[Section 12]{FG06}. \input{2_Teichmuller.tex} \input{3_lamination.tex} \input{4_duality.tex} \input{5_appendix} \section{Teichm\"uller spaces with pinnings}\label{sec:Teich} In this section, we introduce the \emph{Teichm\"uller\ space with pinnings $\mathcal{T}^p(\Sigma)$}, which will be identified with the set of positive real points of the moduli space $\P_{PGL_2,\Sigma}$. For the basic terminologies in hyperbolic geometry, we refer the reader to \cite{Penner} and the references therein. See also \cite{FST}. \subsection{The Teichm\"uller\ space $\mathcal{T}^p(\Sigma)$ and the cross ratio coordinates} Let $\mathbb{H}^2=\{z \in \mathbb{C} \mid \Im z >0\}$ denote the upper-half plane model of the hyperbolic plane, equipped with the metric $dzd\bar{z}/(\Im z)^2$. The group of orientation-preserving isometries of $\mathbb{H}^2$ is isomorphic to the Lie group $PSL_2(\mathbb{R})$, which acts on $\mathbb{H}^2$ by M\"obius transformations. Another model of the hyperbolic plane is the Poincar\'e disk model $\mathbb{D}^2=\{w \in \mathbb{C} \mid |w|<1\}$ equipped with the metric $4dwd\bar{w}/(1-|w|^2)^2$.
We tacitly identify these two models via the Cayley transformation \begin{align*} \mathbb{H}^2 \xrightarrow{\sim} \mathbb{D}^2, \quad z \mapsto \frac{z-\sqrt{-1}}{z+\sqrt{-1}}. \end{align*} Here are basic notions in hyperbolic geometry: \begin{itemize} \item Geodesics in $\mathbb{D}^2$ are Euclidean circles/lines perpendicular to the boundary of $\mathbb{D}^2$. The stabilizer of a geodesic is conjugate to $\left\{\begin{bmatrix}\lambda & 0 \\ 0 & \lambda^{-1}\end{bmatrix}\ \middle|\ \lambda\in \mathbb{R}^\ast \right\}$. \item Horocycles in $\mathbb{D}^2$ are Euclidean circles tangent to the boundary of $\mathbb{D}^2$. The point of tangency is called its center. The stabilizer of a horocycle is conjugate to $\left\{\begin{bmatrix}1 & t \\ 0 & 1\end{bmatrix}\ \middle|\ t \in \mathbb{R} \right\}$. \end{itemize} A \emph{decoration} of a geodesic $g$ in $\mathbb{H}^2$ is a pair $(h_1,h_2)$ of horocycles centered at the two endpoints of $g$. Given such horocycles $(h_1,h_2)$, the geodesic $g$ is uniquely determined, as it connects their centers. \begin{dfn}[lambda-length] The \emph{lambda-length} \cite{Penner} of a decorated geodesic $(g;h_1,h_2)$ (or the pair $(h_1,h_2)$) is defined to be $\lambda(h_1,h_2):=\exp (\delta/2) \in \mathbb{R}_{>0}$, where $\delta$ is the signed hyperbolic length of the segment of $g$ between the horocycles $h_1,h_2$; the sign is $+$ if and only if the horocycles are disjoint. \end{dfn} \begin{lem} Given an oriented geodesic $g$ in $\mathbb{H}^2$, there are bijections between the following four notions: \begin{enumerate} \item A decoration $(h,h')$ of $g$ with lambda-length $1$. \item A horocycle $h$ centered at the initial endpoint of $g$. \item A point $x$ on $g$. \item An ideal triangle having $g$ as one of its sides, lying on the right of $g$. \end{enumerate} \end{lem} \begin{proof} The equivalence of the first three notions is obvious: the intersection of the horocycle centered at the initial endpoint and the geodesic $g$ determines a point. Given a point $x \in g$, let $g^\perp$ denote the unique geodesic through $x$ and perpendicular to $g$. Orient $g^\perp$ so that the frame $(T_xg^\perp, T_x g)$ is positive and take the ideal triangle spanned by the terminal endpoint $g_R$ of $g^\perp$ and the two endpoints of $g$. See \cref{fig:pinnings}. Conversely, the point $x$ is uniquely determined as the foot of the perpendicular from $g_R$ to $g$. \end{proof} \begin{figure} \begin{tikzpicture}[scale=1.2] \draw(0,0) circle(2cm) coordinate(O); \draw[->-={0.7}{}](-2,0) node[left]{$g_+$} -- (2,0) node[right]{$g_-$}; \draw(1,0) node[above]{$g$}; \draw[red](-1,0) circle(1cm); \draw[red](-1,1) node[above]{$h$}; \draw(1,0) circle(1cm); \draw(1,1) node[above]{$h'$}; \draw(0,0) node[above left=0.3em]{$x$}; \pinn{0,0}{-45}{0.13}{0.035cm}; \draw[myblue,dashed](0,0) --node[midway,above right]{$g^\perp$} (0,-2); \draw[myblue] ([xshift=-4pt] O) -- ([xshift=-4pt,yshift=-4pt] O) -- ([yshift=-4pt] O); \draw[myblue](0,-2) node[below]{$g_R$}; \draw[myblue] (-2,0)arc[radius=2cm,start angle=90, end angle=0]; \draw[myblue] (2,0)arc[radius=2cm,start angle=90, end angle=180]; \end{tikzpicture} \caption{The correspondence between the four notions of pinnings.} \label{fig:pinnings} \end{figure} \begin{dfn} We call one of these equivalent notions a \emph{pinning} over the oriented geodesic $g$. When we speak about a particular one, the notion (1) or (2) is called a \emph{horocycle pinning}; (3) is called a \emph{point pinning}; (4) is called a \emph{triangle pinning}.
\end{dfn} \begin{rem} The equivalence (1) $\Longleftrightarrow$ (2) is an analogue of \cite[Lemma-Definition 3.7]{GS19}. The equivalence (1) $\Longleftrightarrow$ (4) resembles the discussion in \cite[Section 7.1]{GS19}. \end{rem} \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.8] \draw (-4.5,0) ellipse (3 and 2); \draw (-5.5,-0.5) .. controls (-5.5,-1.35) and (-3.5,-1.35) .. (-3.5,-0.5); \draw (-5.4,-0.8) .. controls (-5.4,-0.2) and (-3.6,-0.2) .. (-3.6,-0.8); \draw (-5.5,0.65) ellipse (0.5 and 0.5); \node [fill, circle, inner sep=1.3pt] at (-3.5,0.65) {}; \node [fill, circle, inner sep=1.3pt] at (-5.05,0.85) {}; \node [fill, circle, inner sep=1.3pt] at (-5.95,0.85) {}; \node [fill, circle, inner sep=1.3pt] at (-5.5,0.15) {}; \draw[red] (-5.5,0.15) .. controls (-5.2,-0.35) and (-3.5,0) .. (-3.5,0.65); \node [red] at (-4.4,0.25) {$\alpha$}; \node at (-4.5,-2.5) {$\Sigma$}; \draw[->] (-2,2) --node[midway,above]{$f_1$}++ (2,1); \draw[->] (-2,-2) --node[midway,below]{$f_2$}++ (2,-1); \begin{scope}[xshift=-7cm,yshift=3.5cm] \draw (10.5,0) ellipse (3 and 2); \draw [white, ultra thick](8.8,1.65) .. controls (9.1,1.8) and (9.5,1.9) .. (9.8,1.95); \draw [white, ultra thick](11.45,1.95) .. controls (11.5,1.9) and (11.8,1.8) .. (11.9,1.7); \draw (9.5,-0.5) .. controls (9.5,-1.35) and (11.5,-1.35) .. (11.5,-0.5); \draw (9.6,-0.8) .. controls (9.6,-0.2) and (11.4,-0.2) .. (11.4,-0.8); \draw (8.35,0.7) .. controls (8.85,0.95) and (8.85,2.3) .. (8.85,3.15) .. controls (8.85,2.3) and (9.3,1.8) ..node[pos=0.4,inner sep=0](A){} (9.3,2.65) .. controls (9.3,1.8) and (9.75,2.3) ..node[pos=0.6,inner sep=0](B){} (9.75,3.15) .. controls (9.75,2.3) and (8.85,2.3) ..node[pos=0.7,inner sep=0](C){} (8.85,3.15); \pinn{A}{45}{0.1}{0.03cm}; \pinn{B}{135}{0.1}{0.03cm}; \pinn{C}{-135}{0.1}{0.03cm}; \draw (10.25,0.7) .. controls (9.75,0.95) and (9.75,2.3) .. (9.75,3.15); \draw (10.7,0.7) .. controls (11.2,0.95) and (11.65,1.75) .. (11.65,2.5); \draw (12.55,0.7) .. controls (12.05,0.95) and (11.65,1.75) .. (11.65,2.5); \draw[dashed] (11.65,1.34) ellipse (0.3 and 0.1); \node at (10.5,-2.5) {$X_1$}; \node [red] at (12,.3) {$f_1(\alpha)$}; \draw [red](9.3,2.65) .. controls (9.3,0.5) and (10,0.25) .. (10.5,0.25) .. controls (11,0.25) and (11.65,0.5) .. (11.65,1.35) .. controls (11.65,1.7) and (11.65,1.9) .. (11.65,2.5); \end{scope} \begin{scope}[xshift=-7cm,yshift=-3.5cm] \draw (10.5,0) ellipse (3 and 2); \draw [white, ultra thick](8.8,1.65) .. controls (9.1,1.8) and (9.5,1.9) .. (9.8,1.95); \draw [white, ultra thick](11.25,1.95) .. controls (11.5,1.9) and (11.8,1.8) .. (12.05,1.7); \draw (9.5,-0.5) .. controls (9.5,-1.35) and (11.5,-1.35) .. (11.5,-0.5); \draw (9.6,-0.8) .. controls (9.6,-0.2) and (11.4,-0.2) .. (11.4,-0.8); \draw (8.35,0.7) .. controls (8.85,0.95) and (8.85,2.3) .. (8.85,3.15) .. controls (8.85,2.3) and (9.3,1.8) ..node[pos=0.4,inner sep=0](A){} (9.3,2.65) .. controls (9.3,1.8) and (9.75,2.3) ..node[pos=0.6,inner sep=0](B){} (9.75,3.15) .. controls (9.75,2.3) and (8.85,2.3) ..node[pos=0.7,inner sep=0](C){} (8.85,3.15); \pinn{A}{45}{0.1}{0.03cm}; \pinn{B}{135}{0.1}{0.03cm}; \pinn{C}{-135}{0.1}{0.03cm}; \draw (10.25,0.7) .. controls (9.75,0.95) and (9.75,2.3) .. (9.75,3.15); \draw (10.7,0.7) .. controls (11.2,0.95) and (11.25,1.95) .. (11.25,2.5); \draw (12.55,0.7) .. controls (12.05,0.95) and (12.05,1.95) .. (12.05,2.5); \draw(11.65,2.5) ellipse (0.4 and 0.2); \node at (10.5,-2.5) {$X_2$}; \node [red] at (12,.3) {$f_2(\alpha)$}; \draw [red](9.3,2.65) .. controls (9.3,0.5) and (10,0.25) .. 
(10.5,0.25) .. controls (11,0.25) and (11.65,0.5) .. (11.65,1.35) .. controls (11.65,1.7) and (11.45,1.9) .. (11.23,2); \draw [red](12.05,2.05) .. controls (11.85,1.95) and (11.45,1.95) .. (11.25,2.15);\draw [red](12.05,2.25) .. controls (11.85,2.1) and (11.45,2.1) .. (11.25,2.3); \draw [red](12.05,2.4) .. controls (11.85,2.2) and (11.45,2.2) .. (11.25,2.4); \end{scope} \end{tikzpicture} \caption{Two marked hyperbolic structures with pinnings having different natures at a puncture.} \label{fig:cusp_flare} \end{figure} In this paper, a \emph{marked hyperbolic structure} on $\Sigma$ means a pair $(X,f)$, where \begin{itemize} \item $X$ is a complete hyperbolic surface with finite area and totally geodesic boundary. Let $X^\circ \subset X$ be the complement of the closed geodesic boundary. \item $f: \Sigma^\ast \to X^\circ$ is an orientation-preserving homeomorphism which maps a representative of each ideal arc to a complete geodesic, where each end either enters into a cusp or spirals around a closed geodesic boundary. \end{itemize} The hyperbolic surface $X$ can have either cusps or closed geodesic boundary components arising from $m \in \mathbb{M}_\circ$, and spikes arising from $m \in \mathbb{M}_\partial$. Boundary intervals give rise to complete geodesics. See \cref{fig:cusp_flare}. A \emph{hyperbolic structure with pinnings} on $\Sigma$ consists of the following data: \begin{itemize} \item A marked hyperbolic structure $(X,f)$ on $\Sigma$, \item A tuple $p=(p_\alpha)_{\alpha \in \mathbb{B}}$ of pinnings over the complete geodesics arising from the boundary intervals, oriented positively with respect to $\partial\Sigma$. \end{itemize} Two such data $(X_1,f_1;p_1)$ and $(X_2,f_2;p_2)$ are said to be equivalent if there exists an isometry $h: X_1 \to X_2$ homotopic to $f_2 \circ f_1^{-1}$ relative to the boundary, which sends the pinnings $p_1$ to $p_2$. \begin{dfn}\label{dfn:T^p} The \emph{Teichm\"uller\ space with pinnings} $\mathcal{T}^p(\Sigma)$ (or the \emph{Teichm\"uller\ $\P$-space}) is the set of equivalence classes of hyperbolic structures with pinnings on $\Sigma$. \end{dfn} Forgetting the data of pinnings, we get the \emph{enhanced Teichm\"uller\ space} $\mathcal{T}^x(\Sigma)$ (or the \emph{Teichm\"uller\ $\X$-space} \cite{FG07}). Let $\pi_\Sigma: \mathcal{T}^p(\Sigma) \to \mathcal{T}^x(\Sigma)$ be the natural projection. Now we are going to define a coordinate system on $\mathcal{T}^p(\Sigma)$ using the cross ratio parameters. \begin{dfn}[cross ratio] For an ideal quadrilateral $Q \subset \mathbb{H}^2$ with a fixed diagonal $\alpha$, define $r(Q;\alpha) >0$ to be the cross ratio of the four vertices $x_1,x_2,x_3,x_4 \in \partial \mathbb{H}^2 = \mathbb{R} \cup \{\infty\}$ of $Q$ in this counter-clockwise order, $x_1$ being one of the endpoints of $\alpha$. Explicitly, we have \begin{align*} r(Q;\alpha) = -\frac{x_1-x_4}{x_3-x_4}\frac{x_3-x_2}{x_1-x_2}. \end{align*} Thanks to the cyclic symmetry and the $PSL_2(\mathbb{R})$-invariance, it depends only on the isometry class of the quadrilateral $Q$ and its diagonal $\alpha$. \end{dfn} \paragraph{\textbf{Straightening the arcs.}} Suppose a hyperbolic structure with pinnings $(h,p) \in \mathcal{T}^p(\Sigma)$ and an ideal triangulation $\triangle$ are given. Here $h=(X,f)$ is a marked hyperbolic structure. For each edge $\alpha \in e(\triangle)$, let $\alpha^h:=f(\alpha) \subset X^\circ$ be the corresponding complete geodesic. Similarly, any polygon $P$ in $\triangle$ is straightened to a geodesic polygon $P^h \subset X^\circ$. 
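For concreteness, let us record the value of the cross ratio in a normalized position. \begin{ex} Since $PSL_2(\mathbb{R})$ acts simply transitively on positively oriented triples of distinct points of $\partial\mathbb{H}^2$, we may assume $(x_1,x_2,x_3)=(\infty,-1,0)$, so that $x_4=t$ for some $t>0$ and the diagonal $\alpha$ joins $x_1=\infty$ and $x_3=0$. Then \begin{align*} r(Q;\alpha) = -\lim_{x_1 \to \infty} \frac{x_1-x_4}{x_3-x_4}\cdot \frac{x_3-x_2}{x_1-x_2} = -\frac{x_3-x_2}{x_3-x_4} = -\frac{0-(-1)}{0-t}=\frac{1}{t}. \end{align*} In particular, the cross ratio takes every value in $\mathbb{R}_{>0}$. \end{ex}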
Given an ideal triangulation $\triangle$ of $\Sigma$, we define a coordinate system $X_\triangle=(X_\alpha^\triangle)_{\alpha \in e(\triangle)}: \mathcal{T}^p(\Sigma) \to \mathbb{R}^\triangle_{>0}$ as follows. Let $(h,p)$ be a hyperbolic structure with pinnings. \begin{itemize} \item For an interior edge $\alpha \in e_{\interior}(\triangle)$, let $Q$ be the unique quadrilateral of $\triangle$ having $\alpha$ as its diagonal. Define \begin{align*} X_\alpha^\triangle(h,p):=r(\widetilde{Q}^h;\widetilde{\alpha}^h), \end{align*} where $(\widetilde{Q}^h,\widetilde{\alpha}^h)$ is a lift of $(Q^h,\alpha^h)$. \item For a boundary interval $\alpha \in \mathbb{B}$, let $T$ be the unique triangle having $\alpha$ as one of its sides, which necessarily lies on the left of $\alpha$. Choose their lifts $\widetilde{\alpha}^h \subset \widetilde{T}^h$. The datum $p_\alpha$, seen as a triangle pinning over the oriented geodesic $\widetilde{\alpha}^h$, determines a triangle $T(p_\alpha)$ on the right of $\widetilde{\alpha}^h$. Then define \begin{align*} X_\alpha^\triangle(h,p):=r(\widetilde{T}^h \cup T(p_\alpha);\widetilde{\alpha}^h). \end{align*} \end{itemize} We call the coordinate system $X_\triangle$ the \emph{cross ratio coordinates} associated with $\triangle$. The following is essentially due to a combination of Fock--Goncharov \cite{FG07} and Goncharov--Shen \cite{GS19}: \begin{thm} For any ideal triangulation $\triangle$, the cross ratio coordinate $X_\triangle: \mathcal{T}^p(\Sigma) \xrightarrow{\sim} \mathbb{R}^\triangle_{>0}$ gives a bijection. For the flip $f_{\kappa}: \triangle \to \triangle'$ along an interior edge $\kappa \in e_{\interior}(\triangle)$, the coordinate transformation $X_{\triangle'}\circ X_\triangle^{-1}$ is given as shown in \cref{fig:flip}. \end{thm} \begin{proof} It is known that $X_\triangle^{\interior}:=(X_\alpha^\triangle)_{\alpha \in e_{\interior}(\triangle)}: \mathcal{T}^x(\Sigma) \xrightarrow{\sim} \mathbb{R}^{e_{\interior}(\triangle)}_{>0}$ gives a bijection \cite{FG07}. Hence for a given $(h,p) \in \mathcal{T}^p(\Sigma)$ with pinnings, the underlying enhanced hyperbolic structure $h \in \mathcal{T}^x(\Sigma)$ is determined by the coordinates assigned to the interior edges. In order to see that the coordinates assigned to the boundary intervals determine the pinnings, just note that the cross ratio is a complete invariant of a $PSL_2(\mathbb{R})$-orbit of four distinct points. In particular, the triangle pinnings are uniquely determined by the underlying hyperbolic structure and the boundary coordinates. The formula for coordinate transformation follows from that for the space $\mathcal{T}^x(\Sigma)$ (\cite[Figure 11]{FG07}). 
\end{proof} \begin{figure}[ht] \[\hspace{1.4cm} \begin{tikzpicture}[scale=0.8] \path(0,0) node [fill, circle, inner sep=1.6pt] (x1){}; \path(135:4) node [fill, circle, inner sep=1.6pt] (x2){}; \path(0,4*1.4142) node [fill, circle, inner sep=1.6pt] (x3){}; \path(45:4) node [fill, circle, inner sep=1.6pt] (x4){}; \draw[blue](x1) to node[midway,left,black]{$X_\beta$} (x2) to node[midway,left,black]{$X_\alpha$} (x3) to node[midway,right,black]{$X_\delta$} (x4) to node[midway,right,black]{$X_\gamma$} (x1) to node[midway,left,black]{$X_\kappa$} (x3); \draw[-implies, double distance=2pt](4,2*1.4142) to (6,2*1.4142); \begin{scope}[xshift=10cm] \path(0,0) node [fill, circle, inner sep=1.6pt] (x1){}; \path(135:4) node [fill, circle, inner sep=1.6pt] (x2){}; \path(0,4*1.4142) node [fill, circle, inner sep=1.6pt] (x3){}; \path(45:4) node [fill, circle, inner sep=1.6pt] (x4){}; \draw[blue](x1) to node[midway,left,black]{\scalebox{0.8}{$X_\beta(1+X_\kappa^{-1})^{-1}$}} (x2) to node[midway,left,black]{\scalebox{0.8}{$X_\alpha(1+X_\kappa)$}} (x3) to node[midway,right,black]{\scalebox{0.8}{$X_\delta(1+X_\kappa^{-1})^{-1}$}} (x4) to node[midway,right,black]{\scalebox{0.8}{$X_\gamma(1+X_\kappa)$}} (x1); \draw[blue] (x2) to node[midway,above,black]{$X_\kappa^{-1}$} (x4); \end{scope} \end{tikzpicture} \] \caption{The coordinate transformation for the flip along an edge $\kappa$. The formula is the same when some of the surrounding edges are boundary intervals, and still valid when some of the edges are identified as $\alpha=\gamma$ and/or $\beta=\delta$.} \label{fig:flip} \end{figure} Since the coordinate transformations are real-analytic, we can endow $\mathcal{T}^p(\Sigma)$ with a real-analytic structure so that each $X_\triangle$ is a real-analytic diffeomorphism. Moreover, the formula coincides with the \emph{cluster Poisson transformation} \eqref{eq:X-transf}. As a consequence, we get: \begin{cor}\label{cor:Teich_X-variety} The cross ratio coordinates $X_\triangle: \mathcal{T}^p(\Sigma) \xrightarrow{\sim} \mathbb{R}^\triangle_{>0}$ associated with ideal triangulations $\triangle$ of $\Sigma$ combine to give a canonical $MC(\Sigma)$-equivariant diffeomorphism \begin{align*} \mathcal{T}^p(\Sigma) \xrightarrow{\sim} \X_\Sigma(\mathbb{R}_{>0}). \end{align*} \end{cor} In particular, we have a $MC(\Sigma)$-invariant Poisson bracket $\{-,-\}$ on $C^\infty(\mathcal{T}^p(\Sigma))$ such that \begin{align*} \{X_\alpha^\triangle,X_\beta^\triangle\} = \varepsilon_{\alpha\beta}^\triangle X_\alpha^\triangle X_\beta^\triangle \end{align*} for any ideal triangulation $\triangle$. Here $(\varepsilon_{\alpha\beta}^\triangle)_{\alpha,\beta \in e(\triangle)}$ denotes the \emph{exchange matrix} (see \cref{app:cluster}). \begin{rem} It is straightforward to extend the construction of coordinates for a \emph{tagged triangulation}. See, for instance, \cite[Section 9]{AB}. \end{rem} \subsection{Gluing map}\label{subsec:Teich_amalgamation} We are going to discuss a map between the Teichm\"uller\ spaces with pinnings, called the \emph{gluing map}. Let $\Sigma$ be a marked surface (possibly disconnected), and choose distinct boundary intervals $\alpha_L,\alpha_R \in \mathbb{B}$. Let $\Sigma'$ be the marked surface obtained from $\Sigma$ by gluing the edges $\alpha_L$ and $\alpha_R$ together. We define a map $q_{\Sigma,\Sigma'}:\mathcal{T}^p(\Sigma) \to \mathcal{T}^p(\Sigma')$ as follows. Let $(h,p)$ be a hyperbolic structure with pinnings on $\Sigma$. 
Then the data $p_{\alpha_L}$ and $p_{\alpha_R}$, seen as point pinnings, determine a point on each of the boundary geodesics $\alpha_L^h$ and $\alpha_R^h$. Gluing the hyperbolic surface $(\Sigma,h)$ along these edges so that these points match, we get a new hyperbolic surface $(\Sigma',h')$. Since the remaining pinnings naturally induces pinnings over $h'$, we get a pair $(h',p')=q_{\Sigma,\Sigma'}(h,p)$ on $\Sigma'$. The resulting map \begin{align}\label{eq:gluing_Teich} q_{\Sigma,\Sigma'}:\mathcal{T}^p(\Sigma) \to \mathcal{T}^p(\Sigma') \end{align} is called the \emph{gluing map}. \begin{ex} Let us illustrate the construction in a simple example. Let $\Sigma_L$ (resp. $\Sigma_R$) be an $n_L$-gon (resp. $n_R$-gon), \emph{i.e.}, a disk with $n_L$ (resp. $n_R$) special points, and consider the marked surface $\Sigma:=\Sigma_L \sqcup \Sigma_R$. For $Z \in \{L,R\}$, choose a side $\alpha_Z$ of the polygon $\Sigma_Z$ and glue them together. The resulting surface $\Sigma'$ is an $(n_L+n_R-2)$-gon. Given a hyperbolic structure with pinnings $(h,p) \in \mathcal{T}^p(\Sigma)$, each polygon $\Sigma_Z$ is realized as an ideal polygon $\widetilde{\Pi}^h_Z \subset \mathbb{H}^2$. The geodesic lift $\widetilde{\alpha}_Z^h \subset \widetilde{\Pi}^h_Z$ of $\alpha_Z$ is equipped with a point pinning given by the data $p_{\alpha_Z}$. Then there exists a unique hyperbolic isometry $g \in PSL_2(\mathbb{R})$ which maps the geodesic $\widetilde{\alpha}_R^h$ to $\widetilde{\alpha}_L^h$ and matches the point pinnings. The resulting polygon $\widetilde{\Pi}^h_L \cup g(\widetilde{\Pi}^h_R)$ gives the hyperbolic structure $h'$ on $\Sigma'$, together with a pinning determined by $p \setminus \{p_{\alpha_L},p_{\alpha_R}\}$. \end{ex} \begin{figure}[ht] \begin{tikzpicture}[scale=1] \begin{scope} \draw(0,0) circle(2cm); \clip(0,0) circle(2cm); \draw(-2,0) -- (2,0); \draw[red](-1.5,0) circle(0.5cm); \pinn{-1,0}{-135}{0.13}{0.03cm} \draw[red,thick,->,>=latex](-1,0) -- (0,0); \draw[red,dashed] (-0.5,0) -- (0,-1) node[fill=white,inner sep=2pt,scale=0.9]{$\log X_{\alpha_L}^\triangle(h,p)$}; {\color{myblue} \draw \angleBL{-1,0}; \draw[dashed] (-1,0)arc[radius=1cm,start angle=0, end angle=-75]; \hgline{180}{210}{2} \hgline{0}{210}{2} } \hgline{90}{0}{2} \hgline{90}{180}{2} \draw[dashed] (0,2) -- (0,0); \draw \angleAL{0,0}; \draw(-0.8,1) node[scale=0.9]{$\widetilde{T}_1^h$}; \end{scope} \begin{scope}[yshift=-4.5cm] \draw(0,0) circle(2cm); \clip(0,0) circle(2cm); \draw(-2,0) -- (2,0); \draw[red](1,0) circle(1cm); \pinn{0,0}{-135}{0.13}{0.03cm} \draw[red,thick,->,>=latex](-1,0) -- (0,0); \draw[red,dashed] (-0.5,0) -- (0,-1.5) node[fill=white,inner sep=2pt,scale=0.9]{$\log X_{\alpha_R}^\triangle(h,p)$}; \draw\angleBL{-1,0}; \draw[dashed] (-1,0)arc[radius=1cm,start angle=0, end angle=-75]; \hgline{180}{210}{2} \hgline{0}{210}{2} {\color{myblue} \hgline{90}{0}{2} \hgline{90}{180}{2} \draw[dashed] (0,2) -- (0,0); \draw \angleAL{0,0}; } \draw(0.8,-0.5) node[scale=0.9]{$\widetilde{T}_2^h$}; \end{scope} \snake{2.5,-2.25}{4.5,-2.25} node[midway,above=0.2em]{Glue}; \begin{scope}[xshift=7.5cm,yshift=-2.25cm] \draw(0,0) circle(2cm); \clip(0,0) circle(2cm); \draw(-2,0) -- (2,0); \pinn{-1,0}{-135}{0.13}{0.03cm} \draw[red,thick,->,>=latex](-1.5,0) -- (0,0); \draw[dashed] (-1.5,0)arc[radius=0.5cm,start angle=0, end angle=-75]; \hgline{180}{195}{2} \hgline{0}{195}{2} \hgline{90}{0}{2} \hgline{90}{180}{2} \draw[dashed] (0,2) -- (0,0); \draw \angleAL{0,0}; \draw(-0.8,1) node[scale=0.9]{$\widetilde{Q}_{h'}$}; \end{scope} 
\begin{scope}[xshift=7.5cm,yshift=-2.25cm] \draw[red,dashed] (-0.8,0) -- (0,-2.5) node[fill=white,inner sep=2pt,scale=0.9]{$\log X_{\alpha_L}^\triangle(h,p)+ \log X_{\alpha_R}^\triangle(h,p)$}; \draw \angleBL{-1.5,0}; \end{scope} \end{tikzpicture} \caption{Gluing of two hyperbolic triangles. Here triangle pinnings are shown in blue. The second triangle $\widetilde{T}_2^h$ is mapped by a hyperbolic isometry so that the pointed geodesic $\widetilde{\alpha}_2^h$ is matched with $\widetilde{\alpha}_1^h$ to form an ideal quadrilateral $\widetilde{Q}^{h'}$.} \label{fig:gluing_triangle} \end{figure} Note that an ideal triangulation $\triangle$ on $\Sigma$ naturally induces an ideal triangulation $\triangle'$ on $\Sigma'$. Let $\overline{\alpha} \in e_{\interior}(\triangle')$ be the interior edge arising from $\alpha_L$ and $\alpha_R$. \begin{prop}\label{prop:amalgamation} We have \begin{align*} q_{\Sigma,\Sigma'}^*X_{\overline{\alpha}}^{\triangle'} = X_{\alpha_L}^\triangle \cdot X_{\alpha_R}^\triangle. \end{align*} \end{prop} In other words, the map $q_{\Sigma,\Sigma'}$ agrees with the cluster amalgamation map $\P_{PGL_2,\Sigma} \to \P_{PGL_2,\Sigma'}$ (\cite[Definition 2.1]{FG06}). In the proof, we use another characterization of the cross ratio. Let $Q$ be an ideal quadrilateral with a diagonal $\alpha$ in $\mathbb{H}^2$, and $x_1,x_2,x_3,x_4$ its vertices in this clockwise order, $x_1$ being one of the endpoints of $\alpha$. Let $g_2$ (resp. $g_4$) be the oriented geodesic perpendicular to $\alpha$ emanating from the vertex $x_2$ (resp. $x_4$). Then the cross ratio $r_{Q;\alpha}$ coincides with the exponential of the signed hyperbolic length of the segment of $\alpha$ bounded by the geodesics $g_2$ and $g_4$, where the sign is positive when one geodesic is seen to the right of the other (\cite[Chapter 1, Corollary 4.14 (c)]{Penner}). \begin{proof} Consider $(h,p) \in \mathcal{T}^p(\Sigma)$ and $(h',p'):=q_{\Sigma,\Sigma'}(h,p)$. For $Z \in \{L,R\}$, consider a lift $\widetilde{T}_Z^h \subset \mathbb{H}^2$ of the triangle in $\triangle$ having $\alpha_Z$ as one of its sides. In the universal cover of $(\Sigma',h')$, these triangles are glued together and form a quadrilateral $\widetilde{Q}^{h'}:=\widetilde{T}_L^h \cup g(\widetilde{T}_R^h)$ by using some isometry $g\in PSL_2(\mathbb{R})$. See \cref{fig:gluing_triangle}. By the definition of the coordinate assigned to the boundary interval $\alpha_Z$, it is given by the exponential of the signed hyperbolic distance between the point pinning $p_{\alpha_Z} \in \widetilde{\alpha}_Z^h$ and the perpendicular geodesic from the vertex of $\widetilde{T}_Z^h$ other than the endpoints of $\widetilde{\alpha}_Z^h$. Then we see that the coordinate $\log X_{\overline{\alpha}}^{\triangle'}(h',p')$ coincides with the sum $\log X_{\alpha_L}^\triangle(h,p) + \log X_{\alpha_R}^\triangle(h,p)$, from which we get the desired assertion. \end{proof} \paragraph{\textbf{Relation to Thurston's completeness criterion.}} If $\alpha_L,\alpha_R$ are consecutive boundary intervals (say, $\alpha_R$ follows $\alpha_L$ along the boundary orientation), then we get a new puncture $m:=m^-_{\alpha_L}=m^+_{\alpha_R}$ in $\Sigma'$ arising from their common marked point. Let us investigate what happens here. Let $(h,p) \in \mathcal{T}^p(\Sigma)$ be a hyperbolic structure with pinnings. Recall that the datum $p_{\alpha_R}$ gives a horocycle pinning, which is a horocycle $C_R$ centered at $m$.
Let $\widetilde{C}_0^h \subset \widetilde{\Sigma}^h$ be its lift, a horocyclic arc in the universal cover. Similarly, the datum $p_{\alpha_L}$ gives a horocycle pinning $C_L$ centered at $m^+_{\alpha_L}$, and a point pinning $x \in \widetilde{\alpha}^h_L$. Suppose first that the lambda length $\lambda(C_L,C_R) >1$. In particular, the horocyclic arc $\widetilde{C}_0^h$ does not pass through the point $x$ again. Extending $\widetilde{C}_0^h$ as a horocyclic arc over the geodesic $\widetilde{\alpha}^h_L$, we get a new horocyclic arc $\widetilde{C}_1^h$ in the universal cover of $\Sigma'$, whose projection to $\Sigma'$ is ``closer'' than that of $\widetilde{C}_0^h$ to the puncture $m$. See \cref{fig:horocycle_spiral}. Continuing in this manner, we get an infinite horocyclic arc $\widetilde{C}_\infty^h:=\bigcup_{i=0}^\infty \widetilde{C}_i^h$ ``spiralling'' around $m$, where the hyperbolic distance between the consecutive segments $\widetilde{C}_i^h$ and $\widetilde{C}_{i+1}^h$ is given by the constant $2\log \lambda(C_L,C_R)$. This is exactly the situation discussed in Thurston's completeness criterion \cite[Proposition 3.4.8]{Thu}, the constant $2\log \lambda(C_L,C_R)$ being the \emph{invariant} $d(v)$. In particular, the resulting hyperbolic surface is not complete, and its metric completion acquires a new closed geodesic boundary. Indeed, the intersection points between any ray $\ell$ from the ideal vertex $m$ and $\widetilde{C}_\infty^h$ constitute a non-convergent Cauchy sequence. Such sequences give rise to a new $S^1$ after the completion. The geodesics entering the spike $m$ become spiralling geodesics around the new closed geodesic. The case $\lambda(C_L,C_R) <1$ is similar, with the spiralling direction reversed. In the case $\lambda(C_L,C_R) =1$, the arc $C_R$ is glued into a horocycle $\overline{C}$ around $m$, and the resulting hyperbolic structure is complete around the cusp given by $m$ equipped with the decoration $\overline{C}$.
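To make the constant $2\log \lambda(C_L,C_R)$ concrete, we record a standard computation in the upper half-plane model (included here only as an illustration; see \cite{Penner}): for two horocycles $C,C'$ centered at distinct ideal points, one has $\lambda(C,C')=e^{\delta/2}$, where $\delta$ denotes the signed hyperbolic distance between $C$ and $C'$ along the geodesic connecting their centers. For instance, the horocycle $\{\mathrm{Im}\,z=h\}$ centered at $\infty$ and the horocycle of Euclidean diameter $d$ centered at $0$ meet the geodesic $\sqrt{-1}\mathbb{R}_{>0}$ at $\sqrt{-1}h$ and $\sqrt{-1}d$, respectively, so that
\begin{align*}
\delta=\int_d^h \frac{dy}{y}=\log\frac{h}{d}, \qquad \lambda=\sqrt{h/d}=e^{\delta/2}.
\end{align*}
In particular, the invariant $d(v)=2\log\lambda(C_L,C_R)$ is the signed distance between the two horocycles $C_L$ and $C_R$ measured along the glued geodesic.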
\begin{figure}[ht] \centering \begin{tikzpicture} \fill[gray!20] (3,0) -- (-3,0) -- (-3,-0.2) -- (3,-0.2) --cycle; \draw[myorange,thick] (-1.3,0) arc (0:180:0.7) node[midway,above,scale=0.9]{$C_L$}; \pinn{-1.3,0}{-135}{0.13}{0.03cm} \node[scale=0.9] at (-1.2,0.3){$x$}; \draw[red,thick] (-0.9,0) arc (180:0:0.9) node[midway,above,scale=0.9]{$C_R$}; \pinn{0.9,0}{-135}{0.13}{0.03cm} \draw[red,thick,dashed,->-] (-0.9,0) arc (180:360:0.7); \draw[thick] (3,0) -- (-3,0); \draw[dashed,<-,>=latex] (-1.3,-0.1) --++(0,-1) coordinate(A); \draw[dashed,<-,>=latex] (0.9,-0.1) --++(0,-1) coordinate(B); \draw[dashed] (A) --node[midway,below,scale=0.9]{glued} (B); \foreach \i in {0,2,-2} \fill(\i,0) circle(1.5pt); \node[scale=0.9] at (0,0.3) {$m$}; \snake{3.5,0}{5,0} node[midway,above=0.2em]{Glue}; \begin{scope}[xshift=8.5cm,yshift=-1cm] \fill[gray!20] (1.5,0) -- (-1.5,0) -- (-1.5,-0.2) -- (1.5,-0.2) --cycle; \draw[thick] (1.5,0) -- (-1.5,0); \draw[myorange,thick] (-0.7,0) arc(180:90:0.7) coordinate(X); \pinn{X}{-135}{0.13}{0.03cm} \node[scale=0.9] at (0.2,0.5) {$x$}; \draw[red,thick,name path=C1] (X) arc(-90:90:1.1) arc(90:270:0.9) coordinate(Y); \draw[red,thick,name path=C2] (Y) arc(-90:30:0.7); \draw (0,0) -- (0,2); \draw[blue,<->] ($(X)+(-0.1,0)$) coordinate(X') -- ($(Y)+(-0.1,0)$) coordinate(Y'); \draw[blue,dashed] ($(X')!0.5!(Y')$) --++(-2,-0)--++(0,-0.3) node[below,scale=0.9]{$2\log\lambda(C_L,C_R)$}; \draw[red,dashed,name path=ray] (0,2) --++(-45:2) node[above]{$\ell$}; \draw[name intersections={of=C1 and ray,by=F1}]; \draw[name intersections={of=C2 and ray,by=F2}]; \fill[red] (F1) circle(1.5pt); \fill[red] (F2) circle(1.5pt); \draw(0,2) ++(45:1.4) node[red]{$\widetilde{C}_\infty^h$}; \fill(0,0) circle(1.5pt); \fill(0,2) circle(1.5pt); \end{scope} \end{tikzpicture} \caption{Topological picture of the gluing that produces a new puncture in the case $\lambda(C_L,C_R) >1$. } \label{fig:horocycle_spiral} \end{figure} \subsection{Ensemble map} Recall the \emph{decorated Teichm\"uller\ space} introduced by Penner \cite{Penner}. Let $h$ be a marked hyperbolic structure having no closed geodesic boundary. In other words, the monodromy around each $m \in \mathbb{M}_\circ$ is assumed to be parabolic (unipotent). In the universal cover, each marked point gives rise to a $\pi_1(\Sigma)$-invariant collection of spikes. A \emph{decoration} of $h$ is a $\pi_1(\Sigma)$-equivariant collection $d$ of horocycles centered at these points. We call the pair $(h,d)$ a \emph{decorated hyperbolic structure}. An equivalence of decorated hyperbolic structures is similarly defined as in the case of hyperbolic structures with pinnings. \begin{dfn}[\cite{Penner}] The \emph{decorated Teichm\"uller\ space} (or the \emph{Teichm\"uller\ $\A$-space}) $\mathcal{T}^a(\Sigma)$ of $\Sigma$ is the set of equivalence classes of the decorated hyperbolic structures on $\Sigma$. \end{dfn} Since each geodesic lift on an ideal arc $\alpha$ in $\Sigma$ is equipped with a decoration, we have a \emph{lambda-length function} $A_\alpha:\mathcal{T}^a(\Sigma) \to \mathbb{R}_{>0}$ associated to $\alpha$. Given an ideal triangulation $\triangle$, the collection of lambda-length functions gives a real-analytic coordinate system \begin{align*} A_\triangle:=(A_\alpha)_{\alpha \in e(\triangle)}: \mathcal{T}^a(\Sigma) \xrightarrow{\sim} \mathbb{R}_{>0}^\triangle, \end{align*} and they combine to give an $MC(\Sigma)$-equivariant diffeomorphism $\mathcal{T}^a(\Sigma) \xrightarrow{\sim} \A_{SL_2,\Sigma}(\mathbb{R}_{>0})$. 
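Let us also record, for illustration (this is a classical fact; see \cite{Penner}), how the lambda-length functions behave under a flip: if $\kappa'$ denotes the diagonal obtained by flipping an interior edge $\kappa$, with the surrounding edges labeled as in \cref{fig:flip}, then the \emph{Ptolemy relation}
\begin{align*}
A_{\kappa}A_{\kappa'} = A_{\alpha}A_{\gamma}+A_{\beta}A_{\delta}
\end{align*}
holds on $\mathcal{T}^a(\Sigma)$, which is an instance of the cluster $K_2$-transformation \eqref{eq:A-transf}.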
See \cite[Chapter 2]{Penner} for details. Now we are going to study a relation between the Teichm\"uller\ spaces $\mathcal{T}^a(\Sigma)$ and $\mathcal{T}^p(\Sigma)$. We define the \emph{ensemble map} $p_\Sigma: \mathcal{T}^a(\Sigma) \to \mathcal{T}^p(\Sigma)$ as follows. Let $(h,d) \in \mathcal{T}^a(\Sigma)$ be a decorated hyperbolic structure. For each boundary interval $\alpha \in \mathbb{B}$, the decoration $d$ gives a horocycle on each endpoint of $\alpha$. We adopt the one assigned to the initial marked point $m^+_\alpha$ as the horocycle pinning over $\alpha$ (cf.~\cite[Section 12.2]{GS19}). Thus we get a hyperbolic structure with pinnings $(h,p)=p_\Sigma(h,d) \in \mathcal{T}^p(\Sigma)$. When $\Sigma$ has a puncture, the ensemble map is neither injective nor surjective, since it forgets the decorations on punctures and only produces marked hyperbolic structures without closed geodesic boundary. \begin{prop}\label{prop:ensemble} If $\Sigma$ is unpunctured, then the ensemble map $p_\Sigma: \mathcal{T}^a(\Sigma) \xrightarrow{\sim} \mathcal{T}^p(\Sigma)$ is a $C^\omega$-diffeomorphism. For any ideal triangulation $\triangle$ of $\Sigma$ and an edge $\kappa \in e(\triangle)$, the pull-back $p_\Sigma^*X_\kappa^\triangle$ is given as follows. \begin{enumerate} \item If $\kappa \in e_{\interior}(\triangle)$, then \begin{align*} p_\Sigma^*X_\kappa^\triangle = \frac{A_{\alpha} A_{\gamma}}{A_{\beta} A_{\delta}}, \end{align*} where the edges around $\kappa$ are labeled in the same way as in \cref{fig:flip}. \item If $\kappa \in \mathbb{B}$, then \begin{align*} p_\Sigma^*X_\kappa^\triangle = \frac{A_{\beta}}{A_{\kappa} A_{\alpha}}, \end{align*} where we relabel the edges sharing a triangle with $\kappa$ by $\alpha,\beta$ as in \cref{fig:ensemble_boundary}. \end{enumerate} \end{prop} \begin{figure} \begin{tikzpicture} \draw[fill, gray!30] (0,-0.2) rectangle (3,0); \path(0,0) node [fill, circle, inner sep=1.6pt] {} node[left]{$m$}; \path(3,0) node [fill, circle, inner sep=1.6pt] {}; \path(60:3) node [fill, circle, inner sep=1.6pt] {}; \draw[blue](0,0) --node[midway,below=0.2em]{$\kappa$} (3,0) --node[midway,right]{$\beta$} (60:3) --node[midway,left]{$\alpha$} cycle; \draw[red,thick] (0.7,0) arc[radius=0.7cm,start angle=0, end angle=60]; \node at (6,1.5) {$\displaystyle p_\Sigma^*X_{\kappa}^\triangle = \frac{A_{\beta}}{A_{\kappa} A_{\alpha}}$}; \end{tikzpicture} \caption{The pull-back action of the ensemble map on a boundary coordinate. Here $\kappa$ is a boundary interval.} \label{fig:ensemble_boundary} \end{figure} For the proof, the \emph{hyperboloid model} of the hyperbolic plane is useful. Let us briefly recall it. For details, see \cite[Chapter 1]{Penner}. Let $\mathbb{R}^{2,1}:=\mathbb{R}^3$ be the Minkowski space of signature $(2,1)$, having the inner product $\langle x,x'\rangle:=x_1x'_1+x_2x'_2-x_3x'_3$. We endow the hyperboloid $\mathcal{H}:=\{x\in \mathbb{R}^{2,1} \mid \langle x,x\rangle=-1,~x_3>0\}$ with the induced metric, which turns out to be isometric to $\mathbb{H}^2$. In this model, \begin{description} \item[Geodesic] given by the intersection of $\mathcal{H}$ and a timelike plane, which is the orthogonal complement $n^\perp$ of a spacelike vector $n$ ($\langle n,n\rangle >0$). We will denote such a geodesic simply by $n^\perp$. Two geodesics $n^\perp,(n')^\perp$ are perpendicular to each other if and only if $\langle n,n'\rangle=0$. The signed distance between a point $x$ and a geodesic $n^\perp$ satisfies $\sinh d(x,n^\perp)=\langle x,n \rangle$, provided $n$ is normalized so that $\langle n,n\rangle=1$.
\item[Horocycle] has the form \begin{align*} h(u):=\left\{ v \in \mathcal{H} ~\middle|~ \langle u,v\rangle = -\frac{1}{\sqrt{2}} \right\} \end{align*} for a lightlike vector $u=(u_1,u_2,u_3)$ ($\langle u,u\rangle=0$) with $u_3 >0$. We will identify the horocycle $h(u)$ and the vector $u$. The lambda-length is given by $\lambda(u,u')=\sqrt{-\langle u,u'\rangle}$. \end{description} \begin{proof} The formula for the first case is well-known. See \cite[Chapter 1, Corollary 4.14 (b)]{Penner}. For the second case, it suffices to consider a hyperbolic triangle equipped with a horocycle at each vertex. It is convenient to use the light-cone basis \begin{align*} u=\frac{1}{\sqrt{2}}(-1,0,1), \quad v=\frac{1}{\sqrt{2}}(1,0,1), \quad w=\sqrt{2}(0,-1,1), \end{align*} which satisfies $\langle u,v\rangle=\langle v,w\rangle=\langle w,u\rangle=-1$. The lambda-lengths between the rescaled horocycles $u':=(A_{\beta}A_{\kappa}/A_{\alpha})u$, $v':=(A_{\alpha}A_{\kappa}/A_{\beta})v$, $w':=(A_{\beta}A_{\alpha}/A_{\kappa})w$ are given as follows: \begin{align*} \lambda(u',v') = A_{\kappa},\quad \lambda(v',w') = A_{\alpha},\quad \lambda(w',u') = A_{\beta}. \end{align*} Now let us consider the hyperbolic triangle $(\bar{u},\bar{v},\bar{w})$ spanned by the centers of the horocycles $u',v',w'$. Let $x$ be the intersection point of the geodesic $\gamma:=[\bar{u},\bar{v}]$ and the horocycle $h(v')$, and $y$ the foot of the geodesic $\delta$ from $\bar{w}$ perpendicular to $\gamma$. See \cref{fig:ensemble_computation}. Our aim is to compute the distance between $x$ and $y$ in terms of the lambda-lengths $A_{\kappa},A_{\alpha},A_{\beta}$. Since $\gamma$ is clearly the intersection of $\mathcal{H}$ and the $uv$-plane, a normal vector of this plane is given by $u+v-w$. Then the point $x$ can be computed as a solution of two linear equations (defining $\gamma$ and $h(v')$) and a quadratic equation (defining $\mathcal{H}$), which is given by \begin{align*} x = \frac{1}{\sqrt{2}}\frac{A_{\beta}}{A_{\kappa}A_{\alpha}}u + \frac{1}{\sqrt{2}}\frac{A_{\kappa}A_{\alpha}}{A_{\beta}}v. \end{align*} A short computation also shows that the unit normal vector of $\delta$ is $n=1/\sqrt{2}(u-v)$. Therefore the signed distance $d(x,y)$ is computed as \begin{align*} \sinh d(x,y) = \sinh d(x,n^\perp) = \langle x,n\rangle = \frac{1}{2}\left( \frac{A_{\beta}}{A_{\kappa}A_{\alpha}} - \frac{A_{\kappa}A_{\alpha}}{A_{\beta}} \right). \end{align*} Since $\sinh(\log t)=\frac{1}{2}(t-t^{-1})$, we get $e^{d(x,y)} = A_{\beta}/(A_{\kappa}A_{\alpha})$, as desired. \end{proof} \begin{figure} \begin{tikzpicture}[scale=1] \begin{scope} \draw[fill, gray!30] (-2,0.1) rectangle (2,0); \draw(0,0) circle(2cm); \clip(0,0) circle(2cm); \hgline{0}{180}{2} \hgline{0}{270}{2} \hgline{270}{180}{2} \draw(0,0) node[above=0.3em,scale=0.9]{$A_{\kappa}$}; \draw(1,-1) node[above,scale=0.9]{$A_{\alpha}$}; \draw(-1,-1) node[above,scale=0.9]{$A_{\beta}$}; \draw[red] (1.7,0) circle(0.3cm); \path(1.4,0) node [fill, circle, inner sep=1.1pt] {} node[above left]{$x$}; \draw[dashed] (0,-2) --node[midway,above left,scale=0.9]{$\delta$} (0,0); \path(0,0) node [fill, circle, inner sep=1.1pt] {}; \draw[dashed,<-] (0,0) -- (-1,0.5) node[left,scale=0.9]{$y$}; \end{scope} \draw(-2,0) node[left,scale=0.9]{$u'$}; \draw(2,0) node[right,scale=0.9]{$v'$}; \draw(0,-2) node[below,scale=0.9]{$w'$}; \end{tikzpicture} \caption{Computation of the ensemble map.} \label{fig:ensemble_computation} \end{figure} We set $m_{\alpha\beta}:=-\delta_{\alpha\beta}$ if $\alpha,\beta \in \mathbb{B}$, and otherwise $m_{\alpha\beta}:=0$.
Then the two formulae in \cref{prop:ensemble} are combined into \begin{align}\label{eq:ensemble_combined} p_\Sigma^\ast X_\kappa^\triangle = \prod_{\alpha \in e(\triangle)} A_\alpha^{\varepsilon_{\kappa\alpha}^\triangle+m_{\kappa\alpha}}, \end{align} where recall the exchange matrix $\varepsilon^\triangle=(\varepsilon_{\alpha\beta}^\triangle)$ from \cref{app:cluster}. This agrees with the formula given in \cite[Proposition 12.4]{GS19} for the $A_1$ case. \begin{rem}The right-hand side of the formula in \cref{prop:ensemble} (2) coincides with the \emph{$h$-length} of the decoration assigned to $m^+_\kappa$ \cite[Chapter 1, Lemma 4.7]{Penner}. \end{rem} \subsection{The $\lambda$-lengths in terms of the cross ratios} Assuming that $\Sigma$ is unpunctured, we are going to give the inverse formula to \eqref{eq:ensemble_combined}. In this case, we can identify the two Teichm\"uller\ spaces $\mathcal{T}^a(\Sigma)$ and $\mathcal{T}^p(\Sigma)$ via the ensemble map $p_\Sigma$. Thus we omit the symbol $p_\Sigma^\ast$ in the following. \begin{dfn}[positive $\mathbb{B}$-shift of ideal arcs]\label{def:shift_ideal} Given an ideal arc $\alpha$ on $\Sigma$, we define its \emph{(positive) $\mathbb{B}$-shift} to be the simple curve $\alpha_ {\mathbb{B}}$ having its endpoints on $\partial^\ast \Sigma$ obtained from $\alpha$ by shifting its endpoints to the next boundary interval in the positive direction along $\partial\Sigma$. See \cref{fig:shifting}. \end{dfn} \begin{figure}[ht] \centering \begin{tikzpicture} \begin{scope}[xshift=0cm] \fill[gray!20] (0,1.5) -- (-0.2,1.5) -- (-0.2,-1.5) -- (0,-1.5) --cycle; \fill[gray!20] (4,1.5) -- (4+0.2,1.5) -- (4+0.2,-1.5) -- (4,-1.5) --cycle; \draw[thick] (0,1.5) -- (0,-1.5); \draw[thick] (4,-1.5) -- (4,1.5); \filldraw(0,1) circle(1.5pt); \filldraw(0,0) circle(1.5pt); \filldraw(0,-1) circle(1.5pt); \filldraw(4,1) circle(1.5pt); \filldraw(4,0) circle(1.5pt); \filldraw(4,-1) circle(1.5pt); \draw[red,thick] (0,0) to[out=0,in=180] node[midway,above]{$\alpha$} (4,0); \end{scope} \begin{scope}[xshift=6cm] \fill[gray!20] (0,1.5) -- (-0.2,1.5) -- (-0.2,-1.5) -- (0,-1.5) --cycle; \fill[gray!20] (4,1.5) -- (4+0.2,1.5) -- (4+0.2,-1.5) -- (4,-1.5) --cycle; \draw[thick] (0,1.5) -- (0,-1.5); \draw[thick] (4,-1.5) -- (4,1.5); \filldraw(0,1) circle(1.5pt); \filldraw(0,0) circle(1.5pt); \filldraw(0,-1) circle(1.5pt); \filldraw(4,1) circle(1.5pt); \filldraw(4,0) circle(1.5pt); \filldraw(4,-1) circle(1.5pt); \draw[red,thick] (0,-0.5) to[out=0,in=180] node[midway,above]{$\alpha_\mathbb{B}$} (4,0.5); \end{scope} \end{tikzpicture} \caption{The positive $\mathbb{B}$-shift of an ideal arc.} \label{fig:shifting} \end{figure} \begin{thm}\label{thm:A to X} Assume that $\Sigma$ is unpunctured. Then for each edge $\alpha \in e(\triangle)$ of an ideal triangulation, we have the inverse formula to \eqref{eq:ensemble_combined}: \begin{align*} A_\alpha = \prod_{\beta \in e(\triangle)} (X^\triangle_{\beta})^{q_{\alpha\beta}}. \end{align*} Here $q_{\alpha\beta}:=-\mathsf{a}_{\beta}(\alpha_\mathbb{B})$, and $\mathsf{a}_{\beta}(\alpha_\mathbb{B}) \in \frac{1}{2}\mathbb{Z}_{\geq 0}$ denotes half the geometric intersection number between the two curves $\alpha_\mathbb{B}$ and $\beta$. \end{thm} \begin{proof} Let us write \begin{align*} n_{\alpha\beta}:=\sum_{\gamma \in e(\triangle)} q_{\alpha\gamma}p^\triangle_{\gamma\beta} \end{align*} for $\alpha,\beta \in e(\triangle)$. Then it suffices to prove the equation $n_{\alpha\beta}=\delta_{\alpha\beta}$ for all $\alpha,\beta \in e(\triangle)$. 
Fix an edge $\alpha \in e(\triangle)$, and give $\alpha_\mathbb{B}$ an arbitrary orientation. Let $\alpha_0,\dots,\alpha_m$ be the edges of $\triangle$ traversed by $\alpha_\mathbb{B}$ in this order, where $\alpha_0,\alpha_m \in \mathbb{B}$ are end-intervals of $\alpha$, and the other $\alpha_i$ are interior edges. Then we get $n_{\alpha\beta}=-1/2\sum_{i=0}^m p^\triangle_{\alpha_i,\beta}$, with a notice that we allow $\alpha_i=\alpha_j$ for some $i\neq j$. First consider the case where $\alpha$ is an interior edge. Then $\alpha$ is the diagonal of a unique quadrilateral $Q_\alpha$ in $\triangle$. There is a unique $0 \leq i_0 \leq m$ such that $\alpha_{i_0}=\alpha$ and $\alpha_{i_0\pm 1}$ are the opposite sides of $Q_\alpha$. See the left picture in \cref{fig:shift_interior}. Then one can verify the equations \begin{align*} n_{\alpha,\alpha_i} = \begin{cases} -\frac{1}{2}(p^\triangle_{\alpha_0,\alpha_0}+p^\triangle_{\alpha_1,\alpha_0}) =-\frac{1}{2}(-1+1) = 0 & \mbox{for $i=0$}, \\ -\frac{1}{2}(p^\triangle_{\alpha_{i-1},\alpha_{i}}+p^\triangle_{\alpha_{i+1},\alpha_{i}}) & \mbox{for $0 < i < m$}, \\ -\frac{1}{2}(p^\triangle_{\alpha_{m-1},\alpha_m}+p^\triangle_{\alpha_m,\alpha_m}) =-\frac{1}{2}(1-1) = 0 & \mbox{for $i=m$}. \end{cases} \end{align*} The middle case produces $n_{\alpha,\alpha_{i_0}}=1$ if $i=i_0$, and otherwise $n_{\alpha,\alpha_i}=0$. It is easier to verify $n_{\alpha\beta}=0$ for $\beta \in e(\triangle) \setminus \{\alpha_i\}_{i=0}^m$. Thus $n_{\alpha\beta}=\delta_{\alpha\beta}$ holds in this case. \begin{figure}[ht] \centering \begin{tikzpicture} \bline{-2,-2}{2,-2}{0.2}; \tline{-2,2}{2,2}{0.2}; \draw[blue] (-2,0) -- (0,-2) -- (2,0) -- (0,2) -- cycle; \draw[blue] (0,2) -- (0,-2); \foreach \i in {1,2} \draw(-2,0)++(0,\i*0.5) coordinate(A\i); \draw[blue] (-2,0) -- (A1) -- (0,2); \draw[blue] (A1) --(A2) -- (0,2); \foreach \i in {1,2} \draw(2,0)++(0,-\i*0.5) coordinate(B\i); \draw[blue] (2,0) -- (B1) -- (0,-2); \draw[blue] (B1) --(B2) -- (0,-2); \draw[blue,thick,dotted] (0,2)++(-160:1.5) arc(-160:-178:1.5); \draw[blue,thick,dotted] (0,-2)++(20:1.5) arc(20:2:1.5); \filldraw (0,2) circle(1.5pt); \filldraw (0,-2) circle(1.5pt); \draw[red,thick] (0.5,-2) to[out=90,in=-45] (0,0) to[out=135,in=-90] (-0.5,2); \node[blue] at (-1,2.4) {$\alpha_0$}; \node[blue] at (1,-2.4) {$\alpha_m$}; \node[blue] at (0.3,0.4) {$\alpha_{i_0}$}; \node[red] at (-0.3,-0.1) {$\alpha_\mathbb{B}$}; \begin{scope}[xshift=7cm] \tline{-2,2}{2,2}{0.2}; \foreach \i in {-30,-45,-60,-90,-120,-135,-150} \draw[blue] (0,2) --++(\i:2); \draw[blue,thick,dotted] (0,2)++(-160:1.5) arc(-160:-178:1.5); \draw[blue,thick,dotted] (0,2)++(-20:1.5) arc(-20:-2:1.5); \filldraw (0,2) circle(1.5pt); \draw[red,thick] (-1,2) arc(-180:0:1); \node[blue] at (-1,2.4) {$\alpha_0$}; \node[blue] at (1,2.4) {$\alpha_m$}; \node[red] at (-0.3,0.7) {$\alpha_\mathbb{B}$}; \end{scope} \end{tikzpicture} \caption{Computation of the matrix $n_{\alpha\beta}$. Left: the case $\alpha \in e_{\interior}(\triangle)$, Right: the case $\alpha \in \mathbb{B}$.} \label{fig:shift_interior} \end{figure} In the case where $\alpha$ is a boundary interval, the curve $\alpha_\mathbb{B}$ is the corner arc surrounding its terminal marked point $m \in \mathbb{M}$. Let us give $\alpha_\mathbb{B}$ an orientation so that it runs around $m$ in the counter-clockwise direction. We have $\alpha=\alpha_m$. See the right picture in \cref{fig:shift_interior}. 
Then one can verify that $n_{\alpha,\alpha_i}=0$ for $0 \leq i <m$, and $n_{\alpha,\alpha_m}=-\frac{1}{2}(p^\triangle_{\alpha_{m-1},\alpha_m}+p^\triangle_{\alpha_m,\alpha_m})=-\frac{1}{2}(-1-1)=1$ in this case. Thus $n_{\alpha\beta}=\delta_{\alpha\beta}$ holds in this case. The assertion is proved. \end{proof} In particular, the Poisson brackets of $\lambda$-length functions along compatible arcs are computed as \begin{align}\label{eq:Poisson_lambda} \{A_\alpha,A_\beta\} = \left\{ \prod_{\gamma} (X^\triangle_{\gamma})^{q_{\alpha\gamma}}, \prod_{\delta} (X^\triangle_{\delta})^{q_{\beta\delta}}\right\} = \left(\sum_{\gamma,\delta} q_{\alpha\gamma}\varepsilon^\triangle_{\gamma\delta}q_{\beta\delta}\right) A_\alpha A_\beta, \end{align} where we take any ideal triangulation $\triangle$ containing $\alpha,\beta$. In order to describe it more precisely, recall Muller's compatibility matrix $\pi_{\alpha\beta}$, defined as follows. For an ideal arc $\alpha$, let $\alpha_+,\alpha_-$ denote its two ends (with an arbitrary labeling). For two ideal arcs $\alpha,\beta$, define \begin{align*} \pi_{\alpha_\mu,\beta_{\nu}}:=\begin{cases} 1 & \mbox{if $\alpha_\mu$ is clockwise to $\beta_{\nu}$ at a common marked point}, \\ -1 & \mbox{if $\alpha_\mu$ is counter-clockwise to $\beta_{\nu}$ at a common marked point}, \\ 0 & \mbox{otherwise}, \end{cases} \end{align*} and set $\pi_{\alpha\beta}:=\sum_{\mu,\nu=+,-} \pi_{\alpha_\mu,\beta_{\nu}}$. \begin{lem} For any unpunctured marked surface, we have \begin{align*} \sum_{\gamma,\delta \in e(\triangle)} q_{\alpha\gamma}\varepsilon^\triangle_{\gamma\delta}q_{\beta\delta}=-\frac{1}{4}\pi_{\alpha\beta}. \end{align*} In particular, the Poisson bracket \eqref{eq:Poisson_lambda} becomes \begin{align*} \{A_\alpha,A_\beta\} = -\frac{1}{4}\pi_{\alpha\beta} A_\alpha A_\beta. \end{align*} \end{lem} \begin{proof} Since $\varepsilon^\triangle_{\gamma\delta} = p^\triangle_{\gamma\delta} -m_{\gamma\delta} = (q^{-1})_{\gamma\delta} - m_{\gamma\delta}$ by the lemma above, we have \begin{align*} &\sum_{\gamma,\delta \in e(\triangle)} q_{\alpha\gamma}\varepsilon^\triangle_{\gamma\delta}q_{\beta\delta} \\ &=q_{\beta\alpha} - \sum_{\gamma \in \mathbb{B}, \delta \in e(\triangle)} q_{\alpha\gamma}m_{\gamma\delta}q_{\beta\delta}\\ &= q_{\beta\alpha} + \sum_{\gamma \in \mathbb{B}} q_{\alpha\gamma}q_{\beta\gamma} \\ &= -\mathsf{a}_\alpha(\beta_\mathbb{B}) + \sum_{\gamma \in \mathbb{B}} \mathsf{a}_{\gamma}(\alpha_\mathbb{B})\mathsf{a}_{\gamma}(\beta_\mathbb{B}). \end{align*} The last expression is clearly $0$ if $\alpha$ and $\beta$ do not share endpoints. If an end $\alpha_\mu$ of $\alpha$ is clockwise to an end $\beta_\nu$ of $\beta$ at a common marked point, then such a pair $(\alpha_\mu,\beta_\nu)$ contributes to $\sum_{\gamma \in \mathbb{B}} \mathsf{a}_{\gamma}(\alpha_\mathbb{B})\mathsf{a}_{\gamma}(\beta_\mathbb{B})$ by $1/4$, and to $-\mathsf{a}_\alpha(\beta_\mathbb{B})$ by $-1/2$. In total, its contribution is $-1/4$. If $\alpha_\mu$ is counter-clockwise to $\beta_\nu$, then the contribution of the pair $(\alpha_\mu,\beta_\nu)$ is $0+1/4$, since the shifted end $\beta_{\nu,\mathbb{B}}$ is disjoint from $\alpha_\mu$ in this case. Thus the assertion is proved. \end{proof} \begin{rem} \begin{enumerate} \item By the lemma above, the Poisson algebra $(C^\infty(\mathcal{T}^a(\Sigma)),-4\cdot\{\ ,\ \})$ is the classical limit of Muller's skein algebra $\mathscr{S}_\Sigma^{q}$. \item A similar computation works for higher rank cases as well.
In the $\mathfrak{sl}_3$-case, the $A$-variables are expressed as Laurent monomials of $X$-variables with exponents given by $-1/3$ times the Douglas--Sun coordinates \cite{DS20I} of the bounded $\mathfrak{sl}_3$-laminations obtained by $\mathbb{B}$-shifting the corresponding elementary webs \cite{IYsl3}. The Poisson bracket multiplied by $-6$ gives the classical limit of the skein algebra studied in \cite{IYsl3}. \end{enumerate} \end{rem} \begin{rem} Via the correspondence between the decorations and pinnings, one can also consider the gluing map $q_{\Sigma,\Sigma'}:\mathcal{T}^a(\Sigma) \to \mathcal{T}^a(\Sigma')$ for any marked surface $\Sigma$ in the way illustrated as \begin{align*} \begin{tikzpicture}[scale=0.9] \begin{scope} \draw (-2,0) -- (0,0) -- (0,3) -- (-2,3); \draw[red] (0,0) arc(-45:-45+360:0.9); \draw[red] (1,3) arc(135:135-360:0.6); \draw[blue] (0,3) arc(45:45+360:0.4); \draw[mygreen] (1,0) arc(-135:-135+360:0.6); \foreach \i in {0,1} \foreach \j in {0,3} \fill(\i,\j) circle(1.5pt); \pinn{0,0.9*1.414}{180}{0.1}{0.03cm} \draw[dashed] (0.1,0.9*1.414) -- (0.9,3-0.6*1.414); \end{scope} \begin{scope}[xshift=1cm] \draw (2,0) -- (0,0) -- (0,3) -- (2,3); \pinn{0,3-0.6*1.414}{0}{0.1}{0.03cm} \end{scope} \draw[thick,|->] (3.5,1.5) --node[midway,above]{$q_{\Sigma,\Sigma'}$}++(1,0); \begin{scope}[xshift=7cm] \draw (-2,0) -- (2,0); \draw (-2,3) -- (2,3); \draw[dashed] (0,0) -- (0,3); \draw[blue] (0,3) arc(45:45+360:0.4); \draw[mygreen] (0,0) arc(-135:-135+360:0.6); \foreach \j in {0,3} \fill(0,\j) circle(1.5pt); \end{scope} \end{tikzpicture}\ , \end{align*} which is invariant under the $\mathbb{R}_{>0}$-action rescaling the $h$-lengths of the red horocycles by $(\lambda,\lambda^{-1})$ for $\lambda \in \mathbb{R}_{>0}$. It clearly satisfies $q_{\Sigma,\Sigma'}^\ast A_{\overline{\alpha}}=A_{\alpha_L}\cdot A_{\alpha_R}$. \end{rem} \subsection{Wilson lines and $\lambda$-length} In addition to the usual trace functions of monodromy (\emph{a.k.a.} Wilson loops), the data of pinnings allow us to consider a wider class of functions associated to arcs connecting boundary intervals, which we call the \emph{Wilson lines}. Let us recall the setting from \cite[Section 3.3]{IO20} with a specialization to the $A_1$ case. An \emph{arc class} is the homotopy class $[c]$ of a path $c$ in $\Sigma$ which runs between two boundary interval $\alpha_\mathrm{in}$ and $\alpha_\mathrm{out}$, where the homotopies are relative to $\partial^\ast \Sigma$. Given an arc class $[c]$ from $\alpha_\mathrm{in}$ to $\alpha_\mathrm{out}$ and $(h,p) \in \mathcal{T}^p(\Sigma)$, we define an isometry $g_{[c]}(h,p) \in PSL_2(\mathbb{R})$ as follows. Choose a fundamental polygon $\widetilde{\Pi}^h \subset \mathbb{H}^2$ of $\Sigma$ so that the unique lift $\widetilde{\alpha}^h_\mathrm{in}$ of $\alpha_\mathrm{in}$ contained in $\widetilde{\Pi}^h$ sits in the \lq\lq normalized" position: $\widetilde{\alpha}^h_\mathrm{in}=\sqrt{-1}\mathbb{R}_{>0}$ and the point pinning $p_{\alpha_\mathrm{in}}$ gives $\sqrt{-1} \in \widetilde{\alpha}^h_\mathrm{in}$. Let $\widetilde{c}^h$ be the lift of $c$ which starts from $\widetilde{\alpha}^h_\mathrm{in}$, which ends on a certain side $\widetilde{\alpha}^h_\mathrm{out}$ of $\widetilde{\Pi}^h$. The terminal side $\widetilde{\alpha}^h_\mathrm{out}$ must be a lift of $\alpha_\mathrm{out}$, so it is equipped with a point pinning determined by $p_{\alpha_\mathrm{out}}$. 
Define $g=g_{[c]}(h,p) \in PSL_2(\mathbb{R})$ to be the unique isometry such that $g(\widetilde{\alpha}^h_\mathrm{in})=\widetilde{\alpha}^h_\mathrm{out}$, matching the point pinnings on them. In this way, we get a map \begin{align*} g_{[c]}: \mathcal{T}^p(\Sigma) \to PSL_2(\mathbb{R}), \end{align*} which we call the \emph{Wilson line} along $[c]$. Let $\triangle$ be an ideal triangulation of $\Sigma$. Represent an arc class $[c]$ by a curve $c$ so that the intersection with $\triangle$ is minimal. Label the edges (resp. triangles) of $\triangle$ that $c$ traverses as $\alpha_\mathrm{in}=\alpha_0,\dots,\alpha_M=\alpha_\mathrm{out}$ (resp. $T_1,\dots,T_M$) in this order. Note that each intersection $c \cap T_\nu$ is one of the two patterns shown in \cref{f:intersection}. The \emph{turning pattern} of $[c]$ with respect to $\triangle$ is the sequence $\tau_\triangle([c]) = (\tau_\nu)_{\nu=1}^M \in \{L,R\}^{M}$, where $\tau_\nu = L$ (resp. $\tau_\nu=R$) if $c \cap T_\nu$ is the left (resp. right) pattern in \cref{f:intersection}. \begin{figure}[hb] \centering \begin{tikzpicture}[scale=0.9] \draw (0,0) coordinate (B1) node[below]{$\ast$}; \draw (240: 2) coordinate (B2); \draw (300: 2) coordinate (B3); \draw (B1) -- (B2) -- (B3) --cycle; \draw[->,thick,color=red] (240:1) arc[start angle=240, end angle=300, radius=1cm] node[midway,below]{$c$}; \draw(-2,-0.5) node{$L$}; \begin{scope}[xshift=5cm] \draw (0,0) coordinate (B1) node[below]{$\ast$}; \draw (240: 2) coordinate (B2); \draw (300: 2) coordinate (B3); \draw (B1) -- (B2) -- (B3) --cycle; \draw[->,thick,color=red] (300:1) arc[start angle=300, end angle=240, radius=1cm] node[midway,below]{$c$}; \draw(-2,-0.5) node{$R$}; \end{scope} \end{tikzpicture} \caption{Two intersection patterns of $c \cap T_\nu$.} \label{f:intersection} \end{figure} \begin{thm}[\cite{FG07,Penner,IO20}]\label{thm:LR-formula} Let $\triangle$ be an ideal triangulation of $\Sigma$, and $[c]$ an arc class. Then in terms of the cross ratios $X_\nu:=X_{\alpha_\nu}^\triangle(h,p)$ for $\nu=0,\dots,M$, the Wilson line $g_{[c]}$ is expressed as \begin{align}\label{eq:LR-formula} g_{[c]}(h,p) = H(X_0)\mathbb{E}^{\tau_1}H(X_1)\mathbb{E}^{\tau_2}\dots H(X_{M-1})\mathbb{E}^{\tau_{M}}H(X_M), \end{align} where \begin{align*} H(X):=\begin{bmatrix}X^{1/2} & 0 \\ 0 & X^{-1/2} \end{bmatrix}, \quad \mathbb{E}^L:=\begin{bmatrix}1 & 1 \\ 0 & 1 \end{bmatrix}, \quad \mathbb{E}^R:=\begin{bmatrix}1 & 0 \\ 1 & 1 \end{bmatrix} \in PSL_2(\mathbb{R}). \end{align*} \end{thm} The Wilson line reproduces the lambda-length function, as follows. For a $2\times 2$ matrix $M$, let $\Delta_{ij}(M)$ denote its $(i,j)$-entry for $i,j=1,2$. For $M \in PSL_2(\mathbb{R})$, $\Delta_{ij}(M)$ is defined up to sign. Observe that an arc class $[c]$ without self-intersections is represented by the $\mathbb{B}$-shift $c=\alpha_\mathbb{B}$ (\cref{def:shift_ideal}) of some ideal arc $\alpha$, together with an arbitrary orientation. \begin{prop}\label{prop:Wilson_lambda} Let $[\alpha_\mathbb{B}]$ be the arc class represented by the $\mathbb{B}$-shift of an ideal arc $\alpha$. \begin{enumerate} \item For any ideal triangulation $\triangle$ of $\Sigma$, we have \begin{align*} |\Delta_{22}(g_{[\alpha_\mathbb{B}]})| = \prod_{\beta \in e(\triangle)}(X_\beta^\triangle)^{-\mathsf{a}_\beta(\alpha_\mathbb{B})}\cdot F_\alpha^\triangle(X_\triangle), \end{align*} where $F_\alpha^\triangle(X_\triangle)$ is a polynomial in the cross ratio coordinates with respect to $\triangle$ with constant term $1$.
\item We have \begin{align*} |\Delta_{22}(g_{[\alpha_\mathbb{B}]})| = A_\alpha. \end{align*} \end{enumerate} \end{prop} \begin{proof} Let $\triangle$ be any ideal triangulation, and $\alpha_0,\dots,\alpha_M$ its edges that the curve $\alpha_\mathbb{B}$ traverses in this order. Let us rescale the diagonal matrices as \begin{align*} H'(X):=X^{1/2}H(X) = \begin{bmatrix}X & 0 \\ 0 & 1 \end{bmatrix}. \end{align*} Then the formula \eqref{eq:LR-formula} becomes \begin{align*} g_{[\alpha_\mathbb{B}]} = \prod_{\nu=0}^M X_\nu^{-1/2}\cdot H'(X_0)\mathbb{E}^{\tau_1}H'(X_1)\mathbb{E}^{\tau_2}\dots H'(X_{M-1})\mathbb{E}^{\tau_{M}}H'(X_M). \end{align*} Noting that the monomial term $\prod_{\nu=0}^M X_\nu^{-1/2}$ coincides with $\prod_{\beta \in e(\triangle)}(X_\beta^\triangle)^{-\mathsf{a}_\beta(\alpha_\mathbb{B})}$, and that the $(2,2)$-entry of the remaining matrix product is a polynomial in the $X_\nu$ with constant term $1$, we see that the first assertion holds. If $\triangle$ contains the ideal arc $\alpha$, then we have $A_\alpha=\prod_{\beta \in e(\triangle)}(X_\beta^\triangle)^{-\mathsf{a}_\beta(\alpha_\mathbb{B})}$ by \cref{thm:A to X}. In this case, the turning pattern is given by $\tau_1=\cdots=\tau_{i_0}=L$ and $\tau_{i_0+1}=\cdots=\tau_{M}=R$ in the notation of \cref{fig:shift_interior}. In particular we get $F_\alpha^\triangle=1$, and hence $|\Delta_{22}(g_{[\alpha_\mathbb{B}]})| = A_\alpha$ holds. \end{proof} \begin{rem} The expressions of the other entries of $g_{[c]}$ are also given in \cite[(5.1)]{IOS22}. \end{rem} \section{Lamination spaces with pinnings}\label{sec:lamination} In this section, we introduce the space of \emph{$\P$-laminations} which will be identified with the set of real tropical points of the moduli space $\P_{PGL_2,\Sigma}$. \subsection{The space $\mathcal{L}^p(\Sigma,\mathbb{Q})$ of rational $\P$-laminations and shear coordinates} Let $\Sigma$ be a marked surface. Throughout this section, by a \emph{curve} we mean an unoriented curve $\gamma$ in $\Sigma$ which is either closed or has endpoints in $\mathbb{M}_\circ \cup \partial^\ast \Sigma$, and whose remaining part is embedded in $\Sigma^\ast$. Isotopies of curves are considered within this class. Such a curve $\gamma$ is said to be \begin{itemize} \item \emph{peripheral}\footnote{It is called \lq\lq special" in \cite{FG07}.} if it is isotopic either to a puncture $m \in \mathbb{M}_\circ$ or to an interval in $\partial \Sigma$ which contains exactly one special point $m \in M_\partial$. \begin{align}\label{eq:peripheral} \begin{tikzpicture}[scale=.8] \draw[dashed, fill=white] (0,0) circle [radius=1]; \draw[thick, red, fill=pink!60] (0,0) circle [radius=0.5]; \filldraw[draw=black,fill=white] (0,0) circle(2.5pt); \begin{scope}[xshift=5cm] \coordinate (P) at (-0.5,0) {}; \coordinate (P') at (0.5,0) {}; \coordinate (C) at (0,0.5) {}; \draw[thick, red, fill=pink!60] (P) to[out=north, in=west] (C) to[out=east, in=north] (P'); \draw[dashed] (1,0) arc (0:180:1cm); \bline{-1,0}{1,0}{0.2} \draw[fill=black] (0,0) circle(2pt); \end{scope} \end{tikzpicture} \end{align} In each case, it is called a peripheral curve \emph{around $m$}. \item \emph{contractible} if it is isotopic to a point.
\end{itemize} \begin{dfn}\label{d:P-lamination} A \emph{rational $\P$-lamination} on $\Sigma$ consists of the following data: \begin{itemize} \item a collection $L=\{(\gamma_j,w_j)\}_j$ of mutually disjoint non-peripheral curves $\gamma_j$ in $\Sigma$ equipped with non-negative rational weights $w_j\geq 0$; \item a tuple $\sigma_L=(\sigma_m)_{m \in \mathbb{M}_\circ} \in \{+,0,-\}^{\mathbb{M}_\circ}$ of signs assigned to punctures such that $\sigma_m=0$ if and only if there are no curves incident to $m$; \item a tuple $\nu=(\nu_\alpha)_{\alpha \in \mathbb{B}} \in \mathbb{Q}^\mathbb{B}$ of rational numbers assigned to boundary intervals. \end{itemize} Such a data is considered modulo the equivalence relation generated by isotopies and the following operations: \begin{enumerate} \item Remove a contractible curve or a curve with weight $0$. \item Combine a pair of isotopic curves with weights $u$ and $v$ into a single curve with the weight $u+v$. \end{enumerate} \end{dfn} Let $\mathcal{L}^p(\Sigma,\mathbb{Q})$ denote the set of rational $\P$-laminations. We call the tuple $\sigma_L=(\sigma_m)_{m \in \mathbb{M}_\circ}$ the \emph{lamination signature}, and $\nu$ the \emph{pinning}. Forgetting the pinnings, we get the projection \begin{align*} \pi_\Sigma^\mathsf{T}: \mathcal{L}^p(\Sigma,\mathbb{Q}) \to \mathcal{L}^x(\Sigma,\mathbb{Q}), \quad (L,\sigma_L,\nu) \mapsto (L,\sigma_L), \end{align*} where $\mathcal{L}^x(\Sigma,\mathbb{Q})$ denotes the space of rational $\X$-laminations of Fock--Goncharov \cite{FG07}. A rational $\P$-lamination is said to be \emph{integral} if all the weights of the curves and the pinnings $\nu_\alpha$ are integers. Let $\mathcal{L}^p(\Sigma,\mathbb{Z})\subset \mathcal{L}^p(\Sigma,\mathbb{Q})$ denote the subset of integral $\P$-laminations. \smallskip \paragraph{\textbf{Spiralling diagram.}} Given a rational $\X$-lamination $(L,\sigma_L) \in \mathcal{L}^x(\Sigma,\mathbb{Q})$, we deform each curve $\gamma_j$ in $L$ incident to a puncture $m$, as follows: if the sign $\sigma_m$ is positive (resp. negative), then replace the corresponding end of $\gamma_j$ with an infinite curve $\widehat{\gamma}_j$ that spirals around $m$ in the clockwise (resp. counter-clockwise) direction. See \cref{fig:spiral}. The resulting diagram $\widehat{L}$ is called the \emph{spiralling diagram} of $(L,\sigma_L)$. \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.9] \draw[dashed] (-2.5,-1.5) circle(2cm); \draw [red](-3,0.45) .. controls (-2.5,0) and (-2.9,-0.8) .. (-2.5,-1.5); \filldraw[fill=white] (-2.5,-1.5) circle(2pt); \node[red] at (-2.4,-0.1) {$\gamma_j$}; \node[red] at (-2.3,-1.7) {$+$}; \node at (-2.8,-1.7) {$m$}; \draw (5,-1.5) circle(2pt); \draw[dashed] (5,-1.5) circle(2cm); \node[red] at (5.3,-0.1) {$\widehat{\gamma}_j$}; \draw [red](4.5,0.45) .. controls (5,0) and (5.55,-1.05) .. (5.55,-1.5) .. controls (5.55,-1.85) and (5.25,-2) .. (5,-2) .. controls (4.75,-2) and (4.55,-1.8) .. (4.55,-1.5) .. controls (4.55,-1.25) and (4.75,-1.1) .. (5,-1.1) .. controls (5.25,-1.1) and (5.4,-1.25) .. (5.4,-1.5) .. controls (5.4,-1.75) and (5.2,-1.85) .. (5,-1.85) .. controls (4.85,-1.85) and (4.7,-1.7) .. (4.7,-1.5) .. controls (4.7,-1.35) and (4.85,-1.25) .. (5,-1.25) .. controls (5.15,-1.25) and (5.25,-1.35) .. (5.25,-1.5) .. controls (5.25,-1.6) and (5.15,-1.7) .. (5,-1.7) .. controls (4.9,-1.7) and (4.85,-1.6) .. (4.85,-1.5); \draw [red, thick, dotted](4.85,-1.5) .. controls (4.85,-1.3) and (5.15,-1.3) .. 
(5.15,-1.5); \draw [thick,-{Classical TikZ Rightarrow[length=4pt]},decorate,decoration={snake,amplitude=2pt,pre length=2pt,post length=3pt}](0.65,-1.5) -- (2,-1.5); \end{tikzpicture} \caption{Construction of a spiralling diagram. The negative sign similarly produce an end spiralling counter-clockwisely.} \label{fig:spiral} \end{figure} Given an ideal triangulation $\triangle$ of $\Sigma$, it is easy to verify that we can move such a spiralling diagram by an isotopy fixing a small neighborhood of $\mathbb{M}_\circ$ into a position such that its restriction to each triangle of $\triangle$ consists only of corner arcs (\emph{i.e.} curves connecting distinct edges). We call such a position a \emph{good position} with respect to $\triangle$. Then we define a coordinate system \begin{align*} \mathsf{x}_\triangle=(\mathsf{x}_\alpha^\triangle)_{\alpha \in e(\triangle)}:\mathcal{L}^p(\Sigma,\mathbb{Q}) \to \mathbb{Q}^\triangle \end{align*} associated with an ideal triangulation $\triangle$, as follows. Given $(L,\sigma_L,\nu) \in \mathcal{L}^p(\Sigma,\mathbb{Q})$, let $\widehat{L}$ be the spiralling diagram of $(L,\sigma_L)$ in a good position with respect to $\triangle$. For each edge $\alpha \in e(\triangle)$ and a curve $\widehat{\gamma}_j$ in the spiralling diagram, let $(\alpha:\widehat{\gamma}_j) \in \mathbb{Z}$ be the integer defined as follows: \begin{itemize} \item if $\alpha$ is an interior edge, then it is the diagonal of a unique quadrilateral $Q_\alpha$ in $\triangle$. An intersection between a portion of $\widehat{\gamma}_j$ and $Q_\alpha$ as in the left (resp. right) of \cref{f:intersection sign} contributes as $+1$ (resp. $-1$), and the others $0$. Then $(\alpha:\widehat{\gamma}_j)$ is the sum of these local contributions. \item if $\alpha$ is a boundary interval, then $(\alpha:\widehat{\gamma}_j):=+1$ if $\widehat{\gamma}_j$ contains a corner arc around the initial marked point $m^+_\alpha$ as its portion, and otherwise $0$. \end{itemize} \begin{figure}[ht] \centering \begin{tikzpicture}[scale=1.15] \path(0,0) node [fill, circle, inner sep=1.5pt] (x1){}; \path(135:2) node [fill, circle, inner sep=1.5pt] (x2){}; \path(0,2*1.4142) node [fill, circle, inner sep=1.5pt] (x3){}; \path(45:2) node [fill, circle, inner sep=1.5pt] (x4){}; \draw[blue](x1) to (x2) to (x3) to (x4) to (x1) to node[midway,left]{$\alpha$} node[midway,above right,black]{$\oplus$} (x3); \draw [red] (2,0.7) to[out=135,in=-45] (0,1.5) to[out=135,in=-45] (-1,3); \draw [red] (2,0.7) node[below]{$\gamma_j$}; \begin{scope}[xshift=5cm] \path(0,0) node [fill, circle, inner sep=1.5pt] (x1){}; \path(135:2) node [fill, circle, inner sep=1.5pt] (x2){}; \path(0,2*1.4142) node [fill, circle, inner sep=1.5pt] (x3){}; \path(45:2) node [fill, circle, inner sep=1.5pt] (x4){}; \draw[blue] (x1) to (x2) to (x3) to (x4) to (x1) to node[midway,left]{$\alpha$} node[midway,below right,black]{$\ominus$} (x3); \draw [red] (-1,0.5) to[out=45,in=215] (0,1.2) to[out=45,in=215] (1,3); \draw [red] (-1,0.5) node[below]{$\gamma_j$}; \end{scope} \end{tikzpicture} \caption{Contributions to $(\alpha:\widehat{\gamma}_j)$.} \label{f:intersection sign} \end{figure} Although $\widehat{\gamma}_j$ may intersect with $\alpha$ infinitely many times in the first case, the number $(\alpha:\widehat{\gamma}_j)$ is always finite. 
Then we define $\mathsf{x}_\alpha^\triangle(L,\sigma_L,\nu) \in \mathbb{Q}$ by the following rule: \begin{itemize} \item For an interior edge $\alpha \in e_{\interior}(\triangle)$ define \begin{align*} \mathsf{x}^\triangle_\alpha(L,\sigma_L,\nu):=\sum_j w_j (\alpha:\widehat{\gamma}_j). \end{align*} \item For a boundary interval $\alpha\in \mathbb{B}$, define \begin{align}\label{eq:boundary_coord} \mathsf{x}^\triangle_\alpha(L,\sigma_L,\nu):=\nu_\alpha- \sum_j w_j (\alpha:\widehat{\gamma}_j). \end{align} \end{itemize} We call the coordinate system $\mathsf{x}_\triangle$ the \emph{(lamination) shear coordinates} associated with $\triangle$. The following is a slight extension of the result in \cite[Section 3.1]{FG07}. \begin{thm}\label{prop:p-lamination} For any ideal triangulation $\triangle$ of $\Sigma$, the map \begin{align*} \mathsf{x}_\triangle: \mathcal{L}^p(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \mathbb{Q}^{\triangle} \end{align*} gives a bijection. For the flip $f_{\alpha}:\triangle \to \triangle'$ along an interior edge $\alpha \in e_{\interior}(\triangle)$, the coordinate transformation $\mathsf{x}_{\triangle'} \circ \mathsf{x}_{\triangle}^{-1}$ is given as in \cref{f:tropical x-flip}. Here we assume that both $\triangle$ and $\triangle'$ do not have self-folded triangles. \end{thm} \begin{figure}[ht] \[\hspace{1.4cm} \begin{tikzpicture}[scale=0.8] \path(0,0) node [fill, circle, inner sep=1.6pt] (x1){}; \path(135:4) node [fill, circle, inner sep=1.6pt] (x2){}; \path(0,4*1.4142) node [fill, circle, inner sep=1.6pt] (x3){}; \path(45:4) node [fill, circle, inner sep=1.6pt] (x4){}; \draw[blue](x1) to node[midway,left,black]{$\mathsf{x}_4$} (x2) to node[midway,left,black]{$\mathsf{x}_1$} (x3) to node[midway,right,black]{$\mathsf{x}_2$} (x4) to node[midway,right,black]{$\mathsf{x}_3$} (x1) to node[midway,left,black]{$\mathsf{x}_0$} (x3); \draw[-implies, double distance=2pt](4,2*1.4142) to node[midway,above]{$f_{\alpha}$} (6,2*1.4142); \begin{scope}[xshift=10cm] \path(0,0) node [fill, circle, inner sep=1.6pt] (x1){}; \path(135:4) node [fill, circle, inner sep=1.6pt] (x2){}; \path(0,4*1.4142) node [fill, circle, inner sep=1.6pt] (x3){}; \path(45:4) node [fill, circle, inner sep=1.6pt] (x4){}; \draw[blue](x1) to node[midway,left,black]{\scalebox{0.8}{$\mathsf{x}_4-\max\{0,-\mathsf{x}_0\}$}} (x2) to node[midway,left,black]{\scalebox{0.8}{$\mathsf{x}_1+\max\{0,\mathsf{x}_0\}$}} (x3) to node[midway,right,black]{\scalebox{0.8}{$\mathsf{x}_2-\max\{0,-\mathsf{x}_0\}$}} (x4) to node[midway,right,black]{\scalebox{0.8}{$\mathsf{x}_3+\max\{0,\mathsf{x}_0\}$}} (x1); \draw[blue] (x2) to node[midway,above,black]{$-\mathsf{x}_0$} (x4); \end{scope} \node [blue] at (0.25,2.1) {$\alpha$}; \node [blue] at (10.5,2.5) {$\alpha'$}; \end{tikzpicture} \] \caption{The coordinate transformation for a flip. } \label{f:tropical x-flip} \end{figure} \begin{proof} It is known that $\mathsf{x}_\triangle^{\mathrm{uf}}:=(\mathsf{x}_\alpha^\triangle)_{\alpha \in e_{\interior}(\triangle)}: \mathcal{L}^x(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \mathbb{Q}^{e_{\interior}(\triangle)}$ gives a bijection \cite[Section 3.1]{FG07}. In other words, given a vector $\mathsf{x}=(\mathsf{x}_\alpha) \in \mathbb{Q}^{e(\triangle)}$, one can uniquely reconstruct a rational $\X$-lamination $(L,\sigma_L)$ such that $\mathsf{x}_\alpha^\triangle(L,\sigma_L)=\mathsf{x}_\alpha$ for $\alpha \in e_{\interior}(\triangle)$. 
Then the pinning $\nu$ can be reconstructed from $(L,\sigma_L)$ and the boundary coordinates via the relation \eqref{eq:boundary_coord}. Thus the first statement holds. When all the edges in \cref{f:tropical x-flip} are interior edges, the formula is the one given in \cite{FG07}. Consider the case where one of the edges, say $\alpha_1$, is a boundary interval. Fix a rational $\P$-lamination $(L,\sigma_L,\nu) \in \mathcal{L}^p(\Sigma,\mathbb{Q})$. For $i \neq j \in \{0,1,2,3,4\}$, let $\mathsf{w}_{ij}^\triangle=\mathsf{w}_{ij}^\triangle(L,\sigma_L,\nu)$ denote the weighted sum of the leaves which surround the corner bounded by the edges $\alpha_i$ and $\alpha_j$ in $\triangle$. Let $\mathsf{w}_{ij}^{\triangle'}$ be the similar quantity for the triangulation $\triangle'$. Since the pinnings contributes to the frozen coordinates linearly, we may assume that $\nu_\alpha=0$ for all $\alpha \in \mathbb{B}$ without loss of generality. Then from the definitions, $\mathsf{x}_{\alpha_1}^\triangle= -\mathsf{w}_{01}^\triangle$ and $\mathsf{x}_{\alpha_1}^{\triangle'} = -\mathsf{w}_{12}^{\triangle'}$. If $\mathsf{x}_{\alpha_0}^\triangle \geq 0$, then $\mathsf{w}_{12}^{\triangle'} = \mathsf{w}_{01}^\triangle - \mathsf{x}_{\alpha_0}^\triangle=-(\mathsf{x}_{\alpha_1}^\triangle+\mathsf{x}_{\alpha_0}^\triangle)$, and hence $\mathsf{x}_{\alpha_1}^{\triangle'}=\mathsf{x}_{\alpha_1}^\triangle+\mathsf{x}_{\alpha_0}^\triangle$. If $\mathsf{x}_{\alpha_0}^\triangle \leq 0$, then $\mathsf{w}_{12}^{\triangle'} = \mathsf{w}_{01}^\triangle$ and hence $\mathsf{x}_{\alpha_1}^{\triangle'}=\mathsf{x}_{\alpha_1}^\triangle$. By a similar argument for the edge $\alpha_2$ and the symmetry, we get the desired formula. \end{proof} The formula in \cref{f:tropical x-flip} is the tropical analogue of the cluster Poisson transformation \eqref{eq:X-transf}. Then we get: \begin{cor} The shear coordinates $\mathsf{x}_\triangle: \mathcal{L}^p(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \mathbb{Q}^{e(\triangle)}$ associated with ideal triangulations $\triangle$ of $\Sigma$ combine to give a canonical $MC(\Sigma)$-equivariant isomorphism $\mathcal{L}^p(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \X_{\Sigma}(\mathbb{Q}^\mathsf{T})$. \end{cor} \paragraph{\textbf{Fock--Goncharov's reconstruction, revisited.}} For later use in \cref{subsec:lamination_amalgamation}, let us recall the reconstruction procedure of a rational $\X$-laminations from the shear coordinates given in \cite{FG07}. Suppose $(\mathsf{x}_\alpha)_\alpha \in \mathbb{Z}^{e_{\interior}(\triangle)}$ is given. On each triangle $T \in t(\triangle)$, draw an infinite collection of disjoint corner arcs around each corner (\cref{fig:gluing block}). We are going to glue these local blocks together to form an integral $\X$-lamination. \begin{figure}[ht] \centering \begin{tikzpicture} \draw[blue] (-30:2) coordinate(A) -- (90:2) coordinate(B) -- (210:2) coordinate(C) --cycle; \begin{scope} \clip (-30:2) -- (90:2) -- (210:2) --cycle; \foreach \i in {0.8,1,1.2,1.4} { \foreach \x in {0,120,240} { \draw(-30+\x:2)++(120+\x:\i) coordinate(a); \draw[red] (a) arc(120+\x:180+\x:\i); \draw[red,thick,dotted] (-30+\x:1.7) -- (-30+\x:1.34); } } \end{scope} \end{tikzpicture} \caption{The building block for reconstruction from the shear coordinates.} \label{fig:gluing block} \end{figure} Consider two triangles $T_L$ and $T_R$ that share an interior edge $\alpha$. Fatten $\alpha$ into a biangle $B_\alpha$, which is bounded by the boundary intervals $\alpha_L$ and $\alpha_R$ of $T_L$ and $T_R$, respectively. 
For $Z \in \{L,R\}$, let $S_{Z}$ denote the set of endpoints of the infinite corner arcs on $\alpha_Z$. We connect the points in $S_L$ and $S_R$ inside the biangle $B_\alpha$ by the following rule. See \cref{fig:FG_gluing}. \begin{itemize} \item For $Z \in \{L,R\}$, choose an orientation-preserving homeomorphism $\phi_{Z}:\mathbb{R} \to \alpha_Z$ so that $\phi_{Z}(\frac{1}{2}+\mathbb{Z})=S_{Z}^\pm$, and $\phi_Z(\mathbb{R}_{<0}) \cap S_Z$ consists of all the strands coming from the corner arcs around $m^+_{\alpha_Z}$. \item Put the points \begin{align}\label{eq:pins_reconstruction} p_L:=\phi_{L}(\mathsf{x}_{\alpha})\quad \mbox{and} \quad p_R:=\phi_{R}(0), \end{align} which we call the \emph{pins}. \item There exists an orientation-reversing homeomorphism $f:\alpha_L \to \alpha_R$ such that $f(\frac{1}{2}+\mathbb{Z})=\frac{1}{2}+\mathbb{Z}$ and $f(p_L)=p_R$. Connect the points $s \in S_L$ to the points $f(s) \in S_R$ by a disjoint collection of curves. \end{itemize} \begin{figure}[ht] \centering \begin{tikzpicture} \draw[blue] (-2,0) -- (0,2) -- (0,-2) --cycle; \node[below] at (0,-2) {$S_L$}; \foreach \x in {1.8,1.6,1.4,1.2} { \draw[red] (0,-2+\x) arc(90:135:\x); \draw[red] (0,2-\x) arc(-90:-135:\x); } \draw[red,thick,dotted] ($(0,-2)+(112.5:0.9)$)--++(-67.5:0.4); \draw[red,thick,dotted] ($(0,2)+(-112.5:0.9)$)--++(67.5:0.4); \draw(-0.1,-0) -- (0.1,-0) node[right,scale=0.8]{$0$}; \pinn{0,0.5}{180}{0.1}{0.03cm} \draw(0.1,0.5) coordinate(p1); \node[scale=0.8] at (0.3,0.7) {$p_L$}; \begin{scope}[xshift=1cm] \draw[blue] (2,0) -- (0,2) -- (0,-2) --cycle; \node[below] at (0,-2) {$S_R$}; \foreach \x in {1.8,1.6,1.4,1.2} { \draw[red] (0,-2+\x) arc(90:45:\x); \draw[red] (0,2-\x) arc(-90:-45:\x); } \draw[red,thick,dotted] ($(0,-2)+(67.5:0.9)$)--++(-112.5:0.4); \draw[red,thick,dotted] ($(0,2)+(-67.5:0.9)$)--++(112.5:0.4); \pinn{0,0}{0}{0.1}{0.03cm} \draw(-0.1,0) coordinate(p2); \node[scale=0.8] at (-0.2,-0.25) {$p_R$}; \end{scope} \draw[dashed] (p1) -- (p2); \draw [thick,-{Classical TikZ Rightarrow[length=4pt]},decorate,decoration={snake,amplitude=1.8pt,pre length=2pt,post length=3pt}](3.7,0) --(5.3,0); \begin{scope}[xshift=8cm] \draw[blue] (2,0) -- (0,2) -- (-2,0) -- (0,-2) --cycle; \draw[blue] (0,-2) to[bend left=30pt] (0,2); \draw[blue] (0,-2) to[bend right=30pt] (0,2); \foreach \x in {1.4,1.2} { \draw[red] (0,-2+\x) arc(90:135:\x); \draw[red] (0,-2+\x) arc(90:45:\x); \draw[red] (0,2-\x) arc(-90:-135:\x); \draw[red] (0,2-\x) arc(-90:-45:\x); } \draw[red,thick,dotted] ($(0,-2)+(90:0.9)$)--++(-90:0.4); \draw[red,thick,dotted] ($(0,2)+(-90:0.9)$)--++(90:0.4); \draw[red] (0,2)++(-135:1.6) ..controls ++(-45:1) and ($(0,-2)+(45:1.8)+(135:1)$).. ($(0,-2)+(45:1.8)$); \draw[red] (0,2)++(-135:1.8) ..controls ++(-45:1) and ($(0,-2)+(45:1.6)+(135:1)$).. ($(0,-2)+(45:1.6)$); \end{scope} \end{tikzpicture} \caption{Fock--Goncharov's gluing procedure of $\X$-laminations.} \label{fig:FG_gluing} \end{figure} Then we get an infinite collection of curves on the quadrilateral $T_L \cup B_\alpha \cup T_R$. Applying this construction to each pair of consecutive triangles, we get an infinite collection of curves on $\Sigma$. Then we do the followings: \begin{itemize} \item Remove the peripheral curves around each special point of $\Sigma$. \item For each puncture $m \in \mathbb{M}_\circ$ of $\Sigma$, replace each spiralling end around $m$ with a signed end at $m$, while encoding the spiralling directions in signs by reversing the rule in \cref{fig:spiral}. 
\end{itemize} Then we get an integral $\X$-lamination $(L,\sigma_L)$, which satisfies $\mathsf{x}^\triangle_\alpha(L,\sigma_L) = \mathsf{x}_\alpha$ for $\alpha \in e_{\interior}(\triangle)$. \begin{rem}\label{rem:pin_shift} The asymmetry of the pins $p_L$ and $p_R$ is explained as follows. In fact, if we change the pins to $p'_L:=\phi_L(\mathsf{x}_{\alpha}-\nu)$ and $p'_R:=\phi_R(\nu)$ for $\nu \in \mathbb{Z}$, the resulting pairing of points does not change. In particular, we could instead use the pins $p'_L=\phi_L(0)$ and $p'_R=\phi_R(\mathsf{x}_\alpha)$, which produces the same result. \end{rem} \subsection{Gluing map}\label{subsec:lamination_amalgamation} Now let us turn our attention to the tropical analogue of the gluing map. In the setting at the beginning of \cref{subsec:Teich_amalgamation}, we are going to construct a map \begin{align*} q_{\Sigma,\Sigma'}^\mathsf{T}: \mathcal{L}^p(\Sigma,\mathbb{Q}) \to \mathcal{L}^p(\Sigma',\mathbb{Q}) \end{align*} satisfying the equation $(q_{\Sigma,\Sigma'}^\mathsf{T})^*\mathsf{x}_{\overline{\alpha}}^{\triangle'} = \mathsf{x}_{\alpha_L}^\triangle + \mathsf{x}_{\alpha_R}^\triangle$, which is the tropical analogue of the formula given in \cref{prop:amalgamation}. It is also defined so as to be equivariant under the $\mathbb{Q}_{>0}$-action rescaling the weights on the curves, and under the action $\sigma_{\alpha_L,\alpha_R}:\mathbb{Q} \curvearrowright \mathcal{L}^p(\Sigma,\mathbb{Q})$ given by the shift \begin{align}\label{eq:gluing_action} \mu.(\nu_{\alpha_L},\nu_{\alpha_R}):= (\nu_{\alpha_L}+\mu,\nu_{\alpha_R}-\mu) \end{align} for $\mu \in \mathbb{Q}$, keeping the other $\nu_\alpha$, $\alpha\neq \alpha_L,\alpha_R$ intact. Let $(L,\sigma_L,\nu) \in \mathcal{L}^p(\Sigma,\mathbb{Z})$ be an integral $\P$-lamination. Represent $L$ by a collection of curves with weight $1$. Around each endpoint of $\alpha_L$ and $\alpha_R$, draw an infinite collection of disjoint peripheral curves so that they are disjoint from the curves in $L$. For $Z \in \{L,R\}$, let $S_Z$ denote the set of the endpoints of the curves in $L$ and these additional peripheral curves on the edge $\alpha_Z$. Insert a biangle $B$ between $\alpha_L$ and $\alpha_R$, and identify $\Sigma'= \Sigma \cup B$. We connect the points in $S_L$ and $S_R$ inside the biangle $B$ by the following rule: \begin{itemize} \item Choose an orientation-preserving homeomorphism $\psi_{Z}:\mathbb{R} \to \alpha_Z$ so that $\psi_{Z}(\frac{1}{2}+\mathbb{Z})=S_{Z}$, and $\psi_Z(\mathbb{R}_{<0}) \cap S_Z$ consists of all the endpoints of the additional peripheral curves around the marked point $m^+_{\alpha_Z}$. \item Put the point $p_Z:=\psi_{Z}(\nu_{\alpha_Z}) \in \alpha_Z$, which we call the \emph{pin}. \item There exists an orientation-reversing homeomorphism $f:\alpha_L \to \alpha_R$ such that $f(\frac{1}{2}+\mathbb{Z})=\frac{1}{2}+\mathbb{Z}$ and $f(p_L)=p_R$. Connect the points $s \in S_L$ to the points $f(s) \in S_R$ by a disjoint collection of curves. \end{itemize} Then we get an infinite collection of curves on $\Sigma'=\Sigma\cup B$. Here the reader should notice the similarity to the reconstruction procedure given in the previous subsection. The marked points of $\alpha_L$ and $\alpha_R$ are identified, and regarded as new marked points in $\Sigma'$. For each of these new marked points, apply the same procedure as before: remove the peripheral curves around new special points, and replace spiralling ends with signed ends around new punctures.
Thus we get an integral $\P$-lamination $\widehat{L}'=q^\mathsf{T}_{\Sigma,\Sigma'}(\widehat{L}) \in \mathcal{L}^p(\Sigma',\mathbb{Z})$. The construction is clearly equivariant under the rescaling $\mathbb{Z}_{>0}$-action.
\begin{dfn}\label{dfn:gluing_lamination}
By extending the above construction $\mathbb{Q}_{>0}$-equivariantly, we obtain a map $q^\mathsf{T}_{\Sigma,\Sigma'}: \mathcal{L}^p(\Sigma,\mathbb{Q}) \to \mathcal{L}^p(\Sigma',\mathbb{Q})$, which we call the \emph{gluing map} along $\alpha_L$ and $\alpha_R$.
\end{dfn}
The following is easily verified with \cref{rem:pin_shift} in mind:
\begin{lem}\label{lem:shift-invariance}
The gluing map $q^\mathsf{T}_{\Sigma,\Sigma'}$ is invariant under the shift action \eqref{eq:gluing_action}.
\end{lem}
Any ideal triangulation $\triangle$ of $\Sigma$ naturally induces a triangulation $\triangle'$ of $\Sigma'$, where the edges $\alpha_L$ and $\alpha_R$ are identified and give an interior edge $\overline{\alpha}$ of $\triangle'$. The other edges are naturally inherited by $\triangle'$.
\begin{thm}\label{thm:amalgamation}
The gluing map $q^\mathsf{T}_{\Sigma,\Sigma'}$ is the tropical analogue of \cref{prop:amalgamation}. Namely, for any ideal triangulation $\triangle$ of $\Sigma$ and the induced triangulation $\triangle'$ of $\Sigma'$, it satisfies
\begin{align*}
(q^\mathsf{T}_{\Sigma,\Sigma'})^\ast\mathsf{x}^{\triangle'}_{\overline{\alpha}} = \mathsf{x}^\triangle_{\alpha_L} + \mathsf{x}^\triangle_{\alpha_R},
\end{align*}
and the other coordinates are kept intact: $(q^\mathsf{T}_{\Sigma,\Sigma'})^\ast\mathsf{x}^{\triangle'}_{\alpha} = \mathsf{x}^\triangle_{\alpha}$ for $\alpha \neq \overline{\alpha}$.
\end{thm}
\begin{proof}
The last statement is clear from the definition. To see the relation between the coordinates on the edges $\alpha_L$, $\alpha_R$ and $\overline{\alpha}$, it suffices to consider an integral lamination $(L,\sigma_L,\nu) \in \mathcal{L}^p(\Sigma,\mathbb{Z})$ by $\mathbb{Q}_{>0}$-equivariance. Write $\mathsf{x}_\alpha:=\mathsf{x}_\alpha^\triangle(L,\sigma_L,\nu)$ for $\alpha \in e(\triangle)$. Recall from \eqref{eq:boundary_coord} that the pinnings are given by
\begin{align*}
\nu_{\alpha_L}= \mathsf{x}_{\alpha_L} +c_{\alpha_L}, \quad \nu_{\alpha_R}= \mathsf{x}_{\alpha_R} +c_{\alpha_R},
\end{align*}
where we write $c_{\alpha}:= \sum_j w_j(\alpha:\widehat{\gamma}_j)$ for $\alpha\in \mathbb{B}$ with $L=\{(\gamma_j,w_j)\}_j$. By \cref{rem:pin_shift}, the result of gluing is the same if we use the pins $\widetilde{p}_Z=\psi_Z(\widetilde{\nu}_{\alpha_Z})$ with
\begin{align}\label{eq:pins_gluing}
\begin{aligned}
\widetilde{\nu}_{\alpha_L}:= (\mathsf{x}_{\alpha_L}+\mathsf{x}_{\alpha_R}) +c_{\alpha_L}, \quad \widetilde{\nu}_{\alpha_R}:= c_{\alpha_R}.
\end{aligned}
\end{align}
Compared with the reconstruction procedure in the previous subsection, here we have \lq\lq original'' corner arcs of $L$ in $T_L$ and $T_R$ before adding infinite collections of peripheral curves in the gluing procedure. Hence the two parametrizations of edges are related by
\begin{align*}
\phi_Z(n) = \psi_Z(n+c_{\alpha_Z})
\end{align*}
for $n \in \mathbb{Z}$ and $Z \in \{L,R\}$. See \cref{fig:difference_gluing}. Comparing the two choices of pins \eqref{eq:pins_reconstruction} and \eqref{eq:pins_gluing} under this relation, we see that $(L',\sigma_{L'},\nu')=q^\mathsf{T}_{\Sigma,\Sigma'}(L,\sigma_L,\nu)$ if and only if $\mathsf{x}_{\overline{\alpha}}(L',\sigma_{L'},\nu')=\mathsf{x}_{\alpha_L}(L,\sigma_L,\nu) + \mathsf{x}_{\alpha_R}(L,\sigma_L,\nu)$.
\end{proof}
\begin{figure}[ht]
\begin{tikzpicture}
\draw[blue] (0,-2) -- (0,2) --(-2,0) --cycle;
\draw[thick] (0,-2.5) -- (0,2.5);
\fill(0,2) circle(2pt);
\fill(0,-2) circle(2pt);
\foreach \i in {0.8,1.0,1.2,1.4} \draw[myblue] (0,-2+\i) arc(90:135:\i);
\foreach \i in {1.6,1.8} \draw[red] (0,-2+\i) arc(90:135:\i);
\foreach \i in {0.8,1.0,1.2,1.4} \draw[myblue] (0,2-\i) arc(-90:-135:\i);
\draw[myblue,thick,dotted] ($(0,-2)+(112.5:0.6)$)--++(-67.5:0.3);
\draw[myblue,thick,dotted] ($(0,2)+(-112.5:0.6)$)--++(67.5:0.3);
\draw[dashed] (-0.1,-0.5) --++(0.6,0) node[anchor=west]{$\psi_L^+(0)$};
\draw[dashed] (-0.1,0) --++(0.6,0) node[anchor=west]{$\phi_L^+(0)$};
\node at (-1.5,1) {$T_L$};
\node at (0.5,1.2) {$\alpha_L$};
\end{tikzpicture}
\caption{Comparison of two edge parametrizations. Arcs in the given lamination are shown in red, while the peripheral curves added upon the gluing procedure are shown in blue.}
\label{fig:difference_gluing}
\end{figure}
\subsection{Ensemble map}
Recall the following from \cite{FG07}:
\begin{dfn}
A \emph{rational $\A$-lamination} on $\Sigma$ is the isotopy class of a mutually non-isotopic, disjoint collection $\{\gamma_i\}_i$ of curves in $\Sigma$ that are not incident to punctures, together with rational weights $w_i \in \mathbb{Q}$ such that $w_i \geq 0$ if $\gamma_i$ is non-peripheral. Such data are considered modulo the equivalence relation generated by isotopies and the following operations:
\begin{enumerate}
\item Remove a contractible curve or a curve with weight $0$.
\item Combine a pair of isotopic curves with weights $u$ and $v$ into a single curve with weight $u+v$.
\end{enumerate}
Let $\mathcal{L}^a(\Sigma,\mathbb{Q})$ denote the set of rational $\A$-laminations, whose elements are denoted by $L=\{(\gamma_i,w_i)\}_i$.
\end{dfn}
For each ideal arc $\alpha$ on $\Sigma$ and a rational $\A$-lamination $L=\{(\gamma_i,w_i)\}_i$, isotope each curve $\gamma_i$ so that the intersection with $\alpha$ is minimal. Then we define
\begin{align*}
\mathsf{a}_\alpha(L):= \sum_i w_i \mathsf{a}_\alpha(\gamma_i),
\end{align*}
where $\mathsf{a}_\alpha(\gamma_i) \in \frac 1 2 \mathbb{Z}_{\geq 0}$ denotes half the geometric intersection number of $\alpha$ and $\gamma_i$. Then it is known that, for any ideal triangulation $\triangle$ of $\Sigma$, the map
\begin{align*}
\mathsf{a}_\triangle:=(\mathsf{a}_\alpha)_{\alpha \in e(\triangle)}: \mathcal{L}^a(\Sigma,\mathbb{Q}) \to \mathbb{Q}^{e(\triangle)}
\end{align*}
gives a bijection. They transform by the tropical analogue of the cluster $K_2$-transformation \eqref{eq:A-transf}, and thus combine to give an $MC(\Sigma)$-equivariant isomorphism $\mathcal{L}^a(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \A_{\Sigma}(\mathbb{Q}^\mathsf{T})$. A rational $\A$-lamination $L$ is said to be \emph{integral} if $\mathsf{a}_\triangle(L) \in \mathbb{Z}^{e(\triangle)}$ for any ideal triangulation $\triangle$. Notice that an $\A$-lamination with integral weights may not be integral in this sense. Since the coordinate transformations are integral piecewise-linear, it suffices to check this condition for one triangulation. Let $\mathcal{L}^a(\Sigma,\mathbb{Z}) \subset \mathcal{L}^a(\Sigma,\mathbb{Q})$ denote the subset of integral $\A$-laminations, which is identified with $\A_\Sigma(\mathbb{Z}^\mathsf{T})$.
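To illustrate the integrality condition, suppose that a loop $\gamma$ cannot be isotoped off an edge $\alpha \in e(\triangle)$ and crosses it exactly once in the minimal position. Then the $\A$-lamination $L=\{(\gamma,1)\}$ has integral weights, but
\begin{align*}
\mathsf{a}_\alpha(L)=\tfrac{1}{2} \notin \mathbb{Z},
\end{align*}
so $L$ is not integral in the above sense; the rescaled lamination $\{(\gamma,2)\}$, on the other hand, is integral.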
Let us define the \emph{extended ensemble map}
\begin{align}
p_\Sigma^\mathsf{T}: \mathcal{L}^a(\Sigma,\mathbb{Q}) \to \mathcal{L}^p(\Sigma,\mathbb{Q})
\end{align}
by forgetting the peripheral components, and defining the pinning $\nu_\alpha \in \mathbb{Q}$ to be minus the weight of the peripheral component around the initial marked point $m^+_\alpha$.
\begin{prop}\label{prop:ensemble_tropical}
For any ideal triangulation $\triangle$ of $\Sigma$, we have
\begin{align}\label{eq:ensemble_tropical}
(p_\Sigma^\mathsf{T})^\ast \mathsf{x}_\kappa^\triangle = \sum_{\alpha \in e(\triangle)} (\varepsilon_{\kappa\alpha}^\triangle+ m_{\kappa\alpha}) \mathsf{a}_\alpha
\end{align}
for all $\kappa \in e(\triangle)$.
\end{prop}
\begin{proof}
For $\kappa \in e_{\interior}(\triangle)$, we have $m_{\kappa\alpha}=0$ and hence the formula is proved in \cite{FG07}. For $\kappa \in \mathbb{B}$, label the edges of the unique triangle $T$ containing $\kappa$ as $\alpha,\beta,\kappa$ in this clockwise order (as in \cref{fig:ensemble_boundary}). Let $m \in \mathbb{M}$ be the initial marked point of $\kappa$. Let $L$ be a rational $\A$-lamination, and denote by $\mathsf{w}_{T,m}(L)$ (resp. $\mathsf{w}_m(L)$) the total weight of the corner arcs of $T\cap L$ (resp. the total weight of the peripheral components of $L$) around $m$. Then we have
\begin{align*}
\mathsf{w}_{T,m}(L) = \mathsf{a}_\alpha(L) +\mathsf{a}_\kappa(L) - \mathsf{a}_\beta(L).
\end{align*}
Observe that the $\P$-lamination $p_\Sigma^\mathsf{T}(L)$ has the corner arcs with the total weight $\mathsf{w}_{T,m}(L) - \mathsf{w}_m(L)$, equipped with the pinning $\nu_\kappa=-\mathsf{w}_m(L)$. Then by definition of the coordinate $\mathsf{x}_\kappa$, we get
\begin{align*}
\mathsf{x}^\triangle_\kappa(p_\Sigma^\mathsf{T}(L)) = \nu_\kappa - (\mathsf{w}_{T,m}(L) - \mathsf{w}_m(L)) = -\mathsf{w}_{T,m}(L) =-\mathsf{a}_\alpha(L) -\mathsf{a}_\kappa(L) + \mathsf{a}_\beta(L).
\end{align*}
This is exactly the desired formula. The assertion is proved.
\end{proof}
When $\mathbb{M}_\circ \neq \emptyset$, the map $p_\Sigma^\mathsf{T}$ is neither injective nor surjective, since it forgets the peripheral components around punctures and no lamination in its image has components incident to punctures.
\begin{thm}\label{thm:index_2}
If $\mathbb{M}_\circ=\emptyset$, then $p_\Sigma^\mathsf{T}: \mathcal{L}^a(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \mathcal{L}^p(\Sigma,\mathbb{Q})$ is a bijection. Moreover, its restriction to the subset $\mathcal{L}^a(\Sigma,\mathbb{Z})$ is an embedding onto a sublattice of index $2$.
\end{thm}
\begin{proof}
The first assertion is clear since the weights of peripheral components around special points can be recovered from the pinnings. For the second assertion, observe that the inverse formula of \eqref{eq:ensemble_tropical} is given by
\begin{align*}
\mathsf{a}_\alpha= \sum_{\beta \in e(\triangle)} q_{\alpha\beta} \mathsf{x}_\beta^\triangle
\end{align*}
as the linear version of the formula given in \cref{thm:A to X}. The image $p_\Sigma^\mathsf{T}(\mathcal{L}^a(\Sigma,\mathbb{Z}))$ is characterized as the subset where the coordinates $\mathsf{a}_\alpha$ are integral for all $\alpha\in e(\triangle)$, which is a sublattice of index $2$.
\end{proof}
\subsection{Thurston compactification with pinnings}
The coordinate transformation $\mathsf{x}_{\triangle'} \circ \mathsf{x}_{\triangle}^{-1}$ given in \cref{prop:p-lamination} is a Lipschitz map with respect to the Euclidean metric on $\mathbb{Q}^{\triangle} \cong \mathbb{Q}^{-3\chi(\Sigma^\ast)+2|\mathbb{M}_\partial|}$.
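Indeed, such a coordinate transformation is a composition of the tropical cluster Poisson transformations associated with a sequence of flips connecting $\triangle$ and $\triangle'$. For a single flip along an interior edge $\kappa$, tropicalizing \eqref{eq:X-transf} (see \cref{app:cluster}; replace $+\mapsto\max$ and $\times\mapsto +$) gives, with $\mathsf{x}_\alpha:=\mathsf{x}^\triangle_\alpha$ and $\mathsf{x}'_\alpha:=\mathsf{x}^{\triangle'}_\alpha$,
\begin{align*}
\mathsf{x}'_{\kappa'} = -\mathsf{x}_\kappa, \qquad \mathsf{x}'_\alpha = \mathsf{x}_\alpha - \varepsilon_{\alpha\kappa}\max\{0,\,-\mathrm{sgn}(\varepsilon_{\alpha\kappa})\,\mathsf{x}_\kappa\} \quad (\alpha \neq \kappa').
\end{align*}
Each such map is piecewise linear with integral coefficients, hence Lipschitz, and a composition of finitely many of them is again Lipschitz.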
Let $\mathcal{L}^p(\Sigma,\mathbb{R})$ be the corresponding metric completion of $\mathcal{L}^p(\Sigma,\mathbb{Q})$, which does not depend on a specific coordinate system. Each coordinate system $\mathsf{x}_\triangle$ is extended to a homeomorphism $\mathsf{x}_\triangle: \mathcal{L}^p(\Sigma,\mathbb{R}) \xrightarrow{\sim} \mathbb{R}^\triangle$, still denoted by the same symbol. We call an element of $\mathcal{L}^p(\Sigma,\mathbb{R})$ a \emph{real $\P$-lamination}. We have the following structures:
\begin{itemize}
\item Since the $\mathbb{Q}_{>0}$-action on $\mathcal{L}^p(\Sigma,\mathbb{Q})$ rescaling the weights is continuous, we get a continuous $\mathbb{R}_{>0}$-action on $\mathcal{L}^p(\Sigma,\mathbb{R})$.
\item The shift action \eqref{eq:gluing_action} of the pinnings is also extended to a continuous action $\sigma_{\alpha_L,\alpha_R}:\mathbb{R} \curvearrowright \mathcal{L}^p(\Sigma,\mathbb{R})$.
\item Since the coordinate expression of the gluing map given in \cref{thm:amalgamation} is continuous, it is extended to a continuous map
\begin{align}\label{eq:gluing_Th_boundary}
q_{\Sigma,\Sigma'}^\mathsf{T}: \mathcal{L}^p(\Sigma,\mathbb{R}) \to \mathcal{L}^p(\Sigma',\mathbb{R}),
\end{align}
which is invariant under $\sigma_{\alpha_L,\alpha_R}$ above.
\end{itemize}
Let us consider the sphere $\mathbb{S} \mathcal{L}^p(\Sigma,\mathbb{R}):=\mathcal{L}^p(\Sigma,\mathbb{R})/\mathbb{R}_{>0} \cong S^{-3\chi(\Sigma^\ast)+2|\mathbb{M}_\partial|-1}$.
\begin{dfn}\label{dfn:Thurston_P}
The \emph{Thurston compactification} of the Teichm\"uller\ space with pinnings is defined to be
\begin{align*}
\overline{\mathcal{T}^p(\Sigma)}:=\mathcal{T}^p(\Sigma) \cup \mathbb{S} \mathcal{L}^p(\Sigma,\mathbb{R}),
\end{align*}
where the topology is endowed so that a sequence $(g_n)$ in $\mathcal{T}^p(\Sigma)$ converges to a point $[G] \in \mathbb{S}\mathcal{L}^p(\Sigma,\mathbb{R})$ if
\begin{align}\label{eq:Thurston_convergence}
[\log X_{\alpha_1}^\triangle(g_n):\cdots: \log X_{\alpha_N}^\triangle(g_n)] \to [\mathsf{x}^\triangle_{\alpha_1}(G):\cdots: \mathsf{x}^\triangle_{\alpha_N}(G)], \quad n\to\infty
\end{align}
for any ideal triangulation $\triangle$. Here $e(\triangle)=\{\alpha_1,\dots,\alpha_N\}$.
\end{dfn}
It is known \cite{FG16,Le16,Ish19} that the condition \eqref{eq:Thurston_convergence} does not depend on the triangulation. In particular, the action of the mapping class group $MC(\Sigma)$ continuously extends to $\overline{\mathcal{T}^p(\Sigma)}$. The topological space $\overline{\mathcal{T}^p(\Sigma)}$ is homeomorphic to a closed ball of dimension $-3\chi(\Sigma^\ast)+2|\mathbb{M}_\partial|$.
\begin{thm}\label{thm:gluing_Thurston}
The gluing maps \eqref{eq:gluing_Teich} and \eqref{eq:gluing_Th_boundary} combine to give a continuous map
\begin{align*}
\overline{q}_{\Sigma,\Sigma'}: \overline{\mathcal{T}^p(\Sigma)} \to \overline{\mathcal{T}^p(\Sigma')}
\end{align*}
between the Thurston compactifications.
\end{thm}
\begin{proof}
It immediately follows from the coordinate expressions
\begin{align*}
q_{\Sigma,\Sigma'}^\ast (\log X_{\overline{\alpha}}^{\triangle'}) &= \log X_{\alpha_L}^\triangle + \log X_{\alpha_R}^\triangle, \\
(q^\mathsf{T}_{\Sigma,\Sigma'})^\ast (\mathsf{x}_{\overline{\alpha}}^{\triangle'}) &= \mathsf{x}_{\alpha_L}^\triangle + \mathsf{x}_{\alpha_R}^\triangle
\end{align*}
and the definition of the topology on the compactification.
\end{proof}
\section{Duality maps}\label{sec:duality}
The Teichm\"uller\ spaces $(\mathcal{T}^a(\Sigma),\mathcal{T}^p(\Sigma))$ are ``positive real parts'' of the moduli spaces $(\A^\times_{SL_2,\Sigma},\P_{PGL_2,\Sigma})$ introduced by Fock, Goncharov and Shen. These moduli spaces have natural cluster structures, for which we have algebra isomorphisms $\mathcal{O}(\A_\Sigma) \cong \mathcal{O}(\A_{SL_2,\Sigma}^\times)$ and $\mathcal{O}(\X_\Sigma) \cong \mathcal{O}(\P_{PGL_2,\Sigma})$ over $\mathbb{C}$ \cite{Shen20,IOS22}. Recall the canonical isomorphisms $\X_{\Sigma}(\mathbb{Z}^\mathsf{T}) \cong \mathcal{L}^p(\Sigma,\mathbb{Z})$ and $\A_{\Sigma}(\mathbb{Z}^\mathsf{T}) \cong \mathcal{L}^a(\Sigma,\mathbb{Z})$. In this section, we study duality maps
\begin{align*}
&\mathbb{I}_\X: \X_{\Sigma}(\mathbb{Z}^\mathsf{T}) \to \mathcal{O}(\A_{\Sigma}), \\
&\mathbb{I}_\A: \A_{\Sigma}(\mathbb{Z}^\mathsf{T}) \to \mathcal{O}(\X_{\Sigma})
\end{align*}
based on our investigation of the ``$\P$-type'' spaces in the previous sections.
\subsection{Relation with the moduli spaces of $SL_2$-/$PGL_2$-local systems}
In order to precisely state algebraic results, we quickly review the relation between the Teichm\"uller\ theory developed in the previous sections and the moduli spaces of $SL_2$-/$PGL_2$-local systems introduced in \cite{FG06,GS19}. Let $\Sigma$ be a marked surface, and consider the algebraic group $SL_2$ over $\mathbb{C}$. To the pair $(SL_2,\Sigma)$, associated is the moduli space $\A_{SL_2,\Sigma}$ of \emph{decorated twisted $SL_2$-local systems} on $\Sigma$. It is an algebraic stack over $\mathbb{C}$. The reader is referred to \cite{FG06} for details. Fock--Goncharov showed that the moduli space $\A_{SL_2,\Sigma}$ has a canonical cluster $K_2$-structure, and its positive real part is canonically identified with the decorated Teichm\"uller\ space $\mathcal{T}^a(\Sigma)$ \cite[Theorem 1.7 (b)]{FG06}\footnote{Indeed, the isomorphism $\mathcal{T}^a(\Sigma)\xrightarrow{\sim} \A_{SL_2,\Sigma}(\mathbb{R}_{>0})$ is obtained as follows. We can lift the monodromy representation $\rho:\pi_1(\Sigma) \to PSL_2(\mathbb{R})$ of a marked hyperbolic structure to a twisted representation $\widetilde{\rho}: \pi_1(T'\Sigma) \to SL_2(\mathbb{R})$, as discussed in \cite[Section 1.3]{BW}. See \cite[Section 11]{FG06} for an appropriate way to lift a cyclic configuration of horocycles to a twisted cyclic configuration in the decorated flag variety $\A_{SL_2}(\mathbb{R})=\mathbb{R}^2\setminus\{0\}$.}. From this, we get an algebra embedding
\begin{align}
\iota_\A: \mathcal{O}(\A_{SL_2,\Sigma}) \hookrightarrow \mathcal{C}^\infty(\mathcal{T}^a(\Sigma)),
\end{align}
where $\mathcal{O}(\A_{SL_2,\Sigma})$ denotes the $\mathbb{C}$-algebra of global functions on $\A_{SL_2,\Sigma}$. The $\lambda$-length coordinate $A_\alpha$ along an ideal arc $\alpha$ lies in the image of $\iota_\A$, and is realized by the cluster $K_2$-coordinate on $\A_{SL_2,\Sigma}$ assigned to $\alpha$. See \cite[Section 11.2]{FG06}. When $\Sigma$ is unpunctured, the function algebra $\mathcal{O}(\A^\times_{SL_2,\Sigma})$ of a certain open subspace $\A^\times_{SL_2,\Sigma} \subset \A_{SL_2,\Sigma}$ is known to coincide with the associated \emph{cluster algebra} $\mathscr{A}_{\mathfrak{sl}_2,\Sigma}$ (see, for instance, \cite{IOS22}). Hence the cluster $\A$-coordinates, together with the inverses of frozen coordinates, generate the algebra $\mathcal{O}(\A^\times_{SL_2,\Sigma})$. There are similar results related to $\mathcal{T}^p(\Sigma)$.
Let $PGL_2:=GL_2/\mathbb{G}_m$, the adjoint group of $SL_2$ having the same Lie algebra $\mathfrak{sl}_2$. To the pair $(PGL_2,\Sigma)$, associated is the moduli space $\P_{PGL_2,\Sigma}$ of \emph{framed $PGL_2$-local systems with pinnings} on $\Sigma$. It is introduced in \cite{GS19}, extending the moduli space $\X_{PGL_2,\Sigma}$ studied in \cite{FG06}. See Section 3 \emph{loc.~cit.}~for the $PGL_2$-case. The moduli space $\P_{PGL_2,\Sigma}$ has a cluster Poisson structure, and the pair $(\A^\times_{SL_2,\Sigma},\P_{PGL_2,\Sigma})$ forms a \emph{cluster ensemble} in the sense of \cite{FG09}. In particular, there is the \emph{(extended) ensemble map} $p_\Sigma: \A^\times_{SL_2,\Sigma} \to \P_{PGL_2,\Sigma}$. In terms of the coordinates, it is expressed as
\begin{align*}
p_\Sigma^\ast X_\kappa^\triangle = \prod_{\alpha \in e(\triangle)} A_\alpha^{\varepsilon^\triangle_{\kappa\alpha}+m_{\kappa\alpha}}
\end{align*}
for all $\kappa \in e(\triangle)$. When $\Sigma$ is unpunctured, the induced homomorphism
\begin{align*}
p_\Sigma^\ast: \mathcal{O}(\P_{PGL_2,\Sigma}) \to \mathcal{O}(\A^\times_{SL_2,\Sigma})
\end{align*}
is an injective, finite homomorphism of index $2$. As a slight extension of \cite[Theorem 1.7 (a)]{FG06}, one can verify that the positive real part of $\P_{PGL_2,\Sigma}$ is identified with the Teichm\"uller\ space with pinnings $\mathcal{T}^p(\Sigma)$. From this, we get an algebra embedding
\begin{align}
\iota_\X: \mathcal{O}(\P_{PGL_2,\Sigma}) \hookrightarrow \mathcal{C}^\infty(\mathcal{T}^p(\Sigma)).
\end{align}
Although the cross ratios $X_\alpha^\triangle$ do not extend to global functions on $\P_{PGL_2,\Sigma}$, they can be defined on an open subspace $\P_{PGL_2,\Sigma}^\triangle \subset \P_{PGL_2,\Sigma}$ associated with an ideal triangulation $\triangle$. Hence the cross ratios lie in the image of a similar embedding
\begin{align}
\iota_\X^\triangle: \mathcal{O}(\P_{PGL_2,\Sigma}^\triangle) \hookrightarrow \mathcal{C}^\infty(\mathcal{T}^p(\Sigma)),
\end{align}
and are realized by the cluster Poisson coordinates. From \cite{IO20}, we have Wilson line morphisms $g_{[c]}: \P_{PGL_2,\Sigma} \to PGL_2$ associated with any arc class $[c]$. The following result allows us to study the cluster algebras $\mathcal{O}(\A_\Sigma)$ and $\mathcal{O}(\X_\Sigma)$ in terms of these moduli spaces:
\begin{thm}[\cite{Shen20} for $\X_\Sigma$, \cite{IOS22} for $\A_\Sigma$]
We have algebra isomorphisms $\mathcal{O}(\A_\Sigma) \cong \mathcal{O}(\A_{SL_2,\Sigma}^\times)$ and $\mathcal{O}(\X_\Sigma) \cong \mathcal{O}(\P_{PGL_2,\Sigma})$ over $\mathbb{C}$.
\end{thm}
Summarizing, we have
\begin{align*}
\mathcal{O}(\A_\Sigma)=\mathcal{O}(\A_{SL_2,\Sigma}^\times) \subset C^\infty(\mathcal{T}^a(\Sigma)), \quad \mathcal{O}(\X_\Sigma)=\mathcal{O}(\P_{PGL_2,\Sigma}) \subset C^\infty(\mathcal{T}^p(\Sigma)).
\end{align*}
In particular, the relations (for instance those given in \cref{prop:ensemble,thm:A to X,prop:Wilson_lambda}) among the coordinates/Wilson lines mentioned above are valid inside these subalgebras.
\subsection{The basis of $\mathcal{O}(\X_\Sigma)$ parametrized by the integral $\A$-laminations}
For simplicity, let us restrict our attention to an unpunctured surface $\Sigma$. It is straightforward to extend our construction to the general case, following \cite{FG09}.
\begin{dfn}\label{def:skein_lift_A}
Let $L=\{(\gamma_i,w_i)\} \in \mathcal{L}^a(\Sigma,\mathbb{Z})$ be an integral $\A$-lamination.
We define the corresponding function $\mathbb{I}_\A(L) \in C^\infty(\mathcal{T}^p(\Sigma))$ as follows.
\begin{itemize}
\item For each weighted non-peripheral loop $(\gamma_i,w_i)$, associate the trace-of-monodromy function
\begin{align*}
\mathrm{Tr}_{[\gamma_i]^{w_i}},
\end{align*}
where $[\gamma_i]^{w_i} \in \pi_1(\Sigma)$ denotes the $w_i$-th power of a based loop homotopic to $\gamma_i$.
\item For each weighted non-peripheral arc $(\gamma_i,w_i)$, associate the function
\begin{align*}
\Delta_{22}(g_{[\gamma_i]})^{w_i}.
\end{align*}
\item For each weighted peripheral arc $(\gamma_i,w_i)$ around a special point, associate the function
\begin{align*}
\Delta_{22}(g_{[\gamma_i]})^{w_i}.
\end{align*}
Here note that $\Delta_{22}(g_{[\gamma_i]})^{-1}=\Delta_{11}(g_{[\gamma_i]})$, $\gamma_i$ being peripheral.
\end{itemize}
Then the function $\mathbb{I}_\A(L) \in C^\infty(\mathcal{T}^p(\Sigma))$ is defined to be the product of these elements.
\end{dfn}
The map $\mathbb{I}_\A:\A_\Sigma(\mathbb{Z}^{\mathsf{T}}) \to C^\infty(\mathcal{T}^p(\Sigma))$ is clearly $MC(\Sigma)$-equivariant. Notice that the trace functions $\mathrm{Tr}_{[\gamma_i]^{w_i}}$ and the matrix coefficients $\Delta_{kl}(g_{[\gamma_i]})$ themselves do not belong to the subalgebra $\mathcal{O}(\P_{PGL_2,\Sigma})$, since the Wilson line takes values in $PGL_2$, rather than $SL_2$. Nevertheless, we have:
\begin{lem}\label{lem:function_even}
For any integral $\A$-lamination $L$, the product $\mathbb{I}_\A(L)$ belongs to $\mathcal{O}(\P_{PGL_2,\Sigma})$. Namely it is a well-defined global function on the moduli space $\P_{PGL_2,\Sigma}$. The Laurent expression of $\mathbb{I}_\A(L)$ in the cluster coordinates has the unique lowest term $\prod_{\alpha \in e(\triangle)} (X_\alpha^\triangle)^{-\mathsf{a}_\alpha(L)}$ for any ideal triangulation $\triangle$.
\end{lem}
\begin{proof}
Fix an ideal triangulation $\triangle$ of $\Sigma$, and consider the coordinate expression of $\mathbb{I}_\A(L)$. From \cref{prop:Wilson_lambda} (1), the $(2,2)$-entries of Wilson lines are expressed as
\begin{align*}
|\Delta_{22}(g_{[\gamma_i]})|^{w_i}&=\prod_{\alpha \in e(\triangle)} (X^\triangle_\alpha)^{-w_i\mathsf{a}_\alpha(\gamma_i)}F_i^\triangle(X_\triangle)^{w_i}
\end{align*}
for some polynomial $F_i^\triangle$ in the coordinates $X_\alpha^\triangle$ with constant term $1$. A similar computation applies to the monodromy, and hence we get
\begin{align*}
\mathrm{Tr}_{[\gamma_i]^{w_i}} =\prod_{\alpha \in e(\triangle)} (X^\triangle_\alpha)^{-w_i\mathsf{a}_\alpha(\gamma_i)}F_{i,w_i}^\triangle(X_\triangle)
\end{align*}
for some polynomial $F^\triangle_{i,w_i}$ in the coordinates $X_\alpha^\triangle$ with constant term $1$. Recall that the integral $\A$-lamination satisfies the integrality condition $\mathsf{a}_\alpha(L) \in \mathbb{Z}$. Hence the product $\mathbb{I}_\A(L)$ only has integral exponents in the coordinates $X_\alpha^\triangle$. Since the above argument applies to any ideal triangulation $\triangle$, it follows that $\mathbb{I}_\A(L)$ is a universally Laurent polynomial, hence it belongs to $\mathcal{O}(\X_\Sigma)=\mathcal{O}(\P_{PGL_2,\Sigma})$. Thus the assertion is proved.
\end{proof}
\begin{rem}
Our construction is essentially the restriction of the construction given in \cite[Section 10.3]{GS15} to the integral $\A$-laminations.
Indeed, their function $\Delta_\beta$ is exactly our function $\Delta_{22}(g_{[\beta]})$ if we reinterpret it by identifying the $PGL_2$-version of their moduli space $\mathrm{Loc}_{SL_2,\Sigma}$ with $\P_{PGL_2,\Sigma}$ (cf.~\cite[Remark 3.9]{IOS22} and the proof of \cref{prop:duality_compatible} below).
\end{rem}
\begin{thm}\label{thm:X_basis}
Assume that $\Sigma$ is unpunctured, having at least two marked points. Then the functions $\mathbb{I}_\A(L)$, where $L$ runs over all the integral $\A$-laminations, form a linear basis of the function algebra $\mathcal{O}(\X_\Sigma)=\mathcal{O}(\P_{PGL_2,\Sigma})$.
\end{thm}
We prove this theorem based on the results on the skein algebras \cite{IY,IKar}. Let $\Sigma$ be an unpunctured marked surface, and $\Skein{\Sigma}^q(\mathbb{B})$ the \emph{stated skein algebra} on $\Sigma$. It consists of $\mathbb{Z}_q$-linear combinations of framed tangles in $\Sigma \times [0,1]$, whose ends lie in $\partial\Sigma \times [0,1]$ and are equipped with states $\{1,2\}$, modulo certain relations. See \cite{TTQLe18} for details, where the states $+,-$ \emph{loc.~cit.}~correspond to our states $1,2$, respectively. Let $\mathcal{I}_{\mathrm{bad}} \subset \Skein{\Sigma}^q(\mathbb{B})$ denote the ideal generated by \emph{bad arcs}, which are peripheral tangles around a special point with particular states:
\begin{align*}
\begin{tikzpicture}
\coordinate (P) at (-0.5,0) {};
\coordinate (P') at (0.5,0) {};
\coordinate (C) at (0,0.5) {};
\draw[very thick, red] (P) to[out=north, in=west] (C) to[out=east, in=north] (P');
\draw(P)++(0,-0.4) node[red,scale=0.8]{$1$};
\draw(P')++(0,-0.4) node[red,scale=0.8]{$2$};
\draw[dashed] (1,0) arc (0:180:1cm);
\bline{-1,0}{1,0}{0.2}
\draw[fill=black] (0,0) circle(2pt);
\end{tikzpicture}
\end{align*}
The quotient
\begin{align*}
\Skeinr{\Sigma}^q(\mathbb{B}) := \Skein{\Sigma}^q(\mathbb{B})/\mathcal{I}_{\mathrm{bad}}
\end{align*}
is called the \emph{reduced stated skein algebra}. We denote its classical specialization at $q^{1/2}=1$ by $\Skeinr{\Sigma}^1(\mathbb{B})$. We have the following:
\begin{thm}[{\cite{CL19,IY}}]
We have an isomorphism of $\mathbb{C}$-algebras
\begin{align}\label{eq:O=S}
\mathcal{O}(\A_{SL_2,\Sigma}^\times) \cong \Skeinr{\Sigma}^1(\mathbb{B})\otimes \mathbb{C}, \quad \Delta_{ij}(g_{[c]}) \mapsto \tau([c])_{ij}
\end{align}
where the matrix coefficient $\Delta_{ij}(g_{[c]})$ of the Wilson line along an arc class $[c]$ corresponds to a framed tangle $\tau([c])$ that projects to $[c]$ together with the state $i$ (resp. $j$) on its initial (resp. terminal) end.
\end{thm}
This theorem follows from \cite[Theorem 8.12]{CL19} by taking the following observation into account: the Wilson line along a peripheral arc class around a special point is a triangular matrix by \cref{thm:LR-formula}. The bad arcs correspond to the vanishing entries of these triangular matrices. Now we want to restrict the isomorphism \eqref{eq:O=S} to the subalgebra $\mathcal{O}(\P_{PGL_2,\Sigma}) \subset \mathcal{O}(\A_{SL_2,\Sigma}^\times)$ of index $2$. Let $\Skeinr{\Sigma}^q(\mathbb{B})_{\mathrm{cong}} \subset \Skeinr{\Sigma}^q(\mathbb{B})$ be the \emph{congruent subalgebra} \cite{IKar} of the reduced stated skein algebra, which is generated by \emph{congruent} (or \emph{even}) tangles. Here a (stated) tangle is said to be \emph{congruent} with respect to a given triangulation $\triangle$ if its geometric intersection number with each edge $\alpha \in e(\triangle)$ is even.
This condition turns out to be independent of triangulations, and invariant under isotopy and the skein relations.
\begin{thm}\label{thm:O=S_even}
The isomorphism \eqref{eq:O=S} restricts to an isomorphism
\begin{align}\label{eq:O=S_even}
\mathcal{O}(\P_{PGL_2,\Sigma}) \cong \Skeinr{\Sigma}^1(\mathbb{B})_{\mathrm{cong}}\otimes \mathbb{C}.
\end{align}
\end{thm}
\begin{proof}
By \cite[Corollary 3.16]{IO20}, the function algebra $\mathcal{O}(\P_{PGL_2,\Sigma})$ is generated by the matrix coefficients of Wilson lines. The projection $SL_2 \to PGL_2$ induces an embedding $\mathcal{O}(PGL_2) \to \mathcal{O}(SL_2)$, whose image is generated by the elements $\Delta_{ij}\Delta_{kl}$ for $i,j,k,l \in \{1,2\}$. Hence the elements $\Delta_{ij}(g_{[c]})\Delta_{kl}(g_{[c]})$ generate $\mathcal{O}(\P_{PGL_2,\Sigma})$, which are sent to elements $\tau([c])_{ij}\tau([c])_{kl} \in \Skeinr{\Sigma}^1(\mathbb{B})_{\mathrm{cong}}$ in the congruent subalgebra. Conversely, each element $W \in \Skeinr{\Sigma}^1(\mathbb{B})_{\mathrm{cong}}$ corresponds to a certain polynomial $F_W \in \mathcal{O}(\A_{SL_2,\Sigma}^\times)$ in matrix entries of Wilson lines valued in $SL_2$. Then by the same argument as in the proof of \cref{lem:function_even}, one can verify that $F_W$ actually lies in the subalgebra $\mathcal{O}(\P_{PGL_2,\Sigma})$, thanks to the congruence condition on $W$. Thus the assertion is proved.
\end{proof}
\begin{proof}[Proof of \cref{thm:X_basis}]
The functions $\mathbb{I}_\A(L) \in \mathcal{O}(\P_{PGL_2,\Sigma})$ are classical counterparts of the elements in the $\mathbb{Z}_q$-basis $\mathsf{B}_{\mathrm{cong}}(\Sigma) \subset \Skeinr{\Sigma}^q(\mathbb{B})_{\mathrm{cong}}$ constructed in \cite{IKar}. Then the assertion follows from \cref{thm:O=S_even}.
\end{proof}
\begin{rem}
Without referring to the forthcoming result in \cite{IKar}, the linear independence of the elements $\mathbb{I}_\A(L) \in \mathcal{O}(\P_{PGL_2,\Sigma})$ also follows from \cref{prop:duality_compatible} below and the linear independence of the bracelet basis.
\end{rem}
\subsection{The basis of $\mathcal{O}(\A_\Sigma)$ parametrized by the integral $\P$-laminations}
Let $\Sigma$ be an unpunctured marked surface. In this case, we do not need the data of a lamination signature. We basically follow \cite[Definition 12.4]{FG06} with an extra assignment for pinnings. In particular, we lift each loop $\gamma$ to the punctured tangent bundle $T'\Sigma$, and understand the trace function $\mathrm{Tr}_{[\gamma]}$ on $\A_{SL_2,\Sigma}$ as the trace of monodromy of twisted $SL_2$-local systems along this lift. We also use the following shifting operation on the curves. Compare with \cref{def:shift_ideal}.
\begin{dfn}[negative $\mathbb{M}$-shift of curves]\label{def:shift_curve}
For a curve $\gamma$ in $\Sigma$ having its endpoints on $\partial^\ast\Sigma$, we define its \emph{(negative) $\mathbb{M}$-shift} to be the ideal arc $\gamma^{\mathbb{M}}$ obtained from $\gamma$ by shifting its endpoints to the nearest special point in the negative direction along the boundary. See \cref{fig:shifting_curve}.
\end{dfn}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\fill[gray!20] (0,1.5) -- (-0.2,1.5) -- (-0.2,-1.5) -- (0,-1.5) --cycle;
\fill[gray!20] (4,1.5) -- (4+0.2,1.5) -- (4+0.2,-1.5) -- (4,-1.5) --cycle;
\draw[thick] (0,1.5) -- (0,-1.5);
\draw[thick] (4,-1.5) -- (4,1.5);
\filldraw(0,1) circle(1.5pt);
\filldraw(0,0) circle(1.5pt);
\filldraw(0,-1) circle(1.5pt);
\filldraw(4,1) circle(1.5pt);
\filldraw(4,0) circle(1.5pt);
\filldraw(4,-1) circle(1.5pt);
\draw[red,thick] (0,-0.5) to[out=0,in=180] node[midway,above]{$\gamma$} (4,0.5);
\begin{scope}[xshift=6cm]
\fill[gray!20] (0,1.5) -- (-0.2,1.5) -- (-0.2,-1.5) -- (0,-1.5) --cycle;
\fill[gray!20] (4,1.5) -- (4+0.2,1.5) -- (4+0.2,-1.5) -- (4,-1.5) --cycle;
\draw[thick] (0,1.5) -- (0,-1.5);
\draw[thick] (4,-1.5) -- (4,1.5);
\filldraw(0,1) circle(1.5pt);
\filldraw(0,0) circle(1.5pt);
\filldraw(0,-1) circle(1.5pt);
\filldraw(4,1) circle(1.5pt);
\filldraw(4,0) circle(1.5pt);
\filldraw(4,-1) circle(1.5pt);
\draw[red,thick] (0,0) to[out=0,in=180] node[midway,above]{$\gamma^{\mathbb{M}}$} (4,0);
\end{scope}
\end{tikzpicture}
\caption{The negative $\mathbb{M}$-shift of a curve.}
\label{fig:shifting_curve}
\end{figure}
The two shifting operations are related by $(\alpha_\mathbb{B})^{\mathbb{M}}=\alpha$ for an ideal arc $\alpha$, and $(\gamma^\mathbb{M})_\mathbb{B}=\gamma$ for a curve having its endpoints on $\partial^\ast\Sigma$. The following is a slight enhancement of the construction given in \cite[Section 12.3]{FG06} and \cite[Section 7.2]{FG07}:
\begin{dfn}\label{def:skein_lift_X}
Given an integral $\P$-lamination $(L=\{(\gamma_i,w_i)\},\nu) \in \mathcal{L}^p(\Sigma,\mathbb{Z})$, we define the corresponding function $\mathbb{I}_\X(L,\nu)\in \mathcal{O}(\A^\times_{SL_2,\Sigma})$ as follows.
\begin{itemize}
\item For each weighted loop $(\gamma_i,w_i)$, associate the trace function
\begin{align*}
\mathrm{Tr}_{[\gamma_i]^{w_i}} \in \mathcal{O}(\A_{SL_2,\Sigma}),
\end{align*}
where $[\gamma_i]^{w_i} \in \pi_1(T'\Sigma)$ denotes the $w_i$-th power of a based loop homotopic to (the lift of) $\gamma_i$.
\item For each weighted non-peripheral arc $(\gamma_i,w_i)$, associate the function
\begin{align*}
(A_{\gamma_i^\mathbb{M}})^{w_i} \in \mathcal{O}(\A_{SL_2,\Sigma}).
\end{align*}
\item For each boundary interval $\alpha \in \mathbb{B}$, associate the function
\begin{align}\label{eq:duality_pinning}
A_\alpha^{\nu_\alpha} \in \mathcal{O}(\A_{SL_2,\Sigma}^\times).
\end{align}
\end{itemize}
Then the function $\mathbb{I}_\X(L,\nu) \in \mathcal{O}(\A_{SL_2,\Sigma}^\times)$ is defined to be the product of these elements.
\end{dfn}
The map $\mathbb{I}_\X:\X_\Sigma(\mathbb{Z}^{\mathsf{T}}) \to \mathcal{O}(\A_{SL_2,\Sigma}^\times)$ is clearly $MC(\Sigma)$-equivariant. Via the isomorphism $\mathcal{O}(\A_\Sigma) \cong \mathcal{O}(\A_{SL_2,\Sigma}^\times)$, we have the following:
\begin{thm}[Musiker--Schiffler--Williams {\cite[Theorem 1.1 and Corollary 1.3]{MSW}}]
Suppose that $\Sigma$ has at least two marked points. Then the functions $\mathbb{I}_\X(L,\nu)$, where $(L,\nu)$ runs over all the integral $\P$-laminations, form a linear basis of the upper cluster algebra $\mathcal{O}(\A_{\Sigma})$.
\end{thm}
\begin{rem}
The construction can be generalized to any marked surface so that it is equivariant under the $\mathbb{Z}/2\mathbb{Z}$-action at each puncture which simultaneously changes the lamination signature and the tag. See \cite[Section 12.6]{FG06}.
\end{rem}
\subsection{Ensemble compatibility of duality maps}
We are going to discuss the compatibility of the two constructions of duality maps with respect to the cluster ensemble structure. It turns out that a non-trivial Langlands duality comes into play.
\paragraph{\textbf{Langlands dual coordinates on $\mathcal{L}^p(\Sigma,\mathbb{Q})$.}}
For an ideal triangulation $\triangle$ of $\Sigma$, we define the \emph{Langlands dual coordinates}
\begin{align}\label{eq:coordinate_dual}
\check{\mathsf{x}}_\triangle=(\check{\mathsf{x}}^\triangle_\alpha)_{\alpha \in e(\triangle)}: \mathcal{L}^p(\Sigma,\mathbb{Q}) \xrightarrow{\sim} \mathbb{Q}^{e(\triangle)},
\end{align}
as follows. For $\alpha \in e_{\interior}(\triangle)$, let $\check{\mathsf{x}}^\triangle_\alpha:=\mathsf{x}^\triangle_\alpha$. We modify the frozen coordinates $\check{\mathsf{x}}^\triangle_\alpha$, $\alpha \in \mathbb{B}$, into
\begin{align*}
\check{\mathsf{x}}^\triangle_\alpha(L,\sigma_L,\nu) := \nu_\alpha +\sum_{j} w_j (\alpha:\widehat{\gamma}_j)^\vee,
\end{align*}
where $(\alpha:\widehat{\gamma}_j)^\vee:= +1$ if $\widehat{\gamma}_j$ contains a corner arc around the \underline{terminal} marked point $m^-_\alpha$ as a portion, and $0$ otherwise. Compare with \eqref{eq:boundary_coord}. We define the \emph{Langlands dual ensemble map}
\begin{align}\label{eq:dual_ensemble}
\check{p}_\Sigma^{\mathsf{T}}: \mathcal{L}^a(\Sigma,\mathbb{Q}) \to \mathcal{L}^p(\Sigma,\mathbb{Q})
\end{align}
by forgetting the peripheral components, and defining the pinning $\nu_\alpha \in \mathbb{Q}$ to be the weight of the peripheral component around the \underline{terminal} marked point $m^-_\alpha$. Then similarly to \cref{prop:p-lamination,prop:ensemble_tropical}, we get:
\begin{thm}
\begin{enumerate}
\item For any ideal triangulation $\triangle$ of $\Sigma$, the map \eqref{eq:coordinate_dual} gives a bijection. The coordinate transformations are again tropical cluster Poisson transformations.
\item For any ideal triangulation $\triangle$ of $\Sigma$, we have
\begin{align}\label{eq:ensemble_dual}
(\check{p}_\Sigma^\mathsf{T})^\ast \check{\mathsf{x}}_\kappa^\triangle = \sum_{\alpha \in e(\triangle)} (\varepsilon_{\kappa\alpha}^\triangle- m_{\kappa\alpha}) \mathsf{a}_\alpha
\end{align}
for all $\kappa \in e(\triangle)$.
\end{enumerate}
\end{thm}
Observe that for the square matrix $p^\triangle:=(\varepsilon_{\kappa\alpha}^\triangle+ m_{\kappa\alpha})_{\kappa,\alpha\in e(\triangle)}$, its \emph{Langlands dual} \cite[Section 1.2.10]{FG09} is $(-p^\triangle)^\bot = (\varepsilon_{\kappa\alpha}^\triangle- m_{\kappa\alpha})_{\kappa,\alpha\in e(\triangle)}$. The following property shows that our assignment rule in \cref{def:skein_lift_X} satisfies one of the axioms of Fock--Goncharov duality with respect to these dual coordinates:
\begin{prop}\label{prop:cluster_monomial}
Suppose that an integral $\P$-lamination $(L,\nu)$ satisfies $\check{\mathsf{x}}_\alpha:=\check{\mathsf{x}}^\triangle_\alpha(L,\nu) \geq 0$ for all $\alpha \in e(\triangle)$ for some ideal triangulation $\triangle$. Then we have
\begin{align*}
\mathbb{I}_\X(L,\nu) = \prod_{\alpha \in e(\triangle)} A_\alpha^{\check{\mathsf{x}}_\alpha}.
\end{align*}
\end{prop}
\begin{proof}
Such an integral $\P$-lamination is given by $L=\{(\gamma_\alpha,\check{\mathsf{x}}_\alpha)\}_{\alpha \in e_{\interior}(\triangle)}$ such that $(\gamma_\alpha)^\mathbb{M}=\alpha$, together with the pinnings $\nu_\beta:=\check{\mathsf{x}}_\beta$ for $\beta \in \mathbb{B}$.
Then we get
\begin{align*}
\mathbb{I}_\X(L,\nu) = \prod_{\alpha \in e_{\interior}(\triangle)} A_\alpha^{\check{\mathsf{x}}_\alpha} \cdot \prod_{\beta \in \mathbb{B}} A_\beta^{\check{\mathsf{x}}_\beta} = \prod_{\alpha \in e(\triangle)} A_\alpha^{\check{\mathsf{x}}_\alpha},
\end{align*}
as desired.
\end{proof}
\begin{rem}
In the original coordinates $\mathsf{x}_\triangle$, the $\P$-laminations in the negative cone $\mathsf{x}_\alpha^\triangle \leq 0$ correspond to the positive $\mathbb{M}$-shifts (defined with the opposite direction) of ideal arcs $\alpha \in e_{\interior}(\triangle)$ and negative pinnings on boundary intervals. They give rise to cluster monomials $\prod_{\alpha \in e(\triangle)}A_\alpha^{-\mathsf{x}_\alpha^\triangle(L,\nu)}$.
\end{rem}
The two constructions of duality maps are compatible in the following sense:
\begin{thm}[Ensemble compatibility of duality maps]\label{prop:duality_compatible}
For any unpunctured marked surface $\Sigma$, the following diagram commutes:
\begin{equation}\label{eq:duality_compatible}
\begin{tikzcd}
\A_{\Sigma}(\mathbb{Z}^T) \ar[d,"\check{p}_\Sigma^{\mathsf{T}}"'] \ar[rr,"\mathbb{I}_\A"] && \mathcal{O}(\X_{\Sigma}) \ar[d,"p_\Sigma^\ast"] \\
\X_{\Sigma}(\mathbb{Z}^T) \ar[rr,"\mathbb{I}_\X"'] && \mathcal{O}(\A_{\Sigma}),
\end{tikzcd}
\end{equation}
where we use the Langlands dual ensemble map $\check{p}_\Sigma^{\mathsf{T}}: \mathcal{L}^a(\Sigma,\mathbb{Z}) \to \mathcal{L}^p(\Sigma,\mathbb{Z})$ on the tropical side.
\end{thm}
\begin{proof}
Let $L \in \A_\Sigma(\mathbb{Z}^\mathsf{T})$ be an integral $\A$-lamination. It suffices to consider the case where $L$ consists of a single weighted curve $(\gamma,k)$.
\begin{itemize}
\item If $\gamma$ is a non-peripheral loop, the assertion is obvious.
\item If $\gamma$ is a non-peripheral arc, then $\check{p}_\Sigma^\mathsf{T}(\gamma,k)$ is the same weighted arc. Then we need the equality
\begin{align*}
\Delta_{22}(g_{[\gamma]})^k=(A_{\gamma^\mathbb{M}})^k.
\end{align*}
Since $\gamma=(\gamma^\mathbb{M})_\mathbb{B}$, it is exactly the formula given in \cref{prop:Wilson_lambda} (2).
\item If $\gamma$ is a peripheral arc around a special point $m \in \mathbb{M}$, then $\check{p}_\Sigma^\mathsf{T}(\gamma,k)$ consists of an empty collection of curves together with the pinning $\nu_\alpha=k$ assigned to the boundary interval $\alpha \in \mathbb{B}$ having $m=m^-_\alpha$ as its terminal endpoint:
\begin{align*}
\begin{tikzpicture}
\bline{-0.5,0}{2.5,0}{0.2}
\draw[red,thick] (2,0) arc(0:180:0.5cm) node[midway,above]{$(\gamma,k)$};
\node at (1.5,-0.4) {$m$};
\draw[fill=black] (0,0) circle(2pt);
\draw[fill=black] (1.5,0) circle(2pt);
\draw[|->,thick] (3,0.5) --node[midway,above]{$\check{p}_\Sigma^{\mathsf{T}}$} (4,0.5);
\begin{scope}[xshift=5cm]
\bline{-0.5,0}{2.5,0}{0.2}
\node at (0.75,0.3) {$\alpha$};
\node[red] at (0.75,-0.5) {$\nu_\alpha=k$};
\draw[fill=black] (0,0) circle(2pt);
\draw[fill=black] (1.5,0) circle(2pt);
\end{scope}
\end{tikzpicture}
\end{align*}
Then we need the equality
\begin{align*}
\Delta_{22}(g_{[\gamma]})^k=A_\alpha^{k}.
\end{align*}
Since $\gamma=\alpha_\mathbb{B}$, it follows from \cref{prop:Wilson_lambda} (2).
\end{itemize}
Thus the assertion follows from the multiplicative nature of both constructions.
\end{proof}
\begin{rem}\label{rem:duality_constraint}
It is easily verified that the requirements for the (lowest term) exponents in \cref{lem:function_even,prop:cluster_monomial} are compatible only if the exponents/coefficients of the maps $p_\Sigma$ and $\check{p}^{\mathsf{T}}_\Sigma$ are Langlands dual to each other. In particular, it is an algebraic matter independent of the topological construction.
\end{rem}
\subsection{Amalgamation of bracelet bases}
Let us investigate the behavior of the duality maps $\mathbb{I}_\X$ under the tropical gluing maps studied in \cref{subsec:lamination_amalgamation}. Let us first modify the gluing map $q^\mathsf{T}_{\Sigma,\Sigma'}$ to its Langlands dual so that it is compatible with the coordinates $\check{\mathsf{x}}_\triangle$. Let $\Sigma'$ be obtained from $\Sigma$ by gluing two boundary intervals $\alpha_L,\alpha_R$. We define the \emph{Langlands dual gluing map}
\begin{align*}
\check{q}^\mathsf{T}_{\Sigma,\Sigma'}: \mathcal{L}^p(\Sigma,\mathbb{Z}) \to \mathcal{L}^p(\Sigma',\mathbb{Z})
\end{align*}
similarly to $q^\mathsf{T}_{\Sigma,\Sigma'}$, but replace the parametrization $\psi_Z$ with the parametrization $\check{\psi}_{Z}:\mathbb{R} \to \alpha_Z$ so that $\check{\psi}_{Z}(\frac{1}{2}+\mathbb{Z})=S_{Z}$, and $\check{\psi}_Z(\mathbb{R}_{>0}) \cap S_Z$ consists of all the endpoints of the additional peripheral curves around the \underline{terminal} marked point $m^-_{\alpha_Z}$ for $Z \in \{L,R\}$. Then the same property as in \cref{thm:amalgamation} with the dual coordinates $\check{\mathsf{x}}_\triangle$ holds. On the moduli side, we have the restriction morphism $\mathrm{Res}_{\Sigma',\Sigma}: \A_{SL_2,\Sigma'} \to \A_{SL_2,\Sigma}$. It induces an algebra homomorphism
\begin{align*}
\mathrm{Res}^\ast_{\Sigma',\Sigma}: \mathcal{O}(\A^\times_{SL_2,\Sigma}) \to \mathcal{O}(\A^\times_{SL_2,\Sigma'})[A_{\overline{\alpha}}^{-1}],
\end{align*}
which satisfies
\begin{align}\label{eq:res_A-variable}
\mathrm{Res}^\ast_{\Sigma',\Sigma}(A_{\alpha_L}) = \mathrm{Res}^\ast_{\Sigma',\Sigma}(A_{\alpha_R}) = A_{\overline{\alpha}}.
\end{align}
Let us consider the diagram
\begin{equation}\label{eq:duality_amalgamation}
\begin{tikzcd}
\X_{\Sigma}(\mathbb{Z}^T) \ar[d,"\check{q}^\mathsf{T}_{\Sigma,\Sigma'}"'] \ar[rr,"\mathbb{I}_\X"] && \mathcal{O}(\A^\times_{\Sigma}) \ar[d,"\mathrm{Res}^\ast_{\Sigma',\Sigma}"] \\
\X_{\Sigma'}(\mathbb{Z}^T) \ar[rr,"\mathbb{I}_\X"'] && \mathcal{O}(\A^{\times}_{\Sigma'})[A_{\overline{\alpha}}^{-1}].
\end{tikzcd} \end{equation} \begin{figure}[ht] \centering \begin{tikzpicture} \begin{scope} \draw (-2,0) -- (0,0) -- (0,3) -- (-2,3); \foreach \i in {0.5,0.7,0.9,1.1} { \draw[myblue] (0,0)++(0,\i) arc(90:180:\i); \draw[myblue] (0,3)++(0,-\i) arc(-90:-180:\i); } \foreach \i in {0,1} \foreach \j in {0,3} \fill(\i,\j) circle(1.5pt); \draw[red] (-2,1.5) -- (0,1.5); \draw[red] (-2,1.7) -- (0,1.7); \draw (-0.05,1.8) -- (0.05,1.8) node[right,scale=0.7]{$0$}; \pinn{0,2.2}{180}{0.1}{0.03cm} \draw[dashed] (0.1,2.2) -- (0.9,1.2); \node[scale=0.9] at (-0.5,3.5) {$\nu_{\alpha_L}=2$}; \end{scope} \begin{scope}[xshift=1cm] \draw (2,0) -- (0,0) -- (0,3) -- (2,3); \foreach \i in {0.5,0.7,0.9,1.1} { \draw[myblue] (0,0)++(0,\i) arc(90:0:\i); \draw[myblue] (0,3)++(0,-\i) arc(-90:0:\i); } \draw[red] (2,1.5) -- (0,1.5); \pinn{0,1.2}{0}{0.1}{0.03cm} \node[scale=0.9] at (0.5,3.5) {$\nu_{\alpha_R}=0$}; \end{scope} \draw[thick,|->] (0.5,-0.5) --node[midway,left]{$\check{q}^{\mathsf{T}}_{\Sigma,\Sigma'}$}++(0,-1); \draw[thick,|->] (3.5,1.5) --node[midway,above]{$\mathbb{I}_\X$}++(1,0); \begin{scope}[xshift=7cm] \draw (-2,0) -- (0,0) -- (0,3) -- (-2,3); \draw[red] (-2,1.5) ..controls++(1.5,0) and (110:1.5).. (0,0); \draw[red] (-2,1.7) ..controls++(1.5,0) and (105:1.5).. (0,0); \draw[red] (0,0) to[bend left=5pt] (0,3); \draw[red] (0,0) to[bend left=12pt] (0,3); \foreach \i in {0,1} \foreach \j in {0,3} \fill(\i,\j) circle(1.5pt); \draw[thick,|->] (0.5,-0.5) --node[midway,left]{$\mathrm{Res}^\ast_{\Sigma',\Sigma}$}++(0,-1); \end{scope} \begin{scope}[xshift=8cm] \draw (2,0) -- (0,0) -- (0,3) -- (2,3); \draw[red] (2,1.5) ..controls++(-1.5,0) and ($(0,3)+(-80:1.5)$).. (0,3); \end{scope} \begin{scope}[yshift=-5cm,xshift=0.5cm] \draw (-2,0) -- (2,0); \draw (-2,3) -- (2,3); \draw[dashed] (0,0) -- (0,3); \draw[red] (-2,1.5) ..controls++(1.2,0) and ($(0.5,0)+(0,1)$).. (0.5,0); \draw[red] (-2,1.7) ..controls++(1.2,0) and ($(0.7,0)+(0,1)$).. (0.7,0); \draw[red] (2,1.5) ..controls++(-1.2,0) and ($(-0.7,3)+(0,-1)$).. (-0.7,3); \draw[red] (1.1,0) ..controls++(0,1) and ($(-0.9,3)+(0,-1)$).. (-0.9,3); \draw[red] (0.9,0) ..controls++(0,1) and ($(-1.1,3)+(0,-1)$).. (-1.1,3); \foreach \j in {0,3} \fill(0,\j) circle(1.5pt); \draw[thick,|->] (3,1.5) --node[midway,above]{$\mathbb{I}_\X$}++(1,0); \end{scope} \begin{scope}[yshift=-5cm,xshift=7.5cm] \draw (-2,0) -- (2,0); \draw (-2,3) -- (2,3); \draw[dashed] (0,0) -- (0,3); \draw[red] (-2,1.5) ..controls++(1.5,0) and (110:1.5).. (0,0); \draw[red] (-2,1.7) ..controls++(1.5,0) and (105:1.5).. (0,0); \draw[red] (2,1.5) ..controls++(-1.5,0) and ($(0,3)+(-80:1.5)$).. (0,3); \draw[red] (0,0) to[bend left=5pt] (0,3); \draw[red] (0,0) to[bend left=12pt] (0,3); \foreach \j in {0,3} \fill(0,\j) circle(1.5pt); \end{scope} \end{tikzpicture} \caption{Amalgamation of bracelet bases: an example with $\nu_{\alpha_L}+\nu_{\alpha_R}\geq 0$.} \label{fig:amal_example1} \end{figure} \begin{thm}\label{thm:amal_bracelet} For any integral $\P$-lamination $(L,\nu) \in \X_\Sigma(\mathbb{Z}^\mathsf{T})$ such that $\nu_{\alpha_L}+\nu_{\alpha_R} \geq 0$, we have $\mathrm{Res}^\ast_{\Sigma',\Sigma}(\mathbb{I}_\X(L,\nu))=\mathbb{I}_\X(\check{q}^\mathsf{T}_{\Sigma,\Sigma'}(L,\nu))$. \end{thm} \begin{proof} Let $(L',\nu'):=\check{q}^\mathsf{T}_{\Sigma,\Sigma'}(L,\nu) \in \X_{\Sigma'}(\mathbb{Z}^\mathsf{T})$ and $n:=\nu_{\alpha_L}+\nu_{\alpha_R}$. Represent $L$ and $L'$ by curves with weight $1$. 
Observe that
\begin{enumerate}
\item[(1)] the curves in $L$ having endpoints on $\alpha_L$ and $\alpha_R$ give rise to curves in $L'$ ``turning right''. In particular, their negative $\mathbb{M}$-shifts are the same before/after the gluing;
\item[(2)] the new curves in $L'$ arising via the gluing give rise to $n$ parallel copies of the ideal arc $\overline{\alpha}$.
\end{enumerate}
See \cref{fig:amal_example1} for an illustrative example. Hence we have
\begin{align*}
\mathrm{Res}^\ast_{\Sigma',\Sigma}(\mathbb{I}_\X(L,\nu)) = \mathrm{Res}^\ast_{\Sigma',\Sigma}\left(A_{\alpha_L}^{\nu_{\alpha_L}}\cdot A_{\alpha_R}^{\nu_{\alpha_R}}\cdot \mathbb{I}_\X(L,\nu') \right) = A_{\overline{\alpha}}^n \cdot \mathbb{I}_\X(L,\nu') = \mathbb{I}_\X(L',\nu').
\end{align*}
Here $(L,\nu')$ denotes the data obtained from $(L,\nu)$ by deleting the pinnings $\nu_{\alpha_L}$ and $\nu_{\alpha_R}$, for which $\mathrm{Res}^\ast_{\Sigma',\Sigma}(\mathbb{I}_\X(L,\nu'))=\mathbb{I}_\X(L,\nu')$ holds from the observation (1). We also used \eqref{eq:res_A-variable} and the observation (2) in the second and third equality, respectively. The assertion is proved.
\end{proof}
\begin{ex}\label{ex:amal_dominance}
Here is a square example with $\nu_{\alpha_L}+\nu_{\alpha_R}< 0$.
\begin{align*}
\begin{tikzpicture}
\begin{scope}
\draw (0,0) -- (0,3) -- (-2,1.5) -- cycle;
\foreach \i in {0,1} \foreach \j in {0,3} \fill(\i,\j) circle(1.5pt);
\foreach \i in {-2,3} \fill(\i,1.5) circle(1.5pt);
\node[red,scale=0.9] at (-0.3,1.5) {$-1$};
\node[scale=0.9] at (0.3,1) {$\alpha_L$};
\node[scale=0.9] at (-1,2.5) {$\beta$};
\node[scale=0.9] at (-1,0.5) {$\gamma$};
\end{scope}
\begin{scope}[xshift=1cm]
\draw (0,0) -- (0,3) -- (2,1.5)--cycle;
\node[red,scale=0.9] at (0.3,1.5) {$0$};
\node[scale=0.9] at (-0.3,2) {$\alpha_R$};
\node[scale=0.9] at (1,2.5) {$\epsilon$};
\node[scale=0.9] at (1,0.5) {$\delta$};
\end{scope}
\draw[thick,|->] (3.5,1.5) --node[midway,above]{$\check{q}^{\mathsf{T}}_{\Sigma,\Sigma'}$}++(1,0);
\begin{scope}[xshift=7cm]
\draw (-2,1.5) -- (0,0) -- (2,1.5) -- (0,3) --cycle;
\draw[dashed] (0,0) -- (0,3);
\draw[dashed] (-2,1.5) -- (2,1.5);
\draw[red] (-1,0.75) ..controls++(50:1) and ($(1,3-0.75)+(-110:1)$).. (1,3-0.75);
\node[scale=0.9] at (-0.3,2.3) {$\overline{\alpha}$};
\node[scale=0.9] at (1,1.3) {$\overline{\alpha}'$};
\node[scale=0.9] at (-1,2.5) {$\beta$};
\node[scale=0.9] at (-1,0.5) {$\gamma$};
\node[scale=0.9] at (1,2.5) {$\epsilon$};
\node[scale=0.9] at (1,0.5) {$\delta$};
\foreach \j in {0,3} \fill(0,\j) circle(1.5pt);
\foreach \i in {-2,2} \fill(\i,1.5) circle(1.5pt);
\end{scope}
\end{tikzpicture}
\end{align*}
Let us consider $(L,\nu)$ as shown on the left, the empty lamination with the pinnings $\nu_{\alpha_L}=-1$ and $\nu_{\alpha_R}=0$, which produces a lamination $(L',\nu')$ shown on the right after the gluing. Then we have $\mathbb{I}_\X(L,\nu)=A_{\alpha_L}^{-1}$, while
\begin{align*}
\mathbb{I}_\X(L',\nu')=A_{\overline{\alpha}'} = \frac{A_\beta A_\delta +A_\gamma A_\epsilon}{A_{\overline{\alpha}}} = \frac{A_\gamma A_\epsilon}{A_{\overline{\alpha}}}\cdot(1+ p_\Sigma^\ast X_{\overline{\alpha}}).
\end{align*}
Observe that, ignoring the frozen term $A_\gamma A_\epsilon$, the function $\mathrm{Res}^\ast_{\Sigma',\Sigma}(\mathbb{I}_\X(L,\nu))=A_{\overline{\alpha}}^{-1}$ coincides with one of the terms in $\mathbb{I}_\X(L',\nu')$.
\end{ex}
\begin{rem}\label{rem:amal_weak}
In general, one can verify the weaker statement that $\mathrm{Res}^\ast_{\Sigma',\Sigma}(\mathbb{I}_\X(L,\nu))$ corresponds to the highest term in $\mathbb{I}_\X(\check{q}^\mathsf{T}_{\Sigma,\Sigma'}(L,\nu))$ with respect to the \emph{dominance order} \cite{Qin21}, up to certain frozen variables of $\Sigma'$. In the quantum setting based on the skein algebra \cite{Muller}, the term $A_{\overline{\alpha}}^{|n|}\cdot \mathrm{Res}^\ast_{\Sigma',\Sigma}(\mathbb{I}_\X(L,\nu))$ appears in the expansion of the product $A_{\overline{\alpha}}^{|n|}\cdot \mathbb{I}_\X(\check{q}^\mathsf{T}_{\Sigma,\Sigma'}(L,\nu))$ in the graphical basis as the term of highest $q$-exponent, up to frozen variables.
\end{rem}
\section{Cluster varieties associated to a marked surface}\label{app:cluster}
The reader is referred to \cite{FG09} for details. Let $\Sigma$ be a marked surface. For each ideal (or tagged) triangulation $\triangle$ of $\Sigma$, let $\X_\triangle=(\mathbb{G}_m)^{e(\triangle)}$, $\A_\triangle=(\mathbb{G}_m)^{e(\triangle)}$ be two split algebraic tori, where $\mathbb{G}_m:=\mathop{\mathrm{Spec}} \mathbb{C}[t,t^{-1}]=\mathbb{C}^\ast$ denotes the multiplicative group scheme over $\mathbb{C}$. Let $(X_\alpha^\triangle)_{\alpha \in e(\triangle)}$, $(A_\alpha^\triangle)_{\alpha \in e(\triangle)}$ denote the standard coordinate systems on $\X_\triangle$ and $\A_\triangle$, respectively. These tori are accompanied by the \emph{exchange matrix} $\varepsilon^\triangle=(\varepsilon_{\alpha\beta}^\triangle)_{\alpha,\beta \in e(\triangle)}$ \cite{FST}, defined as follows: for each triangle $T$ of $\triangle$, let
\begin{align*}
\varepsilon_{\alpha\beta}(T):= \begin{cases} 1 & \mbox{if $T$ has $\alpha$ and $\beta$ as its consecutive edges in the clockwise order}, \\ -1 & \mbox{if the same holds with the counter-clockwise order}, \\ 0 & \mbox{otherwise}. \end{cases}
\end{align*}
Then we set $\varepsilon_{\alpha\beta}^\triangle:=\sum_T \varepsilon_{\alpha\beta}(T)$, where $T$ runs over all non-self-folded triangles of $\triangle$. When $\triangle$ has a self-folded triangle or it is tagged, $\varepsilon_{\alpha\beta}^\triangle$ is appropriately modified. See \cite{FST}. Then the \emph{cluster Poisson/$K_2$-varieties} \cite{FG09} are defined to be
\begin{align*}
\X_\Sigma:= \bigcup_{\triangle} \X_\triangle, \quad \A_\Sigma:= \bigcup_{\triangle} \A_\triangle,
\end{align*}
where the gluing data is given by the birational isomorphisms
\begin{align}
&\mu_{\kappa}^x: \X_\triangle \to \X_{\triangle'}, \quad (\mu_{\kappa}^x)^\ast X'_\alpha= \begin{cases} X_\kappa^{-1} & (\alpha=\kappa'), \\ X_\alpha (1+ X_\kappa^{-\mathrm{sgn}(\varepsilon_{\alpha\kappa})})^{-\varepsilon_{\alpha\kappa}} & (\alpha \neq \kappa'), \end{cases} \label{eq:X-transf} \\
&\mu_{\kappa}^a: \A_\triangle \to \A_{\triangle'}, \quad (\mu_{\kappa}^a)^\ast A'_\alpha= \begin{cases}A_\kappa^{-1} \big(\prod_{\beta\in e(\triangle)} A_\beta^{[\varepsilon_{\kappa\beta}]_+} + \prod_{\beta\in e(\triangle)} A_\beta^{[-\varepsilon_{\kappa\beta}]_+} \big) & (\alpha = \kappa'), \\ A_\alpha & (\alpha \neq \kappa'), \end{cases} \label{eq:A-transf}
\end{align}
for each flip $\triangle'=\triangle \setminus \{\kappa\} \cup \{\kappa'\}$ along $\kappa \in e_{\interior}(\triangle)$. Here $\mathrm{sgn} (x) \in \{+,0,-\}$ denotes the sign, and $[x]_+:=\max\{0,x\}$ for $x \in \mathbb{R}$.
We have abbreviated $X_\alpha:=X_\alpha^\triangle$, $X'_\alpha:=X_{\alpha}^{\triangle'}$, $\varepsilon_{\alpha\beta}:=\varepsilon_{\alpha\beta}^\triangle$, and so on. The maps \eqref{eq:X-transf}, \eqref{eq:A-transf} are called the \emph{cluster Poisson/$K_2$}-transformations, respectively. From the definition, their function algebras are given by
\begin{align*}
\mathcal{O}(\X_\Sigma) = \bigcap_{\triangle} \mathbb{C}[(X_\alpha^\triangle)^{\pm 1} \mid \alpha \in e(\triangle)], \quad \mathcal{O}(\A_\Sigma) = \bigcap_{\triangle} \mathbb{C}[(A_\alpha^\triangle)^{\pm 1} \mid \alpha \in e(\triangle)].
\end{align*}
In other words, these algebras consist of \emph{universally Laurent polynomials}. The function algebras $\mathcal{O}(\X_\Sigma)$ and $\mathcal{O}(\A_\Sigma)$ are also called the \emph{cluster Poisson algebra} and the \emph{upper cluster algebra}, respectively.
\paragraph{\textbf{Ensemble maps}}
The exchange matrix $\varepsilon^\triangle$ induces the monomial map
\begin{align*}
p_{\mathrm{uf}}: \A_\triangle \to \X_\triangle^{\mathrm{uf}}, \quad p_{\mathrm{uf}}^\ast X_\kappa^\triangle = \prod_{\alpha \in e(\triangle)} (A^\triangle_\alpha)^{\varepsilon^\triangle_{\kappa\alpha}} \quad (\kappa \in e_{\interior}(\triangle)),
\end{align*}
which we call the \emph{ensemble map}. Here $\X_\triangle^{\mathrm{uf}}:=(\mathbb{G}_m)^{e_{\interior}(\triangle)}$ denotes the cluster torus without frozen coordinates.\footnote{This restriction comes from the fact that the entries $\varepsilon_{ij}$ for $i,j$ frozen are allowed to be rational in a general cluster variety.} It commutes with cluster transformations, and hence induces a morphism $p_{\mathrm{uf}}: \A_\Sigma \to \X_\Sigma^{\mathrm{uf}}$. We have the freedom to choose an extension
\begin{align*}
p_{\Sigma;M}: \A_\triangle \to \X_\triangle, \quad p_{\Sigma;M}^\ast X_\kappa^\triangle = \prod_{\alpha \in e(\triangle)} (A^\triangle_\alpha)^{\varepsilon^\triangle_{\kappa\alpha}+m_{\kappa\alpha}}
\end{align*}
by specifying a constant matrix $M=(m_{\kappa\alpha})_{\kappa,\alpha \in e(\triangle)}$ such that $m_{\kappa\alpha}=0$ unless $(\kappa,\alpha) \in \mathbb{B} \times \mathbb{B}$ (cf.~\cite[Appendix A]{GHKK} and \cite[Section 18]{GS19}). It also commutes with cluster transformations, and induces a morphism $p_{\Sigma;M}: \A_\Sigma \to \X_\Sigma$. In this paper, following \cite{GS19}, we choose $m_{\kappa\alpha}=\mp \delta_{\kappa\alpha}$ for $(\kappa,\alpha) \in \mathbb{B} \times \mathbb{B}$.
\paragraph{\textbf{Tropicalizations.}}
Let $\mathbb{P}$ be a semifield. Any positive rational map $f: T_1 \to T_2$ between split algebraic tori naturally induces a map $f(\mathbb{P}):T_1(\mathbb{P}) \to T_2(\mathbb{P})$, where $T_i(\mathbb{P}):=\mathrm{Hom}(\mathbb{G}_m,T_i)\otimes_\mathbb{Z} \mathbb{P}$, $i=1,2$ are sets of $\mathbb{P}$-valued points. Gluing the coordinate tori by the cluster transformations $\mu_\kappa^x(\mathbb{P})$ and $\mu_\kappa^a(\mathbb{P})$, we get the sets $\X_\Sigma(\mathbb{P})$ and $\A_\Sigma(\mathbb{P})$ of $\mathbb{P}$-points. For example:
\begin{itemize}
\item if $\mathbb{P}=\mathbb{R}_{>0}$ is the semifield of positive real numbers with the usual operations, then $\X_\Sigma(\mathbb{R}_{>0})$ is obtained by gluing $\X_\triangle(\mathbb{R}_{>0})=\mathbb{R}^{e(\triangle)}_{>0}$ with the same formula as \eqref{eq:X-transf}.
\item if $\mathbb{P}=\mathbb{Z}^{\mathsf{T}}$ is the (max-plus) tropical semifield with the addition $\max$ and multiplication $+$, then $\X_\Sigma(\mathbb{Z}^{\mathsf{T}})$ is obtained by gluing $\X_\triangle(\mathbb{Z}^{\mathsf{T}})=\mathbb{Z}^{e(\triangle)}$ with the formula obtained from \eqref{eq:X-transf} by replacing the operations as $+ \mapsto \max$, $\times \mapsto +$, which is called the \emph{tropical analogue} (an explicit example is written out at the end of this appendix).
\end{itemize}
\paragraph{\textbf{The mapping class group action.}}
Let $MC(\Sigma)$ denote the mapping class group of $\Sigma$. Each mapping class $\phi \in MC(\Sigma)$ acts on $\X_\Sigma$ so that $X_\alpha^\triangle(\phi(g)) = X_{\phi^{-1}(\alpha)}^{\phi^{-1}(\triangle)}(g)$ for all $\triangle$ and $\alpha \in e(\triangle)$. It acts on $\A_\Sigma$ in a similar manner, and commutes with the (extended) ensemble maps. These actions are positive, and hence descend to actions on the sets $\X_\Sigma(\mathbb{P})$ and $\A_\Sigma(\mathbb{P})$ of $\mathbb{P}$-points.
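Finally, for concreteness, we record one such tropical analogue explicitly; this is a routine computation, written out here only for the reader's convenience. Tropicalizing the cluster $K_2$-transformation \eqref{eq:A-transf} (replace $+\mapsto\max$ and $\times\mapsto +$) gives, for a flip along $\kappa \in e_{\interior}(\triangle)$ and writing $a_\alpha$ for the $\mathbb{Z}^{\mathsf{T}}$-point coordinate corresponding to $A_\alpha^\triangle$,
\begin{align*}
a'_{\kappa'} = -a_\kappa + \max\Big\{ \sum_{\beta \in e(\triangle)} [\varepsilon_{\kappa\beta}]_+\, a_\beta,\ \sum_{\beta \in e(\triangle)} [-\varepsilon_{\kappa\beta}]_+\, a_\beta \Big\}, \qquad a'_\alpha = a_\alpha \quad (\alpha \neq \kappa').
\end{align*}
In particular, this transformation is integral piecewise-linear, which is the property underlying the identification $\mathcal{L}^a(\Sigma,\mathbb{Z}) \cong \A_\Sigma(\mathbb{Z}^{\mathsf{T}})$ used in the main text.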
\section{Introduction} \noindent Lattice QCD calculations of meson-meson interactions have yielded predictions for physical scattering lengths at the few percent level~\cite{Beane:2007xs,Beane:2006gj,Beane:2007uh}. Several reasons underlie this striking accuracy. Firstly, at the level of the lattice calculation, Euclidean-space correlation functions involving pseudoscalar mesons have signal/noise ratios\footnote{Here the signal is the Monte Carlo estimate of the quantum correlation function evaluated on the lattice, while the noise represents the statistical fluctuations in the correlation function.} that do not degrade, or only slowly degrade with time. Therefore, highly accurate fits of both single- and multi-meson properties are possible with currently available supercomputer resources. Recent calculations of multi-meson interactions relevant for the study of pion and kaon condensation have been performed with up to twelve mesons interacting on a lattice~\cite{Beane:2007es,Detmold:2008fn,Detmold:2008yn} with no appreciable degradation of signal/noise with time. Secondly, and perhaps more importantly, QCD correlation functions involving Goldstone bosons are subject to powerful chiral symmetry constraints. Since current lattice calculations are carried out at unphysical quark masses, these constraints play an essential role in extrapolating the lattice data to the physical quark masses, as well as to the infinite volume, and continuum limits. Chiral perturbation theory ($\chi$-PT) is the optimal method for implementing QCD constraints due to chiral symmetry, and in essence, provides an expansion of low-energy S-matrix elements in quark masses and powers of momentum~\cite{Bernard:2007zu}. In contrast to the purely mesonic sector, recent studies of baryon-baryon interactions, the paradigmatic nuclear physics process, have demonstrated the fundamental difficulty faced in making predictions for baryons and their interactions~\cite{Beane:2006mx,Beane:2006gf}. Unlike the mesons, correlation functions involving baryons suffer an exponential degradation of signal/noise at large times~\footnote{A recent high-statistics study of baryon correlation functions on anisotropic clover lattices has found that the exponential decay with time of signal/noise occurs only {\it asymptotically} in time, and therefore, the signal/noise problem in baryon correlation functions is not nearly as severe as previously thought~\cite{Beane:2009ky}.} and therefore pose a fundamentally different kind of challenge in extracting signal from data~\cite{Lepage:1989hd}. Furthermore, while baryon interactions are constrained by QCD symmetries like chiral symmetry, the constraints are not nearly as powerful as when there is at least one pion or kaon in the initial or final state. For instance, there is no expectation that the baryon-baryon scattering lengths vanish in the chiral limit as they do in the purely mesonic sector. In nucleon-nucleon scattering, the s-wave interactions are actually enhanced due to the close proximity of a non-trivial fixed point of the renormalization group, which drives the scattering lengths to infinity, thus rendering the effective field theory description of the interaction highly non-perturbative~\cite{Kaplan:1998we}. Given the contrast in difficulty between the purely mesonic and purely baryonic sectors described above, it is clearly of great interest to perform a lattice QCD investigation of the simplest scattering process involving at least one baryon: meson-baryon scattering. 
While pion-nucleon scattering is the best-studied process, both theoretically and experimentally, its determination on the lattice is computationally prohibitive since it involves annihilation diagrams. At present only a few limiting cases that involve these diagrams are being investigated~\cite{Babich:2009rq}. Combining the lowest-lying $SU(3)$ meson and baryon octets, one can form five meson-baryon elastic scattering processes that do not involve annihilation diagrams. Three of these involve kaons and therefore are, in principle, amenable to an $SU(3)$ heavy-baryon $\chi$-PT (HB$\chi$-PT) analysis~\cite{Jenkins:1990jv} for extrapolation. The remaining two processes involve pions interacting with hyperons and therefore can be analyzed in conjunction with the kaon processes in $SU(3)$ HB$\chi$-PT, or independently using $SU(2)$ HB$\chi$-PT. Meson-baryon scattering has been developed to several non-trivial orders in the $SU(3)$ HB$\chi$-PT expansion in Refs.~\cite{Liu:2006xja,Liu:2007ct}, extending earlier work on kaon-nucleon scattering in Ref.~\cite{Kaiser:2001hr}. A very recent paper~\cite{Mai:2009ce} has reconsidered the $SU(3)$ HB$\chi$-PT results using a different regularization scheme, and also derived results for pion-hyperon scattering in the $SU(2)$ HB$\chi$-PT expansion. These works make clear that the paucity of experimental data makes it very difficult to assess the convergence of the chiral expansion in the three-flavor case. Further, in the pion-hyperon system, the complete lack of experimental data precludes a separate analysis in the chiral two-flavor expansion. A lattice calculation of meson-baryon scattering analyzed using $\chi$-PT is therefore useful not only in making predictions for low-energy scattering at the physical point, but also for assessing the convergence of the chiral expansion for a range of quark masses at which present-day lattice calculations are being performed. Meson-baryon scattering is also of interest for several indirect reasons. The $K^- n$ interaction is important for the description of kaon condensation in the interior of neutron stars~\cite{KaplanNelson}, and meson-baryon interactions are essential input in determining the final-state interactions of various decays that are interesting for standard-model phenomenology (see Ref.~\cite{Lu:1994ex} for an example). Finally, in determining baryon excited states on the lattice, it is clear that the energy levels that represent meson-baryon scattering on the finite-volume lattice must be resolved before progress can be made regarding the extraction of single-particle excitations. The experimental input to existing $\chi$-PT analyses of meson-baryon scattering is extensively discussed in Refs.~\cite{Kaiser:2001hr,Liu:2006xja,Liu:2007ct,Mai:2009ce}. Threshold pion-nucleon scattering information is taken from experiments with pionic hydrogen and deuterium~\cite{Schroder:1999uq,Schroder:2001rc}, and the kaon-nucleon scattering lengths are taken from model-dependent extractions from kaon-nucleon scattering data~\cite{Martin:1980qe}. There is essentially no experimental information available on the pion-hyperon and kaon-hyperon scattering lengths. 
There have been two quenched lattice QCD studies of meson-baryon scattering parameters: the pioneering work of Ref.~\cite{Fukugita:1994ve} calculated pion-nucleon and kaon-nucleon scattering lengths at heavy pion masses without any serious attempt to extrapolate to the physical point, and Ref.~\cite{Meng:2003gm} calculated the $I=1$ $KN$ scattering length and found a result consistent with the current algebra prediction. In this work we calculate the lowest-lying energy levels for five meson-baryon processes that have no annihilation diagrams: $\pi^+\Sigma^+$, $\pi^+\Xi^0$, $K^+p$, $K^+n$, and $\overline{K}{}^0 \Xi^0$ in a mixed-action lattice QCD calculation with domain-wall valence quarks on the asqtad-improved coarse MILC configurations with $b\sim 0.125~{\rm fm}$ at four light-quark masses ($m_\pi\sim 291$, $352$, $491$ and $591$ MeV), and at two light quark masses ($m_\pi\sim 320$ and $441$ MeV) on the fine MILC configurations with $b\sim 0.09~{\rm fm}$, with substantially less statistics on the fine ensembles. We extract the s-wave scattering lengths from the two-particle energies, and analyze the five processes using $SU(3)$ HB$\chi$-PT. We find a rather conclusive lack of convergence in the three-flavor chiral expansion. We then consider $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ using $SU(2)$ HB$\chi$-PT and find that we are able to make reliable predictions of the scattering lengths at the physical point. We find \begin{eqnarray} a_{\pi^+\Sigma^+}&=& -0.197 \pm 0.017~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.098\pm 0.017~{\rm fm} \ , \label{eq:MP} \end{eqnarray} where the errors encompass statistical and systematic uncertainties. The leading order $\chi$-PT (current algebra) predictions for the scattering lengths are given by~\cite{Weinberg:1966kf}: \begin{eqnarray} a_{\pi^+\Sigma^+}&=& -0.2294~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.1158~{\rm fm} \ . \label{eq:CA} \end{eqnarray} Ultimately, either the chiral extrapolation should be performed after a continuum limit has been taken, or one should use the mixed-action extension of HB$\chi$-PT to perform the chiral extrapolations~\cite{Tiburzi:2005is,Chen:2007ug}. However, our results on the fine MILC configurations are statistics-limited and not yet sufficiently accurate to make this a useful exercise. Further, the explicit extrapolation formulas for the meson-baryon scattering lengths have not yet been determined in mixed-action $\chi$-PT. Despite these limitations, we expect the corrections from finite lattice spacing to be small for two principal reasons. Firstly, the meson-baryon scattering lengths are protected by chiral symmetry and therefore the (approximate) chiral symmetry of the domain wall valence fermions used in this work protects the scattering lengths from additive renormalization, which can be explicitly seen in the construction of the mixed-action baryon Lagrangian in Ref.~\cite{Chen:2007ug}. The mixed-action corrections do not appear until next-to-next-to-leading order in the chiral expansion of the meson-baryon scattering lengths. Secondly, our previous experience with this mixed-action lattice QCD program leads us to expect that discretization effects will be well encompassed within the overall errors we quote. In our precise calculation of meson-meson scattering, the predicted mixed-action corrections~\cite{Chen:2005ab,Chen:2006wf} were smaller than the uncertainties on a given ensemble~\cite{Beane:2007xs,Beane:2007uh}. This paper is organized as follows. 
In section~\ref{sec:MBSP} we isolate the five meson-baryon processes with no annihilation diagrams that are calculated in this work. We briefly review the standard L\"uscher method for extracting the scattering amplitude from two-particle energy levels in a finite volume in section~\ref{sec:finvol}. Particulars regarding the mixed-action lattice calculation and fitting methods are provided in section~\ref{sec:MAdetails}. Additional details can be found in Ref.~\cite{Beane:2008dv}. Mixing between two of the meson-baryon channels with the same quantum numbers is discussed in section~\ref{sec:MChAm}. In section~\ref{sec:su3CE} we consider chiral extrapolations of the lattice data using $SU(3)$ HB$\chi$-PT, and in section~\ref{sec:su2CE} we analyze the pion-hyperon lattice data using $SU(2)$ HB$\chi$-PT. Finally, we conclude in section~\ref{sec:conc}. \section{Meson-Baryon Scattering Processes} \label{sec:MBSP} \noindent It is a straightforward exercise to construct the six scattering channels involving the lowest-lying octet mesons and baryons that do not have annihilation diagrams, and to determine their isospin.~\footnote{The $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$ systems have the same quantum numbers, and therefore require a mixed channel analysis in order to extract the $\overline{K}{}^0\Sigma^+$ scattering length. This is discussed in Section~\ref{sec:MChAm}.} The particle content, isospin, and valence quark content of these meson-baryon states are shown in Table~\ref{tab:quarks1}. \begin{table} \begin{tabular}{ccc} \hline \hline Particles\ \ \ &\ \ \ Isospin\ \ \ &\ \ \ Quark Content \\ \hline $\pi^+\Sigma^+$ & 2 & $uuu\bar{d}s$ \\ $\pi^+\Xi^0$ & 3/2 & $uu\bar{d}ss$ \\ $K^+p$ & 1 & $uuud\bar{s}$ \\ $K^+n$ & 0~{\it and}~1 & $uudd\bar{s}$ \\ $\overline{K}{}^0\Sigma^+$ & 3/2 & $uu\bar{d}ss$ \\ $\overline{K}{}^0\Xi^0$ & 1 & $u\bar{d}sss$ \\ \hline \hline \end{tabular} \caption{Particle content, isospin, and valence quark structure of the meson-baryon states calculated in this work. As is clear from the valence quark content, these meson-baryon states have no annihilation diagrams.} \label{tab:quarks1} \end{table} We adopt the notation of Ref.~\cite{Liu:2006xja}, denoting the threshold T-matrix in the isospin basis as $T^{(I)}_{\phi B}$, where $I$ is the isospin of the meson-baryon combination, $\phi$ is the meson, and $B$ is the baryon. The five elastic meson-baryon scattering processes that we consider are then in correspondence with the isospin amplitudes according to \begin{eqnarray} T_{\pi^+\Sigma^+}=T^{(2)}_{\pi \Sigma}\ &;& \qquad T_{\pi^+\Xi^0}=T^{(3/2)}_{\pi \Xi} \ ; \nonumber \\ T_{K^+p}=T^{(1)}_{KN} \ ; \qquad T_{K^+n}&=&\frac{1}{2}(T^{(1)}_{KN}+T^{(0)}_{KN}) \ ; \qquad T_{\overline{K}{}^0\Xi^0}=T^{(1)}_{\overline{K}\Xi} \ . \nonumber\\ \label{eq:Tmatrices} \end{eqnarray} These threshold T-matrices are related to the scattering lengths $a_{\phi B}$ through \begin{equation} T_{\phi B}=4\pi\left(1+\frac{m_\phi}{m_B}\right) a_{\phi B} \ , \label{eq:Tanda} \end{equation} where $m_\phi$ is the meson mass and $m_B$ is the baryon mass. \section{Finite-Volume Calculation of Scattering Amplitudes} \label{sec:finvol} \noindent The s-wave scattering amplitude for two particles below inelastic thresholds can be determined using L\"uscher's method~\cite{luscher_formula}, which entails a measurement of one or more energy levels of the two-particle system in a finite volume. 
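The relations in Eqs.~(\ref{eq:Tmatrices}) and (\ref{eq:Tanda}) are simple enough to code directly; the following minimal sketch (with placeholder amplitudes, not results of this work) illustrates the conversions:
\begin{verbatim}
import math

def T_from_a(a, m_phi, m_B):
    """Threshold T-matrix from a scattering length, Eq. (Tanda)."""
    return 4.0 * math.pi * (1.0 + m_phi / m_B) * a

def a_from_T(T, m_phi, m_B):
    """Inverse relation."""
    return T / (4.0 * math.pi * (1.0 + m_phi / m_B))

# Placeholder isospin amplitudes (arbitrary units) combined as in Eq. (Tmatrices):
# T_{K+ n} = (T^{(1)}_{KN} + T^{(0)}_{KN}) / 2, with physical masses in GeV.
T_I1, T_I0 = -1.0, 0.1
m_K, m_N = 0.49368, 0.938
print(a_from_T(0.5 * (T_I1 + T_I0), m_K, m_N))
\end{verbatim}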
For two particles with masses $m_\phi$ and $m_B$ in an s-wave, with zero total three momentum, and in a finite volume, the difference between the energy levels and those of two non-interacting particles can be related to the inverse scattering amplitude via the eigenvalue equation~\cite{luscher_formula} \begin{eqnarray} p\cot\delta(p) \ =\ \frac{1}{\pi L}\ {\bf S}\left(\,\frac{p L}{2\pi}\,\right)\ \ , \label{eq:energies} \end{eqnarray} where $\delta(p)$ is the elastic-scattering phase shift, and the regulated three-dimensional sum is \begin{eqnarray} {\bf S}\left(\,{\eta}\, \right)\ \equiv \ \sum_{\bf j}^{ |{\bf j}|<\Lambda} \frac{1}{|{\bf j}|^2-\eta^2}\ -\ {4 \pi \Lambda} \ \ \ . \label{eq:Sdefined} \end{eqnarray} The sum in Eq.~(\ref{eq:Sdefined}) is over all triplets of integers ${\bf j}$ such that $|{\bf j}| < \Lambda$ and the limit $\Lambda\rightarrow\infty$ is implicit~\cite{Beane:2003da}. This definition is equivalent to the analytic continuation of zeta-functions presented by L\"uscher~\cite{luscher_formula}. In Eq.~(\ref{eq:energies}), $L$ is the length of the spatial dimension in a cubically-symmetric lattice. The energy eigenvalue, $E_n$, and its deviation from the sum of the rest masses of the particle, $\Delta E_n$, are related to the center-of-mass momentum $p_n$, a solution of Eq.~(\ref{eq:energies}), by \begin{eqnarray} \Delta E_n \ & \equiv & E_n\ -\ m_\phi \ - \ m_B \ =\ \sqrt{\ p_n^2\ +\ m_\phi^2\ } \ +\ \sqrt{\ p_n^2\ +\ m_B^2\ } \ -\ m_\phi\ - \ m_B \ ; \nonumber\\ & = & \frac{p_n^2}{2 \mu_{\phi B}}\ +\ ... \ \ \ , \label{eq:energieshift} \end{eqnarray} where $\mu_{\phi B}$ is the reduced mass of the meson-baryon system. In the absence of interactions between the particles, $|p\cot\delta|=\infty$, and the energy levels occur at momenta ${\bf p} =2\pi{\bf j}/L$, corresponding to single-particle modes in a cubic cavity with periodic boundary conditions. Expanding Eq.~(\ref{eq:energies}) about zero momenta, $p\sim 0$, one obtains the familiar relation~\footnote{In order to be consistent with the meson-baryon literature, we have chosen to use the ``particle physics'' definition of the scattering length, as opposed to the ``nuclear physics'' definition, which is opposite in sign.} \begin{eqnarray} \Delta E_0 & = & -\frac{2\pi a}{\mu_{\phi B} L^3} \left[\ 1\ +\ c_1 \frac{a}{L}\ +\ c_2 \left( \frac{a}{L} \right)^2 \ \right ] \ +\ {\cal O}\left(\frac{1}{L^6}\right) \ \ , \label{eq:luscher_a} \end{eqnarray} with \begin{eqnarray} c_1 & = & \frac{1}{\pi} \sum_{{\bf j}\ne {\bf 0}}^{ |{\bf j}|<\Lambda} \frac{1}{|{\bf j}|^2}\ -\ 4 \Lambda \ \ =\ -2.837297 \ \ \ ,\ \ \ c_2\ =\ c_1^2 \ -\ \frac{1}{\pi^2} \sum_{{\bf j}\ne {\bf 0}} \frac{1}{|{\bf j}|^4} \ =\ 6.375183 \ , \end{eqnarray} and $a$ is the scattering length, defined by \begin{eqnarray} a & = & \lim_{p\rightarrow 0}\frac{\tan\delta(p)}{p} \ . \label{eq:scatt} \end{eqnarray} As the finite-volume lattice calculation cannot achieve $p=0$ (except in the absence of interactions), in quoting a lattice value for the scattering length extracted from the ground-state energy level, it is important to determine the error associated with higher-order range corrections. 
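As an illustration of how Eq.~(\ref{eq:luscher_a}) is used in practice, the following minimal sketch (not the analysis code of this work) inverts the $1/L$ expansion for the scattering length by fixed-point iteration, which suffices when $|a|\ll L$. As a rough cross-check it is applied to the $\pi^+\Sigma^+$ energy shift of coarse ensemble ({\it ii}) quoted below in Table~\ref{tab:latticequant}, with $L=20$ in lattice units (since $L\sim 2.5$~fm and $b\sim 0.125$~fm).
\begin{verbatim}
import math

C1, C2 = -2.837297, 6.375183       # the numerical constants quoted above

def scattering_length(dE, m_phi, m_B, L, n_iter=100):
    """Invert the 1/L expansion of Eq. (luscher_a) for a (valid for |a| << L)."""
    mu = m_phi * m_B / (m_phi + m_B)            # reduced mass
    a = -dE * mu * L**3 / (2.0 * math.pi)       # leading 1/L^3 estimate
    for _ in range(n_iter):                     # fixed-point iteration on the
        a = -dE * mu * L**3 / (2.0 * math.pi) \
            / (1.0 + C1 * a / L + C2 * (a / L)**2)
    return a

# pi+ Sigma+ on coarse ensemble (ii), all inputs in lattice units, L = 20:
print(scattering_length(dE=0.0148, m_phi=0.22305, m_B=0.8531, L=20.0))
# ~ -2.35, compatible with the tabulated a_{pi Sigma} = -2.36(09)(15)
\end{verbatim}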
\section{Lattice Calculation and Data Analysis} \label{sec:MAdetails} \noindent In calculating the meson-baryon scattering lengths, the mixed-action lattice QCD scheme was used in which domain-wall quark~\cite{Kaplan:1992bt,Shamir:1992im,Shamir:1993zy,Shamir:1998ww,Furman:1994ky} propagators are generated from a smeared source on $n_f = 2+1$ asqtad-improved~\cite{Orginos:1999cr,Orginos:1998ue} rooted, staggered sea quarks~\cite{Bernard:2001av}. To improve the chiral symmetry properties of the domain-wall quarks, hypercubic-smearing (HYP-smearing)~\cite{Hasenfratz:2001hp,DeGrand:2002vu,DeGrand:2003in} was used in the gauge links of the valence-quark action. In the sea-quark sector, there has been significant debate regarding the validity of taking the fourth root of the staggered fermion determinant at finite lattice spacing~\cite{Durr:2004as,Durr:2004ta,Creutz:2006ys,Bernard:2006zw,Bernard:2006vv,Creutz:2007nv,Bernard:2006ee,Bernard:2006qt,Creutz:2007yg,Creutz:2007pr,Durr:2006ze,Hasenfratz:2006nw,Shamir:2006nj,Sharpe:2006re}. While there is no proof, there are arguments to suggest that taking the fourth root of the fermion determinant recovers the contribution from a single Dirac fermion. The results of this paper assume that the fourth-root trick recovers the correct continuum limit of QCD. The present calculations were performed predominantly with the coarse MILC lattices with a lattice spacing of $b\sim 0.125$~fm, and a spatial extent of $L\sim 2.5$~fm. On these configurations, the strange quark was held fixed near its physical value while the degenerate light quarks were varied over a range of masses corresponding to the pion masses shown in Table~\ref{tab:MILCcnfs}. See Ref.~\cite{Beane:2008dv} for further details. Results were also obtained on a coarse MILC ensemble with a spatial extent of $L\sim 3.5$~fm. However, this data is statistics limited. In addition, calculations were performed on two fine MILC ensembles at $L\sim 2.5$~fm with $b\sim 0.09$~fm. On the coarse MILC lattices, Dirichlet boundary conditions were implemented to reduce the original time extent of 64 down to 32, which saved a nominal factor of two in computational time. While this procedure leads to minimal degradation of a nucleon signal, it does limit the number of time slices available for fitting meson properties. By contrast, on the fine MILC ensembles, anti-periodic boundary conditions were implemented and all time slices are available. 
\begin{table}[!ht] \begin{ruledtabular} \begin{tabular}{cccccccc} Ensemble & $m_\pi$(MeV) & $b m_l$ & $b m_s$ & $b m^{dwf}_l$ & $ b m^{dwf}_s $ & $10^3 \times bm_{res}$~\protect\footnote{Computed by the LHP collaboration for the coarse ensembles.} & \# of props \\ \hline ({\it i}) 2064f21b676m007m050 &291 & 0.007 & 0.050 & 0.0081 & 0.081 & $1.604\pm 0.038$ & 1039\ $\times$\ 24 \\ ({\it ii}) 2064f21b676m010m050 &352 & 0.010 & 0.050 & 0.0138 & 0.081 & $1.552\pm 0.027$ & 769\ $\times$\ 24 \\ ({\it iii}) 2064f21b679m020m050 & 491& 0.020 & 0.050 & 0.0313 & 0.081 & $1.239\pm 0.028$ & 486\ $\times$\ 24 \\ ({\it iv}) 2064f21b681m030m050 &591 & 0.030 & 0.050 & 0.0478 & 0.081 & $0.982\pm 0.030$ & 564\ $\times$\ 24 \\ \hline ({\it v}) 2864f21b676m010m050 &352 & 0.010 & 0.050 & 0.0138 & 0.081 & $1.552\pm 0.027$ & 128\ $\times$\ 8 \\ \hline ({\it vi}) 2896f21b709m0062m031 & 320& 0.0062 & 0.031 & 0.0080 & 0.0423 & $0.380\pm 0.006$ & 1001\ $\times$\ 8 \\ ({\it vii}) 2896f21b709m0124m031 &441 & 0.0124 & 0.031 & 0.0080 & 0.0423 & $0.380\pm 0.006$ & 513\ $\times$\ 3 \\ \end{tabular} \end{ruledtabular} \caption{The parameters of the MILC gauge configurations and domain-wall propagators used in this work. The subscript $l$ denotes light quark (up and down), and $s$ denotes the strange quark. The superscript $dwf$ denotes the bare-quark mass for the domain-wall fermion propagator calculation. The last column is the number of configurations times the number of sources per configuration. Ensembles ({\it i})-({\it iv}) have $L\sim 2.5$~fm and $b\sim 0.125$~fm; Ensemble ({\it v}) has $L\sim 3.5$~fm and $b\sim 0.125$~fm; Ensembles ({\it vi}),({\it vii}) have $L\sim 2.5$~fm and $b\sim 0.09$~fm.} \label{tab:MILCcnfs} \end{table} The correlation function that projects onto the zero momentum state for the meson-baryon system is \begin{equation} C_{\phi B}(t)={\cal P}_{ij}\sum_{{\bf x,y}}\langle \phi^{\dagger}(t,{\bf x}) \overline{B_i}(t,{\bf y}) \phi(0,{\bf 0}) B_j(0,{\bf 0})\rangle \ , \end{equation} where ${\cal P}_{ij}$ is a positive-energy projector. For instance, in the case of $K^+ p$, the interpolating operators for the $K^+$ and the proton are \begin{eqnarray} \phi(t,{\bf x})&=&K^+(t,{\bf x})=\overline{s}(t,{\bf x})\gamma_5 u(t,{\bf x}) \ ; \nonumber \\ B_i(t,{\bf x})&=&p_i(t,{\bf x})=\epsilon_{abc}u_i^a(t,{\bf x})\left( u^{b\mathrm{T}}(t,{\bf x})C\gamma_5 d^c(t,{\bf x})\right) \ . \end{eqnarray} The masses of the mesons and baryons are extracted using the assumed form of the large-time behavior of the single particle correlators as a function of time. As $t\rightarrow \infty$, the ground state dominates; however, fluctuations of the correlator increase with respect to the ground state. The meson and baryon two-point correlators, $C_{\phi}(t)$ and $C_{B}(t)$, behave as \begin{equation} C_{\phi}(t) \ \rightarrow \ {\cal A_\mathrm{1}}\ e^{-m_{\phi} \ t}, \qquad C_{B}(t) \ \rightarrow \ {\cal A_\mathrm{2}}\ e^{-m_{B} \ t}\ , \label{eq:correlator} \end{equation} respectively, in the limits $t\rightarrow\infty$ and $L\rightarrow\infty$. In relatively large lattice volumes the energy difference between the interacting and non-interacting meson-baryon states is a small fraction of the total energy, which is dominated by the masses of the mesons and baryons~\cite{Beane:2007xs}. 
In order to extract this energy difference the ratio of correlation functions, $G_{\phi B}(t)$, is formed \begin{equation} G_{\phi B}( t) \equiv \frac{C_{\phi B}( t)}{C_{\phi}(t) C_{B}(t)} \ = \ \sum_{n=0}^\infty\ {\cal D}_n\ e^{-\Delta E_n\ t} \ , \label{eq:ratio_correlator} \end{equation} where $\Delta E \equiv \Delta E_0$ is the desired energy shift. With $\Delta E$, and the extracted masses of the meson and baryon, the scattering length can be calculated using Eqs.~(\ref{eq:energies}) and~(\ref{eq:energieshift}), or, if $a<<L$, from Eq.~(\ref{eq:luscher_a}). For the meson-baryon scattering lengths calculated in this work, the difference between the exact and perturbative eigen-equations is negligible. A variety of fitting methods have been used, including standard chi-square minimization fits to one and two exponentials. Generalized effective energy plots are particularly useful for analyzing the lattice data and for estimating systematic errors~\cite{Beane:2009ky}. These plots are constructed by taking the ratio of the correlators at times $t$, and $t+n_J$ (where $n_J$ is an integer) \begin{equation} m_{\phi,B}^{\mathrm{eff}}=\frac{1}{n_J}\mathrm{log} \left(\frac{C_{\phi,B}(t)}{C_{\phi,B}(t+n_J)}\right), \qquad \Delta E_{\phi B}^{\mathrm{eff}}=\frac{1}{n_J}\mathrm{log} \left(\frac{G_{\phi B}(t)}{G_{\phi B}(t+n_J)}\right) \ . \label{eq:effscatteq} \end{equation} With $n_J=1$, the standard effective mass and energy plots are recovered. Generalized effective masses form a system of linear equations for each $n_J$ over the time interval where the data is fit. For instance, if the interval is given by $\Delta t=t_2-t_1$, then there is one equation for $m^\mathrm{eff}$ at each $t$, for any $n_J$ that fits within $\Delta t$. The equations can be solved for $m^\mathrm{eff}$ by casting them into the form of the so-called normal equation~\cite{Dahl}. Since each $n_J$ constitutes a different effective mass plot, the number of degrees of freedom is increased significantly. This method provides a fitting routine that is faster than standard least-squares fitting. Additional details regarding the utility of generalized effective mass and energy plots can be found in Ref.~\cite{Beane:2009gs}. The interpolating operator at the source is constructed from gauge-invariantly-smeared quark field operators, while at the sink, the interpolating operator is constructed from either local quark field operators, or from the same smeared quark field operators used at the source, leading to two sets of correlation functions. For brevity, we refer to the two sets of correlation functions that result from these source and sink operators as {\it smeared-point} (SP) and {\it smeared-smeared} (SS) correlation functions, respectively. By forming a linear combination of the SP and SS correlation functions, $C^{\mathrm{(SS)}} \ - \ \alpha C^{\mathrm{(SP)}}$, we are able to remove the first excited state, thus gaining early time slices for fitting~\cite{Beane:2009gs}. This effect is illustrated in Fig.~\ref{fig:m010pisigeffSPSS}, which is the effective $\Delta E_{\pi^+\Sigma^+}$ plot for coarse MILC ensemble ({\it ii}). We plot $C^{\mathrm{(SS)}}$, $C^{\mathrm{(SP)}}$, and $C^{\mathrm{(SS)}} \ - \ \alpha C^{\mathrm{(SP)}}$ with $\alpha$ tuned to remove the first excited state. The effective energies, effective masses, and energy splittings are plotted for coarse MILC ensemble ({\it ii}) in Figs.~\ref{fig:energylevels},~\ref{fig:m010single}, and \ref{fig:m010two}. 
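The generalized effective energies of Eq.~(\ref{eq:effscatteq}) amount to a one-line operation on a correlator array. The sketch below uses a synthetic single-exponential correlator as a stand-in for lattice data; the same function applied to the ratio $G_{\phi B}(t)$ yields $\Delta E^{\mathrm{eff}}_{\phi B}$, and the $C^{\mathrm{(SS)}}-\alpha\,C^{\mathrm{(SP)}}$ combination is formed before taking the logarithm.
\begin{verbatim}
import numpy as np

def effective_energy(corr, nJ=1):
    """Generalized effective mass/energy of Eq. (effscatteq) from C[t]."""
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-nJ] / corr[nJ:]) / nJ

def combine(corr_ss, corr_sp, alpha):
    """SS - alpha*SP combination used to cancel the first excited state
    (alpha would be tuned as described in the text)."""
    return np.asarray(corr_ss) - alpha * np.asarray(corr_sp)

# Synthetic single-exponential correlator with mass 0.223 (placeholder):
t = np.arange(32)
C = 1.3 * np.exp(-0.223 * t)
print(effective_energy(C, nJ=2)[:3])   # flat plateau at ~0.223 for any nJ
\end{verbatim}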
All of the necessary quantities needed for extraction of the scattering lengths are contained in Table~\ref{tab:latticequant}, which also contains the sum of meson and baryon masses at each quark mass. Fig.~\ref{fig:m010Savageplot} shows the results for all five processes, and the behavior of Eq.~(\ref{eq:energies}), versus the interaction energy, presented in terms of the dimensionless quantities $p\cot\delta/m_\pi$ and $\Delta E/m_\pi$. The curve shown in Fig.~\ref{fig:m010Savageplot} is $p\cot\delta/m_\pi$ for the case of $m_\phi=m_K$, and $m_B=m_p$, as $\Delta E/m_\pi$ is varied. ${\bf S}(\eta)$ in Eq.~(\ref{eq:Sdefined}) is a function of the meson and baryon masses, so there will be a unique curve for each combination of $m_\phi$ and $m_B$. Consequently, the $K^+p$, and $K^+n$ data points fall on this curve. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{pisigmam010multi.eps} \caption{Effective $\Delta E_{\pi^+\Sigma^+}$ plot for coarse MILC ensemble ({\it ii}) from correlation functions $C^{\mathrm{(SS)}}$, $C^{\mathrm{(SP)}}$ and $C^{\mathrm{(SS)}} \ - \ \alpha C^{\mathrm{(SP)}}$. By taking the linear combination with $\alpha$ tuned to remove the first excited state, earlier time slices are gained for fitting.} \label{fig:m010pisigeffSPSS} \end{figure} \begin{figure} \centering \subfloat[]{ \includegraphics[width=0.47\linewidth]{Pi_Sigma_E_m010.eps}} \hspace{1pt} \vspace{2pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Pi_Xi_E_m010.eps}} \hspace{1pt} \vspace{2pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Kaon_Proton_E_m010.eps}} \hspace{1pt} \vspace{2pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Kaon_Neutron_E_m010.eps}} \hspace{1pt} \vspace{2pt} \subfloat[]{ \label{fig:energylevels:e} \includegraphics[width=0.47\linewidth]{Kaon_Sigma_E_m010.eps}} \hspace{1pt} \vspace{2pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Kaon_Xi_E_m010.eps}} \caption{Effective energy plots of the six meson-baryon processes shown in Table~\ref{tab:quarks1}. The plots are from MILC ensemble ({\it ii}), $n_J=2$, and the linear combination $C^{\mathrm{(SS)}} \ - \ \alpha C^{\mathrm{(SP)}}$ is plotted. The dashed line is the sum of the meson and baryon masses for each process, while the error bars represent the jackknife uncertainty. Note that the bE axis of (e) is a factor of two larger in span than the other plots to encompass the dashed line at $m_\pi+m_\Xi=1.124$.} \label{fig:energylevels} \end{figure} \begin{figure} \centering \subfloat[]{ \label{fig:m010single:a} \includegraphics[width=0.45\linewidth]{Pion_m010.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:m010single:b} \includegraphics[width=0.45\linewidth]{Kaon_m010.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:m010single:c} \includegraphics[width=0.45\linewidth]{Proton_m010.eps}} \hspace{8pt} \subfloat[]{ \label{fig:m010single:d} \includegraphics[width=0.45\linewidth]{Sigma_m010.eps}} \hspace{8pt} \subfloat[]{ \label{fig:m010single:e} \includegraphics[width=0.45\linewidth]{Xi_m010.eps}} \caption{Single particle effective mass plots for coarse MILC ensemble ({\it ii}). Here we choose $n_J=2$, and the linear combination $C^{\mathrm{(SS)}}-\alpha C^{\mathrm{(SP)}}$ is plotted. 
The inner shaded bands are the jackknife uncertainties of the fits to the effective masses, and the outer bands are the jackknife uncertainty and systematic uncertainty added in quadrature over the indicated window of time slices.} \label{fig:m010single} \end{figure} \begin{figure} \centering \subfloat[]{ \label{fig:m010two:f} \includegraphics[width=0.45\linewidth]{Pi_Sigma_m010.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:m010two:g} \includegraphics[width=0.45\linewidth]{Pi_Xi_m010.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:m010two:h} \includegraphics[width=0.45\linewidth]{Kaon_Proton_m010.eps}} \hspace{8pt} \subfloat[]{ \label{fig:m010two:i} \includegraphics[width=0.45\linewidth]{Kaon_Neutron_m010.eps}} \hspace{8pt} \subfloat[]{ \label{fig:m010two:k} \includegraphics[width=0.45\linewidth]{Kaon_Xi_m010.eps}} \caption{Meson-baryon effective energy difference plots for coarse MILC ensemble ({\it ii}). Here we choose $n_J=2$, and the linear combination $C^{\mathrm{(SS)}}-\alpha C^{\mathrm{(SP)}}$ is plotted. The inner shaded bands are the jackknife uncertainties of the fits to the effective energy differences, and the outer bands are the jackknife uncertainty and systematic uncertainty added in quadrature over the indicated window of time slices.} \label{fig:m010two} \end{figure} \begin{table} \begin{ruledtabular} \begin{tabular}{ccccc} Quantity & m007 ({\it i})& m010 ({\it ii})& m020 ({\it iii})& m030 ({\it iv}) \\ \hline $m_{\pi}$ & 0.18384(31)(03) & 0.22305(25)(08) & 0.31031(38)(95) & 0.37513(44)(13) \\ $m_{k}$ & 0.36783(32)(42) & 0.37816(26)(11) & 0.40510(33)(37) & 0.43091(66)(16) \\ $m_{p}$ & 0.6978(61)(08) & 0.7324(31)(10) & 0.8069(22)(14) & 0.8741(16)(05) \\ $m_{\Sigma}$ & 0.8390(22)(03) & 0.8531(19)(08) & 0.8830(18)(17) & 0.9213(13)(03) \\ $m_{\Xi}$ & 0.8872(13)(16) & 0.9009(13)(10) & 0.9233(18)(04) & 0.9461(14)(08) \\ $f_{\pi}$ & 0.09257(16) & 0.09600(14) & 0.10208(14) & 0.10763(32) \\ $f_{K}$ & 0.10734(10) & 0.10781(18) & 0.10976(17) & 0.11253(31) \\ \hline $\Delta E_{\pi\Sigma}$ & 0.0150(14)(08) & 0.0148(08)(13) & 0.0111(10)(08) & 0.0100(10)(11) \\ $\Delta E_{\pi\Xi}$ & 0.00646(64)(98) & 0.0062(05)(12) & 0.00431(68)(43) & 0.00421(76)(60) \\ $\Delta E_{K p}$ & 0.0140(22)(30) & 0.0146(15)(13) & 0.0092(10)(51) & 0.0087(16)(16) \\ $\Delta E_{K n}$ & 0.0057(18)(16) & 0.0051(14)(09) & 0.0036(09)(12) & 0.0028(10)(11) \\ $\Delta E_{K\Xi}$ & 0.0118(08)(13) & 0.0125(05)(14) & 0.0085(08)(31) & 0.0086(16)(16) \\ \hline $a_{\pi\Sigma}$ & -2.12(16)(09) & -2.36(09)(15) & -2.30(15)(13) & -2.36(18)(19)\\ $a_{\pi\Xi}$ & -1.08(09)(14) & -1.19(09)(20) & -1.08(15)(09) & -1.20(18)(15) \\ $a_{Kp}$ & -2.80(32)(44) & -2.95(21)(19) & -2.3(0.2)(1.0) & -2.27(31)(32) \\ $a_{Kn}$ & -1.41(37)(34) & -1.33(30)(21) & -1.05(22)(30) & -0.89(27)(31) \\ $a_{K\Xi}$ & -2.62(13)(21) & -2.77(08)(23) & -2.18(15)(63) & -2.29(30)(32) \\ \hline $m_{\pi}+m_{p}$ & 0.8817(61) & 0.9555(31) & 1.1172(23) & 1.2492(18)\\ $m_{\pi}+m_{\Sigma}$ & 1.0229(23) & 1.0761(20) & 1.1933(19) & 1.2964(15) \\ $m_{\pi}+m_{\Xi}$ & 1.0710(14) & 1.1240(14) & 1.2336(19) & 1.3212(16) \\ $m_{K}+m_{p}$ & 1.0657(61) & 1.1106(31) & 1.2119(23) & 1.3050(19) \\ $m_{K}+m_{\Sigma}$ & 1.2069(23) & 1.2312(20) & 1.2881(19) & 1.3522(16) \\ $m_{K}+m_{\Xi}$ & 1.2550(14) & 1.2791(15) & 1.3284(19) & 1.3770(17) \\ \end{tabular} \end{ruledtabular} \caption{Lattice calculation results from the four coarse MILC ensembles which enter the analysis of the meson-baryon scattering lengths. 
The first uncertainty is statistical and the second uncertainty is systematic due to fitting. All quantities are in lattice units.} \label{tab:latticequant} \end{table} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Savage_plot_m010.eps} \caption{$p\cot\delta/m_\pi$ versus $\Delta E_{\phi B}/m_\pi$ for the five elastic scattering processes from coarse MILC ensemble ({\it ii}). The curve shown is $p\cot\delta/m_\pi$ for the case of $m_\phi=m_K$, and $m_B=m_p$.} \label{fig:m010Savageplot} \end{figure} \section{The Mixed Channel} \label{sec:MChAm} As is clear from Table I, the $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$ states carry the same global quantum numbers, and therefore couple to the same energy-eigenstates in the finite lattice volume. For energies above both kinematic thresholds, a determination of the three scattering parameters associated with these states (two phases and one mixing-angle) requires a coupled-channel analysis. Therefore, three energy levels above both kinematic thresholds must be determined in the lattice calculation to fully characterize scattering in this kinematic regime. In the present lattice volumes, the two-particle energies in these channels are close to the respective kinematic thresholds, and the energy of the lower-lying $\pi^+\Xi^0$ state (which is below the $\overline{K}{}^0\Sigma^+$ threshold) is determined by the low-energy elastic scattering parameters, making it amenable to analysis using Eqs.~(\ref{eq:energies}), (\ref{eq:Sdefined}), (\ref{eq:energieshift}) and (\ref{eq:luscher_a}). A priori, one would expect both the $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$ interpolating operators to couple to a common ground state (dominantly the $\pi^+ \Xi^0$ state), with a $\overline{K}{}^0\Sigma^+$-related level as the first excited state (for the lattice volumes considered here, the non-interacting $\pi^+\Xi^0$ system with two units of relative momentum has an energy considerably above the $\overline{K}{}^0\Sigma^+$ threshold). Interestingly, within our statistical and systematic uncertainties, we find distinct energy levels from the two interpolating operators. This is consistent with strong coupling to the color-singlet constituents of the interpolating operator and only very weak couplings to states that require color rearrangement (see Fig.~\ref{fig:energylevels}). While this is suggestive that mixing between the states is small, a definitive interpretation requires an extraction of three energy levels above the kinematic thresholds of the $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$, and below the next kinematic threshold, in order to determine the three scattering parameters. The optimal way to extract these levels is to use the variational method~\cite{Michael:1985ne,Luscher:1990ck}, which requires the full matrix of correlation functions to be calculated, and diagonalized. The extraction of the scattering parameters would then proceed via an extension of the variational method to the coupled-channel scenario~\cite{Detmold:2004qn,He:2005ey}. Due to our incomplete knowledge of the three mixed-channel energy levels, we do not attempt to extract any $\overline{K}{}^0\Sigma^+$ scattering parameters in this work. 
\section{SU(3) HB$\chi$PT Extrapolation} \label{sec:su3CE} \subsection{Scattering Length Formulas} \label{sec:scattLextrap} \noindent The scattering lengths of the five meson-baryon processes listed in Eq.~(\ref{eq:Tmatrices}) are, to $\mathcal{O}(m_\pi^3)$ in $SU(3)$ HB$\chi$-PT~\cite{Liu:2006xja,Liu:2007ct}, \begin{eqnarray} a_{\pi^+\Sigma^+}=\frac{1}{4 \pi}\frac{m_\Sigma}{m_\pi+m_\Sigma} \bigg[ -\frac{2m_\pi}{f_\pi^2} + \frac{2m_\pi^2}{f_\pi^2}C_1 + \mathcal{Y}_{\pi^+\Sigma^+}(\mu ) + 8 h_{123}(\mu )\frac{m_\pi^3}{f_\pi^2} \bigg] \ ; \label{eq:apisigfull} \end{eqnarray} \begin{eqnarray} a_{\pi^+\Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_\pi+m_\Xi} \bigg[ -\frac{m_\pi}{f_\pi^2} + \frac{m_\pi^2}{f_\pi^2}C_{01} + \mathcal{Y}_{\pi^+\Xi^0}(\mu ) + 8 h_1(\mu )\frac{m_\pi^3}{f_\pi^2} \bigg] \ ; \label{eq:apixifull} \end{eqnarray} \begin{eqnarray} a_{K^+ p}=\frac{1}{4 \pi}\frac{m_N}{m_K+m_N} \bigg[ -\frac{2m_K}{f_K^2} + \frac{2m_K^2}{f_K^2}C_1 + \mathcal{Y}_{K^+ p}(\mu ) + 8 h_{123}(\mu )\frac{m_K^3}{f_K^2} \bigg] \ ; \label{eq:akpfull} \end{eqnarray} \begin{eqnarray} a_{K^+ n}=\frac{1}{4 \pi}\frac{m_N}{m_K+m_N} \bigg[ -\frac{m_K}{f_K^2} + \frac{m_K^2}{f_K^2}C_{01} + \mathcal{Y}_{K^+ n}(\mu ) + 8 h_1(\mu )\frac{m_K^3}{f_K^2} \bigg] \ ; \label{eq:aknfull} \end{eqnarray} \begin{eqnarray} a_{\overline{K}{}^0 \Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_K+m_\Xi} \bigg[ -\frac{2m_K}{f_K^2} + \frac{2m_K^2}{f_K^2}C_1 + \mathcal{Y}_{\overline{K}{}^0 \Xi^0}(\mu ) + 8 h_{123}(\mu )\frac{m_K^3}{f_K^2} \bigg] \ , \label{eq:akxifull} \end{eqnarray} where we have defined $C_{01}\equiv C_0+C_1$ and $h_{123}\equiv h_1-h_2+h_3$, and the loop functions are given by \begin{eqnarray} \mathcal{Y}_{\pi^+\Sigma^+}(\mu )&=&\frac{m_\pi^2}{2\pi^2 f_\pi^4}\bigg\{-m_\pi\bigg(\frac32-2\ln\frac{m_\pi}{\mu}-\ln\frac{m_K}{\mu}\bigg) \nonumber \\ && -\sqrt{m_K^2-m_\pi^2}\arccos\frac{m_\pi}{m_K} + \frac{\pi}{2}\bigg[3F^2 m_\pi-\frac13 D^2 m_\eta\bigg]\bigg\} \ ; \label{eq:pisigloop} \end{eqnarray} \begin{eqnarray} \mathcal{Y}_{\pi^+ \Xi^0}(\mu )&=&\frac{m_\pi^2}{4\pi^2 f_\pi^4} \bigg\{ -m_\pi \bigg( \frac32 -2\ln\frac{m_\pi}{\mu}-\ln\frac{m_K}{\mu}\bigg) -\sqrt{m_K^2-m_\pi^2}\bigg(\pi + \arccos\frac{m_\pi}{m_K}\bigg) \nonumber\\ && +\frac{\pi}{4}\bigg[3(D-F)^2 m_\pi-\frac13(D+3F)^2 m_\eta\bigg]\bigg\} \ ; \label{eq:pixiloop} \end{eqnarray} \begin{eqnarray} \mathcal{Y}_{K^+p}(\mu )&=&\frac{m_K^2}{4\pi^2 f_K^4}\bigg\{m_K \bigg(-3+2\ln\frac{m_\pi}{\mu} + \ln\frac{m_K}{\mu}+3 \ln\frac{m_\eta}{\mu} \bigg) \nonumber \\ && +2\sqrt{m_K^2-m_\pi^2} \ln\frac{m_K+\sqrt {m_K^2-m_\pi^2}}{m_\pi} -3\sqrt{m_\eta^2-m_K^2}\arccos\frac{m_K}{m_\eta} \nonumber\\ && - \frac{\pi}{6} (D-3F)\bigg[ 2(D+F) \frac{m_\pi^2}{m_\eta+m_\pi} +(D+5F) m_\eta \bigg] \bigg\} \ ; \label{eq:kploop} \end{eqnarray} \begin{eqnarray} \mathcal{Y}_{K^+n}(\mu )&=&\frac{\mathcal{Y}_{K^+p}}{2} + \frac{3m_K^2}{8\pi^2 f_K^4}\bigg\{m_K \bigg( \ln\frac{m_\pi}{\mu}-\ln\frac{m_K}{\mu} \bigg) + \sqrt{m_K^2-m_\pi^2} \ln\frac{m_K+\sqrt{m_K^2-m_\pi^2}}{m_\pi}\nonumber\\ && + \frac{\pi}{3} (D-3F) \bigg[(D+F) \frac{m_\pi^2}{m_\eta+m_\pi} +\frac16(7D+3F) m_\eta \bigg] \bigg\} \ ; \label{eq:knloop} \end{eqnarray} \begin{eqnarray} \mathcal{Y}_{\overline{K}{}^0\Xi^0}^{(1)}(\mu )&=&\frac{m_K^2}{4\pi^2 f_K^4}\bigg\{m_K \bigg(-3+2\ln\frac{m_\pi}{\mu} + \ln\frac{m_K}{\mu}+3 \ln\frac{m_\eta}{\mu} \bigg) \nonumber \\ && +2\sqrt{m_K^2-m_\pi^2} \ln\frac{m_K+\sqrt {m_K^2-m_\pi^2}}{m_\pi} -3\sqrt{m_\eta^2-m_K^2}\arccos\frac{m_K}{m_\eta} \nonumber\\ && - \frac{\pi}{6} (D+3F)\bigg[ 2(D-F) \frac{m_\pi^2}{m_\eta+m_\pi} 
+(D-5F) m_\eta \bigg] \bigg\} \ . \label{eq:kxiloop} \end{eqnarray} In what follows, we choose $\mu=\Lambda_\chi=4\pi f_\pi$ and evaluate $f_\pi$ at its lattice physical value~\cite{Beane:2005rj}, and we take $m_\eta$ from the Gell-Mann-Okubo formula. These choices modify the chiral expansion at $\mathcal{O}(m_\pi^4)$ and are therefore consistent to the order at which we are working. The first mixed-action modifications to these HB$\chi$-PT extrapolation formulas appear as corrections to these loop functions, $\mathcal{Y}_{\phi B}$, and to the corresponding counterterms which absorb the scale dependence. Some of the mesons propagating in the loops appear as mixed valence-sea combinations, and thus the corresponding meson masses appearing in these functions are heavier by a known amount~\cite{Orginos:2007tw}. The precise form of the predicted corrections requires a computation of the scattering processes with mixed-action/partially quenched $\chi$-PT. Our physical parameters are consistent with Ref.~\cite{Mai:2009ce} (note that our decay constant convention differs by $\sqrt{2}$). Namely, $f_\pi=130.7~{\rm MeV}$, $m_\pi=139.57~{\rm MeV}$, $f_K=159.8~{\rm MeV}$, $m_K=493.68~{\rm MeV}$, $m_N=938~{\rm MeV}$, $m_\Sigma=1192~{\rm MeV}$ and $m_\Xi=1314~{\rm MeV}$. The axial couplings, $D$ and $F$, for coarse MILC ensembles ({\it ii})-({\it iv}) are taken from the mixed-action calculation of Ref.~\cite{Lin:2007ap}, and we extrapolate for coarse MILC ensemble ({\it i}) using these values. \subsection{Extrapolation to the Physical Point} \noindent For the purposes of fitting and visualization, it is useful to construct from the scattering lengths the functions $\Gamma^{(1,2)}$ which are polynomials in $m_\phi$. For the $\pi^+\Sigma^+$, $K^+p$, and $\overline{K}{}^0\Xi^0$ processes one defines\footnote{Here we use the standard notation, LO = leading order, NLO = next-to-leading order and so on.} \begin{eqnarray} \Gamma_{LO}^{(1)}\equiv-\frac{2 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1 \ ; \label{eq:GammaLHSLO1} \end{eqnarray} \begin{eqnarray} \Gamma_{NLO}^{(1)}\equiv-\frac{2 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1-C_1 m_\phi \ ; \label{eq:GammaLHSNLO1} \end{eqnarray} \begin{eqnarray} \Gamma_{NNLO}^{(1)}\equiv -\frac{2 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg) + \frac{f_\phi^2}{2m_\phi}\mathcal{Y}_{\phi B}(\Lambda_{\chi}) =1-C_1 m_\phi-4h_{123}(\Lambda_{\chi}) m_\phi^2 \ , \label{eq:GammaLHSNNLO1} \end{eqnarray} and for the $\pi^+\Xi^0$ and $K^+n$ processes one defines \begin{eqnarray} \Gamma_{LO}^{(2)}\equiv-\frac{4 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1 \ ; \label{eq:GammaLHSLO2} \end{eqnarray} \begin{eqnarray} \Gamma_{NLO}^{(2)}\equiv-\frac{4 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1-C_{01} m_\phi \ ; \label{eq:GammaLHSNLO2} \end{eqnarray} \begin{eqnarray} \Gamma_{NNLO}^{(2)}\equiv -\frac{4 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg) + \frac{f_\phi^2}{m_\phi}\mathcal{Y}_{\phi B}(\Lambda_{\chi}) =1-C_{01} m_\phi-8h_1(\Lambda_{\chi}) m_\phi^2 \ . \label{eq:GammaLHSNNLO2} \end{eqnarray} Notice that the left-hand sides of these equations are given entirely in terms of lattice-determined quantities, all evaluated under jackknife, whereas the right-hand sides provide convenient polynomial fitting functions. Plots of $\Gamma_{NLO}$ formed from the lattice data (all ensembles listed in Table~\ref{tab:MILCcnfs}) versus the Goldstone masses are given in Fig.~\ref{fig:KostasAlldata}. 
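As a quick numerical illustration of Eq.~(\ref{eq:GammaLHSNLO1}) (a rough cross-check only, not the jackknife analysis), the $\pi^+\Sigma^+$ entries of coarse ensemble ({\it ii}) in Table~\ref{tab:latticequant} give $\Gamma\simeq 0.77$, which may be compared with $1-C_1 m_\pi\simeq 0.77$ using the NLO LEC quoted below in Table~\ref{tab:LECpi} and $m_\pi\simeq 352$~MeV; the small difference is well within the quoted uncertainties.
\begin{verbatim}
import math

def Gamma1(a, f, m_phi, m_B):
    """Left-hand side of Eqs. (GammaLHS*1); all inputs in lattice units."""
    return -2.0 * math.pi * a * f**2 / m_phi * (1.0 + m_phi / m_B)

# pi+ Sigma+ on coarse ensemble (ii): a, f_pi, m_pi, m_Sigma from the tables.
print(Gamma1(a=-2.36, f=0.09600, m_phi=0.22305, m_B=0.8531))   # ~ 0.773
print(1.0 - 0.66 * 0.352)   # NLO expectation 1 - C_1 m_pi,       ~ 0.768
\end{verbatim}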
We see evidence in this plot that the fine and large-volume coarse data are statistically limited as compared to the coarse data. Therefore, we include only the coarse data in our fits. The fine data is, however, indicative that lattice-spacing effects are small. In the three-flavor chiral expansion, we have an overdetermined system at both NLO and NNLO. While there are five observables, there are two Low Energy Constants (LECs) at NLO, $C_0$ and $C_{01}$, and two LECs at NNLO, $h_1$ and $h_{123}$. Fits of the LECs from each process at NLO are given in Table~\ref{tab:LECpi} and the corresponding values of the scattering lengths are given in Table~\ref{tab:scattLpisig}. At NLO, the LECs are of natural size, and provide a consistent extraction within uncertainties. Correspondingly, the scattering lengths appear to deviate perturbatively from the LO values. The perturbative behavior of the scattering lengths at NLO is evident from the plots of $\Gamma_{NLO}$ versus the Goldstone masses given in Fig.~\ref{fig:KostasNLONNLO}. Clearly the deviations of the lattice data from unity are consistent with a perturbative expansion. At NNLO the situation changes dramatically. This is clear from the plots of $\Gamma_{NNLO}$ versus the Goldstone masses given in Fig.~\ref{fig:KostasNLONNLO}. The shift of the value of $\Gamma$ from NLO to NNLO is dependent on the renormalization scale $\mu$. With the choice $\mu=\Lambda_\chi$ one would expect this shift to be perturbative. However, this is not the case and therefore loop corrections are very large at the scale $\Lambda_\chi$. There are many strategies that one may take to fit the LECs in the overdetermined system. Here we fit the LECs to the $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ data, and then use these LECs to predict the kaon processes. Therefore, in Fig.~\ref{fig:KostasNLONNLO}, only (a) and (b) are fits. The fit LECs are given in Table~\ref{tab:LECpi}. While the NNLO LECs $h_1$ and $h_{123}$ appear to be of natural size, the NLO LECs $C_0$ and $C_{01}$ are unnaturally large and therefore are countering the large loop effects. The extrapolated $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths are given in Table~\ref{tab:scattLpisig} and appear to be perturbative. Table~\ref{tab:scattLpisig} also gives the extrapolated kaon-baryon scattering lengths with the LECs determined from the $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ data. The resulting NNLO predictions deviate by at least 100\% from the LO values. Other fitting strategies lead to this same conclusion: the kaon-baryon scattering lengths are unstable against chiral corrections in the three-flavor chiral expansion, over the range of light-quark masses that we consider. \begin{table} \begin{ruledtabular} \begin{tabular}{cccc} Quantity & NLO fit each process & NNLO fit $\pi^+\Sigma^+$,$\pi^+\Xi^0$ \\ \hline $C_1(\pi^+\Sigma^+)$ & 0.66(04)(11) GeV$^{-1}$ & 3.51(18)(25) GeV$^{-1}$ \\ $C_{01}(\pi^+\Xi^0)$ & 0.69(06)(22) GeV$^{-1}$ & 7.44(29)(69) GeV$^{-1}$ \\ $C_1(K^+ p)$ & 0.44(09)(23) GeV$^{-1}$ & - \\ $C_{01}(K^+ n)$ & 0.56(11)(27) GeV$^{-1}$ & - \\ $C_1(\overline{K}{}^0\Xi^0)$ & 0.50(06)(14) GeV$^{-1}$ & - \\ \hline $h_1$ & - & -0.59(08)(14) GeV$^{-2}$ \\ $h_{123}$ & - & -0.42(10)(10) GeV$^{-2}$ \\ \end{tabular} \end{ruledtabular} \caption{$SU(3)$ LECs fit from each process at NLO, and from $\pi^+\Sigma^+$, and~$\pi^+\Xi^0$ at NNLO. 
The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature.} \label{tab:LECpi} \end{table} \begin{table} \begin{ruledtabular} \begin{tabular}{ccccc} Quantity & LO (fm) & NLO fit (fm) & NLO (NNLO fit) (fm) & NNLO (fm) \\ \hline $a_{\pi\Sigma}$ & -0.2294 & -0.208(01)(03) & -0.117(06)(08) & -0.197(06)(08) \\ $a_{\pi\Xi}$ & -0.1158 & -0.105(01)(04) & 0.004(05)(11) & -0.096(05)(12) \\ $a_{Kp}$ & -0.3971 & -0.311(18)(44) & 0.292(35)(48) & -0.154(51)(63) \\ $a_{Kn}$ & -0.1986 & -0.143(10)(27) & 0.531(28)(68) & 0.128(42)(87) \\ $a_{K\Xi}$ & -0.4406 & -0.331(12)(31) & 0.324(39)(54) & -0.127(57)(70) \\ \end{tabular} \end{ruledtabular} \caption{$SU(3)$ extrapolated scattering lengths using the LECs from Table~\ref{tab:LECpi}. The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature. Note that the NLO (NNLO fit) column is using $C_1,C_{01}$ from the NNLO fit to $\pi^+\Sigma^+$,$\pi^+\Xi^0$.} \label{tab:scattLpisig} \end{table} \begin{figure} \centering \subfloat[]{ \includegraphics[width=0.47\linewidth]{Pi_SigmaKostasNLOALL.eps}} \hspace{1pt} \vspace{10pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Pi_XiKostasNLOALL.eps}} \hspace{1pt} \vspace{10pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Kaon_ProtonKostasNLOALL.eps}} \hspace{1pt} \vspace{10pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Kaon_NeutronKostasNLOALL.eps}} \hspace{1pt} \vspace{10pt} \subfloat[]{ \includegraphics[width=0.47\linewidth]{Kaon_XiKostasNLOALL.eps}} \caption{Plots of $\Gamma_{NLO}$ versus the Goldstone masses for the five meson-baryon processes. All lattice data is included.} \label{fig:KostasAlldata} \end{figure} \begin{figure} \centering \subfloat[]{ \label{fig:KostasplotsNLO:a} \includegraphics[width=0.45\linewidth]{Pi_Sigma_SU3.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:KostasplotsNLO:b} \includegraphics[width=0.45\linewidth]{Pi_Xi_SU3.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:KostasplotsNLO:c} \includegraphics[width=0.45\linewidth]{Kaon_Proton_SU3.eps}} \hspace{8pt} \subfloat[]{ \label{fig:KostasplotsNLO:d} \includegraphics[width=0.45\linewidth]{Kaon_Neutron_SU3.eps}} \hspace{8pt} \subfloat[]{ \label{fig:KostasplotsNLO:f} \includegraphics[width=0.45\linewidth]{Kaon_Xi_SU3.eps}} \caption{Plots of $\Gamma_{NLO}$ and $\Gamma_{NNLO}$ versus the Goldstone masses. The line at $\Gamma=1$ is the leading order curve, and dotted line is the physical meson mass. The innermost error bar is the statistical uncertainty and the outermost error bar is the statistical and systematic uncertainty added in quadrature. The inner and outer filled bands correspond to the statistical and systematic uncertainty, respectively, of the fits to the LECs at NLO and NNLO using $\pi^+\Sigma^+$, and~$\pi^+\Xi^0$ {\it only}, for the SU(3) case.} \label{fig:KostasNLONNLO} \end{figure} \section{SU(2) HB$\chi PT$ Extrapolation} \label{sec:su2CE} \noindent Given the poor convergence seen in the three-flavor chiral expansion due to the large loop corrections, it is natural to consider the two-flavor theory with the strange quark integrated out. In this way, $\pi\Sigma$ and $\pi\Xi$ may be analyzed in an expansion in $m_\pi$ with no fear of corrections that scale as powers of $m_K$. The detailed matching of LECs between the three- and two-flavor theories is described in detail in Ref.~\cite{Mai:2009ce}. 
We make use of the formulation of the $\pi\Sigma$ and $\pi\Xi$ T-matrices from~\cite{Mai:2009ce} to perform the two-flavor chiral extrapolations for $a_{\pi^+\Sigma^+}$, and $a_{\pi^+\Xi^0}$. As pointed out in Ref.~\cite{Mai:2009ce}, there are two representations of the pion-hyperon scattering lengths that are equivalent up to omitted higher orders in the chiral expansion; one contains a chiral logarithm, and the other is purely a polynomial in $m_\pi$. Using both forms provides a useful check on the systematics of the chiral extrapolation. \subsection{Scattering Length Formulas I} \noindent To $\mathcal{O}(m_\pi^3)$ in the two-flavor chiral expansion, $a_{\pi^+\Sigma^+}$ and $a_{\pi^+\Xi^0}$ are given by~\cite{Mai:2009ce} \begin{eqnarray} a_{\pi^+\Sigma^+}=\frac{1}{4 \pi}\frac{m_\Sigma}{m_\pi+m_\Sigma} \bigg[ -\frac{2m_\pi}{f_\pi^2} + \frac{2m_\pi^2}{f_\pi^2} {\mathrm{C}}_{\pi^+\Sigma^+} +\frac{m_\pi^3}{\pi^2 f_\pi^4}\log{\frac{m_\pi}{\mu}} + \frac{2m_\pi^3}{f_\pi^2}{h}_{\pi^+\Sigma^+}(\mu ) \bigg] \ ; \label{eq:apisigSU2} \end{eqnarray} \begin{eqnarray} a_{\pi^+\Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_\pi+m_\Xi} \bigg[ -\frac{m_\pi}{f_\pi^2} + \frac{m_\pi^2}{f_\pi^2}{\mathrm{C}}_{\pi^+\Xi^0} + \frac{m_\pi^3}{2\pi^2 f_\pi^4}\log{\frac{m_\pi}{\mu}} + \frac{m_\pi^3}{f_\pi^2}{h}_{\pi^+\Xi^0}(\mu ) \bigg] \label{eq:apixiSU2} \ , \end{eqnarray} where the explicit forms ---in terms of Lagrangian parameters--- of the LECs ${\mathrm{C}}_{\pi^+\Sigma^+}$, ${h}_{\pi^+\Sigma^+}$, ${\mathrm{C}}_{\pi^+\Xi^0}$ and ${h}_{\pi^+\Xi^0}$ are given in Ref.~\cite{Mai:2009ce}. As in the three flavor case, the mixed-action modification to the $SU(2)$ scattering length formula would begin with corrections to the $m_\pi^3 \ln (m_\pi)$ terms, with the mixed valence-sea pions having the known additive mass shift~\cite{Orginos:2007tw}. We again choose $\mu=\Lambda_\chi=4\pi f_\pi$ and evaluate $f_\pi$ at its lattice physical value. In analogy with the three-flavor case, we define \begin{eqnarray} \Gamma_{LO}\equiv 1 \ ; \label{eq:GammaLHSLO1su2} \end{eqnarray} \begin{eqnarray} \Gamma_{NLO}\equiv 1-C_{\pi^+ B} m_\pi \ ; \label{eq:GammaLHSNLO1su2} \end{eqnarray} \begin{eqnarray} \Gamma_{NNLO}\equiv 1-C_{\pi^+ B} m_\pi-h_{\pi^+ B}(\Lambda_{\chi}) m_\pi^2 \ , \label{eq:GammaLHSNNLO1su2} \end{eqnarray} where $B$ is either $\Sigma^+$ or $\Xi^0$. In Fig.~\ref{fig:KostasSU2} we give plots of $\Gamma_{NLO}$ and $\Gamma_{NNLO}$ versus the pion mass for the two-flavor case. Clearly the deviations of $\Gamma$ from unity are consistent with a perturbative expansion at both NLO and NNLO, showing that the loop corrections are much smaller at the scale $\Lambda_\chi$ than in the three-flavor case. All extracted LECs are of natural size and given in Table~\ref{tab:LECpisu2I}. The extrapolated $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths are given in Table~\ref{tab:scattLpisigsu2I}. The results are consistent with what was found in the three-flavor extrapolation. The NLO and NNLO LECs are highly correlated in the NNLO fit. Fig.~\ref{fig:errellipseK} shows the 68\% and 95\% confidence interval error ellipses in the $h$-${\mathrm{C}}$ plane for both ${\pi^+\Sigma^+}$ and ${\pi^+\Xi^0}$. Exploring the full 95\% confidence interval error ellipse in the $h$-${\mathrm{C}}$ plane yields \begin{eqnarray} a_{\pi^+\Sigma^+}&=& -0.197 \pm 0.017~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.098\pm 0.017~{\rm fm} \ . \label{eq:MP2} \end{eqnarray} These are the numbers that we quote as our best determinations of the pion-hyperon scattering lengths. 
\begin{figure}[ht] \centering \subfloat[]{ \label{fig:KostasSU2:a} \includegraphics[width=0.45\linewidth]{Pi_Sigma_SU2.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:KostasSU2:b} \includegraphics[width=0.45\linewidth]{Pi_Xi_SU2.eps}} \caption{$\Gamma_{NLO}$, $\Gamma_{NNLO}$ plots for the $\pi^+\Sigma^+$, and~$\pi^+\Xi^0$ processes versus the pion mass. The line at $\Gamma=1$ is the leading order curve, and the dotted line is the physical pion mass. The innermost error bar is the statistical uncertainty and the outermost error bar is the statistical and systematic uncertainty added in quadrature. The inner and outer filled bands correspond to the statistical and systematic uncertainty, respectively, of the fits to the LECs at NLO and NNLO using $\pi^+\Sigma^+$, and~$\pi^+\Xi^0$ for the SU(2) case.} \label{fig:KostasSU2} \end{figure} \begin{table} \begin{center} \vskip 0.2cm \resizebox{8cm}{!} { \begin{tabular}{ccc} \hline \hline & NLO fit & NNLO fit \\ \hline ${C}_{\pi^+\Sigma^+}$ & 0.66(04)(11) GeV$^{-1}$ & 1.98(17)(24) GeV$^{-1}$ \\ ${C}_{\pi^+\Xi^0}$ & 0.69(06)(22) GeV$^{-1}$ & 2.01(24)(68) GeV$^{-1}$ \\ \hline $h_{\pi^+\Sigma^+}$ & - & -0.65(36)(40) GeV$^{-2}$ \\ $h_{\pi^+\Xi^0}$ & - & -0.6(0.5)(1.1) GeV$^{-2}$ \\ \hline \hline \end{tabular} } \caption{$SU(2)$ LECs fit from each process at NLO and at NNLO. The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature.} \label{tab:LECpisu2I} \end{center} \end{table} \begin{table} \begin{center} \vskip 0.2cm \resizebox{12cm}{!} { \begin{tabular}{ccccc} \hline \hline Quantity & LO (fm) & NLO (fm) & NLO (NNLO fit) (fm) & NNLO (fm) \\ \hline $a_{\pi\Sigma}$ & -0.2294 & -0.208(01)(03) & -0.166(05)(08) & -0.197(06)(08) \\ $a_{\pi\Xi}$ & -0.1158 & -0.105(01)(04) & -0.083(04)(11) & -0.098(05)(12) \\ \hline \hline \end{tabular} } \caption{$SU(2)$ extrapolated scattering lengths using the LECs from Table~\ref{tab:LECpisu2I}. The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature.} \label{tab:scattLpisigsu2I} \end{center} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.49\linewidth]{ell_Pi_Sigma_SU2.eps}\hfill \includegraphics[width=0.49\linewidth]{ell_Pi_Xi_SU2.eps}\hfill \caption{The 68\% (light) and 95\% (dark) confidence interval error ellipses for fits for the $\pi^+\Sigma^+$ (left), and~$\pi^+\Xi^0$ (right) processes using Eqs.~\protect(\ref{eq:apisigSU2}) and \protect(\ref{eq:apixiSU2}).} \label{fig:errellipseK} \end{figure} \subsection{Scattering Length Formulas II} \noindent Ref.~\cite{Mai:2009ce} makes the interesting observation that replacing $f_\pi$ with its chiral limit value, $f$, yields \begin{eqnarray} a_{\pi^+\Sigma^+}=\frac{1}{2 \pi}\frac{m_\Sigma}{m_\pi+m_\Sigma} \bigg[ -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2} {\mathrm{C}}_{\pi^+\Sigma^+} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Sigma^+} \bigg], \qquad h'_{\pi^+\Sigma^+}=\frac{4}{f^2}\ell_4^r+ h_{\pi^+\Sigma^+} \ ; \label{eq:apisig2param} \end{eqnarray} \begin{eqnarray} a_{\pi^+\Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_\pi+m_\Xi} \bigg[ -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2}{\mathrm{C}}_{\pi^+\Xi^0} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Xi^0} \bigg],\qquad h'_{\pi^+\Xi^0}=\frac{4}{f^2}\ell_4^r + h_{\pi^+\Xi^0} \ , \label{eq:apixi2param} \end{eqnarray} where $\ell_4^r$ is the LEC which governs the pion mass dependence of $f_\pi$~\cite{Colangelo:2001df}. 
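For orientation, the polynomial form is straightforward to evaluate numerically. The sketch below (not part of the fit) inserts the physical masses, the value $f=122.9$~MeV adopted below, and the central NNLO LECs quoted below in Table~\ref{tab:LECpisu2II} into Eq.~(\ref{eq:apisig2param}); it reproduces the extrapolated $\pi^+\Sigma^+$ scattering length to within the rounding of the LECs, and Eq.~(\ref{eq:apixi2param}) is evaluated analogously with a prefactor $1/(4\pi)$.
\begin{verbatim}
import math
HBARC = 0.19733                      # GeV fm

def a_pi_sigma(m_pi, m_S, f, C, hp):
    """Eq. (apisig2param): polynomial form of the pi+ Sigma+ scattering length."""
    bracket = (-m_pi + C * m_pi**2 + hp * m_pi**3) / f**2      # in GeV^-1
    return m_S / (m_pi + m_S) / (2.0 * math.pi) * bracket * HBARC   # in fm

# Physical point, f = 122.9 MeV, central LECs C ~ 1.90 GeV^-1, h' ~ -1.33 GeV^-2:
print(a_pi_sigma(m_pi=0.13957, m_S=1.192, f=0.1229, C=1.90, hp=-1.33))
# ~ -0.198 fm, cf. the quoted a_{pi+ Sigma+} = -0.197 fm
\end{verbatim}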
Note that the chiral logs have canceled, and in this form, valid to order $m_\pi^3$ in the chiral expansion, the scattering lengths have a simple polynomial dependence on $m_\pi$. Taking the standard value $f=122.9$ MeV~\cite{Colangelo:2001df,Mai:2009ce} and refitting the LECs yields the results tabulated in Table~\ref{tab:LECpisu2II}. The extrapolated $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths are given in Table~\ref{tab:scattLpisigsu2II}. These results are clearly consistent with what was found in the two-flavor extrapolation with the chiral logarithm explicit. Fig.~\ref{fig:errellipseU} shows the 68\% and 95\% confidence interval error ellipses in the $h$-${\mathrm{C}}$ plane for both ${\pi^+\Sigma^+}$ and ${\pi^+\Xi^0}$. Exploring the full 95\% confidence interval error ellipse in the $h$-${\mathrm{C}}$ plane yields \begin{eqnarray} a_{\pi^+\Sigma^+}&=& -0.197 \pm 0.011~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.102 \pm 0.004~{\rm fm} \ . \label{eq:MPU} \end{eqnarray} Comparison of these determinations with those of Eq.~(\ref{eq:MP2}) give an estimate of the systematic error due to truncation of the chiral expansion at order $m_\pi^3$. We have also ``pruned'' the data; that is, we have redone all fits omitting the heaviest mass ensemble. While this procedure inflates the errors, we see very little shift in the central values. \begin{table} \begin{center} \vskip 0.2cm \resizebox{8cm}{!} { \begin{tabular}{ccc} \hline \hline & NLO fit & NNLO fit \\ \hline $C_{\pi^+\Sigma^+}$ & 1.28(09)(11) GeV$^{-1}$ & 1.90(10)(17) GeV$^{-1}$ \\ $C_{\pi^+\Xi^0}$ & 1.84(23)(25) GeV$^{-1}$ & 1.93(12)(48) GeV$^{-1}$ \\ \hline $h^{'}_{\pi^+\Sigma^+}$ & - & -1.33(21)(26) GeV$^{-2}$ \\ $h^{'}_{\pi^+\Xi^0}$ & - & -1.36(27)(75) GeV$^{-2}$ \\ \hline \hline \end{tabular} } \caption{$SU(2)$ LECs fit from each process at NLO and at NNLO. The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature.} \label{tab:LECpisu2II} \end{center} \end{table} \begin{table} \begin{center} \vskip 0.2cm \resizebox{12cm}{!} { \begin{tabular}{ccccc} \hline \hline Quantity & LO (fm) & NLO (fm) & NLO (NNLO fit) (fm) & NNLO (fm) \\ \hline $a_{\pi\Sigma}$ & -0.2294 & -0.212(03)(04) & -0.190(04)(06) & -0.197(04)(09) \\ $a_{\pi\Xi}$ & -0.1158 & -0.106(04)(05) & -0.095(02)(09) & -0.102(02)(09) \\ \hline \hline \end{tabular} } \caption{$SU(2)$ extrapolated scattering lengths using the LECs from Table~\ref{tab:LECpisu2II}. 
The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature.} \label{tab:scattLpisigsu2II} \end{center} \end{table} \begin{figure}[!h] \centering \includegraphics[width=0.49\linewidth]{ellX_Pi_Sigma_SU2.eps}\hfill \includegraphics[width=0.49\linewidth]{ellX_Pi_Xi_SU2.eps}\hfill \caption{The 68\% (light) and 95\% (dark) confidence interval error ellipses for fits for the $\pi^+\Sigma^+$ (left), and~$\pi^+\Xi^0$ (right) processes using Eqs.~\protect(\ref{eq:apisig2param}) and \protect(\ref{eq:apixi2param}).} \label{fig:errellipseU} \end{figure} In order to plot the scattering length versus $m_\pi$, we define \begin{eqnarray} \overline{a}_{\pi^+\Sigma^+}=a_{\pi^+\Sigma^+}\left(\frac{m_\pi+m_\Sigma}{m_\Sigma} \right) =\frac{1}{2\pi}\left( -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2} {\mathrm{C}}_{\pi^+\Sigma^+} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Sigma^+} \right) \ ; \label{eq:abarpisigSU2} \end{eqnarray} \begin{eqnarray} \overline{a}_{\pi^+\Xi^0}=a_{\pi^+\Xi^0}\left(\frac{m_\pi+m_\Xi}{m_\Xi} \right)=\frac{1}{4\pi}\left( -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2}{\mathrm{C}}_{\pi^+\Xi^0} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Xi^0} \right) \ . \label{eq:abarpixiSU2} \end{eqnarray} In Fig.~\ref{fig:aSU2} we plot the scattering lengths versus the pion mass. The shaded bands in these plots correspond to the standard error in the determination of the LECs, as given in Table~\ref{tab:LECpisu2II}. Additional systematic errors arising from the specific lattice formulation that we employ are discussed in detail in Ref.~\cite{Beane:2007xs}, and are expected to be well encompassed by our error bars. As discussed in section~\ref{sec:finvol}, there is a systematic error in extracting the scattering length from the phase shift. We find that range corrections affect the scattering length at the 5\% level for $\pi^+\Sigma^+$, and at the 1\% level for $\pi^+\Xi^0$. Finally, we reiterate that there are unquantified systematic errors due to finite-volume and lattice-spacing effects, however, these errors are likely encompassed by our quoted errors. \begin{figure}[ht] \centering \subfloat[]{ \label{fig:aSU2:a} \includegraphics[width=0.45\linewidth]{a_Pi_Sigma_SU2.eps}} \hspace{8pt} \vspace{10pt} \subfloat[]{ \label{fig:aSU2:b} \includegraphics[width=0.45\linewidth]{a_Pi_Xi_SU2.eps}} \caption{$\overline{a}$ plots for the $\pi^+\Sigma^+$, and~$\pi^+\Xi^0$ processes versus the pion mass. The diagonal line is the leading order curve, and the dotted line is the physical pion mass. The innermost error bar is the statistical uncertainty and the outermost error bar is the statistical and systematic uncertainty added in quadrature. The filled bands are the fits to the LECs in the SU(2) case at NNLO as in Eqs.~(\ref{eq:abarpisigSU2}), and~(\ref{eq:abarpixiSU2}).} \label{fig:aSU2} \end{figure} \section{Conclusions} \label{sec:conc} \noindent In this paper we have presented the first fully-dynamical lattice QCD calculation of meson-baryon scattering. While the phenomenologically most-interesting case of pion-nucleon scattering involves annihilation diagrams, and therefore, requires more resources than we currently have available, we have calculated the ground-state energies of $\pi^+\Sigma^+$, $\pi^+\Xi^0$, $K^+p$, $K^+n$, and $\overline{K}{}^0 \Xi^0$, which involve no annihilation diagrams. 
An analysis of the scattering lengths of these two-body systems using HB$\chi$PT has led us to conclude that the three-flavor chiral expansion does not converge over the range of light quark masses that we investigate. While the kaon-baryon scattering lengths appear perturbative at NLO, a comparison of NNLO with NLO calls into question the convergence of the three-flavor chiral expansion. Therefore, we do not quote values for the kaon-baryon scattering lengths at the physical point. On the other hand, the $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths appear to have a well-controlled chiral expansion in two-flavor HB$\chi$PT. Our results, $a_{\pi^+\Sigma^+}=-0.197\pm0.017$ fm, and $a_{\pi^+\Xi^0}=-0.098\pm0.017$ fm, deviate from the LO (current algebra) predictions at the one- and two-sigma level, respectively. We look forward to confirmation of these predictions from other lattice QCD calculations and possibly from future experiments. The HB$\chi$PT analyses performed in this work support a general observation about convergence in the three-flavor chiral expansion, at least for the processes studied here. As the pion masses considered in this lattice calculation are comparable to the physical kaon mass, the distinct convergence patterns of the two- and three-flavor chiral expansions found in this work are suggestive that the breakdown in the three-flavor case is not due to the relative largeness of the strange-quark mass as compared to the light quark masses, but rather due to some other enhancement in the coefficients of the loop contributions, possibly related to a scaling with powers of $n_f$, the number of flavors. While in this paper we have not considered the lowest-lying baryon decuplet, one interesting process for future study is the $\pi^-\Omega^-$ system. It does not involve disconnected diagrams since the pions have no valence quarks with the same flavor as the $\Omega^-$ constituents. It has been argued that there is a bound state~\cite{Wang:2006jg} in this channel, and therefore, it would be of interest to determine whether this state appears bound on the lattice at the available quark masses. \section{Acknowledgments} \noindent We thank U.G.~Mei\ss ner for useful discussions, and R.~Edwards and B.~Joo for help with the QDP++/Chroma programming environment~\cite{Edwards:2004sx} with which the calculations discussed here were performed. We gratefully acknowledge the computational time provided by NERSC (Office of Science of the U.S. Department of Energy, No. DE-AC02-05CH11231), the Institute for Nuclear Theory, Centro Nacional de Supercomputaci\'on (Barcelona, Spain), Lawrence Livermore National Laboratory, and the National Science Foundation through Teragrid resources provided by the National Center for Supercomputing Applications and the Texas Advanced Computing Center. Computational support at Thomas Jefferson National Accelerator Facility and Fermi National Accelerator Laboratory was provided by the USQCD collaboration under {\it The Secret Life of a Quark}, a U.S. Department of Energy SciDAC project ({\tt http://www.scidac.gov/physics/quarks.html}). The work of MJS was supported in part by the U.S.~Dept.~of Energy under Grant No.~DE-FG03-97ER4014. The work of KO and WD was supported in part by the U.S.~Dept.~of Energy contract No.~DE-AC05-06OR23177 (JSA) and DOE grant DE-FG02-04ER41302. KO and AWL were supported in part by the Jeffress Memorial Trust, grant J-813 and DOE OJI grant DE-FG02-07ER41527. 
The work of SRB and AT was supported in part by the National Science Foundation CAREER grant No. PHY-0645570. Part of this work was performed under the auspices of the US DOE by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48. The work of AP was partly supported by the EU contract FLAVIAnet MRTN-CT-2006-035482, by the contract FIS2008-01661 from MEC (Spain) and FEDER and by the Generalitat de Catalunya contract 2005SGR-00343.
\section{Introduction} Stochastic resetting is ubiquitous in nature \cite{evans2020stochastic}. Most people have had this experience: when going to work in the morning, one searches for the keys before going out, and after an unsuccessful search it is natural to return to the starting point of the search and try again. The motion of foraging animals can also be modeled as resetting random walks \cite{RevModPhys.83.81,PhysRevLett.112.240601,JPA2022.55.274005,PhysRevLett.128.148301}. Indeed, animals tend to go back to some fixed location (e.g., to their nest) when searching for food. Other examples are realized in computer simulations, in which random restarts are known to optimize search algorithms \cite{Luby1993,PhysRevLett.88.178701}, and in biology, e.g., to describe catastrophes in population dynamics \cite{Biophys.J.2010.98.1099}. Since the seminal work by Evans and Majumdar \cite{evans2011diffusion}, stochastic resetting has received growing attention in the last decade (see \cite{evans2020stochastic,Gupta2022Review} for two recent reviews). A paradigmatic example in statistical physics is resetting Brownian motion, where a diffusing particle is reset to its starting point at random times with a constant rate. A finite resetting rate leads to a nonequilibrium stationary state with non-Gaussian fluctuations of the particle position. The mean time to reach a given target for the first time can become finite and can be minimized with respect to the resetting rate \cite{evans2011diffusion}. Reuveni first made the universal observation that the relative standard deviation of the first-passage time of an optimally restarted process is always unity \cite{PhysRevLett.116.170601}. Pal and Reuveni further established a criterion under which restart can expedite the completion of a stochastic process \cite{pal2017first}. Interestingly, such a criterion can be understood through the so-called ``inspection paradox'' of probability theory \cite{JPA2022.55.021001}. Chechkin and Sokolov addressed random search via a renewal approach and showed that resetting is always beneficial if the probability of finding a target in the absence of resetting decays more slowly than exponentially \cite{chechkin2018random}. These nontrivial findings have triggered enormous recent activity in the field, including searching \cite{evans2011diffusion2,PhysRevLett.113.220602,ahmad2019first,NJP2016.18.033006,pal2016diffusion,PhysRevE.93.060102,PhysRevE.96.012126,evans2014diffusion,evans2018run,kumar2020active,PhysRevE.103.022135,de2020optimization,lauber2021first,JPA2021.54.404004,JPA2022.55.275002,PhysRevE.103.032107,arXiv:2202.04906,JPA2022.55.234001,PhysRevE.105.034109,huang2021random,PhysRevLett.128.200603}, fluctuating interfaces \cite{gupta2014fluctuating}, stochastic thermodynamics \cite{fuchs2016stochastic,pal2017integral,gupta2020work}, chemical and biological processes \cite{reuveni2014role,rotbart2015michaelis,PhysRevLett.128.148301}, large deviations \cite{meylahn2015large}, extremal statistics \cite{PhysRevE.103.052119,JStatMech2022.063202,JPA2022.55.034002}, optimal control theory \cite{arXiv:2112.11416}, and single-particle experiments \cite{tal2020experimental,besga2020optimal}. Entropy is a fundamental concept in statistical physics. In the realm of complex networks, entropy has been introduced to measure the complexity of networks.
The entropy of network ensembles quantifies the number of graphs with given structural features such as the degree distribution, degree correlations, or community structure \cite{Bianconi_EPL2028,PhysRevE.79.036114,PhysRevE.80.045102,PhysRevE.82.011116}. The principle of maximum entropy has been used to construct exponential random graphs under different soft constraints \cite{PhysRevE.70.066117,NJP2011.13.083001,PhysRevLett.114.158701,PhysRevLett.115.268701,PhysRevE.87.062806,PhysRevE.93.062311,NatRevPhys2019.1.58}. Entropy measures have been shown to be very useful for inference problems defined on networks \cite{PhysRevLett.120.198301}, and they have been successfully applied to the problem of assessing the significance of features of network structure \cite{Bianconi_PNAS2009}. On the other hand, particular attention has been paid to the entropy rate of random walks on complex networks. The entropy rate characterizes the mixing properties of a stochastic process. In this context, an important issue is how to maximize the entropy rate so as to design diffusion processes aiming at a well-mixed state. Burda \textit{et al.} proposed the maximum entropy random walk (MERW), in which all trajectories between two given points are equiprobable \cite{PhysRevLett.102.160602}. They showed that the MERW indeed maximizes the entropy of trajectories, in contrast to the standard random walk (SRW), which has a smaller entropy. The maximum entropy rate is precisely the topological entropy of the network \cite{Parry1964,Demetrius2005,PhysRevE.83.046117}. However, the MERW requires the walker to have global knowledge of the network. G\'omez-Garde\~nes and Latora considered a degree-biased random walk and found that the entropy rate shows a unique maximum as the degree-bias parameter varies \cite{PhysRevE.78.065102}. Sinatra \textit{et al.} constructed random-walk dynamics using only the degrees of the first and second neighbors of the current node of the walker \cite{PhysRevE.83.030103}, showing that almost maximal-entropy random walks can indeed be obtained with limited, local knowledge of the network. Zhao \textit{et al.} computed the entropy rate of various growing network models \cite{PhysRevE.84.066113} and showed that the entropy rate changes its scaling with the system size when the model undergoes a phase transition. In the present work, we address the question of how stochastic resetting affects the entropy rate of a Markov process and explore whether resetting induces novel effects on the entropy rate. We mainly focus on random walks on complex networks subject to stochastic resetting to a given node with a constant probability. We compute the entropy rate of the resetting random walks on diverse networks, including regular random networks, Cayley trees, and degree-heterogeneous networks. We find that the entropy rate can be maximized at an intermediate value of the resetting probability. In particular, the maximum entropy rate can be larger than that of the MERW on the same network. \section{Entropy rate of Markovian processes under resetting} We consider discrete-time Markovian processes defined on a finite state space and encoded by an $N \times N$ Markov matrix $\bm{W}$, whose entry $W_{ij}$ gives the transition probability from the $i$th state to the $j$th state.
A trajectory $\omega_t$ denotes the sequence of states that the system has visited during the past $t$ time steps, i.e., $X_0 \to X_1 \to \dots \to X_{t-1} \to X_t$, where $X_i \in \left\{ {1,\cdots, N} \right\} $ denotes the state of the system at time $i$. The probability of the trajectory is given by \begin{eqnarray}\label{eq1} P\left[ {{\omega _t}} \right] = {P}\left( {{X_0}} \right)\prod\limits_{i = 1}^t {{W_{{X_{i - 1}}{X_i}}}} , \end{eqnarray} where $P(X_0)$ denotes the probability that the system starts from the state $X_0$ at $t=0$, and $W_{{X_{i-1}}{X_i}}$ denotes the transition probability from the state $X_{i-1}$ to the state $X_i$. The trajectory entropy is defined as \begin{eqnarray}\label{eq2} {H_t} = - \sum\limits_{{\omega _t}} {P\left[ {{\omega _t}} \right]\ln P\left[ {{\omega _t}} \right]} , \end{eqnarray} where the summation runs over all possible trajectories of length $t$. Substituting Eq.\ref{eq1} into Eq.\ref{eq2}, we have \begin{eqnarray}\label{eq3} {H_t} =&& - \sum\limits_{{X_0}} {P}\left( {{X_0}} \right)\ln {P}\left( {{X_0}} \right) \nonumber \\ && - t \sum\limits_{{X_0},{X_1}} {{P}\left( {{X_0}} \right){W_{{X_0}{X_1}}}\ln {W_{{X_0}{X_1}}}} , \end{eqnarray} where we have used the property of the Markov matrix $\sum\nolimits_j {{W_{ij}} = 1} $. For $t \gg 1$, the first term in Eq.\ref{eq3} can be ignored, and $P(X_0)$ can be replaced by the stationary distribution $P_s(X_0)$. Therefore, the entropy rate reads \cite{RevModPhys.85.1115} \begin{eqnarray}\label{eq4} h = \mathop {\lim }\limits_{t \to \infty } \frac{{{H_t}}}{t} = -\sum\limits_{{X_0},{X_1}} {{P_s}\left( {{X_0}} \right){W_{{X_0}{X_1}}}\ln {W_{{X_0}{X_1}}}}. \end{eqnarray} It is known that the stationary distribution $P_s(X_0)$ is given by the normalized left eigenvector of the transition matrix $\bm{W}$ corresponding to the unit eigenvalue. We now consider Markovian processes in the presence of resetting. At each time step, the system either hops from one state to another according to the transition matrix $\bm{W}$, with probability $1-\gamma$, or is reset to a given state $X_r$, with the complementary probability $\gamma$, so that the transition probability from state $X_0$ to state $X_1$ is \begin{eqnarray}\label{eq4.1} W^{R}_{X_0 X_1}=(1-\gamma) W_{X_0 X_1} + \gamma \delta_{X_1,X_r}. \end{eqnarray} Let us denote by $P_r(X_t,t|X_0)$ the probability that the system visits the state $X_t$ at time $t$, given that it started from the state $X_0$ at $t=0$. $P_r(X_t,t|X_0)$ can be connected to the occupation probability $P_0(X_t,t|X_0)$ without resetting via a first renewal equation \cite{pal2016diffusion,ahmad2019first,chechkin2018random,Chaos2021_31.093135,arXiv.2111.01330}, \begin{eqnarray}\label{eq5} {P_r}\left( {{X_t},t|{X_0}} \right) &=& {\left( {1 - \gamma } \right)^t}{P_0}\left( {{X_t},t|{X_0}} \right) \nonumber \\ &+& \sum\limits_{t'=1}^{t} {{{\left( {1 - \gamma } \right)}^{t' - 1}}\gamma {P_r}\left( {{X_t},t - t'|{X_r}} \right)} . \end{eqnarray} The first term in Eq.\ref{eq5} accounts for realizations in which the system is never reset up to time $t$, which occur with probability ${\left( {1 - \gamma } \right)^t}$; the second term accounts for realizations in which the system is reset for the first time at time $t'$, with probability ${{{\left( {1 - \gamma } \right)}^{t' - 1}}\gamma }$, after which the process starts anew from the resetting state for the remaining time $t-t'$.
$P_0(X_t,t|X_0)$ is given by \begin{eqnarray}\label{eq6} {P_0}\left( {{X_t},t|{X_0}} \right) = {\left( {{\bm{W}^t}} \right)_{{X_0}{X_t}}} . \end{eqnarray} Taking the Laplace transform of Eq.\ref{eq5}, $\tilde{f}(s)=\sum_{t=0}^{\infty} f(t) e^{-st}$, yields \begin{eqnarray}\label{eq7} {{\tilde P}_r}\left( {{X_t},s|{X_0}} \right) &=& {{\tilde P}_0}\left( {{X_t},s'|{X_0}} \right)\nonumber \\ &+& \frac{{\gamma {e^{ - s}}}}{{1 - \left( {1 - \gamma } \right){e^{ - s}}}}{{\tilde P}_r}\left( {{X_t},s|{X_r}} \right), \end{eqnarray} where $s'=s-\ln(1-\gamma)$. Setting $X_0=X_r$ in Eq.\ref{eq7}, we have \begin{eqnarray}\label{eq8} {{\tilde P}_r}\left( {{X_t},s|{X_r}} \right) = \frac{{1 - \left( {1 - \gamma } \right){e^{ - s}}}}{{1 - {e^{ - s}}}}{{\tilde P}_0}\left( {{X_t},s'|{X_r}} \right). \end{eqnarray} Substituting Eq.\ref{eq8} into Eq.\ref{eq7}, we obtain \begin{eqnarray}\label{eq9} {{\tilde P}_r}\left( {{X_t},s|{X_0}} \right) = {{\tilde P}_0}\left( {{X_t},s'|{X_0}} \right) + \frac{{\gamma {e^{ - s}}}}{{1 - {e^{ - s}}}}{{\tilde P}_0}\left( {{X_t},s'|{X_r}} \right). \nonumber \\ \end{eqnarray} If the resetting state coincides with the initial state, $X_r=X_0$, Eq.\ref{eq9} simplifies to \begin{eqnarray}\label{eq10} {{\tilde P}_r}\left( {{X_t},s|{X_0}} \right) = \frac{{1 - \left( {1 - \gamma } \right){e^{ - s}}}}{{1 - {e^{ - s}}}}{{\tilde P}_0}\left( {{X_t},s'|{X_0}} \right), \end{eqnarray} where ${{\tilde P}_0}\left( {{X_t},s'|{X_0}} \right)$ can be calculated from Eq.\ref{eq6}, \begin{eqnarray}\label{eq11} {{\tilde P}_0}\left( {{X_t},s'|{X_0}} \right) = \left[ {\bm{I} - \left( {1 - \gamma } \right){e^{ - s}}\bm{W}} \right]_{{X_0}{X_t}}^{ - 1}. \end{eqnarray} The stationary occupation probability can be obtained by taking the limit \begin{eqnarray}\label{eq12} {P_s}\left( X \right) &=& \mathop {\lim }\limits_{s \to 0} \left( {1 - {e^{ - s}}} \right){{\tilde P}_r}\left( {X,s|{X_0}} \right) \nonumber \\& =& \gamma \left[ {\bm{I} - \left( {1 - \gamma } \right)\bm{W}} \right]_{{X_r}X}^{ - 1}. \end{eqnarray} Supposing that the transition matrix $\bm{W}$ can be eigen-decomposed, one has \begin{eqnarray}\label{eq13} W_{ij} = \sum\limits_{\ell = 1}^N {\lambda_\ell \langle i | {{\phi _\ell}} \rangle } \langle {{{\bar \phi }_\ell}} | j\rangle , \end{eqnarray} where $\lambda_\ell$ is the $\ell$th eigenvalue of the transition matrix $\bm{W}$, and the corresponding left and right eigenvectors are $\langle {{\bar \phi }_\ell}|$ and $| {\phi_\ell}\rangle$, respectively, satisfying $\langle {{{\bar \phi }_\ell}} | {{\phi _m}} \rangle = {\delta _{\ell m}}$ and $\sum_{\ell=1}^{N} |\phi_{\ell} \rangle \langle {\bar \phi}_{\ell}|=\bm{I}$. $\left| i \right\rangle $ denotes the canonical basis vector whose components are all equal to 0 except the $i$th one, which equals 1. Since $\bm{W}$ is a stochastic matrix, with each row summing to one, its maximal eigenvalue is equal to one. Without loss of generality, we let $\lambda_1=1$ and assume that the other eigenvalues have modulus less than one. The right eigenvector corresponding to $\lambda_1=1$ is simply given by $| {{\phi _1}} \rangle = {\left( {1,1, \ldots ,1} \right)^\top}$, and the corresponding left eigenvector $\langle {\bar \phi _1} |$ gives the stationary occupation probability in the absence of resetting.
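Equation~(\ref{eq12}) is straightforward to check numerically. The following sketch (with an arbitrary four-state transition matrix chosen purely for illustration, not part of the derivation) compares the resolvent formula with a long iteration of the resetting dynamics defined by Eq.~(\ref{eq4.1}).
\begin{verbatim}
# Numerical check of Eq. (12): P_s = gamma * e_r^T [I - (1-gamma) W]^{-1},
# against power iteration of the resetting transition matrix of Eq. (4.1).
# The 4-state matrix W below is an arbitrary example.
import numpy as np

W = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.3, 0.0, 0.3, 0.4],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])      # row-stochastic
gamma, r = 0.2, 0                         # resetting probability and state X_r

N = W.shape[0]
P_s = gamma * np.linalg.inv(np.eye(N) - (1.0 - gamma) * W)[r]   # Eq. (12)

WR = (1.0 - gamma) * W + gamma * np.outer(np.ones(N), np.eye(N)[r])  # Eq. (4.1)
pi = np.full(N, 1.0 / N)
for _ in range(5000):                     # left power iteration
    pi = pi @ WR

print(P_s, P_s.sum())                     # normalized to 1
print(np.allclose(P_s, pi))               # True
\end{verbatim}
Since the rows of $\bm{W}$ sum to one, the rows of the resolvent sum to $1/\gamma$, so the distribution returned by Eq.~(\ref{eq12}) is automatically normalized.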
According to Eq.(\ref{eq13}), the stationary occupation probability in the presence of resetting can be rewritten as \begin{eqnarray}\label{eq14} P_s ( X) = \langle {{{\bar \phi }_1}} | X \rangle + \gamma \sum\limits_{\ell = 2}^N {\frac{{\langle X_r | {{\phi _\ell}} \rangle \langle {{{\bar \phi }_\ell}} | X \rangle }}{{1 - \lambda _\ell^{}\left( {1 - \gamma } \right)}}} , \end{eqnarray} where the first term is the stationary occupation probability in the absence of resetting, and the second term is a nonequilibrium contribution due to the resetting process. Finally, substituting Eq.(\ref{eq4.1}) and Eq.(\ref{eq14}) into Eq.(\ref{eq4}), we can compute the entropy rate for a resetting Markov process. \section{Entropy rate of resetting random walks} As a concrete example, we consider resetting random walks (RRW) on an undirected and unweighted network. The dynamics is defined as follows \cite{PhysRevE.101.062147,PhysRevE.103.062126,Chaos2021_31.093135,JSM2022.053201}. At each time step, the walker either performs a standard random-walk step to a neighboring node, with probability $1-\gamma$, or is reset to a given node $r$, with the complementary probability $\gamma$. For $\gamma=0$, the model reduces to the standard random walk (SRW), whose transition matrix is $\bm{W}=\bm{D}^{-1}\bm{A}$, where $\bm{A}$ is the adjacency matrix of the underlying network, with entries $A_{ij}=1$ if nodes $i$ and $j$ are connected and zero otherwise, and $\bm{D}={\rm{diag}} \left\{ k_1,\cdots, k_N \right\}$ is a diagonal matrix with $k_i=\sum_{j=1}^{N} A_{ij}$ the degree of node $i$. For the SRW, it is known that $P_s(i)=k_{i}/(\langle k \rangle N)$ \cite{masuda2017random,PhysRevLett.92.118701,PhysRevE.87.012112}, where $\langle k \rangle$ is the average degree of the network. The RRW on networks has many applications in computer science and physics. Label propagation in machine learning \cite{Bautista2019} and the famous PageRank \cite{Pagerank1998} can be interpreted as random walks with a uniform resetting probability to all the nodes of the network. Based on hitting times under resetting, a recent study has proposed an application to network centrality \cite{avrachenkov2018hitting}. \begin{figure} \centerline{\includegraphics*[width=1.0\columnwidth]{rrn.pdf}} \caption{Entropy rate $h^{RRW}$ of resetting random walks as a function of the resetting probability $\gamma$ on three different regular random networks in which each node is randomly connected to exactly $k$ neighbors. The inset shows an illustration of a regular random network with size $N=50$ and degree $k=3$. The symbols indicate the maximum entropy rate $h_{\max}^{RRW}=\ln(k+1)$ that occurs at $\gamma_{{\rm{opt}}}=\frac{1}{k+1}$. The horizontal line indicates the value of the entropy rate for the SRW or MERW with $k=3$, $h^{SRW}=h^{MERW}=\ln k$. \label{fig1}} \end{figure} From Eq.\ref{eq4}, one obtains the entropy rate of the SRW, \begin{eqnarray}\label{eq2.2} {h^{SRW}} = \frac{{\langle {k\ln k} \rangle }}{{\langle k \rangle }}. \end{eqnarray} For the RRW, one finds \begin{eqnarray}\label{eq2.3} {h^{RRW}} = - \sum\limits_{i=1}^N {{P_s}(i)\left( {1 - \gamma } \right)\ln \frac{{1 - \gamma }}{{{k_i}}}} - \gamma \ln \gamma . \end{eqnarray} To obtain $h^{RRW}$, one needs the stationary occupation distribution $P_s(i)$ from Eq.(\ref{eq14}), where the spectrum of the transition matrix $\bm{W}$ can be expressed through the spectrum of the adjacency matrix $\bm{A}$.
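Before turning to that spectral representation, note that Eqs.~(\ref{eq2.3}) and (\ref{eq12}) can be evaluated directly for any given network. The sketch below (using a small ring lattice, i.e., a $k=2$ regular graph, as an arbitrary example; it is an illustration, not part of the analysis) scans the resetting probability and locates the maximum of $h^{RRW}$ numerically, consistent with the closed-form maximum $\ln(k+1)$ at $\gamma=1/(k+1)$ derived below for regular networks.
\begin{verbatim}
# Sketch: entropy rate of resetting random walks via Eqs. (2.3) and (12)
# on a small k-regular graph (a ring of 10 nodes, k = 2; arbitrary choice).
import numpy as np

N = 10
A = np.zeros((N, N))
for i in range(N):                        # ring lattice: every node has degree 2
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
deg = A.sum(axis=1)
W = A / deg[:, None]                      # standard random walk, W = D^{-1} A
r = 0                                     # resetting node

def h_RRW(gamma):
    P_s = gamma * np.linalg.inv(np.eye(N) - (1.0 - gamma) * W)[r]   # Eq. (12)
    return (-np.sum(P_s * (1.0 - gamma) * np.log((1.0 - gamma) / deg))
            - gamma * np.log(gamma))                                 # Eq. (2.3)

gammas = np.linspace(1e-4, 1.0 - 1e-4, 2000)
h = np.array([h_RRW(g) for g in gammas])
print(gammas[np.argmax(h)], h.max())      # close to 1/(k+1) = 1/3 and ln 3
\end{verbatim}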
Explicitly, let $\Lambda_\ell$ be the $\ell$th eigenvalue of $\bm{A}$ and $|\psi_\ell \rangle$ the associated eigenvector; then \begin{eqnarray} {\lambda _\ell} = \frac{{{\Lambda _\ell}}}{{{\Lambda _1}}}, \, \langle {i| {{\phi _\ell}} \rangle } = \frac{{\langle {i| {{\psi _\ell}} \rangle } }}{{\langle {i| {{\psi _1}} \rangle } }}, \, \langle {{{\bar \phi }_\ell}} | j \rangle = \langle {{\psi _\ell}} | j \rangle \langle {{\psi _1}} | 1 \rangle, \end{eqnarray} for $\ell=1,\cdots,N$. $\Lambda _1 \geq \langle k \rangle$ is the largest eigenvalue of $\bm{A}$. \begin{figure} \centerline{\includegraphics*[width=1.0\columnwidth]{cayley.pdf}} \caption{Entropy rate $h^{RRW}$ of the resetting random walks as a function of the resetting probability $\gamma$ on a Cayley tree $C_{3,5}$ (see the inset). The symbols indicate the maximum entropy rate $h_{\max}^{RRW}$. The upper and lower horizontal lines indicate the values of the entropy rate for the MERW and SRW, $h^{MERW}$ and $h^{SRW}$, respectively. Different lines correspond to placing the resetting node at different shells: $n=0$ to $n=5$ from bottom to top. \label{fig2}} \end{figure} In particular, for regular random networks, where each node is randomly connected to exactly $k$ neighbors (see the inset of Fig.\ref{fig1} for an illustration), Eq.(\ref{eq2.3}) simplifies further to \begin{eqnarray}\label{eq2.4} {h^{RRW}} = - \left( {1 - \gamma } \right)\ln \frac{{1 - \gamma }}{k} - \gamma \ln \gamma . \end{eqnarray} In Fig.\ref{fig1}, we show $h^{RRW}$ as a function of the resetting probability $\gamma$ on three different $k$-regular random networks: $k=3$, 4, and 5. The entropy rate defined in Eq.(\ref{eq2.4}) varies nonmonotonically with $\gamma$. For $\gamma \to 0$, $h^{RRW}$ recovers the resetting-free result in Eq.(\ref{eq2.2}). In the opposite limit, $\gamma \to 1$, the walker is always reset to the given node, so the dynamics is deterministic and $h^{RRW} \to 0$. There exists a maximum entropy rate at an intermediate value of $\gamma$, such that $h^{RRW}=h^{RRW}_{\max}$ at $\gamma=\gamma_{{\rm{opt}}}$. To obtain the maximum, we take the derivative of Eq.(\ref{eq2.4}) with respect to $\gamma$ and set it equal to zero, which gives the maximum entropy rate $h_{\max}^{RRW}=\ln(k+1)$ at $\gamma_{{\rm{opt}}}=\frac{1}{k+1}$. We have also performed Monte Carlo simulations of the resetting random walks and estimated the entropy rate from the simulation data; the results are fully consistent with the theory and are not shown in Fig.\ref{fig1} for the sake of clarity. Moreover, we compare the entropy rate of the RRW with that of the MERW \cite{PhysRevLett.102.160602}. For the MERW, the transition probability from node $i$ to node $j$ is defined as \begin{eqnarray}\label{eq2.5} {W_{ij}^{MERW}} = \frac{{{A_{ij}}}}{\Lambda_1 }\frac{{\langle {j| \psi_1 } \rangle }}{{\langle {i| \psi_1 } \rangle }}, \end{eqnarray} where $\Lambda_1$ is the largest eigenvalue of the adjacency matrix $\bm{A}$, and $\langle i | \psi_1 \rangle$ is the $i$th component of the eigenvector corresponding to $\Lambda_1$, as stated before. The MERW is biased in the sense that a walker follows a link $(i,j )$ with a probability proportional to the importance of its ending node $j$, as measured by its eigenvector centrality $\langle {j|\psi_1} \rangle$. It is not hard to verify that the stationary distribution of the MERW is $P_s(i)=\langle i | \psi_1 \rangle ^2$.
Substituting this result and Eq.(\ref{eq2.5}) into Eq.(\ref{eq4}), one obtains the entropy rate of the MERW, \begin{eqnarray}\label{eq2.6} {h^{MERW}} = \ln \Lambda_1. \end{eqnarray} For regular random networks without degree fluctuations, $\Lambda_1=k$, and thus ${h^{MERW}} = \ln k$, coinciding with the entropy rate of the SRW defined in Eq.(\ref{eq2.2}). Therefore, for the entropy rate on $k$-regular random networks, we have \begin{eqnarray}\label{eq2.7} {h^{RRW}_{\max}} = \ln(k+1) >{h^{MERW}} ={h^{SRW}}= \ln k. \end{eqnarray} \begin{figure*} \centerline{\includegraphics*[width=2.0\columnwidth]{bafig.pdf}} \caption{A Barab\'asi-Albert (BA) network of size $N=200$ and average degree $\langle k \rangle =2$. The nodes are labelled in descending order of degree. All nodes are classified into three types according to the entropy rate of resetting random walks obtained by resetting to each node. For the four nodes with the largest degrees (diamonds), the maximum entropy rate is larger than that of the MERW, $h_{\max}^{RRW}>h^{MERW}$. For 15 peripheral nodes (triangles), $h^{RRW}<h^{SRW}<h^{MERW}$ for any nonzero resetting probability. For the remaining nodes (circles), $h^{SRW}<h_{\max}^{RRW}<h^{MERW}$. \label{fig3}} \end{figure*} On the other hand, one can see from Fig.\ref{fig1} that for $0<\gamma< \gamma_c$ the entropy rate of the RRW on regular random networks is larger than that of the SRW (or MERW). To determine $\gamma_c$, we set $h^{RRW}=\ln k$, which yields the transcendental equation $k\gamma_c = {\left( {1 - \gamma_c } \right)^{\frac{{\gamma_c - 1}}{\gamma_c }}}$. For $k=4$, $\gamma_c=0.5$ is exact. For $k=3$ and $k=5$, numerically solving the equation gives $\gamma_c \approx 0.609$ and 0.423, respectively. As $k$ increases, $\gamma_c$ decreases and approaches zero as $k \to \infty$. \begin{figure} \centerline{\includegraphics*[width=1.0\columnwidth]{ba.pdf}} \caption{Entropy rate $h^{RRW}$ of resetting random walks as a function of the resetting probability $\gamma$ on the BA network shown in Fig.\ref{fig3}. The symbols indicate the maximum entropy rate $h_{\max}^{RRW}$. Different lines correspond to different choices of the resetting node. The upper and lower horizontal lines indicate the values of the entropy rate for the MERW and SRW, $h^{MERW}$ and $h^{SRW}$, respectively. \label{fig4}} \end{figure} As a second example, we consider a Cayley tree $C_{b,n}$, where $b$ is the coordination number of all but the outermost nodes and $n$ is the number of shells. The network is generated as follows. Initially ($n = 0$), $C_{b,0}$ consists of only a central node. To form $C_{b, 1}$, $b$ nodes are created and attached to the central node. For any $n > 1$, $C_{b, n}$ is obtained from $C_{b, n-1}$ by performing the following operation: for each boundary node of $C_{b, n-1}$, $b-1$ nodes are generated and attached to the boundary node. The size of the Cayley tree is $N=1+b\left[(b-1)^n-1\right]/(b-2)$, which for $b=3$ reduces to $N=1+3(2^n-1)$. In Fig.\ref{fig2}, we show the entropy rate $h^{RRW}$ as a function of $\gamma$ on a Cayley tree $C_{3,5}$ (see the inset of Fig.\ref{fig2}). The different lines correspond to choosing the resetting node from different shells. The upper and lower horizontal lines indicate the values of the entropy rate of the MERW and SRW, respectively. From Fig.\ref{fig2}, one can see that in all cases $h^{RRW}$ reaches a maximum value at a nonzero value of $\gamma$.
When the resetting node is located in an inner shell, $h^{RRW}$ takes larger values for any $\gamma$, except at $\gamma=0$ and $\gamma=1$, where $h^{RRW}$ coincides with $h^{SRW}$ and zero, respectively. Compared with the entropy rate of the MERW, $h^{RRW}$ can be larger than $h^{MERW}$ over a wide range of $\gamma$, especially when an inner node is selected as the resetting node. \begin{figure} \centerline{\includegraphics*[width=1.0\columnwidth]{bamax.pdf}} \caption{The maximum entropy rate $h^{RRW}_{\max}$ (a) of resetting random walks and the optimal resetting probability $\gamma_{{\rm{opt}}}$ (b) corresponding to $h^{RRW}_{\max}$ as a function of the resetting node label shown in Fig.\ref{fig3}. The upper and lower horizontal lines in (a) indicate the values of the entropy rate for the MERW and SRW, $h^{MERW}$ and $h^{SRW}$, respectively. \label{fig5}} \end{figure} Finally, we consider the RRW on a Barab\'asi-Albert (BA) network \cite{Science.286.509} of size $N=200$ and average degree $\langle k \rangle =2$, as shown in Fig.\ref{fig3}. Nodes are numbered in descending order of degree. In Fig.\ref{fig4}, we show the entropy rate $h^{RRW}$ as a function of the resetting probability $\gamma$. Here $h^{RRW}$ exhibits a richer dependence on $\gamma$, which varies with the node to which the walker is reset. For example, when the walker is reset to node 1 or node 10, $h^{RRW}$ behaves similarly to the cases in Fig.\ref{fig1} and Fig.\ref{fig2}: it exhibits a unique maximum at a nonzero value of $\gamma$. When node 40 or node 41 is chosen as the resetting node, $h^{RRW}$ shows a more complex dependence on $\gamma$. If node 102 is chosen as the resetting node, $h^{RRW}$ decreases monotonically as $\gamma$ increases. For each resetting node, we determine the maximum value of $h^{RRW}$, denoted $h^{RRW}_{\max}$, and the corresponding resetting probability $\gamma_{\rm{opt}}$, as shown in Fig.\ref{fig5}. According to the value of $h^{RRW}_{\max}$, we classify all nodes into three types. The first type consists of the four nodes with the largest degrees (diamonds in Fig.\ref{fig3}). When one of these four nodes is chosen as the resetting node, $h^{RRW}_{\max}>h^{MERW}$ and $\gamma_{\rm{opt}}$ lies between 0.31 and 0.33. When the walker is reset to any of the other nodes, $h^{RRW}$ is less than $h^{MERW}$ for any value of $\gamma$. Among them, there are 15 nodes (triangles in Fig.\ref{fig3}) for which $h^{RRW}$ is even less than $h^{SRW}$ for any nonzero value of $\gamma$; these 15 nodes are located at the periphery of the network. When a node of the third type (circles in Fig.\ref{fig3}), i.e., one of the majority of remaining nodes in the network, is chosen as the resetting node, we have $h^{SRW}<h^{RRW}_{\max}<h^{MERW}$. \section{Conclusions} In conclusion, we have studied the entropy rate of random walks on complex networks subject to stochastic resetting to a given node with a constant probability $\gamma$. We have computed the entropy rate $h^{RRW}$ of the resetting random walks on three different types of networks. For $k$-regular random networks, we have shown that $h^{RRW}$ is a nonmonotonic function of $\gamma$ and proved that $h^{RRW}$ admits a maximum $h^{RRW}_{\max}=\ln (k+1)$ at $\gamma =\frac{1}{k+1}$.
It is worth noting that $h^{RRW}$ is larger than the entropy rate of the SRW or MERW, $h^{SRW}=h^{MERW}=\ln k$, for $0<\gamma<\gamma_c$, where $\gamma_c$ is determined by the transcendental equation $k\gamma_c = {\left( {1 - \gamma_c } \right)^{\frac{{\gamma_c - 1}}{\gamma_c }}}$. We then considered $h^{RRW}$ on a Cayley tree $C_{3,5}$. No matter from which shell the resetting node is chosen, $h^{RRW}$ also exhibits a nonmonotonic dependence on $\gamma$: a maximum entropy rate occurs at a nonzero value of $\gamma$, and this maximum is larger than the entropy rates of the SRW and MERW. When the walker is reset to an inner shell, the entropy rate becomes larger. Finally, we considered a degree-heterogeneous network, namely a BA network of size $N=200$ and average degree $\langle k \rangle =2$. We find that the dependence of $h^{RRW}$ on $\gamma$ is more complex and depends strongly on the resetting node. When the resetting node is one of the nodes with the largest degrees, $h^{RRW}$ again has a unique maximum, and this maximum is larger than $h^{MERW}$. When a peripheral node is set to be the resetting node, $h^{RRW}$ decreases monotonically with $\gamma$, so that $h^{RRW}<h^{SRW}<h^{MERW}$. When the walker is reset to one of the remaining nodes, which constitute the majority of the network, the maximum entropy rate lies between $h^{SRW}$ and $h^{MERW}$. The concept of entropy rate and its maximization can find applications in information dissemination in social networks, data-packet delivery in computer networks, and the design of efficient vaccination campaigns. Our results indicate that it is possible to maximize the entropy rate on a given topology by a rather simple resetting operation. Our findings thus add to the list of nontrivial effects of stochastic resetting in this very active field. \begin{acknowledgments} This work was supported by the National Natural Science Foundation of China (11875069, 61973001). \end{acknowledgments}
\section{Introduction} Let $A$ be a matrix with nonnegative real entries. The \textit{nonnegative rank} of $A$ is the smallest $k$ for which there exist $k$ nonnegative rank-one matrices that sum to $A$. This concept is related to the \textit{nonnegative matrix factorization} problem, which is the task of finding the optimal approximation of a given matrix by a matrix of given nonnegative rank. This problem has important applications in different branches of modern science, including data mining~\cite{LS}, combinatorial optimization~\cite{Yan}, statistics~\cite{KRS}, quantum mechanics~\cite{CR}, and many others. This paper presents a combinatorial approach that leads to very short solutions of two widely known problems on nonnegative matrices, and one of these accomplishments is a proof that the nonnegative rank is NP-hard to compute. The paper~\cite{Vavas} by Vavasis has become a standard reference for this result in applied mathematics, and the proof it contains is based on ingenious geometric considerations. The proof we present is short and does not require any special knowledge. When this writing was about to be completed, the author learned that a similar proof is contained in the paper~\cite{JR} by Jiang and Ravikumar. Their Lemma~3.3 is stated in different terms, but it is essentially the oldest reference to the NP-hardness of nonnegative rank we are aware of. Another result that we obtain is a solution of the \textit{Cohen--Rothblum problem}. Assume that $A$ is a rational matrix with nonnegative rank $k$; do there exist $k$ \textit{rational} nonnegative rank-one matrices that sum to $A$? This problem was posed in the foundational paper~\cite{CR} more than 20 years ago, and it has been widely discussed in the literature. Vavasis~\cite{Vavas} demonstrates the connection between the Cohen--Rothblum problem and the algorithmic complexity of nonnegative rank. Another notable application of nonnegative ranks is the theory of extended formulations of polytopes~\cite{Yan}, and the possible lack of optimal rational factorizations is a difficulty in this theory~\cite{Roth}. Kubjas, Robeva, and Sturmfels~\cite{KRS} consider this problem in the context of modern statistics; they also give a partial solution of the problem. We note the paper~\cite{GFR}, which contains a solution of a similar problem posed for the \textit{positive semidefinite} rank. Our paper contains an explicit example of a $21\times 21$ matrix with integral entries that can be written as a sum of $19$ nonnegative rank-one matrices but not as a sum of $19$ rational nonnegative rank-one matrices. This gives a solution of the Cohen--Rothblum problem, which has been open until now. (Our solution of the Cohen--Rothblum problem appears on arXiv as the manuscript~\cite{myCR}, which was uploaded concurrently with another solution of the problem, see~\cite{anotherproof}. Also, the earlier paper~\cite{myNRDF} contains a related result that is both more general and more specific: there is a real matrix $A$ of conventional rank five which admits a nonnegative rank factorization, but no such factorization is rational in the entries of $A$.) \section{Our technique} Let $A$ be a nonnegative matrix with entries in a subfield $\mathbb{F}\subset\mathbb{R}$. The \textit{nonnegative rank} of $A$ \textit{with respect to} $\mathbb{F}$ (denoted $\operatorname{rank}_{\mathbb{F}+} A$) is the smallest $k$ for which $A$ is a sum of $k$ nonnegative rank-one matrices over $\mathbb{F}$.
In particular, the quantity $\operatorname{rank}_{\mathbb{R}+}(A)$ is the usual nonnegative rank as defined in the introduction. Combinatorial methods of studying the nonnegative rank have a long history, see the recent survey~\cite{FKPT}, the older papers~\cite{GP, Orl}, and references therein. Methods that employ zero patterns of matrices are still developing and being used in modern investigations~\cite{CCZ, FMPTdW}. As we will see later, the combinatorial analysis of zero patterns plays a crucial role in our approach. Let $\alpha_1,\ldots,\alpha_n$ be nonnegative real numbers. The $5\times(n+4)$ matrix $$\mathcal{B}(\alpha_1,\ldots,\alpha_n)=\begin{pmatrix} \alpha_1&\ldots&\alpha_n&1&1&1&1\\ 1&\ldots&1&1&1&0&0\\ 0&\ldots&0&0&1&1&0\\ 0&\ldots&0&0&0&1&1\\ 0&\ldots&0&1&0&0&1 \end{pmatrix}$$ will be particularly important in our consideration. Its lower-right $4\times4$ submatrix $$\mathcal{B}_0= \begin{pmatrix} 1&1&0&0\\ 0&1&1&0\\ 0&0&1&1\\ 1&0&0&1 \end{pmatrix} $$ appears in the note~\cite{Thomas}, which is one of the oldest references on the topic of nonnegative matrix factorizations. The reason why $\mathcal{B}_0$ is so important is that it has different conventional and nonnegative ranks: one has $\operatorname{rank} \mathcal{B}_0=3$ and $\operatorname{rank}_+ \mathcal{B}_0=4$. In particular, the row of ones can be represented as a linear combination of the rows of $\mathcal{B}_0$ in different ways: one can take the sum of the first and third rows, or the sum of the second and fourth rows, or any convex combination of these two sums. This leads us to the following easy but important observation. \begin{observation}\label{obs1} $\operatorname{Rank}_{\mathbb{F}+}\mathcal{B}(\alpha_1,\ldots,\alpha_n)=4$ if and only if $\alpha_1=\ldots=\alpha_n\in[0,1]\cap\mathbb{F}$. \end{observation} In other words, this matrix has the following interesting property. For any $\varepsilon\in(0,1)$, the $5\times5$ matrices obtained from $\mathcal{B}(\varepsilon)$ by a small perturbation of the $(1,1)$ entry have the same nonnegative rank as the $(1,1)$ cofactor of $\mathcal{B}(\varepsilon)$. Basic results of linear algebra show that such a property cannot hold for the conventional rank function of matrices over a field. In fact, the usual rank of a matrix equals the order of the largest non-singular submatrix, so a small perturbation of an entry would lead to a matrix with rank greater than the rank of its cofactor. We use this distinction between the conventional and nonnegative rank functions in the following matrix completion problem: Given nonnegative matrices $A\in\mathbb{F}^{m\times n}$, $B\in\mathbb{F}^{m\times k}$, $c\in\mathbb{F}^{1\times n}$ and a subset $I\subset\mathbb{F}$, what is the smallest nonnegative rank of \begin{equation}\label{eqAx} \mathcal{A}(x)=\left(\begin{array}{c|c} A&B\\\hline c&\textcolor{blue}{x\ldots x} \end{array}\right) \end{equation} provided that $x\in I$? Namely, we can reduce this completion version to the standard formulation of the nonnegative rank problem. \begin{thr}\label{pr3} Let $A\in\mathbb{F}^{m\times n}$, $B\in\mathbb{F}^{m\times k}$, $c\in\mathbb{F}^{1\times n}$ be nonnegative matrices, and let $r,s$ be positive integers. We assume that either $\operatorname{rank}_{\mathbb{F}+} A\geqslant r$ or $k=1$.
We define $$ \mathcal{G}= \left(\begin{array}{c|c|cccccc} &&0&0&0&0\\ {A}&{B}&\vdots&\vdots&\vdots&\vdots\\ &&0&0&0&0\\\hline {c}&\textcolor{blue}{s\ldots s}&\textcolor{green2}{1}&\textcolor{green2}{1}&\textcolor{green2}{1}&\textcolor{green2}{1}\\\hline 0\ldots0&\textcolor{green2}{1\ldots1}&\textcolor{red}{1}&\textcolor{green2}{1}&\textcolor{green2}{0}&\textcolor{green2}{0}\\ 0\ldots0&\textcolor{green2}{0\ldots0}&\textcolor{green2}{0}&\textcolor{red}{1}&\textcolor{green2}{1}&\textcolor{green2}{0}\\ 0\ldots0&\textcolor{green2}{0\ldots0}&\textcolor{green2}{0}&\textcolor{green2}{0}&\textcolor{red}{1}&\textcolor{green2}{1}\\ 0\ldots0&\textcolor{green2}{0\ldots0}&\textcolor{green2}{1}&\textcolor{green2}{0}&\textcolor{green2}{0}&\textcolor{red}{1} \end{array}\right) $$ and $\mathcal{A}(x)$ as in~\eqref{eqAx}. Then the inequality $\operatorname{rank}_{\mathbb{F}+}\mathcal{G}\leqslant r+4$ holds if and only if there is a $\xi\in[s-1,s]\cap\mathbb{F}$ such that $\operatorname{rank}_{\mathbb{F}+}\mathcal{A}(\xi)\leqslant r$. \end{thr} \begin{proof} Note that $\mathcal{G}$ is the sum of $\mathcal{A}(\xi)$ and $\mathcal{B}(s-\xi,\ldots,s-\xi)$ up to adding zero rows and columns to the latter matrices. Since $\operatorname{rank}_{\mathbb{F}+}\mathcal{B}(s-\xi,\ldots,s-\xi)=4$ by Observation~\ref{obs1}, the 'if' direction follows immediately. Let us prove the 'only if' direction. First, we note that the inequalities \begin{equation}\label{eq1}\operatorname{rank}_{\mathbb{F}+} \left(A\left|\right. B\right)\geqslant r\mbox{$ $ $ $ and $ $ $ $} \operatorname{rank}_{\mathbb{F}+} \begin{pmatrix}A\\\hline c\end{pmatrix}\geqslant r\end{equation} hold trivially if $\operatorname{rank}_{\mathbb{F}+} A\geqslant r$. (We note that the block matrices involved in~\eqref{eq1} have sizes $m\times(n+k)$ and $(m+1)\times n$, respectively.) If $k=1$, then $\mathcal{A}(x)$ can be obtained by adding a row (a column, respectively) to the matrix in the left (right, respectively) inequality of~\eqref{eq1}. Therefore, if~\eqref{eq1} were not true, we would get $\operatorname{rank}_{\mathbb{F}+}\mathcal{A}(x)\leqslant r$ and complete the proof. So we can assume that the inequalities~\eqref{eq1} are true. Now we assume that $G_1,\ldots,G_{r+4}$ are nonnegative rank-one matrices that sum to $\mathcal{G}$. If there were a $G_i$ with two non-zero red entries, then $\mathcal{G}$ would have a $2\times 2$ submatrix with positive entries two of which are red --- but this is not the case. Therefore, any $G_i$ contains at most one non-zero red entry, so there are at least four $G_i$'s with non-zero red entries. We call these $G_i$'s red, denote their sum by $\textcolor{red}{R}$, and observe that they have zero black entries. Using the first inequality in~\eqref{eq1}, we get that there are $r$ non-red $G_i$'s, and each of them has a non-zero entry either in the block corresponding to $A$ or in the block corresponding to $B$. Using the second inequality in~\eqref{eq1}, we see that every non-red $G_i$ contains a non-zero entry either in the block corresponding to $A$ or in the block corresponding to $c$. The two previous sentences allow us to conclude that the non-red $G_i$'s do not contribute to the green entries of $\mathcal{G}$. Taking into account Observation~\ref{obs1}, we get that, for some $a\in[0,1]$, the matrix $\textcolor{red}{R}$ equals $\mathcal{B}(a,\ldots,a)$ up to the zeroes at black positions.
Therefore, the top-left $(m+1)\times(n+k)$ submatrix of $\mathcal{G}-\textcolor{red}{R}$ is the matrix $\mathcal{A}(s-a)$ as in~\eqref{eqAx}, which completes the proof. \end{proof} \begin{cor}\label{pr33} Let $\mathcal{A}(x)$, $\mathcal{G}$ be as in Theorem~\ref{pr3}. If $k=1$, then we have $$\operatorname{rank}_{\mathbb{F}+}\mathcal{G}=\min \operatorname{rank}_{\mathbb{F}+}\mathcal{A}(\xi)+4,$$ where the minimum is taken over all $\xi\in[s-1,s]\cap\mathbb{F}$. \end{cor} \section{On the complexity of nonnegative rank} Corollary~\ref{pr33} allows us to construct a polynomial-time reduction to nonnegative rank from the following problem, which is known (see~\cite{Watson}) as the \textit{nonnegative rank of a partial 0-1 matrix}. Given a matrix $X$ whose $(i,j)$ entry can be $0$, $1$, or $x_{ij}$, what is the smallest nonnegative rank of a matrix obtained from $X$ by assigning numbers in $[0,1]$ to the variables $x_{ij}$? We are going to prove that this problem is NP-hard. Let $G$ be a simple graph with vertex set $V$ and edge set $E$. A \textit{clique} in $G$ is a subset $U\subset V$ such that $\{u_1,u_2\}\in E$ for all distinct $u_1,u_2\in U$. The \textit{clique covering number} of $G$ is the smallest number of cliques needed to cover $V$. Let $X=X(G)$ be the matrix (whose rows and columns are indexed by the vertices of $G$) defined by $X_{vv}=1$, $X_{uv}=x_{uv}$ if $u,v$ are adjacent, and $X_{uv}=0$ otherwise. Let $U_1,\ldots,U_c$ be cliques whose union is $V$, and assume without loss of generality that these cliques are disjoint. For all $i\in\{1,\ldots,c\}$, we define the matrix $H^i$ by $H^i_{\alpha\beta}=1$ if $\alpha,\beta\in U_i$ and $H^i_{\alpha\beta}=0$ otherwise. Clearly, the matrix $H^1+\ldots+H^c$ has nonnegative rank at most $c$ and can be obtained from $X$ by replacing the variables with zeros and ones. Conversely, let $M^1,\ldots,M^r$ be nonnegative rank-one matrices whose sum can be obtained from $X$ by replacing the variables with numbers in $[0,1]$. If one of the sets $V^l=\{v\in V\left|\right.M^l_{vv}>0\}$ were not a clique, we would find non-adjacent distinct vertices $u,v$ in $V^l$. This would imply $X_{uv}=0$ and contradict $M^l_{uv}>0$, which holds because $M^l$ is a nonnegative rank-one matrix with positive $(u,u)$ and $(v,v)$ entries. So we see that $V^1,\ldots,V^r$ are cliques, and their union is $V$. This proves that the clique covering number of $G$ equals the smallest nonnegative rank of matrices that are obtained from $X(G)$ by replacing the variables with numbers in $[0,1]$. Therefore, since the clique covering number is NP-hard to compute (see~\cite{Karp}), so is the nonnegative rank for partial 0-1 matrices. The observation at the beginning of this section shows that the standard version of the nonnegative rank problem is NP-hard as well. As pointed out in the introduction, Lemma~3.3 in~\cite{JR} contains essentially the same result. Namely, Jiang and Ravikumar prove the NP-hardness of the \textit{normal set basis problem}, which is a reformulation of the more commonly known \textit{biclique partition number} problem. We refer the reader to~\cite{CHHK} for a discussion of these two formulations and further complexity results. There is also a straightforward correspondence between the biclique partition number of a bipartite graph and the nonnegative rank of its adjacency matrix, see Remark~6.4 in~\cite{Orl} and Lemma~2.2 in~\cite{GP}. We hope that these references can help an interested reader to translate the proof of Lemma~3.3 in~\cite{JR} into the language of matrix theory and come up with the NP-hardness proof for nonnegative rank.
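To make the reduction concrete, here is a small numerical sketch (an illustration only, with an arbitrary five-vertex graph and clique cover chosen purely as an example; it is not part of the argument). It builds the completion of $X(G)$ induced by a clique cover and checks that it is a sum of clique indicator rank-one matrices respecting the zero pattern of $X(G)$.
\begin{verbatim}
# Sketch of the reduction: a clique cover of G gives a completion of the
# partial 0-1 matrix X(G) with nonnegative rank at most the number of cliques.
# The 5-vertex graph and its cover by U1={0,1,2}, U2={3,4} are arbitrary examples.
import numpy as np

N = 5
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (2, 3)]
A = np.zeros((N, N))
for u, v in edges:
    A[u, v] = A[v, u] = 1

cliques = [[0, 1, 2], [3, 4]]            # disjoint cliques covering V

# H^i is the 0-1 indicator matrix of clique U_i: a nonnegative rank-one matrix.
H = [np.outer(np.isin(np.arange(N), U), np.isin(np.arange(N), U)).astype(float)
     for U in cliques]
M = sum(H)

print(all(np.linalg.matrix_rank(h) == 1 for h in H))   # True: rank-one pieces
print(np.all(np.diag(M) == 1))                          # diagonal entries equal 1
off_diag_nonedge = (A == 0) & ~np.eye(N, dtype=bool)
print(np.all(M[off_diag_nonedge] == 0))                 # zero pattern of X(G) kept
print(np.all((M >= 0) & (M <= 1)))                      # variable entries in [0,1]
\end{verbatim}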
\section{Nonnegative rank depends on the field} We proceed with a solution of the Cohen--Rothblum problem. To this end, we consider the matrix $$ \begin{pmatrix} \textcolor{magenta}{2}&\textcolor{yellow2}{2}&\textcolor{yellow2}{2}&\textcolor{yellow2}{1}&\textcolor{yellow2}{0}&0&0&0&0&0&0&0&0&0&0&0&0&\textcolor{magenta}{1}&\textcolor{magenta}{1}&\textcolor{magenta}{1}&\textcolor{magenta}{1}\\ \textcolor{yellow2}{1}&\textcolor{yellow2}{2}&\textcolor{yellow2}{1}&\textcolor{yellow2}{0}&\textcolor{yellow2}{1}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \textcolor{yellow2}{0}&\textcolor{yellow2}{0}&\textcolor{yellow2}{1}&\textcolor{green2}{2}&\textcolor{yellow2}{0}&0&0&0&0&0&0&0&0&\textcolor{green2}{1}&\textcolor{green2}{1}&\textcolor{green2}{1}&\textcolor{green2}{1}&0&0&0&0\\ \textcolor{yellow2}{0}&\textcolor{yellow2}{1}&\textcolor{yellow2}{0}&\textcolor{yellow2}{0}&\textcolor{blue}{2}&0&0&0&0&\textcolor{blue}{1}&\textcolor{blue}{1}&\textcolor{blue}{1}&\textcolor{blue}{1}&0&0&0&0&0&0&0&0\\ \textcolor{yellow2}{0}&\textcolor{yellow2}{1}&\textcolor{yellow2}{1}&\textcolor{red}{2}&\textcolor{red}{2}&\textcolor{red}{1}&\textcolor{red}{1}&\textcolor{red}{1}&\textcolor{red}{1}&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&\textcolor{red}{1}&\textcolor{red}{1}&\textcolor{red}{1}&\textcolor{red}{1}&\textcolor{red}{0}&\textcolor{red}{0}&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&\textcolor{red}{0}&\textcolor{red}{0}&\textcolor{red}{0}&\textcolor{red}{1}&\textcolor{red}{1}&\textcolor{red}{0}&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&\textcolor{red}{0}&\textcolor{red}{0}&\textcolor{red}{0}&\textcolor{red}{0}&\textcolor{red}{1}&\textcolor{red}{1}&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&\textcolor{red}{0}&\textcolor{red}{0}&\textcolor{red}{1}&\textcolor{red}{0}&\textcolor{red}{0}&\textcolor{red}{1}&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&\textcolor{blue}{1}&0&0&0&0&\textcolor{blue}{1}&\textcolor{blue}{1}&\textcolor{blue}{0}&\textcolor{blue}{0}&0&0&0&0&0&0&0&0\\ 0&0&0&0&\textcolor{blue}{0}&0&0&0&0&\textcolor{blue}{0}&\textcolor{blue}{1}&\textcolor{blue}{1}&\textcolor{blue}{0}&0&0&0&0&0&0&0&0\\ 0&0&0&0&\textcolor{blue}{0}&0&0&0&0&\textcolor{blue}{0}&\textcolor{blue}{0}&\textcolor{blue}{1}&\textcolor{blue}{1}&0&0&0&0&0&0&0&0\\ 0&0&0&0&\textcolor{blue}{0}&0&0&0&0&\textcolor{blue}{1}&\textcolor{blue}{0}&\textcolor{blue}{0}&\textcolor{blue}{1}&0&0&0&0&0&0&0&0\\ 0&0&0&\textcolor{green2}{1}&0&0&0&0&0&0&0&0&0&\textcolor{green2}{1}&\textcolor{green2}{1}&\textcolor{green2}{0}&\textcolor{green2}{0}&0&0&0&0\\ 0&0&0&\textcolor{green2}{0}&0&0&0&0&0&0&0&0&0&\textcolor{green2}{0}&\textcolor{green2}{1}&\textcolor{green2}{1}&\textcolor{green2}{0}&0&0&0&0\\ 0&0&0&\textcolor{green2}{0}&0&0&0&0&0&0&0&0&0&\textcolor{green2}{0}&\textcolor{green2}{0}&\textcolor{green2}{1}&\textcolor{green2}{1}&0&0&0&0\\ 0&0&0&\textcolor{green2}{0}&0&0&0&0&0&0&0&0&0&\textcolor{green2}{1}&\textcolor{green2}{0}&\textcolor{green2}{0}&\textcolor{green2}{1}&0&0&0&0\\ \textcolor{magenta}{1}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&\textcolor{magenta}{1}&\textcolor{magenta}{1}&\textcolor{magenta}{0}&\textcolor{magenta}{0}\\ \textcolor{magenta}{0}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&\textcolor{magenta}{0}&\textcolor{magenta}{1}&\textcolor{magenta}{1}&\textcolor{magenta}{0}\\ \textcolor{magenta}{0}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&\textcolor{magenta}{0}&\textcolor{magenta}{0}&\textcolor{magenta}{1}&\textcolor{magenta}{1}\\ \textcolor{magenta}{0}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&\textcolor{magenta}{1}&\textcolor{magenta}{0}&\textcolor{magenta}{0}&\textcolor{magenta}{1} \end{pmatrix} $$ \noindent and denote it by $\mathcal{M}$. 
We are going to show that the rational and real nonnegative ranks of $\mathcal{M}$ are different. Our strategy is to apply Theorem~\ref{pr3} to $\mathcal{M}$, and we assume that $\mathbb{F}$ is a field as in Section~2. Let us remove the last twelve rows and columns from $\mathcal{M}$, replace the $(1,1)$, $(3,4)$, $(4,5)$ entries by the variables $a,b,c$, and denote the resulting matrix by $\mathcal{M}_1(a,b,c)$. A threefold application of Corollary~\ref{pr33} implies \begin{equation}\label{eqpr1}\operatorname{rank}_{\mathbb{F}+}\mathcal{M}=\min\limits_{a,b,c\in[1,2]\cap\mathbb{F}}\,\operatorname{rank}_{\mathbb{F}+}\mathcal{M}_1(a,b,c)+12.\end{equation} Observing that the $3\times 3$ submatrix defined as the intersection of the second, third, and fourth rows and the first three columns of $\mathcal{M}$ has rank three, we apply Theorem~\ref{pr3} with $r=3$ to $\mathcal{M}_1(a,b,c)$. Taking into account~\eqref{eqpr1}, we get that the inequality $\operatorname{rank}_{\mathbb{F}+}\mathcal{M}\leqslant19$ holds if and only if there are $a,b,c,d\in[1,2]\cap\mathbb{F}$ such that the matrix $$ \mathcal{C}(a,b,c,d)=\begin{pmatrix} a&2&2&1&0\\ 1&2&1&0&1\\ 0&0&1&b&0\\ 0&1&0&0&c\\ 0&1&1&d&d \end{pmatrix} $$ has nonnegative rank at most three with respect to $\mathbb{F}$. Basic tools of linear algebra allow us to show that the conventional rank of $\mathcal{C}(a,b,c,d)$ exceeds three unless $b=c=d=1\pm\sqrt{0.5}$ and $a=2-1/b$. In particular, the rational nonnegative rank of $\mathcal{C}$ cannot be less than four, which implies $\operatorname{rank}_{\mathbb{Q}+}\mathcal{M}\geqslant20$. On the other hand, for $\alpha=1+\sqrt{0.5}$, $$\begin{pmatrix} 0&0&0&0&0\\ 0&\alpha^{-1}&0&0&1\\ 0&0&0&0&0\\ 0&1&0&0&\alpha\\ 0&1&0&0&\alpha \end{pmatrix} + \begin{pmatrix} 0&0&\alpha^{-1}&1&0\\ 0&0&0&0&0\\ 0&0&1&\alpha&0\\ 0&0&0&0&0\\ 0&0&1&\alpha&0 \end{pmatrix} +\begin{pmatrix} \sqrt{2}&2&\sqrt{2}&0&0\\ 1&\sqrt{2}&1&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{pmatrix} $$ is a representation of $\mathcal{C}(\sqrt{2},\alpha,\alpha,\alpha)$ as a sum of three nonnegative rank-one matrices. This shows that $\operatorname{rank}_{\mathbb{R}+}\mathcal{M}\leqslant19$ and completes the proof that the rational and real nonnegative ranks of $\mathcal{M}$ are different. \section{Note added in proof} The idea of my construction comes from the earlier paper~\cite{myBool}, which contains a similar NP-hardness proof but in the setting of Boolean matrices. As I have subsequently learned, Theorem 3.3 in~\cite{theirBool} has the same formulation and proof as the result in~\cite{myBool} but in the language of graph theory. I feel sorry that I did not know about~\cite{theirBool} when I was preparing~\cite{myBool} for publication, but such occurrences seem to be ubiquitous in modern mathematics and hard to avoid. (As said above, a similar situation arose with Lemma 3.3 in~\cite{JR}, which states the NP-hardness of nonnegative rank but not in the terms common in applied mathematics.) \section{Acknowledgements} This work was carried out at National Research University --- Higher School of Economics in Moscow. I owe my gratitude to the university for their support of research activities and to my students, from whom I probably learned more than they learned from me. I would like to thank the editors and anonymous referees of \textit{SIAM Review} for careful reading of the paper and helpful suggestions, which allowed me to improve the presentation of the results.
Before I decided to include the solution of the Cohen--Rothblum problem in this paper, the manuscript~\cite{myCR} was under review at \textit{SIAM Journal of Applied Algebraic Geometry}. I am grateful to the referee of that journal for the detailed report and helpful comments.
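As a quick numerical sanity check of the rank-one decomposition used in the previous section, the following short script verifies that the three summands are nonnegative, have rank one, and add up to $\mathcal{C}(\sqrt{2},\alpha,\alpha,\alpha)$. This is only an illustrative sketch (it assumes the NumPy library) and is not part of the argument.
\begin{verbatim}
import numpy as np

alpha = 1 + np.sqrt(0.5)
sqrt2 = np.sqrt(2.0)

# The three rank-one summands, written as outer products of nonnegative vectors.
M1 = np.outer([0, 1/alpha, 0, 1, 1], [0, 1, 0, 0, alpha])
M2 = np.outer([1/alpha, 0, 1, 0, 1], [0, 0, 1, alpha, 0])
M3 = np.outer([sqrt2, 1, 0, 0, 0], [1, sqrt2, 1, 0, 0])

# The target matrix C(sqrt(2), alpha, alpha, alpha).
C = np.array([[sqrt2, 2, 2, 1, 0],
              [1, 2, 1, 0, 1],
              [0, 0, 1, alpha, 0],
              [0, 1, 0, 0, alpha],
              [0, 1, 1, alpha, alpha]])

assert np.allclose(M1 + M2 + M3, C)                    # summands reproduce C
assert all((M >= 0).all() for M in (M1, M2, M3))       # summands are nonnegative
assert all(np.linalg.matrix_rank(M) == 1 for M in (M1, M2, M3))
print("decomposition verified")
\end{verbatim}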
\subsection{Introduction} Since their discovery, neutron stars (NSs) have excited a broad range of interest not only in the astrophysical context, but also in terms of fundamental physics: for example, confirmation of the existence of gravitational waves \citep{1982ApJ...253..908T}, high-density nuclear matter inside NSs \citep{2007PhR...442..109L}, and high-magnetic field effects around these enigmatic objects \citep{2006RPPh...69.2631H}. Since NSs are characterized by extreme conditions, such as dense matter, rapid rotation, and high magnetic field, they have proved to be ideal laboratories for testing fundamental physics under conditions that cannot be achieved in ground-based experiments. Even 45 years after their discovery, space-based observations are becoming ever more important for understanding the growing diversity of these enigmatic objects. \begin{figure}[htb] \begin{center} \includegraphics[width=0.5\hsize]{ppdot} \caption{ The $P$-$\dot{P}$ diagram of pulsars shown with lines of constant dipole magnetic field, $B$, and spin-down age ($\tau_c=\frac{P}{2\dot{P}}$). The black dots show the majority of radio-discovered pulsars believed to be rotation-powered, and the red circles show the X-ray or gamma-ray discovered magnetars addressed in this white paper (see \S1.1). Source: ATNF pulsar catalog \citep{Manchester2005AJ....129.1993M}.} \label{fig1:ppdot} \end{center} \end{figure} Recent multi-wavelength observations from radio to the highest energy gamma-rays have revealed a remarkable diversity of NSs \citep{2010arXiv1005.0876K}. Figure~\ref{fig1:ppdot} shows the distribution of known pulsars based on their measured spin ($P$) and spin-down ($\dot{P}$) properties. So far, more than 1700 pulsars have been discovered in the radio, and most of them are thought to be powered by their rotational energy (i.e. Rotation Powered Pulsars; RPPs). Their magnetic field is usually inferred from their spin properties\footnote{The pulsar's dipole surface magnetic field is estimated as $B_{\rm s}$ (Gauss) $\approx$3.2$\times$10$^{19}$($P\dot{P}$)$^{1/2}$, where $P$ is in seconds.}, and lies in the range 10$^{11}$--10$^{13}$~Gauss (1~Tesla~=~10$^4$~Gauss). In addition, some accretion-powered pulsars show X-ray ``Cyclotron Resonance Scattering Features'', allowing us to estimate the magnetic field of the neutron star; see \S\ref{sec:binaries_intro} and \S\ref{sec:crsf}. On the other hand, Soft Gamma-ray Repeaters (SGRs) and Anomalous X-ray Pulsars (AXPs) have become a rapidly growing new subclass with much higher magnetic fields, $B\sim 10^{14}$--$10^{15}$ Gauss, and have been dubbed ``magnetars''. Unlike in rotation-powered or accretion-powered pulsars, the bulk of their X-ray emission appears to be powered by their super-strong magnetic fields. The growing diversity of NSs includes the Rotating Radio Transients (RRATs), X-ray Dim Isolated NSs (XDINSs), and Central Compact Objects (CCOs). Understanding the connection between these classes remains one of the most important questions in this field; multi-wavelength observations continue to provide clues to their nature, emission mechanisms, and physical properties. A unified picture requires a better understanding of their birth environments, evolutionary paths, and interaction with their (binary, if applicable) environments. In this White Paper, we focus on highly magnetized NSs with magnetic fields $B>10^{12}$\,Gauss. These include magnetars (\S\ref{sec:magnetars_intro}, \S\ref{sec:magnetars}) as well as accreting pulsars (\S\ref{sec:binaries_intro}, \S\ref{sec:binaries}).
The latter are mostly found in High Mass X-ray Binaries (HMXBs). NSs with relatively weaker fields ($B\leq10^{11}$\,Gauss) are usually observed in Low Mass X-ray Binaries (LMXBs); these sources will be reviewed in another White Paper (WP\#3, Done, Tsujimoto et al.). There are exceptions, though, like the Low and Intermediate Mass X-ray Binary pulsars GX\,1$+$4 (\S\ref{sec:gx1p4}) and Her~X-1 (\S\ref{sec:crsf}). We also include new and not yet fully characterized accreting binary classes like Super-giant Fast X-ray Transients (SFXTs, \S\ref{sec:sfxt}) and Gamma-ray Loud Binaries (\S\ref{sec:gamma}) in the current White Paper. Last but not least, we also consider peculiar sources that are not necessarily known to harbor a pulsar and can therefore serve as comparison sources (e.g., regarding wind accretion) as well as being unique study objects in their own right, e.g., Cyg~X-3 (\S\ref{sec:wind}) or SS\,433. \subsection{Accreting Pulsars}\label{sec:binaries_intro} Many X-ray binary pulsars (XRBPs) comprise a young neutron star (NS), endowed with a strong $B$-field ($\sim$$10^{12}$ G), and a supergiant or main-sequence ``donor'' star. The donor star can lose a conspicuous amount of matter through a strong stellar wind and/or via Roche lobe overflow \citep[see, e.g.,][]{frank2002}. A large part of this material is focused toward the NS as a consequence of the strong gravitational field of this object, and then threaded by its intense magnetic field at several thousand kilometers from the stellar surface. The region in which the ram pressure of the plasma equals the magnetic field pressure is known as the Alfv\'en surface, and most of the exchange of angular momentum happens in its vicinity (see the short numerical sketch below for an order-of-magnitude estimate of the corresponding radius). Once the material is funneled down to the magnetic poles of the NS, accretion columns may form. Here, the gravitational potential energy of the accreting matter is first converted into kinetic energy and then dissipated in the form of X-rays \citep[see, e.g.,][]{Pringle72,Davids73}. To a first approximation, the $B$-field lines of the NS rotate rigidly with the surface of the star, and thus the X-ray emission emerging from the accretion column is received by the observer modulated at the spin period of the compact object, if the magnetic and rotational axes are misaligned. Important effects are expected from the interaction of radiation with the strongly magnetized plasma (see below). These effects can be probed on observational grounds by studying pulse profiles and time-resolved spectra. As one of the most important advances expected with \textsl{ASTRO-H}, the SXS will probe with high significance the characteristics of the plasma in the stellar wind and its coupling with the Alfv\'en surface. This is an open field of study, which is receiving further impetus from novel simulations \citep[e.g.,][]{2012A&A...547A..20M} and will receive an observational breakthrough from the high spectral resolution and high effective area of the SXS, which can measure precisely, e.g., the ionization parameter \citep{Turner1968}. The strong wind, with a velocity of more than $10^8$ cm s$^{-1}$, has a strong impact on the emission line profiles. Doppler shifts and broadening, coupled with self-absorption by the wind itself \citep{2001ApJ...559.1108O}, and P-Cygni profiles will be observed \citep{2000ApJ...544L.123B}. The existence of accretion wakes and of the ionization front of a photo-ionized plasma can also be studied through the modulation of the line profiles induced by the orbital motion.
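To make the numbers above more concrete, the following minimal sketch estimates the radius at which the wind ram pressure balances the magnetic pressure, using the standard magnetospheric-radius expression $r_{\rm m}\simeq(\mu^4/2GM\dot{M}^2)^{1/7}$. The neutron-star mass, radius, field, and luminosity adopted here are generic illustrative assumptions, not values taken from a specific source discussed in this White Paper.
\begin{verbatim}
import numpy as np

G   = 6.674e-8            # gravitational constant [cgs]
M   = 1.4 * 1.989e33      # NS mass [g]                    (assumed)
R   = 1.0e6               # NS radius [cm]                 (assumed)
B   = 1.0e12              # surface dipole field [G]       (assumed)
L_x = 1.0e37              # accretion luminosity [erg/s]   (assumed)

mdot = L_x * R / (G * M)  # accretion rate from L = G M mdot / R
mu   = B * R**3           # magnetic dipole moment [G cm^3]

# Radius where the magnetic pressure balances the ram pressure of the inflow.
r_m = (mu**4 / (2.0 * G * M * mdot**2))**(1.0 / 7.0)
print(f"magnetospheric (Alfven) radius ~ {r_m:.2e} cm ~ {r_m/1e5:.0f} km")
\end{verbatim}
For the values above this gives a few thousand kilometers, consistent with the distance scale quoted in the text.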
A subject which looks particularly interesting is the study of the time variation of the iron fluorescence line on the spin-period timescale, which might help trace the ultimate fate of the plasma before it is captured by the $B$-field and unveil the location and extent of the transition zone between the wind and the NS magnetosphere. The understanding of the broad-band energy spectrum and its time variations can be used to track the X-ray emission pattern close to the NS surface and ultimately constrain the fundamental parameters of a neutron star, such as its mass, magnetic moment, and radius. Among the different radiation processes, the interaction with the magnetic field is probably the most crucial for the XRBPs, as it gives rise to the cyclotron resonant scattering features \citep[CRSFs; see, e.g.,][ for recent models]{isenberg1998,araya2000,gabi2007}. The CRSFs provide a unique tool to directly estimate the magnetic field strength in the X-ray emitting region close to the NS surface, because the centroid energy of the fundamental appears at $E_{cyc} = 11.6 B_{12} \times (1 + z)^{-1}$ keV. \citep[Here $B_{12}$ is the magnetic field strength in units of $10^{12}$\,G and $z$ is the gravitational redshift in the line-forming region,][]{wasserman1983}. If a cyclotron line is detected with high significance in the energy range of the SGD (many are known, see \citealt{2012MmSAI..83..230C} for a compilation), it may be possible to measure the degree of polarisation, which is induced by the scattering with electrons trapped in a strong magnetic field. This would be a major breakthrough for our understanding of the source emission mechanism, as it would be a very strong constraint on the geometry of the X-ray emitting region, which is currently unknown. \subsection{Magnetars}\label{sec:magnetars_intro} We expect NSs to acquire their strong magnetic fields up to $\sim$10$^{12}$~Gauss or more when their massive progenitors collapse, but the origin of strong magnetism in NSs is one of the biggest unsolved problems in astrophysics. Some of the major challenges for understanding their fundamental properties lie in the difficulties associated with probing observationally their internal structure, the state of nuclear matter, and their magnetic field configuration. Similarly, we know little about how their magnetic fields evolve and whether they decay with time. To tackle these questions, Soft Gamma-ray Repeaters (SGRs) and Anomalous X-ray Pulsars (AXPs), collectively called ``magnetars'' \citep{1992ApJ...392L...9D, 1995MNRAS.275..255T}, have in the past decade provided an ideal laboratory, particularly in the light of recent multi-wavelength observations and monitoring programs. Magnetars are a fascinating subclass of NSs \citep{2006csxs.book..547W, 2008A&ARv..15..225M} characterized by (i) spin periods in a narrow range of $P$$\sim$2--12 s; (ii) a high spin-down rate, $\dot{P}$, indicative of a young characteristic age $\tau_c\equiv P/(2\dot{P}) \lesssim$100 kyr; (iii) dominant emission in X-rays, with a luminosity $L_X \sim 10^{34-36}$ erg s$^{-1}$ that largely exceeds their spin-down luminosity, $\dot{E}_{\rm sd}$~=~$\frac{4\pi^2 I\dot{P}}{P^3}$ (where $I$ is the pulsar's moment of inertia); (iv) sporadic X-ray activity on time-scales of msec to years; and (v) no evidence for accretion, indicating \textit{isolated} compact objects, unlike the HMXBs.
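As a concrete illustration of items (ii) and (iii), the sketch below evaluates the characteristic age, spin-down luminosity, and dipole surface field for a representative set of timing parameters. The values $P=6$~s, $\dot{P}=10^{-11}$~s~s$^{-1}$, and $I=10^{45}$~g~cm$^2$ are generic assumptions chosen for illustration, not measurements quoted in this White Paper.
\begin{verbatim}
import numpy as np

P    = 6.0       # spin period [s]              (assumed example)
Pdot = 1.0e-11   # spin-down rate [s/s]         (assumed example)
I    = 1.0e45    # moment of inertia [g cm^2]   (canonical assumption)

B_s   = 3.2e19 * np.sqrt(P * Pdot)          # dipole surface field [G]
tau_c = P / (2.0 * Pdot) / 3.15e7           # characteristic age [yr]
E_sd  = 4.0 * np.pi**2 * I * Pdot / P**3    # spin-down luminosity [erg/s]

print(f"B_s   ~ {B_s:.1e} G")       # ~2.5e14 G, above B_QED
print(f"tau_c ~ {tau_c:.1e} yr")    # ~1e4 yr, i.e. a young object
print(f"E_sd  ~ {E_sd:.1e} erg/s")  # well below L_X ~ 1e34-36 erg/s
\end{verbatim}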
The surface dipole magnetic field of magnetars estimated from $P$ and $\dot{P}$ (see footnote in the Introduction and Figure~1) turns out to be extremely high, $B_{\rm s} > B_{QED}$, the so-called quantum electrodynamic critical field of 4.4~$\times$~10$^{13}$~G at which the Landau level separation reaches the rest mass energy of an electron, $m_ec^2=511$ keV. We note however the recent discovery of lower magnetic field magnetars, discussed later. Despite the wealth of observational studies dedicated to the study of magnetars, their true nature and their evolutionary link to the other classes of NSs is still not clear. Moreover, the magnetar hypothesis itself remains an open issue. While the magnetar model \citep{1992ApJ...392L...9D, 1995MNRAS.275..255T} remains the most popular theory for explaining the overall properties of SGRs and AXPs, other models have been proposed and are not yet completely ruled out. These include the fallback disk model \citep{2001ApJ...557L..61A, 2009ApJ...702.1309E}, the quark star model \citep{2006ApJ...653..558O}, and the white dwarf model \citep{2012PASJ...64...56M}. Therefore, a truly fundamental question related to their intrinsic magnetic fields and the powering mechanism for their observed emission remains to be answered. One straightforward way of confirming their ``magnetar'' nature is through a \textit{direct} measurement of their magnetic fields. This can be performed by detecting cyclotron resonance lines as already established in accretion-powered pulsars using the ``electron'' cyclotron resonance appearing at $E_{\rm ec}=11.6\times (B/10^{12} {\rm \ G})$ keV; see e.g. \citep{1978ApJ...219L.105T}. Correspondingly, the ``proton'' cyclotron resonance is expected to appear at an energy \begin{equation} E_{\rm pc} = 0.63\times(B/10^{14} {\rm \ G}) {\rm \ keV}. \label{eq:proton_cyclotron} \end{equation} Although signatures of the proton cyclotron resonance have been reported from only a few magnetars \citep{Ibrahim2002ApJ...574L..51I, Ibrahim2003ApJ...584L..17I, 2003ApJ...586L..65R, 2008AIPC..983..234G}, none of them are considered to be convincing due to the limited energy bands, their transient nature, or insufficient statistics. We are thus urged to search for much firmer evidence of proton cyclotron resonances using the SXS of \textsl{ASTRO-H}. A detailed discussion will be given in \S3.3. If this subclass indeed exhibits such extreme fields, how do these stars sustain their strong magnetism, and how are the magnetars linked to the canonical NSs with typical $B_{\rm s} \sim 10^{12}$ G? Since we do not know whether they are intrinsically different from other ordinary pulsars, an effective approach is to search for transition objects between magnetars and canonical pulsars. Such an object has in fact been discovered and caught in the act of transitioning from a rotation-powered pulsar to a magnetar: the high-magnetic field pulsar J1846--0258 in the supernova remnant Kes~75 \citep{2008ApJ...678L..43K, 2008Sci...319.1802G}. Furthermore, the {\it Swift} satellite has more recently been enabling the detection of sporadic X-ray outbursts, increasing the number of known magnetars by a few new sources per year. These ongoing discoveries imply that this population may be much larger in our Galaxy than previously expected, requiring us to revise our understanding of the formation and evolution of both magnetars and NSs.
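To illustrate Eq.~(\ref{eq:proton_cyclotron}), the short sketch below evaluates the expected proton cyclotron energy across the magnetar field range and, for comparison, the electron cyclotron energy quoted above; the field values are illustrative only.
\begin{verbatim}
# Proton cyclotron energy   E_pc = 0.63 keV x (B / 1e14 G)
# Electron cyclotron energy E_ec = 11.6 keV x (B / 1e12 G)
for B in (1e14, 5e14, 1e15):          # illustrative magnetar fields [G]
    E_pc = 0.63 * (B / 1e14)          # keV, falls in the SXS soft X-ray band
    E_ec = 11.6 * (B / 1e12)          # keV, far outside the X-ray band here
    print(f"B = {B:.0e} G : E_pc = {E_pc:5.2f} keV, E_ec = {E_ec:8.0f} keV")
\end{verbatim}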
\begin{figure}[htb] \begin{center} \includegraphics[height=0.32\hsize]{suzaku_magnetar_nuFnu} \includegraphics[height=0.32\hsize]{magnetar_B_vs_HR} \caption{(left) Broadband $\nu F_{\nu}$ spectra of 4 magnetars observed with \textsl{Suzaku} (modified from \citep{Enoto2010ApJ...722L.162E}). To clearly illustrate the difference, each spectrum is normalized at 2 keV. Individual sources are shown in different colors, labelled with their magnetic field. (right) Correlation of the broadband hardness ratio (HR = HXC/SXC) with the surface magnetic field. } \label{fig1:suzaku_magnetar_nuFnu} \end{center} \end{figure} The discovery of transient magnetars \citep{2004ApJ...609L..21I} and, more recently, the discovery of a low-field (i.e.\ $B<B_{QED}$) ``magnetar'' SGR 0418+5729 \citep{2010Sci...330..944R} (and a few others, see later) suggest that there is a larger population of magnetar-like objects or ``fossil magnetars'' in our Galaxy and that the spin-down dipole magnetic field is \textit{not} the only factor determining the magnetar properties. These ``fossil'' magnetars will be interesting potential targets for \textsl{ASTRO-H}. As suggested by such weaker-field magnetar activity, NSs may store a huge hidden toroidal field component as an energy reservoir, in addition to the poloidal (dipole) field component inferred from the spin-down. This is an interesting topic, related to the supernova explosion mechanism and to the formation of magnetars and even of the more canonical NSs. In order to observationally address the supernova explosion mechanism, the birth environment and progenitors of magnetars, one promising approach is through X-ray spectroscopy of supernova remnants (SNRs) associated with magnetars. Recent \textit{Chandra} and \textit{XMM-Newton} observations (e.g. \citealt{2012ApJ...754...96K}) suggested a high-mass progenitor ($\gtrsim$30 $M_\odot$) for an SNR associated with a high-B pulsar, a result that is also supported by multi-wavelength studies of a few magnetars (see e.g. \citealt{2005ApJ...620L..95G, 2007ApJ...667..219M}). Related \textsl{Suzaku} studies have also been performed for the SNRs Kes~73 and CTB~109, which host AXPs (\S3.1). So far $\sim$20 SGRs and AXPs have been discovered in our Galaxy and the Magellanic Clouds. For a long time, they were thought to emit X-rays only below $\sim$10~keV. This bright soft X-ray component (hereafter SXC) is represented by a quasi-thermal spectrum with $kT \sim 0.5$ keV, hotter than that of other isolated NSs. A new breakthrough came with \textsl{INTEGRAL}'s discovery of an unexpected hard X-ray component (HXC) above $\sim$10~keV \citep{2006ApJ...645..556K}. This HXC was confirmed by follow-up \textsl{Suzaku} observations, covering the SXC and HXC simultaneously thanks to the combination of the X-ray Imaging Spectrometer (XIS) and the Hard X-ray Detector (HXD) \citep{2011PASJ...63..387E}. Figure \ref{fig1:suzaku_magnetar_nuFnu} shows $\nu F_{\nu}$ spectra of four magnetars. The HXC has an extremely hard photon index, $\Gamma_{\rm h} \sim 1$, almost the hardest among known X-ray sources, and extends up to $\sim$100 keV. Although a settled theoretical account has not yet been reached (e.g., \citealt{2005ApJ...634..565T, 2007Ap&SS.308..109B}), its near-absence in other types of X-ray sources suggests its connection to the strong magnetic fields of magnetars. It is hence imperative to examine their broad-band spectral properties including both the SXC and the HXC to understand these enigmatic objects.
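As an order-of-magnitude check on the picture of the magnetic field acting as the energy reservoir, the sketch below compares the magnetic energy stored in a magnetar with the energy radiated over its characteristic lifetime. The field strength, radius, luminosity, and lifetime are illustrative assumptions, not values derived in this White Paper.
\begin{verbatim}
import numpy as np

B   = 1.0e15            # surface field [G]                     (assumed)
R   = 1.0e6             # NS radius [cm]                        (assumed)
L_x = 1.0e35            # persistent X-ray luminosity [erg/s]   (assumed)
tau = 1.0e4 * 3.15e7    # lifetime of ~10 kyr, in seconds       (assumed)

# Field energy assuming a strength B throughout the stellar volume (rough estimate).
E_mag = (B**2 / (8.0 * np.pi)) * (4.0 / 3.0) * np.pi * R**3
E_rad = L_x * tau       # energy radiated over the lifetime

print(f"stored magnetic energy ~ {E_mag:.1e} erg")   # ~1.7e47 erg
print(f"radiated energy        ~ {E_rad:.1e} erg")   # ~3e46 erg
\end{verbatim}
For these numbers the stored magnetic energy comfortably exceeds the radiated energy, which is the basic consistency requirement of the magnetar hypothesis.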
\textsl{Suzaku} has performed broad-band (0.8--70 keV) observations of $\sim$9 magnetar sources, providing some evidence that the two components in magnetar spectra (i.e.\ the SXC and HXC described above) change systematically with magnetic field and characteristic age \citep{Enoto2010ApJ...722L.162E}. In particular, the HXC becomes weaker relative to the SXC, but harder, for older objects. Although SGRs and AXPs first entered astrophysics as unrelated objects (the former class was discovered through its burst activity, while the latter was recognized as unusually bright X-ray pulsars), the present correlation implies a unification of SGRs and AXPs onto one evolutionary path. Recent exciting discoveries of X-ray outbursts from magnetars also provide us with rare opportunities to search for the HXC during active states. Following the {\it Swift} discoveries of magnetar outbursts, \textsl{Suzaku} and \textsl{INTEGRAL} follow-up observations successfully detected the enhanced HXC and its gradual decay coinciding with the decay of the SXC: e.g., SGR~0501+4516 \citep{2009MNRAS.396.2419R, 2010PASJ...62..475E} and 1E~1547.0-5408 \citep{2010PASJ...62..475E, 2012ApJ...748..133K, 2012PASJ_Iwahashi}. Although the number of HXC-confirmed magnetars is gradually increasing, we still do not know whether all magnetars actually exhibit the HXC in all states; e.g., the HXC has not yet been confirmed for some persistently emitting sources, such as 1E~2259+586 and 1E~1048.1--5937. In the \textsl{ASTRO-H} era, our next step is to investigate the broadband spectra of transient magnetars in their outbursts. In particular, to bring our discoveries during the quiescent states into the next stage, our main strategy will be to perform prompt observations of the HXC just after the onset of an outburst. These questions on a unified understanding of the magnetar evolution will be discussed in detail in \S3.2. Another distinctive feature is sporadic burst activity (short bursts, intermediate flares, and giant flares), usually coinciding with X-ray outbursts. The spectra of SGR bursts detected with {\it HETE}-2 and {\it Suzaku} were described well by two blackbody components \citep{2004ApJ...616.1148O, Nakagawa2007PASJ...59..653N}. The blackbody luminosity can greatly exceed the normal Eddington limit, probably due to suppression of the Thomson scattering. Thus, the strong magnetic field significantly affects the familiar blackbody radiation. Also, the blackbody radius sometimes increases up to 30 km during bursts, suggesting that the photosphere expands to fill the magnetosphere. At the moment, we do not know why two blackbody components appear. Do they represent the two photon polarizations (O- and X-modes)\footnote{The two photon polarization modes whose electric vectors are parallel and perpendicular to the magnetic field, respectively.} which acquire different ``photospheres'' when $B>B_{\rm QED}$? More fundamentally, is the two-blackbody fit physically meaningful or mimicking a more complex process, e.g., resonant electron cyclotron scattering? In fact, the persistent emission from some magnetars prefers blackbody $+$ power-law modeling to the two-blackbody fit, requiring further investigation (\S3.5).
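The blackbody radii quoted above follow from the Stefan--Boltzmann relation $L=4\pi R^2\sigma T^4$; the sketch below shows the arithmetic for an assumed, purely illustrative burst luminosity and temperature.
\begin{verbatim}
import numpy as np

sigma_sb = 5.6704e-5     # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
keV_in_K = 1.1605e7      # 1 keV expressed in Kelvin

L  = 1.0e40              # burst blackbody luminosity [erg/s]  (assumed)
kT = 5.0                 # blackbody temperature [keV]         (assumed)

T = kT * keV_in_K
R = np.sqrt(L / (4.0 * np.pi * sigma_sb * T**4))   # radius from L = 4 pi R^2 sigma T^4
print(f"blackbody radius ~ {R / 1e5:.1f} km")
\end{verbatim}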
\subsection{X-raying the Environment} \subsubsection{Mapping the Stellar Wind}\label{sec:wind} \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\hsize]{astroH_SXS_sim_velaX1_10ks} \end{center} \caption{SXS simulations of three 10\,ks observations of the accreting pulsar and HMXB Vela~X-1 at different orbital phases, based on longer \textsl{Chandra} observations \citep{watanabe2006}. The spectra have been rebinned for clarity; excesses at low energies indicate additional lines. Courtesy M. K\"uhnel (FAU) and N. Hell (FAU \& LLNL).} \label{fig:velax1_sxs} \end{figure} Many accreting pulsars are part of a High Mass X-ray Binary (HMXB). HMXBs possess a strong stellar wind that can be photoionised by the X-ray emission from the compact object. Hundreds of X-ray emission and absorption lines have been observed with the \textsl{Chandra} and \textsl{XMM-Newton} gratings for bright wind-fed NS and black hole accretors like \textbf{Vela~X-1}, \textbf{Cen~X-3}, \textbf{Cyg~X-3}, \textbf{Cyg~X-1}, \textbf{4U\,1700$-$37}, or \textbf{GX\,301$-$2} \citep{sako2002,schulz2002,hanke2009a,fuerst2011}. From modelling those lines as well as from studying the sometimes extreme flux variability -- as, e.g., shown by the supergiant fast X-ray transients (\S\ref{sec:sfxt}) or sources like Vela X-1 \citep{kreykenbohm2008,fuerst2010} -- it is known that these winds are often structured. They can show focused material streams towards the compact object \citep{hanke2009b}, can have material trailing the compact object in an accretion wake \citep{nagase1992,manousakis2012}, and can exhibit instabilities leading to clumping \citep{feldmeier2008,hanke2009b}. In addition, several HMXBs are eclipsing sources (e.g., Vela~X-1, Cen~X-3, 4U\,1700$-$37). Observing the ingress or egress can in principle allow for a detailed look at the atmosphere of the companion star, e.g., its density and ionisation structure resulting from the interaction with the compact object \citep{wojdowski2003,naik2011}. Modelling lines and absorption edges from photoionised plasmas, ideally fitting series from the same ion together, has already provided important constraints, e.g., on predominant ionisation stages, ion abundances, and equivalent widths for different sources and orbital phases. However, this is just the beginning of what will be possible using the \textsl{ASTRO-H} SXS instrument. One pre-SXS limitation is that wind properties have not yet been spatially resolved; this requires high-time-resolution line mapping, e.g., through eclipse egress/ingress or of the photoionised accretion wake. The comparatively small effective area of grating spectrometers cannot provide sufficient statistics for such a study, contrary to the SXS (see below). Such intriguing studies as resolving individual clumps will also become feasible for the first time. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\hsize]{norm_fit_lin_5ks} \end{center} \caption{Simulated evolution of the line flux with orbital phase during egress for three prominent lines of Vela~X-1. A 5\,ks gliding window was applied to 1\,ks simulations. The simulated line flux was linearly interpolated between \textsl{Chandra} observations at phase 0 and 0.25. The simulated absorbed continuum has been subtracted ($N_\textrm{H}$ was assumed to be modulated with phase according to a spherical wind model). The wavy structures are a numerical artifact. Courtesy N.
Hell (FAU \& LLNL).} \label{fig:velax1_egress} \end{figure} \noindent\textbf{Line spectra of exemplary accreting pulsars} --- \textbf{Vela~X-1:} For a description of system properties see \S\ref{sec:crsf}. Figure~\ref{fig:velax1_sxs} shows the SXS view of lines from many highly ionised ions as well as their strong variability over the $\sim$9\,d orbit. The X-ray continuum spectrum is highly absorbed, especially near phase 0.5 (accretion wake), but the lines still have high equivalent widths, i.e., they are not fully absorbed by this material. SXS observations during the line-rich (near-)eclipse as well as around phase 0.5 can be expected to maximize the scientific return. Figure~\ref{fig:velax1_egress} (egress) and the inset in Figure~\ref{fig:velax1_sxs} (wake) show that SXS observations will allow us to determine the evolution of line parameters with an unprecedented time resolution of 1--5\,ks over a broad energy range. --- \textbf{Cen X-3}: This system consists of a NS spinning at a period of $\sim$4.8\,s in an $\sim$2.1\,d orbit with an O-type supergiant star \citep[see, e.g., references in][]{suchy2008}. All the phenomena described above -- from eclipse to eclipse -- could thus potentially be covered by one SXS observation. Near-neutral, He-like, and H-like iron lines have been observed by \textsl{Chandra} and \textsl{Suzaku} \citep{iaria2005,naik2011}. Outside of eclipse the X-ray spectrum is dominated by sometimes highly variable continuum emission, exhibiting very weak absorption lines mostly from H-like ions \citep{sako2002}. During eclipse, however, the spectrum is dominated by line emission \citep{wojdowski2003}. --- \textbf{GX\,301$-$2}: For a description of system properties see \S\ref{sec:crsf}. That section also presents an SXS simulation of the continuum and line spectrum expected during the bright pre-periastron flare that is a regular feature of the 41.5\,d orbital flux variation; see Figure~\ref{fig:gx301}. GX~301$-$2 is characterised by several (near-neutral) fluorescent emission lines, a Compton shoulder of the iron line, and a highly absorbed ionising continuum \citep{fuerst2011,watanabe2003}. --- \textbf{CRSF:} Note that all three of these accreting pulsars are also known cyclotron line (CRSF) sources. Observing their broad band spectra with \textsl{ASTRO-H} would thus also allow us to address the scientific objectives described in \S\ref{sec:crsf}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\hsize]{CygX3_sxs_hxi.pdf} \caption{SXS and HXI simulation of a 2\,ks observation of Cyg~X-3.} \label{fig:cygx3_sxs} \end{center} \end{figure} \noindent\textbf{A unique wind accreter} --- \textbf{Cyg~X-3:} Located at a distance of 8--10\,kpc in the plane of the Galaxy, Cyg~X-3 is composed of a compact object in a 4.8\,hour orbit with a Wolf-Rayet star. The emission from the system is detected from the radio up to the GeV energy band. The soft X-rays are thought to arise from the accretion disc, the hard X-rays from the corona, and the radio from the jet \citep{szostek2008}. This makes interpretation of the data complex, and one needs to use timing information along with spectral information to be able to reconstruct the data in a unique way \citep{koljonen2013}. Cyg~X-3 harbours a large number of emission lines that are especially prominent in high-resolution energy spectra \citep{paerels2000}. However, due to the low sensitivity of the \textsl{RXTE} PCA below 3.5\,keV, \cite{koljonen2013} were able to use only the iron line complex in their analysis.
Despite this limitation, they still detected dips in the variance spectrum in the energy bins 1.8--1.9 and 2.3--2.4\,keV. The dips correspond roughly to the location of some of the strongest emission lines in the X-ray spectra (H-like Si at $\sim$2.0\,keV and H-like S at $\sim$2.5\,keV) and could be interpreted as a reduction in variability indicating line production further out from the compact object in the photoionised stellar wind. However, the dips could also be due to the low flux and wide energy bins. \cite{paerels2000} showed that the iron line complex consists of He-like and H-like iron ions (XXV/XXVI) at 6.7\,keV and 6.9\,keV, respectively, and cold iron K$_\alpha$ at 6.4\,keV. However, these lines blend into a single broad feature in the PCA, which makes it difficult to disentangle the two possible emission regions. Thus a similar exercise with \textsl{ASTRO-H} data will provide a unique chance to study the spectral-timing variability of the source along the orbit, exploiting the full wealth of information coming from the spectral lines together with simultaneous time series over a very broad energy range; this will make it possible to disentangle the different components and see all of them in play simultaneously. The sensitivity of the SXS instrument allows us to get detailed spectra already with a 2\,ks exposure (see Figure \ref{fig:cygx3_sxs}), and thus with a 20\,ks observation \textsl{ASTRO-H} would be able to trace the spectral variability of the source along the whole orbit. \subsubsection{Searching for Signatures of the Alfven Shell}\label{sec:gx1p4} Fluorescent lines of lowly ionized iron have been observed from some accretion-powered X-ray pulsars. Although the emission region is not well understood, candidate sites are the Alfv\'en shell or the inner part of the accretion disk. In these regions the magnetic field and the matter compete fiercely with each other, and information on the dynamical motion of the gas provides new insight into the environment of accreting neutron stars. The dynamics around the Alfv\'en shell is related to the strength of the magnetic moment and the mass of the neutron star. GX\,1+4 is not a High Mass X-ray Binary, but is considered to contain a highly magnetized neutron star with $B\sim$10$^{13}$ G \citep{1988Natur.333..746M}. Its prominent iron $K\alpha$ line has an equivalent width of $\sim$200~eV. Its central energy and the 7.1~keV absorption edge indicate that the iron ions are almost neutral and suggest a fluorescent origin. \begin{figure}[htb] \begin{center} \includegraphics[width=0.55\hsize]{20130126_4ks_10eV.pdf} \caption{Simulation of the SXS spectrum of GX\,1+4. The exposure time is assumed to be 4~ks and the model is an absorbed power law plus three Gaussian lines ($K_{\alpha 1}$, $K_{\alpha 2}$ and $K_{\beta}$). The line widths are all assumed to be 10~eV (standard deviation of a Gaussian function). } \label{fig:GX14_sxs} \end{center} \end{figure} Although the \textsl{Suzaku} XIS data did not show a modulation of the line central energy ($<$10~eV), they show a line broadening of roughly 40--50~eV in standard deviation. These results, the broadening and the small energy modulation, indicate that the line emission arises in a widely extended region of the accretion flow or of the Alfv\'en shell. However, current CCDs, including the XIS, suffer from an ambiguity in the energy response function for bright point sources due to the SCF effect \citep{2012PASJ...64..101T}.
Therefore the SXS will, for the first time, make detailed spectroscopy of spin-phase-sliced spectra possible. The expected Kepler velocity around the Alfv\'en shell is $\sim$1000 km s$^{-1}$. Thus the energy shift corresponding to this velocity is $\sim$200~eV. If the emission lines uniformly come from the Alfv\'en shell, the line width is expected to be $\sim 200/\sqrt{12} \sim 60$~eV in standard deviation. With SXS observations, we can expect to detect a significant broadening and probably a modulation of the width as well as of the central energy. Assuming a 40~ks observation and an analysis of 10 phase-sliced spectra, we simulated a 4~ks exposure using the best-fit parameters obtained by \textsl{Suzaku}. The expected spectrum is shown in Figure~\ref{fig:GX14_sxs}. We assumed a power-law continuum with a photon index of 2.9, suffering photo-electric absorption of 4$\times$10$^{23}$ H atoms cm$^{-2}$, and neutral iron fluorescent lines: $K_{\alpha 1}$, $K_{\alpha 2}$, and $K_{\beta}$ (the latter actually contains several lines, but here we represent them by one Gaussian). The intensity ratio of the three lines is fixed to 100:50:17 and the line width is fixed to 10~eV (standard deviation). The one-sigma error of the line central energy is estimated to be $\sim$0.1~eV, which corresponds to 4.7 km s$^{-1}$. The line width can also be determined with an error of $\sim$0.2~eV, which is again much less than the expected width. Therefore we can precisely determine the average velocity and the velocity dispersion of the gas which emits the fluorescent lines with only a one-day observation. If we can estimate the Kepler velocity at the Alfv\'en radius from the observed modulation of the central energy, then, by assuming some accretion flow geometry, we can discuss the relation among the mass and magnetic moment of the neutron star and the accretion rate. Since the accretion rate can be estimated from the bolometric luminosity, the relation between the mass and the magnetic moment of the neutron star can be deduced. \subsection{Cyclotron Line Sources}\label{sec:crsf} \subsubsection{Background and Previous Studies} About 25 X-ray binaries show broad absorption-line-like features in their hard X-ray spectra due to the interaction of the emerging photons with the electrons trapped along the $10^{12}$\,G magnetic field. The energy of these features provides the only direct measurement of the magnetic field in the X-ray emitting region ($E \simeq 11.6 \times n \times B_{12}$\,keV, where $n$ is the harmonic number and $B_{12}$ is the magnetic field in units of $10^{12}$\,G). The underlying continuum emission is produced by Comptonization of the radiation produced in the optically thick part of the accretion flow or at its footprint on the NS surface and has a smooth power-law shape exponentially attenuated at high energy. Although its phenomenological description is fairly simple, the detailed physics is still a matter of debate \citep[][ and references therein]{becker2012}, as is the strong variability at all time scales, which is thought to be due to the clumpy nature of the plasma as it flows within the magnetosphere. Reproducing the observables on theoretical grounds is still a challenge; success would yield information on basic properties of the neutron star, such as the magnetic field configuration and its interaction with the accreted matter. Timing signatures are of paramount importance to understanding these objects.
Particularly important is the ``pulse profile'', i.e., the X-ray emission of the XRBP folded at the spin period of the NS. Pulse profiles are remarkably stable when averaged over several rotational cycles, reflecting the trapping of plasma along the magnetic field lines. Significant variations in the shape of profiles are observed when the XRBPs undergo transitions between different X-ray luminosity states, as reported in the cases of EXO\,2030$+$375 and V\,0332$+$53 \citep[][ and references therein]{dima2008,tsygankov2010}. At high luminosity, a radiative shock forms in a relatively extended (a few km) accretion column, and the radiation is emitted mainly from its lateral walls in the form of a ``fan beam''. At lower luminosities, the radiative shock is suppressed, matter reaches the base of the column at the free-fall velocity, and X-rays are emitted nearly vertically in a ``pencil beam''. In this state, the height of the column can be significantly reduced, and for very weak sources, it reduces to a hot-spot on the NS surface. Correspondingly, variations of the centroid energy of the cyclotron scattering features with the luminosity have been observed and tentatively interpreted as variations of the accretion column height \citep{staubert2009,klochkov2011,klochkov2012}. This picture still needs to be confirmed by further observations. The majority of XRBPs show significant changes in the spectral energy distribution of the X-ray emission at different pulse phases. This is due to a variety of factors that change during the rotation of the star. In particular, the cross section of the scattering between photons and electrons trapped in a magnetic field strongly depends on the angle between the photon trajectory and the magnetic field lines. The interplay between the extraordinary and ordinary polarisation modes of photons propagating in a strongly magnetised plasma is expected to produce phase-variable linear polarisation at a level as high as 80\% at the cyclotron scattering features and for particular pulse phases \citep[the polarisation fraction is lower for energies out of resonance,][]{meszaros1988p}. Different imprints are expected to appear in the spin-phase resolved emission for different emission patterns: an anti-correlation between polarisation fraction and X-ray intensity in the case of a pencil beam, and a correlation in the case of a fan beam. The linear polarisation fraction can be lowered by $\sim$30\% by relativistic light bending, which causes emission from both magnetic poles to be simultaneously visible. The theoretical predictions are highly uncertain as they depend on the unknown system geometry and emission beaming. Advances in our understanding of these objects have come from \textsl{BeppoSAX} \citep{sax} and \textsl{RXTE} \citep{rxte} observations, which have however a limited spectral resolution, while the superb performance of \textsl{Suzaku} \citep{suzaku} is hampered by the spectral gap between the soft and the hard X-rays. \textsl{NuSTAR} \citep{nustar} has provided a quantum leap in spectral analysis in the 5--60\,keV energy domain, comparable to the expected capabilities of the HXI. \subsubsection{Prospects \& Strategy} \textsl{ASTRO-H} provides a wide spectral coverage, coupled with excellent timing capabilities (the latter for SXS, HXI, and SGD) and unprecedented spectral resolution in the broad energy range relevant for the physics of HMXBs (0.1--100\,keV).
The significant overlap in the bands of the instruments provides a very robust inter-calibration tool, which helps in reducing the systematic uncertainties. It is, therefore, ideally suited to studies of objects emitting over a wide energy range, combining the strengths of previous missions. The limited effective area with respect to past (e.g., \textsl{RXTE}) and current (e.g., \textsl{XMM-Newton}) instruments is not an issue for bright X-ray binaries. \begin{figure}[htb] \begin{center} \includegraphics[width=0.8\hsize]{velax1_spec_gabs} \end{center} \caption{Simulated SXS and HXI spectrum for a 100\,ks exposure of Vela~X-1 using the continuum model and two CRSFs at $\sim$25\,keV and $\sim$55\,keV from \cite{indiani2013}. Middle panel: Residuals using Lorentzian cyclotron line profiles, as for the simulation. Bottom panel: Residuals from a model with Gaussian optical-depth line profiles; structure is present around the CRSF fundamental at $\sim$25\,keV.} \label{fig:velax1} \end{figure} Polarisation has not been measurable in X-rays below a few hundred keV so far. The Soft Gamma-ray Detector (SGD) on board \textsl{ASTRO-H} can perform this measurement using the Compton kinematics above $\sim$50\,keV. This will open an unprecedented observational window, which will provide stringent constraints on the theoretical models when combined with the pulse-phase- and luminosity-dependent spectral variations of the cyclotron line energy and underlying continuum. This will potentially trim down the enormous parameter space which is currently left almost unconstrained in the theoretical interpretation. In particular, the geometrical configuration of the magnetic field and thus of the emitting regions on the NS is largely unknown, as is the shape of the accretion stream close to the NS surface: matter could fall in a filled or hollow column, or in portions of it \citep{basko1976}. Theoretical models are already challenged by observations, which in turn are still not sufficient to single out a solution due to limited spectral resolution or band-pass limitations. The broad-band data with high spectral and timing resolution provided by \textsl{ASTRO-H} will give an unprecedentedly robust benchmark for theory. Many of the sources that display cyclotron lines or are strong candidates are transient, e.g., \textbf{4U\,0115$+$63}, \textbf{V\,0332$+$53}, \textbf{GRO\,J1008$-$57}, \textbf{GX\,304$-$1}, \textbf{EXO\,2030$+$375}, or \textbf{A\,0535$+$26}. If one of them shows a giant outburst, it will be an excellent target for \textsl{ASTRO-H}, providing the best chance of observing polarisation; see \S\ref{sec:cyc_3}. Bright persistent sources that will allow for detailed time-, luminosity-, and pulse-phase-resolved studies of the cyclotron lines and the X-ray continuum are, e.g., \textbf{Vela~X-1}, \textbf{GX\,301$-$2}, \textbf{Cen~X-3}, \textbf{Her~X-1}, or \textbf{4U 1626$-$67}. In the following we present simulations of \textsl{ASTRO-H} observations for the persistent cyclotron line sources Vela~X-1 and GX~301$-$2. Vela X-1 is characterised by significant orbital-phase-dependent variability (\S\ref{sec:wind}) as well as the irregular occurrence of strong flares and off-states \citep{fuerst2010}. GX~301$-$2 presents a marked pre-periastron flare and heavy intrinsic absorption in the stellar wind, thought to be due to an accretion stream preceding the NS \citep[][and references therein]{fuerst2011}.
Both sources also show pronounced emission lines due to neutral and ionised material (Figure~\ref{fig:velax1_sxs} and Figure~\ref{fig:gx301}). \subsubsection{Targets \& Feasibility} \begin{figure}[htb] \begin{center} \includegraphics[width=80mm]{gx301-2-hori} \caption{a) Simulated SXS and HXI spectra for a 50\,ks exposure on GX 301$-$2 based on the analyses by \cite{fuerst2011} and \cite{suchy2012}. Emission lines are indicated. b) Residuals from a best fit. c) Residuals from a model excluding the cyclotron absorption feature at $\sim$30\,keV. Courtesy M. K\"uhnel (FAU) and N. Hell (FAU \& LLNL).} \label{fig:gx301} \end{center} \end{figure} \textbf{Vela X-1} (4U~0900$-$40) is an eclipsing and persistently active system consisting of a massive (23\,$M_\odot$; 34\,$R_\odot$) B0.5\,Ib supergiant and a neutron star. It has an orbital period of 8.964\,days and shows only slight eccentricity ($e\sim$0.1). The neutron star is deeply embedded in the strong stellar wind of the companion; the typical X-ray luminosity is $\sim4\times10^{36}$\,erg/s. It exhibits pulsations with a period of 283\,s. Cyclotron lines appear at $\sim$25 and 55\,keV \citep{labarbera2003,kreykenbohm2002}. \cite{indiani2013} analysed a 100\,ks \textsl{Suzaku} observation and found an interesting dip-feature linked to enhanced absorption from a partial covering component at a particular pulse phase. This will be better constrained by the SXS. A 100\,ks simulation with SXS and HXI shows that \textsl{ASTRO-H} will provide an improved determination of the energy and width of the cyclotron scattering features in comparison to existing datasets (Figure~\ref{fig:velax1}; see \S\ref{sec:wind} for a simulation including the emission lines from the stellar wind). With such an observation it would be possible to discriminate between different line profile models for the first time. Relatively strong polarisation could be present in the SGD band due to the harmonic cyclotron scattering feature at $\sim50$\,keV. In Vela X-1 the first harmonic is unusually strong. We have verified that in 50\,ks, it would be possible to detect a polarisation fraction of 35\%. \textbf{GX 301$-$2} consists of an accreting NS with a spin period of $\sim$685\,s, fed by the stellar wind of the B-type emission-line companion Wray~977. \cite{doroshenko2010a} determined an orbital period of 41.482$\pm$0.001\,days. In the SXS band, GX~301$-$2 is characterised by several fluorescent emission lines, rising above the highly absorbed ionising continuum \citep{fuerst2011}. A Compton shoulder of the iron line is visible both in the \textsl{XMM-Newton} CCD spectrum and in the high-resolution grating spectrum of \textsl{Chandra} \citep{watanabe2003} and could be studied in detail by the SXS. The cyclotron absorption feature is located at about 30\,keV. We have simulated a 50\,ks observation based on existing analyses \citep[Figure~\ref{fig:gx301};][]{fuerst2011,suchy2012} and verified that the emission lines are detected by the SXS with high significance, allowing the observer to fully characterise the wind environment, while the HXI is able to constrain the shape of the cyclotron absorption line with high accuracy.
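For orientation, the sketch below inverts the cyclotron relation quoted in \S\ref{sec:binaries_intro}, $E_{\rm cyc}=11.6\,B_{12}\,(1+z)^{-1}$~keV, to turn a measured fundamental line energy into a field estimate. The adopted gravitational redshift and the example energies are illustrative assumptions.
\begin{verbatim}
# Invert E_cyc = 11.6 keV * B12 / (1 + z) to estimate the field
# from a measured fundamental cyclotron line energy.
def b_field_from_crsf(E_cyc_keV, z=0.3):
    """Return B in units of 1e12 G; z ~ 0.3 is an assumed NS surface redshift."""
    return E_cyc_keV * (1.0 + z) / 11.6

# Illustrative fundamental line energies (roughly Vela X-1, GX 301-2, A 0535+26).
for E in (25.0, 30.0, 45.0):
    print(f"E_cyc = {E:4.1f} keV  ->  B ~ {b_field_from_crsf(E):.1f} x 10^12 G")
\end{verbatim}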
\subsubsection{Measuring Polarisation during a Giant Outburst}\label{sec:cyc_3} \begin{figure}[htb] \begin{center} \includegraphics[width=80mm]{A0535+26_polarization_figure9} \caption{Measurable polarisation fraction of the X-ray signal in the 50--100\,keV energy band by the SGD in a 50\,ks exposure time (courtesy Sasano).} \label{fig:A0535} \end{center} \end{figure} Giant outbursts of Be/X-ray binaries are thought to be due to an exceptional extension of the equatorial disc of the donor star providing material to be accreted onto the NS through a viscous accretion disc \cite[see][for a review]{reig2011}. Outbursts are not predictable, although a correlation is observed with the strength of the H$\alpha$ emission of the system. In 4U\,0115$+$63, these episodes happen every few years and appear to be linked to a quasi-periodic perturbation of the equatorial disc of the Be donor \citep{negueruela2001b}. Longer periods of quiescence followed by repeated outbursts have been observed for A\,0535$+$26, while EXO\,2030$+$375 shows periodic outbursts at each periastron passage during which the source does not exceed $\sim$250\,mCrab. The spectacular source V\,0332$+$53 has been observed in outburst only once with modern facilities. GRO\,J1008$-$57 has recently exhibited a giant outburst \citep{mathias2012}. All these objects are potential targets for \textsl{ASTRO-H} in case they go into outburst. Triggers can be provided by \textsl{Swift}/BAT, \textsl{INTEGRAL}, or \textsl{MAXI}, and the X-ray outbursts usually last for a few weeks. \begin{wraptable}{}{80mm} \vspace{-5mm} \caption{ Estimated minimum detectable polarisation fraction in the 50--100\,keV energy range for bright Be/X-ray binaries.} \begin{center} \vspace{-2mm} \begin{tabular}{cccc} \hline \hline \scriptsize Exposure (ks) & \scriptsize A\,0535$+$26 & \scriptsize GX\,304$-$1 & \scriptsize EXO\,2030$+$375 \\ \hline 25 & 7\% & 23\% & 8\% \\ 50 & 5\% & 16\% & 5\% \\ 100 & 3\% & 15\% & 4\% \\ \hline \vspace{-13mm} \end{tabular} \end{center} \label{mdp_table} \end{wraptable} Observations during these episodes would provide unprecedented data sets to study X-ray binaries. We emphasise that the SGD can measure X-ray polarisation above $\sim$50\,keV where cyclotron lines appear in a handful of very bright transient sources, for which we have simulated the feasibility of detection in different observing times (see Table~\ref{mdp_table}). For the X-ray binary A\,0535$+$26 ($E_\mathrm{cyc}\simeq45$\,keV), at the maximum of its powerful outbursts (0.9\,Crab above 50\,keV) and in 25\,ks of observation, it is possible to measure a polarisation fraction as low as 7\%. In Figure~\ref{fig:A0535}, we show how this measurement would be possible throughout the outburst, helping us to constrain the details of the emission mechanism at different luminosity levels. In systems such as EXO\,2030$+$375, in which the detection of a scattering feature is controversial \citep[][and references therein]{wilson2008,klochkov2007}, it would be feasible to detect a polarisation fraction of 20\% during the regular periastron passages, while 8\% would be reachable if the source undergoes a giant outburst. This would constitute a direct probe of the magnetic field intensity even in the absence of a clear spectral signature, and would open a new window to investigate why the scattering features are not a ubiquitous phenomenon in magnetised pulsars.
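The exposure dependence of the values in Table~\ref{mdp_table} largely follows the photon-statistics scaling ${\rm MDP}\propto 1/\sqrt{T}$ expected for a source-dominated measurement; the sketch below, anchored only to the 25\,ks entries of the table, illustrates this scaling (the actual table values deviate where background and systematics become important).
\begin{verbatim}
import numpy as np

# Minimum detectable polarisation scales roughly as 1/sqrt(exposure)
# when the source counts dominate.  25 ks values taken from the table.
mdp_25ks = {"A0535+26": 7.0, "GX304-1": 23.0, "EXO2030+375": 8.0}   # in percent

for T in (25.0, 50.0, 100.0):                         # exposure [ks]
    scaled = {s: m * np.sqrt(25.0 / T) for s, m in mdp_25ks.items()}
    row = ", ".join(f"{s}: {v:.0f}%" for s, v in scaled.items())
    print(f"{T:5.0f} ks -> {row}")
\end{verbatim}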
The polarisation angle is strongly dependent on the $B$-field orientation with respect to the line of sight, and averaging over a phase interval might lower the polarisation signal. We argue, based on a rough comparison of theoretical computations \citep{meszaros1988p}, that it could be sufficient to divide the pulse profile into fewer than ten phase bins to measure a polarisation fraction of a few tens of percent in most bins; it would thus be sufficient to invest a reasonable amount of observing time (100--200\,ks) on a bright X-ray binary to achieve such a breakthrough result. \subsection{Super-giant Fast X-ray Transients (SFXTs)}\label{sec:sfxt} \subsubsection{Background and Previous Studies} In addition to the classical supergiant and Be/X-ray binaries, the \textsl{INTEGRAL} satellite has provided evidence for a third class of X-ray binaries, the supergiant fast X-ray transients (SFXTs) \citep{smith2004,sguera2005,negueruela2006}. These objects have an OB supergiant donor star but, at odds with the persistent systems, they show short, hours-long periods of intense X-ray activity (flares, $L_X\sim10^{36-37}$\,erg/s), often grouped in day-long outbursts, and extended quiescent states ($\sim10^{32}$\,erg/s), with luminosity swings of up to $10^5$ \citep[see][for an extensive summary]{romano2014}. An open debate exists on the nature of the accreting object and the characteristics of the stellar wind from which matter is funnelled: clumpy winds are thought to cause the very variable accretion rate \citep{zand2005,walter2007,rampy2009}, but some gating mechanism is probably at work to produce the very low duty cycle of these objects compared to the classical systems \citep{oskinova2007,grebenev2007,bozzo2008}. The spectral characteristics of the objects and the detection of pulsations \citep[from 20 to 1200\,s, references in][]{romano2014} in five out of twelve confirmed systems strongly indicate that the compact object is a neutron star. It is therefore possible that the gating barrier is provided by an ultra-strong magnetic field ($B \gg 10^{12}$\,G), which causes matter to accumulate or be expelled at the magnetospheric boundary. Evidence of flares linked to ingestion of clumps has been reported in \textsl{XMM-Newton} and \textsl{Suzaku} observations \citep{rampy2009,bodaghee2011,bozzo2011}, while the typical flare rise time ($\sim 10$ min) is consistent with the free-fall time from the Alfv\'en radius associated with a magnetic field $B\sim10^{13}$\,G. \subsubsection{\textsl{ASTRO-H} prospects for SFXTs} \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\hsize]{4u0114p65_nFn_spectra.pdf} \end{center} \caption{Top: Simulation of a 100\,ks \textsl{ASTRO-H} observation of the long-period pulsar 4U 0114+65 with a CRSF at 150 keV. Bottom: Residuals of the simulation against a model with no CRSF. The residuals clearly deviate from the model at $\sim$150 keV, and the CRSF would be readily detectable with the SGD.} \label{fig:4u0114} \end{figure} \textsl{ASTRO-H} will provide unprecedented tools to study the very nature of these objects. The HXI can provide good sensitivity to the presence of a cyclotron scattering absorption feature in the hard X-ray domain, with a possible extension using the SGD at higher energy for very bright events. The SXS can provide a very useful diagnostic on the presence of the iron emission line, its ionisation state, and the proper motion of the clump impacting on the NS.
The absorption can be constrained by the SXI and SXS, similarly to the \textsl{Suzaku} achievements \citep{bodaghee2011,rampy2009}. Such objects, if confirmed, may be considered as aged magnetars in binary systems, and will provide clues to the formation and evolution of magnetars \citep{Enoto2010ApJ...722L.162E}, as well as to the origin of the NS magnetism \citep{makishima1999}. Other candidates to search for cyclotron scattering features are the long-period pulsars (LPPs; e.g., 4U\,0114$+$65 and 4U\,2206$+$54). Their spectra extend to 100 keV without an appreciable cut-off, unlike those of ordinary X-ray pulsars, which show a steep cutoff around 20--40\,keV and a CRSF at a higher energy. We thus expect these LPPs to exhibit CRSFs at energies above $\sim 100$\,keV \citep{makishima1999}, where the SGD will for the first time provide sufficient sensitivity. \subsection{Gamma-ray Loud Binaries}\label{sec:gamma} \subsubsection{Introduction} Gamma-ray-loud binary systems (GRLBs) are X-ray binaries which emit very-high-energy (VHE) $\gamma$-rays. Four such systems, \psrb, LS~5039, LSI~+61\deg~303, and HESS~J0632+057, have been firmly detected as persistent or regularly variable TeV $\gamma$-ray\ emitters \citep{aharon05,aharon06-ls,albert06-lsi, aharon06-innerGalaxy}. Observations with the {\it Fermi}/LAT telescope have helped to reveal several more binaries emitting at high energies. Among these sources are the well-known microquasar Cyg~X-3, the symbiotic binary V~407~Cygni, the colliding-wind binary $\eta$~Carinae, and the newly discovered binary system 1FGL~J1018.6$-$5856, but the number of known GRLBs is still very limited. The source of the high-energy activity of GRLBs is uncertain. It can be either accretion onto the compact object or dissipation of its rotational energy. It is commonly assumed that the $\gamma$-ray\ emission is produced as a result of the interaction of the relativistic outflow from the compact object with the non-relativistic wind and radiation field of the massive companion star. Neither the nature of the compact object nor the geometry and physical properties of the relativistic wind from this compact object are known for most of the GRLBs. The only exception is the \psrb\ system, in which the compact object is known to be a young rotation-powered pulsar which produces a relativistic pulsar wind. In \cite{bednarek09} it was proposed that accreting magnetars in massive binaries can also generate TeV gamma-rays. In the inner magnetosphere of a magnetar the magnetic pressure can balance the gravitational pressure of the accreting matter, creating a very turbulent, magnetised transition region. This region provides good conditions for acceleration of electrons to relativistic energies. These relativistic electrons lose energy through the synchrotron process and through inverse Compton scattering of the radiation from the nearby massive stellar companion, producing high-energy radiation from X-rays up to TeV $\gamma$-rays. Recently the Burst Alert Telescope (BAT) on board \textit{Swift}\ detected a short burst from the direction of the TeV binary LSI~+61\deg~303. The burst is visible in the 15--50 keV energy range, while no significant excess is observed above 50 keV. The total duration of the event is about 0.3 s. Previously, such short flares had also been observed from the direction of LSI~+61\deg~303\ by \textit{RXTE}\ \citep{smith09, li11}.
In \cite{torres12} it was noted that the properties of the burst observed by \textit{Swift}/BAT (a very short duration and a thermal spectrum) are typical of magnetars. In their work the authors propose that, due to the highly eccentric orbit, the system is subject to flip-flop behaviour, from a rotationally powered regime at apastron to a propeller regime at periastron. In this case, at apastron an inter-wind shock leads to the normally observed LSI~+61\deg~303\ behaviour, while around periastron the propeller is expected to accelerate particles efficiently only to sub-TeV energies, in agreement with the observations. More observations are needed to find out the true nature of these very interesting systems. Below we discuss how \textsl{ASTRO-H} observations could help to solve some unresolved problems. We start with the case of \psrb, the only system in which we are sure about the nature of the compact object, and which we can use as a reference case for the other systems. Study of this system could provide a clue to the energy of the relativistic particles of the pulsar wind and give us a chance to study interactions of the winds in the highly variable environment. After that we discuss LSI~+61\deg~303\ in more detail and show the importance of measuring its spectral variability in a broad energy range (1--100 keV) along the orbit. \subsubsection{PSR B1259-63} In \psrb\ a 48 ms radio pulsar is in a highly eccentric 3.4 year orbit with the Be star LS 2883. This system is known to be highly variable on an orbital time scale in the radio (\citealt{johnston05} and references therein), X-ray (\citealt{chernyakova09} and references therein), and TeV \citep{aharon05} energy ranges. The orbital multi-wavelength variability pattern is determined by the details of the interaction of the relativistic pulsar wind with the strongly anisotropic wind of the companion Be star, composed of a fast, rarefied polar wind and a slow, dense equatorial decretion disk. The disk of the Be star in the \psrb\ system is believed to be tilted with respect to the orbital plane. The line of intersection of the disk plane and the orbital plane is oriented at $\sim 90^\circ$ with respect to the major axis of the binary orbit \citep{wang04} and the pulsar passes through the disk twice per orbit. Despite the intensive observational campaigns during the last three periastron passages (2004, 2007 and 2010) it has still not been possible to conclude whether the observed X-ray emission has an inverse Compton or a synchrotron origin \citep{chernyakova09}. The answer to this question is very important for our understanding of the composition of the pulsar wind, as the Lorentz factor of the relativistic electrons varies from about 10 to 10$^6$ between these two models. Study of the variability of the broad-band (1--100 keV) spectrum of the source as the pulsar interacts with the disk before and after periastron could give the missing clues to answer this question. \textsl{Suzaku} observations of the 2007 periastron passage show a break in the source spectrum as the pulsar was crossing the disk before periastron \citep{uchiyama09}; see the left panel of Figure \ref{psrb_uchiyama}. However, the presence of the nearby X-ray pulsar IGR J13020-6359, located only 10 arcminutes from \psrb, makes the results of the \textsl{Suzaku}/HXD very model dependent, and independent measurements in this energy range with an imaging instrument would be a huge benefit.
The right panel of Figure~\ref{psrb_uchiyama} shows a simulation of a 10 ks observation of \psrb\ with \textsl{ASTRO-H}. In this simulation we model the broken power-law slope as observed with \textsl{Suzaku}, and show that if one tries to fit such a spectrum with a single power law, the high-energy data clearly deviate from the model. \begin{figure}[h] \includegraphics[angle=0,width=0.42\linewidth]{uchiyama_fig7} \hspace{0.4cm} \includegraphics[angle=90,width=0.55\linewidth]{PSRB_1po_astroH} \caption {\textit{left panel:} \textsl{Suzaku} observations of the spectral break in \psrb. Figure taken from \cite{uchiyama09}. \textit{right panel:} Simulation of a 10 ksec \textsl{ASTRO-H}/HXI observation of PSR B1259-63 as it crosses the disk, with the parameters derived from the \textsl{Suzaku} observation. A fit with a single power law leads to a clear deviation of the data from the model.} \label{psrb_uchiyama} \end{figure} \subsubsection{LSI +61 303} The Be star binary LSI~+61\deg~303\ is another GRLB system from which radio, X-ray and very high-energy gamma-ray emission is observed. In LSI~+61\deg~303\ the high-energy particle outflow is directly observed in the radio band, where the angular resolution is sufficient to resolve the source and to detect variations of its morphology on the orbital period time scale. The observed morphological changes indicate a variable-morphology outflow filling a region $\sim 10^2-10^3$ times larger than the binary separation distance. The radio signal could not be used to trace the outflow down to the production site inside the binary orbit, because of the free-free absorption in the dense stellar wind environment \citep{zdz10}. To understand the nature of the outflow carrying the high-energy particles, one has to use complementary high-energy data in the X-ray and/or $\gamma$-ray\ bands. \begin{figure}[h] \begin{center} \includegraphics[angle=0,width=0.4\textwidth]{Hermsen_LSI_l_rot_crop_mod} \includegraphics[angle=0,width=0.55\textwidth]{lsi_1po_40ks_mod} \caption {\textit{left:} Broad-band spectrum of LSI~+61\deg~303. \textsl{INTEGRAL} observations reveal the presence of a possible feature at $\sim$50 keV. Figure taken from the presentation by W. Hermsen and L. Kuiper at the first {\it Fermi} Symposium. \textit{right:} Simulation of a 40 ks \textsl{ASTRO-H} observation of LSI~+61\deg~303.} \label{fig:lsi} \end{center} \end{figure} Two major types of models of the radio-to-X-ray activity of LSI~+61\deg~303\ have been proposed in the literature. Models of the first type assume that the activity of the source is powered by accretion onto the compact object. Alternatively, the activity of the source could be explained by an interaction of a young rotation-powered pulsar with the wind from the companion Be star. If the system is an accreting neutron star or black hole, one expects to find a cut-off power-law spectrum in the hard X-ray band. The cut-off energy is normally at $10-60$~keV for neutron stars and at $\sim 100$~keV for black holes. If the jet and accretion contributions to the X-ray spectrum are comparable, then emission from the accretion disk should at least produce an observable spectral feature (e.g., a bump, a break or a turnover) in the 10-100~keV energy band. So far the hard X-ray spectrum of LSI~+61\deg~303\ has been probed only with \textsl{INTEGRAL}. Unfortunately, the source is rather weak for \textsl{INTEGRAL}, and the only way to measure the spectrum was to sum up the spectra collected over years of observations.
The resulting spectrum did not show a break, but indicated a possible feature at around 50 keV; see the left panel of Figure~\ref{fig:lsi}, presented by W. Hermsen and L. Kuiper at the first {\it Fermi} Symposium. Current \textsl{Suzaku} observations of the source are unable to constrain the spectrum of LSI~+61\deg~303\ above 50 keV. Clearly more sensitive observations are needed to clarify the nature of the source, and \textsl{ASTRO-H} observations can help to solve the issue, see the right panel of Figure \ref{fig:lsi}. In this panel we simulate a power-law spectrum with $N_H=0.5\times 10^{22}\ \mbox{cm}^{-2}$, $\Gamma=1.5$ and $F_{2-10}=1.2 \times 10^{-11} \mbox{erg}\ \mbox{cm}^{-2}\ \mbox{s}^{-1}$. \subsection{What makes magnetars and how do they evolve? Probing their supernova progenitors, energetics and evolution} \begin{figure}[h] \centering \includegraphics[scale=0.35]{kes73_ctb109} \caption{Bright magnetar SNRs: (left) The young SNR~Kes~73 observed with \textit{Chandra} with \textsl{ASTRO-H}'s SXS and SXI fields of view overlaid. The central source is the AXP 1E~1841--045. (right) The evolved SNR~CTB~109 with \textit{XMM-Newton} \citep{2004ApJ...617..322S} with \textsl{ASTRO-H}'s SXS and SXI fields of view overlaid. The central source is the AXP 1E~2259+586. The dim western half of this SNR is covered by a giant molecular cloud. } \label{snr_fov} \end{figure} The different manifestations of neutron star classes (see \S1 and Figure~1) can potentially be linked to the different environments and progenitors of the supernova explosions creating these compact objects. While there is currently no general consensus on the progenitors of highly magnetized neutron stars (including the high-magnetic-field radio pulsars and magnetars), there is accumulating evidence from multi-wavelength studies that they originate from very massive progenitors, i.e. with mass $\gtrsim$20 solar masses, and expand into a relatively low-density medium; see \cite{2013IAUS..291..480S} for a summary. X-ray spectroscopy of their associated supernova remnants (SNRs) is a particularly powerful tool for inferring their progenitor mass through the detection of the X-ray emitting ejecta and comparison to nucleosynthesis models (see the \textit{Young Supernova Remnants} White Paper (WP\#7) for a more detailed discussion of this). If indeed magnetars originate from very massive progenitors, as can be diagnosed with X-ray spectroscopic studies of their associated SNRs, then this will lead to the very interesting implication that very massive stars do \textit{not} necessarily form black holes. To date, only a handful of SNRs are associated with magnetars, while the majority of the $\sim$330 known Galactic SNRs are associated with rotation-powered neutron stars or other subclasses of neutron stars \citep{2013IAUS..291..251S, 2012AdSpR..49.1313F}. While this sparse SNR-magnetar association by itself holds some clues to the nature of these objects, studying the energetics and properties of their associated SNR emission further addresses the magnetar nature of these sources. In particular, one popular model for magnetars is that they are formed from proto-neutron stars with initial spin periods of only $\sim$0.6--3~ms. The combination of convection and fast rotation helps build up the magnetic field to ultra-high values during the first tens of seconds following neutron star birth \citep{1992ApJ...392L...9D, 1996AIPC..366..111D}.
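As an order-of-magnitude check of the energetics implied by such birth periods (the fiducial moment of inertia $I\approx10^{45}$ g cm$^{2}$ is an illustrative value assumed only for this estimate), the initial rotational energy is
\[
E_{\rm rot}=\frac{1}{2}I\Omega^{2}=\frac{2\pi^{2}I}{P^{2}}
\approx2\times10^{52}\left(\frac{P}{1\ {\rm ms}}\right)^{-2}\ {\rm erg},
\]
i.e. $\sim2\times10^{51}$--$5\times10^{52}$ erg over the quoted 0.6--3 ms range, comparable to or well above the canonical $10^{51}$ erg of a supernova explosion.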
Such initial fast spin periods would imply a larger-than-typical initial rotational energy of the neutron star, which in turn would lead to a more energetic SNR. Past and current X-ray spectroscopic studies, however, indicate that the SNRs associated with highly magnetized neutron stars appear to have typical supernova kinetic energies in the 10$^{50}$--10$^{51}$~erg range \citep{2006MNRAS.370L..14V,2008AdSpR..41..503V, 2013IAUS..291..480S}, posing a challenge to the proto-neutron star model for magnetars. Furthermore, more recent estimates for the explosion energies in two SNRs hosting an anomalous X-ray pulsar (Kes~73/AXP 1E~1841--045) and a high magnetic field radio pulsar (G292.2--0.5/PSR J1119--6127) \citep{2014ApJ...781...41K,2012ApJ...754...96K} confirm these ``typical'' explosion energies. These studies also propose very massive progenitors ($\gtrsim$20 solar masses) based on the X-ray spectroscopic studies of these SNRs with \textit{Chandra} and \textit{XMM-Newton}. The past and current X-ray studies with \textit{Chandra}, \textit{XMM-Newton}, and \textsl{Suzaku} have, however, been severely limited by poor statistics and the lack of spectral resolution and sensitivity needed for a more appropriate study and modelling of the X-ray emitting supernova ejecta and surrounding medium. Such an analysis can be best performed with the SXS on-board the \textsl{ASTRO-H} satellite. Together with its coverage in the hard X-ray band thanks to the Hard X-ray Imager and the Soft Gamma-ray Detector, \textsl{ASTRO-H} will provide a unique window to study \textit{simultaneously} the thermal plasma associated with the SNR, and the associated compact objects with spectra characterized by both a soft and a hard X-ray component and which occasionally emit outbursts that should impact their ``beambags'', i.e. their associated SNRs. \begin{figure}[t] \centering \includegraphics[angle=0,width=1.0\textwidth]{Kes73_WP} \caption{Simulated spectra for the SNR Kes~73 and its associated AXP 1E1841--045 shown using a \textit{Chandra} observation (Kumar et al., 2014) with \textsl{ASTRO-H}'s SXS field of view overlaid. (Left) A 100 ksec SXS simulation of the SNR Kes~73 (with the AXP's spectrum excluded) obtained using {\tt simx}, shown in comparison with the \textit{XMM-Newton} MOS1 spectrum. The SXS spectrum should allow us to resolve the line emission (particularly from the Mg, Si and S complex) and detect for the first time strong line emission from Fe-K. This will be needed to constrain the abundances and thus the mass of the progenitor star. (Right) Shown next, for comparison, is the broadband SXS+HXI+SGD 100~ks simulated spectrum of the AXP 1E~1841--045 associated with the SNR Kes~73. In the SGD band, we assumed three possibilities: no spectral break (red), a roll-over in the SGD band (orange), and a steep cutoff just above the HXI (or \textit{NuSTAR}) energy band. Studying both the SNR and the AXP with \textsl{ASTRO-H} will provide the first broadband view of this system. We note that the AXP's total flux is $\sim$10\% that of the SNR's flux in the SXS band. The AXP's emission will not affect the line diagnosis of the SNR for abundance studies, but will dominate the continuum in the hard X-ray band.} \label{kes73axp} \end{figure} The SNR~Kes~73 hosting the AXP 1E1841-045 represents an ideal magnetar-SNR target for \textsl{ASTRO-H}, given its brightness, size, and the previous detailed X-ray studies of both the SNR and the AXP with CCD-type spectra.
In Figure~\ref{kes73axp}, we show the SXS field of view overlaid on the SNR, with the pointing aimed at covering the brightest part and the bulk of the X-ray emission from the SNR, while also covering the AXP. Figure~\ref{kes73axp} shows a 100~ks SXS simulated spectrum of the diffuse emission from the SNR, fitted with a two-component non-equilibrium ionization model based on the \textit{Chandra} and \textit{XMM-Newton} study \citep{2014ApJ...781...41K}. The CCD spectrum shows evidence for a soft, ejecta-dominated component, and a hot, low-ionization-timescale component, attributed to the shocked ISM or CSM blown by a massive progenitor. The simulated spectrum will provide new constraints on the ejecta abundances, and thus on the mass of the progenitor star, through a comparison to nucleosynthesis model yields such as those of \cite{1995ApJS..101..181W} and \cite{2006NuPhA.777..424N}. As well, the Fe-K line complex will provide the first opportunity to diagnose the hot plasma conditions and confirm or refute the stellar wind blown bubble scenario. The right panel of Figure~\ref{kes73axp} shows the simulated broadband (SXS+HXI+SGD) spectrum of the central AXP~1E~1841--045, illustrating the additional magnetar science that can be done with the same observation. While the line emission from the SNR dominates in the SXS band, the AXP's spectrum dominates in the hard band (HXI+SGD). In particular, the spectrum will shed light on a) the magnetic field of the AXP through a confirmation of the $\sim$30~keV cyclotron (emission or absorption) feature detected recently with \textit{NuSTAR} \citep{An2013ApJ...779..163A}, and b) the nature of the hard X-ray emission through pinning down the spectral cutoff with the HXI+SGD combined spectrum. If the 30~keV feature is attributed to a cyclotron line, the magnetic field is estimated to be 3$\times$10$^{12}$~G (electron cyclotron) or 5$\times$10$^{15}$~G (proton cyclotron). Furthermore, so far we have not yet distinguished between the competing theoretical models for the origin of the magnetar's hard X-ray emission. For example, the e$^+$/e$^-$ outflow model \citep{2013ApJ...777..114B} predicts the $\nu$--$F_{\nu}$ peak at $\sim$7~MeV \citep{An2013ApJ...779..163A}, while the fall-back disk model predicts it at $\sim$100--200 keV \citep{2014A&A...562A..62K}. This will be an advantage over \textit{NuSTAR} due to the sensitivity of \textsl{ASTRO-H} above 70~keV. This science will be further discussed in Sections 3.2 and 3.3. CTB~109 is another relatively bright, but more evolved, SNR associated with a magnetar, the AXP 1E2259+586 (see Figure~\ref{snr_fov}). The remnant appears as a half-shell due to an obscuring molecular cloud on the western side. A recent study with \textit{Chandra} showed the presence of shock-heated ejecta with enhanced Si and Fe in and around the lobe adjacent to the AXP. The lobe is believed to be created by the interaction of the SNR shock wave and the supernova ejecta with a dense and inhomogeneous medium in the SNR environment \citep{2013A&A...552A..45S}. In Figure~\ref{ctb109_ne_line}, we show a simulated 100~ksec SXS spectrum based on a \textsl{Suzaku} observation, illustrating the wealth of lines that will be resolved with the SXS. \begin{figure}[h] \vspace{-5mm} \centering \includegraphics[width=80mm]{sxswide} \includegraphics[scale=0.27]{sxsNeLine} \caption{ (left) A 100~ks SXS simulated spectrum of the SNR CTB~109 using a two-component absorbed non-equilibrium ionization model based on the \textsl{Suzaku} data.
(right) SXS simulation of the Ne-IX lines based on the \textsl{Suzaku} spectra. Red- and blue-shifted lines and their sum are shown in red, blue, and black colors, respectively. Depending on the pointing position, we can observe both shell components (black) or only the front shell (blue), since the shell on the far-western side is covered by the giant molecular cloud. } \label{ctb109_ne_line} \end{figure} One of the puzzling questions about magnetar SNRs is the discrepancy between the SNR's age (normally estimated from the shell's shock velocity) and the magnetar's age (estimated from the characteristic age of the pulsar, $P/(2\dot{P})$). This discrepancy is particularly pronounced for the SNR CTB~109 (Sedov-estimated age $\sim$13--17~kyr) and its AXP 1E~2259+586 ($\sim$230~kyr), as recently confirmed with \textit{Chandra} \citep{2013A&A...552A..45S} and \textsl{Suzaku} observations (Nakano et al., accepted). The answer to this puzzle lies in understanding how magnetic fields evolve in magnetars and in acquiring accurate measurements of the associated SNR's shock velocity. \begin{figure}[htb] \begin{center} \includegraphics[width=70mm]{age_vs_field} \caption{ Magnetic field of magnetars and other (rotation-powered) pulsars calculated from the pulsar's period and its derivative (Nakano et al., submitted). } \label{magnetic_field_decay} \end{center} \end{figure} Giant flares, short bursts, and bright persistent X-ray emission exceeding the spin-down power are thought to originate from magnetic energy dissipation. However, how the magnetic field of magnetars evolves with time, as the magnetar ages, is still being debated. The magnetic field measured from $P$ and $\dot{P}$ is known to decrease with the characteristic age, $P/(2\dot{P})$, as illustrated in Figure~\ref{magnetic_field_decay}. While large characteristic ages can potentially be explained if one accounts for the magnetic field decay in magnetars \citep{2000ApJ...529L..29C}, the age measurement of associated SNRs provides an independent way of understanding the magnetic field evolution. This can be done through an accurate measurement of the shock velocity, which will also be useful for probing the supernova explosion energy, shedding light on the origin of magnetars. For example, in the Sedov phase of SNR evolution, the explosion energy and age are given by $E \propto R^3 \upsilon^2$ and $\tau_\mathrm{snr} = 2R/(5\upsilon)$, respectively. For the SNR CTB~109, the large molecular cloud on the western side blocks half of the shell on the far side, while the shell is fully visible on the eastern side. As a result, depending on the pointing direction, the SXS is expected to measure blue- and red-shifted lines originating from the shell material expanding towards and away from us. The 100~ks SXS simulation in Figure~\ref{ctb109_ne_line} assumes a velocity of 500~km~s$^{-1}$, inferred from the \textit{Chandra} observation of CTB~109 \citep{2013A&A...552A..45S}. The Doppler shift of the Ne-IX K lines should be detectable, and the simultaneous use of multiple lines in the soft band increases the accuracy of the measurements. For the compact object, we will be able to address its magnetic field strength and probe its hard X-ray emission, as detailed in Sections 3.2 and 3.3. In summary, the young and more evolved SNRs Kes~73 and CTB~109, respectively hosting the magnetars 1E~1841--045 and 1E~2259+586 at their centers, cover all the fundamental questions presented in this white paper (see the next sections for the compact objects' science).
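To spell out the velocity-to-age argument used above (a back-of-the-envelope version; the numbers are purely illustrative): in the Sedov phase $R\propto(Et^{2}/\rho)^{1/5}$, so the shock velocity is $\upsilon=\dot{R}=\frac{2}{5}R/t$ and hence
\[
\tau_{\rm snr}=\frac{2R}{5\upsilon},
\qquad
E\propto\rho R^{3}\upsilon^{2},
\]
while a shell expanding at $\upsilon$ shifts a line of rest-frame energy $E_{0}$ by $\Delta E\simeq E_{0}\,\upsilon/c$. For the Ne-IX complex near 0.92 keV and the $\upsilon=500$ km s$^{-1}$ assumed in the simulation, this amounts to $\simeq$1.5 eV per shell, or about 3 eV between the red- and blue-shifted components, which is why the simultaneous fitting of several soft-band lines is needed to reach the required centroid accuracy with the SXS.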
The same method can also be applied to the other handful of magnetar SNRs. This science is also relevant to WP\#7. \subsection{Can we find direct evidence of the strong field?} \begin{figure}[h] \centering \includegraphics[scale=0.35]{fig_4u0142_proton} \vspace{-0.3cm} \caption{ Simulated possible proton CRSF at $E_{\rm cyc}=$8.1~keV with \textsl{ASTRO-H}/SXS (left) and with \textsl{Suzaku}/XIS (right). Both Monte Carlo simulations employ the same model, assuming a 100~ksec exposure and a Gaussian feature on the continuum observed in 4U~0142+61, with a line width $\sigma_{\rm line}$=40 eV and an equivalent width of 60 eV. } \label{fig1:fig_4u0142_proton.eps} \end{figure} In the strong magnetic field ($B > B_{\rm QED}$) of magnetars, the electron cyclotron resonance scattering feature (CRSF) falls above the MeV range, while the proton CRSF (Eq. \ref{eq:proton_cyclotron}) falls right in the soft X-ray band. The dipole magnetic field derived from $P$ and $\dot{P}$ reflects only the poloidal component; the proton CRSF can provide a direct determination of the total surface $B$-field. Early theoretical predictions suggested that the proton CRSF exhibits an equivalent width of up to many hundreds of eV and a relatively wide absorption width of $\Delta E/E_{pc}\sim 0.05 - 0.2$ \citep{2001MNRAS.327.1081H, 2001ApJ...560..384Z}. Contrary to expectations for the detection of proton cyclotron lines as a direct measurement of the strong $B$-field, there are only a few observational reports of the proton CRSF so far from quiescent magnetar spectra: $\sim$5, 10 keV from 1E~2259+586 \citep{Iwasawa1992PASJ...44....9I}, $\sim$8.1 keV from 1RXS~J1708-4009 \citep{Rea2004NuPhS.132..554R}, and more recently an absorption feature from SGR~0418+5729 \citep{Tiengo2013Natur.500..312T} and a 25--35 keV (absorption or emission) weak feature from 1E~1841-045 in the SNR~Kes~73 \citep{An2013ApJ...779..163A}. Some of those have not yet been confirmed by follow-up observations \citep{2007Ap&SS.308..505R,2011ysc..conf...43M}. In the absence of a significant CRSF in most of the quiescent X-ray spectra of magnetars, it was pointed out that vacuum polarization effectively suppresses the strength of the proton CRSF, making the equivalent width nearly an order of magnitude lower than previously thought \citep{2002ApJ...566..373L, 2003ApJ...583..402O}. In addition, gradients of the $B$-field and effective temperature would suppress these features as well. \textsl{ASTRO-H}'s SXS, combined with its broadband capability with the HXI+SGD, will allow us to test for this and confirm previously reported features. \begin{table}[t] \caption{ Reported absorption-like features in magnetars, with reference to \citep{2011ysc..conf...43M}. \vspace{2mm} } \label{table:proton_cyclotron_observations} \vspace{-0.5cm} \begin{center} \footnotesize \begin{tabular}{cp{2.5cm}cp{5.0cm}l} \hline \hline Object & $E_{\rm pc}$ (keV) & Detector & Note & Ref.
\\ \hline SGR~1806-20 & 5.0, 7.5, 11.2, 17.5 & {\it RXTE}/PCA & in the harder part of a precursor & (1) (2) \\ SGR~1900+14 & 6.4 & {\it RXTE}/PCA & during precursor to the main burst & (3) \\ 1RXS~J1708-4009 & 8.1 & {\it BeppoSAX} & the longest observation (200 ks) during rising phase & (4) (5) (6) \\ 1E~1048.1-5937 & 14 & {\it RXTE}/PCA & emission, in a burst & (7) \\ $\cdots$ & 13 & {\it RXTE}/PCA & emission, at one part of a burst tail & (8) \\ XTE~J1810-197 & 12.6 & {\it RXTE}/PCA & emission in a burst tail & (9) \\ 4U~0142+61 & 4, 8, 14 & {\it RXTE}/PCA & emissions, in the most energetic among a sequence of bursts & (10) \\ 1E~2259+586 & 5, 10 & {\it Ginga}/LAC & during flux increase & (11) \\ 1E~1841--045 & 25--35 & {\it NuSTAR} & phase-resolved spectrum & (12) \\ \hline \end{tabular} \end{center} \vspace{-2mm} \begin{footnotesize} References: (1) \citealt{Ibrahim2002ApJ...574L..51I}, (2) \citealt{Ibrahim2003ApJ...584L..17I}, (3) \citealt{Strohmayer2000ApJ...537L.111S}, (4) \citealt{2003ApJ...586L..65R}, (5) \citealt{Rea2004NuPhS.132..554R}, (6) \citealt{Oosterbroek2004ESASP.552..471O}, (7) \citealt{Gavriil2002Natur.419..142G}, (8) \citealt{Gavriil2006ApJ...641..418G}, (9) \citealt{Woods2005ApJ...629..985W}, (10) \citealt{Gavriil2008AIPC..983..234G}, (11) \citealt{Iwasawa1992PASJ...44....9I}, (12) \citealt{An2013ApJ...779..163A} \end{footnotesize} \end{table}% \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\hsize]{energy_vs_eqwidth} \caption{ Detectable absorption features from the AXP 4U~0142+61. The feasibility of detecting them ($>$3$\sigma$ detection region) in an \textsl{ASTRO-H} SXS 100 ksec observation is evaluated through the F-test probability with and without absorption features on the X-ray continuum of 4U~0142+61. Gaussian absorption features are assumed with a width of $\sigma_{\rm line}$=40~eV. The previously measured energy-dependent 3$\sigma$ upper limits are also shown as arrows, referring to the 46 ksec {\it XMM-Newton} observation in 2004 \citep{2007Ap&SS.308..505R}, assuming a possible absorption width of $\sigma_{\rm line}$=100~eV. The star mark corresponds to Figure~\ref{fig1:fig_4u0142_proton.eps}. } \label{fig1:energy_vs_eqwidth.eps} \end{center} \end{figure} More puzzlingly, some proton CRSFs have been reported from burst spectra, as shown in Table~\ref{table:proton_cyclotron_observations}; e.g., at 5.0, 11.2, and 17.5 keV from SGR~1806$-$20 with {\it RXTE} \citep{Ibrahim2002ApJ...574L..51I,Ibrahim2003ApJ...584L..17I}. Even though the number of detections in bursts is larger than in the persistent emission, these detections are still quite rare compared to the numerous short bursts from magnetars. In summary, the current observations have not yet achieved a consistent interpretation of the proton cyclotron features, requiring much deeper observations by high-resolution instruments. The Soft X-ray Spectrometer (SXS) on board \textsl{ASTRO-H} can provide a higher-sensitivity search for the proton CRSF, even for features with a shallower and/or narrower width than those considered in the previous studies. Since the previous persistent and burst observations have not yet provided a consistent picture of the proton CRSF (e.g., resonance energy, absorption width, equivalent width, and spectral shapes), here we empirically assume a Gaussian absorption feature and simulate the sensitivity to detect such lines. Figure~\ref{fig1:fig_4u0142_proton.eps} shows Monte Carlo simulations of the absorption feature at 8.1 keV from the anomalous X-ray pulsar 4U~0142+61.
As shown in this plot, compared to the XIS on board \textsl{Suzaku}, the high resolution of the SXS would allow the detection of weaker features (smaller equivalent widths). Figure~\ref{fig1:energy_vs_eqwidth.eps} represents an example of the detectability of the proton CRSF on the equivalent width versus energy plane. Compared to the previous {\it XMM-Newton} observations, the SXS can potentially search for features with a factor of 2--3 weaker equivalent width. \subsubsection{On the link between magnetars and the other classes of neutron stars through a direct measurement of the magnetic field} As mentioned in Section 1, the growing diversity of neutron stars includes an intriguing class of compact objects near the centres of core-collapse SNRs, referred to as CCOs (for Central Compact Objects). These objects are typified by the CCO discovered with the first-light \textit{Chandra} observation of the O-rich supernova remnant (SNR) Cas~A. Originally, CCOs were thought to be ``relatives'' of magnetars, primarily based on the resemblance between their X-ray spectra in the 0.5--10 keV band, the relatively slow spin periods (in comparison to the typical rotation-powered pulsars), and the lack of radio emission and pulsar wind nebulae surrounding them. \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\hsize]{puppisa_em1_cyclmodel_1e5k_sxs} \caption{ A 100~ksec SXS simulated spectrum of a CCO using a two-blackbody model plus an emission line, fitted with a two-blackbody model plus two cyclotron lines. The residuals illustrate the difference between proposed models that can be addressed with the SXS. We acknowledge the contribution of Adam Rogers (U. Manitoba) for this figure. } \label{puppisa_emissionversuscyclotronabs} \end{center} \end{figure} Recent dedicated timing observations of a few CCOs led to the suggestion that these are ``anti-magnetars'', i.e. with a magnetic field much smaller than a typical magnetar's field, with inferred $B$$\sim$10$^{10}$--10$^{11}$~Gauss; see \cite{2009ApJ...695L..35G} for the discovery of a 112~ms period in the CCO residing in the SNR Puppis~A. The measured $\dot{P}$ implies a surface dipole magnetic field $B$$<$9.8$\times$10$^{11}$~G, most recently refined to a value of 2.9$\times$10$^{11}$~Gauss \citep{2013ApJ...765...58G}. Evidence is accumulating for other low-$B$ CCOs being anti-magnetars: 1E 1207.4--5209 in PKS~1209--51/52 and CXOU~J185238.6+004020 in the SNR Kes~79. In addition to timing measurements, X-ray spectroscopic studies of the CCO in Cas~A also support the low-$B$ scenario \citep{2009Natur.462...71H}. This study further indicates that the neutron star in Cas~A is covered with a non-magnetized atmosphere of carbon, the product of nuclear burning of H and He. Other, more exotic (quark star) models have, however, been proposed \citep{arXiv:1404.5063}, in light of the recent \textit{NuSTAR} study of the Cas~A SNR \citep{2014Natur.506..339G}. As for magnetars, a direct measurement of the CCOs' magnetic field comes from cyclotron resonance lines, and such low $B$-fields ($\sim$10$^{10}$--10$^{11}$~Gauss) are expected to produce cyclotron features in the soft X-ray band, making the SXS an ideal instrument to study these features. Indeed, cyclotron features have been discovered in a handful of CCOs, including Puppis~A's CCO, RXJ~0822.0--4300, whose spectrum displays a phase-dependent emission feature at 0.7--0.8 keV.
This feature has been modelled either as an emission line of energy $\sim$0.75 keV (hereafter {\tt emission}) or as a cyclotron absorption feature plus a harmonic with an energy of $E_0$$\sim$0.46 keV, hereafter {\tt cyclotron} \citep{2013ApJ...765...58G}. It was not possible to distinguish between these models using \textit{Chandra} and \textit{XMM-Newton} data. In Figure~\ref{puppisa_emissionversuscyclotronabs}, we illustrate the capability of the SXS in differentiating between the {\tt emission} model and the {\tt cyclotron} model using a 100~ksec exposure. This particular CCO was selected based on its location near the centre of a large and not-so-bright SNR (in comparison, for example, to Cas A's CCO, which would be difficult to resolve from the surrounding bright thermal emission from Cas~A). We note however that this simulation does not take into account the presence of the SNR thermal plasma. This is meant as an illustrative example to show the SXS's capability to differentiate between the different models proposed from fitting CCD-type spectra. Other more ``isolated'' targets (including other classes of neutron stars shown in Figure~1) with possible links to magnetars would be less contaminated by the X-ray emission from the SNR plasma. In summary, the SXS will open a new window to probe and understand the spectral features in an emerging and new class of (soft) X-ray emitting neutron stars, and confirm whether they are indeed anti-magnetars or descendants of magnetars. \subsection{Unified understanding of the magnetar X-ray spectrum?} The hard X-ray component (HXC) was discovered above 10 keV from some magnetars by \textsl{INTEGRAL} and {\it RXTE} \citep{2006ApJ...645..556K}, and further confirmed by \textsl{Suzaku}. The HXC is distinguished from the well-known soft X-ray component (SXC) since the HXC is represented by a power law (PL) with an extremely hard photon index $\Gamma_{h} \sim 1$, extending beyond 100 keV. The {\it CGRO}/COMPTEL and {\it Fermi}/LAT upper limits \citep{2010ApJ...725L..73A} suggest that the HXC must have a spectral break below $\sim$750 keV, which has not yet been clearly detected. The break energy is an important hint to constrain the emission mechanism of the HXC; e.g., thermal bremsstrahlung, synchrotron radiation and resonant Compton up-scattering \citep{2005ApJ...634..565T, 2007Ap&SS.308..109B}. This is a unique science goal for the HXI and SGD. \begin{figure}[h] \begin{center} \includegraphics[width=72mm]{4u0142_suzaku_vs_astroh_110331_obs} \hspace{5mm} \includegraphics[width=70mm]{fake_100ks_06} \end{center} \caption{ (Left) The 4U~0142+61 $\nu$F$\nu$ spectrum obtained from the \textsl{Suzaku} 100 ksec observation \citep{2011PASJ...63..387E}. (Right) Simulated 100~ks \textsl{ASTRO-H} 4U~0142+61 observation using the parameters of the \textsl{Suzaku} result. The HXC is reproduced by a single PL with $\Gamma_{\rm h}=0.11$ and a flux $F_{h}$ of 29.7 $\times$ $10^{-12}$ erg s$^{-1}$ cm$^{-2}$ in the 1--60 keV band. An exponential cutoff is further added at 150 keV to satisfy the COMPTEL upper limit. Orange, dotted green, and red lines denote the best-fit model, the SXC model and the HXC model, respectively.
The 2$\sigma$ {\it CGRO}/COMPTEL upper limits \citep{2006ApJ...645..556K} are also plotted.} \label{fig:spec4u0142} \end{figure} In the \textsl{Suzaku} era, the simultaneous spectroscopy of the SXC and HXC revealed a possible broadband spectral evolution; the hardness ratio, defined as $F_{\rm h}/F_{\rm s}$, is positively (or negatively) correlated with the magnetic field (or pulsar characteristic age). The $\Gamma_{\rm h}$ of the HXC is anti-correlated with the characteristic age. Although the interpretation of these correlations has not yet been established, one possible explanation is a down-cascade of the sub-MeV photons through photon splitting. Sub-MeV photons can be generated via electron-positron annihilation or resonant Compton up-scattering near the stellar surface, and repeatedly split into hard X-rays in the QED-regime field. In addition, \cite{Nakagawa2011PASJ...63S.813N} suggested a similarity of the hard X-ray spectrum between the persistent emission and accumulated weaker short bursts from the activated magnetar SGR~0501+4516 (\S3.4). Furthermore, the broad-band coverage of both the SXC and HXC can potentially provide a clue to the hidden toroidal magnetic field inside magnetars. In a recent \textsl{Suzaku} observation \citep{Makishima2014PhRvL.112q1102M}, evidence for free precession was detected from the prototypical magnetar 4U~0142+61. In the 15--40 keV hard X-rays, its 8.69 sec pulsations suffered slow phase modulations by $\pm$0.7 sec, with a period of $\sim$15 h. When this is interpreted as free precession of the magnetar, the object is inferred to deviate from spherical symmetry by $\sim 1.6\times 10^{-4}$ in its moments of inertia. This deformation suggests a strong toroidal magnetic field, $\sim 10^{16}$ G, in the stellar interior when ascribed to magnetic pressure. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\hsize]{fig21_140805} \caption{The HXC detectability with the HXI and SGD. The required exposure for a 3$\sigma$ HXC detection is plotted as a function of the 10--100 keV flux $F_{h}$, assuming a single PL with $\Gamma_{\rm h}=2$ (green), 1 (red), and 0 (black) and a 3\% systematic uncertainty of the non-X-ray background. Dotted lines indicate the assumed $F_{h}$ of 5 magnetars based on the correlation suggested by \textsl{Suzaku} \citep{Enoto2010ApJ...722L.162E}.} \label{fig:feasibility} \end{center} \end{figure} As a first demonstration of the power of an \textsl{ASTRO-H} observation, we compare the $\nu F_{\nu}$ \textsl{Suzaku} and \textsl{ASTRO-H} spectra of the bright anomalous X-ray pulsar 4U~0142+61 in Figure~\ref{fig:spec4u0142}. Although the HXC was detected by \textsl{Suzaku} \citep{2011PASJ...63..387E}, its spectral information is still poor above 80 keV (see Figure~\ref{fig:spec4u0142} left). On the other hand, as shown in Figure~\ref{fig:spec4u0142} right, \textsl{ASTRO-H} will detect the HXC up to 400 keV at the 3$\sigma$ level, assuming a 100 ksec exposure. In order to further estimate the detectability of the cutoff, we compare models with and without an exponential cut-off; the comparison indicates that \textsl{ASTRO-H} can achieve the detection of the cut-off feature. Another bright magnetar, SGR~1806$-$20, was also simulated in Figure~\ref{fig:specsgr1806} (left). To obtain the same statistical significance, the 50 ksec exposure required with \textsl{Suzaku} is reduced to only 30 ksec with \textsl{ASTRO-H}.
Although \textsl{Suzaku}/HXD and \textsl{INTEGRAL} have already detected the HXC up to $\sim$20 keV \citep{2007A&A...476..321E, 2009PASJ...61S.387N, Enoto2010ApJ...722L.162E}, \textsl{ASTRO-H} can extend this up to 100 keV, presumably providing a precise measurement of the HXC and confirming the correlation proposed from the \textsl{Suzaku} data. In Section 3.1, we also show the HXI+SGD simulation of the bright AXP 1E~1841--045, located in the SNR~Kes~73, to illustrate the ability of \textsl{ASTRO-H} to pin down the nature of its hard X-ray emission (e.g.\ in the light of magnetar versus fossil-disk accretion models; see Figure~14, right). \textsl{ASTRO-H} has the advantage over \textit{NuSTAR} in that it will allow the \textit{simultaneous} broadband coverage needed to study the SXC and HXC together (see also the potential for studying these sources with the Soft Gamma-ray Detector on board \textsl{ASTRO-H}, \S3.4). Contrary to the above three examples, the HXC from a famous anomalous X-ray pulsar, 1E2259$+$586, located in the SNR~CTB~109 (see Section 3.1), has not yet been detected despite intensive observations with \textsl{INTEGRAL}, \textsl{RXTE} and \textsl{Suzaku} \citep{2006ApJ...645..556K,2011PASJ...63..387E}. If the correlation proposed by \textsl{Suzaku} is true, 1E 2259$+$586 may emit the HXC at $F_{h}=3.9\times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$, based on the confirmed SXC $F_{s}=7.3\times 10^{-11}$ erg s$^{-1}$ cm$^{-2}$ and its characteristic age of 230 kyr. Here we employed an absorbed blackbody plus a power law for the SXC and another hard power law for the HXC. If there is no spectral break in the HXC and its photon index is between -1 and 1, \textsl{ASTRO-H}/HXI will clearly detect the HXC at least up to 50 keV with a $\sim$120 ks exposure time. On the other hand, without a spectral break, the HXC would exceed the upper limits from COMPTEL (together with those from \textsl{Suzaku} and \textsl{INTEGRAL}), as seen in 4U~0142+61. Figure \ref{fig:specsgr1806} (right) shows the $\nu$F$\nu$ spectrum when an exponential cut-off feature at 150 keV is added, together with the upper limits obtained from previous observations \citep{Enoto2010ApJ...722L.162E, 2006ApJ...645..556K}. Even in this case, \textsl{ASTRO-H} will detect the HXC with a certain significance level if the photon index is roughly 0, which is consistent with the correlation suggested by \textsl{Suzaku}. Since 1E 2259$+$586 has the oldest characteristic age among the magnetars observed with \textsl{Suzaku}, revealing the HXC spectrum of this object would test whether the relation is applicable to all magnetars or not. \begin{figure}[h] \begin{center} \includegraphics[width=72mm]{fake_50ks_sgr1806_00_04} \hspace{5mm} \includegraphics[width=70mm]{step_06_mod} \end{center} \caption{ (Left) Expected $\nu$F$_{\nu}$ spectrum of SGR 1806$-$20 as observed by \textsl{ASTRO-H}. An absorbed blackbody plus a PL model is employed as the best-fit model based on the \textsl{Suzaku} observation. The orange, green dotted and red dotted curves are the total best-fit model, the SXC and the HXC, respectively. (Right) Same as the left panel but for 1E 2259$+$586. The upper limits are shown from \textsl{Suzaku} \citep[blue,][]{Enoto2010ApJ...722L.162E}, \textsl{INTEGRAL} \citep[light blue,][]{2006ApJ...645..556K} and {\it CGRO} (magenta).
[After this white paper was compiled, \textit{NuSTAR} reported the detection of hard X-rays from this source \citep{2014ApJ...789...75V}.] } \label{fig:specsgr1806} \end{figure} Finally, we evaluated the detectability of the HXC from dim sources in the 10-100 keV range. Figure \ref{fig:feasibility} shows the required exposure time to detect the HXC with a $3\sigma$ significance. The five magnetars indicated in the figure were observed with several X-ray observatories, but their weak HXC has not been detected so far. Assuming hard X-ray fluxes calculated from the broadband spectral correlation suggested by \textsl{Suzaku}, \textsl{ASTRO-H} will catch the HXC with a realistic exposure time (i.e., $\sim$100 ksec). It should also be noted that the imaging capability of the HXI enables us to extract the hard X-ray spectrum of objects previously contaminated by nearby sources, e.g., CXO J164710.2$-$455216. Such a systematic study by \textsl{ASTRO-H} will help us reach a unified picture of the hard X-ray nature of this class. \subsection{How is the burst activity related to the nature of magnetars?} One of the prominent features of magnetars is sporadic X-ray outbursts (flares) accompanied by short bursts, both of which are thought to be directly related to magnetic energy release. However, the persistent HXC and short bursts during X-ray outbursts are poorly understood. \subsubsection{Unresolved spectral change during the X-ray outburst} The persistent SXC of magnetars sometimes increases during an outburst by one to two orders of magnitude, with unpredictable timing. Such a transient enhancement lasts typically for a few months, with a gradual decay. Figure \ref{fig1:1e1547_longhistory.eps}a shows two recent sudden X-ray activations of the AXP 1E~1547.0-5408. As shown in Figure~\ref{fig1:1e1547_longhistory.eps}b, it is often accompanied, in its early phase, by short burst activities. These burst-active states have already been observed from some magnetars; XTE~J1810$-$197 \citep{2004ApJ...605..368G}, CXOU~J164710.2$-$455216 \citep{2007ApJ...664..448I}, SGR~0501+4516 \citep{2009ApJ...693L.122E}, and 1E~1547.0-5408 \citep{2009ApJ...696L..74M}. More recently, there has been an increasing number of reports from magnetars with much weaker dipole fields ($\lesssim 4.4\times 10^{13}$~G): SGR~0418+5729 \citep{2010Sci...330..944R}, SGR~1833-0832 \citep{2010ApJ...718..331G}, Swift~J1822.3-1606 \citep{2012ApJ...754...27R}, and Swift~J1834.9$-$0846 \citep{2012ApJ...748...26K}. Figure~\ref{fig1:magnetar_flux_decay.eps}a presents recent examples of the SXC flux decay of magnetars. \begin{figure}[h] \centering \includegraphics[scale=0.50]{1e1547_longhistory} \caption{ (a) A long-term SXC monitoring of the AXP 1E~1547.0$-$5408 with {\it Swift}/XRT, with the absorbed 2-10 keV X-ray flux shown. Two \textsl{Suzaku} observations are also shown as green stars \citep{2010PASJ...62..475E}. (b) Short burst forests from this source recorded by \textsl{Suzaku}/HXD-WAM on January 22, 2009, indicated by a blue arrow in panel (a). } \label{fig1:1e1547_longhistory.eps} \end{figure} During the SXC outburst, an enhanced HXC above 10 keV has also been reported \citep{2009MNRAS.396.2419R, 2010ApJ...715..665E, 2010PASJ...62..475E, 2012ApJ...748..133K}. While the SXC has been monitored in detail by {\it RXTE} and {\it Swift} during the decay phase, HXC detections are still quite rare: only a few observations by \textsl{INTEGRAL} and \textsl{Suzaku} of SGR~0501+4516 in 2008 and 1E~1547.0-5408 in 2009.
Actual spectra recorded by \textsl{Suzaku} are shown in Figure~\ref{fig1:magnetar_flux_decay.eps}b and c, where the HXC was successfully detected with fluxes of $2.7\times 10^{-11}$ erg s$^{-1}$ cm$^{-2}$ and $1.1\times 10^{-10}$ erg s$^{-1}$ cm$^{-2}$ in the 15--50 keV band. However, these are just snapshots during the decay phase, and it is not yet clear how the HXC evolves during the outburst state and how the HXC is physically related to the SXC. Thus, broadband spectral coverage of the SXC and HXC is expected to help resolve the emission mechanism and the postulated dissipation process of the magnetic energy. \begin{figure}[t] \centering \includegraphics[scale=0.25]{transient_magnetar_spec} \caption{ (a) Absorbed 2-10 keV X-ray fluxes of transient magnetars monitored with {\it Swift}/XRT, {\it RXTE}/PCA, and \textsl{Suzaku}/XIS. The time onset corresponds to the first detection of magnetar short bursts from these sources, mainly detected with {\it Swift}/BAT. (b) \textsl{Suzaku} spectra during the two recent transient activities of 1E~1547.0$-$5408 (b1) and SGR~0501+4516 (b2) \citep{2010PASJ...62..475E, 2010ApJ...715..665E}. } \label{fig1:magnetar_flux_decay.eps} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\hsize]{fig_sim_sgr0501_decay} \caption{ Simulations of the HXC monitoring of the activated SGR~0501+4516 in 2008. The spectral shape is assumed to be the same as that observed by \textsl{Suzaku} in 2008, and normalized to have the same decay speed as the SXC. Blue and red data points are the simulated X-ray fluxes in the 20--100 keV band measured by \textsl{Suzaku}/HXD-PIN and \textsl{ASTRO-H}/HXI, respectively. Both exposures are set to 40 ksec (90\% confidence level errors). } \label{fig1:fig_sim_sgr0501_decay.eps} \end{center} \end{figure} The previous \textsl{Suzaku} ToOs (Figure~\ref{fig1:magnetar_flux_decay.eps}) of 1E~1547.0$-$5408 and SGR~0501+4516 were performed $\sim$4 and $\sim$7 days after the onset of the outburst, respectively. The HXC quickly decays to $\sim$0.1 mCrab or less, below the \textsl{Suzaku} detection limit, within a few weeks. To compare \textsl{Suzaku} and \textsl{ASTRO-H}, we simulated, in Figure~\ref{fig1:fig_sim_sgr0501_decay.eps}, the HXC monitoring of such an X-ray outburst. While \textsl{Suzaku} HXD-PIN cannot detect the HXC once $\sim$3 weeks have passed after the onset of the outburst, the \textsl{ASTRO-H} HXI still provides sufficient counts to detect the HXC up to nearly a few ``years'' after the onset. This will provide us, for the first time, with a detailed measurement of the HXC spectral evolution during the decay phase of magnetars. So far, 2--3 magnetar outbursts have been detected per year. Recent accumulated discoveries of weak-field magnetars further suggest that there is a large hidden population of this kind of source. The above continuous monitoring of the HXC provides us with a way to investigate the connection between quiescent and activated magnetars, which is vital to understand the evolutionary path of the magnetar class. One or two prompt ToO observations can provide us with the physical conditions in the activated state, while follow-up observations once or twice per year during the subsequent two years can further give us the decay trend of the hard X-rays (e.g., photon index, decay speed), as for, e.g., Swift~J1822.3-1606 \citep{2012ApJ...754...27R}, Swift~J1834.9$-$0846 \citep{2012ApJ...748...26K}, and 3XMM J185246.6+003317 (Zhou et al., 2014; Rea et al., 2014).
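A simple scaling clarifies why the accessible follow-up window grows so much (the background-dominated case is assumed here; the actual gain shown in Figure~\ref{fig1:fig_sim_sgr0501_decay.eps} comes from the full instrument-response simulations rather than from this estimate). The detection significance after an exposure $t$ is $n_{\sigma}\simeq R_{\rm s}\sqrt{t/R_{\rm b}}$, where $R_{\rm s}$ and $R_{\rm b}$ are the source and background count rates, so the exposure needed for a fixed significance scales as
\[
t_{\rm req}\propto\frac{R_{\rm b}}{R_{\rm s}^{2}} .
\]
The combination of the larger effective area and the much lower, imaging-suppressed background of the HXI therefore reduces $t_{\rm req}$ substantially for a given HXC flux, or equivalently allows a fixed-length ToO to follow the decaying HXC to much later epochs.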
\subsubsection{Magnetar signatures in the short bursts and giant flares} \begin{figure}[htb] \begin{center} \includegraphics[width=60mm]{Figure12b_SGD3Dpicture} \caption{ A schematic picture of the Soft Gamma-ray Detector (SGD). The main detector of the SGD is surrounded by 25 large and thick BGO crystals. } \label{SGDfig} \end{center} \end{figure} One characteristic form of magnetar X-ray radiation is the sporadic emission of bursts with a typical duration from $\sim$0.1 second to a few hundred seconds. The burst mechanism is thought to be related to the rearrangement of the $B$-field due to reconnection or motions of the NS crust (e.g., star quakes). These bursts are phenomenologically classified into three types: ``giant flares'' ($L_{\rm x} > 10^{45}$ erg s$^{-1}$, lasting about a few hundred seconds), ``intermediate flares'' ($L_{\rm x} \sim 10^{42}-10^{43}$ erg s$^{-1}$, lasting a few seconds), and frequently occurring ``short bursts'' ($L_{\rm x} \sim 10^{38}-10^{41}$ erg s$^{-1}$, $\sim$0.1-sec durations). These explosive events often show luminosities exceeding the Eddington limit for a NS of $1.4M_\odot$, $L_{\rm Edd} \sim 1.8\times 10^{38}$ erg s$^{-1}$, presumably due to the suppression of the electron scattering cross section in the strong $B$-field. These short bursts are thus attractive targets for ToO observations. {\bf Polarization of bursts:} A high polarization degree is expected from magnetars due to the high $B$-field. We simulated the polarization detectability of short bursts with the SGD using a simulator provided by the SGD hardware team. We assumed short bursts from SGR 1806$-$20 ($N_{\rm H}$ = 6$\times10^{22}$cm$^{-2}$, distance = 15 kpc) with the spectral parameters of a two-blackbody model \citep{Nakagawa2007PASJ...59..653N}: $R_{\rm HT}^2/R_{\rm LT}^{2}$ = 0.01, $kT_{\rm LT}$ = 4 keV, $kT_{\rm HT}$ = 11 keV. Figure \ref{fig:nobvsmdp} shows the expected polarization detectability for different intensities when burst events are accumulated. The burst rate of SGR~1806$-$20 in the active state was $\sim$2 bursts/day, as observed by HETE-2 \citep{Nakagawa2007PASJ...59..653N}. According to this simulation, we can detect the polarization of bursts from SGRs/AXPs when we observe bright bursts in the active state. {\bf Proton CRSF:} The proton CRSF is a potentially interesting target of short-burst observations using the \textsl{ASTRO-H} SXS, as already discussed in \S3.1. {\bf Is the persistent emission composed of unresolved short bursts?} The enhanced persistent and burst emissions have been simultaneously observed in many activated magnetars. However, it is not yet clear how these two emission forms are physically related to each other. One interesting possibility is that the persistent emission is composed of a large number of small bursts that are not individually detectable. Such a possibility has been examined using the cumulative number-intensity distribution of short bursts. The observational information has so far remained insufficient to evaluate this possibility, since the studied short bursts are so bright (with fluence $> 10^{-7}$ ergs cm$^{-2}$) and infrequent that their time-averaged flux is much lower than that of the persistent emission. It is interesting to examine, from observations of activated magnetars, whether weaker short bursts become similar in spectral shape to the persistent X-ray emission, as recently found in SGR~0501$+$4516 \citep{Nakagawa2011PASJ...63S.813N}. The high sensitivity of \textsl{ASTRO-H} can provide further studies connecting the persistent and burst emissions.
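A schematic way to see why the cumulative number--intensity distribution is the decisive diagnostic (a pure power law is assumed here only for illustration): if the differential burst rate per unit fluence follows $dN/dF\propto F^{-\gamma_{\rm b}}$, the time-averaged flux contributed by bursts between $F_{\rm min}$ and $F_{\rm max}$ is
\[
F_{\rm tot}\propto\int_{F_{\rm min}}^{F_{\rm max}}F\,\frac{dN}{dF}\,dF\propto\int_{F_{\rm min}}^{F_{\rm max}}F^{\,1-\gamma_{\rm b}}\,dF ,
\]
which is dominated by the numerous faint events only if $\gamma_{\rm b}>2$. Extending the measured distribution to fainter fluences with \textsl{ASTRO-H} therefore directly tests whether unresolved weak bursts can supply the persistent flux.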
{\bf Advantages of the SGD shield detector:} So far there is no clear detection of MeV photons from magnetar bursts, except during giant flares. MeV photons from magnetar bursts are an important key to investigating the radiation mechanism and the physics in the strong $B$-field, because the photon-splitting effect may suppress the high-energy photons. In order to explore MeV photons from short bursts, large photon statistics with fine time resolution are essential. The \textsl{Suzaku} Wide-band All-sky Monitor (WAM) has reported a hint of MeV photons from one strong short burst from the AXP 1E1547$-$5408 (Yasuda et al., in prep), but the lack of time resolution prevented a detailed investigation. The Soft Gamma-ray Detector (SGD) onboard \textsl{ASTRO-H} is surrounded by large Bi$_4$Ge$_3$O$_{12}$ crystals to reduce the cosmic and gamma-ray backgrounds (Figure~\ref{SGDfig}). This active shield with a wide field of view is available as an all-sky monitor covering $\sim$200 keV to 5 MeV. The most notable features are a large effective area ($\sim$800 cm$^2$ even at 1 MeV), roughly twice that of the \textsl{Suzaku} WAM, fine time resolution (16 ms), and fine spectral resolution (32 channels compared to the 4 channels of the \textsl{Suzaku} WAM). The \textsl{Suzaku} WAM has to disable triggers until the onboard triggered data are transferred to the spacecraft memory at the next SAA passage, while the SGD active shield can minimize this latency and the triggered data are immediately transferred. These features make the SGD a powerful tool to study magnetar short bursts, especially thanks to the large photon statistics and fine time resolution. \begin{figure}[t] \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=70mm]{nobvsmdp} \end{center} \caption{Simulated polarization detectability from accumulated short burst events with the SGD. The spectra of SGR 1806-20 bursts were assumed to follow a two-blackbody model: $\frac{R_{\rm HT}}{R_{\rm LT}}^{2}$ = 0.01, $kT_{\rm LT}$ = 4 keV, $kT_{\rm HT}$ = 11 keV (Nakagawa et al., 2007), for various burst intensities.} \label{fig:nobvsmdp} \end{minipage} \hspace{5mm} \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=70mm]{axp1e1547_5408_wamfakespec.pdf} \end{center} \caption{Spectral simulation of one of the bright bursts of the AXP 1E1547-5408 with the SGD shield detectors. The assumed spectral model is a blackbody plus power-law model, as reported in Yasuda et al. (in prep). Dashed lines show each spectral model component and the solid line represents the total model.} \label{SGDWAM_1E1547simspec} \end{minipage} \end{figure} Figure \ref{SGDWAM_1E1547simspec} shows a simulated spectrum of the intense short burst from the AXP 1E1547$-$5408, which has a signature of MeV photons\footnote{Due to readout deadtime and counter rollover effects, the data from the SGD shield detectors should be subject to some corrections, such as for pile-up or carry-over of observed counts. The estimated maximum brightness of bursts which are free from such corrections is about 100 Crab, and 1000 Crab would also be observable with some corrections. From spectral simulations, a possible detection limit is found to be about 1 Crab. }. The SGD shield monitor can clearly detect photons up to 2 MeV, and distinguish between different spectral models, such as 2BB and BB+PL. Figure \ref{fig:nobvsmdp} shows the observable flux range of the SGD shield detectors. We can expect that bright short bursts and intermediate flares are good targets for the SGD shield detectors. \section{Appendix} \subsection{Acronym} \begin{description} \item[AXP]\mbox{}\\ Anomalous X-ray Pulsar.
\item[BAT]\mbox{}\\ Burst Alert Telescope onboard {\it Swift}. \item[CCD]\mbox{}\\ Charge Coupled Device. \item[CCO]\mbox{}\\ Central Compact Object. \item[CRSF]\mbox{}\\ Cyclotron Resonance Scattering Feature. \item[CSM]\mbox{}\\ Circumstellar Medium. \item[GRLB]\mbox{}\\ Gamma-ray Loud Binary. \item[HMXB]\mbox{}\\ High Mass X-ray Binary. \item[HXC]\mbox{}\\ Hard X-ray Component of magnetar X-ray spectra ($>$10 keV). \item[HXD]\mbox{}\\ Hard X-ray Detector onboard {\it Suzaku}. \item[HXI]\mbox{}\\ Hard X-ray Imager onboard {\it ASTRO-H}. \item[ISM]\mbox{}\\ Interstellar Medium. \item[LMXB]\mbox{}\\ Low Mass X-ray Binary. \item[LPP]\mbox{}\\ Long Period Pulsar. \item[NS]\mbox{}\\ Neutron Star. \item[PCA]\mbox{}\\ Proportional Counter Array onboard {\it RXTE}. \item[QED]\mbox{}\\ Quantum Electrodynamics. \item[RPP]\mbox{}\\ Rotation Powered Pulsar. \item[RRAT]\mbox{}\\ Rotating Radio Transient. \item[{\it RXTE}]\mbox{}\\ Rossi X-ray Timing Explorer. \item[SCF-effect]\mbox{}\\ Self Charge Filling effect. \item[SFXT]\mbox{}\\ Supergiant Fast X-ray Transient. \item[SGD]\mbox{}\\ Soft Gamma-ray Detector onboard {\it ASTRO-H}. \item[SGR]\mbox{}\\ Soft Gamma Repeater. \item[SNR]\mbox{}\\ Supernova Remnant. \item[SXC]\mbox{}\\ Soft X-ray Component of magnetar X-ray spectra ($<$10 keV). \item[SXI]\mbox{}\\ Soft X-ray Imager onboard {\it ASTRO-H}. \item[SXS]\mbox{}\\ Soft X-ray Spectrometer onboard {\it ASTRO-H}. \item[ToO]\mbox{}\\ Target of Opportunity. \item[VHE $\gamma$-ray]\mbox{}\\ Very High Energy $\gamma$-ray. \item[WAM]\mbox{}\\ Wide-band All-sky Monitor onboard {\it Suzaku}. \item[XDINS]\mbox{}\\ X-ray Dim Isolated Neutron Star. \item[XIS]\mbox{}\\ X-ray Imaging Spectrometer onboard {\it Suzaku}. \item[XRBP]\mbox{}\\ X-ray Binary Pulsar. \end{description} \section{Overview: Strongly Magnetized Neutron Stars} \input{1.1_intro.tex} \input{1.2_overview_hmxb.tex} \input{1.3_overview_magnetar.tex} \section{Probes into Accreting Pulsars and their Environment}\label{sec:binaries} \input{2.1_wind_accretion.tex} \input{2.2_electron_cycl.tex} \input{2.3_sfxt.tex} \input{2.4_gamma_binary_short.tex} \section{Probes into Magnetars and their Environment}\label{sec:magnetars} \input{3.1_birth_env_evol.tex} \input{3.2_proton_cycl.tex} \input{3.3_hard_xray.tex} \input{3.4_xray_outburst.tex} \section*{Acknowledgements} We thank Matthias K\"uhnel (Remeis Observatory \& FAU, Germany) for substantial contributions to preparing Figures 3 and 8 as well as for valuable comments on an earlier version of the manuscript. \input{acronym_abc.tex} \clearpage \begin{multicols}{2} {\footnotesize \input{reference.tex} } \end{multicols} \end{document}
\section{Introduction} \label{sec:intro} \gls{dl} is, nowadays, one of the hottest topics in signal processing research, spanning multiple applications. It is a computationally demanding field since, in many cases, better performance and generality come at the cost of increased complexity and deeper models \cite{Thompson2020}. For example, the recently published language model GPT-3, the largest network ever trained, with 175 billion parameters, would require 355 years and \$4.6M to train on a Tesla V100 cloud instance \cite{Li2020}. Therefore, it is increasingly important to optimize the energy consumption required by the training process. Although algorithmic approaches may contribute to these goals, advances in computing architectures are also fundamental \cite{Schmidhuber2015}. The computations involved in \gls{dl} mostly use the \glsunset{ieee754}\acrshort{ieee754} \gls{sp} \gls{fp} format \cite{IEEE2019}, with 32 bits. However, recent research has achieved comparable precision with smaller numerical formats. The novel posit format \cite{Gustafson2017}, designed as a direct drop-in replacement for float (i.e., IEEE SP FP), provides a wider dynamic range, higher accuracy, and simpler hardware. Moreover, each posit format has a corresponding exact accumulator, named quire, which is particularly useful for the frequent dot products in \gls{dl}. In contrast to the \acrshort{ieee754} FP format, the posit numerical format may be used with any size and has been shown to provide more accurate operations than floats while using fewer bits. Posits may even use sizes that are not multiples of 8, which could be exploited in FPGAs or ASICs to obtain optimal efficiency and performance. However, most published studies regarding the application of the posit format to \glspl{dnn} focus on the inference stage \cite{Cococcioni2018, Johnson2018, Langroudi2018, Carmichael2019, Carmichael2019a, Langroudi2019, Langroudi2020}. The models are trained using floats and are later quantized to posits to be used for inference. Nevertheless, the inference phase tends to be less sensitive to errors than the training phase, making it easier to achieve good performance using \{5..8\}-bit posits. In contrast, exploiting the use of posits during the training phase is a more compelling topic since this is the most computationally demanding stage. The first time posits were used in this context was in \cite{Montero2019}, by training a \gls{fcnn} for a binary classification problem using \{8, 10, 12, 16, 32\}-bit posits. Later, in \cite{Langroudi2019b, Langroudi2019a}, a \gls{fcnn} was trained for MNIST and Fashion MNIST using \{16, 32\}-bit posits. In \cite{Lu2019, Lu2020}, \glspl{cnn} were trained using a mix of \{8, 16\}-bit posits, but still relying on floats for the first epoch and layer computations. More recently, in \cite{Murillo2020}, a \gls{cnn} was trained for CIFAR-10, but using only \{16, 32\}-bit posits. Building on these previous works, the research presented here goes a step further by supporting the implementation of \glspl{dnn} with posits in a more general and feature-rich manner.
Hence, the original contributions of this paper are: \begin{itemize}[leftmargin=*] \item \textbf{open-source framework}\footnote{\href{https://github.com/hpc-ulisboa/posit-neuralnet}{Available at: https://github.com/hpc-ulisboa/posit-neuralnet}} to natively perform inference and training with posits of any precision (number of bits and exponent size) and quires; it was developed in C++ and adopts an API similar to PyTorch's, with multithread support; \item adaptation of the framework to support \textbf{mixed-precision}, with different stages (forward, backward, gradient, optimizer, and loss) operating under different posit formats; \item training \glspl{cnn} with only \textbf{8 to 12-bit posits} without impacting the achieved model accuracy. \end{itemize} \section{Posit numbering system} Among the several different numbering formats that have been proposed to represent real numbers~\cite{Sousa2020}, the \gls{ieee754} single-precision floating-point (float) is the most widely adopted. It decomposes a number into a sign (1 bit), exponent (8 bits), and mantissa (23 bits), so that a normalized value is decoded as \begin{equation} f = \left(-1\right)^\text{sign} \times 2^{\text{exponent}-127} \times \left(1 + \text{mantissa}\right). \label{eq:float} \end{equation} However, it has also been observed that many application domains neither need nor make use of the full accuracy and wide dynamic range made available by \gls{ieee754}, often compromising the resulting system optimization in terms of hardware resources, performance, and energy efficiency. One such domain is \gls{dnn} training, where most of the computations are zero-centered. To overcome these issues, the Posit numbering system~\cite{Gustafson2017} was recently proposed as an alternative to \gls{ieee754}. Posit is characterized by a fixed size/\gls{nbits} and an \gls{es}, being composed of the following fields: sign (1 bit), regime (variable number of bits), exponent (0..\gls{es} bits), and fraction (remaining bits)~\cite{Group2018}. It is decoded as in \cref{eq:posit}. \begin{equation} p = \left(-1\right)^\text{sign} \times 2^{2^\textit{es} \times k} \times 2^\text{exponent} \times \left( 1 + \text{fraction} \right). \label{eq:posit} \end{equation} When the number is negative, the two's complement has to be applied before decoding the other fields. The regime bits are decoded into the integer $k$, determined by their run-length. A particular characteristic of Posit, and perhaps the most interesting aspect for \acrshort{dnn} applications, refers to the distribution of its values, resembling a log-normal distribution (see \cref{fig:posit_distribution}), which is similar to the normal distribution of the values commonly found in \glspl{dnn}. Another interesting point is the definition of the quire, a Kulisch-like large accumulator~\cite{Kulisch2012} designed to contain exact sums of products of posits. \cref{tab:posit_quire} shows the recommended posit and quire configurations. \section{Deep learning posit framework} Current \gls{dnn} frameworks (such as PyTorch and TensorFlow/Keras) do not natively support the posit data type. As a result, the whole set of functions and operators would need to be reimplemented in order to take advantage of this new numbering system. As such, it was decided to develop an entirely new framework from scratch, in order to ensure better control of its inner operations and exploit them for the posit data format.
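Before detailing the framework, the decoding rule in \cref{eq:posit} can be made concrete with a short, self-contained C++ sketch that decodes an 8-bit posit with a configurable exponent size. It is illustrative only; the function name is hypothetical and it is not part of PositNN:
\begin{lstlisting}[language=C++, basicstyle=\small\ttfamily, frame=leftline]
#include <cmath>
#include <cstdint>

// Decode an 8-bit posit with exponent size es into a double,
// following the posit decoding equation above.
// Special cases: 0x00 is zero, 0x80 is NaR (not a real).
double decodePosit8(uint8_t bits, unsigned es) {
  if (bits == 0x00) return 0.0;
  if (bits == 0x80) return NAN;             // NaR
  bool neg = bits & 0x80;
  uint8_t u = neg ? uint8_t(-bits) : bits;  // two's complement first
  // Regime: run of identical bits after the sign; k from run-length.
  int pos = 6;
  bool r0 = (u >> pos) & 1;
  int run = 0;
  while (pos >= 0 && (((u >> pos) & 1) == r0)) { ++run; --pos; }
  int k = r0 ? run - 1 : -run;
  --pos;                                    // skip terminating regime bit
  // Exponent: up to es bits; missing trailing bits are zero.
  int exponent = 0;
  for (unsigned i = 0; i < es; ++i) {
    exponent <<= 1;
    if (pos >= 0) { exponent |= (u >> pos) & 1; --pos; }
  }
  // Fraction: remaining bits, with an implicit leading 1.
  double fraction = 0.0;
  if (pos >= 0) {
    int nf = pos + 1;
    fraction = double(u & ((1u << nf) - 1)) / double(1u << nf);
  }
  double v = std::ldexp(1.0 + fraction, (1 << es) * k + exponent);
  return neg ? -v : v;
}
\end{lstlisting}
For instance, \texttt{decodePosit8(0x50, 0)} yields $1.5$ and \texttt{decodePosit8(0xC0, 0)} yields $-1$, matching the posit(8, 0) encoding.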
\subsection{PositNN Framework} The developed framework, named PositNN, was based on the PyTorch API for C++ (LibTorch), thus inheriting its program functions and data flow. As a result, any user familiar with PyTorch may easily port their networks and models to PositNN. As an example, a comparison between PyTorch and the proposed framework regarding the declaration of a 1-layer model is shown in \cref{fig:framework} (left and center). The overall structure and functions are very similar, the only difference being the declaration of the backward function, since the proposed framework does not currently support automatic differentiation. Although it is compared against a full-fledged framework like PyTorch, the proposed framework is capable of performing \gls{dnn} inference and training with the most common models and functions. A complete list of the supported functionalities is shown in \cref{fig:framework} (right), which allows implementing all the stages illustrated in \cref{fig:diagram}. Thus, common \glspl{cnn}, such as LeNet-5, CifarNet, AlexNet, and others, are fully supported. Moreover, the framework allows the user to extend it with custom functions or combine it with existing ones (e.g.\ from PyTorch). \begin{table}[t] \vspace*{-0.5\baselineskip} \centering \caption{Main properties of posit formats according to \cite{Group2018}.} \label{tab:posit_quire} \begin{tabular}{@{}lcccc@{}} \toprule nbits & $8$ & $16$ & $32$ & $64$ \\ \midrule es & $0$ & $1$ & $2$ & $3$ \\ dynamic range & $2^{\pm 6}$ & $2^{\pm 28}$ & $2^{\pm 120}$ & $2^{\pm 496}$ \\ quire bits & $32$ & $128$ & $512$ & $2048$ \\ dot product limit & $127$ & $32767$ & $2^{31}-1$ & $2^{63}-1$ \\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{Figures/posit_8_0_hist_v3.pdf} \vspace*{-1.5\baselineskip} \caption{Distribution of posit(8, 0) in linear and log scale.} \vspace*{-0.5\baselineskip} \label{fig:posit_distribution} \end{figure} \begin{figure*}[htb] \begin{minipage}[b]{.69\textwidth} {\input{code.tex}} \end{minipage}% \hfill \fbox{\begin{minipage}[b]{.28\textwidth} \footnotesize \begin{itemize}[leftmargin=*, itemindent=-2ex, itemsep=0.5em] \item {\bf Activation functions:}\\ ReLU, Sigmoid, TanH \item {\bf Layers:}\\ Linear, Convolutional, \\ Average and Maximum Pooling,\\ Batch Normalization, Dropout \item {\bf Loss functions:}\\ Cross Entropy,\\ \gls{mse} \item {\bf Optimizer:}\\ \gls{sgd} \item {\bf Utilities:}\\ StdTensor, convert PyTorch tensors, mixed precision tensor, save and load model, scaled gradients \end{itemize} \end{minipage}} \caption{Comparison of PyTorch (left) and the proposed framework (center). Implemented functionalities of PositNN (right).} \vspace*{-1\baselineskip} \label{fig:framework} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=.9\columnwidth]{Figures/diagram.pdf} \vspace*{-0.5\baselineskip} \caption{\gls{dnn} training diagram, starting at the dataset. The various $p_i$, with $i=\{1..5\}$, represent the different posit precisions that may be used throughout the proposed framework.} \vspace*{-0.5\baselineskip} \label{fig:diagram} \end{figure} \subsection{Posit variables} Among the several libraries already available to implement posit operators in software \cite{NGATeam2019}, the Universal\footnote{Available at: \href{https://github.com/stillwater-sc/universal}{https://github.com/stillwater-sc/universal}} library was selected, thanks to its comprehensive support for any posit configuration and quires.
Furthermore, C++ class and function templates are used generically to implement different posit configurations. Therefore, declaring a posit(8, 0) variable \texttt{p} equal to 1 is as simple as: \begin{lstlisting}[language=C++, basicstyle=\small\ttfamily, frame=leftline]
#include <universal/posit/posit>
sw::unum::posit<8, 0> p = 1;
\end{lstlisting} Moreover, all the main operations specified in the Posit Standard \cite{Group2018} are fully supported and implemented. In addition, the proposed framework adopts bitwise operations whenever possible, thus avoiding intermediate float representations, which could introduce errors relative to a native implementation. \subsection{Implementation} Posit tensors are stored as StdTensors, a class implemented using only the C++ Standard Library. Data is internally stored in a one-dimensional dynamic vector, and the multidimensional strides are automatically accounted for. Given that some stages are more sensitive to numerical errors, the proposed framework supports different precisions per stage, as depicted in the arrows of \cref{fig:diagram}. Although not illustrated, it even allows the model to use different precisions per layer. To accomplish that, the weights are stored in a class whose members are copies with different posit configurations. Hence, each layer and stage converts its posit tensors to the appropriate precisions and seamlessly updates the copies after every change. It also has the option to use quires for the accumulations, significantly improving the accuracy of matrix multiplications, convolutions, etc. To take maximum advantage of the \acrshort{cpu}, most functions were parallelized with multithread support, dividing each mini-batch across different workers. In matrix multiplication, this corresponds to splitting the left operand by rows, performing the computation, and then concatenating the results. The threads were implemented using \texttt{std::thread}. The proposed framework could also be adapted to support other data types, since most functions are independent of the posit format, except those that use the quire to accumulate. \vspace{-0.5\baselineskip} \section{Experimental Evaluation} \vspace{-0.5\baselineskip} By making use of the developed framework, the presented research started by studying how much the posit precision can be decreased without penalizing the \gls{dnn} model accuracy. Then, the best configuration was used to train a deeper model on a more complex dataset. In this evaluation, small accuracy differences ($<1\%$) were assumed to be caused solely by the randomness of the training process and not by a lack of precision of the numerical format. For the initial evaluation, the 5-layer \gls{cnn} LeNet-5 was trained on Fashion MNIST (a more complex dataset than the ordinary MNIST) for 10 epochs. Just as in \cite{Langroudi2019a, Murillo2020}, posit(16, 1) was first used everywhere and decreased until posit(8, 0) (see \cref{tab:same_posit_without_quire}).
\begin{table}[b] \vspace*{-\baselineskip} \centering \caption{Accuracy of LeNet-5 trained on Fashion MNIST using posit and without quire, using float for reference.} \label{tab:same_posit_without_quire} \begin{tabular}{@{}l|c|cccc@{}} \toprule Posit & \textbf{Float} & $(16, 1)$ & $(12, 1)$ & $(10, 1)$ & $(8, 0)$ \\ \midrule Accuracy [\%] & \textbf{\num{90.42}} & \num{90.87} & \num{90.15} & \num{88.15} & \num{10.00} \\ \bottomrule \end{tabular} \end{table} As expected, posit(16, 1) achieves a float-like accuracy, and narrower precisions, such as posit(12, 1) and posit(10, 1), are also usable for training, the latter incurring some accuracy loss. However, when trained using posit(8, 0), the model accuracy does not improve and remains fixed at $\SI{10}{\percent}$ (equivalent to randomly classifying a 10-class dataset), probably due to the narrow dynamic range (as seen in \cref{fig:posit_distribution}). This hypothesis was subsequently evaluated by using a different exponent size (\gls{es}) and using quires for the accumulations (see \cref{tab:es_quire}). The obtained results confirmed the hypothesis, showing that the accuracy of the 8-bit model slightly increases when using quires, especially when the posit exponent size is $\textit{es}=2$. \begin{table}[t] \centering \caption{Accuracy of LeNet-5 trained on Fashion MNIST using posit and quire. Posit8 is tested with different \gls{es}.} \label{tab:es_quire} \resizebox{\columnwidth}{!}{\begin{tabular}{@{}l|c|cccc@{}} \toprule Posit with quire & \textbf{Float} & $(10, 1)$ & $(8, 0)$ & $(8, 1)$ & $(8, 2)$ \\ \midrule Accuracy [\%] & \textbf{\num{90.42}} & \num{88.40} & \num{13.84} & \num{12.86} & \num{19.39} \\ \bottomrule \end{tabular}} \end{table} Another common problem, particularly noticeable when using 8-bit posit precisions, is the vanishing gradient problem: the gradients become smaller and smaller as the model converges. This is particularly problematic when the model weights are updated with low-precision posits, since they do not have enough resolution for small numbers. As suggested in \cite{Lu2020}, using 16-bit posits for the optimizer and loss is usually enough to allow models to train with low-precision posits. With this observation in mind, this model was trained with a different precision for the optimizer and loss, while using posit(8, 2) everywhere else (see \cref{tab:mixed}). The posit exponent size \gls{es} was fixed at 2, since it gave the best results and simplified the conversion between posits with different \gls{nbits}. \begin{table}[t] \vspace*{-0.5\baselineskip} \centering \caption{Accuracy of LeNet-5 trained on Fashion MNIST using posit, quire, and mixed precision. Configuration OxLy means Optimizer (O) with posit(x, 2) and Loss (L) with posit(y, 2), and everything else with posit(8, 2).} \label{tab:mixed} \resizebox{\columnwidth}{!}{\begin{tabular}{@{}l|c|cccc@{}} \toprule Configuration & \textbf{Float} & O12L8 & O12L12 & O12L10 & O10L10 \\ \midrule Accuracy [\%] & \textbf{\num{90.42}} & \num{88.40} & \num{90.07} & \num{90.25} & \num{88.08} \\ \bottomrule \end{tabular}} \end{table} The obtained results showcase the feasibility of using 8-bit posits, achieving an accuracy very close to 32-bit \acrshort{ieee754}. In particular, while solely computing the optimizer with posit(12, 2) is not enough to achieve a float-like accuracy, when the loss precision is also increased, the model is able to train without any accuracy penalization while using, at most, 12-bit posits.
Conversely, if posit(10, 2) is used for both the optimizer and loss, the final accuracy slightly decreases. Therefore, the configuration with 12 bits for the optimizer and 10 bits for the loss (O12L10 in \cref{tab:mixed}) offers the best compromise in terms of low precision and overall model accuracy. This configuration will be referred to as posit(8, 2)*, since the loss function and weight update, both computed with higher precision, only represent about \SI{15}{\percent} of the operations performed while training the considered models. Given the promising results for the Fashion MNIST dataset, the posit(8, 2)* configuration was also used to train LeNet-5 on MNIST and CifarNet on CIFAR-10, validating the proposed mixed configuration. The resulting accuracies are compared against float in \cref{tab:results}. Moreover, a plot of the training progress of LeNet-5 on Fashion MNIST is shown in \cref{fig:training}, comparing different posit configurations and float. \begin{table}[t] \centering \caption{Accuracy of \glspl{cnn} trained on MNIST, Fashion MNIST, and CIFAR-10 using float and posit(8, 2)*.} \label{tab:results} \begin{tabular}{@{}lccc@{}} \toprule Dataset & MNIST & Fashion MNIST & CIFAR-10 \\ CNN & LeNet-5 & LeNet-5 & CifarNet \\ \midrule Float [\%] & \num{99.19} & \num{90.42} & \num{70.29} \\ Posit(8, 2)* [\%] & \num{99.17} & \num{90.25} & \num{68.65} \\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{Figures/plot1.pdf} \vspace*{-1\baselineskip} \caption{Training loss and testing accuracy of LeNet-5 trained on Fashion MNIST using float and different posit precisions. Posit(8, 2)* corresponds to configuration O12L10 of \cref{tab:mixed}.} \label{fig:training} \end{figure} \section{Conclusion} A new \gls{dnn} framework (PositNN) supporting both training and inference using any posit precision is proposed. The mixed-precision feature allows adjusting the posit precision used in each stage of the training network, thus achieving results similar to float. Common \glspl{cnn} were trained with the majority of the operations performed using posit(8, 2) and showed no significant loss of accuracy on datasets such as MNIST, Fashion MNIST, and CIFAR-10. Future work shall make use of this knowledge and framework to devise adaptable hardware posit units that exploit this feasibility, enabling low-resource and low-power \gls{dnn} implementations while keeping the same model accuracy. \section*{Acknowledgments} Work supported by national funds through Fundação para a Ciência e a Tecnologia (FCT), under the projects\linebreak UIDB/50021/2020 and PTDC/EEI-HAC/30485/2017, and student merit scholarship funded by Fundação Calouste Gulbenkian (FCG). \vfill\pagebreak \small \bibliographystyle{IEEEbib}
\section{Introduction} A number of information-theoretic divergence measures between probability distributions have been introduced and analyzed in the literature \cite{bhattacharyya1946measure, kullback1951information, chernoff1952measure, hero01, Cha07}. They have been extensively used in many signal processing applications involving classification \cite{guorong1996}, segmentation \cite{hamza2003image}, source separation \cite{hild2001blind}, clustering \cite{banerjee2005clustering}, and other domains. Among the different divergence functions, the family of $f$-divergences or Ali-Silvey distances is perhaps the most widely used in signal processing \cite{csiszar04}. This family includes the total variation distance, the Bhattacharyya distance \cite{bhattacharyya1946measure}, the Kullback-Leibler divergence \cite{kullback1951information}, and, more generally, the Chernoff $\alpha$-divergence \cite{chernoff1952measure, hero01}. Because there exists an indirect relationship between the class of $f$-divergences and the minimum achievable error in classification problems \cite{ali1966general}, this family of divergence measures is particularly useful for this setting. Consider the problem of classifying a multi-dimensional feature vector, $\mathbf{x}$, into one of two classes, $\{0,1\}$. The conditional distributions are given by $f_0(\mathbf{x})$ and $f_1(\mathbf{x})$ and the prior probabilities are given by $p$ and $q$, respectively. The classifier that assigns a vector $\mathbf{x}$ to the class with the highest posterior is called the Bayes classifier, and the error rate of this classifier is given by: \begin{equation} \epsilon^{\mathrm{Bayes}} = \! \! \! \! \! \! \! \! \! \! \! \! \! \int\limits_{p f_0(\mathbf{x}) \leq q f_1(\mathbf{x})} \! \! \! \! \! \! \! \! \! \! \! \! pf_0(\mathbf{x}) d\mathbf{x} \ \ + \! \! \! \! \! \! \! \! \int\limits_{q f_1(\mathbf{x}) \leq p f_0(\mathbf{x})} \! \! \! \! \! \! \! \! \! \! \! \! qf_1(\mathbf{x}) d\mathbf{x} \label{eq:BER} \end{equation} This is the minimum classification error rate, or the Bayes error rate (BER), that can be achieved by any classifier. Computing the BER requires evaluating the multi-dimensional integral over regions that can only be determined if one has perfect knowledge of the data distribution. As an alternative to computing the integral, a number of attempts have been made to bound this error using estimable measures of distance between probability functions \cite{chernoff1952measure, kailath1967divergence, hashlamoun1994tight, avi1996arbitrarily}. In this paper, we derive a new bound on classification error that is based on a nonparametric probability distance measure belonging to the family of $f$-divergences. In the context of binary classification, this new measure has a number of appealing properties: (1) there exists an asymptotically consistent estimator of the divergence measure that does not require density estimates of the two distributions; (2) we show that there exists a local relationship between this new divergence measure and the Chernoff $\alpha$-divergence; (3) we derive tighter bounds on the BER than those based on the Bhattacharyya distance and derive empirical estimates of these bounds using data from the two distributions; (4) we derive bounds on the minimum achievable error rate for the case where training and test data in the classification problem come from different distributions.
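All of the bounds developed in this paper are benchmarked against (\ref{eq:BER}). For intuition, note that the integrand in (\ref{eq:BER}) is $\min\{p f_0(\mathbf{x}), q f_1(\mathbf{x})\}$ at each point, so for a one-dimensional example the BER can be evaluated by direct numerical integration, as in the following minimal C++ sketch (illustrative only; the function name is hypothetical):
\begin{lstlisting}[language=C++, basicstyle=\small\ttfamily, frame=leftline]
#include <cmath>

// Midpoint-rule evaluation of the Bayes error rate for 1-D class
// densities f0, f1 with priors p and q = 1 - p, over [lo, hi]:
// the integrand at each x is min(p*f0(x), q*f1(x)).
template <typename F0, typename F1>
double bayesErrorRate1D(F0 f0, F1 f1, double p,
                        double lo, double hi, int n = 200000) {
  const double q = 1.0 - p, h = (hi - lo) / n;
  double sum = 0.0;
  for (int i = 0; i < n; ++i) {
    const double x = lo + (i + 0.5) * h;
    sum += std::fmin(p * f0(x), q * f1(x)) * h;
  }
  return sum;
}
\end{lstlisting}
For two unit-variance Gaussian densities with means $0$ and $1$ and $p = q = \frac{1}{2}$, passing lambdas for the two densities over, e.g., $[-10, 11]$ recovers $\epsilon^{\mathrm{Bayes}} = Q(1/2) \approx 0.3085$.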
\subsection{Related work} There are three lines of research related to the work presented in this paper: information-theoretic bounds on the Bayes error rate (and related quantities); bounds from the machine learning literature for the scenario where training and test data come from different distributions; and recent work on empirical estimates of the KL divergence. The total variation (TV) distance is closely related to the Bayes error rate \cite{kailath1967divergence}. A number of bounds exist in the literature relating the KL divergence and the TV distance. The well-known Pinsker inequality provides a bound on the total variation distance in terms of the KL divergence \cite{csisz1967information}. Sharpened inequalities that bound the KL divergence in terms of a polynomial function of the TV distance were derived in \cite{kullback1967lower}. One drawback of the Pinsker-type inequalities is that they become uninformative for completely separable distributions, where the KL divergence goes to $\infty$ (since the TV distance is upper bounded). Vajda's refinement to these bounds addresses this issue \cite{vajda1970note}. For classification problems, the well-known upper bound on the probability of error based on the Chernoff $\alpha$-divergence has been used in a number of statistical learning applications \cite{chernoff1952measure}. The tightest bound is determined by finding the value of $\alpha$ that minimizes the upper bound. The Bhattacharyya (BC) divergence, a special case of the Chernoff $\alpha$-divergence for $\alpha = \frac{1}{2}$, upper and lower bounds the BER \cite{bhattacharyya1946measure, kailath1967divergence}. The BC bounds are often used as motivation for algorithms in the statistical learning literature because these bounds have closed-form expressions for many commonly used distributions. In addition, for small differences between the two classes, it has been shown that, in the class of Chernoff $\alpha$-divergence measures, $\alpha = \frac{1}{2}$ (the BC divergence) results in the tightest upper bound on the probability of error \cite{hero01}. Beyond the bounds on the BER based on divergence measures, a number of other bounds exist based on different functionals of the distributions. In \cite{hashlamoun1994tight}, the authors derive a new functional based on a Gaussian-weighted sinusoid that yields tighter bounds on the BER than other popular approaches. Avi-Itzhak proposes arbitrarily tight bounds on the BER in \cite{avi1996arbitrarily}. Both of these sets of bounds are tighter than the bounds we derive here; however, these bounds cannot be estimated without at least partial knowledge of the underlying distribution. A strength of the bounds proposed in this paper is that they are empirically estimable without knowing a parametric model for the underlying distribution. In addition to work on bounding the Bayes error rate, there have recently been a number of attempts to bound the error rate in classification problems for the case where the training data and test data are drawn from different distributions (an area known as domain adaptation or transfer learning in the machine learning literature). In \cite{ben2007analysis, ben2010theory}, Ben-David \emph{et al.} relate the expected error on the test data to the expected error on the training data for the case when no labeled test data are available. In \cite{blitzer2008learning}, the authors derive new bounds for the case where a small subset of labeled data from the test distribution is available.
In \cite{mansour2009domain}, Mansour \emph{et al.} generalize these bounds to the regression problem. In \cite{mansour2009multiple}, the authors present a new theoretical analysis of the multi-source domain adaptation problem based on the $\alpha$-divergence. In contrast to these models, we propose a general non-parametric bound that can be estimated without assuming an underlying model for the data and without restrictions on the hypothesis class. While previous bounds have proven useful in a number of applications, a drawback shared by most divergence functions (and corresponding bounds) is that they require some knowledge of the underlying distribution for their estimation. For some of the more popular divergence measures, closed-form solutions are available for different distribution types \cite{fukunaga1990introduction}. More recently, a number of non-parametric methods have been introduced to estimate information-theoretic quantities. Graph-based non-parametric estimators were introduced in \cite{costa2004geodesic}. Plug-in estimates of existing divergence measures that require density estimation have also been proposed \cite{sricharan2012estimation}. More recently, estimates of the KL divergence that rely on estimates of the likelihood ratio instead of direct density estimation have been proposed \cite{nguyen2009surrogate, nguyen2010estimating}. In \cite{berisha2015Empirical}, a minimal spanning tree (MST) based estimator of a different kind of $f$-divergence measure was investigated. Unlike other divergences, this $f$-divergence can be estimated directly from the data without performing density estimation. This estimator was used in \cite{berisha2015Empirical} to develop a nonparametric estimator for the Fisher information. Whereas that paper analyzes the utility of the proposed $f$-divergence for estimation problems, this work focuses on its importance to binary classification tasks. The rest of this paper is outlined as follows: In Section II, we provide an overview of the divergence measure and its consistent estimator. In Section III, we derive bounds on the BER based on this probability distance measure and compare the tightness of the bounds with the Bhattacharyya bound and, more generally, with the bound based on the $\alpha$-divergence. In Section IV, we derive bounds on the classification error rate for the case where the training and the test data come from different distributions. In Section V, we provide numerical results that confirm the validity of the bounds and describe two practical algorithms for feature learning that aim to minimize the upper bound on the error rate. Section VI contains concluding remarks and a discussion of future work. \section{A Nonparametric Divergence Measure} \label{sec:divergence} For parameters $p \in (0,1)$ and $q = 1-p$, consider the following divergence measure between distributions $f$ and $g$ with domain $\Reals^d$: \begin{equation} \label{eqn:div_HP} D_{p}(f,g) = \frac{1}{4pq}\left [ \int \frac{(p f(\mathbf{x})- qg(\mathbf{x}))^2}{p f(\mathbf{x}) + qg(\mathbf{x})} d\mathbf{x} - (p-q)^2 \right] \end{equation} The divergence in (\ref{eqn:div_HP}), first introduced in \cite{berisha2015Empirical}, has the remarkable property that it can be estimated directly, without estimation or plug-in of the densities $f$ and $g$, based on an extension of the Friedman-Rafsky (FR) multi-variate two-sample test statistic \cite{friedman1979multivariate}.
Let us consider sample realizations from $f$ and $g$, denoted by $\mathbf{X}_f \in \Reals^{N_f \times d}$, $\mathbf{X}_g \in \Reals^{N_g \times d}$. The FR test statistic, $\mathcal{C}(\mathbf{X}_f, \mathbf{X}_g)$, is constructed by first generating a Euclidean minimal spanning tree (MST) on the concatenated data set, $\mathbf{X}_f \cup \mathbf{X}_g$, and then counting the number of edges connecting a data point from $f$ to a data point from $g$. The test assumes a unique MST for $\mathbf{X}_f \cup \mathbf{X}_g$; therefore, all inter-point distances between data points must be distinct. However, this assumption is not restrictive, since the MST is unique with probability one when $f$ and $g$ are Lebesgue continuous densities. In Theorem 1, we present an estimator that relies on the FR test statistic and asymptotically converges to $D_p(f,g)$. Note that this theorem combines the results of Theorem 1 and equations (3) and (4) in \cite{berisha2015Empirical}. The proof of this theorem can be found in Appendix A. \vspace{0.3cm} \begin{mydef} As $N_f \rightarrow \infty$ and $N_g \rightarrow \infty$ in a linked manner such that $\frac{N_f}{N_f+N_g}\rightarrow p$ and $\frac{N_g}{N_f+N_g}\rightarrow q$, \[ 1 - \mathcal{C}(\mathbf{X}_f, \mathbf{X}_g)\frac{N_f + N_g}{2N_f N_g} \rightarrow D_{p}(f,g) \] almost surely. \label{thm:HP} \end{mydef} \vspace{0.3cm} \begin{figure*}[!t] \begin{center} \subfloat[$f(x)=g(x)$]{ \includegraphics[width=0.225\textwidth]{Scatter0.png} \includegraphics[width=0.225\textwidth]{MST0.png} \label{fig:exmst1} } \subfloat[$f(x)\neq g(x)$]{ \includegraphics[width=0.225\textwidth]{Scatter1.png} \includegraphics[width=0.225\textwidth]{MST1.png} \label{fig:exmst2} } \caption{Estimation of $D_p$ for the case when (a) $f=g$ and (b) $f \neq g$.} \label{fig:ExMSTs} \end{center} \end{figure*} In Figs.~\ref{fig:exmst1} and \ref{fig:exmst2}, we show two numerical examples to visualize the results of Theorem \ref{thm:HP}: we plot samples from two distributions, $\mathbf{X}_f \sim f(\mathbf{x})$ and $\mathbf{X}_g \sim g(\mathbf{x})$, and evaluate the value of $\mathcal{C}(\mathbf{X}_f,\mathbf{X}_g)$. In Fig.~\ref{fig:exmst1}, both data sets are drawn from the same distribution, $f(\mathbf{x}) = g(\mathbf{x}) = \mathcal{N}([0, 0]^{\mathrm{T}},\mathbf{I})$. In Fig.~\ref{fig:exmst2}, we plot data drawn from $f(\mathbf{x}) = \mathcal{N}([-\frac{\sqrt{2}}{2}, -\frac{\sqrt{2}}{2}]^{\mathrm{T}},\mathbf{I})$ and $g(\mathbf{x}) = \mathcal{N}([\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}]^{\mathrm{T}},\mathbf{I})$. $\mathbf{I}$ is the identity matrix. For both data sets, an equal number of points are drawn; therefore, $N_f = N_g = N$ and $p=q=\frac{1}{2}$. The dotted line in each figure represents the Euclidean MST associated with $\mathbf{X}_f \cup \mathbf{X}_g$. The green lines represent the edges of the MST connecting points from $f$ to points from $g$; their count is $\mathcal{C}(\mathbf{X}_f,\mathbf{X}_g)$. We can use this to estimate $D_p(f,g)$ using the results of Theorem \ref{thm:HP}. It is clear from the figures that this value is much smaller for overlapping distributions (Fig. \ref{fig:exmst1}) than for separable distributions (Fig. \ref{fig:exmst2}). Indeed, as Theorem \ref{thm:HP} suggests, in the limit, this statistic converges to the integral used in the divergence measure in (\ref{eqn:div_HP}). In the ensuing sections we outline some important properties of this divergence measure and develop new bounds for classification using this distance function between distributions.
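To illustrate how the estimate in Theorem \ref{thm:HP} can be computed in practice, the following self-contained C++ sketch builds the Euclidean MST of the pooled sample with a simple $O(n^2)$ Prim's algorithm, counts the cross edges $\mathcal{C}(\mathbf{X}_f,\mathbf{X}_g)$, and forms the plug-in estimate of $D_p$. This is a minimal sketch under the assumptions above (distinct inter-point distances, both samples nonempty); all names are hypothetical:
\begin{lstlisting}[language=C++, basicstyle=\small\ttfamily, frame=leftline]
#include <cstddef>
#include <limits>
#include <vector>

// One labeled sample: x is the feature vector, fromF marks membership in X_f.
struct Sample { std::vector<double> x; bool fromF; };

double sqDist(const std::vector<double>& a, const std::vector<double>& b) {
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
  return s;  // squared distances yield the same MST as Euclidean ones
}

// Friedman-Rafsky statistic C(X_f, X_g): number of edges of the Euclidean
// MST of the pooled sample that join a point from f to a point from g.
// Prim's algorithm, O(n^2); adequate for moderate sample sizes.
int frCrossCount(const std::vector<Sample>& pts) {
  const std::size_t n = pts.size();
  std::vector<bool> inTree(n, false);
  std::vector<double> best(n, std::numeric_limits<double>::infinity());
  std::vector<std::size_t> parent(n, 0);
  inTree[0] = true;
  for (std::size_t j = 1; j < n; ++j) best[j] = sqDist(pts[0].x, pts[j].x);
  int cross = 0;
  for (std::size_t it = 1; it < n; ++it) {
    // pick the non-tree vertex closest to the tree
    std::size_t v = 0;
    double d = std::numeric_limits<double>::infinity();
    for (std::size_t j = 0; j < n; ++j)
      if (!inTree[j] && best[j] < d) { d = best[j]; v = j; }
    inTree[v] = true;
    if (pts[v].fromF != pts[parent[v]].fromF) ++cross;  // cross edge
    for (std::size_t j = 0; j < n; ++j)
      if (!inTree[j]) {
        const double dj = sqDist(pts[v].x, pts[j].x);
        if (dj < best[j]) { best[j] = dj; parent[j] = v; }
      }
  }
  return cross;
}

// Plug-in estimate of D_p from Theorem 1: 1 - C * (N_f + N_g) / (2 N_f N_g).
double estimateDp(const std::vector<Sample>& pts) {
  std::size_t nf = 0;
  for (const Sample& s : pts) if (s.fromF) ++nf;
  const std::size_t ng = pts.size() - nf;
  return 1.0 - frCrossCount(pts) * double(pts.size()) / (2.0 * nf * ng);
}
\end{lstlisting}
For well-separated samples, almost all MST edges stay within one sample, so the cross count is small and the estimate approaches 1; for fully overlapping samples, the cross count grows and the estimate approaches 0, mirroring Figs.~\ref{fig:exmst1} and \ref{fig:exmst2}.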
\subsection{Properties of $D_p$} The divergence measure in (\ref{eqn:div_HP}) exhibits several properties that make it useful for statistical analysis. It is relatively straightforward to show that the following three properties are satisfied. \begin{enumerate} \item $0 \leq D_{p} \leq 1$ \item $D_{p} = 0 \iff f(\mathbf{x})=g(\mathbf{x})$. \item $D_{p}(f,g)=D_{q}(g,f)$ \end{enumerate} The lower bound in the first property follows from the fact that when $f=g$, the value of $D_{p}$ attains its minimum of 0. To show that the divergence measure is upper bounded by 1, we first note that \begin{align} \int \frac{(p f(\mathbf{x})- qg(\mathbf{x}))^2}{p f(\mathbf{x}) + qg(\mathbf{x})} d\mathbf{x} = 1 - 4pq A_p(f,g), \label{eqn:defineA} \end{align} where \begin{equation*} A_p(f,g) = \int \frac{ f(\mathbf{x}) g(\mathbf{x})}{p f(\mathbf{x}) + qg(\mathbf{x})} d\mathbf{x}. \end{equation*} The function $A_p(f,g)$ attains its minimum value of 0 when $f$ and $g$ have no overlapping support (the integrand is nonnegative and vanishes wherever $f(\mathbf{x})g(\mathbf{x}) = 0$); in that case, $D_p = \frac{1}{4pq} [1 - (p-q)^2] = 1$. The second property is closely related to the first: the minimum value $D_{p} = 0$ is attained only when $f=g$. The third property follows from the symmetry of the expression. The divergence measure in (\ref{eqn:div_HP}) belongs to the class of $f$-divergences. Every $f$-divergence can be expressed as an average of the ratio of two distributions, weighted by some function $\phi(t)$: $D_{\phi}(f,g) = \int \phi\left(\frac{f(\mathbf{x})}{g(\mathbf{x})}\right) g(\mathbf{x}) d\mathbf{x}$. For $D_{p}(f,g)$, the corresponding function $\phi(t)$ is \begin{equation} \phi(t) = \frac{1}{4 p q} \Bigg [ \frac{(p t- q)^2}{p t+ q} - (2 p - 1)^2 \Bigg]. \end{equation} Furthermore, $\phi(t)$ is defined for all $t>0$, is convex ($\phi''(t) = \frac{2pq}{(pt+q)^3} > 0$), and satisfies $\phi(1) = 0$. This is consistent with the requirements of the definition of an $f$-divergence \cite{csiszar04}. Indeed, for the special case of $p=\frac{1}{2}$, the divergence in (\ref{eqn:div_HP}) becomes the symmetric $\chi^2$ $f$-divergence in \cite{Cha07} and is similar to the Rukhin $f$-divergence in \cite{Ruhkin94}. \section{Bounds on Bayes Classification Error} \label{sec:BER} In this section, we show how $D_p$ in (\ref{eqn:div_HP}) can be used to bound the Bayes error rate (BER) for binary classification. Further, we show that, under certain conditions, this bound is tighter than the well-known Bhattacharyya bound commonly used in the machine learning literature and can be empirically estimated from data. Before deriving the error bounds, for notational convenience, we introduce a slightly modified version of the divergence measure in (\ref{eqn:div_HP}), \begin{align} \label{eq:DHPh} \tilde{D}_{p}(f,g) & =1-4pq\int \frac{ f(\mathbf{x}) g(\mathbf{x})}{p f(\mathbf{x}) + qg(\mathbf{x})} d\mathbf{x} \\ & = \int \frac{(pf(\mathbf{x}) - qg(\mathbf{x}))^2}{pf(\mathbf{x}) + qg(\mathbf{x})} d\mathbf{x}. \nonumber \end{align} It is easy to see that $D_{p} = \frac{\tilde{D}_{p}}{4pq} - \frac{(p-q)^2}{4pq}$ and, when $p=q=0.5$, $D_{p} = \tilde{D}_{p}$. While this function no longer satisfies $\tilde{D}_{p}(f,g) = 0$ for $f=g$ (unless $p=q$), and is therefore no longer a valid divergence measure, it greatly simplifies the notation of the ensuing error bounds.
As with $D_p$, we can estimate this quantity using the FR test statistic since, under the same conditions as those in Theorem 1, \begin{equation} 1 -2\frac{\mathcal{C}(\mathbf{X}_f, \mathbf{X}_g)}{N_f + N_g} \rightarrow \tilde{D}_{p}(f,g). \end{equation} Given a binary classification problem with labels $y \in \{0, 1\}$ and $\mathbf{x}$ drawn from $f_{\mathrm{S}}(\mathbf{x})$, we denote the conditional distributions for both classes as $f_0(\mathbf{x}) = f_\mathrm{S}(\mathbf{x} | y=0)$ and $f_1(\mathbf{x}) = f_\mathrm{S}(\mathbf{x} | y=1)$. We draw samples from these distributions with probability $p$ and $q=1-p$, respectively, and formulate two data matrices denoted by $\mathbf{X}_0 \in \Reals^{N_0 \times d}$ and $\mathbf{X}_1 \in \Reals^{N_1 \times d}$. The Bayes error rate associated with this problem is given in (\ref{eq:BER}). In Theorem \ref{thm:BER} below, we show that we can bound this error from above and below using the divergence measure introduced in the previous section. The proof of this theorem can be found in Appendix B. \vspace{0.3cm} \begin{mydef} For two distributions, $f_0({\mathbf{x}})$ and $f_1({\mathbf{x}})$, with prior probabilities $p$ and $q$ respectively, the Bayes error rate, $\epsilon^{\mathrm{Bayes}}$, is bounded above and below as follows: \begin{equation*} \frac{1}{2} - \frac{1}{2}\sqrt{\tilde{D}_{p}(f_0,f_1)} \leq \epsilon^{\mathrm{Bayes}} \leq \frac{1}{2} - \frac{1}{2}\tilde{D}_{p}(f_0,f_1). \end{equation*} \label{thm:BER} \end{mydef} Combining the results of Theorem \ref{thm:HP} with the results of Theorem \ref{thm:BER}, we see that we can approximate the lower and upper bounds on the BER from the data matrices $\mathbf{X}_0$ and $\mathbf{X}_1$ as \begin{equation*} \frac{1}{2} - \frac{1}{2}\sqrt{\tilde{D}_{p}(f_0,f_1)} \approx \frac{1}{2} - \frac{1}{2}\sqrt{1-2 \frac{\mathcal{C}(\mathbf{X}_0,\mathbf{X}_1)}{N_0 + N_1}}, \end{equation*} and \begin{equation*} \frac{1}{2} - \frac{1}{2} \tilde{D}_{p}(f_0,f_1) \approx \ \frac{\mathcal{C}(\mathbf{X}_0,\mathbf{X}_1)}{N_0 + N_1}. \end{equation*} The derived bound is tight for the case $p=q=\frac{1}{2}$. For $f_0(\mathbf{x}) = f_1(\mathbf{x})$, the BER is 0.5. Under these conditions, $\tilde{D}_{p}(f_0,f_1) = 0$, and both the upper and lower bound in Theorem \ref{thm:BER} go to 0.5. For the case where $f_0(\mathbf{x})$ and $f_1(\mathbf{x})$ are completely separable, the BER is 0, $\tilde{D}_{p}(f_0,f_1) = 1$, and both the upper and lower bound go to 0. \subsection{Relationship to the Chernoff Information Bound} Here we compare the tightness of the bounds on the Bayes error rate based on $D_p$ to the bounds based on the Chernoff information function (CIF) \cite{hero01}, defined as \[ I_{\alpha}(f_0, f_1) = \int p^{\alpha}f_0^{\alpha}(\mathbf{x}) q^{1-\alpha}f_1^{1-\alpha}(\mathbf{x}) d\mathbf{x}. \] In Theorem \ref{thm:HPChernoff}, we derive an important relationship between the affinity measure, $A_p(f_0, f_1)$, and a scaled version of the CIF. The proof of this theorem can be found in Appendix C. \vspace{0.3cm} \begin{mydef} The affinity measure, $A_p(f_0, f_1)$, is a lower bound for a scaled version of the Chernoff information function: \begin{equation*} A_p(f_0,f_1)\leq \int f_0^q (\mathbf{x})f_1^p(\mathbf{x}) d\mathbf{x}. \end{equation*} \label{thm:HPChernoff} \end{mydef} \vspace{0.1cm} It is important to note that the right-hand side in Theorem \ref{thm:HPChernoff} is exactly equal to the CIF for $\alpha = p = q = 1/2$.
For this special case, the Chernoff bound reduces to the Bhattacharyya (BC) bound, a widely-used bound on the Bayes error in machine learning that has been used to motivate and develop new algorithms \cite{kailath1967divergence, saon01, xuan06}. The popularity of the BC bound is mainly due to the fact that closed-form expressions for the bound exist for many commonly used distributions. Let us define the Bhattacharyya coefficient as: \begin{equation} BC(f_0,f_1) = 2 \int \sqrt{pq f_0(\mathbf{x}) f_1(\mathbf{x})} d\mathbf{x}. \end{equation} The well-known Bhattacharyya bound on the BER is given by \begin{equation} \label{eqn:bcbound} \frac{1}{2}-\frac{1}{2}\sqrt{1-BC^2(f_0,f_1)} \leq \epsilon^{\mathrm{Bayes}} \leq \frac{1}{2} BC(f_0,f_1). \end{equation} In Theorem \ref{thm:tightness} below, we show that, for equiprobable classes, the $D_p$ bound provides tighter upper and lower bounds on the BER than the bounds based on the BC coefficient, under all separability conditions. The proof of this theorem can be found in Appendix D. \vspace{0.3cm} \begin{mydef} For $p = q = \frac{1}{2}$, the $D_p$ upper and lower bounds on the Bayes error rate are tighter than the Bhattacharyya bounds: \begin{equation} \begin{aligned} \frac{1}{2}-\frac{1}{2}\sqrt{1-BC^2(f_0,f_1)} \leq & \frac{1}{2} - \frac{1}{2}\sqrt{\tilde{D}_{\frac{1}{2}}(f_0,f_1)} \\ \leq \epsilon^{\mathrm{Bayes}} \leq \frac{1}{2} - & \frac{1}{2}\tilde{D}_{\frac{1}{2}}(f_0,f_1) \leq \frac{1}{2}BC(f_0,f_1). \nonumber \end{aligned} \label{eq:thm3} \end{equation} \label{thm:tightness} \end{mydef} \vspace{0.1cm} Using asymptotic analysis of the Chernoff exponent, for small differences between the two classes, it was shown that $\alpha=\frac{1}{2}$ results in the tightest bound on the probability of error; this corresponds to the bound in (\ref{eqn:bcbound}) \cite{hero01}. Using a variant of this analysis, we derive a local representation of the CIF and relate it to the divergence measure proposed here. In particular, if we let \begin{align*} pf_0(\mathbf{x}) & = \frac{1}{2}(pf_0(\mathbf{x}) + qf_1(\mathbf{x})) + \frac{1}{2}(pf_0(\mathbf{x}) - qf_1(\mathbf{x})) \\ & =f_{\frac{1}{2}} (\mathbf{x}) (1+\frac{1}{2}\Delta_{\mathbf{x}}), \end{align*} where $f_{\frac{1}{2}} (\mathbf{x}) =\frac{1}{2}(pf_0(\mathbf{x}) + qf_1(\mathbf{x}))$ and $\Delta_{\mathbf{x}} = (pf_0(\mathbf{x}) - qf_1(\mathbf{x}))/f_{\frac{1}{2}} (\mathbf{x})$. Similarly, \begin{equation*} qf_1(\mathbf{x}) = f_{\frac{1}{2}}(\mathbf{x})(1-\frac{1}{2}\Delta_{\mathbf{x}}).
\end{equation*} As in \cite{hero01}, after a Taylor series expansion around $p^{\alpha}f_0^{\alpha}(\mathbf{x})$ and $q^{1-\alpha}f_1^{1-\alpha}(\mathbf{x})$, the Chernoff information function can be expressed as (see proof of Proposition 5 in \cite{hero01}): \begin{align} \nonumber I_{\alpha}(f_0, f_1) & = \int f_{\frac{1}{2}}(\mathbf{x}) \bigg [ 1 - (2\alpha-1)\frac{\Delta_{\mathbf{x}}}{2} \\ \nonumber & \ \ \ \ \ - \alpha(1-\alpha) \left( \frac{\Delta_{\mathbf{x}}}{2} \right)^2 + o(\Delta_{\mathbf{x}}^3) \bigg ] d\mathbf{x} \\ \nonumber & = \int f_{\frac{1}{2}}(\mathbf{x})\, d\mathbf{x} - (2\alpha - 1) \int f_{\frac{1}{2}}(\mathbf{x}) \frac{\Delta_{\mathbf{x}}}{2}\, d\mathbf{x} \\ \nonumber & \ \ \ \ \ -\alpha(1-\alpha) \int f_{\frac{1}{2}}(\mathbf{x}) \left( \frac{\Delta_{\mathbf{x}}}{2} \right)^2 d\mathbf{x} + o(\Delta^2)\\ \nonumber & = \frac{1}{2} - (2\alpha-1)(2p-1)/2 \\ \nonumber & \ \ \ \ \ - \frac{\alpha(1-\alpha)}{2}\int \frac{(pf_0(\mathbf{x}) - qf_1(\mathbf{x}))^2}{pf_0(\mathbf{x}) + qf_1(\mathbf{x})} d\mathbf{x} + o(\Delta^2) \\ \nonumber & = (p+\alpha) - 2\alpha p - \frac{\alpha(1-\alpha)}{2} \tilde{D}_{p}(f_0,f_1) + o(\Delta^2) \nonumber \end{align} The local equivalence of $D_p$ and $I_\alpha$ is not surprising, since all $f$-divergences are locally equivalent (they induce the same Riemann-Fisher metric on the manifold of densities) \cite{csiszar04}. This useful property allows us to estimate the CIF for small differences between $f_0$ and $f_1$ using the MST procedure in Section \ref{sec:divergence}. Further, we can express the BER in terms of the CIF: \begin{equation} \epsilon^{\mathrm{Bayes}} \leq I_{\alpha} \approx (p+\alpha) - 2\alpha p - \frac{\alpha(1-\alpha)}{2} \tilde{D}_{p}(f_0,f_1). \nonumber \end{equation} For $p = q = \frac{1}{2}$, this bound reduces to $\epsilon^{\mathrm{Bayes}} \leq \frac{1}{2} - \frac{\alpha(1-\alpha)}{2} \tilde{D}_{\frac{1}{2}}(f_0,f_1)$. This is very similar to the upper bound in Theorem \ref{thm:BER}, differing only in the scale of the second term. Further, it is easy to see from this that the bound in Theorem \ref{thm:BER} is tighter than the Chernoff bound, since $\frac{\alpha(1-\alpha)}{2} < \frac{1}{2}$ for all $\alpha$. This is not surprising since, locally, $\alpha=0.5$ yields the tightest bounds on the BER \cite{hero01}. This corresponds to the BC bound in (\ref{eqn:bcbound}), and we have already shown that the new bound is tighter than the BC bound in Theorem \ref{thm:tightness}. This analysis further confirms that result. In addition to providing tighter bounds on the BER, the new $D_p$ bound can be estimated without ever explicitly computing density estimates. We provide a numerical example for comparison. We consider two data samples from two classes, each of which comes from a bivariate normal distribution with varying mean and spherical unit variance. The separation in means between the two class distributions is increased incrementally across 150 trials. The two distributions completely overlap initially, and are almost entirely separated by the final trial. In each trial we calculate the BER analytically using (\ref{eq:BER}), as well as the upper and lower bounds introduced in Theorem \ref{thm:BER}. We calculate the bounds both analytically (through numerical integration) and empirically (using the results from Theorem \ref{thm:HP}).
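The empirical computation is immediate once the FR cross count is available. Reusing the hypothetical \texttt{frCrossCount} helper from the sketch in Section \ref{sec:divergence}, the bounds of Theorem \ref{thm:BER} reduce to a few lines (again a minimal sketch, not a library routine):
\begin{lstlisting}[language=C++, basicstyle=\small\ttfamily, frame=leftline]
#include <cmath>
#include <cstddef>

struct BerBounds { double lower, upper; };

// Empirical BER bounds of Theorem 2 from the FR cross count C
// computed on X_0 and X_1 (n0 and n1 samples, respectively).
BerBounds berBoundsFromFR(int C, std::size_t n0, std::size_t n1) {
  double dTilde = 1.0 - 2.0 * double(C) / double(n0 + n1);
  if (dTilde < 0.0) dTilde = 0.0;  // guard against small-sample noise
  return {0.5 - 0.5 * std::sqrt(dTilde),   // lower bound on the BER
          0.5 - 0.5 * dTilde};             // upper bound, i.e., C/(n0+n1)
}
\end{lstlisting}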
In order to demonstrate the tightness of this bound, we also plot it against the upper and lower Bhattacharyya error bounds for Gaussian data (the closed-form expression of the bound for Gaussian data is known) \cite{kailath1967divergence}. Figure \ref{fig:BayesBound} displays the true BER along with both error bounds as a function of the Euclidean separation between the means of two bivariate normal distributions of unit variance. We see in this plot that the proposed error bounds are noticeably tighter than the Bhattacharyya error bounds and are well correlated with the true BER. Although the analytically calculated $D_p$ bound never crosses the BC bound, the empirically estimated $D_p$ bound crosses the BC bound for small values of the mean separation. This is due to the variance of the estimator. It is important to note that the estimator used here {\em asymptotically} converges to the $D_p$ divergence; however, this result does not necessarily extend to finite data. In fact, for any fixed estimator, there exists a distribution for $X$ and $y$ such that the error converges arbitrarily slowly \cite{antos99}. \begin{figure} \centerline{\includegraphics[width=.5\textwidth]{Bayesbound.eps}} \caption{The $D_p$ and BC bounds on the Bayes error rate for a bivariate Gaussian example.} \label{fig:BayesBound} \end{figure} \section{Bounds on the Domain Adaptation Error} \label{sec:DAbounds} In this section, we consider a cross-domain binary classification problem and show how the $D_p$ distance can be used to bound the error rate in this setting as well. Let us define data from two domains, the source (training) domain and the target (testing) domain, with corresponding labeling functions $y_{\mathrm{S}}(\mathbf{x}), y_{\mathrm{T}}(\mathbf{x}) \in \{0, 1\}$ that yield the true class label of a given data point $\mathbf{x}$. The source domain, denoted by the pair $(\mathbf{X}_{\mathrm{S}}, y_{\mathrm{S}})$, represents the data used to train the machine learning algorithm, and the data $(\mathbf{X}_{\mathrm{T}}, y_{\mathrm{T}})$ represents the data the algorithm will encounter once deployed. Let us further define the conditional distributions $f_{\mathrm{S},0}(\mathbf{x}) = f_{\mathrm{S}}(\mathbf{x} | y_{\mathrm{S}}(\mathbf{x}) = 0)$ and $f_{\mathrm{S},1}(\mathbf{x}) = f_{\mathrm{S}}(\mathbf{x} | y_{\mathrm{S}}(\mathbf{x}) = 1)$. The rows of the source and target data are drawn from $f_{\mathrm{S}}(\mathbf{x})$ and $f_{\mathrm{T}}(\mathbf{x})$. The risk, i.e., the probability that the decision rule $h$ disagrees with the true label, is defined as \begin{equation} \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}}) = \mathbf{E}_{f_{\mathrm{S}}(\mathbf{x})} [|h(\mathbf{x}) - y_{\mathrm{S}} |] \end{equation} for the source data. It is similarly defined for the target data. In Theorem \ref{thm:DAbound}, we identify a relationship between the error rates on the source and target data. The proof of this theorem can be found in Appendix E.
\vspace{0.3cm} \begin{mydef} Given a hypothesis, $h$, the target error, $\epsilon_{\mathrm{T}}(h, y_{\mathrm{T}})$, can be bounded by the error on the source data, $\epsilon_{\mathrm{S}} (h, y_{\mathrm{S}})$, the difference between labels, and a distance measure between source and target distributions as follows: \begin{align} \epsilon_{\mathrm{T}} (h, y_{\mathrm{T}}) \leq & \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}}) + \mathbf{E}_{f_{\mathrm{S}}(\mathbf{x})} [|y_{\mathrm{S}} - y_{\mathrm{T}} |] \\ \nonumber + & 2\sqrt{\tilde{D}_{\frac{1}{2}} ( f_{\mathrm{S}}, f_{\mathrm{T}} )}, \end{align} where $\tilde{D}_{\frac{1}{2}} ( f_{\mathrm{S}}, f_{\mathrm{T}} )$ assumes equiprobable data from the source and target distributions. \label{thm:DAbound} \end{mydef} \vspace{0.3cm} The bound in Theorem \ref{thm:DAbound} depends on three terms: the error on the source data, the expected difference in the labeling functions across the two domains, and a measure of the distance between source and target distributions (the $D_p$ distance). We expect that the selected training algorithm will seek to minimize the first term; the second term characterizes the difference between labeling functions in the source and target domains; the third term is of particular interest to us, since it provides a means of bounding the error on the {\em target} data as a function of the distance between source and target distributions. In the {\em covariate shift} scenario, we assume that there exists no difference between labeling functions (i.e., $y_{\mathrm{S}} (\mathbf{x}) = y_{\mathrm{T}} (\mathbf{x})$) and only the distributions of the source and target data change \cite{ben2010theory}. Under this assumption, the bound in Theorem \ref{thm:DAbound} reduces to \begin{equation} \epsilon_{\mathrm{T}} (h, y_{\mathrm{T}}) \leq \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}}) + 2\sqrt{\tilde{D}_{\frac{1}{2}} ( f_{\mathrm{S}}, f_{\mathrm{T}} )}. \label{eq:BD} \end{equation} Furthermore, if we assume that the decision rule $h$ attains the Bayes error rate, $\epsilon^{\mathrm{Bayes}}$, on the source domain, we can use the results from Theorem \ref{thm:BER} to rewrite the bound in Theorem \ref{thm:DAbound} using only the $D_p$ distance: \begin{equation} \epsilon_{\mathrm{T}} \leq \frac{1}{2} - \frac{1}{2}\tilde{D}_{p} ( f_{\mathrm{S},0}, f_{\mathrm{S},1} ) + 2\sqrt{\tilde{D}_{\frac{1}{2}} ( f_{\mathrm{S}}, f_{\mathrm{T}} )}. \label{eq:et1} \end{equation} If we denote the training data matrices by $\mathbf{X}_{\mathrm{S},0} \sim f_{\mathrm{S},0}$ and $\mathbf{X}_{\mathrm{S},1}\sim f_{\mathrm{S},1}$, then we can estimate this upper bound using the FR test statistic by \begin{equation} \frac{\mathcal{C}(\mathbf{X}_{\mathrm{S},0},\mathbf{X}_{\mathrm{S},1})}{N_{\mathrm{S},0} + N_{\mathrm{S},1}} + 2\sqrt{1-2\frac{\mathcal{C}(\mathbf{X}_{\mathrm{S}},\mathbf{X}_{\mathrm{T}})}{N_{\mathrm{S}} + N_{\mathrm{T}}}}. \label{eq:et2} \end{equation} The result shown in (\ref{eq:et2}) represents an upper bound on the target-domain error that can be computed without access to any labels in this domain. This bound provides interesting insight into the importance of invariant representations for classification. The target error is bounded by the sum of the affinity between class distributions in the source domain and the square root of the $D_p$-distance between domains. Because of the square root and the multiplicative factor, it is clear that the second term in (\ref{eq:et2}) is weighted much more heavily.
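As a sketch of how (\ref{eq:et2}) can be evaluated in practice, again reusing the hypothetical \texttt{frCrossCount} helper from Section \ref{sec:divergence}, consider:
\begin{lstlisting}[language=C++, basicstyle=\small\ttfamily, frame=leftline]
#include <cmath>
#include <cstddef>

// Label-free upper bound on the target error, following (eq:et2):
// crossSrc = FR count between the two source classes (nS0, nS1 samples),
// crossDom = FR count between pooled source and target samples (nS, nT).
double daErrorBound(int crossSrc, std::size_t nS0, std::size_t nS1,
                    int crossDom, std::size_t nS, std::size_t nT) {
  double first = double(crossSrc) / double(nS0 + nS1);
  double inner = 1.0 - 2.0 * double(crossDom) / double(nS + nT);
  if (inner < 0.0) inner = 0.0;  // guard against small-sample noise
  return first + 2.0 * std::sqrt(inner);
}
\end{lstlisting}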
This observation stresses the importance of invariant representations in classification. In other words, the bound provides a means of quantifying the relative importance of selecting features that are invariant across domains versus features that provide good separation between classes in the source domain. \section{Numerical Results and Practical Algorithms} \label{sec:Practical Algorithms} Here, we describe a number of numerical experiments that evaluate the bounds in a classification setting. In the first experiment, we evaluate the tightness of the bound on the Bayes error rate in higher dimensions by comparing against two other bounds for an example where the Bayes error rate is known in closed form. In the second and third experiments, we develop new criteria for feature selection based on the derived bounds and compare the probability of correct classification against competing alternatives. \subsection{Bounding the Bayes Error Rate} Consider the two data sets $\mathcal{D}_1$ and $\mathcal{D}_2$ in Table \ref{tbl:BERDataSets}, each consisting of data from two 8-dimensional Gaussian distributions. In \cite{fukunaga90}, Fukunaga computed the true Bayes error rate analytically for both of these data sets. Here we compare three different bounds on this error for both data sets: the $D_p$-based bound, the Mahalanobis bound, and the BC bound. We use the closed-form version of the BC and Mahalanobis bounds for Gaussian data \cite{fukunaga90}. Furthermore, we assume perfect knowledge of the parameters for these two bounds ($\sigma$ and $\mu$). As a result, this is the best possible case for both of these bounds: the data matches the model and no estimation of the parameters is required. \renewcommand{\tabcolsep}{5pt} \begin{table} \caption{Parameters for two 8-dimensional Gaussian data sets for which the Bayes error rate is known (from \cite{fukunaga90})} \begin{tabular}{c c | c c c c c c c c } \hline \multirow{4}{*}{$\mathcal{D}_1$} & $\mu_1$ & 0& 0& 0& 0& 0& 0& 0& 0 \\ & $\sigma_1$ & 1& 1& 1& 1& 1& 1& 1& 1 \\ & $\mu_2$ & 2.56 & 0& 0& 0& 0& 0& 0& 0 \\ & $\sigma_2$ & 1& 1& 1& 1& 1& 1& 1& 1 \\ \hline \multirow{4}{*}{$\mathcal{D}_2$} & $\mu_1$ & 0& 0& 0& 0& 0& 0& 0& 0 \\ & $\sigma_1$ & 1& 1& 1& 1& 1& 1& 1& 1 \\ & $\mu_2$ & 3.86 & 3.10 & 0.84 & 0.84 & 1.64 & 1.08 & 0.26& 0.01 \\ & $\sigma_2$ & 8.41 & 12.06& 0.12 & 0.22 & 1.49 & 1.77& 0.35 & 2.73 \\ \hline \end{tabular} \label{tbl:BERDataSets} \end{table} For both data sets $\mathcal{D}_1$ and $\mathcal{D}_2$, we evaluate the $D_p$-based upper bound between the two distributions using the graph-based method outlined in Section \ref{sec:divergence} for three different sample sizes (100, 500, and 1000 samples; 50 Monte Carlo simulations each). We compare the $D_p$ bound (computed from empirical data without assuming any parametric model of the data distribution) with the Bhattacharyya bound and the Mahalanobis bound. For both data sets, the average $D_p$-based bound is closer to the true error rate, regardless of the sample size. Again, it is important to stress that this is the best-case scenario for the competing bounds, since there exist closed-form expressions for both bounds for Gaussian data and we assume perfect knowledge of the distribution parameters. Regardless, the empirically estimated $D_p$ bound is still tighter.
\begin{table} \caption{Comparing upper bounds on the Bayes error rate for the multivariate Gaussians defined in Table \ref{tbl:BERDataSets}.} \centering \begin{tabular}{l c c } \hline\hline & Data 1 & Data 2 \\ [0.5ex] \hline Actual Bayes Error & 10\% & 1.90\% \\ Mahalanobis Bound &18.95\%& 14.13\% \\ Bhattacharyya Bound &22.04\% & 4.74\% \\ $D_p$ Bound (100 points) & \bf{18.23\%} $\pm$ \bf{3.32\%} & \bf{4.10\%} $\pm$ \bf{1.10\%} \\ $D_p$ Bound (500 points) & \bf{16.88\%} $\pm$ \bf{1.51\%} & \bf{2.17\%} $\pm$ \bf{0.42\%} \\ $D_p$ Bound (1000 points) & \bf{16.46\%} $\pm$ \bf{1.14\%}& \bf{1.94\%} $\pm$ \bf{0.29\%} \\ \hline \end{tabular} \label{table:nonlin} \end{table} \subsection{Feature Selection using $D_p$-distance} In machine learning, feature selection algorithms are often used to reduce model complexity and prevent over-fitting \cite{liu2007computational}. In many scenarios, feature selection can actually improve model performance, since the reduced dimensionality leads to a much more densely populated hypothesis space. This prevents the model from learning irrelevant patterns in the training data that are not pertinent to a given task and will not generalize to new datasets. This problem is exacerbated in domain adaptation problems, where the separation in domains makes misleading patterns in the training data especially problematic. We use the bounds defined in Theorems \ref{thm:BER} and \ref{thm:DAbound} to develop new feature selection criteria that aim to directly minimize the BER bound. We consider two different scenarios: (1) one where the training data and the test data come from the same distribution and (2) another where the training data and the test data come from different distributions. For both scenarios, we seek to identify the subset of features, $\Omega$, that will minimize the ``worst-case'' error. For scenario 1, this results in minimizing the upper bound in Theorem 2: \begin{equation} \varPhi(\Omega) = \frac{\mathcal{C}(\mathbf{X}_0(\Omega),\mathbf{X}_1(\Omega))}{N_0+N_1}, \label{eq:crit1} \end{equation} and, for scenario 2, we minimize the DA bound defined in Theorem \ref{thm:DAbound}: \begin{equation} \begin{aligned} \varPhi(\Omega) &= \frac{\mathcal{C}(\mathbf{X}_{\mathrm{S},0}(\Omega ),\mathbf{X}_{\mathrm{S},1}(\Omega))}{N_{\mathrm{S},0}+N_{\mathrm{S},1}} \\ &+2\sqrt{1-2\frac{\mathcal{C}(\mathbf{X}_{\mathrm{S}}(\Omega),\mathbf{X}_{\mathrm{T}}(\Omega))}{N_{\mathrm{S}}+N_{\mathrm{T}}}}\end{aligned}. \label{eq:crit2} \end{equation} We integrate the optimization criteria into a forward selection search algorithm in Alg. \ref{alg:DAFR feature selection algorithm}. In this algorithm, we use a parameter $\alpha$ to determine whether or not the algorithm should account for the separation between domains. For traditional machine learning problems, $\alpha$ should be set to $0$. For domain adaptation problems, $\alpha$ is set to $1$ to minimize the error upper bound, or tuned based on the importance of minimizing the separation between domains. We set $\alpha$ to $1$ for all DA experiments reported in this paper; this corresponds directly to the bound in Theorem \ref{thm:DAbound}.
\begin{algorithm} [t] \caption{Forward selection algorithm using $D_p$-distance} \label{alg:DAFR feature selection algorithm} \begin{algorithmic} \State \bf{Input:} \rm{Feature data from two different classes in the source} \State \, \, \, \, \, \, \, \rm{domain and unlabeled data from the target} \State \, \, \, \, \, \, \, \rm{domain:} $\mathbf{X}_{\mathrm{S},0}$, $\mathbf{X}_{\mathrm{S},1}$, $\mathbf{X}_{\mathrm{T}}$, $\alpha$ \State \bf{Output:} \rm{Top $k$ features that minimize $\varPhi$:} \State \, \, \, \, \, \, \, \, \, $\Omega$ \State\bf{Define:} \, $\Omega = \emptyset$ \State \, \, \, \, \, \, \, $F = {1 \dots M}$ \State \, \, \, \, \, \, \, $\mathbf{X}_{\mathrm{S}} = \mathbf{X}_{\mathrm{S},0} \cup \mathbf{X}_{\mathrm{S},1}$ \For {$j \in {1 \dots k}$} \State $\varPhi = \emptyset$ \For {$F_i \in F \setminus \Omega$} \State $ \begin{aligned} \varPhi(F_i) &= \frac{\mathcal{C}(\mathbf{X}_{\mathrm{S},0}(\Omega \cup F_i),\mathbf{X}_{\mathrm{S},1}(\Omega \cup F_i))}{N_{\mathrm{S},0}+N_{\mathrm{S},1}} \\ &+2 \alpha \sqrt{1-2 \frac{\mathcal{C}(\mathbf{X}_{\mathrm{S}}(\Omega \cup F_i),\mathbf{X}_{\mathrm{T}}(\Omega \cup F_i))}{N_{\mathrm{S}}+N_{\mathrm{T}}}} \end{aligned} $ \EndFor \State $\Omega = \Omega \cup \{\underset{F_i}{\text{argmin }} \varPhi(F_i)\}$ \EndFor \end{algorithmic} \end{algorithm} We empirically evaluate the feature selection algorithm on a pathological speech database recorded from patients with neurogenic disorders. In particular, we consider the problem of classifying between healthy and dysarthric speech. Dysarthria is a motor speech disorder resulting from an underlying neurological injury. We make use of data collected in the Motor Speech Disorders Laboratory at Arizona State University, consisting of 34 dysarthric speakers and 13 healthy speakers (H). The dysarthric speakers included: 12 speakers with ataxic dysarthria secondary to cerebellar degeneration (A), 10 speakers with mixed flaccid-spastic dysarthria secondary to amyotrophic lateral sclerosis (ALS), 8 speakers with hypokinetic dysarthria secondary to Parkinson's disease (PD), and 4 speakers with hyperkinetic dysarthria secondary to Huntington's disease (HD). Each patient provided speech samples, including a reading passage, phrases, and sentences. The speech database consists of approximately 10 minutes of recorded material per speaker. These speech samples were taken from the larger pathological speech database described in \cite{Liss14}. The recordings from each speaker were split into individual sentences by hand, and features were extracted at the sentence level. Three different feature sets were used: envelope modulation spectrum (EMS) features, long-term average spectrum (LTAS) features, and ITU-T P.563 features. EMS is a representation of the slow amplitude modulations in a signal and captures aspects of the speech signal related to rhythm. The LTAS features capture atypical average spectral information in the signal. The P.563 features measure atypical and unnatural voice and articulatory quality. For a more detailed discussion of these features, we refer the reader to \cite{berisha2014modeling}. In our first experiment, we evaluate the FS algorithm based on the criterion in (\ref{eq:crit1}). We consider the problem of discriminating between healthy and dysarthric speech based on the features discussed above.
For this experiment we form both the training and test sets by randomly drawing 300 dysarthric speech samples and 300 healthy speech samples for each set, ensuring that there is no overlap between training and test data. Using the FS algorithm in Alg. \ref{alg:DAFR feature selection algorithm}, we use the training data to find the top 20 features that maximize the separability between the two groups. We compare this feature selection algorithm against one based on maximizing the Bhattacharyya distance between classes. Using the feature subsets chosen by the two algorithms, we build support vector machine (SVM) classifiers on the training data and evaluate their accuracy on the test data. This experiment is repeated ten times using different randomly generated training and test sets, and the average accuracy is displayed in Figure \ref{fig:ACCdys}. The results of this experiment indicate that the initial features selected by the $D_p$-distance criteria provide faster convergence to the maximum classification rate when compared to those selected by the BC criteria; however, as expected, as additional features are selected, both algorithms eventually converge to roughly the same level of performance. We purposefully restrict ourselves here to a very limited training set (300 samples per class) in order to evaluate the $D_p$-distance in a small $N$ setting. Next, we consider the same problem but with a variable number of training samples per class. The results of this experiment are presented in Table \ref{tab:results}. As the number of training instances increases, the classifier success rate increases for the $D_p$-based method, whereas it stays relatively flat for the BC-based method. For very small values of $N$, the bias/variance associated with the $D_p$-distance estimator seems to result in features that provide poorer separability when compared to the BC method. Given that the guarantees for this estimator are asymptotic, this is expected. As the number of features increases, both the $D_p$ and BC algorithms converge to approximately the same value. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{DpSimPlot} \caption{Average classification accuracy using reduced feature sets} \label{fig:ACCdys} \end{figure} \begin{table} \centering \caption{Average classification accuracies (in percent) of the top features selected by $D_p$ and BC divergence} \begin{tabular}{ c c c c c c c } \hline \hline \bf{Number of} &\bf{Algorithm} & \multicolumn{5}{ c }{\bf{Number of Training Instances}} \\ \bf{Features} & \multicolumn{1}{c} {} & {\em 100} & {\em 200} & {\em 300} & {\em 400} & {\em 500} \\ \hline \multirow{2}{*}{10} & \pbox{25cm}{BC} & \textbf{86.88} & 86.93 & 87.61 & 87.98 & 87.22 \\ & \pbox{25cm}{$D_p$} & 86.36 & \textbf{88.67} & \textbf{89.59} & \textbf{89.20} & \textbf{90.03} \\ \multirow{2}{*}{15} & \pbox{25cm}{BC} & \textbf{90.84} & 90.46 & 90.51 & 91.69 & 90.88 \\ & \pbox{25cm}{$D_p$} & 88.08 & \textbf{90.66} & \textbf{92.00} & \textbf{92.12} & \textbf{92.72} \\ \multirow{2}{*}{20} & \pbox{25cm}{BC} & \textbf{91.10} & \textbf{93.02} & \textbf{93.35} & \textbf{93.98} & 93.72 \\ & \pbox{25cm}{$D_p$} & 89.28 & 92.15 & 93.20 & 93.41 & \textbf{94.21} \\ \hline \end{tabular} \label{tab:results} \end{table} Next we would like to investigate the efficacy of the FS criterion (\ref{eq:crit2}) in a domain adaptation setting.
We consider the same problem here -- discriminating between healthy and dysarthric individuals; however, now we train on data from one disorder and evaluate on data from another disorder. In order to partition the data into dissimilar training and test groups, we start by selecting 300 healthy instances for the training set and 300 (different) healthy instances for the test set. The rest of the training and test data is made up of 300 randomly selected samples from one of the four dysarthria subtypes: Ataxic, ALS, Huntington's, and Parkinson's. Each model is then evaluated on the test sets for each subtype not contained in the training set. Using each training set--test set combination, we generate feature subsets using the proposed selection algorithm, along with three competing algorithms that are used for comparison. The first algorithm we use for comparison is a standard forward selection algorithm based on the BC distance. This algorithm is used as a baseline for comparison; however, because it assumes the training and test data come from the same distribution \cite{guorong1996}, we expect it to perform poorly relative to the other algorithms. Next, we use the same Bhattacharyya FS algorithm, but we account for the separation in domains by using feature normalization, as described in \cite{Kinnunen2009}, prior to feature selection. We refer to this method as BC with feature normalization (BCFN). The final domain-invariant feature learning algorithm we compare against is based on Conditional Probability Models (CPM), as described in \cite{satpal2007domain}. This approach attempts to select a sparse mapping that maximizes an objective function that trades off between prediction algorithm performance and the distance between target and source distributions (controlled by a Lagrangian parameter $\lambda$). For classification, the logistic regression function is used and a penalization term is added to ensure that the mapping contains minimal contribution from features containing large differences between source and target data. For the specifics of the implementation, we refer the reader to \cite{satpal2007domain}. The same parameter settings are used here. Because this approach utilizes an optimization criterion involving a trade-off between the source-domain separation and the train-test separation, it resembles the proposed FS algorithm more closely than any other method proposed in the literature. We present the average classification accuracies yielded by the top 20 features from each FS algorithm for each train-test combination in Table \ref{tab:DysDAResults1}. The algorithm proposed in this paper achieved the highest classification accuracy in 8 of the 12 trials, while the BC algorithm scored the lowest in 8 of the 12 trials. The results clearly illustrate the importance of utilizing domain adaptation in this type of scenario; even an approach as simple as feature normalization yields roughly 8.5\% higher classification accuracy on average. To observe the value of the lower-dimensional subsets generated by each algorithm, we average the accuracy across all twelve trials and display the accuracy as a function of the number of features in Figure \ref{fig:ACCdysDA}. We can see in this figure that the performance of the proposed algorithm consistently improves as additional features are added. Because the optimization criterion we have selected minimizes the upper bound on the error, the algorithm has a tendency to pick ``safe'' features; e.g.,
using this algorithm, invariant features are preferred, even if they are less informative in the source domain. To better understand how DA helps us build robust models, we look at the top two features returned by the general and DA FS criteria proposed in this paper. Figure \ref{fig:FRScat1} displays the training and test data plotted across the top two features returned by the general FS criteria. We see that these two features represent a strong separation between the two classes in the training set; however, this separation is not similarly represented in the test data, and as a result these features will not be beneficial for the target application. Figure \ref{fig:FRDAScat1} displays the data plotted against the top two features returned by the DA FS criteria. Even though the separation between classes in the training data is not as noticeable as in the features returned by the general criteria, both dysarthria subtypes manifest themselves very similarly within this feature space, and as a result models built on them will generalize well between these two subtypes. \begin{table} \centering \caption{Classification accuracies of SVM classifier using the top 20 features returned by each feature selection method for each combination of training and test data} \begin{tabular}{ c c c c c c c} \hline \hline Trial & Source & Target & BC & BCFN & CPM & $D_p$ \\ \hline 1 & Ataxic & ALS & 56.50 & 73.28 & 75.82 & \textbf{76.22} \\ 2 & Ataxic & Huntington's & 56.83 & 72.52 & 70.12 & \textbf{75.12} \\ 3 & Ataxic & Parkinson's & 49.27 & 60.75 & 58.53 & \textbf{64.43} \\ 4 & ALS & Ataxic & 52.95 & 66.35 & 54.68 & \textbf{67.15} \\ 5 & ALS & Huntington's & 64.25 & \textbf{73.67} & 65.50 & 72.23 \\ 6 & ALS & Parkinson's & 54.32 & 65.97 & 69.48 & \textbf{73.60} \\ 7 & Huntington's & Ataxic & 49.95 & \textbf{53.63} & 43.00 & 49.30 \\ 8 & Huntington's & ALS & 63.40 & 64.12 & 63.17 & \textbf{73.00} \\ 9 & Huntington's & Parkinson's & 59.48 & 62.22 & 69.73 & \textbf{76.03} \\ 10 & Parkinson's & Ataxic & 41.13 & \textbf{55.65} & 42.15 & 48.23 \\ 11 & Parkinson's & ALS & 62.10 & 66.30 & 61.25 & \textbf{67.35} \\ 12 & Parkinson's & Huntington's & \textbf{73.67} & 71.12 & 64.47 & 68.98 \\ \hline \end{tabular} \label{tab:DysDAResults1} \end{table} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{avgacc2.eps} \caption{Average classification accuracy on foreign subtypes using reduced feature sets} \label{fig:ACCdysDA} \end{figure} \begin{figure} \begin{center} \subfloat[Source and target data using top domain-specific features]{ \includegraphics[width=0.45\textwidth]{FR_Scat_2_4} \label{fig:FRScat1} } \subfloat[Source and target data using top domain-invariant features]{ \includegraphics[width=0.45\textwidth]{FRDA_Scat_2_4} \label{fig:FRDAScat1} } \caption{Low dimensional representation of datasets (Source Domain: ALS, Target Domain: Parkinson's).} \label{fig:Scatter Plots} \end{center} \end{figure} \section{Conclusion} In this paper we showed that a nonparametric $f$-divergence bounds the Bayes classification error rate for two scenarios: the case where training and test data come from the same distribution and the case where training and test data come from different distributions. For the first case, we showed that the bound is tighter than the commonly used Bhattacharyya bound on the Bayes error.
Our experimental results confirm the theoretical findings: when used as a feature selection criterion in a pathological speech classification problem, the $D_p$-distance yields an improved classification rate with fewer features as compared against popular alternatives. Future work revolves around analyzing the estimator of the $D_p$-distance. In particular, understanding the convergence properties of the estimator as a function of the sample size and data dimension will yield insight into the fidelity of the estimation for any given data set. Furthermore, characterizing the bias and variance of this estimator may allow us to apply the ensemble estimator methods of \cite{moon2014multivariate} to improve estimation accuracy for high-dimensional feature spaces. \appendices \section{Proof of Theorem \ref{eqn:div_HP}} By combining Eq. (\ref{eqn:div_HP}) and (\ref{eqn:defineA}) from the text we can rewrite \begin{align} D_p & = \frac{1}{4pq} [1 - 4pq A_p(f,g) - (p-q)^2] \\ & = \frac{1- (p-q)^2}{4pq} - A_p(f,g) \\ & = 1 - A_p(f,g), \label{eqn:thm1proofDA} \end{align} where \begin{equation} \label{eqn:th1proof1} A_p = \int \frac{f(\mathbf{x})g(\mathbf{x})}{pf(\mathbf{x})+qg(\mathbf{x})} d\mathbf{x}. \end{equation} From Theorem 2 in \cite{henze1999multivariate}, we know that as $N_f \rightarrow \infty$ and $N_g \rightarrow \infty$ in a linked manner such that $\frac{N_f}{N_f+N_g}\rightarrow p$ and $\frac{N_g}{N_f+N_g}\rightarrow q$, \begin{equation} \label{eqn:thm1proofeq2} \frac{\mathcal{C}(f,g)}{N_f + N_g} \rightarrow 2pq A_p(f,g), \end{equation} almost surely. Combining the asymptotic relationship in Eq. (\ref{eqn:thm1proofeq2}) with the result in Eq. (\ref{eqn:thm1proofDA}), we see that \begin{equation} 1- \mathcal{C}(f,g) \frac{N_f + N_g}{2N_fN_g} \rightarrow D_p(f,g), \end{equation} almost surely as $N_f \rightarrow \infty$ and $N_g \rightarrow \infty$ in a linked manner such that $\frac{N_f}{N_f+N_g}\rightarrow p$ and $\frac{N_g}{N_f+N_g}\rightarrow q$. \section{Proof of Theorem \ref{thm:BER}} We begin with the realization that the Bayes error rate can be expressed in terms of the total variation (TV) distance between distributions \cite{kailath1967divergence}: \begin{equation} \epsilon^{Bayes}=\frac{1}{2}-\frac{1}{2}\int |pf(\mathbf{x})-qg(\mathbf{x})|d\mathbf{x}. \label{eq:BER1} \end{equation} Next, we show that we can bound the TV distance from above and below using $\tilde{D}_p$: \begin{subequations} \begin{align} \tilde{D}_p & = 1-4pqA_p(f,g) \\ &= 1 - 4pq\int \frac{f(\mathbf{x})g(\mathbf{x})}{pf(\mathbf{x})+qg(\mathbf{x})} d\mathbf{x}\\ &\begin{aligned}= &\int \left[pf(\mathbf{x})+qg(\mathbf{x}) \right]d\mathbf{x} \\&- 4pq\int \frac{f(\mathbf{x})g(\mathbf{x})}{pf(\mathbf{x})+qg(\mathbf{x})} d\mathbf{x}\end{aligned}\\ &= \int \frac{[pf(\mathbf{x})+qg(\mathbf{x})]^2-4p q f(\mathbf{x})g(\mathbf{x})}{pf(\mathbf{x})+qg(\mathbf{x})} d\mathbf{x} \\ &= \int \frac{p^2f(\mathbf{x})^2+q^2g(\mathbf{x})^2-2p q f(\mathbf{x})g(\mathbf{x})}{pf(\mathbf{x})+qg(\mathbf{x})} d\mathbf{x} \\ &= \int \frac{[pf(\mathbf{x})-qg(\mathbf{x})]^2}{pf(\mathbf{x})+qg(\mathbf{x})} d\mathbf{x} \\ &= \int \left|pf(\mathbf{x})-qg(\mathbf{x})\right| \frac{\left|pf(\mathbf{x})-qg(\mathbf{x})\right|}{pf(\mathbf{x})+qg(\mathbf{x})} d\mathbf{x} \label{eq:dist1f}.
\end{align} \label{eq:dist1} \end{subequations} Since \begin{equation} \frac{\left|pf(\mathbf{x})-qg(\mathbf{x})\right|}{pf(\mathbf{x})+qg(\mathbf{x})} \leq 1 \; \; \; \; \text{for all } \mathbf{x}, \end{equation} we can simplify (\ref{eq:dist1f}) to \begin{equation} 1-4pqA_p(f,g) \leq \int \left|pf(\mathbf{x})-qg(\mathbf{x})\right| d\mathbf{x}. \label{eq:A1} \end{equation} This provides a lower bound on the TV distance based on $\tilde{D}_p$. In order to derive the upper bound we begin with \begin{subequations} \begin{align} D_{\mathrm{TV}}(f,g)&=\int \left|pf(\mathbf{x})-qg(\mathbf{x})\right| d\mathbf{x} \\ &= \int \left|pf(\mathbf{x})-qg(\mathbf{x})\right| \frac{\sqrt{pf(\mathbf{x})+qg(\mathbf{x})}}{\sqrt{pf(\mathbf{x})+qg(\mathbf{x})}} d\mathbf{x}\\ &\begin{aligned}&\leq \sqrt{\int \left(\frac{pf(\mathbf{x})-qg(\mathbf{x})}{\sqrt{pf(\mathbf{x})+qg(\mathbf{x})}}\right)^2 d\mathbf{x}} \\ &\times \cancelto{1}{\sqrt{\int \left(\sqrt{pf(\mathbf{x})+qg(\mathbf{x})}\right)^2 d\mathbf{x}}} \end{aligned} \\ &\leq \sqrt{\tilde{D}_{p}(f,g)} \label{eq:TVbound2}. \end{align} \end{subequations} By combining the inequalities in (\ref{eq:A1}) and (\ref{eq:TVbound2}) with the relationship in (\ref{eq:BER1}), we see that we can bound the BER by \begin{equation} \frac{1}{2}-\frac{1}{2}\sqrt{\tilde{D}_{p}(f,g)} \leq \epsilon^{Bayes} \leq \frac{1}{2}-\frac{1}{2}\tilde{D}_{p}(f,g). \end{equation} \section{Proof of Theorem \ref{thm:HPChernoff}} By the (weighted) geometric--harmonic mean inequality, \begin{equation} f(\mathbf{x})^q g(\mathbf{x})^p \geq \frac{f(\mathbf{x})g(\mathbf{x})}{pf(\mathbf{x})+qg(\mathbf{x})}. \end{equation} Integrating both sides, it immediately follows that \begin{equation} A_p(f,g) \leq \int f(\mathbf{x})^q g(\mathbf{x})^p \, d\mathbf{x}, \label{eq:qpbound} \end{equation} which is a scaled Chernoff information function. \section{Proof of Theorem \ref{thm:tightness}} For equiprobable classes ($p=q=\frac{1}{2}$), the upper and lower bounds on the Bayes error rate based on the Bhattacharyya distance are defined by \cite{kailath1967divergence} \begin{equation} \frac{1-\sqrt{1-BC^2(f,g)}}{2} \leq \epsilon^{Bayes} \leq \frac{BC(f,g)}{2}, \label{eq:BB} \end{equation} where \begin{equation} BC(f,g)=\int \sqrt{f(\mathbf{x})g(\mathbf{x})}d\mathbf{x}. \end{equation} To show that the $\tilde{D}_{\frac{1}{2}}$ upper bound is tighter than the Bhattacharyya bound, we must show that $A_{\frac{1}{2}}(f,g) \leq BC(f,g)$. It is clear that this is the case from Theorem \ref{thm:HPChernoff}. For the $\tilde{D}_{\frac{1}{2}}$ lower bound to be tighter, $BC^2(f,g)$ must be less than or equal to $A_{\frac{1}{2}}(f,g)$. We show this to be true using the Cauchy-Schwarz inequality: \begin{subequations} \begin{align} BC^2(f,g)&=\left[\int \sqrt{f(\mathbf{x})g(\mathbf{x})}\,d\mathbf{x}\right]^2 \\ &=\left[ \int \frac{\sqrt{f(\mathbf{x})g(\mathbf{x})}}{\sqrt{\frac{1}{2}(f(\mathbf{x})+g(\mathbf{x}))}}\sqrt{\frac{1}{2}(f(\mathbf{x})+g(\mathbf{x}))}d\mathbf{x}\right]^2 \\ &\leq \int \frac{f(\mathbf{x})g(\mathbf{x})}{\frac{1}{2}(f(\mathbf{x})+g(\mathbf{x}))}d\mathbf{x} \cancelto{1}{\int\frac{1}{2}(f(\mathbf{x})+g(\mathbf{x}))d\mathbf{x}} \\ & = A_{\frac{1}{2}}(f,g). \end{align} \end{subequations} Combining both bounds, we see that \begin{equation*} \begin{aligned} \frac{1}{2}-\frac{1}{2}\sqrt{1-BC^2(f,g)} \leq & \frac{1}{2} - \frac{1}{2}\sqrt{\tilde{D}_{\frac{1}{2}}(f,g)} \\ \leq \epsilon^{Bayes} \leq \frac{1}{2} - & \frac{1}{2}\tilde{D}_{\frac{1}{2}}(f,g) \leq \frac{1}{2}BC(f,g).
\end{aligned} \label{eq:thm3} \end{equation*} \section{Proof of Theorem \ref{thm:DAbound}} The proof begins in the same fashion as the result in \cite{ben2010theory} and then diverges. \begin{subequations} \begin{align} \epsilon_{\mathrm{T}} (h, y_{\mathrm{T}}) = & \epsilon_{\mathrm{T}} (h, y_{\mathrm{T}}) + \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}}) - \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}}) \\ & + \epsilon_{\mathrm{S}} (h, y_{\mathrm{T}}) - \epsilon_{\mathrm{S}} (h, y_{\mathrm{T}}) \notag \\ \leq & \epsilon_{\mathrm{S}}(h, y_{\mathrm{S}}) + |\epsilon_{\mathrm{S}} (h, y_{\mathrm{T}}) - \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}})| \\ & + |\epsilon_{\mathrm{T}}(h, y_{\mathrm{T}}) - \epsilon_{\mathrm{S}} (h, y_{\mathrm{T}})| \notag \\ \leq & \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}}) + \mathbf{E}_{f_{\mathrm{S}}(\mathbf{x})} [|y_{\mathrm{S}} - y_{\mathrm{T}} |] \\ & + \Bigl \lvert \int f_{\mathrm{T}}(\mathbf{x}) |h(\mathbf{x}) - y_{\mathrm{T}}| d\mathbf{x} \notag \\ & \ \ \ \ \ - \int f_{\mathrm{S}}(\mathbf{x}) |h(\mathbf{x}) - y_{\mathrm{T}}| d\mathbf{x} \Bigr \rvert \notag \\ \leq & \epsilon_{\mathrm{S}} (h, y_{\mathrm{S}}) + \mathbf{E}_{f_{\mathrm{S}}(\mathbf{x})} [|y_{\mathrm{S}} - y_{\mathrm{T}} |] \\ & + \int | f_{\mathrm{T}}(\mathbf{x}) -f_{\mathrm{S}}(\mathbf{x}) | |h(\mathbf{x}) - y_{\mathrm{T}}| d\mathbf{x} \notag \\ \leq & \epsilon_{\mathrm{S}}(h, y_{\mathrm{S}}) + \mathbf{E}_{f_{\mathrm{S}}(\mathbf{x})} [|y_{\mathrm{S}} - y_{\mathrm{T}} |] \label{finalbound} \\ &+ \int | f_{\mathrm{T}}(\mathbf{x}) -f_{\mathrm{S}}(\mathbf{x}) | d\mathbf{x}. \notag \end{align} \end{subequations} In (\ref{finalbound}), we identify an upper bound on the target error expressed using the TV distance between source and target distributions. Using (\ref{eq:TVbound2}) this can be expressed in terms of $\tilde{D}_{\frac{1}{2}}$: \begin{equation} \begin{aligned} \epsilon_{\mathrm{T}}(h,y_{\mathrm{T}}) & \leq \epsilon_{\mathrm{S}}(h,y_{\mathrm{S}}) + \mathbf{E}_{f_{\mathrm{S}}(\mathbf{x})}[|y_{\mathrm{S}}-y_{\mathrm{T}}|] \\ &+2\sqrt{\tilde{D}_{\frac{1}{2}}(f_{\mathrm{T}},f_{\mathrm{S}})}. \label{eq:DAbound} \end{aligned} \end{equation} % \bibliographystyle{IEEEtran}
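As a quick numerical sanity check of the orderings established in the proofs above (an illustrative sketch only; the two univariate Gaussians and their parameters are arbitrary choices and are not data from this paper), one can verify $BC^2 \leq A_{\frac{1}{2}} \leq BC$ and the resulting chain of bounds on the Bayes error rate by direct quadrature:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Two arbitrary univariate class densities with equal priors p = q = 1/2.
f = norm(loc=0.0, scale=1.0).pdf
g = norm(loc=1.5, scale=1.2).pdf

BC, _ = quad(lambda x: np.sqrt(f(x) * g(x)), -np.inf, np.inf)
A, _ = quad(lambda x: f(x) * g(x) / (0.5 * (f(x) + g(x))), -np.inf, np.inf)
bayes, _ = quad(lambda x: min(0.5 * f(x), 0.5 * g(x)), -np.inf, np.inf)

D_half = 1.0 - A                      # tilde{D}_{1/2}(f, g), since 4pq = 1
assert BC**2 <= A <= BC               # Chernoff / Cauchy-Schwarz orderings

lower_bc = 0.5 * (1.0 - np.sqrt(1.0 - BC**2))   # Bhattacharyya lower bound
lower_dp = 0.5 * (1.0 - np.sqrt(D_half))        # D_p lower bound
upper_dp = 0.5 * (1.0 - D_half)                 # D_p upper bound (= A / 2)
upper_bc = 0.5 * BC                             # Bhattacharyya upper bound
print(lower_bc <= lower_dp <= bayes <= upper_dp <= upper_bc)   # True
\end{verbatim}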
\section{Introduction} The geologically recent mega-thrust earthquakes and giant tsunamis in Indonesia (2004) and Japan (2011) as well as the seismic catastrophes in Haiti (2010) and China (2008) each occurred in regions previously mapped as having ``low'' seismic hazard. One reason for this is that hazard assessments relied largely on instrumental data only available since the mid-1900s \cite{Steinetal2012}. Historical records and geological evidence of seismic events exist in each of these regions, but they were not adequately accounted for due to the uncertainty that is inherent to such data sources \cite{MuJi2008}. For example, tsunami deposits were documented on the Sendai Plain before the 2011 Japan mega-thrust earthquake \cite{minoura2001869}, but were not considered in risk assessments, such as the retrofitting of the nuclear power plant. These recent seismic and tsunami disasters motivate us to push beyond the geologically limited time window of instrumentally recorded earthquakes to find new ways of quantifying unconventional data sources for earthquake locations and magnitudes. Resilience to tsunamis and other seismic hazards requires learning from what has happened in the past -- incorporating all available data and records (even from uncertain sources) -- and thereby reducing the negative impact of future events through more comprehensive hazards education and preparedness strategies \cite{unisdr2009}. While evidence of previous earthquake and tsunami events can be found in the geological record, such as deposits left by previous tsunami events \cite{Sulaemanetal2017}, damage to coral reefs \cite{meltzner2010coral,gagan2015coral}, and sediment cores of turbidites \cite{GoNeJo2003}, much information on past events is available in the form of textual accounts in historical records such as the recently translated Wichmann catalog \cite{wichmann1918earthquakes,wichmann1922earthquakes,harris2016waves} and other sources \cite{Re1858,Be1868,Mu2012,Re2012} for Indonesia, North and South America (see \cite{SiAsHa1981,De2004,KoKo2004} for example), and many other locations in Asia and throughout the world. Issues and concerns about the accuracy and validity of historical records illustrate a shift from the question ``can we quantify what happened?'' to ``can we make a principled estimate of the uncertainties around what happened?''. While developing rigorous and reproducible estimates of historical events from textual and anecdotal accounts presents a number of obvious challenges, recent efforts to do so illustrate both the promise of these data sources and the imperative of incorporating all available historical data into the modern understanding of seismic risk. For example, the western Sunda Arc experienced the great Sumatran earthquake and Indian Ocean tsunami of 2004, which claimed more lives than any other tsunami in recorded history \cite{Barber2005}. Most of the world was surprised by the event because it had been forty years since an earthquake or tsunami of that magnitude had occurred anywhere on Earth, and much longer since one had happened in a densely populated area like Indonesia. However, several studies prior to the event used historical and geological records to identify the seismic risk in this region \cite{NeMc1987,GhOi1988,HaDoPr1997,Zachariasenetal1999,HaPr2002}. (Several studies since have also identified evidence of large tsunamis in the region -- see, for example, \cite{meltzner2010coral,philibosian2017earthquake,Rubinetal2017}.)
Unfortunately, due to the uncertain nature of historical accounts, these studies did not quantify or provide uncertainties for their predictions and hence their results were not actively incorporated into hazard assessments for the region. In addition, this scientific research did not move ``downstream'' very far, and did nothing to increase resiliency to tsunami hazards in the Indian Ocean region as most of those in harm’s way did not know what a tsunami was, let alone that they were at extreme risk for one \cite{Kay2014}. In effect, in a field awash with ``big'' data -- modern automated instrumentation -- reconstructions of seismic and tsunami events from historical accounts failed to have a direct impact on forecasting and mitigation efforts because they relied on data that was ``small,'' i.e., sparse, highly uncertain, centuries old, and in some cases textual or anecdotal in nature. In addition, the methods used to infer the causative event were largely ad hoc, so the resulting inference to characterize previous events and hence the prediction of future events was necessarily qualitative. This is useful to indicate the potential for a seismic hazard, but is of little quantitative use for policy decisions relevant to hazard assessment. It is therefore desirable to develop a more rigorous and reproducible methodology that can leverage the promise of these historical data to provide new insights about seismic history, informing understanding of the current status of elastic strain accumulation on the relevant faults, while also being honest about what it cannot tell us due to the inherently uncertain nature of the data. In this paper, we apply the Bayesian approach to inverse problems \cite{kaipio2005statistical,stuart2010inverse,gelman2014bayesian} to address this issue. A Bayesian framework is a natural fit because the chief problem we face is uncertainty in the data; the resulting posterior distribution will therefore provide not only estimates of the most likely values of, but also the uncertainties that surround, the seismic parameters we would like to estimate, e.g., the magnitude and location of the historical earthquake in question. Here the numerical resolution of partial differential equations (PDEs) describing tsunami wave propagation provides a ``forward map'' which can be ``inverted'' starting from our historical data to develop the posterior distribution. While the Bayesian framework has been used in the past to address problems in seismicity (see \cite{bui2013computational,martin2012stochastic,giraldi2017bayesian} for a few examples), our study is the first that we know of to apply the approach to inference of pre-instrumental events. Our focus here is on an initial case study concerning the reconstruction of the 1852 Banda arc earthquake and tsunami in Indonesia, as detailed in the recently translated Wichmann catalog of earthquakes \cite{harris2016waves,wichmann1922earthquakes} and in contemporary newspaper accounts \cite{newspaper}. We refer the reader to \cite{ringer2021methodological}, which describes the approach to estimating the 1852 event from a more geological perspective, as well as to the graduate thesis \cite{ringer2020method}. The rest of the article is organized as follows: \cref{sec:data} describes the dataset that we will use, its limitations, and the associated challenges of drawing conclusions from it. In \cref{sec:bayes}, we detail how we adapt the Bayesian framework to the problem.
In \cref{sec:application}, we outline how we apply the approach to the 1852 Banda Arc earthquake and tsunami. \cref{sec:results} describes the results of the inference for the 1852 event and bounds on the sensitivity and uncertainty of our analysis. \cref{sec:discussion} outlines conclusions of geological relevance for the 1852 event, how the methodology can be applied to other problems, and related paths for future research. \section{The Data: Historical Accounts of Tsunamis} \label{sec:data} In this section, we describe the kinds of data that will be used to infer characteristics of historical earthquakes and some of the challenges associated with doing so. We focus on textual accounts, although analogous (if perhaps less severe) interpretation issues arise with geological data such as disrupted turbidites, coral uplifts, and tsunami deposits. In particular, geological evidence of past seismic events is less uncertain, but the monetary cost of obtaining such data is prohibitive. As an example for the textual accounts, the two volumes of Arthur Wichmann’s \textit{The Earthquakes of the Indian Archipelago} \cite{wichmann1918earthquakes,wichmann1922earthquakes} document nearly 350 years of observations of earthquakes and tsunamis for the entire Indonesian region. The observations were mostly compiled from Dutch records kept by the Dutch East India Company of Indonesia. Seismic events are included that reach west to east from the Cocos Islands to New Guinea, and north to south from Bangladesh to Timor. Although the catalogue is cited in some tsunami and earthquake literature \cite{soloviev1974catalog,newcomb1987seismic}, it remained largely unknown to the scientific community until its translation to English and interpretation of what faults may have produced these events \cite{harris2016waves}. The Wichmann catalog documents 61 earthquakes and 36 tsunamis in the Indonesian region between 1538 and 1877. Most of these events caused damage over a broad region, and are associated with years of temporal and spatial clustering of earthquakes. However, there has not been a major shallow earthquake ($Mw \ge 8$) in Java and eastern Indonesia for the past 160 years. During this time of relative quiescence, enough tectonic strain energy has accumulated across several active faults to cause major earthquake and tsunami events reminiscent of those documented in the Wichmann catalog. The disaster potential of these events is much greater now than in the past due to an exponential growth in population and urbanization in coastal regions destroyed by past events. \subsection{The 1852 Tsunami and Historical Observations} \begin{wrapfigure}{r}{20em} \fbox{\begin{minipage}{20em} {\bf 1852, November 26, 7:40.} At Banda Neira, barely had the ground been calm for a quarter of an hour when the flood wave crashed in \dots The water rose to the roofs of the storehouses \dots and reached the base of the hill on which Fort Belgica is built on. \end{minipage}} \caption{An excerpt from the Wichmann catalog for the 1852 tsunami and earthquake.} \label{fig:1852BandaNeira} \end{wrapfigure} The gigantic earthquake and tsunami of 1852 is perhaps the largest recorded historic seismic event of its kind in eastern Indonesia \cite{fisherharris2016}. The main shock of the earthquake took place between 7 a.m. and 8 a.m. on November 26, 1852. Later that day, 9 aftershocks were felt. Aftershocks happened daily for the next 8 days and occasionally in the following months and years. 
For context, a map of the region is shown in \cref{fig:1852map}. \cref{fig:1852BandaNeira} shows an excerpt from the Wichmann catalog entry for the 1852 tsunami with observations in Banda Neira, a small island in the Banda Sea west of Papua New Guinea. The account provides clear descriptors, such as locations, arrival times, wave heights, and inundation lengths that can be used to characterize the tsunami and infer the earthquake that might have caused it. Moreover, because historical observations of the tsunamis were observed in multiple, often geographically-dispersed locations (Banda Neira being just one of several for the 1852 event), even uncertain observations can be ``triangulated'' to provide more certain estimates of earthquake size and location. At the same time, the excerpt also demonstrates some of the challenges associated with doing such an inference in a rigorous way: Given that these measurements were taken well before the modern era of automated and sophisticated sensing, how accurate are they? What does water rising to rooftops tell us about the event? In the next section, we describe past attempts in the natural hazards community to use observations like those above to estimate historical earthquakes. \begin{figure}[h] \centering \includegraphics[width=.8\textwidth]{figures/Overview3.png} \caption[1852 Banda arc event]{The seismic and geologic setting for the 1852 event. Convergent/transvergent plate boundaries are in red/black. The green rectangle indicates the region that is depicted in \cref{fig:post:latlong}. The yellow rectangle is where the posterior distribution concentrates. Locations where observations of the tsunami are used in the inversion process are labeled and indicated by a red dot. } \label{fig:1852map} \end{figure} \subsection{Past Approaches to Historical Inference} \label{sec:traditional} Previous efforts to reconstruct pre-instrumental earthquakes have varied from a focus on the use of geological evidence (see \cite{monecke20081,sieh2008earthquake,jankaew2008medieval,meltzner2010coral} for example) to the use of historically recorded (but not instrumental) accounts \cite{okal2003mechanism,bryant2007cosmogenic,LiuHarris2014,harris2016waves,reid2016two,fisherharris2016,GrNgCuCi2018,Cummins2020,PranantyoCummins2020} as well as some combination of the two types of uncertain data (see \cite{martin2019reassessment} for one example). Most of these efforts, particularly those directed toward using historical records, have relied on a combination of physical intuition and a restricted number of forward simulations to match the observational data. Qualitative comparisons are then made to the historical (or geological) record, and a heuristic choice is made as to the ``best'' forward simulation that fits the data. Past attempts have been made to reconstruct the earthquake that produced the 1852 tsunami using observations from the Wichmann catalog in particular. In \cite{fisherharris2016}, for example, the Wichmann observations were converted into estimates of wave heights, arrival times, and onshore wave runups. Nine ``reasonable'' candidate earthquakes were then constructed and simulated using a numerical model of tsunami propagation. The numerical results and Wichmann text were then qualitatively compared to determine which candidate event provided the ``best'' match, which then was declared the most likely source. This analysis indicated that the source of the 1852 event was an earthquake on the Tanimbar trough exceeding 8.4 Mw. 
This approach, however, is laced with subjective judgments, particularly in terms of (i) how such uncertain observations are converted into numerical estimates, (ii) which candidate earthquake sources are chosen, and (iii) what constitutes the best match. Taken together, these concerns make the results difficult to justify or reproduce. Meanwhile, interpreting observations like those in \cref{fig:1852BandaNeira} as a single number representing arrival time or wave height is clearly too simplistic. So while such investigations have significantly improved our understanding of the historical seismicity of different regions, with modern computational resources and recent advances in algorithmic techniques, we propose a more principled approach to model observational error and incorporate it into the inversion process. \section{Bayesian Inference and Likelihood Modeling} \label{sec:bayes} Our approach to leveraging the data described in \cref{sec:data} in a more principled and systematic fashion involves introducing a Bayesian framework. Bayesian inference \cite{gelman2014bayesian,kaipio2005statistical,dashti2017bayesian,tarantola2005inverse} provides a rigorous, statistical methodology for converting uncertain outputs into probabilistic estimates of model parameters. The primary inputs required for Bayesian inference are: \begin{itemize} \item \textbf{The ``prior''} $\mpr$ is a probability density describing the best guess of model parameters $\unk$ without considering observations. In our setting, the parameters characterize the seismic event of interest. \item \textbf{The ``likelihood''} $\llh$ is the conditional probability density associated with observable $\data$ given model parameters $\unk$, i.e., $\llh(\data; \unk) \sim \data | \unk$. The likelihood represents the probability that, for example, the peak wave height for the historical event was $\data$ given that the earthquake was described by parameters $\unk$. The likelihood is often written in terms of ``data'' $\data$ and observational noise $\ns$ with noise probability density $\mns$. For additive noise, for example (we make no assumption below that the noise is additive), the likelihood can be written as \begin{align} \label{eq:llh:add} \llh(\data; \unk) := \mns(\data-\fwd(\unk)), \end{align} where $\fwd$ is a \emph{forward model} from model parameters (e.g., earthquake magnitude and location) to observables (e.g., tsunami wave height). The likelihood distribution is the key to our approach to inferring from observations like those in \cref{sec:data}; see \cref{sec:llh} for details. \end{itemize} The output of Bayesian inference is a third distribution, \textbf{the ``posterior''} $\mps$, which incorporates both the prior and the likelihood. This $\mps$ is defined as $\unk | \data$, i.e., the conditional distribution of $\unk$ given $\data$. Here Bayes' Theorem provides an explicit expression for $\mps$ as \begin{align} \label{eq:bayes} \mps(\unk) = \frac{1}{Z} \llh(\data; \unk) \mpr(\unk), \end{align} where $Z := \int \llh(\data; \unk) \mpr(\unk) d\unk$ is the normalizing constant making $\mps$ a true probability density function. The posterior describes, in probabilistic terms, the model parameters that best match both our understanding of reasonable parameter values (based on the prior) and observations from historical records associated with the given event (the likelihood).
Most critically, the Bayesian approach incorporates uncertainty at all levels of the inverse problem, an essential feature given that the data in this case clearly does not provide enough information to fully specify the model parameters -- we hope that it will tell us \emph{something} about the parameters, but do not expect it to tell us \emph{everything}. \subsection{Likelihood Modeling} \label{sec:llh} As noted in \cref{sec:data}, observations associated with historical earthquakes are highly uncertain and anecdotal. We are thus in a situation where it would be quite artificial to specify the ``data'' $\data$ provided by the historical accounts at hand as a series of fixed values perturbed by observational noise. Instead we embrace this inherent uncertainty by modeling $\data$ with a probability distribution which we use to specify a likelihood-like function $\llh(\data; \unk)$. Our procedure is as follows: \begin{enumerate}[label={[\arabic*]}] \item \label{it:obs:def} Select an observation probability density $\mob$ such that $\mob(\data)$ reflects the probability density associated with the observable taking on value $\data$ given the account in the historical record. For example, the probability that the true wave height was between 5 and 10 meters when the observer noted that ``the water rose to the rooftops'' in \cref{fig:1852BandaNeira} would be computed as $\int_{5}^{10} \mob(\tdata) \,d\tdata$. \item \label{it:llh:def} We then define the likelihood-like function $\llh(\data; \unk)$ in terms of the observation probability $\mob$ and the forward model $\fwd$: \begin{equation}\label{eq:llh} \llh(\data; \unk) = \mob(\fwd(\unk)). \end{equation} That is, for a given parameter value $\unk$ we compute the associated observables (e.g., wave heights) $\tdata := \fwd(\unk)$. The likelihood is then the probability of those observables according to the historical account given by $\mob(\tdata)$. \end{enumerate} Here it is notable that, as a function of $\data$ for each fixed $\unk$, $\llh(\data; \unk)$ in \eqref{eq:llh} lacks a clear interpretation as a probability measure representing $\data | \unk$. As such it is unclear if the formulation \eqref{eq:bayes} can be justified with the usual formal Bayesian procedure. To make this more concrete, one might instead: \begin{enumerate}[label={[\arabic*']}] \item \label{it:long:obs:def} Develop the observation distribution $\mob$ from the historical record as described in \ref{it:obs:def}. \item \label{it:long:data:def} Define the data $\data$ as, e.g., the mean of the observation distribution: $\data = \Exp_{\mob}[y]$. \item \label{it:long:ns:def} Define the observational noise as the difference between the data and the observation distribution: $\mns(\ns) := \mob(\data-\ns)$. \item \label{it:long:llh:def} Define the likelihood as in \eqref{eq:llh:add}. \end{enumerate} This alternative approach clarifies the meaning of $\data$; however, the combination of \ref{it:long:ns:def} and \ref{it:long:llh:def} yields precisely \eqref{eq:llh} regardless of the choice of $\data$ in \ref{it:long:data:def}, so from an implementation perspective the two approaches are identical. Moreover, for observations of the type described in \cref{sec:data}, the choice of a single value $\data$ is largely arbitrary. We will therefore refer to the approach given by \ref{it:obs:def} and \ref{it:llh:def} in what follows.
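To make steps \ref{it:obs:def} and \ref{it:llh:def} concrete in code, the following minimal sketch (Python with SciPy) evaluates the logarithm of the likelihood-like function by scoring forward-model predictions against the observation densities. The density choices, dictionary keys, and the \texttt{forward\_model} interface are illustrative placeholders rather than the implementation described later in the paper; summing log-densities corresponds to the product (independence) assembly of the total observation distribution discussed in \cref{sec:1852llh}.

\begin{verbatim}
import numpy as np
from scipy import stats

# Hypothetical observation densities (step [1]), one per observable,
# built by interpreting the textual account; values are placeholders.
observation_densities = {
    "arrival_time_min": stats.norm(loc=15.0, scale=5.0),
    "wave_height_m":    stats.norm(loc=6.5, scale=1.5),
}

def log_likelihood(params, forward_model):
    """Step [2]: log L(y; u) = sum_k log p_obs_k(F_k(u)), where
    forward_model(params) returns the predicted observables as a dict
    keyed like observation_densities (a placeholder interface)."""
    predicted = forward_model(params)
    return sum(observation_densities[k].logpdf(predicted[k])
               for k in observation_densities)
\end{verbatim}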
Of course, the choice of the observation distribution $\mob$ is subjective, as any interpretation of the historical records described in \cref{sec:data} must be. However, the approach outlined above represents a clear improvement over the modeling of the historical data outlined in \cref{sec:traditional} in at least two ways: \begin{itemize} \item By using probability distributions rather than single values, the methodology more clearly encapsulates the uncertainty associated with the observations. \item Modeling assumptions are explicitly specified and incorporated into the methodology so that the results are rigorous and reproducible. \end{itemize} In fact, one might interpret our modeling via the likelihood distribution as repeating the approach from \cref{sec:traditional} a large number of times, with the observation distribution $\mob(\data)$ representing the probability that a given modeler might interpret the observation as representing the true value $\data$. In any case, this represents a fruitful paradigm shift from the usual Bayesian inversion framework by modeling observational uncertainty as a distribution itself rather than a definitive value with measurement noise. Such a shift in thinking does not change the impact of Bayes' Theorem on the posterior distribution, but it does open the possibility of applying the Bayesian methodology to other settings where the observations are inherently uncertain. A direct extension of the current work would be to implement this approach for other types of geological evidence such as coral uplift, sediment cores, and disrupted turbidites, but further extensions to problems outside of seismic inversion are also reasonable. \subsection{Example: Application to Banda Neira} \label{sec:bandaneirallh} In this section, we walk through our approach to modeling the historical account summarized in \cref{fig:1852BandaNeira} for the 1852 tsunami in Banda Neira. The record includes observations of arrival time (the time interval between shaking and the arrival of the first tsunami wave), wave height (the vertical height of the wave above sea level), and inundation length (the distance that the wave reached onshore). We identify observation distributions for each observation type as follows: \begin{itemize} \item \textbf{Arrival time.} The text in this case states ``barely had the ground been calm for a quarter of an hour when the flood wave crashed in.'' This clearly implies using 15 minutes as the anticipated arrival time of the wave at Banda Neira. However, it is noted in other locations that the shaking lasted for at least 5 minutes, while the computational forward model used here assumes an instantaneous rupture. Hence we build into the observation distribution a skew toward longer times, using a skew-normal distribution with a mean of 15 minutes, a standard deviation of 5 minutes, and skew parameter 2. \item \textbf{Wave height.} As noted above, the historical account says ``the water rose to the roofs of the storehouses.'' Given standard construction of homes (and storehouses) for the time period, we can assume the water rose at least 4 meters above standard flood levels, as most buildings of the time were built on stilts and had steep, vaulted roofs. Based on the regular storm activity in the region, we can expect that, with high tide and normal seasonal storm surge, the standard flood level was approximately 2 meters in this region.
This leads us to select a normally distributed observation distribution for wave height with a mean of $6.5m$ and standard deviation of $1.5m$, allowing for reasonable probability of wave heights in the range from $3m$ to $9m$. \item \textbf{Inundation length.} Here the account states that the water ``reached the base of the hill on which Fort Belgica is built.'' To quantify the wave reaching the base of the hill, we measured the distance from 20 randomly selected points along the beach to the edge of this hill in ArcGIS (\url{https://www.arcgis.com/}). The mean of these measurements was 185 meters, with a standard deviation of roughly 65 meters. Thus we choose a normal distribution with those parameters. Without more detailed information about the coastline, and a direct idea of the direction of the traveling wave, we could not be more precise with regard to the inundation. \end{itemize} The observation distributions for other accounts of the 1852 tsunami and assembly of these individual distributions into a full observation distribution and likelihood are described in \cref{sec:1852llh}. \section{Application to the 1852 Banda Sea Earthquake and Tsunami} \label{sec:application} As noted in \cref{sec:bayes}, Bayesian inference requires two inputs: (i) the prior distribution and (ii) the likelihood distribution, which in our application consists of a forward model composed with an observation distribution. In addition, we need a numerical method to estimate key quantities from the posterior measure. In this section, we describe how each of these components was developed for the problem of estimating the earthquake that caused the 1852 Banda Sea tsunami. \subsection{Earthquake Parameterization and Prior Distribution} \label{sec:1852prior} To conduct Bayesian inference, we need to define a set of model parameters to estimate. The canonical parameterization of an earthquake is the nine-parameter Okada model \cite{okada1985surface}, which describes the earthquake rupture as a sliding rectangular prism specified by its location (latitude, longitude, depth), orientation (strike, rake, dip), and size/magnitude (length, width, slip). However, in practice these parameters are often highly correlated -- for example, the rectangle typically has a certain range of aspect ratios, rake is near $90\degree$ for most subduction zone events (like that considered in this setting), and depth, strike, and dip can mostly be determined from latitude and longitude for major subduction zones via available instrumental data. Also, while a justifiable prior on the size parameters would be complicated to assemble, they can be estimated from earthquake magnitude, which famously follows the Gutenberg-Richter (exponential) distribution. With all of these considerations in mind, we settled on a reparameterization of the Okada model consisting of the following six parameters: (1) latitude, (2) longitude, (3) depth offset (the difference in depth from the expected depth of the subduction interface given the latitude-longitude location), (4) magnitude, and (5-6) $\Delta \log$ length and $\Delta \log$ width (a logarithmically scaled difference in length and width from the expected values for the given magnitude). The prior distributions on latitude, longitude, and depth offsets were determined from the Slab2 dataset \cite{HayesSlab2}, which incorporates modern instrumental data to map out major subduction zones globally.
The prior distribution on magnitude was taken from the Gutenberg-Richter distribution, truncated to reasonable maximum (9.5) and minimum (6.5) values. The priors on $\Delta \log$ length and $\Delta \log$ width were taken to be normal distributions with mean zero and standard deviation computed from historical earthquake data cataloged in \cite{wells1994new} and from recent major events from the global USGS dataset. For further details on development of the prior distributions including geophysical considerations, we refer the reader to \cite{ringer2021methodological} (see also \cite{ringer2020method}). The resulting prior distributions are summarized in \cref{tab:priors}. The prior on latitude and longitude is shown in the left-hand plot of \cref{fig:post:latlong}, while the priors for the remaining four parameters are shown in green in \cref{fig:post:mag}. We note that the prior distributions on the magnitude, $\Delta \log$ length, and $\Delta \log$ width are universally applicable to other earthquakes, whereas the prior distributions on the latitude, longitude and depth offset, while derived from the Slab2 dataset, are specific to the subduction zone in question. \begin{table} \centering \caption{Prior distributions for the 1852 Banda Arc earthquake} \label{tab:priors} \footnotesize \begin{tabular}{@{}ccc@{}} \hline Parameter name(s) & Kind & Distribution Parameters \\ \hline Latitude \& longitude & Pre-image of truncated normal via depth & $\mu=$ 30km, $\sigma=$ 5km, $(a,b) =$ (2.5km,50km)\\ Depth offset & Normal & $\mu=0$, $\sigma=$ 5km\\ Magnitude & Truncated exponential & $\lambda=.5$, $(a,b) =$ (6.5,9.5) \\ $\Delta \log L$ & Normal & $\mu=0$, $\sigma=.188$ \\ $\Delta \log W$ & Normal & $\mu=0$, $\sigma=.172$\\ \hline \end{tabular} \end{table} \subsection{Forward Model} \label{sec:1852fwd} In Bayesian estimation, the forward model takes in model parameters and outputs observables. For the earthquake estimation problem, the forward model takes the earthquake parameters described in \cref{sec:1852prior}, models the resulting seafloor deformation via the Okada model \cite{okada1985surface}, and then simulates tsunami generation and propagation to produce wave arrival times, wave heights, and inundation lengths. This is accomplished using the GeoClaw software package \cite{leveque2008high, leveque2011tsunami, gonzalez2011validation, berger2011geoclaw}. GeoClaw both computes the seafloor deformation and simulates the tsunami via a finite-volume solver for the nonlinear shallow water partial differential equations. Approximating the posterior distribution required running the forward model thousands of times, so in practice implementing GeoClaw for this problem required developing a software package to automate its setup and execution, as well as several steps to carefully optimize its performance. In addition, because the events under consideration were large and the faults in the region bend (see \cref{fig:1852map}), approximation via a single rectangle (the default Okada model) was not physically accurate, so an algorithm to split the event into multiple rectangles oriented along the fault lines was added to the forward model. Accurate tsunami simulation also required substantial effort to find and integrate data from several sources to develop more refined bathymetric data (i.e., seafloor topography) for the study region. Additional details on the many geophysical considerations in each of these steps can be found in \cite{ringer2021methodological,ringer2020method}.
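To illustrate how the scalar priors in \cref{tab:priors} translate into code, the sketch below samples them with SciPy. This is an illustrative sketch only: the variable names are ours, and the latitude/longitude prior, which is defined by pushing a truncated normal on depth back through the Slab2 interface geometry, is reduced to a trivial placeholder function here.

\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Truncated exponential (Gutenberg-Richter) prior on magnitude, lambda = 0.5,
# truncated to [6.5, 9.5].  SciPy's truncexpon lives on a standardized support
# [0, b], so we set scale = 1/lambda and shift by the minimum magnitude.
lam, m_min, m_max = 0.5, 6.5, 9.5
magnitude_prior = stats.truncexpon(b=(m_max - m_min) * lam,
                                   loc=m_min, scale=1.0 / lam)

depth_offset_prior = stats.norm(loc=0.0, scale=5.0)    # km
delta_log_L_prior  = stats.norm(loc=0.0, scale=0.188)
delta_log_W_prior  = stats.norm(loc=0.0, scale=0.172)

def sample_latlon_placeholder():
    """Placeholder for the Slab2-based latitude/longitude prior (truncated
    normal on depth, mu = 30 km, sigma = 5 km, support [2.5 km, 50 km],
    mapped through the subduction-interface geometry)."""
    return -5.0, 131.0   # a fixed point on the arc, for illustration only

def sample_prior():
    lat, lon = sample_latlon_placeholder()
    return {
        "latitude": lat,
        "longitude": lon,
        "depth_offset": depth_offset_prior.rvs(random_state=rng),
        "magnitude": magnitude_prior.rvs(random_state=rng),
        "delta_log_length": delta_log_L_prior.rvs(random_state=rng),
        "delta_log_width": delta_log_W_prior.rvs(random_state=rng),
    }
\end{verbatim}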
\subsection{Observation Distributions and Likelihood} \label{sec:1852llh} In \cref{sec:bandaneirallh}, we provided a detailed description of our process in developing the observation distribution for historical accounts of the 1852 tsunami in Banda Neira. We had usable accounts for this tsunami in eight other locations; the observation distributions for all nine locations are shown in \cref{fig:likelihoods}. Each was constructed in a very similar manner to that described above for Banda Neira. We note that the current investigation has assigned each observation to a single latitude-longitude location based on the historical record. Such a specific assignment is reasonable only if the likelihood distributions are sufficiently wide to account for bathymetric and model-dependent resolution differences along the coastline, which is a reasonable assumption, although certainly not one that is guaranteed. In future studies we will address this issue by weighting the wave heights and arrival times from a collection of nearby locations. The total observation distribution $\mob$ is computed as the product of these individual observational distributions under the assumption that the error in each observation is independent of the others. Critically, this does not assume that the observations themselves are independent of one another, as the observations are connected via the forward model -- if the wave height is high in one location, for example, it is likely to be high at all locations as the earthquake is likely larger than average. We only assume that the mistakes made by individual observers, or equivalently our (mis)interpretation of the written record for each observation, are independent. This is still somewhat questionable if, for example, observers tend to systematically over- or underestimate the size of a wave. In an attempt to mitigate these concerns, we rely only on observations with some quantifiable measurements. Moreover, it would be difficult to justify a more complicated construction of the total likelihood, so we have chosen to take the simplest approach without making additional assumptions about the structure of the likelihood. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/likelihoods.pdf} \caption[1852 Banda arc event likelihood densities]{1852 Banda arc tsunami observation densities for the 13 observations at 9 locations. Each observation density represents an interpretation of the Wichmann catalog description. The same color scheme is used for all 3 types of observations, but only Banda Neira and Saparua included wave arrival time and inundation length.} \label{fig:likelihoods} \end{figure} \subsection{Posterior Sampling via Markov Chain Monte Carlo} \label{sec:1852mcmc} As noted in \cref{sec:bayes}, the outcome of Bayesian inference is the posterior probability distribution. Computing this distribution in practice requires evaluating the normalization constant $Z$ in \eqref{eq:bayes}, an integral that can be difficult to compute. We therefore seek to draw samples from the posterior distribution using Markov Chain Monte Carlo (MCMC) methods. Because we did not have an adjoint solver for this PDE-based forward map, gradient-based methods like Hamiltonian Monte Carlo were not available.
We therefore employed random walk-style Metropolis-Hastings MCMC; a diagonal covariance structure was used for the proposal kernel, with the step size in each of the six parameters tuned to approximate the optimal acceptance rate of roughly $0.23$ \cite[Section 12.2]{gelman2014bayesian}. The final standard deviations for the random walk proposal kernel are given in the GitHub repository (see \cref{sec:disc:sw}). Chains, particularly when initialized in different regions of the parameter space, sometimes got stuck in places with low posterior probability. We therefore conducted periodic importance-style resampling according to posterior probability (see \cite{doucet2001sequential}); this resampling does not maintain invariance with respect to the posterior measure, but provides a mechanism to ``jump'' trapped samples from poorly performing regions of the parameter space to regions given more weight by the posterior distribution. To minimize any bias from the resampling steps, the approximate posterior was ultimately assembled from samples collected after a suitable burn-in period following the last resampling step (see \cref{sec:sampling}). The resulting algorithm is summarized in \cref{alg:mcmc}. \begin{algorithm}[H] \caption{(MCMC as Applied to the 1852 Banda Sea Tsunami Problem)}\label{alg:mcmc} \begin{algorithmic}[1] \State Choose number of chains $M$, resampling rate $N$, proposal covariance $C$, and initial parameters $\unk_0^{(i)}, i=1,\dots,M$. \For{$k \geq 0$} \For{$i = 1,\dots,M$} \State Propose $\prop^{(i)} = \unk_k^{(i)} + \eta, \eta \sim N(0,C)$ \State Run GeoClaw to compute likelihood $\llh(\fwd(\prop^{(i)}))$. \State Compute un-normalized posterior $\mps(\prop^{(i)})$ from \eqref{eq:bayes}. \State Set $\unk_{k+1}^{(i)} :=\prop^{(i)}$ with probability $\min\{1,\mps(\prop^{(i)})/\mps(\unk_k^{(i)})\}$. \State Otherwise take $\unk_{k+1}^{(i)} := \unk_{k}^{(i)}$. \EndFor \State If $k \mod N=0$, resample $\unk_k^{(i)} \sim \Sigma_j \mps(\unk_k^{(j)})\delta\left( \unk_k^{(j)} \right) /\Sigma_l \mps(\unk_k^{(l)}), i=1,\dots,M$. \State $k \to k + 1$. \EndFor \end{algorithmic} \end{algorithm} \section{Results for the 1852 event} \label{sec:results} In this section, we describe the results of the Bayesian inference of the 1852 Banda Arc earthquake and tsunami using the approach described in \cref{sec:application}. We first outline the behavior of the MCMC chains (\cref{sec:sampling}), then in \cref{sec:posterior} we describe the structure of the computed posterior distribution and some conclusions of geological significance that can be drawn from it. Finally, we provide some results on the sensitivity of the posterior distribution to the choice of likelihood function in \cref{sec:1852llh}. \subsection{Sampling and Convergence} \label{sec:sampling} To ensure that all viable seismic events were considered, we initialized 14 MCMC chains at locations around the Banda arc with initial magnitudes of either $8.5$ or $9.0$ Mw. Additional chains were initialized at $8.0$ Mw; however, these were quickly discarded as they consistently failed to generate a sufficiently large wave to reach all of the observation points (\cref{fig:1852map}) and therefore produced likelihoods of zero probability. Each chain was initialized with the other sample parameters (depth offset etc.) set to zero. Each of the 14 chains was run for 24,000 samples, for a total of 336,000 samples.
These samples were computed using the computational resources available through BYU's Office of Research Computing, consuming nearly 200,000 core-hours in all. About two-thirds of the chains converged from their disparate initial conditions to a similar region in the parameter space that ultimately represented the bulk of the posterior distribution. However, the remaining third of the chains became trapped by geographic barriers in a region of parameter space with much lower posterior probability (roughly $\exp(-5)$ to $\exp(-10)$ times the probability of the samples in the first region). For this reason, as noted in \cref{sec:1852mcmc}, after 6,000 samples we resampled the chains using importance sampling to give each a chance to jump to regions of higher probability. Resampling was conducted twice more at samples 12,000 and 18,000. However, the range of posterior values was much smaller at the second two resampling steps and so resampling had a less pronounced impact. Since the resampling adds a small amount of bias to the posterior once the samples are in equilibrium, these latter two resampling steps were in retrospect probably not warranted. To minimize their effect, we therefore use only the last 5,000 steps from each chain (assuming a 1,000-sample ``burn-in'' after the last resampling), making a dataset of 70,000 samples from which we approximate the posterior distribution. The results of the sampling are shown in \cref{fig:sampling}; the figure shows 100-sample rolling averages across all chains (blue) plus or minus their standard deviations (black) for each parameter as well as the points at which resampling was done (red) and the samples included in the final approximate posterior (green). The figure shows the large jumps associated with the first resampling, smaller jumps at the second resampling, and almost no effect in the third resampling; the chains appear to have reached approximate equilibrium by about midway through the sampling, so that the final 5,000 samples should provide a good representation of the posterior measure as a whole. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/sampling.pdf} \caption[Sampling]{MCMC sampling by parameter. 100-sample rolling averages across all chains are shown in blue. Black lines show the rolling averages $\pm$ their standard deviations. Resampling points are marked with red lines. The green box shows samples used in the final computed posterior.} \label{fig:sampling} \end{figure} \subsection{Posterior Structure} \label{sec:posterior} The resulting approximate posterior structure shows that, even though the individual observations were highly uncertain, taken together they provide strong evidence for several conclusions of geophysical significance. First, as shown in \cref{fig:post:mag}, the 1852 earthquake was likely very large, with magnitude greater than 8.5 Mw. This is because the tsunami modeling consistently -- over hundreds of thousands of trials -- indicated that an earthquake would need to be at least this large to generate observable waves at each of the nine observation locations. This is an important conclusion because no earthquake of this magnitude has been recorded in the Banda Sea during the period of instrumental data (since approximately 1950).
Second, as shown in \cref{fig:post:latlong}, while the prior distribution considered events all along the Banda Arc, the observations imply that the centroid for the 1852 earthquake likely occurred in a narrow region near 4.5\degree S, 131.5\degree E, which is situated in a shallow part of the subduction interface. This is the ``triangulation effect'' -- because the observations were in different locations, the different wave heights and arrival times allowed the model to constrain the location of the event even though each observation was highly uncertain if considered individually. \begin{figure}[h] \centering \includegraphics[width=.7\textwidth]{figures/posterior_vs_prior.png} \caption[Priors and posteriors for other parameters]{Magnitude, depth offset, $\Delta \log L$, and $\Delta \log W$ posterior histograms and associated prior distribution densities (green).} \label{fig:post:mag} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=\textwidth,trim=55 48 45 22mm, clip]{figures/latlonprior_vs_samples.png} \caption[Latitude/longitude prior and posterior]{Centroid latitude/longitude for the prior (left) and posterior (right) distributions, showing concentration in a narrow region of the Banda arc. Observation locations in red.} \label{fig:post:latlong} \end{figure} Insight into the behavior of the model can be gleaned from the \emph{posterior predictive distributions} shown in \cref{fig:post:obs}. These distributions are essentially the histograms of the observables associated with the posterior distribution. Overlaid on the plots are the observation distributions as well as the observables associated with the approximate maximum a posteriori (MAP) and maximum likelihood estimator (MLE) points of the distribution (the maxima among the posterior samples). The match or mismatch between the histograms and the observation distributions show where the model was able to match the observations well, and where it was not. The histograms can also be interpreted as predictions of what communities in these locations might be expected to experience -- the wave heights, arrival times, and inundations -- should a similar event happen in the future. For instance, if an event of this magnitude occurred in the same location on the Banda arc, we anticipate a wave of approximately $2.5m$ to reach the populous city of Ambon (approximately 300,000 people). For those living in the bay of Ambon, this is a potentially powerful tool for probabilistic hazard assessment. \begin{figure}[!ht] \centering \includegraphics[width=0.9\textwidth]{figures/obs_hists3.pdf} \caption[Model output compared to observation distributions]{Model output compared to observation distributions. The blue histograms represent the forward model outputs corresponding to the posterior distribution. The black curves are the observation densities assigned to each observation. The model outputs corresponding to the estimated maximum a posteriori (MAP) point and maximum likelihood estimate (MLE) are marked with red and orange lines, respectively.} \label{fig:post:obs} \end{figure} \subsection{Error Bounds and Sensitivity} \label{sec:errorbounds} Given the necessarily uncertain process of interpreting textual records as probability distributions, it would be natural to question how sensitive the results are to different choices of observation distribution. 
To allay some of these concerns, we estimate upper and lower bounds on the potential error in the posterior distribution following the theoretical results of \cite{dupuis2016path}. Throughout, expected values are approximated using Monte Carlo integration on the posterior samples described in \cref{sec:results}. First, we let $\theta$ denote a parameterization of the observation distributions (see \cref{fig:likelihoods}); components of $\theta$, for example, represent the mean or standard deviation of the normal distribution used for the observation distribution associated with arrival time at Banda Neira. The posterior distribution shown in \cref{sec:posterior} is denoted by $P^\theta$. We then introduce the Fisher information matrix (FIM), denoted by $\mathcal{I}\left( P^\theta \right)$ (see \eqref{eq:fim}), which we will use to estimate the sensitivity of the posterior distribution to the parameters $\theta$ below. A derivation of the FIM for the posterior \eqref{eq:bayes} is given in \cref{app:fim}. The parameters associated with each observation distribution and their associated Fisher information (the diagonal element of the Fisher information matrix) are presented in \cref{tab:perturb}. Because the differences in Fisher information for absolute changes in parameter values are largely driven by units (e.g., meters for wave height vs. minutes for arrival time), the Fisher information values presented in the table are computed for the relative change in each parameter value. \cref{tab:perturb} also presents the relative entropy (Kullback–Leibler divergence) $\RE$ associated with a $10\%$ shift in each parameter, estimated from \cite[Equation 2.35]{dupuis2016path}: \begin{align}\label{eq:dupuis235} \RE (P^{\theta+v} || P^{\theta} ) = \frac{1}{2} v^T \mathcal{I}\left( P^\theta \right)v + \bigO\left( |v|^3 \right). \end{align} The last column in \cref{tab:perturb} lists the first singular vector of the FIM, which is the combination of perturbations of the observation parameters that produce the largest relative entropy -- effectively the ``worst-case'' perturbation. The results show that the most sensitive parameters of the observation distributions are the means of the arrival times at Saparua and Banda Neira. These two parameters seem to be the most sensitive because they are the only two arrival time measurements, so their values seem to provide the most ``triangulation'' information about earthquake location. \begin{table}[!h] \centering \caption{Observation distribution parameters, associated Fisher Information values, relative entropy according to \eqref{eq:dupuis235} associated with a 10\% relative perturbation, and the first (most sensitive) singular vector of the Fisher information matrix.} \label{tab:perturb} \footnotesize \begin{tabular}{lllllrrrr} \hline Name & Observation & Distribution & Parameter & Value & FI & $\RE~10\%$ & Sing. Vec. 
\\ \hline Pulu Ai & height & normal & mean & 3 & 5.934 & 0.030 & -0.151 \\ Pulu Ai & height & normal & std & 0.8 & 2.505 & 0.013 & 0.087 \\ Ambon & height & normal & mean & 1.8 & 12.370 & 0.062 & 0.364 \\ Ambon & height & normal & std & 0.4 & 5.220 & 0.026 & 0.216 \\ Banda Neira & arrival & skewnorm & mean & 15 & 14.082 & 0.070 & 0.447 \\ Banda Neira & arrival & skewnorm & std & 5 & 1.950 & 0.010 & -0.148 \\ Banda Neira & arrival & skewnorm & a & 2 & 1.339 & 0.007 & 0.132 \\ Banda Neira & height & normal & mean & 6.5 & 7.525 & 0.038 & -0.014 \\ Banda Neira & height & normal & std & 1.5 & 0.884 & 0.004 & -0.006 \\ Banda Neira & inundation & normal & mean & 185 & 2.663 & 0.013 & -0.010 \\ Banda Neira & inundation & normal & std & 65 & 0.272 & 0.001 & -0.006 \\ Buru & height & chi & mu & 0.5 & 0.006 & 0.000 & 0.009 \\ Buru & height & chi & sigma & 1.5 & 0.122 & 0.001 & 0.035 \\ Buru & height & chi & dof & 1.01 & 0.142 & 0.001 & 0.040 \\ Hulaliu & height & chi & mu & 0.5 & 0.001 & 0.000 & 0.000 \\ Hulaliu & height & chi & sigma & 2 & 0.003 & 0.000 & -0.000 \\ Hulaliu & height & chi & dof & 1.01 & 0.185 & 0.001 & 0.002 \\ Saparua & arrival & normal & mean & 45 & 19.264 & 0.096 & 0.716 \\ Saparua & arrival & normal & std & 5 & 1.280 & 0.006 & -0.163 \\ Saparua & height & normal & mean & 5 & 9.085 & 0.045 & 0.009 \\ Saparua & height & normal & std & 1 & 0.869 & 0.004 & -0.005 \\ Saparua & inundation & normal & mean & 125 & 2.905 & 0.015 & 0.005 \\ Saparua & inundation & normal & std & 40 & 0.178 & 0.001 & -0.003 \\ Kulur & height & normal & mean & 3 & 0.199 & 0.001 & 0.038 \\ Kulur & height & normal & std & 1 & 0.362 & 0.002 & -0.050 \\ Ameth & height & normal & mean & 3 & 0.351 & 0.002 & 0.043 \\ Ameth & height & normal & std & 1 & 0.409 & 0.002 & -0.046 \\ Amahai & height & normal & mean & 3.5 & 4.107 & 0.021 & 0.014 \\ Amahai & height & normal & std & 1 & 0.784 & 0.004 & -0.001 \\ \hline \end{tabular} \end{table} To gauge the effect of such perturbations on estimates of earthquake characteristics, we use the following bound on the expected value of an observable $f$ with respect to model $Q$ in terms of the values of $f$ according to model $P$ from \cite[Equation 2.11]{dupuis2016path}: \begin{equation}\label{eq:dupuis211} \begin{aligned} \sup_{c>0} &\left( -\frac{1}{c} \log \Exp_{P} \left[e^{-c(f -\Exp_{P} [f])}\right] - \frac{1}{c} \RE (Q || P ) \right)\\ &\qquad \le \Exp_{Q}[f] - \Exp_{P}[f] \le \inf_{c>0} \left( \frac{1}{c} \log \Exp_{P}\left[e^{c(f -\Exp_{P} [f])}\right] + \frac{1}{c} \RE (Q || P ) \right). \end{aligned} \end{equation} Here $\Exp_{P}$ and $\Exp_{Q}$ are the expected values according to $P$ and $Q$, respectively, and $Q$ is assumed to be absolutely continuous with respect to $P$ (that is, $P(A)=0 \implies Q(A)=0$ for any event $A$). By letting $P$ be the posterior measure, we can estimate the uncertainty in observables with respect to other similar measures/models $Q$. The expressions inside the $\sup$ and $\inf$ in \eqref{eq:dupuis211} can be differentiated to find the value of $c$ that provides the optimal bound for a given value of $\RE (Q || P )$. When $f$ is bounded, the equations also give a global upper and lower bound $\Exp_Q[f]$. Details of these derivations and a plot of the resulting bounds in terms of $\RE (Q || P )$ are reserved for \cref{app:dupuis211}. By combining these bounds with \eqref{eq:dupuis235}, we can estimate bounds on our estimates of earthquake parameters for perturbations of observation distributions. 
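The optimization over $c$ in \eqref{eq:dupuis211} is straightforward to carry out numerically once posterior samples are available. The following sketch, in Python/NumPy, indicates one way to estimate the optimized upper and lower bounds for a scalar observable by Monte Carlo; the synthetic samples and the relative-entropy value in the example are placeholders, and a simple grid search over $c$ is used rather than the root-finding relations of \cref{app:dupuis211}.

\begin{verbatim}
# Monte Carlo estimate of the bounds in Eq. (2.11) of dupuis2016path for a
# scalar observable f, given samples from the posterior P and a
# relative-entropy budget R = RE(Q || P).  The inputs are placeholders.
import numpy as np

def expectation_bounds(f_samples, rel_entropy, c_grid=None):
    f = np.asarray(f_samples, dtype=float)
    f_tilde = f - f.mean()                       # f - E_P[f]
    if c_grid is None:
        c_grid = np.logspace(-3, 2, 300)         # candidate values of c > 0
    # Upper bound:  inf_c  (1/c) log E_P[exp(+c f~)] + R/c
    upper = min(np.log(np.mean(np.exp(c * f_tilde))) / c + rel_entropy / c
                for c in c_grid)
    # Lower bound:  sup_c -(1/c) log E_P[exp(-c f~)] - R/c
    lower = max(-np.log(np.mean(np.exp(-c * f_tilde))) / c - rel_entropy / c
                for c in c_grid)
    return f.mean() + lower, f.mean() + upper    # bounds on E_Q[f]

# Illustration with synthetic "posterior samples" of magnitude:
rng = np.random.default_rng(0)
fake_magnitudes = rng.normal(8.8, 0.1, size=70_000)
print(expectation_bounds(fake_magnitudes, rel_entropy=0.07))
\end{verbatim}

For a bounded observable, these bounds saturate as $c \to \infty$ at the essential infimum and supremum derived in \cref{app:dupuis211}.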
To model the worst-case scenario, we assume perturbation in the direction of the first singular vector of $\mathcal{I}$; this perturbation, which is primarily made up of changes to the arrival times in Banda Neira and Saparua, will produce the largest sensitivity for a given norm $\| v \|_2$. The resulting bounds are shown in \cref{fig:dupuis:211}. We see that even for relatively large perturbations, we get relatively narrow bounds on posterior estimates. For example, even with a 25\% perturbation\footnote{Note that as the size of the perturbation grows, the approximation in \eqref{eq:dupuis235} may break down. In this case, we refer the reader to \cref{fig:dupuis:211:re}, where the $x$-axis is in terms of relative entropy.} in the most sensitive direction, the expected value of magnitude according to the perturbed posterior distribution would be between 8.7 and 9.0 -- a very large earthquake in any case. These narrow bounds are an encouraging sign that the posterior measure is robust to small changes in the choice of observation distributions -- i.e., that our Bayesian approach is quite robust to the way that we formulated our likelihood function. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figures/dupuis211_sv1.pdf} \caption[Bounds on Parameter Estimates]{Bounds on mean parameter values in terms of relative perturbation in the first singular vector of the Fisher information matrix (see the last column of \cref{tab:perturb}). Upper and lower bounds are in green and red, respectively. The posterior mean is in blue } \label{fig:dupuis:211} \end{figure} Finally, \cite[Equation 2.39]{dupuis2016path} gives bounds on the sensitivity of estimates of observables due to changes in the likelihood: \begin{align}\label{eq:dupuis239} \left| S_{f,v}\left( P^\theta \right) \right| \le \sqrt{ \Var_{P^\theta}(f) } \sqrt{ v^T \mathcal{I}\left( P^\theta \right)v }. \end{align} Here $f$ is an observable, which we will consider to be our six earthquake parameters, $\Var$ denotes variance, and the sensitivity bound $S_{f,v}$ is the approximate derivative of $\Exp_{P^\theta}[f]$ with respect to perturbation of $\theta$ in the direction of $v$. \eqref{eq:dupuis239} shows that the greatest sensitivity will occur when the perturbation $v$ heavily weights likelihood parameters $\theta$ that most affect the posterior (the second term) and earthquake parameters $f$ have the most uncertainty in the posterior (the first term). To estimate the worst-case scenario, we again assume here that the perturbation $v$ is along the first singular vector of $\mathcal{I}$. The sensitivity bounds associated with a $10\%$ relative perturbation in this direction are presented in \cref{tab:sensitivities}. Among earthquake parameters, the greatest sensitivities were associated with depth offset because, as shown in \cref{fig:post:mag}, it had the widest distribution according to the posterior measure. This also indicates that depth offset is the least certain inferred parameter for this earthquake, as opposed to the small sensitivity for the magnitude and $\Delta \log L$ and $\Delta \log W$ which indicate that the inferred values of these 3 parameters representing the size of the earthquake are quite certain. \begin{table}[!h] \centering \caption{Posterior variance and sensitivity bound according to \eqref{eq:dupuis239} by earthquake parameter. 
Sensitivity bound is for relative perturbation of $10\%$ in the direction of the first (worst-case) singular vector of the Fisher Information matrix.} \label{tab:sensitivities} \footnotesize \begin{tabular}{@{}lrr@{}} \hline Parameter & Variance & Sensitivity Bound \\ \hline Latitude & 0.066 & 0.135 \\ Longitude & 0.040 & 0.105 \\ Magnitude & 0.008 & 0.046 \\ $\Delta$ log L & 0.011 & 0.054 \\ $\Delta$ log W & 0.006 & 0.041 \\ Depth Offset & 14.483 & 1.997 \\ \hline \end{tabular} \end{table} \section{Discussion} \label{sec:discussion} \subsection{Methodology} \label{sec:disc:meth} The results for the 1852 Banda Arc earthquake and tsunami show the promise of the described methodology: even though the historical accounts of the tsunami are textual in nature and therefore individually prone to much uncertainty, it nevertheless appears that taken together they can be used to determine key characteristics of the causal earthquake. The approach is similar to the ``ad hoc'' approach described in \cref{sec:traditional}, but with a more reproducible and rigorous set of assumptions, more comprehensive coverage of possible events via automation, and a clearer characterization of uncertainty in the results. The strategy outlined in \cref{sec:llh} can readily be applied to any number of historical seismic events, but also to any other problem of inversion from textual accounts similar in nature to those described in \cref{sec:data}, or indeed from any other data that is similarly ``small'' -- sparse and riddled with uncertainty -- so long as a reasonably believable forward model is available with parameters on which we can formulate a suitable prior distribution. \subsection{Software Package} \label{sec:disc:sw} To fully document the methodology and ease application to additional historical seismic events, the approach described in this paper has been packaged into a Python library called \texttt{tsunamibayes}. The package is open-source and available on GitHub: \url{https://github.com/jpw37/tsunamibayes}. Since each historical scenario may have a unique interpretation as a Bayesian inference problem -- e.g., different parameters/priors, a modified/generalized forward model, additional types of observations -- the core code of the module does not assume particular features, but rather provides a suite of tools that can be recombined or modified to suit the needs of the user. A further description of the software package is available in \cite[Chapter 7]{ringer2020method}. Datasets for this research are available in these in-text data citation references: \cite{zenodov1}. \subsection{Future Work} \label{sec:disc:future} Any reconstruction of historical events raises the question ``How do we know if the result is right?'' In particular, for the Bayesian approach described here, there is inherent uncertainty in the likelihood distributions stemming from the historical record. One avenue of future research will therefore be to supplement the theoretical results presented in \cref{sec:errorbounds} with a numerical study of the robustness of the posterior to changing interpretations of the likelihood. A second effort will validate the approach by using it to ``reconstruct'' a modern event for which the truth is known from instrumental data and for which plentiful newspaper and historical accounts are available.
Finally, there are a number of methodological refinements, e.g., to the sampling approach, that might yield faster or better resolved results, and there are dozens of other historical earthquakes of interest in the Wichmann catalog ready for reconstruction that will improve modern understanding of seismic risk. \begin{appendix} \section{Derivation of the Fisher Information Matrix}\label{app:fim} In this appendix we derive the Fisher Information Matrix (FIM) $\mathcal{I}$ associated with a parameterization of the posterior measure given in \eqref{eq:bayes}. The FIM associated with parameter $\theta$ is given by \begin{align} \label{eq:fim} \mathcal{I}(P^{(\theta)}) = \int \nabla_\theta \log p^{(\theta)}(x) \left( \nabla_\theta \log p^{(\theta)}(x) \right)^T P^{(\theta)}(dx) \end{align} where $P^{(\theta)}$ is the posterior measure and $p^{(\theta)}$ its associated density given by (see \eqref{eq:bayes} and \eqref{eq:llh}) \begin{align*} \mps^{(\prm)}(x)dx = \frac{1}{Z^{(\prm)}} \mpr^{(\prm)}(x)\mob^{(\prm)}\left( \fwd(x) \right)dx. \end{align*} Since the focus of this paper is on modeling of historical observations via observation distributions, we will focus on the case where $\theta$ describes the observation distributions. In this case, we have \begin{align*} \mathcal{I}_{ij}(P^{(\theta)}) = \Cov_{P^{(\theta)}}\left[ \frac{\partial}{\partial \theta_i} \pot^{(\prm)}, \frac{\partial}{\partial \theta_j} \pot^{(\prm)} \right] \end{align*} where $\Cov_{P^{(\theta)}}$ is the covariance according to the posterior and $\pot^{(\prm)}$ is the negative log-likelihood given by \begin{align*} \pot^{(\prm)}(x) &:= -\log \mob^{(\prm)}\left( \fwd(x) \right). \end{align*} Thus, to compute the FIM, we compute the derivative of $\pot^{(\prm)}$ with respect to each observation parameter (the ``score'') and then compute the covariance of each pair of scores, which we approximate using the observations associated with the approximate posterior samples generated as described in \cref{sec:results}. Since the individual distributions making up $\mob$ are assumed to be independent as described in \cref{sec:1852llh}, the derivatives can be computed separately for each observation distribution. We now consider each type of observation distribution listed in \cref{tab:perturb}. \subsection*{Normal Distribution} For a normal distribution with mean $\mu$ and standard deviation $\sigma$, we have \begin{align*} \pot^{(\prm)}(x) = \frac{1}{2\sigma^2} \left[ \fwd(x) - \mu \right]^2 + \ln \sigma + \frac{1}{2}\ln (2\pi) \end{align*} Then the derivatives with respect to parameters $\mu$ and $\sigma$ are given by: \begin{align*} \frac{\partial}{\partial \mu} \pot^{(\prm)}(x) &= -\frac{1}{\sigma^2} \left[ \fwd(x) - \mu \right] \\ \frac{\partial}{\partial \sigma} \pot^{(\prm)}(x) &= -\frac{1}{\sigma^3} \left[ \fwd(x) - \mu \right]^2 + \frac{1}{\sigma}. \end{align*} \subsection*{Skew-Norm Distribution} For a skew-normal distribution with location $\mu$, scale $\sigma$, and skew $a$, we have \begin{align*} \Phi(x) &= \frac{1}{2}\tilde{x}(x)^2 + \ln \sigma + \frac{1}{2}\ln (2\pi) - \ln \left[ 1+\text{erf}\left(z(x)\right) \right] \end{align*} where $\text{erf}$ is the error function and $\tilde{x},z$ are given by \begin{align*} \tilde{x}(x) &:= \frac{\fwd(x)-\mu}{\sigma} \quad \text{and} \quad z(x) :=\frac{a\tilde{x}(x)}{\sqrt{2}}. 
\end{align*} Then the derivatives with respect to $z$ and $\tilde{x}$ are given by \begin{align*} \frac{\partial}{\partial z}\Phi(x) &= -\frac{2}{\sqrt{\pi}}\frac{e^{-z(x)^2}}{1+\text{erf}\left(z(x)\right)} \\ \frac{\partial}{\partial \tilde{x}}\Phi(x) &= \tilde{x}(x) + \frac{\partial \Phi}{\partial z}(x)\frac{\partial z}{\partial \tilde{x}}(x) = \tilde{x}(x) - \sqrt{\frac{2}{\pi}}a\frac{e^{-z(x)^2}}{1+\text{erf}\left(z(x)\right)}, \end{align*} so that the derivatives with respect to parameters $a$, $\mu$, and $\sigma$ are \begin{align*} \frac{\partial}{\partial a}\Phi(x) &= \frac{\partial \Phi}{\partial z}(x)\frac{\partial z}{\partial a}(x) = -\sqrt{\frac{2}{\pi}}\tilde{x}(x)\frac{e^{-z(x)^2}}{1+\text{erf}\left(z(x)\right)} \\ \frac{\partial}{\partial \mu}\Phi(x) &= \frac{\partial \Phi}{\partial \tilde{x}}(x)\frac{\partial \tilde{x}}{\partial \mu}(x) = -\frac{1}{\sigma}\left[ \tilde{x}(x) - \sqrt{\frac{2}{\pi}}a\frac{e^{-z(x)^2}}{1+\text{erf}\left(z(x)\right)} \right] \\ \frac{\partial}{\partial \sigma}\Phi(x) &= \frac{1}{\sigma} + \frac{\partial \Phi}{\partial \tilde{x}}(x)\frac{\partial \tilde{x}}{\partial \sigma}(x) = \frac{1}{\sigma}\left[ 1 - \tilde{x}(x)^2 + \frac{2z(x)}{\sqrt{\pi}}\frac{e^{-z(x)^2}}{1+\text{erf}\left(z(x)\right)} \right]. \end{align*} Here the terms $\tilde{x}(x)$ and $1/\sigma$ account for the explicit dependence of $\Phi$ on $\tilde{x}$ and $\sigma$, respectively, while the remaining terms arise from the chain rule through $z$ and $\tilde{x}$. \subsection*{Chi Distribution} For the Chi distribution with location $\mu$, scale $\sigma$, and degrees of freedom $k$, we have \begin{align*} \Phi(x) &= \frac{1}{2}\tilde{x}(x)^2 + \ln \sigma + \left(\frac{k}{2} - 1\right)\ln 2 + \ln \Gamma \left( k/2 \right) - (k-1)\ln \tilde{x}(x), \end{align*} where $\Gamma$ is the gamma function and $\tilde{x}$ is given by \begin{align*} \tilde{x}(x) := \frac{\fwd_i(x)-\mu}{\sigma}. \end{align*} The derivative with respect to $\tilde{x}$ is given by \begin{align*} \frac{\partial \Phi}{\partial \tilde{x}} (x) &= \tilde{x}(x) - (k-1)\tilde{x}(x)^{-1}. \end{align*} Then the derivatives with respect to parameters $\mu$, $\sigma$, and $k$ are given by \begin{align*} \frac{\partial \Phi}{\partial \mu}(x) &= \frac{\partial \Phi}{\partial \tilde{x}} \frac{\partial \tilde{x}}{\partial \mu} = -\frac{1}{\sigma} \left( \tilde{x}(x) - (k-1)\tilde{x}(x)^{-1} \right) \\ \frac{\partial \Phi}{\partial \sigma}(x) &= \frac{1}{\sigma} + \frac{\partial \Phi}{\partial \tilde{x}} \frac{\partial \tilde{x}}{\partial \sigma} = -\frac{1}{\sigma} \left( \tilde{x}(x)^2 - k \right) \\ \frac{\partial \Phi}{\partial k}(x) &= \frac{1}{2}\ln 2 + \frac{1}{2}\psi \left( k/2 \right) - \ln \tilde{x}(x), \end{align*} where $\psi$ is the digamma function. \section{Derivation of Bounds in Terms of Relative Entropy}\label{app:dupuis211} In this section, we derive from \eqref{eq:dupuis211} more explicit bounds on $\Exp_{Q} [f]$ -- first in terms of $\RE(Q||P)$ and then independent of $\RE(Q||P)$. These bounds were used to generate \cref{fig:dupuis:211:re}, below, which shows the bounds on estimates of parameters of the 1852 Banda Arc earthquake in terms of $\RE(Q||P)$. These bounds were then combined with the estimate from \eqref{eq:dupuis235} to produce the plots in terms of relative parameter value changes shown in \cref{fig:dupuis:211}. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figures/dupuis211.pdf} \caption[Bounds on Parameter Estimates]{Bounds on mean parameter values by relative entropy from computed posterior. Upper and lower bounds are in green and red, respectively.
The posterior mean is in blue and the estimated uniform upper and lower bounds are in purple and yellow, respectively.} \label{fig:dupuis:211:re} \end{figure} \subsection{Optimal Bound for a Given $\RE(Q||P)$} Here we derive a relationship between the relative entropy $\RE(Q||P)$ and the $c < \infty$ for which the bounds given by \eqref{eq:dupuis211} are achieved. We assume for now that such a $c$ exists; the case where the supremum/infimum are achieved as $c \to \infty$ is discussed in the next subsection. Denoting $\tilde{f} = f -\Exp_{P} [f]$ and differentiating the right hand side of \eqref{eq:dupuis211} with respect to $c$ and setting the result equal to zero yields that the infimum of the upper bound is achieved when $c$ satisfies \begin{align} \label{eq:dupuis211:Rub} \RE (Q || P ) &= c\frac{\Exp_{P}\left[\tilde{f} e^{c\tilde{f}}\right]}{\Exp_{P}\left[e^{c\tilde{f}}\right]} - \log \Exp_{P}\left[e^{c\tilde{f}}\right] = c\frac{E_2(f,c)}{E_1(f,c)} - \log E_1(f,c) \end{align} and similarly for the lower bound $c$ must satisfy \begin{align} \label{eq:dupuis211:Rlb} \RE (Q || P ) &= -c \frac{\Exp_{P}\left[\tilde{f} e^{-c\tilde{f}}\right]}{\Exp_{P}\left[e^{-c\tilde{f}}\right]} - \log \Exp_{P}\left[e^{-c\tilde{f}}\right] = -c \frac{E_2(f,-c)}{E_1(f,-c)} - \log E_1(f,-c) \end{align} where to simplify notation we have defined \begin{align*} E_1(f,c) = \Exp_{P}\left[e^{c\tilde{f}}\right] \quad \text{and} \quad E_2(f,c) = \Exp_{P}\left[\tilde{f} e^{c\tilde{f}}\right]. \end{align*} Denoting the $c$ achieving these upper and lower bounds by $c_{+}$ and $c_{-}$, respectively, and plugging these values back into \eqref{eq:dupuis211} yields the bounds \begin{align} \label{eq:dupuis211:R} \frac{E_2(f,-c_{-})}{E_1(f,-c_{-})} \le \Exp_{Q}[f] - \Exp_{P}[f] \le \frac{E_2(f,c_{+})}{E_1(f,c_{+})}. \end{align} It is not clear how to invert \eqref{eq:dupuis211:Rub} and \eqref{eq:dupuis211:Rlb} to find the optimal $c$ for a given $\RE (Q || P )$, so to generate \cref{fig:dupuis:211:re} (and, by extension, \cref{fig:dupuis:211}) we generate a list of $c$ values, plug them into \eqref{eq:dupuis211:Rub} to find the $\RE (Q || P )$ for which they achieve the optimal upper bound and into \eqref{eq:dupuis211:R} to compute the associated upper bounds (analogously for the lower bounds), and plot those values against each other. \subsection{Bounds Independent of $\RE(Q || P)$} In this section, we consider the case where the bounds in \eqref{eq:dupuis211} are achieved as $c \to \infty$, i.e., are independent of $\RE(Q || P)$. From \eqref{eq:dupuis211}, we have for any $Q \ll P$ \begin{align*} \Exp_{Q}[f] - \Exp_{P}[f] &\le \inf_{c>0}\left( \frac{1}{c} \log \Exp_{P}\left[e^{c(f -\Exp_{P} [f])}\right] + \frac{1}{c} \RE (Q || P )\right). \end{align*} Then clearly if \begin{align}\label{eq:ub:finite} U := \lim_{c\to\infty} \frac{1}{c} \log \Exp_{P}\left[e^{c(f -\Exp_{P} [f])}\right] < \infty, \end{align} then, since the infimum over $c>0$ is bounded above by this limit, for any $Q \ll P$ we have \begin{align*} \Exp_{Q}[f] - \Exp_{P}[f] &\le \lim_{c\to\infty}\left\{ \frac{1}{c} \log \Exp_{P}\left[e^{c(f -\Exp_{P} [f])}\right] + \frac{1}{c} \RE (Q || P )\right\} = U.
\end{align*} Finally, we note that since the logarithm and exponential are continuous, we have \begin{align*} U &= \lim_{c\to\infty} \frac{1}{c} \log \Exp_{P}\left[e^{c(f -\Exp_{P} [f])}\right] = \lim_{c\to\infty} \log \left( \Exp_{P}\left[e^{c(f -\Exp_{P} [f])}\right] \right)^{1/c} \\ &= \log \lim_{c\to\infty} \left( \Exp_{P}\left[e^{c(f -\Exp_{P} [f])}\right] \right)^{1/c} = \log \left\| e^{f -\Exp_{P} [f]}\right\|_{\infty} = \mathrm{sup\,ess}_P [f] -\Exp_{P} [f]. \end{align*} An analogous relationship will hold for the lower bound. Thus if $f$ is an essentially bounded random variable according to $P$, we have the following bound for any $Q \ll P$: \begin{align*} \mathrm{inf\,ess}_P [f] \le \Exp_{Q}[f] \le \mathrm{sup\,ess}_P [f]. \end{align*} \end{appendix} \section*{Acknowledgements} The authors acknowledge the Office of Research Computing at BYU (\url{http://rc.byu.edu}) and Advanced Research Computing at Virginia Tech (\url{http://www.arc.vt.edu}) for providing computational resources and technical support that have contributed to the results reported within this paper. JPW and RH would like to thank the Office of Research and Creative Activities at BYU for supporting several of the students' efforts on this project through a Mentoring Environment Grant, as well as for generous support from the College of Physical and Mathematical Sciences and the Mathematics and Geology Departments. We also acknowledge the visionary support of Geoscientists Without Borders. JPW was partially supported by the Simons Foundation under travel grant 586788. NEGH was also partially supported by NSF Grant DMS-1816551. We also thank G. Simpson for pointing us toward the theoretical results that ultimately yielded \cref{sec:errorbounds}; J. Guinness and R. Gramacy for helpful feedback on an early presentation of this work; as well as S. Giddens, C. Ashcraft, G. Carver, A. Robertson, M. Harward, J. Fullwood, K. Lightheart, R. Hilton, A. Avery, C. Kesler, M. Morrise, M. H. Klein, and many other students at BYU who participated in the setup of the inverse problem for the 1852 event. \addcontentsline{toc}{section}{References} \bibliographystyle{plain}
\section{Motivation: dianalytic vector bundles over a Klein surface}\label{intro} We start by recalling the fundamentals of Klein surfaces (\textit{e.g.} \cite{AG}). A function $f:U\to\mathbb{C}$ defined on an open subset of $\mathbb{C}$ is called \textbf{dianalytic in} $U$ if it is either holomorphic or anti-holomorphic (meaning respectively that $\overline{\partial}{f}=0$ and that $\partial{f}=0$) on any given connected component of $U$. Denote $\ov{\mathcal{H}}=\{z\in\mathbb{C}\ |\ \mathrm{Im}\, z \geq 0\}$ the closed upper half-plane of $\mathbb{C}$, with the induced topology, and denote $\mathcal{H}$ the open upper half-plane. Let $U$ be an open subset of $\ov{\mathcal{H}}$. A function $f:U\to\mathbb{C}$ is called \textbf{dianalytic on} $U$ if it is continuous on $U$, dianalytic in $U\cap \mathcal{H}$ and satisfies $f(U\cap\mathbb{R})\subset \mathbb{R}$. We can consider the restrictions of such functions to any open subset $V$ of $\ov{\mathcal{H}}$ such that $V\subset U$ and this defines a sheaf of functions on any open subset $U\subset\ov{\mathcal{H}}$, denoted $\mathcal{D}_U$, called the sheaf of dianalytic functions on $U$. A \textbf{Klein surface} is a topological space $S$ together with a subsheaf of the sheaf of continuous functions making $S$ locally isomorphic as a ringed space to a space $(U,\mathcal{D}_U)$ for some open subset $U$ of $\ov{\mathcal{H}}$. In other words, $S$ is a topological surface (possibly non-orientable and with non-empty boundary) which admits a covering by open sets homeomorphic to open subsets of $\ov{\mathcal{H}}$ and such that the associated transition maps are dianalytic. A homomorphism between two Klein surfaces $S_1$ and $S_2$ is a continuous mapping $f:S_1\to S_2$ which is dianalytic in local charts (in particular, it sends the boundary of $S_1$ to the boundary of $S_2$). One may observe that the topological requirement that the boundary of $S_1$ should be sent to the boundary of $S_2$ is the reason for the condition $f(U\cap\mathbb{R})\subset \mathbb{R}$ in the definition of a dianalytic function on an open subset $U$ of $\ov{\mathcal{H}}$.\\ When the open set $U\subset\ov{\mathcal{H}}$ is connected, then, by the Schwarz reflection principle, the functions $f\in\mathcal{D}_{\ov{\mathcal{H}}}(U)$ are in bijective correspondence with the holomorphic functions $F$ defined on the symmetric open set $U\cup\ov{U}$ of $\mathbb{C}$ satisfying $F|_{U}=f$ and $F(\ov{z})=\ov{F(z)}$ for all $z\in U\cup\ov{U}$ (in particular, $F$ sends $U\cap\mathbb{R}$ to $\mathbb{R}$). Observe that $U\cup\ov{U}$ is connected if, and only if, $U\cap\mathbb{R}$ is non-empty, and that this is the case where we actually need the Schwarz reflection principle. This property is the basis for the existence of the \textbf{complex cover} of a Klein surface $S$, which, by definition, is a Riemann surface $M$ together with an anti-holomorphic involution $\sigma$ and a map $p:M\to S$ inducing a homeomorphism from $M/\sigma$ to $S$ (in fact, an isomorphism of Klein surfaces, for the natural Klein structure on $M/\sigma$). If $(M',\sigma',p')$ is another complex cover of $S$, then there is a biholomorphic bijection $f:M\to M'$ such that $\sigma'= f\sigma f^{-1}$ (see \cite{AG}, section 1.6), which is why we shall henceforth speak of \textit{the} complex cover of $S$.
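For concreteness, two standard examples may help fix ideas (they are not needed in the sequel). Taking $M=\mathbb{P}^1(\mathbb{C})$ with the fixed-point-free anti-holomorphic involution $\sigma(z)=-1/\ov{z}$, the quotient $M/\sigma$ is the real projective plane, a compact non-orientable surface without boundary, and $(M,\sigma,p)$ is its complex cover. Taking instead $\sigma(z)=\ov{z}$ on $M=\mathbb{P}^1(\mathbb{C})$, the quotient is a closed disc, whose boundary is the image of the fixed-point set $\mathbb{P}^1(\mathbb{R})$ of $\sigma$.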
An important point for us in the study of vector bundles on compact topological surfaces \textit{which do not admit a Riemann surface structure} is that any compact topological surface (possibly non-orientable and with non-empty boundary) may be endowed with a Klein surface structure (see \cite{AG}, section 1.5). We now briefly recall the construction of the complex cover of a Klein surface, as it is instructive and will be used later to pull back dianalytic bundles over $S$ to holomorphic bundles over $M$. Let $(U_{\tau},\varphi_{\tau}:U_{\tau}\overset{\simeq}{\longrightarrow} V_{\tau}\subset \ov{\mathcal{H}})_{\tau\in T}$ be a dianalytic atlas of $S$ and form the open sets $W_{\tau} = V_{\tau} \cup \ov{V_{\tau}} \subset \mathbb{C}$ and the topological space $\Omega:=\bigsqcup_{\tau\in T} W_{\tau}$, with the final topology (each $W_{\tau}$ is open in $\Omega$). For simplicity, we assume that $U_{\tau}\cap U_{\tau'}$ is connected for all $\tau,\tau'$. As $(U_{\tau},\varphi_{\tau})_{\tau\in T}$ is a dianalytic atlas, the transition map $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ from $V_{\tau'}$ to $V_{\tau}$ is either holomorphic or anti-holomorphic. We first treat the case where it is anti-holomorphic. In this case, we glue $\varphi_{\tau'}(U_{\tau} \cap U_{\tau'}) \subset V_{\tau'}$ to $\ov{\varphi_{\tau}}(U_{\tau} \cap U_{\tau'})\subset \ov{V_{\tau}}$, and $\ov{\varphi_{\tau'}}(U_{\tau} \cap U_{\tau'}) \subset \ov{V_{\tau'}}$ to $\varphi_{\tau}(U_{\tau} \cap U_{\tau'}) \subset V_{\tau}$. \begin{figure}[ht] \centerline{ \begin{pspicture}(-3,-3)(3,3) \pscircle(-0.8,1.5){1.3} \rput(-2.5,1.5){$V_{\tau'}$} \pscircle(0.8,1.5){1.3} \rput(2.5,1.5){$\overline{V_{\tau}}$} \pscircle(-0.8,-1.5){1.3} \rput(-2.5,-1.5){$\overline{V_{\tau'}}$} \pscircle(0.8,-1.5){1.3} \rput(2.5,-1.5){$V_{\tau}$} \end{pspicture} } \caption{The open set $[W_{\tau}]\cup[W_{\tau'}]$ of $M$ when $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ is anti-holomorphic.} \end{figure} \noindent Had $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ been holomorphic, we would have glued $\varphi_{\tau'}(U_{\tau} \cap U_{\tau'})\subset V_{\tau'}$ to $\varphi_{\tau}(U_{\tau} \cap U_{\tau'})\subset V_{\tau}$ and $\ov{\varphi_{\tau'}}(U_{\tau} \cap U_{\tau'})\subset\ov{V_{\tau'}}$ to $\ov{\varphi_{\tau}}(U_{\tau} \cap U_{\tau'})\subset\ov{V_{\tau}}$. This defines an equivalence relation $\mathcal{R}$ on $\Omega$ and we set $M:=\Omega/\mathcal{R}$, with the quotient topology. Let $[W_{\tau}]$ denote the image of $W_{\tau}\subset\Omega$ under the canonical projection and consider the map $$z_{\tau}: \begin{array}{rcl} [W_{\tau}] & \longrightarrow & W_{\tau}\\ \lbrack x \rbrack & \longmapsto & x\end{array}.$$ Each $z_{\tau}$ is a homeomorphism and the transition maps $z_{\tau}\circ z_{\tau'}^{-1}$ are holomorphic. Indeed, if for instance $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ is anti-holomorphic, then, by definition of the gluing on $\Omega$, the map $z_{\tau}\circ z_{\tau'}^{-1}:z_{\tau'}([W_{\tau}]\cap[W_{\tau'}]) \to z_{\tau}([W_{\tau}]\cap [W_{\tau'}])$ is the map $$v\longmapsto \left\{\begin{array}{rcl} \ov{\varphi_{\tau}\circ\varphi_{\tau'}^{-1}}(v) & \mathrm{if} & v\in V_{\tau'},\\ \varphi_{\tau}\circ\varphi_{\tau'}^{-1}(\ov{v}) & \mathrm{if} & v\in\ov{V_{\tau'}}\end{array},\right.$$ and this map is holomorphic (this requires the use of the Schwarz reflection principle precisely when $V_{\tau} \cap V_{\tau'}$ intersects $\mathbb{R}$ in $\ov{\mathcal{H}}$). The case where $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ is holomorphic is similar.
So we have a holomorphic atlas $([W_{\tau}],z_{\tau})_{\tau \in T}$ on $M$. The anti-holomorphic involution $\sigma$ on $M$ is just complex conjugation in the local charts $([W_{\tau}],z_{\tau})$ and the projection $p:M\to S$ takes $[x]=[a+ib]\in[W_{\tau}]$ to $a+i|b|\in V_{\tau}$.\\ We now consider a fixed Klein surface $S$ and denote $(M,\sigma,p)$ its complex cover. As earlier, we denote $(U_{\tau},\varphi_{\tau})_{\tau\in T}$ a dianalytic atlas of $S$ and set $V_{\tau}=\varphi_{\tau}(U_{\tau})\subset\ov{\mathcal{H}}$. \begin{definition}[Dianalytic vector bundle] A \textbf{dianalytic vector bundle} $Q$ over $S$ is a topological complex vector bundle for which there exists a family $(U_{\tau},\psi_{\tau})_{\tau\in T}$ of local trivialisations whose associated $1$-cocycle of transition maps, denoted $$\big(h_{\tau\tau'}: U_{\tau} \cap U_{\tau'} \to \mathrm{GL}_r(\mathbb{C})\big)_{\tau,\tau'},$$ is dianalytic. In particular, the map $h_{\tau\tau'}$ sends $(U_{\tau} \cap U_{\tau'})\cap\partial{S}$ to $\mathrm{GL}_r(\mathbb{R})$. \end{definition} \noindent Saying that the map $h_{\tau\tau'}:U_{\tau} \cap U_{\tau'} \to \mathrm{GL}_r(\mathbb{C})\subset\mathfrak{gl}_r(\mathbb{C})$ is dianalytic means, by definition, that its $r^2$ components are either simultaneously holomorphic or simultaneously anti-holomorphic. This implies that a dianalytic bundle is in particular a dianalytic manifold. Accordingly, we define a homomorphism between two dianalytic bundles $Q$ and $Q'$ over $S$ to be a homomorphism $f:Q\to Q'$ of topological complex vector bundles which is dianalytic in local charts. To clarify the notation, let us mention that we think of $h_{\tau\tau'}$ as the transition map from $U_{\tau'}$ to $U_{\tau}$, associated to the map $\psi_{\tau}\circ\psi_{\tau'}^{-1}$ by the relation $$\psi_{\tau}\circ\psi_{\tau'}^{-1}(x,\eta) = \big(x,h_{\tau\tau'}(x).\eta\big)$$ for all $(x,\eta)\in(U_{\tau} \cap U_{\tau'})\times\mathbb{C}^r$. We shall now show that the pulled back bundle $(p^*Q\to M)$ is a holomorphic bundle. First, a definition. If $(E\to M)$ is any complex vector bundle over a topological space $M$, the vector bundle $(\ov{E}\to M)$ is defined to be the complex vector bundle over $M$ whose fibres are the fibres of $E$ with the complex structure defined by multiplication by $-i$ (called the conjugate complex structure). Alternatively, if $(g_{\tau\tau'})_{\tau,\tau'}$ is a \textit{smooth} $1$-cocycle representing $E$, then $\ov{E}$ is represented by $(\ov{g_{\tau\tau'}})_{\tau,\tau'}$. The choice of a Hermitian metric on $E$ gives a \textit{complex linear} isomorphism $\ov{E} \simeq E^*$. In particular, $\mathrm{deg}(\ov{E}) = -\mathrm{deg}(E)$. We now proceed with the construction of $p^*Q$. Consider the local charts $Q|_{U_{\tau}}\simeq V_{\tau} \times \mathbb{C}^r$ of $Q$, and form the product bundles $V_{\tau} \times \mathbb{C}^r$ and $\ov{V_{\tau}} \times \mathbb{C}^r$. The transition maps of $Q$ are the maps $$\begin{array}{rcl} \varphi_{\tau'}(U_{\tau} \cap U_{\tau'}) \times \mathbb{C}^r & \longrightarrow & \varphi_{\tau}(U_{\tau} \cap U_{\tau'})\times\mathbb{C}^r \\ (v,\eta) & \longmapsto & \big(\varphi_{\tau}\circ\varphi_{\tau'}^{-1}(v), h_{\tau\tau'}\big(\varphi_{\tau'}^{-1}(v)\big).\eta\big)\end{array}.$$ By definition of a cocycle of transition maps, the dianalyticity requirement on $h_{\tau\tau'}$ means that $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ and $h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}$ are either simultaneously holomorphic or simultaneously anti-holomorphic.
Let us first consider the case where they are both anti-holomorphic. Then, as in the construction of the complex cover of $S$, we set, for $(w,\eta)\in z_{\tau'}([W_{\tau}]\cap[W_{\tau'}])\times\mathbb{C}^r$, $$\big(w,\eta\big) \sim \left\{\begin{array}{rcl} \big(\ov{\varphi_{\tau}\circ\varphi_{\tau'}^{-1}}(w),\ \ov{h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}}(w).\eta\big) & \mathrm{if} & w\in V_{\tau'}, \\ \big(\varphi_{\tau}\circ\varphi_{\tau'}^{-1}(\ov{w}),\ h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}(\ov{w}).\eta\big) & \mathrm{if} & w\in\ov{V_{\tau'}}\end{array}.\right.$$ Had $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ and $h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}$ been holomorphic, we would have set $$\big(w,\eta\big) \sim \left\{\begin{array}{rcl} \big(\varphi_{\tau}\circ\varphi_{\tau'}^{-1}(w),\ h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}(w).\eta\big) & \mathrm{if} & w\in V_{\tau'}, \\ \big(\ov{\varphi_{\tau}\circ\varphi_{\tau'}^{-1}}(\ov{w}),\ \ov{h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}}(\ov{w}).\eta\big) & \mathrm{if} & w\in\ov{V_{\tau'}}\end{array}.\right.$$ Note that the elements on the right are elements of $z_{\tau}([W_{\tau}]\cap[W_{\tau'}])\times\mathbb{C}^r$. This defines an equivalence relation on the bundle $\sqcup_{\tau\in T} ([W_{\tau}]\times\mathbb{C}^r) \to \sqcup_{\tau\in T} [W_{\tau}]$, and therefore a topological complex vector bundle $\mathcal{E}$ with typical fibre $\mathbb{C}^r$ over $M$ (the complex cover of $S$) with a $1$-cocycle of transition maps $$\big(g_{\tau\tau'}:[W_{\tau}]\cap[W_{\tau'}] \longrightarrow \mathrm{GL}_r(\mathbb{C})\big)_{\tau,\tau'}$$ satisfying $$g_{\tau\tau'}\circ z_{\tau'}^{-1} (w)= \left\{ \begin{array}{rcl} \ov{h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}}(w) & \mathrm{if} & w\in V_{\tau'} \\ h_{\tau\tau'}\circ\varphi_{\tau'}^{-1} (\ov{w}) & \mathrm{if} & w\in\ov{V_{\tau'}} \end{array}\right.$$ when $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ and $h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}$ are anti-holomorphic, and $$g_{\tau\tau'}\circ z_{\tau'}^{-1}(w) = \left\{ \begin{array}{rcl} h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}(w) & \mathrm{if} & w\in V_{\tau'} \\ \ov{h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}} (\ov{w}) & \mathrm{if} & w\in\ov{V_{\tau'}} \end{array}\right.$$ when $\varphi_{\tau}\circ\varphi_{\tau'}^{-1}$ and $h_{\tau\tau'}\circ\varphi_{\tau'}^{-1}$ are holomorphic. In particular, $g_{\tau\tau'}\circ z_{\tau'}^{-1}$ is always a holomorphic map (this requires the use of the Schwarz reflection principle precisely when $V_{\tau} \cap V_{\tau'}$ intersects $\mathbb{R}$ in $\ov{\mathcal{H}}$). So $(g_{\tau\tau'})_{\tau,\tau'}$ is a holomorphic cocycle for the holomorphic atlas $([W_{\tau}],z_{\tau})_{\tau\in T}$ of $M$ and the complex vector bundle it defines is indeed holomorphic. By construction, the bundle $\mathcal{E}$ thus obtained is none other than $p^*Q$, where $p:M\to S$ is the (ramified) covering map between $M$ and $S$. The construction also shows that we have a commutative diagram \begin{equation*}\begin{CD} [W_{\tau}]\times\mathbb{C}^r @>\widetilde{\sigma}>> [W_{\tau}]\times\mathbb{C}^r \\ @VVV @VVV \\ [W_{\tau}] @>\sigma>> [W_{\tau}]=\sigma([W_{\tau}]), \end{CD}\end{equation*} where $\widetilde{\sigma}$ is fibrewise $\eta\mapsto \ov{\eta}$ and covers $\sigma$. This means that $\mathcal{E}$ has a global involution $\widetilde{\sigma}:\mathcal{E}\to \mathcal{E}$ covering $\sigma$ and $\mathbb{C}$-antilinear in the fibres, so $\mathcal{E}=p^*Q$ is a \textbf{real bundle} over the \textbf{real space} $(M,\sigma)$ in the sense of Atiyah (\cite{Atiyah_real_bundles}).
\begin{definition}[Real bundle over $M$]\label{def_real_bundle} Let $M$ be a Riemann surface and let $\sigma$ be an anti-holomorphic involution of $M$. A \textbf{real holomorphic vector bundle} $\mathcal{E}\to M$ is a holomorphic vector bundle, together with an involution $\widetilde{\sigma}$ of $\mathcal{E}$ making the diagram \begin{equation*}\begin{CD} \mathcal{E} @>\widetilde{\sigma}>> \mathcal{E} \\ @VVV @VVV \\ M @>\sigma>> M \end{CD}\end{equation*} commutative, and such that, for all $x\in M$, the map $\widetilde{\sigma}|_{\mathcal{E}_x}: \mathcal{E}_x \to \mathcal{E}_{\sigma(x)}$ is $\mathbb{C}$-antilinear: $$\widetilde{\sigma}(\lambda\cdot\eta)=\ov{\lambda}\cdot\widetilde{\sigma}(\eta),\ \mathrm{for\ all}\ \lambda\in\mathbb{C}\ \mathrm{and\ all}\ \eta\in E_x.$$ A homomorphism between two real bundles $(\mathcal{E},\widetilde{\sigma})$ and $(\mathcal{E}',\widetilde{\sigma}')$ is a homomorphism $$f:\mathcal{E}\to \mathcal{E}'$$ of holomorphic vector bundles over $M$ such that $f\circ\widetilde{\sigma} = \widetilde{\sigma}' \circ f$. \end{definition} \noindent We shall often call $\widetilde{\sigma}$ the \textbf{real structure} of $\mathcal{E}$. If clear from the context, it is convenient, following the convention in \cite{Atiyah_real_bundles}, to simply write $x\mapsto \ov{x}$ for $\sigma$, and $v\mapsto\ov{v}$ for $\widetilde{\sigma}$. There are similar notions of real structures for topological, smooth, and Hermitian complex vector bundles. For Hermitian bundles for instance, one requires that $\widetilde{\sigma}$ should be a $\mathbb{C}$-antilinear, involutive isometry which covers $\sigma$. A \textbf{real section} of a real bundle $\mathcal{E}$ over $M$ is a section $s$ of $\mathcal{E}$ satisfying $s(\ov{x})=\ov{s(x)}$ for all $x\in M$. By the Schwarz reflection principle, the real sections of the real bundle $p^*Q$ constructed above are in one-to-one correspondence with the dianalytic sections of $Q$. This implies that, if two dianalytic bundles $Q$ and $Q'$ over $S$ satisfy $p^*Q\simeq p^*Q'$ as real vector bundles over $M$, then $Q\simeq Q'$ as dianalytic vector bundles over $S=M/\sigma$. We sum up this discussion with the following lemma. \begin{lemma} Let $S$ be a Klein surface and let $(M,\sigma,p)$ be its complex cover. Then the pulling back functor $p^*$ is an equivalence of categories between the category of dianalytic vector bundles over $S=M/\sigma$, and the category of real vector bundles over $M$. \end{lemma} \noindent Thus, instead of studying dianalytic vector bundles over a Klein surface, one may just as well choose to study real vector bundles over the complex cover of that Klein surface. Observe that these two equivalent categories are equivalent to a third one, the category of real algebraic vector bundles over a real algebraic curve (hence the term, real bundle). Indeed, a real algebraic curve $X/\mathbb{R}$ gives rise to a Riemann surface $M=X(\mathbb{C})$ with anti-holomorphic involution $\sigma$ induced by the complex conjugation of $\mathbb{C}$, and complexification of real algebraic bundles over $X$ provides an equivalence of categories between the category of real algebraic bundles over $X$ and the category of real holomorphic bundles over $M$ (see \cite{Atiyah_real_bundles}, page 370). Vector bundles over real algebraic curves are studied in \cite{BHH}, where results similar to ours are obtained. 
Indeed, one may observe that a Klein surface $M/\sigma$, together with its sheaf of dianalytic functions, naturally gives rise to a scheme of finite type over $\mathbb{R}$. The set of real points of that scheme is in bijection with the boundary of $M/\sigma$.\\ The point that we try to make in the rest of the paper is that we can (almost) think of the set of real vector bundles over $M$ as the fixed-point set of an involution of a larger set, morally the set of isomorphism classes of all holomorphic vector bundles. Certainly, a real bundle over $M$ satisfies $\os{\mathcal{E}}\simeq \mathcal{E}$ (the isomorphism being induced by $\widetilde{\sigma}$). But the converse is not true in general (indeed, quaternionic bundles, meaning those which admit a $\mathbb{C}$-antilinear automorphism $\widetilde{\sigma}$ such that $\widetilde{\sigma}^2=-\mathrm{Id}_\mathcal{E}$, are also fixed by that involution), so we cannot just consider the bundles whose isomorphism class is fixed under the involution $\mathcal{E}\mapsto\os{\mathcal{E}}$ if we want the real bundles to stand out. One may observe that restricting the involution to the set of isomorphism classes of stable bundles is not sufficient for our purposes, again because of the possibility that a stable bundle satisfying $\os\mathcal{E}\simeq \mathcal{E}$ might be quaternionic. Instead, we use the differential geometric approach of Atiyah, Bott and Donaldson, which consists in replacing holomorphic vector bundles of a given topological type with unitary connections on a fixed Hermitian bundle with that same topological type. Then, for each choice of a real Hermitian structure $\widetilde{\sigma}$ on that Hermitian bundle, we construct an involution $\alpha_{\sigt}$ on the space of connections, which induces $\beta_{\sigma}:[\mathcal{E}]\mapsto [\os{\mathcal{E}}]$ on isomorphism classes of stable bundles, regardless of the choice of $\widetilde{\sigma}$. The upshot of that construction is that the connections which are fixed by the various $\alpha_{\sigt}$ thus constructed, are exactly those which define the real holomorphic bundles. In a forthcoming paper, we shall show that quaternionic bundles can also be characterised by means of an involution and use this, combined with the real case dealt with in the present work, to derive topological information on $\mathrm{Fix}(\beta_{\sigma})$, such as a bound on the number of connected components of that set. \section{Moduli spaces of semistable bundles over a Riemann surface}\label{moduli_of_ss_bundles} In this section, we recall the basics about moduli varieties of semistable holomorphic vector bundles over a compact Riemann surface $M$ with a fixed compatible Riemannian metric normalised to unit volume. The section contains nothing new and the specialist reader may skip altogether. Although some aspects of what we shall say extend to higher-dimensional compact K\"ahler manifolds, the benefit of working in complex dimension one is twofold. \begin{enumerate} \item The classification of smooth complex vector bundles over a compact Riemann surface $M$ is very simple to state; such a vector bundle $E$ is classified by two integers, its rank $r$ and its degree $d:=\int_M c_1(E) \in \mathbb{Z}$, where $c_1(E)\in H^2(M;\mathbb{Z})$ is the first Chern class of $E$, viewed as a real-valued differential $2$-form on $M$. 
\item When studying holomorphic structures on a smooth complex vector bundle $E$ over $M$, we do not have to worry about integrability conditions; isomorphism classes of holomorphic structures on $E$ are in bijective correspondence with orbits of Dolbeault operators on $E$, as we shall recall shortly.\end{enumerate} References for the present section are sections $7$ and $8$ of \cite{AB}, section $2$ of \cite{Don_NS}, and sections $2.1$ and $2.2$ of \cite{DK}. Although the involution $\sigma$ of $M$ plays no role in this section, we denote $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ the moduli variety of semistable vector bundles of rank $r$ and degree $d$ on $M$, to avoid introducing another piece of notation. \subsection{Holomorphic vector bundles and unitary connections}\label{hol_bundles_and_unitary_connections} \begin{definition}[Dolbeault operator] A \textbf{Dolbeault operator} $D$ on a smooth complex vector bundle $(E\to M)$ is a $\mathbb{C}$-linear map $$D:\Omega^0(M;E) \longrightarrow \Omega^{0,1}(M;E)$$ such that, for any smooth function $f:M\to\mathbb{C}$ and any smooth section $s\in\Omega^0(M;E)$, $$D(fs) = (\overline{\partial}{f})s + f(Ds)\ (Leibniz\ identity).$$ \end{definition} \noindent The $\overline{\partial}$ in the definition above is the usual Cauchy-Riemann operator on $\Omega^0(M;\mathbb{C})$, taking a smooth function $f:M\to\mathbb{C}$ to the $\mathbb{C}$-antilinear part of $df\in\Omega^1(M;\mathbb{C})$. In particular, $\overline{\partial}{f}=0$ if, and only if, $f$ is holomorphic (\textit{i.e.} the Cauchy-Riemann equations are satisfied). A Dolbeault operator on $E$ is also called a (0,1)-connection on $E$. In a local trivialisation $(U_{\tau},\psi_{\tau})$ of the bundle, the Dolbeault operator $D$ takes the form $$(Ds)_{\tau} = \overline{\partial}{s_{\tau}} + B_{\tau} s_{\tau},$$ where $B_{\tau}\in\Omega^{0,1}(U_{\tau};\mathfrak{gl}_r(\C))$ and $\overline{\partial}{}$ acts componentwise on sections of $\psi_{\tau} : E_{\tau}\overset{\simeq}{\longrightarrow} U_{\tau} \times \mathbb{C}^r$. Denote $(U_{\tau},\psi_{\tau})_{\tau\in T}$ a family of local trivialisations of $E$ which covers $M$ and let $$(g_{\tau\tau'}: U_{\tau}\capU_{\tau'} \longrightarrow \mathrm{GL}_r(\mathbb{C}))_{\tau,\tau'}$$ be the corresponding smooth $1$-cocycle of transition maps. With our conventions on $1$-cocycles of transition maps, the family $(s_{\tau})_{\tau\in T}$ satisfies $s_{\tau}=g_{\tau\tau'}s_{\tau'}$, and the family $(B_{\tau})_{\tau\in T}$ satisfies \begin{equation}\label{Dolbeault_compatibility} B_{\tau} = g_{\tau\tau'} B_{\tau'} g_{\tau\tau'}^{-1} - (\overline{\partial}{g_{\tau\tau'}}) g_{\tau\tau'}^{-1}.\end{equation} Conversely, a Dolbeault operator is completely determined by such a family $B:=(B_{\tau})_{\tau\in T}$. In the following, we shall denote $\overline{\partial}_{B}$ the Dolbeault operator corresponding to a family $B=(B_{\tau})_{\tau\in T}$ satisfying (\ref{Dolbeault_compatibility}). It follows from the Leibniz identity that the difference $\overline{\partial}_{B}-\overline{\partial}_{B'}$ of two Dolbeault operators is a $\Omega^0(M;\mathbb{C})$-linear map from $\Omega^0(M;E)$ to $\Omega^{0,1}(M;E)$, so it corresponds to an element of the space $\Omega^{0,1}(M;\mathfrak{gl}(E))$ of (0,1)-forms over $M$ with values in the bundle $\mathfrak{gl}(E)$ of endomorphisms of $E$. Consequently, the set $\mathrm{Dol}(E)$ of all Dolbeault operators on $E$ is an infinite-dimensional complex affine space whose group of translations is $\Omega^{0,1}(M;\mathfrak{gl}(E))$. 
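Explicitly, for any smooth function $f:M\to\mathbb{C}$ and any smooth section $s\in\Omega^0(M;E)$, the Leibniz identity gives $$(\overline{\partial}_{B}-\overline{\partial}_{B'})(fs) = (\overline{\partial}{f})s + f(\overline{\partial}_{B}s) - (\overline{\partial}{f})s - f(\overline{\partial}_{B'}s) = f\big((\overline{\partial}_{B}-\overline{\partial}_{B'})(s)\big),$$ which is the $\Omega^0(M;\mathbb{C})$-linearity invoked above.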
If $u\in\mathrm{Aut}(E)$ is an automorphism of $E$, the operator $$\overline{\partial}_{u(B)}s := u\big(\overline{\partial}_B(u^{-1}s)\big)$$ is a Dolbeault operator on $E$ and this defines an action of $\mathrm{Aut}(E)$ on $\mathrm{Dol}(E)$ (observe that we denoted $\overline{\partial}_{u(B)}$ the result of the action of $u$ on $\overline{\partial}_B$). If we still denote $\overline{\partial}_B$ the Dolbeault operator on $\mathfrak{gl}(E)$ induced by the operator $\overline{\partial}_B$ on $E$ (explicitly, it is the operator defined by the family $(\mathrm{ad}\, B_{\tau})_{\tau\in T}$, which satisfies relation (\ref{Dolbeault_compatibility}) when the cocycle $(\mathrm{Ad}\, g_{\tau\tau'})_{\tau,\tau'}$ is used to represent $\mathfrak{gl}(E)$) and expand the above formula using the Leibniz rule, we obtain the following well-known formula for the above action, $$\overline{\partial}_{u(B)} = \overline{\partial}_B - (\overline{\partial}_{\mathrm{ad} B} u)u^{-1}.$$ To obtain the local expression for this formula, one represents the automorphism $u$ by a family $(u_{\tau}:U_{\tau}\to \mathrm{GL}_r(\mathbb{C}))_{\tau\in T}$ of smooth maps satisfying $u_{\tau} = g_{\tau\tau'} u_{\tau'} g_{\tau\tau'}^{-1}$, from which one shows that $$\big(u(B))_{\tau} = u_{\tau} B_{\tau} u_{\tau}^{-1} - (\overline{\partial}{u_{\tau}}) u_{\tau}^{-1}.$$ Note that the action of $\mathrm{Aut}(E)$ on $\mathrm{Dol}(E)$ is often denoted $$u(B)=B - (\overline{\partial}_B u)u^{-1}$$ in the literature, with a slight abuse of notation (writing simply $B$ for the operator $\overline{\partial}_B$, and $\overline{\partial}_B$ for $\overline{\partial}_{\mathrm{ad}B}$), which simplifies the practical computations. The fundamental result that we need about Dolbeault operators is the following proposition. \begin{proposition}\label{hol_bundles_and_Dolbeault_op} Let $E$ be a smooth complex vector bundles of rank $r$ and degree $d$ over the compact Riemann surface $M$. Then the set $\mathrm{Vect}_M^{hol}(r,d)$ of isomorphism classes of holomorphic vector bundles of rank $r$ and degree $d$ over $M$ is in bijection with the orbit space $\mathrm{Dol}(E)/\mathrm{Aut}(E)$. \end{proposition} \noindent The proof of this result is easy except for the fact that a Dolbeault operator $\overline{\partial}_B$ on $E$ has sufficiently many linearly independent sections of $E$ satisfying $\overline{\partial}_B s=0$ (one needs the sheaf of germs of such sections to be locally free of rank $r$ over the sheaf of holomorphic functions on $M$), for a proof of which we refer to \cite{DK}, subsection 2.2.2, page 50. On higher-dimensional compact K\"ahler manifolds, the proposition remains true if $E$ is a fixed smooth complex vector bundle and one considers the set of holomorphic vector bundles having the same Chern classes as $E$ on the one hand, and integrable Dolbeault operators on $E$ on the other hand.\\ Let us now endow $E$ with a Hermitian metric $h$ and recall the following observation from linear algebra (\cite{AB}, section 8, page 570). 
The map \begin{equation*} \begin{array}{rcl} \Omega^{0,1}(U_{\tau};\mathfrak{gl}_r(\C)) & \longrightarrow & \Omega^1(U_{\tau};\mathfrak{u}_r) \\ B_{\tau} & \longmapsto & A_{\tau} := B_{\tau} - B_{\tau}^* \end{array}\end{equation*} is an isomorphism of \textit{real} vector spaces, whose inverse map sends $A_{\tau}$ to its $\mathbb{C}$-antilinear part, $$A_{\tau}^{0,1}(\cdot) = \frac{A_{\tau}(\cdot) + i A_{\tau}(i\cdot)}{2}.$$ If $(g_{\tau\tau'}: U_{\tau}\cap U_{\tau'} \to \mathbf{U}_r)_{\tau,\tau'}$ is a \textit{unitary} $1$-cocycle representing the Hermitian vector bundle $(E,h)$ and if the family $(B_{\tau})_{\tau\in T}\in\Omega^{0,1}(U_{\tau},\mathfrak{gl}_r(\mathbb{C}))$ satisfies $$B_{\tau} = g_{\tau\tau'} B_{\tau'} g_{\tau\tau'}^{-1} - (\overline{\partial}{g_{\tau\tau'}}) g_{\tau\tau'}^{-1},$$ then $g_{\tau\tau'}^*=g_{\tau\tau'}^{-1}$ and the family $(A_{\tau} = B_{\tau} - B_{\tau}^*)_{\tau\in T}$ satisfies $$A_{\tau} = g_{\tau\tau'} A_{\tau'} g_{\tau\tau'}^{-1} - (dg_{\tau\tau'})g_{\tau\tau'}^{-1}.$$ In other words, if the family $(B_{\tau})_{\tau\in T}$ defines a Dolbeault operator $\overline{\partial}_B$ on $E$, then the family $A:=(A_{\tau})_{\tau\in T}$ defines a \textit{unitary connection} $d_A$ on $(E,h)$, given locally by $$(d_A s)_{\tau} = ds_{\tau} + A_{\tau}s_{\tau}.$$ This sets up a bijection between $\mathrm{Dol}(E)$ and the set $\mathcal{A}(E,h)$ of unitary connections on $(E,h)$, which can be expressed globally as follows. The Hermitian structure on $E$ enables one to define the operator $\partial_{B^*}$, and the maps \begin{equation}\label{real_isom} \begin{array}{rcl} \mathrm{Dol}(E) & \longrightarrow & \mathcal{A}(E,h) \\ \overline{\partial}_{B} & \longmapsto & d_{B-B^*} = \overline{\partial}_{B} + \partial_{B^*} \end{array}, \end{equation} and $$\begin{array}{rcl} \mathcal{A}(E,h) & \longrightarrow & \mathrm{Dol}(E) \\ d_{A} & \longmapsto & \overline{\partial}_{A^{0,1}}\ = d_A^{\, 0,1} \end{array},$$ which are inverse to one another. Now, as $\dim_{\mathbb{R}} M =2$, the Hodge star of $M$ induces a complex structure on $\Omega^1(M;\mathfrak{u}(E,h))$, and the map (\ref{real_isom}) is actually $\mathbb{C}$-linear for that complex structure. This means that $\mathrm{Dol}(E)$ and $\mathcal{A}(E,h)$ are in fact isomorphic as \textit{complex} affine spaces. Moreover, there is a unique $\mathrm{Aut}(E)$-action on $\mathcal{A}(E,h)$ making that isomorphism equivariant. It is given explicitly by the formula $$u(A) = A - (\overline{\partial}_{A^{0,1}}u)u^{-1} + \big((\overline{\partial}_{A^{0,1}}u)u^{-1}\big)^*,$$ for all $u\in \mathrm{Aut}(E)$ and all $A\in\mathcal{A}(E,h)$. This action extends the natural action of the group $\mathcal{G}_{(E,h)}$ of unitary automorphisms of $(E,h)$; the group $\G_{(E,h)}$ is commonly called the unitary gauge group. Indeed, if $u$ is unitary, then $u^*=u^{-1}$, and the above formula becomes $$u(A) = A - (d_A u)u^{-1}$$ (again with the slight abuse of notation that consists in writing $A$ for $d_A$, and $d_A$ for $d_{\mathrm{ad}A}$, which we shall do systematically from now on). Locally, this action is given by the formula $$\big(u(A)\big)_{\tau} = u_{\tau}A_{\tau}u_{\tau}^{-1} - (du_{\tau})u_{\tau}^{-1}.$$ Note that the group $\mathrm{Aut}(E)$ of all complex linear automorphisms of $E$ is commonly denoted $\G^{\, \C}_{(E,h)}$, and called the complex gauge group. We then reach the goal of this subsection, which is to recall the following result.
\begin{proposition} Let $(E,h)$ be a smooth Hermitian bundle of rank $r$ and degree $d$ over the compact Riemann surface $M$. Then the set $\mathrm{Vect}_M^{hol}(r,d)$ of isomorphism classes of holomorphic vector bundles of rank $r$ and degree $d$ over $M$ is in bijection with the orbit space $\mathcal{A}(E,h)/ \G^{\, \C}_{(E,h)}$. \end{proposition} \noindent One may observe that this bijection does not depend on the choice of the metric $h$ on $E$. Indeed, if another metric $h'$ is chosen, then $h'=\varphi h \varphi^*$ for some $\varphi\in \G^{\, \C}_{(E,h)}$, and $A$ is $h$-unitary if, and only if, $\varphi A\varphi^{-1}$ is $h'$-unitary. The bijection between $\mathrm{Dol}(E)$ and $\mathcal{A}(E,h)$, however, does depend on the choice of the metric $h$. On higher-dimensional compact K\"ahler manifolds, a Dolbeault operator $\overline{\partial}_{B}$ is integrable if, and only if, the corresponding unitary connection has type $(1,1)$ curvature, and the proposition above may be generalised by restricting one's attention to such connections. \subsection{The Narasimhan-Seshadri-Donaldson theorem} \subsubsection{Characterisation of the stable bundles} We recall the definition of stable, polystable and semistable bundles, which originates in geometric invariant theory. For an arbitrary, non-zero complex vector bundle $(\mathcal{E}\to M)$, we denote $\mu(\mathcal{E})$ the ratio $\frac{\mathrm{deg}(\mathcal{E})}{\mathrm{rk}(\mathcal{E})}$, called the \textbf{slope} of $\mathcal{E}$. \begin{definition}[Stable, polystable and semistable bundles]\label{def_stable_bundle} A holomorphic vector bundle $\mathcal{E}$ over a compact Riemann surface $M$ is called \textbf{stable} if, for any holomorphic subbundle $\mathcal{F}$ which is neither $\{0\}$ nor $\mathcal{E}$, one has $$\mu(\mathcal{F}) < \mu(\mathcal{E}).$$ $\mathcal{E}$ is called \textbf{polystable} if it is a direct sum of stable bundles of slope $\mu(\mathcal{E})$. Finally, $\mathcal{E}$ is called \textbf{semistable} if, for any holomorphic subbundle $\mathcal{F}$ which is neither $\{0\}$ nor $\mathcal{E}$, one has $$\mu(\mathcal{F}) \leq \mu(\mathcal{E}).$$ \end{definition} \noindent We recall that the condition $\mu(\mathcal{F}) < \mu(\mathcal{E})$ is equivalent to $\mu(\mathcal{E}/\mathcal{F}) > \mu(\mathcal{E})$, and likewise with large inequalities. Evidently, a stable bundle is both polystable and semistable. And as a matter of fact, polystable bundles are semistable. To see this last point, one first needs to observe that if $$0\to \mathcal{E}_1 \to \mathcal{E} \to \mathcal{E}_2\to 0$$ is a short exact sequence of holomorphic vector bundles \textit{having same slope}, then $\mathcal{E}_1,\mathcal{E}_2$ semistable implies $\mathcal{E}$ semistable, again with the same slope. One then concludes by arguing that a polystable bundle is a split extension of semistable bundles with the same slope.\\ Since we have seen in the previous subsection that holomorphic vector bundles of fixed topological type over $M$ correspond bijectively to orbits of unitary connections on a fixed Hermitian bundle with that same topological type, it is natural to look for a characterisation of the stable bundles in terms of the corresponding orbits of unitary connections. This is exactly the content of Donaldson's formulation of a theorem of Narasimhan and Seshadri (see \cite{Don_NS}). 
Before stating that result, we recall that, since $M$ is a compact oriented Riemannian manifold of real dimension $2$, we may identify smooth $2$-forms on $M$ with values in a vector bundle to global sections of that bundle ($0$-forms), using the Hodge star of $M$. In particular, if $F_A\in\Omega^2(M;\mathfrak{u}(E,h))$ denotes the curvature of a unitary connection $A$ on $(E,h)$, then $*F_A$ is an element of $\Omega^0(M;\mathfrak{u}(E,h))$. We also recall that a holomorphic vector bundle is called \textit{indecomposable} if it cannot be written as a proper direct sum. \begin{theorem}[Donaldson, \cite{Don_NS}]\label{charac_stable_bundles} A holomorphic vector bundle $\mathcal{E}$ of rank $r$ and degree $d$ is stable if, and only if, it is indecomposable, and admits a Hermitian metric $h$ and a compatible unitary connection $A$, whose curvature $F_A$ satisfies $$*F_A = i2\pi\frac{d}{r}\ \mathrm{Id}_{\mathcal{E}} \in \Omega^0(M;\mathfrak{u}(\mathcal{E},h)).$$ Such a connection is then unique up to a unitary automorphism of $(\mathcal{E},h)$. \end{theorem} \noindent A holomorphic vector bundle is therefore polystable if, and only if, it admits a unitary connection of the type above (such a connection is called a \textbf{Yang-Mills connection}). We shall often denote $\mu= \frac{d}{r}$ the slope of a holomorphic vector bundle of rank $r$ and degree $d$.\\ Let now $(E,h)$ be a fixed smooth Hermitian vector bundle of rank $r$ and degree $d$ over $M$. The map $$F: \begin{array}{rcl} \mathcal{A}(E,h) & \longrightarrow & \Omega^2(M;\mathfrak{u}(E,h)) \\ A & \longmapsto & F_A:= d_A\circ d_A\end{array}$$ taking a unitary connection to its curvature is equivariant for the action of $\mathcal{G}_{(E,h)}$ on $\mathcal{A}(E,h)$ considered in the previous subsection and the conjugacy action of $\mathcal{G}_{(E,h)}$ on $\Omega^2(M;\mathfrak{u}(E,h))$. The Hodge star $$*: \begin{array}{rcl} \Omega^2(M;\mathfrak{u}(E,h)) &\longrightarrow & \Omega^0(M;\mathfrak{u}(E,h))\\ R & \longmapsto & s\ \mathrm{such\ that}\ R=s\, \mathrm{vol}_M\end{array}$$ is also equivariant for the conjugacy action of the gauge group, so, as $i2\pi \mu\, \mathrm{\mathrm{Id}}_{E}$ is a central element in $\Omega^0(M;\mathfrak{u}(E,h))$, the fibre $(*F)^{-1}(\{i2\pi \mu\, \mathrm{\mathrm{Id}}_{E}\})$ is $\mathcal{G}_{(E,h)}$-invariant. \begin{corollary} The set of gauge equivalence classes of polystable holomorphic vector bundles of rank $r$ and degree $d$ over $M$ is in bijection with $$(*F)^{-1}(\{i2\pi \mu\, \mathrm{\mathrm{Id}}_{E}\}) / \mathcal{G}_{(E,h)}.$$ \end{corollary} \subsubsection{The K\"ahler structure of moduli spaces of semistable bundles} Recall that we have denoted $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ the moduli variety of semistable holomorphic vector bundles of rank $r$ and degree $d$ over $M$. As every semistable bundle of slope $\mu$ admits a Jordan-H\"older filtration whose successive quotients are stable bundles of slope $\mu$ (see for instance \cite{Le_Potier_ENS}, expos\'e 2, theorem I.2, page 33), the associated graded vector bundle is a polystable bundle of rank $r$ and degree $d$, whose isomorphism class, as a graded vector bundle, is independent of the choice of the filtration. Next, by a theorem of Seshadri (\cite{Seshadri}), two semistable holomorphic bundles define the same point in $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ if, and only if, the associated graded bundles are isomorphic. 
This theorem, combined with Donaldson's characterisation of polystable bundles, establishes a bijection between $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ and the set of gauge equivalence classes of polystable bundles of rank $r$ and degree $d$, which we have just expressed as $$(*F)^{-1}(\{i2\pi\frac{d}{r}\ \mathrm{\mathrm{Id}}_E\})/\mathcal{G}_{(E,h)},$$ where $F:\mathcal{A}(E,h)\to\Omega^2(M;\mathfrak{u}(E,h))$ is the curvature and $*$ is the Hodge star of $M$. We may also write this $$\mathcal{M}^{\, r,d}_{(M,\sigma)} = F^{-1}(\{*i2\pi\frac{d}{r}\ \mathrm{\mathrm{Id}}_E\})/\mathcal{G}_{(E,h)}.$$ The $\mathrm{Ad}$-invariant positive definite inner product $(M,N)\mapsto -\mathrm{tr}(MN)$ on $\mathfrak{u}_r$ induces a gauge-invariant Riemannian metric on the bundle $\mathfrak{u}(E,h)$ of anti-Hermitian endomorphisms of $(E,h)$. This in turn induces a $\mathcal{G}_{(E,h)}$-invariant positive definite scalar product on the real vector space $\Omega^0(M;\mathfrak{u}(E,h))$, defined by $$(s,t) \mapsto \int_M -\mathrm{tr}(st)\, \mathrm{vol}_M.$$ Hence a $\mathcal{G}_{(E,h)}$-equivariant isomorphism of real vector spaces $$\begin{array}{rcl}\Omega^0(M;\mathfrak{u}(E,h)) & \longrightarrow & \big(\Omega^0(M;\mathfrak{u}(E,h))\big)^*\\ t & \longmapsto & \left( s\mapsto \int_M -\mathrm{tr}(st)\, \mathrm{vol}_M\right)\end{array}.$$ Composing the Hodge star $*:\Omega^2(M;\mathfrak{u}(E,h)) \to \Omega^0(M;\mathfrak{u}(E,h))$ with this isomorphism gives a $\mathcal{G}_{(E,h)}$-equivariant isomorphism of real vector spaces $$\begin{array}{rcl} \Omega^2(M;\mathfrak{u}(E,h)) & \overset{\simeq}{\longrightarrow} & (\Omega^0(M;\mathfrak{u}(E,h)))^* \\ R & \longmapsto & \left( s \mapsto \int_M -\mathrm{tr}(sR)\right) \end{array},$$ so we may think of the curvature $F$ as a map with values in $(\Omega^0(M;\mathfrak{u}(E,h)))^*$, the dual of the Lie algebra of the gauge group. Atiyah and Bott have shown (\cite{AB}) that $\mathcal{A}(E,h)$ has a K\"ahler structure and that the action of $\mathcal{G}_{(E,h)}$ is a K\"ahler action which admits the curvature as a momentum map. Therefore, the above presentation of $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ gives this moduli space a K\"ahler structure, obtained by performing reduction at level $*i2\pi\frac{d}{r} \, \mathrm{\mathrm{Id}}_E$ on the infinite-dimensional K\"ahler manifold $\mathcal{A}(E,h)$. We shall sometimes denote $$\mathcal{A}(E,h)/\negthickspace /_{*i2\pi\frac{d}{r} \mathrm{\mathrm{Id}}_E} \mathcal{G}_{(E,h)}$$ the quotient $F^{-1}(\{*i2\pi\frac{d}{r}\, \mathrm{\mathrm{Id}}_E\})/\mathcal{G}_{(E,h)}$. Note that we should in fact consider $L^2_1$ connections instead of $C^{\infty}$ ones, with curvature in $L^2$ and gauge transformations in $L^2_2$, in order to make the affine space $\mathcal{A}(E,h)$ a Banach manifold (see \cite{AB}, sections $6$ and $14$, and \cite{Don_NS}, section $2$). We have deliberately omitted this, to lighten the exposition and stress the geometric ideas underlying the construction.\\ The only part of the above theory that we need to make explicit in order to use it later in the paper is the expression of the symplectic form of $\mathcal{A}(E,h)$, which we denote $\omega$. As the space of all unitary connections is an affine space whose group of translations is $\Omega^1(M;\mathfrak{u}(E,h))$, the tangent space at a given point $A$ of $\mathcal{A}(E,h)$ may be canonically identified to $\Omega^1(M;\mathfrak{u}(E,h))$. 
If $a,b\in\Omega^1(M;\mathfrak{u}(E,h))$ are two tangent vectors at $A$, the $2$-form $a\wedge b$ is $\mathfrak{u}(E,h) \otimes \mathfrak{u}(E,h)$-valued. Combining this with the $\mathrm{Ad}$-invariant scalar product $M\otimes N \mapsto -\mathrm{tr}(MN)$ of $\mathfrak{u}_r$, one obtains a real valued $2$-form on $M$, denoted $-\mathrm{tr}(a \wedge b)$. Explicitly, it is the $2$-form on $M$ defined pointwise by $$\big(-\mathrm{tr}(a\wedge b)\big)_x: \begin{array}{rcl} T_x M \times T_x M & \longrightarrow & \mathbb{R} \\ (v_1,v_2) & \longmapsto & -\frac{1}{2} \mathrm{tr}\big( a_x(v_1)b_x(v_2) - a_x(v_2)b_x(v_1)\big)\end{array}.$$ Then, the expression $$\omega_A(a,b) = \int_M -\mathrm{tr}(a \wedge b)$$ defines a $2$-form $\omega$ on $\mathcal{A}(E,h)$, and this $2$-form is symplectic. \section{Real semistable bundles over a Riemann surface}\label{lag_embedding} In this section, we show that, for any anti-holomorphic involution $\sigma$ of $M$, the set $$\mathcal{N}^{r,d}_{\sigma}:=\{[\mathrm{gr}(\mathcal{E})]\in\mathcal{M}^{\, r,d}_{(M,\sigma)} : \mathcal{E}\ \mathrm{is\ semistable\ and\ real}\}$$ of moduli of real semistable bundles of rank $r$ and degree $d$ is a totally real, totally geodesic, Lagrangian submanifold of $\mathcal{M}^{\, r,d}_{(M,\sigma)}$. Our strategy is to show that there is an anti-symplectic, involutive isometry $\beta_{\sigma}$ of $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ such that $\mathcal{N}^{r,d}_{\sigma}$ is a union of connected components of $\mathrm{Fix}(\beta_{\sigma})$. To that end, we use the presentation of the moduli variety $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ as a K\"ahler quotient, $$\mathcal{M}^{\, r,d}_{(M,\sigma)} = \mathcal{A}(E,h)/\negthickspace /_{*i2\pi\frac{d}{r} \mathrm{\mathrm{Id}}_E} \mathcal{G}_{(E,h)},$$ and show that $\beta_{\sigma}$ comes from a family of anti-symplectic, involutive isometries $\alpha_{\sigt}$ of $\mathcal{A}(E,h)$, where $\widetilde{\sigma}$ runs through the set of real Hermitian structures of $(E,h)$.\\ Elements of $\mathcal{N}^{r,d}_{\sigma}$ are, by definition, strong equivalence classes of semistable holomorphic vector bundles of rank $r$ and degree $d$ which contain a real bundle. As a matter of fact, more is true: any bundle in the strong equivalence class of a bundle which is both semistable and real is itself real. Let us show this. By Seshadri's theorem, the strong equivalence class of a semistable holomorphic vector bundle $\mathcal{E}$ is the graded isomorphism class of the graded vector bundle $\mathrm{gr}(\mathcal{E})$ associated to any Jordan-H\"older filtration of $\mathcal{E}$. The successive quotients of that filtration are stable bundles of slope equal to that of $\mathcal{E}$, and $\mathrm{gr}(\mathcal{E})$ is therefore a polystable bundle of slope $\mu(\mathcal{E})$. Assume now that $\mathcal{E}$ is both semistable and real. Then, for any Jordan-H\"older filtration of $\mathcal{E}$, the successive quotients are both stable and real (and have same slope). Indeed, this is a simple consequence of the fact that the kernel and the image of a homomorphism of real bundles (a bundle map which intertwines the real structures) have naturally induced real structures, and this in fact turns the category of real semistable bundles of slope $\mu$ into an Abelian category which is stable by extensions (a strict subcategory of the Abelian category of semistable bundles of slope $\mu$, compare \cite{Le_Potier_ENS}). 
As a consequence, the graded bundle associated to a real semistable bundle is a direct sum of bundles which are both stable and real, and any bundle $\mathcal{E}$ such that $\mathrm{gr}(\mathcal{E})$ is a direct sum of bundles which are both stable and real is itself real. The next result then shows that points of $\mathcal{N}^{r,d}_{\sigma}$ are precisely graded real isomorphism classes of such bundles. This in turn suggests that points of $\mathcal{N}^{r,d}_{\sigma}$ should, perhaps, be thought of as moduli of real semistable bundles over $(M,\sigma)$. \begin{proposition}\label{real_moduli} Let $(\mathcal{E},\widetilde{\sigma})$ and $(\mathcal{E}',\widetilde{\sigma}')$ be two holomorphic bundles which are both semi\-stable and real, and assume that $\mathrm{gr}(\mathcal{E})$ is isomorphic to $\mathrm{gr}(\mathcal{E}')$ as a graded vector bundle. Then $\mathrm{gr}(\mathcal{E})$ is isomorphic to $\mathrm{gr}(\mathcal{E}')$ as a graded real vector bundle. \end{proposition} \begin{proof} One first proves the result under the additional assumption that $\mathcal{E}$ and $\mathcal{E}'$ are both stable, and conclude by induction on the length of the Jordan-H\"older filtration. For a stable bundle $\mathcal{E}$, one has $\mathrm{gr}(\mathcal{E})\simeq \mathcal{E}$, so the assumption of the proposition is that $\mathcal{E}\simeq \mathcal{E}'$. Replacing $\widetilde{\sigma}'$ with $\varphi\widetilde{\sigma}'\varphi^{-1}$ if necessary, we may further assume that $\widetilde{\sigma}$ and $\widetilde{\sigma}'$ are two distinct real structures on the same vector bundle $\mathcal{E}$. Then $\widetilde{\sigma}\sigt'$ is $\mathbb{C}$-linear and, as $\mathcal{E}$ is stable, this implies that $\widetilde{\sigma}\sigt'=\lambda\in\mathbb{C}^*$. This in turn implies that $\widetilde{\sigma} = |\lambda|^2 \widetilde{\sigma}$, so $\lambda = e^{i\theta}$ for some $\theta \in \mathbb{R}$, whence one obtains $$\widetilde{\sigma} = e^{i\theta} \widetilde{\sigma}' = e^{i\frac{\theta}{2}} \widetilde{\sigma}' (e^{-i\frac{\theta}{2}}\cdot),$$ showing that $\widetilde{\sigma}$ and $\widetilde{\sigma}'$ are conjugate by an automorphism of $\mathcal{E}$. \end{proof} \noindent As a final remark on $\mathcal{N}^{r,d}_{\sigma}$, we observe that the functor $\mathcal{E} \mapsto \os{\mathcal{E}}$ sends a Jordan-H\"older filtration of $\mathcal{E}$ to a Jordan-H\"older filtration of $\os{\mathcal{E}}$, so it induces an automorphism $$\beta_{\sigma}: [\mathrm{gr}(\mathcal{E})] \mapsto [\mathrm{gr}(\os{\mathcal{E}})]$$ of the moduli variety $\mathcal{M}^{\, r,d}_{(M,\sigma)}$, which is involutive and fixes $\mathcal{N}^{r,d}_{\sigma}$ pointwise. We now set out to prove that this involution is an anti-symplectic isometry of $\mathcal{M}^{\, r,d}_{(M,\sigma)}$, and that $\mathcal{N}^{r,d}_{\sigma}$ is a union of connected components of $\mathrm{Fix}(\beta_{\sigma})$. Note that, since the tangent space to $\mathcal{M}^{\, r,d}_{(M,\sigma)}$ at a given smooth point $[\mathrm{gr}(\mathcal{E})]$ may be identified with $H^1(M;\mathrm{End}(\mathcal{E}))$, one sees that the involution $\beta_{\sigma}$ is anti-holomorphic (for the tangent map takes an $\mathrm{End}(\mathcal{E})$-valued holomorphic $1$-form $\nu$ to $\os{\nu}$). 
We shall show: \begin{enumerate} \item that there is an involution $\alpha_{\sigt}$ associated to each choice of a real Hermitian structure $\widetilde{\sigma}$ on $(E,h)$, such that a unitary connection $A\in\mathcal{A}(E,h)$ defines a real holomorphic structure on $(E,h,\widetilde{\sigma})$ if, and only if, $\alpha_{\sigt}(A)=A$. \item that $\alpha_{\sigt}$ is an anti-symplectic isometry of $\mathcal{A}(E,h)$ which induces $\beta_{\sigma}$ on the K\"ahler quotient $$F^{-1}\big(\{*i2\pi\frac{d}{r}\, \mathrm{Id}_E\}\big)/ \G_{(E,h)} = \mathcal{M}^{\, r,d}_{(M,\sigma)},$$ confirming the fact that the latter involution is an anti-symplectic isometry. \item that we can form a so-called \textbf{real quotient}, $$\mathcal{L}^{r,d}_{\sigt} := \Big(F^{-1}\big(\{*i2\pi\frac{d}{r}\, \mathrm{Id}_E\}\big)\Big)^{\alpha_{\sigt}} / \unitarygaugegp^{\sigt},$$ which embeds onto a union of connected component of $\mathcal{N}^{r,d}_{\sigma}\subset \mathcal{M}^{\, r,d}_{(M,\sigma)}$. As $\alpha_{\sigt}$ induces $\beta_{\sigma}$, the real quotient also embeds onto a union of connected components of $\mathrm{Fix}(\beta_{\sigma})$, and our result will be proved. \end{enumerate} \subsection{Real unitary connections} Let $(E,h,\widetilde{\sigma})$ be a fixed real Hermitian bundle (meaning that $\widetilde{\sigma}$ is a $\mathbb{C}$-antilinear isometry which covers $\sigma$ and squares to $\mathrm{Id}_E$). The choice of $\widetilde{\sigma}$ induces a canonical isomorphism $\varphi: \os{E} \overset{\simeq}{\longrightarrow} E$, as well as so-called \textbf{real invariants} which classify $(E,h,\widetilde{\sigma})$ up to isomorphism of real Hermitian bundles. We denote $M^{\sigma}$ the fixed-point set of $\sigma:M\to M$, and $g$ the genus of $M$. \begin{proposition}[\cite{BHH}, Propositions 4.1 and 4.2]\label{real_invariants}One has: \begin{enumerate} \item if $M^{\sigma} = \emptyset$, then real Hermitian bundles are topologically classified by their rank and degree. It is necessary and sufficient for a real Hermitian bundle of rank $r$ and degree $d$ to exist that $d$ should be even. \item if $M^{\sigma} \not= \emptyset$, then $(E^{\sigt} \to M^{\sigma})$ is a real vector bundle in the ordinary sense, over the disjoint union $M^{\sigma} = \mathcal{C}_1 \sqcup \ldots \sqcup \mathcal{C}_k$ of at most $(g+1)$ circles. Denoting $w^{(j)} := w_1(E^{\sigt}_j) \in H^1(S^1; \mathbb{Z} / 2\mathbb{Z}) \simeq \mathbb{Z} / 2\mathbb{Z}$ the first Stiefel-Whitney class of $E^{\sigt}$ restricted to $\mathcal{C}_j$, real Hermitian bundles over $M$ are topologically classified by their rank, their degree, and the sequence $(w^{(1)}, \ldots, w^{(k)})$. It is necessary and sufficient for a real Hermitian bundle with given invariants $r$, $d$ and $(w^{(1)}, \ldots, w^{(k)})$ to exist that $$w^{(1)} + \cdots + w^{(k)} \equiv d\ (\mod 2).$$ \end{enumerate} \end{proposition} \noindent The choice of a real structure $\widetilde{\sigma}$ on $(E,h)$ induces real structures on the complex vector space of smooth global sections of $E$, \begin{eqnarray*} \Omega^0(M;E) & \longrightarrow & \Omega^0(M;E) \\ s & \longmapsto & \ov{s}: x \mapsto \ov{s(\ov{x})}, \end{eqnarray*} \noindent (the fixed-points of which are the real sections of $E$, defined in section \ref{intro}) and, more generally, on the space of $E$-valued $k$-forms on $M$, \begin{eqnarray*} \Omega^k(M;E) & \longrightarrow & \Omega^k(M;E) \\ \eta & \longmapsto & \ov{\eta}: v\mapsto \ov{\eta_{\ov{x}} (\ov{v})}. 
\end{eqnarray*} \begin{definition}[Real unitary connections]\label{real_unitary_connection_def} A unitary connection $$d_A : \Omega^0(M;E) \longrightarrow \Omega^1(M;E)$$ is called \textbf{real} if it commutes with the real structures of $\Omega^0(M;E)$ and $\Omega^1(M;E)$: $$d_A\ov{s} = \ov{d_A s}\ \mathrm{for\ all\ } s\in \Omega^0(M;E).$$ \end{definition} \noindent A similar definition is possible on Dolbeault operators: the complex vector space $\Omega^{0,1}(M;E)$ is invariant under the real structure of $\Omega^1(M;E)$, and a Dolbeault operator $$\overline{\partial}_B: \Omega^0(M;E) \longrightarrow \Omega^{0,1}(M;E)$$ is called real if it commutes with the real structures. This definition is compatible with the isomorphism between $\mathrm{Dol}(E)$ and $\mathcal{A}(E,h)$ in the sense that $d_{A}^{\, 0,1}$ is real if, and only if, $d_A$ is real. In particular, $\ker d_A^{\, 0,1}$ has an induced real structure, and the holomorphic bundle $(E,d_A^{\, 0,1})$ is a real holomorphic bundle in the sense of Atiyah. Conversely, the Dolbeault operator of a real holomorphic bundle commutes with the real structures of $\Omega^0(M;E)$ and $\Omega^{0,1}(M;E)$, so the unitary connection associated to that operator is a real unitary connection in the sense of definition \ref{real_unitary_connection_def}.\\ We now come to the construction of the involution $\alpha_{\sigt}$ of $\mathcal{A}(E,h)$. To define it, we make use of the canonical isomorphism $$\phi:\os{E} \overset{\simeq}{\longrightarrow} E$$ determined by $\widetilde{\sigma}$, and define $\alpha_{\sigt}(A)$ to be the unitary connection on $E$ such that, for all $s\in\Omega^0(M;E)$, $$d_{\alpha_{\sigt}(A)} s = \phi \big(d_{\os{A}} (\phi^{-1} s)\big).$$ To make this more explicit, let us choose a unitary $1$-cocycle $$(g_{\tau\tau'}:U_{\tau} \cap U_{\tau'} \to \mathbf{U}_r)_{\tau,\tau'}$$ representing $E$ and subordinate to a covering $(U_{\tau})_{\tau\in T}$ by open sets which satisfy $\sigma(U_{\tau})=U_{\tau}$. Then $\phi$ may be represented by a $0$-cocycle $(\lambda_{\tau}:U_{\tau}\to\mathbf{U}_r)_{\tau\in T}$ such that $$\lambda_{\tau} \os{g_{\tau\tau'}}\lambda_{\tau'}^{-1} = g_{\tau\tau'} \quad \mathrm{and}\quad \os{\lambda_{\tau}}=\lambda_{\tau}^{-1}.$$ The first condition expresses the fact that $\phi$ is an isomorphism between $\os{E}$ and $E$, and the second condition expresses the fact that $\widetilde{\sigma}^2 = \mathrm{Id}_E$. It can then easily be checked that, if $(A_{\tau}\in \Omega^1(U_{\tau};\mathfrak{u}_r))_{\tau\in T}$ is a framed unitary connection on $E$, then so is $$\big(\lambda_{\tau}\os{A_{\tau}}\lambda_{\tau}^{-1} - (d\lambda_{\tau})\lambda_{\tau}^{-1}\big)_{\tau\in T},$$ and the connection it defines is $\alpha_{\sigt}(A)$. In the following, we shall simply denote $$\alpha_{\sigt}: \begin{array}{rcl} \mathcal{A}(E,h) & \longrightarrow & \mathcal{A}(E,h) \\ A & \longmapsto & \ov{A} \end{array}.$$ This suggestive notation is justified by the following result. \begin{proposition}\label{charac_real_connections} The unitary connection $$d_A: \Omega^0(M;E) \longrightarrow \Omega^1(M;E)$$ is real if, and only if, $\ov{A}=A.$ \end{proposition} \begin{proof} The key observation is that $$d_{\ov{A}}\, s = \ov{d_A \ov{s}},$$ which follows from the definition of $d_{\ov{A}}$. Since, by definition, the connection $d_A$ is real if, and only if, $\ov{d_A \ov{s}}= d_A s$ for all $s$, the proposition is proved. \end{proof} \noindent The result may also be proved by computing in a local frame of the type described above.
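\noindent As an illustration of definition \ref{real_unitary_connection_def} in the simplest case, let $E=M\times\mathbb{C}$ be the trivial Hermitian line bundle, equipped with the real structure $\widetilde{\sigma}(x,z)=(\sigma(x),\ov{z})$. Then $\ov{s}(x)=\ov{s(\sigma(x))}$ for a function $s:M\to\mathbb{C}$, and one checks directly that the unitary connection $d+A$, with $A\in\Omega^1(M;i\mathbb{R})$, is real if, and only if, $\ov{\sigma^*A}=A$, which, since $A$ takes values in $i\mathbb{R}$, amounts to $\sigma^*A=-A$.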
If we choose a different real Hermitian structure $\widetilde{\sigma}'$ on $(E,h)$, we obtain a new involution $\alpha_{\widetilde{\sigma}'}$, whose fixed points also define real holomorphic bundles, but with possibly different real invariants. Thus, letting $\widetilde{\sigma}$ run through the set of all possible real Hermitian structures, we obtain, as fixed-point sets of the various involutions thus defined, the unitary connections that define all possible real holomorphic bundles of rank $r$ and degree $d$. As a matter of fact, it suffices to let $\widetilde{\sigma}$ run through the set of all topological types of real Hermitian structures of rank $r$ and degree $d$, and proposition \ref{real_invariants} shows that these come in finite number. \subsection{Properties of the involutions} We now study the properties of $\alpha_{\sigt}$ for a fixed $\widetilde{\sigma}$. We start by observing that the choice of $\widetilde{\sigma}$ induces an involution of the unitary gauge group $\G_{(E,h)}$, and of the vector space $\Omega^2(M;\mathfrak{u}(E,h))$, which may be viewed as the dual of the Lie algebra of the unitary gauge group. These involutions are \begin{eqnarray*} \G_{(E,h)} & \longrightarrow & \G_{(E,h)} \\ u & \longmapsto & \phi\, \os{u}\, \phi^{-1} \end{eqnarray*} \noindent and \begin{eqnarray*} \Omega^2(M;\mathfrak{u}(E,h)) & \longrightarrow & \Omega^2(M;\mathfrak{u}(E,h)) \\ R & \longmapsto & \phi\, \os{R}\, \phi^{-1}. \end{eqnarray*} \noindent We shall simply denote them $u\mapsto \ov{u}$ and $R\mapsto\ov{R}$. \begin{proposition} One has the following compatibility relations: \begin{enumerate} \item between $\alpha_{\sigt}$ and the gauge action: $$\ov{u(A)} = \ov{u}(\ov{A}).$$ \item between $\alpha_{\sigt}$ and the momentum map of the gauge action: $$F_{\ov{A}} = \ov{F_A}.$$ \end{enumerate} \noindent Moreover, $\alpha_{\sigt}$ is an anti-symplectic involutive isometry of $\mathcal{A}(E,h)$. \end{proposition} \noindent Note that $\ov{F_A}$ is, by definition, the operator from $\Omega^0(M;E)$ to $\Omega^2(M;E)$ that takes a section $s$ to the $2$-form $\ov{F_A\ov{s}}$. We could have defined the operator $\ov{d_A}$ in a similar way, and proposition \ref{charac_real_connections} then shows that $d_A$ is real if, and only if, $\ov{d_A} = d_A$. \begin{proof} First, as $\os{\phi} = \phi^{-1}$, $\alpha_{\sigt}$ squares to the identity. Second, observe that there is an involution $$\begin{array}{rcl} \Omega^1(M;\mathfrak{u}(E,h)) & \longrightarrow & \Omega^1(M;\mathfrak{u}(E,h)) \\ a & \longmapsto & \ov{a} := \phi\, \os{a}\, \phi^{-1} \end{array}.$$ \noindent Then, for all $A\in\mathcal{A}(E,h)$ and all $a,b\in T_A \mathcal{A}(E,h) = \Omega^1(M;\mathfrak{u}(E,h))$, \begin{eqnarray*} (\alpha_{\sigt}^*\omega)_A (a,b) & = & \int_M -\mathrm{tr}(\ov{a} \wedge \ov{b}) \\ & = & \int_M \sigma^*\big(-\mathrm{tr}(a \wedge b)\big) \\ & = & -\int_M -\mathrm{tr}(a \wedge b)\\ & = & - \omega_A (a,b). \end{eqnarray*} \noindent As a consequence, to show that $\alpha_{\sigt}$ is an isometry, it suffices to show that $\alpha_{\sigt}$ is $\mathbb{C}$-antilinear with respect to the compatible complex structure of $\mathcal{A}(E,h)$, which is given by the Hodge star. This last point follows from the fact that the tangent map to $\alpha_{\sigt}$ is $$a \mapsto \ov{a}\ \mathrm{on}\ \Omega^1(M;\mathfrak{u}(E,h)),$$ which is clearly $\mathbb{C}$-antilinear. Finally, let us prove that we have the asserted compatibility relations.
\begin{enumerate} \item One has $$\ov{u(A)} = \ov{A - (d_A u)u^{-1}} = \ov{A} - (\ov{d_A u})\ov{u^{-1}} = \ov{A} - (d_{\ov{A}} \ov{u}) \ov{u}^{-1} = \ov{u}(\ov{A}).$$ \item Similarly, for all $s\in\Omega^0(M;E)$, $$ F_{\ov{A}} s = d_{\ov{A}} (d_{\ov{A}} s) = d_{\ov{A}} \ov{(d_{A} \ov{s})} = \ov{d_{A} (d_A \ov{s})} = \ov{F_A \ov{s}},$$ hence $F_{\ov{A}} = \ov{F_A}$. \end{enumerate} \end{proof} \subsection{Embedding the real quotients} For a given $\widetilde{\sigma}$ on $(E,h)$, the elements of $\mathrm{Fix}(\alpha_{\sigt})$ are the connections that define the real holomorphic structures on $E$ whose real invariants are those determined by $\widetilde{\sigma}$. Let us denote $\unitarygaugegp^{\sigt}$ the subgroup of $\G_{(E,h)}$ consisting of fixed points of the automorphism $u \mapsto \ov{u}$, and call it the \textbf{real gauge group}. Note that it is precisely the group of automorphisms of the real Hermitian bundle $(E,h,\widetilde{\sigma})$. The compatibility relation $\ov{u(A)} = \ov{u}(\ov{A})$ shows that the real gauge group acts on the space $\mathrm{Fix}(\alpha_{\sigt})$ of real connections. We then observe that $*i2\pi\frac{d}{r}\mathrm{Id}_E$ is fixed under the involution $R\mapsto \ov{R}$ of $\Omega^2(M;\mathfrak{u}(E,h))$. Indeed, \begin{eqnarray*} \phi\, \os{(i2\pi\frac{d}{r}\mathrm{Id}_E \mathrm{vol}_M)}\, \phi^{-1} & = & \left( \phi (-i2\pi\frac{d}{r} \mathrm{Id}_{\os{E}}) \phi^{-1} \right) \big(\sigma^* \mathrm{vol}_M\big) \\ & = & (- i2\pi\frac{d}{r}\mathrm{Id}_E)\, (- \mathrm{vol}_M) \\ & = & i2\pi\frac{d}{r}\mathrm{Id}_E \mathrm{vol}_M, \end{eqnarray*} and so $F^{-1}\big(\{*i2\pi\frac{d}{r}\, \mathrm{Id}_E\}\big)$ is invariant under $\alpha_{\sigt}$. Consequently, the following quotient, $$\mathcal{L}^{r,d}_{\sigt}=(F^{-1}\big(\{*i2\pi\frac{d}{r}\, \mathrm{Id}_E\}\big))^{\alpha_{\sigt}} / \unitarygaugegp^{\sigt},$$ is a well-defined object. By Donaldson's theorem and proposition \ref{charac_real_connections}, its elements are real gauge equivalence classes of holomorphic bundles of rank $r$ and degree $d$ which are both polystable and real. The compatibility of $\alpha_{\sigt}$ with the gauge action and the momentum map of that action also shows that $\alpha_{\sigt}$ induces an involution of the quotient $$\mathcal{M}^{\, r,d}_{(M,\sigma)} = F^{-1}\big(\{*i2\pi\frac{d}{r}\, \mathrm{Id}_E\}\big) / \G_{(E,h)},$$ sending the gauge equivalence class of a connection $A$ to that of $\ov{A}$. In other words, the involution induced by $\alpha_{\sigt}$ is $$\beta_{\sigma}: [\mathrm{gr}(\mathcal{E})] \mapsto [\mathrm{gr}(\os{\mathcal{E}})],$$ \textit{regardless of the choice of the Hermitian real structure} $\widetilde{\sigma}$ \textit{on} $(E,h)$. In particular, $\beta_{\sigma}$ is an anti-symplectic, involutive isometry of $\mathcal{M}^{\, r,d}_{(M,\sigma)}$. Evidently, there is a map $$\mathcal{L}^{r,d}_{\sigt} \longrightarrow \mathcal{N}^{r,d}_{\sigma} \subset \mathrm{Fix}(\beta_{\sigma}),$$ which we can think of as the map that forgets the real structure of a holomorphic bundle which is both polystable and real. Observe that the real quotient $\mathcal{L}^{r,d}_{\sigt}$ has dimension half the dimension of the K\"ahler quotient $\mathcal{M}^{\, r,d}_{(M,\sigma)}$, as does $\mathrm{Fix}(\beta_{\sigma})$. Therefore, to prove that $\mathcal{N}^{r,d}_{\sigma}$ is a union of connected components of $\mathrm{Fix}(\beta_{\sigma})$, it suffices to prove that the above mapping is a closed embedding of $\mathcal{L}^{r,d}_{\sigt}$ into $\mathcal{N}^{r,d}_{\sigma}$.
\begin{proposition} Let $A$ and $A'$ be two real, irreducible, Yang-Mills connections which lie in the same $\G_{(E,h)}$-orbit. Then they lie in the same $\unitarygaugegp^{\sigt}$-orbit. \end{proposition} \noindent We recall that an irreducible connection is a unitary connection that defines an \textit{indecomposable} holomorphic structure on $(E,h)$. This is equivalent to asking that its stabiliser, \textit{in} $\G^{\, \C}_{(E,h)}$, should be isomorphic to $\mathbb{C}^*$. \begin{proof} The proof is similar to that of proposition \ref{real_moduli}. Assume that $A'=u(A)$ for some $u\in\G_{(E,h)}$. Then $$u(A)=A'=\ov{A'}=\ov{u(A)}=\ov{u}(\ov{A})=\ov{u} (A).$$ As $A$ is both Yang-Mills and irreducible, its stabiliser is, by Donaldson's theorem, contained in $S^1$. This implies that $u^{-1}\ov{u} = e^{i\theta}$ for some $\theta\in\mathbb{R}$. Put $v=e^{i\frac{\theta}{2}} u$. Then $v(A)=u(A)=A'$, and $\ov{v}=e^{-i\frac{\theta}{2}} \ov{u} = e^{-i\frac{\theta}{2}} e^{i\theta} u = e^{i\frac{\theta}{2}} u = v$, so $v\in\unitarygaugegp^{\sigt}$. \end{proof} \begin{corollary} The map $$\mathcal{L}^{r,d}_{\sigt}\longrightarrow\mathcal{N}^{r,d}_{\sigma},$$ which takes the real gauge orbit of a real Yang-Mills connection to its unitary gauge orbit, is injective. \end{corollary} \begin{proof} Recall that a Yang-Mills connection is an element of $F^{-1}\big(\{*i2\pi\frac{d}{r}\, \mathrm{Id}_E\}\big)$, and that it is a direct sum of irreducible Yang-Mills connections. If the connection is real, then so are its irreducible components and the corollary follows by applying the proposition to those irreducible components. \end{proof} \noindent We may now state the conclusion to this paper. \begin{theorem}\label{lag_submanifold} The set $$\mathcal{N}^{r,d}_{\sigma} = \{ [\mathrm{gr}(\mathcal{E})]\in\mathcal{M}^{\, r,d}_{(M,\sigma)} : \mathcal{E}\ \mathrm{is\ semistable\ and\ real}\}$$ of moduli of holomorphic bundles of rank $r$ and degree $d$ which are both semistable and real is a totally real, totally geodesic, Lagrangian submanifold of $\mathcal{M}^{\, r,d}_{(M,\sigma)}$. \end{theorem} \begin{ack} This research was carried out at the University of Los Andes in Bogot\'a, and at the IHES in Bures-sur-Yvette, over the second half of 2009. I thank both these institutions for their hospitality. I would also like to thank Olivier Guichard, Nan Kuo Ho, Melissa Liu, and Richard Wentworth, for discussing the constructions of the paper with me and for asking challenging questions. Finally, I thank Thomas Baird for his comments on an early version, and the referee for comments which have helped improve the general presentation of the paper. \end{ack}
\section{Definitions and notations.} A graph is \emph{outerplanar} if it can be embedded in the plane without crossing edges, in such a way that all the vertices are on the boundary of the exterior region. An \emph{incidence} of a simple graph $G$ is a pair $(v, vw)$ of an edge $vw$ and one of its vertices. Two incidences $(v,vw)$ and $(\hat{v}, \hat{v}\hat{w})$ are \emph{adjacent} if $v=\hat{v}$, or $w=\hat{v}$ or $v=\hat{w}$. Following Wang, Ma, Xu, and Yan, we define \emph{$(k,l)$-incidence colorings} to be proper colorings of the incidences of a given graph $G$ with at most $k$ colors such that for any vertex $v$ of $G$ the number of colors used in coloring all incidences $(u, uv)$ is at most $l$. This notion also appears in \cite{H}. The maximum degree of a vertex in $G$ is denoted by $\Delta$. Finally, the neighbourhood $N(v)$ of a vertex $v$ is the set of all vertices adjacent to $v$ in $G$. \section{The proof.} \begin{thm} Any outerplanar graph $G$ has a $(\Delta+2,2)$-incidence coloring. \end{thm} \begin{proof} It suffices to prove the theorem for connected graphs. We will need the following lemma. \begin{lemma} For every connected simple outerplanar graph $G$ at least one of the following holds: Case 1: $G$ has a vertex of degree 1. Case 2: $G$ has two adjacent vertices of degree 2. Case 3: $G$ has a vertex $u$ of degree 2 with $N(u)=(v,w)$ and $vw \in G$. Case 4: $G$ contains a vertex $u$ of degree 2 with $G-u$ disconnected. \end{lemma} The proof is based on the proof of Proposition 7.1.15 in (\cite{W}, p.254). \begin{proof} Suppose $G$ has no vertex of degree 1. The following procedure exhibits $G$ as a subgraph of an outerplanar graph $H$ such that the boundary of the unbounded face of $H$ is a cycle, i.e. a 2-connected outerplanar graph: if the boundary of $G$ is not a cycle then it is a walk that visits some vertex $u$ twice. If $\ldots, v,u,w \ldots$ is such a visit we add the edge $vw$. We continue in this way until we get to $H$. Now the weak dual of $H$ is a tree and its leaves correspond to faces with exactly one internal edge. Take one such face $F$ with the internal edge $e=ab$. Case A) There are at least 4 edges in the boundary of $F$. Then there are 2 adjacent vertices $u,v$ on the boundary of $F$ different from $a, b$. Both of these are of degree 2 in $H$, so of degree at most $2$ in $G$. Since $G$ is connected and has no degree 1 vertices, they are both of degree 2. This is Case 2 of the lemma. Case B) There are 3 edges in the boundary of $F$. Denote the vertex not on the edge $e$ by $u$. Again $u$ is of degree 2 in $H$, hence also in $G$. If $e$ is in $G$ we are in Case 3 of the lemma. If $e$ is not in $G$ then it was added in passing from $G$ to $H$, which means that $u$ was traversed twice in the walk of the unbounded face of $G$. Then $G-u$ is disconnected, and we are in Case 4. \end{proof} We shall now prove the theorem by induction on the order of $G$. If $\Delta =2$ it is obvious, so we assume $\Delta \geq 3$. Note that the case $\Delta=3$ follows from \cite{M}, but the resulting simplification in the proof is minor, and we prefer to keep the argument self-contained. We now have four cases, corresponding to the cases in the lemma: Case 1: The graph $G$ has a vertex $u$ of degree $1$. Let us denote the vertex adjacent to $u$ by $v$. Then $G^*= G - u$ is an outerplanar graph of smaller order and maximum degree at most $\Delta$. Hence by the induction hypothesis $G^*$ can be $(\Delta+2,2)$-incidence colored by a coloring $\sigma^*$. We extend it to a coloring $\sigma$ of $G$.
The degree of $v$ in $G^*$ is at most $\Delta-1$, so there are at most $\Delta-1$ colors used by incidences $(v, vw)$ outgoing from $v$, and at most $2$ used by the incidences $(w, wv)$ incoming into $v$. Hence there is at least one color left to color $(v, vu)$. The incidence $(u,uv)$ can be colored by one of the colors incoming into $v$. Case 2: The graph $G$ has two adjacent vertices $u, v$ of degree $2$. Denote the other vertex adjacent to $u$ by $w$, the one adjacent to $v$ by $x$. Consider $G^* = G-u$. Again, $G^*$ is outerplanar, has smaller order and maximum degree at most $\Delta$ and so can be $(\Delta+2,2)$-incidence colored by a coloring $\sigma^*$. The degree of $w$ in $G^*$ is at most $\Delta-1$, so there is at least one color $\alpha$ available to color $(w, wu)$. One of the incoming colors of $w$ can be used to color $(u, uw)$. Now we need to color $(u,uv)$ and $(v, vu)$. There are at most 4 prohibited colors and at least 5 available (as $\Delta \geq 3$). If the color of $(w,wu)$ or $(u ,uw)$ is the same as the color of $(x, xv)$ then there are at most 3 prohibited colors, and we can use 2 remaining ones to finish the coloring. If all $(w,wu)$, $(u ,uw)$ and $(x, xv)$ have distinct colors, we can use the color of $(x, xv)$ to color $(u, uv)$, and have a color left to finish coloring $(v,vu)$. The resulting coloring is in fact a $(\Delta+2, 2)$ coloring. Case 3: The graph $G$ has a vertex $u$ of degree $2$ with $N(u)=(v,w)$ and $vw \in G$. Consider $G^* = G-u$. Again, $G^*$ is outerplanar, has smaller order and maximum degree at most $\Delta$ and so can be $(\Delta+2,2)$-incidence colored by a coloring $\sigma^*$. Suppose $(v, vw)$ is colored by color $\alpha$ and $(w, wv)$ by color $\beta$. We now assign color $\alpha$ to $(u, uw)$ and color $\beta$ to $(u, uv)$. This does not produce any conflicts since $\alpha$ already was an incoming color for $w$ and $\beta$ for $v$, and $\alpha \neq \beta$. Finally, the vertex $v$ has degree at most $\Delta -1$ in $G^*$ so there is at least one color $\gamma$, $\gamma \neq \alpha, \beta$, that can be used to color $(v, vu)$. Similarly, there is a color $\delta$, $\delta \neq \alpha, \beta$, that can be used to color $(w,wu)$ (it is possible that $\delta=\gamma$). This produces a $(\Delta+2, 2)$-incidence coloring of $G$. Case 4: The graph $G$ has a vertex $u$ of degree $2$ such that $G-u$ is disconnected. Consider $G^* = G-u$. Again $G^*$ is outerplanar, of smaller order and maximal degree at most $\Delta$, hence $(\Delta+2, 2)$ colorable. Denote $N(u)=(v,w)$. Let a $(\Delta+2, 2)$ coloring of the component of $G^*$ containing $v$ be $\sigma_1$ and a $(\Delta+2, 2)$ coloring of the component of $G^*$ containing $w$ be $\sigma_2$. Other components of $G-u$ are components of $G$; they can be $(\Delta+2,2)$ colored and left unmodified. Since the degrees of $v$ and $w$ are at most $\Delta-1$ there exists a way to color the incidences $(v,vu)$ by $\alpha$ and $(w, wu)$ by $\beta$, and then to assign one of the colors $\gamma$ incoming to $v$ to the incidence $(u, uv)$ and one of the colors $\delta$ incoming to $w$ to the incidence $(u, uw)$. The problem is that while $\alpha \neq \gamma$ and $\beta \neq \delta$ there may be other equalities, so we get adjacent incidences at $u$ colored in the same way. However, the set of colors has at least 4 elements. Hence for any colors $\beta, \delta$ there exists a permutation of the colors sending $\beta, \delta$ to colors different from $\alpha, \gamma$.
Composing $\sigma_2$ (together with the colorings of $(w, wu)$ and $(u, uw)$) with this permutation gives a $(\Delta+2, 2)$ coloring of $G$. This completes the proof. \end{proof} \section{Questions on the incidence coloring of planar and higher-genus graphs.} Even though not every graph is $(\Delta+2)$-incidence colorable (cf. \cite{G}), the counterexamples known to me are not planar. The question of whether planar graphs are $(\Delta+2)$-incidence colorable is unsolved. The bound of $\Delta+7$ was obtained in \cite{H}. More generally, in the same paper it is shown that any $k$-degenerate graph has a $(\Delta+2k-1, k)$ incidence coloring. Any graph of positive genus $g$ has a vertex of degree at most $d= \frac{1}{2} \left(7+\sqrt{1+48g}\right)$, and hence is $d$-degenerate, producing a bound of $\Delta+6+\sqrt{1+48g}$ on the incidence coloring number. Planar graphs are 5-degenerate, and outerplanar graphs are 2-degenerate, so the resulting bounds of $\Delta+9$ and $\Delta+3$, respectively, are not optimal. The higher-genus bounds are probably not tight either.
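For instance, already for graphs of genus $1$ the general bound above evaluates to $\Delta+6+\sqrt{49}=\Delta+13$.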
\section{Introduction} \IEEEPARstart{I}{n} the traditional source coding doctrine, performance of algorithms are characterized in the limit of large blocklengths. In some modern applications, however, data is continuously generated and updated, making them highly delay-sensitive. Therefore, it is vital to characterize the overheads associated with operation in the short blocklength regime. To evaluate the performance of source coding for blocklengths at which the law of large numbers does not apply, we need a more refined metric than expected length. Thus, we use $\epsilon$-coding rate, the minimum rate such that the corresponding overflow probability is less than $\epsilon$. Fundamental limits of $\epsilon$-coding rate for fixed-to-variable lossless data compression in the non-universal setup are derived in \cite{verdulossless}, both for $i.i.d.$ as well as Markov sources. In most applications, however, the statistics of the source are unknown or arduous to estimate, especially at short blocklengths, where we have constraints on the available data for the inference task. In the universal setup, a class of models is given, however the true model in the class that generates the data is unknown. From an algorithmic angle, the aim of universal source coding is to propose a compression algorithm in which the encoding process is ignorant of the underlying unknown parameters, yet achieving the performance criteria. Analysis of the finite blocklength behavior as well as fine asymptotics of universal source coding have been considered in \cite{oliver,oliver2,oliverArxiv,tanAsyEs} for the class of $i.i.d.$ sources, and in \cite{nemat} for the class of Markov sources. Similar to the aforementioned works, the universal source coding scheme in this paper compresses the whole file, so we relax the prefix condition \cite{szpan}. Imposing the prefix free condition, the $\epsilon$-coding rate of the Two Stage code \cite{oliver2,oliverArxiv} and that of the Bayes code \cite{saito,saitoISIT} are also considered in the literature. The Type Size code (TS code) is introduced in \cite{oliver} for compression of the class of \emph{all} stationary memoryless sources, in which sequences are encoded in increasing order of type class size. It is shown that the resulting third-order term is $\frac{|\mathcal{X}|-3}{2}\log{n}$ bits, where $|\mathcal{X}|$ is the alphabet size. Its optimality is shown in \cite{oliver2}. Subsequently, a converse bound is derived in \cite{fekri} for one-to-one average minimax (and maximin) redundancy of memoryless sources, which consequently shows that the TS code is optimal up to $o(\log n)$ for universal one-to-one compression of \emph{all} memoryless sources, considering expected length as the performance metric \cite{fekri}. However, an achievable scheme for universal one-to-one compression of parametric sources with more \emph{structure} is not provided. Departing from average case analysis, we consider $\epsilon$-coding rate as the performance metric and provide an achievable scheme for compressing exponential families of distributions as the parametric class. Moreover, we provide a converse result, showing that our proposed scheme is optimal up to the third-order coding rate. Type classes in \cite{oliver,nemat,fekri} are based on the empirical probability mass function (EPMF). In particular, two sequences are in the same (elementary) type class if they have the same EPMF. Elementary type classes do not exploit the inherited structure in the model class. 
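For instance, over an alphabet of size $|\mathcal{X}|$ there are $\binom{n+|\mathcal{X}|-1}{|\mathcal{X}|-1}=\Theta\left(n^{|\mathcal{X}|-1}\right)$ elementary type classes of length-$n$ sequences, and this count is the same whether the model class is the set of all memoryless sources or a low-dimensional family inside it; when $d<|\mathcal{X}|-1$, many of these classes need not be distinguished by a code that is universal only over the smaller class.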
To generalize the notion of a type to richer model classes, we define the \emph{point} type class as the set of sequences equiprobable under any model in the class. The size of the point type class structure is analyzed in \cite{merhav}. This natural characterization of type classes is based on the philosophy that the sequences with the same probability (under any model in the class) are ``\emph{indistinguishable}''. Such a philosophy has been employed before in the relevant applications, e.g. the universal simulation \cite{merhav} and the universal random number generation \cite{gadiel} problems. Perhaps surprisingly, we show that this natural approach is suboptimal for the universal source coding problem. In this paper, we characterize the structure of the type classes in a new fashion for the sake of optimally compressing exponential families of distributions. We refer to this new approach as \emph{quantized} types. We divide the convex hull of the set of minimal sufficient statistics into cuboids. Two sequences are in the same quantized type class if their minimal sufficient statistics belong to the same cuboid. Therefore, we show that \emph{approximate} indistinguishability leads to optimality for the source coding problem. We consider fixed-to-variable length codes for a $d$-dimensional exponential family of distributions over a finite alphabet $\mathcal{X}$. For ease of exposition, we first assume, data generated by the unknown true model in this family is independent and identically distributed ($i.i.d.$). We subsequently extend the results to Markov data generation mechanisms. We provide performance guarantees for the Type Size code for these model classes. Using the Type Size code, we show that the minimal number of bits required to compress a length-$n$ sequence with probability $1-\epsilon$ is at most \begin{equation} \label{introResEq} nH+\sigma\sqrt{n}Q^{-1}\left(\epsilon\right)+\left(\frac{d}{2}-1\right)\log{n}+\mathcal{O}\left(1\right) \end{equation} where $H$ and $\sigma^2$ are the entropy and varentropy of the underlying source respectively, $Q(\cdot)$ is the tail of the standard normal distribution and $d$ is the dimension of the model class. The first two terms in (\ref{introResEq}) are the same as the non-universal case \cite{verdulossless}, while the third-order $\log{n}$ term represents the cost of universality; for comparison, in the non-universal case the third-order term is $-\frac{1}{2}\log{n}$ \cite{verdulossless}. Precise bounds on the fourth-order $\mathcal{O}(1)$ term is beyond the scope of this paper. However, analyzing the fourth-order term is considered in the literature for the related source coding problems. For example, it is shown in \cite{szpan2008} that the fourth-order term is either a constant or has fluctuating behavior for average codelength of a binary memoryless source. The rest of the paper is organized as follows. We introduce the exponential family, the finite-length lossless source coding problem and related definitions in Section \ref{sec::prelim}. In Section \ref{sec::TSC}, we describe quantized type classes and the variation of the TS code used in this paper. In Section \ref{sec::mainThm}, we present the main theorem of the paper, which characterizes the performance of the TS code using quantized type classes up to third order. We present preliminary results including a lemma bounding the size of a type class in Section \ref{sec::preRes}. We provide the proof of main theorem in Section \ref{sec::proofMain}. 
Extensions to the Markov case are considered in Section \ref{sec::ParMrk}. We show the suboptimality of the approach based on point type classes in Section \ref{sec::AltApprch}. We conclude in Section \ref{sec::conclusion}. A number of proofs are given in the appendices. \section{Problem Statement} \label{sec::prelim} Let $\Theta$ be a compact subset of $\mathbb{R}^d$ with non-empty interior. Probability distributions in an exponential family can be expressed in the form \cite{merhav} \begin{equation} \label{pThetaEq} p_{\theta}(x)=2^{\left\langle\theta,\boldsymbol{\tau}(x)\right\rangle - \psi(\theta)} \end{equation} where $\theta\in\Theta$ is the $d$-dimensional parameter vector, $\boldsymbol{\tau}(x): \mathcal{X}\rightarrow \mathbb{R}^d$ --- the crux of our parametric approach --- is the vector of sufficient statistics, and $\psi(\theta)$ is the normalizing factor. Let the model class $\mathcal{P}=\left\{p_{\theta},\theta\in\Theta\right\}$ be the exponential family of distributions over the finite alphabet $\mathcal{X}=\left\{1,\cdots,|\mathcal{X}|\right\}$, parameterized by $\theta\in\Theta\subset \mathbb{R}^d$, where $d$ is the number of degrees of freedom in the minimal description of $p_{\theta}\in\mathcal{P}$, in the sense that no smaller dimensional family can capture the same model class. The number of degrees of freedom turns out to characterize the richness of the model class in our context. Compactness of $\Theta$ implies existence of a constant upper bound $\wp$ on the norm of the parameter vectors, namely $\|\theta\|\leq \wp$ for all $\theta\in\Theta$. We denote the (unknown) true model in force as $p_{\theta^*}$. $\mathbb{P}_{\theta}$, $\mathbb{E}_{\theta}$ and $\mathbb{V}_{\theta}$ denote probability, expectation and variance with respect to $p_{\theta}$, respectively. All logarithms are in base 2. Instead of introducing different indices for every new constant $C_1,C_2,...$, the same letter $C$ is used to denote different constants whose precise values are irrelevant. From (\ref{pThetaEq}), the probability of a sequence $x^n$ drawn $i.i.d.$ from a model $p_{\theta}$ in the exponential family takes the form \cite{merhav} \begin{align} p_{\theta}(x^n) &=\prod_{i=1}^{n}p_{\theta}(x_i) \nonumber \\ &= \prod_{i=1}^{n}2^{\big\langle\theta,{\boldsymbol{\tau}}(x_i)\big\rangle-\psi(\theta)} \nonumber \\ &=2^{\left\{n\big[\big\langle\theta,\boldsymbol{\tau}(x^n)\big\rangle-\psi(\theta)\big]\right\}}\label{pNdimEq} \end{align} where \begin{equation} \label{tauXnDef} \boldsymbol{\tau}(x^n)=\frac{\sum_{i=1}^{n}{\boldsymbol{\tau}(x_i)}}{n} \in \mathbb{R}^d \end{equation} is a minimal sufficient statistic \cite{merhav}. Note that $\boldsymbol{\tau}(x)$ and $\boldsymbol{\tau}(x^n)$ are distinguished based upon their arguments. We consider a fixed-to-variable code that encodes an $n$-length sequence from the parametric source to a variable-length bit string via a coding function \begin{equation*} \phi:\mathcal{X}^n\rightarrow \{0,1\}^*=\{\emptyset,0,1,00,01,10,11,000,\cdots\}. \end{equation*} We do not make the assumption that the code is prefix-free. Let $l(\phi(x^n))$ be the number of bits in the compressed binary string when $x^n$ is the source sequence. We gauge the performance of algorithms through the $\epsilon$-coding rate at blocklength $n$ given by \begin{equation*} R_n(\epsilon,\phi, p_{\theta^*}):=\min\left\{\frac{k}{n}: \mathbb{P}_{\theta^*}\Big[l(\phi(X^n))\geq k\Big]\leq \epsilon\right\}.
\end{equation*} \section{Type Size Code} \label{sec::TSC} For the class of all memoryless sources over a finite alphabet $\mathcal{X}$, the fixed-to-variable TS code introduced in \cite{oliver} sorts sequences based on the size of the elementary type class, from smallest to largest, and then encodes sequences to variable-length bit-strings in this order. More precisely, define the support of a sequence as the set of observed symbols in it. The output of the encoder consists of a header that encodes the support of the sequence and a body that maps sequences to binary strings based on the size of their type class, among all sequences with the support set indicated in the header. That is, if two sequences $x^n$ and $y^n$ have the same support and $|T_{x^n}|\leq|T_{y^n}|$, then $l\left(\phi(x^n)\right)\leq l\left(\phi(y^n)\right)$, where $T_{x^n}$ is the type class of $x^n$. We borrow the spirit of the TS code, yet our approach for parametric sources departs from that of \cite{oliver} in two ways: \begin{enumerate} \item Rather than defining type classes based on the EPMFs, we use quantized type classes, which are based on the neighborhoods of the minimal sufficient statistics. \item We omit the header encoding the support of the observed sequence. This header is unnecessary given the assumption that $\Theta$ is compact, because under this assumption, for any distribution in $\mathcal{P}$, each letter $x\in\mathcal{X}$ occurs with some probability bounded away from zero. Thus, all letters are likely to be observed for even moderate blocklengths. \end{enumerate} We first define quantized type classes for the purpose of compressing the exponential family. We cover the convex hull of the set of minimal sufficient statistics, $\mathcal{T}=\text{conv}\left\{\boldsymbol{\tau}(x^n): x^n\in\mathcal{X}^n\right\}$, with $d$-dimensional cubic cells --- cuboids --- of side length $\frac{s}{n}$, where $s>0$ is a constant. The union of such disjoint cuboids should cover $\mathcal{T}$. The position of these cuboids is arbitrary; however, once we cover the space, the covering is fixed throughout. We represent each $d$-dimensional cuboid by its geometrical \emph{center}. Denote by $G(\boldsymbol{\tau}_0)$ the cuboid with center $\boldsymbol{\tau}_0$; more precisely, \begin{equation} \label{cuboidEq} G(\boldsymbol{\tau}_0):= \left\{\boldsymbol{z}+\boldsymbol{\tau}_0 \in \mathbb{R}^d: -\frac{s}{2n}<z_i\leq \frac{s}{2n} \mbox{ for } 1\leq i \leq d \right\} \end{equation} where $z_i$ is the $i$-th component of the $d$-dimensional vector $\boldsymbol{z}$. Let $\boldsymbol{\tau}_c(x^n)$ be the center of the cuboid that contains $\boldsymbol{\tau}(x^n)$. Let us denote $\mathcal{T}_c$ as the set of cuboid centers, i.e., $\mathcal{T}_c=\left\{\boldsymbol{\tau}_c(x^n):x^n\in\mathcal{X}^n\right\}$. We then define the quantized type class of $x^n$ as \begin{equation} \label{typeClassDefEq} T_{x^n}:=\left\{y^n\in\mathcal{X}^n: \boldsymbol{\tau}(y^n)\in G\left(\boldsymbol{\tau}_c(x^n)\right)\right\} \end{equation} that is, the set of all sequences $y^n$ whose minimal sufficient statistic belongs to the very same cuboid that contains the minimal sufficient statistic of $x^n$ (see Figure \ref{fig::TClassPar}).
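\noindent As a simple illustration of this construction, consider the Bernoulli family: $\mathcal{X}=\{0,1\}$, $d=1$, $\boldsymbol{\tau}(x)=x$ and $\psi(\theta)=\log\left(1+2^{\theta}\right)$, so that $p_{\theta}(1)=2^{\theta}/(1+2^{\theta})$. In this case $\boldsymbol{\tau}(x^n)$ is the fraction of ones in $x^n$, $\mathcal{T}=[0,1]$, the cuboids are intervals of length $\frac{s}{n}$, and the quantized type class of $x^n$ consists of all sequences whose number of ones falls in the same window of width $s$ as that of $x^n$.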
\begin{figure} \centering \captionsetup{justification=centering} \begin{tikzpicture} \draw[step=2cm,color=gray] (0,0) grid (6,6); \node at (3,4.3) {$\frac{s}{n}$}; \node at (2.5,4.2) {$\longleftarrow$}; \node at (3.5,4.2) {$\longrightarrow$}; \node at (6.2,6.2) {$\mathcal{T}$}; \node at (2.3,2.2) {$\bullet$}; \node at (2.5,2.5) {$\boldsymbol{\tau(x^n)}$}; \node at (3,3) { $\bullet$ } ; \node at (3.2,3.3) {$\boldsymbol{\tau_c(x^n)}$}; \draw[step=2cm,color=red] (2,2) grid (4,4); \node at (3,3.7) {{$\color{red}{G(\boldsymbol{\tau_c(x^n)})}$}}; \end{tikzpicture} \caption{Type class structure for the exponential families} \label{fig::TClassPar} \end{figure} Since quantized type classes are represented by the cuboids, and consequently by the cuboid centers, we interchangeably write $T_{\boldsymbol{\tau}_0}$ for the type class with corresponding cuboid center $\boldsymbol{\tau}_0$. Hence, $T_{\boldsymbol{\tau}_c(x^n)}$ is the same as $T_{x^n}$. Two sequences within a given type class are indistinguishable from the coding perspective. The sequence indistinguishability introduced in this paper is reminiscent of Balasubramanian's model indistinguishability \cite{bala}. In contrast to the sequence indistinguishability approach, where the space of minimal sufficient statistics is partitioned into cuboids, in a model indistinguishability approach one may partition the source space. The asymptotics of the model indistinguishability approach are derived in \cite{JormaStCom}, where the maximum likelihood estimate is quantized to some precision by mapping it to the center of a cuboid. However, in their setup, the quantized code has the same logarithmic term as the maximum likelihood code with no quantization (see also \cite{JormaSNML}). For the parametric TS code, the type class structure in \cite{merhav} corresponds to the point type approach, in which no quantization is performed, i.e., $s=0$. In this limit, the size of the type class in \cite{merhav} depends on the dimension $d'$ of the derived lattice space \cite[Eq.A3]{merhav} rather than the model parameter dimension $d$. We return to this issue in Section \ref{sec::AltApprch}, where we show that, using point types, the TS code achieves a third-order rate of $\left(\frac{d'}{2}-1\right)\log{n}$, which is not tight enough for our purposes since $d'$ is in general larger than $d$. As a direct consequence of our TS code construction, we have the following finite blocklength achievable bound; it constitutes a modification of Theorem 3 in \cite{oliver}. \begin{Theorem}\cite{oliver}\label{finiteBlckThm} For the TS code, \begin{equation} R_n(\epsilon,\phi,p_{\theta^*})= \frac{1}{n}\left\lceil\log{M(\epsilon)}\right\rceil \end{equation} where \begin{equation} \label{mEq} M(\epsilon)=\inf_{\gamma:\mathbb{P}_{\theta^*}\left(\frac{1}{n}\log{|T_{\boldsymbol{\tau}_c(X^n)}|}>\gamma\right)\leq \epsilon}\sum_{\substack{\boldsymbol{\tau}_c\in\mathcal{T}_c: \\ \frac{1}{n}\log{|T_{\boldsymbol{\tau}_c}|}\leq \gamma }}{{|T_{\boldsymbol{\tau}_c}|}}. \end{equation} \end{Theorem} \section{Main Result} \label{sec::mainThm} Let $H(p_{\theta})=\mathbb{E}_{\theta}\left(\log{\frac{1}{p_{\theta}(X)}}\right)$ and $\sigma^2(p_{\theta})=\mathbb{V}_{\theta}\left(\log{\frac{1}{p_{\theta}(X)}}\right)$ be the entropy and the varentropy of $p_{\theta}$. The following theorem exactly characterizes the achievable $\epsilon$-rates up to the third-order term, and asserts that this rate is achieved by the TS code. 
\begin{Theorem} \label{mainThm} For any stationary memoryless exponential family of distributions parameterized by $\Theta$, \begin{equation} \inf_{\phi}\sup_{\theta\in\Theta}\left[R_n(\epsilon,\phi,{p_{\theta}})-H(p_{\theta})-\frac{\sigma(p_{\theta})}{\sqrt{n}}Q^{-1}(\epsilon)\right]=\left(\frac{d}{2}-1\right)\frac{\log{n}}{n}+\mathcal{O}\left(\frac{1}{n}\right) \label{mainEq} \end{equation} where the infimum is achieved by the TS code using quantized types. \end{Theorem} \begin{Example} For the class of all $i.i.d.$ distributions over $\mathcal{X}$, $d=|\mathcal{X}|-1$, and Theorem \ref{mainThm} reduces to the result in \cite{oliver}. \end{Example} \section{Auxiliary Results} \label{sec::preRes} Define \begin{equation} \label{thetaHatEquation} \hat{\theta}\left(\boldsymbol{\tau}\right) =\underset{\theta\in\Theta}{\arg\max} \left(\langle\theta,\boldsymbol{\tau}\rangle-\psi(\theta)\right). \end{equation} Note that since the Hessian matrix of $\psi(\theta)$, $\boldsymbol{\nabla}^2\left(\psi(\theta)\right)=\text{Cov}_{\theta}\left(\boldsymbol{\tau}(X)\right)$, is positive definite, the log-likelihood function is strictly concave and hence the maximum likelihood estimate $\hat{\theta}(\boldsymbol{\tau})$ is unique. For notational convenience, we may omit the dependencies on $\boldsymbol{\tau}$ and $\boldsymbol{\tau}_c$ in $\hat{\theta}\left(\boldsymbol{\tau}(x^n)\right)$ and $\hat{\theta}\left(\boldsymbol{\tau}_c(x^n)\right)$, and simply denote them by $\hat{\theta}(x^n)$ and $\hat{\theta}_c(x^n)$, respectively. The next lemma provides tight upper and lower bounds on the type class size. Besides being of independent interest, it is a main component of the achievability proof. \begin{Lemma}[Type Class Size] \label{TypeClSizeLem} Let $\kappa=\wp\frac{\sqrt{d}}{2}$. For large enough $n$, the size of the type class of $x^n$ is bounded as \begin{equation*} r(x^n)-2\kappa s+C' \leq \log{|T_{x^n}|} \leq r(x^n)+2\kappa s+C \end{equation*} where \begin{equation*} r(x^n)=-\log{p_{\hat{\theta}_c(x^n)}(x^n)}-\frac{d}{2}\log{n}+d\log{s} \end{equation*} is the common part of the upper and lower bounds, and $C,C'$ are constants independent of $n$. \end{Lemma} \begin{proof} For notational convenience, when it is clear from the context, we may suppress the arguments in $\boldsymbol{\tau}_c(x^n)$ and $G(\boldsymbol{\tau}_c(x^n))$ and denote them simply as $\boldsymbol{\tau}_c$ and $G(\boldsymbol{\tau}_c)$. Motivated by \cite[Eq. A2]{merhav}, we bound $|T_{x^n}|$ as follows: \begin{equation} \displaystyle \frac{{\mathbb{P}}_{\hat{\theta}_c(x^n)}\left\{\boldsymbol{\tau}(X^n)\in G\left(\boldsymbol{\tau}_c(x^n)\right)\right\}}{{\displaystyle\max_{\substack{y^n:\\ \boldsymbol{\tau}(y^n)\in G\left(\boldsymbol{\tau}_c(x^n)\right)}}}{\mathbb{P}}_{\hat{\theta}_c(x^n)}(y^n)} \leq |T_{x^n}| \leq \frac{{\mathbb{P}}_{\hat{\theta}_c(x^n)}\left\{\boldsymbol{\tau}(X^n)\in G\left(\boldsymbol{\tau}_c(x^n)\right)\right\}}{{\displaystyle\min_{\substack{y^n:\\ \boldsymbol{\tau}(y^n)\in G\left(\boldsymbol{\tau}_c(x^n)\right)}}{\mathbb{P}}_{\hat{\theta}_c(x^n)}(y^n)}}. \label{TypeMainEq} \end{equation} Let \begin{math} nG(\boldsymbol{\tau}_c)=\left\{n\textbf{z}:\textbf{z}\in G(\boldsymbol{\tau}_c)\right\}. \end{math} It is clear that \begin{equation*} \mathbb{P}_{\hat{\theta}_c(x^n)}\left\{\boldsymbol{\tau}(X^n)\in G(\boldsymbol{\tau}_c)\right\}=\mathbb{P}_{\hat{\theta}_c(x^n)}\left\{n\boldsymbol{\tau}(X^n)\in nG(\boldsymbol{\tau}_c)\right\}. 
\end{equation*} Exploiting the result in \cite[Corollary 1]{stone}, we have \begin{equation} \mathbb{P}_{\hat{\theta}_c(x^n)}\left\{n\boldsymbol{\tau}(X^n)\in nG(\boldsymbol{\tau}_c)\right\} =\frac{s^d}{\left(2\pi n\right)^{\frac{d}{2}}|\boldsymbol{\Sigma}|^{\frac{1}{2}}}e^{-\frac{\left(n\boldsymbol{\tau}_c-n\boldsymbol{\mu}_c\right)\cdot \boldsymbol{\Sigma}^{-1}\cdot \left(n\boldsymbol{\tau}_c-n\boldsymbol{\mu}_c\right)}{2n}} +o\left(n^{-\frac{d}{2}}\right)\label{stoneEq} \end{equation} where $\boldsymbol{\mu}_c$ and $\boldsymbol{\Sigma}$ are the mean and the covariance, respectively, of $\boldsymbol{\tau}(X)$ under $\hat{\theta}_c(x^n)$. To proceed, we show that $\boldsymbol{\mu}_c=\boldsymbol{\tau}_c$. We have \begin{align*} \hat{\theta}_c(x^n)&=\underset{\theta\in\Theta}{\arg\min}\: \Big(D(p_{\hat{\theta}_c(x^n)}\|p_{\theta})+H(p_{\hat{\theta}_c(x^n)})\Big) \nonumber \\ &=\underset{\theta\in\Theta}{\arg\max}\: \mathbb{E}_{\hat{\theta}_c(x^n)}\Big(\log{p_{\theta}(X)}\Big) \nonumber \\ &=\underset{\theta\in\Theta}{\arg\max}\: \mathbb{E}_{\hat{\theta}_c(x^n)}\Big(\langle\theta,\boldsymbol{\tau}(X)\rangle-\psi(\theta)\Big) \nonumber \\ &=\underset{\theta\in\Theta}{\arg\max}\: \langle\theta,\boldsymbol{\mu}_c\rangle-\psi(\theta). \end{align*} That is, $\hat{\theta}_c(x^n)$ is the maximum likelihood estimate both for $\boldsymbol{\mu}_c$ and (by definition (\ref{thetaHatEquation})) for $\boldsymbol{\tau}_c$. At a maximum likelihood estimate the gradient of the log-likelihood function vanishes, hence $\nabla \psi(\hat{\theta}_c(x^n))=\boldsymbol{\mu}_c$ and $\nabla\psi(\hat{\theta}_c(x^n))=\boldsymbol{\tau}_c$. Therefore $\boldsymbol{\mu}_c$ and $\boldsymbol{\tau}_c$ are equal. Due to (\ref{stoneEq}) and $\boldsymbol{\mu}_c=\boldsymbol{\tau}_c$, there exist constants $C,C'$ such that, for large enough $n$, \begin{equation} d\log s -\frac{d}{2}\log n +C'\leq \log \mathbb{P}_{\hat{\theta}_c(x^n)}\left\{\boldsymbol{\tau}(X^n)\in G(\boldsymbol{\tau}_c)\right\} \leq d\log s -\frac{d}{2}\log n +C.\label{logPthetaEq} \end{equation} On the other hand, \begin{equation*} \log p_{\hat{\theta}_c(x^n)}(x^n) =n \left[\langle\hat{\theta}_c(x^n),\boldsymbol{\tau}(x^n)\rangle-\psi\left(\hat{\theta}_c(x^n)\right)\right]. \end{equation*} Therefore \begin{equation} \label{eqTwo} \max_{\substack{y^n:\\ \boldsymbol{\tau}(y^n)\in G(\boldsymbol{\tau}_c(x^n))}}\log{p_{\hat{\theta}_c(x^n)}(y^n)}\leq \log p_{\hat{\theta}_c(x^n)}(x^n)+2\kappa s \end{equation} and \begin{equation} \label{eqOne} \min_{\substack{y^n:\\ \boldsymbol{\tau}(y^n)\in G(\boldsymbol{\tau}_c(x^n))}}\log{p_{\hat{\theta}_c(x^n)}(y^n)}\geq \log p_{\hat{\theta}_c(x^n)}(x^n)-2\kappa s \end{equation} where we used $\|\hat{\theta}_c(x^n)\|\leq \wp$ and the fact that if $\boldsymbol{\tau}(x^n)$ and $\boldsymbol{\tau}(y^n)$ belong to the same cuboid, then $\|\boldsymbol{\tau}(x^n)-\boldsymbol{\tau}(y^n)\|<\frac{s\sqrt{d}}{n}$. Plugging (\ref{logPthetaEq}), (\ref{eqTwo}) and (\ref{eqOne}) into (\ref{TypeMainEq}), the lemma follows. 
\end{proof} \begin{Corollary} \label{TypeCorollary} For large enough $n$, the size of the type class of $x^n$ with corresponding cuboid center $\boldsymbol{\tau}_c$ is bounded as \begin{equation*} nf(\boldsymbol{\tau}_c)-6\kappa s -C''\leq \log{|T_{\boldsymbol{\tau}_c}|} \leq nf(\boldsymbol{\tau}_c) \end{equation*} where $C''=C-C'$ and \begin{equation} f(\boldsymbol{\tau})=-\langle\hat{\theta}(\boldsymbol{\tau}),\boldsymbol{\tau}\rangle + \psi\left(\hat{\theta}(\boldsymbol{\tau})\right)-\frac{d}{2n}\log{n}+\frac{d\log{s}}{n} +\frac{3\kappa s}{n}+\frac{C}{n}.\label{fEqUpp2} \end{equation} \end{Corollary} We appeal to the following normal approximation result in order to bound the CDF of the type class size (in the achievability proof), and further the CDF of the mixture distribution (in the converse proof), by that of the normal distribution. \begin{Lemma}[Asymptotic Normality of Information]\label{maxLikeBerr} Fix a positive constant $\alpha$. For a stationary memoryless source, there exists a finite positive constant $A$ such that for all $n\geq 1$ and $z$ such that $|z|\leq \alpha$, \begin{equation} \left|\mathbb{P}_{\theta^*}\left\{\frac{-\log{p_{\hat{\theta}(X^n)}(X^n)}-nH}{\sqrt{n}\sigma}>z\right\}-Q(z)\right|\leq \frac{A}{\sqrt{n}} \end{equation} where $H:=H(p_{\theta^*})$ and $\sigma^2:=\sigma^2(p_{\theta^*})$ are the entropy and varentropy of the true model $p_{\theta^*}$, respectively. \end{Lemma} \begin{proof} See Appendix \ref{app::maxLikeBerr} \end{proof} The following lemma provides a guarantee on the approximation of $p_{\hat{\theta}(x^n)}(x^n)$ by $p_{\hat{\theta}_{c}(x^n)}(x^n)$, which allows us to use Lemma \ref{maxLikeBerr} in the achievability proof. \begin{Lemma}[Maximum Likelihood Approximation] \label{apprxLemm} Let $\kappa$ be defined as in Lemma \ref{TypeClSizeLem}. We have \begin{equation*} \log{p_{\hat{\theta}(x^n)}(x^n)}-\log{p_{\hat{\theta}_c(x^n)}(x^n)}\leq 2\kappa s. \end{equation*} \end{Lemma} \begin{proof} See Appendix \ref{app::apprxLemm}. \end{proof} We need the following technical lemmas for the achievability proof. \begin{Lemma} \label{fIsLipschitz} There exists a Lipschitz constant $K_0$ independent of $n$, so that for any minimal sufficient statistics $\boldsymbol{\tau}_1$ and $\boldsymbol{\tau}_2$, \begin{equation} |f(\boldsymbol{\tau}_1)-f(\boldsymbol{\tau}_2)|\leq K_0 \|\boldsymbol{\tau}_1-\boldsymbol{\tau}_2\|. \end{equation} \end{Lemma} \begin{proof} See Appendix \ref{app:fLipschitz}. \end{proof} Let $\omega=\frac{\log{|\mathcal{X}|}-H}{5}$. Without loss of generality, we may assume that the true model is not the uniform distribution; otherwise the TS code (like any other reasonable code) is obviously optimal. Therefore, $\omega>0$. Let $0\leq \lambda< H+\omega$, and let $\rho(\lambda)=\text{Vol}\left\{\boldsymbol{\tau}:f(\boldsymbol{\tau})\leq \lambda\right\}$ be the volume of the corresponding sub-level set of $f$. \begin{Lemma} \label{rhoIsLipschitz} There exists a Lipschitz constant $K_1$ so that for all $0\leq a,b< H+\omega$, \begin{equation*} |\rho(a)-\rho(b)|\leq K_1|a-b|. \end{equation*} \end{Lemma} \begin{proof} See Appendix \ref{app:hLipschitz}. \end{proof} For our converse proof, we will need the regular value theorem \cite[Prop. 2.3.2]{balasko} from manifold theory (see also \cite[Theorem 9]{robbin}), stated as follows. \begin{Theorem} \label{thetaZeroDim} Let $M$ and $N$ be smooth manifolds of dimensions $m_1,m_2$ with $m_1\geq m_2$. 
Let $\eta_0:M\longrightarrow N$ and $b\in N$ be such that for any $a\in\eta_0^{-1}(b)$, the Jacobian matrix of $\eta_0$ at $a$ has full rank $m_2$ (i.e., the differential of $\eta_0$ at $a$ is surjective). Then, $\eta_0^{-1}(b)$ is an $(m_1-m_2)$-dimensional manifold. \end{Theorem} We have the following Laplace approximation theorem for integrals over manifolds. We refer the reader to \cite[Chap. 9, Th. 3]{wong} for a detailed proof. In the converse proof, we use Laplace's approximation to bound the self-information of the mixture distribution. \begin{Theorem}[Laplace's Approximation]\label{laplaceTheorem}\cite{oliver2} Let $D$ be a $\tilde{d}$-dimensional differentiable manifold embedded in $\mathbb{R}^m$, and let $\eta_1(\cdot)$ and $\eta_2(\cdot)$ be functions that are infinitely differentiable on $D$. Let \begin{equation} \label{lapint} Z(n)=\int_{D}{\eta_2(x)e^{-n\eta_1(x)}dx}. \end{equation} Assume that: (i) the integral in (\ref{lapint}) converges absolutely for all $n\geq n_0$; (ii) there exists a point $x^*$ in the interior of $D$ such that for every $\epsilon>0$, $\xi(\epsilon)>0$, where \begin{equation*} \xi(\epsilon)=\inf\left\{\eta_1(x)-\eta_1(x^*): x\in D \mbox{ and } |x-x^*|\geq \epsilon\right\} \end{equation*} and (iii) the Hessian matrix $\mathcal{E}=\left(\frac{\partial^2 \eta_1(x)}{\partial{x}_i\partial x_j}\right) \Big| _{x=x^*}$ is positive definite. Let $F\in \mathbb{R}^{m\times \tilde{d}}$ be an orthonormal basis for the tangent space to $D$ at $x^*$. Then \begin{equation*} Z(n)=e^{-n\eta_1(x^*)}\left(\frac{2\pi}{n}\right)^{\frac{\tilde{d}}{2}}\eta_2(x^*)\left|F^T\mathcal{E}F\right|^{-\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{1}{n}\right)\right). \end{equation*} \end{Theorem} \section{Proof of Theorem \ref{mainThm}} \label{sec::proofMain} \subsection{Achievability} \label{subsec::Achiev} In this subsection we bound the third-order coding rate of the quantized implementation of the TS code. We continue from the finite blocklength result in Theorem \ref{finiteBlckThm}, and evaluate its asymptotic performance. For the constants $C$ and $A$ in Lemmas \ref{TypeClSizeLem} and \ref{maxLikeBerr}, let \begin{equation} \label{gammaEq} \gamma = H+\frac{\sigma}{\sqrt{n}}Q^{-1}\Big(\epsilon-\frac{A}{\sqrt{n}}\Big)-\frac{d}{2n}\log{n} +\frac{d}{n}\log{s}+\frac{4\kappa s}{n}+\frac{C}{n}. \end{equation} Denote \begin{align} p_{\gamma}&:=\mathbb{P}_{\theta^*}\Big[\log{|T_{X^n}|}>n\gamma\Big] \label{pGammaFirst}\\ &=\mathbb{P}_{\theta^*}\Big[\log{|T_{\boldsymbol{\tau}_c(X^n)}|}>n\gamma\Big].\nonumber \end{align} We have \begin{align} p_{\gamma} &\leq \mathbb{P}_{\theta^*}\Big[-\log{{p_{\hat{\theta}_c(X^n)}(X^n)}}>nH +\sqrt{n}\sigma Q^{-1}\Big(\epsilon-\frac{A}{\sqrt{n}}\Big) +2\kappa s\Big] \label{aEq} \\ &\leq\mathbb{P}_{\theta^*}\Big[\frac{-\log{{p_{\hat{\theta}(X^n)}(X^n)}}-nH}{\sqrt{n}\sigma}>Q^{-1}\Big(\epsilon-\frac{A}{\sqrt{n}}\Big) \Big] \label{bEq} \\ &\leq Q\Big(Q^{-1}\big(\epsilon-\frac{A}{\sqrt{n}}\big)\Big)+\frac{A}{\sqrt{n}} \label{dEq} \\ &=\epsilon \nonumber \end{align} where (\ref{aEq}) follows from Lemma \ref{TypeClSizeLem} and (\ref{gammaEq}), (\ref{bEq}) is from Lemma \ref{apprxLemm}, and (\ref{dEq}) is a consequence of Lemma \ref{maxLikeBerr}. Since $p_{\gamma}\leq \epsilon$ for $\gamma$ in (\ref{gammaEq}), this choice of $\gamma$ satisfies the constraint in (\ref{mEq}). We can therefore bound $M(\epsilon)$, defined in (\ref{mEq}), with this choice of $\gamma$. 
Fixing $\Delta=\frac{1}{n}$, we have \begin{align} M(\epsilon) &\leq \sum_{\substack{\boldsymbol{\tau}_c\in\mathcal{T}_c:\\ \frac{1}{n}\log{|T_{\boldsymbol{\tau}_c}|}\leq \gamma}}{|T_{\boldsymbol{\tau}_c}|} \nonumber \\ &\leq \sum_{\substack{\boldsymbol{\tau}_c\in\mathcal{T}_c:\\ f(\boldsymbol{\tau}_c)-\frac{6\kappa s+C''}{n}\leq \gamma}}{2^{nf(\boldsymbol{\tau}_c)}} \label{useBoundLem} \\ &= \sum_{i=0}^{\infty}\sum_{\substack{\boldsymbol{\tau}_c\in\mathcal{T}_c:\\ f(\boldsymbol{\tau}_c)\in\mathcal{A}_i }} {2^{nf(\boldsymbol{\tau}_c)}} \nonumber \\ &\leq \sum_{i=0}^{\infty}\left|\left\{\boldsymbol{\tau}_c\in\mathcal{T}_c:f(\boldsymbol{\tau}_c)\in\mathcal{A}_i\right\}\right| \cdot 2^{n\gamma+6\kappa s+C''-ni\Delta} \label{mEpsilon} \end{align} where (\ref{useBoundLem}) follows from Corollary \ref{TypeCorollary} and $\mathcal{A}_i=\left(\gamma+\frac{6\kappa s+C''}{n}-(i+1)\Delta, \gamma+\frac{6\kappa s+C''}{n}-i\Delta\right]$. The rest of the proof is similar to \cite{oliver}; however, we include it for completeness. We have \begin{align} \left|\left\{\boldsymbol{\tau}_c\in\mathcal{T}_c:f(\boldsymbol{\tau}_c)\in\mathcal{A}_i\right\}\right|&=\sum_{\substack{\boldsymbol{\tau}_c\in\mathcal{T}_c:\\f(\boldsymbol{\tau}_c)\in\mathcal{A}_i}}\frac{\text{Vol}\left(G(\boldsymbol{\tau}_c)\right)}{\left(\frac{s}{n}\right)^d} \label{sumOne} \\ &= \frac{1}{\left(\frac{s}{n}\right)^d}{\text{Vol}\left(\bigcup_{\substack{\boldsymbol{\tau}_c\in\mathcal{T}_c:\\f(\boldsymbol{\tau}_c)\in\mathcal{A}_i}}G(\boldsymbol{\tau}_c)\right)} \label{aaEq2} \\ &\leq \frac{1}{\left(\frac{s}{n}\right)^d}{\text{Vol}\left(\bigcup_{\boldsymbol{\tau}\in\mathcal{T}:f(\boldsymbol{\tau})\in\mathcal{A}_i}G(\boldsymbol{\tau})\right)} \nonumber \end{align} where (\ref{sumOne}) results from $\text{Vol}\left(G(\boldsymbol{\tau}_c)\right)=\left(\frac{s}{n}\right)^d$, and (\ref{aaEq2}) follows from the disjointness of the cuboids. If $\boldsymbol{\tau}\in G(\boldsymbol{\tau}_c)$, then $\|\boldsymbol{\tau}-\boldsymbol{\tau}_c\|\leq\frac{s\sqrt{d}}{2n}$ and consequently, by Lemma \ref{fIsLipschitz}, \begin{equation} |f(\boldsymbol{\tau})-f(\boldsymbol{\tau}_c)|\leq K_0\cdot \frac{s\sqrt{d}}{2n} := K_2\frac{s}{n} \label{ffLipDist} \end{equation} where $K_2=K_0\frac{\sqrt{d}}{2}$. Therefore, for $a=\gamma+\frac{6\kappa s+C''}{n}-(i+1)\Delta$, \begin{align} \left|\left\{\boldsymbol{\tau}_c\in\mathcal{T}_c:f(\boldsymbol{\tau}_c)\in \mathcal{A}_i\right\}\right| &\leq \frac{1}{(\frac{s}{n})^d}\cdot \text{Vol}\Big(\bigcup_{ a<f(\boldsymbol{\tau})\leq a+\Delta}{G(\boldsymbol{\tau})}\Big) \nonumber \\ &\leq \frac{1}{(\frac{s}{n})^d}\text{Vol}\left(\left\{\boldsymbol{\tau}:f(\boldsymbol{\tau})\in \left(a-K_2\frac{s}{n},a+\Delta+K_2\frac{s}{n}\right]\right\}\right) \label{bistoshish} \\ &= \frac{1}{\left(\frac{s}{n}\right)^d}\left[\rho\left(a+\Delta+K_2\frac{s}{n}\right)-\rho\left(a-K_2\frac{s}{n}\right)\right] \label{breakPoint} \end{align} where (\ref{bistoshish}) is from (\ref{ffLipDist}). In order to continue from (\ref{breakPoint}), recall that $\omega=\frac{\log{|\mathcal{X}|}-H}{5}$. Observe that by (\ref{gammaEq}), $a+K_2\frac{s}{n}+\Delta\leq H+\frac{C}{\sqrt{n}}$ for a positive constant $C$. Since $\omega>0$, $H+\frac{C}{\sqrt{n}}< H+\omega$ for large enough $n$. A similar argument shows that $0\leq a-K_2\frac{s}{n}<H+\omega$. Therefore, the boundary conditions of Lemma \ref{rhoIsLipschitz} are satisfied. 
Continuing from (\ref{breakPoint}) and using Lemma \ref{rhoIsLipschitz}, we then have \begin{equation} \left|\left\{\boldsymbol{\tau}_c\in\mathcal{T}_c:f(\boldsymbol{\tau}_c)\in \mathcal{A}_i\right\}\right|\leq \frac{K_1}{\left(\frac{s}{n}\right)^d}\cdot \left[\Delta+2K_2\frac{s}{n}\right]. \label{sizeTau} \end{equation} Applying (\ref{sizeTau}) to (\ref{mEpsilon}), we obtain \begin{align*} M(\epsilon) &\leq \sum_{i=0}^{\infty}{\frac{K_1}{(\frac{s}{n})^d}\cdot \left[\Delta+2K_2\frac{s}{n}\right]\cdot 2^{n\gamma+6\kappa s+C''-ni\Delta}} \nonumber \\ &= \frac{n^d}{s^d}\cdot \left[\Delta+2K_2\frac{s}{n}\right]\cdot 2^{n\gamma+6\kappa s+C''}\cdot \frac{K_1}{1-2^{-n\Delta}}. \end{align*} From (\ref{gammaEq}) and since $s>0$ is a constant and $\Delta=\frac{1}{n}$, we obtain \begin{equation*} \log {M(\epsilon)}\leq nH+\sigma\sqrt{n}Q^{-1}(\epsilon)+\left(\frac{d}{2}-1\right)\log{n}+\mathcal{O}(1). \end{equation*} \subsection{Converse} For a parameter vector $\theta\in\Theta$, define $J(\theta)=nH(p_{\theta})+\sigma(p_{\theta})\sqrt{n}Q^{-1}(\epsilon)$. We first rewrite the entropy function as follows: \begin{align} H(p_{\theta}) &= -\sum_{x\in\mathcal{X}}{p_{\theta}(x)\log{p_{\theta}}(x)} \nonumber \\ &= -\sum_{x\in\mathcal{X}}{p_{\theta}(x)\left(\langle\theta,\boldsymbol{\tau}(x)\rangle-\psi(\theta)\right)} \label{defThetaP} \\ &= -\langle\theta,\mathbb{E}_{\theta}(\boldsymbol{\tau}(X))\rangle+\psi(\theta) \nonumber \\ &= -\langle\theta,\nabla\psi(\theta)\rangle+\psi(\theta) \label{fromFundEq} \end{align} where (\ref{defThetaP}) is from (\ref{pThetaEq}) and (\ref{fromFundEq}) is from $\mathbb{E}_{\theta}(\boldsymbol{\tau}(X))=\nabla\psi(\theta)$ \cite{jordan}. Taking the derivative of (\ref{fromFundEq}) with respect to $\theta$, we obtain \begin{equation} \label{findZeros} \nabla H(p_{\theta}) = -\theta\nabla^2\psi(\theta). \end{equation} Since $\nabla^2\psi(\theta) = \text{Cov}_{\theta}(\boldsymbol{\tau}(X))$ is positive definite, (\ref{findZeros}) vanishes only at $\theta_{u}=(0,\cdots,0)$, which corresponds to the uniform distribution. Since $\Theta$ has nonempty interior, let $\theta_0$ be a point in the interior of $\Theta$ with $J(\theta_0)\neq J(\theta_u)$. Define \begin{equation*} \Theta_0:=\left\{\theta\in\Theta:J(\theta)=J(\theta_0)\right\}. \end{equation*} As $\theta_u\notin\Theta_0$, $\nabla H(p_{\theta})$ is nonzero for all parameters $\theta\in\Theta_0$. Therefore, for large enough $n$, $\nabla J(\theta)$ is also nonzero for all $\theta\in\Theta_0$. Hence, the Jacobian of $J(\cdot)$ at any point in the set $J^{-1}(J(\theta_0))=\Theta_0$ is a surjective map from $\Theta$ to $\mathbb{R}$. Theorem \ref{thetaZeroDim} then implies that $\Theta_0$ is a $(d-1)$-dimensional manifold. In order to prove the converse, it suffices to show that \begin{equation*} \sup_{\theta\in\Theta_0}R_n(\epsilon,\phi,p_{\theta})\geq \frac{J(\theta_0)}{n}+\left(\frac{d}{2}-1\right)\frac{\log{n}}{n}-\mathcal{O}\left(\frac{1}{n}\right). \end{equation*} Let $\overline{p}(x^n)$ be the mixture distribution with uniform prior among $n$-length $i.i.d.$ distributions with marginals parametrized by $\Theta_0$, i.e. \begin{equation} \label{mixtureEq} \overline{p}(x^n)=\frac{1}{\text{Vol}(\Theta_0)}\int_{\theta\in\Theta_0}p_{\theta}(x^n)d\theta \end{equation} where $\text{Vol}(\cdot)$ is the $(d-1)$-dimensional volume. 
For any $\gamma>0$, applying Theorem 3 in \cite{oliver2} gives \begin{equation} \label{epsilon2tauEq} \epsilon+2^{-\gamma}\geq \inf_{\theta\in\Theta_0}\mathbb{P}_{\theta}\left(\iota_{\overline{p}}(X^n)\geq k+\gamma\right) \end{equation} where $\iota_{\overline{p}}(X^n):=-\log{\overline{p}(X^n)}$ is the self-information of the mixture distribution. We then provide a lower bound for this self-information. We may rewrite (\ref{mixtureEq}) as \begin{align*} \overline{p}(x^n)=\frac{1}{\text{Vol}(\Theta_0)}\int_{\theta\in\Theta_0}2^{-g(\theta)}d\theta \end{align*} where $g(\theta):=-\log{p_{\theta}(x^n)}$. Since $\Theta_0$ is a $(d-1)$-dimensional manifold, applying Laplace's approximation of integrals (Theorem \ref{laplaceTheorem}) yields \begin{equation} \label{overlineEq} \overline{p}(x^n)=\frac{1}{\text{Vol}(\Theta_0)}2^{-g(\hat{\theta})}\left(\frac{2\pi}{n}\right)^{\frac{d-1}{2}}\left|F^T\mathcal{E}F\right|^{-\frac{1}{2}}\left(1+\mathcal{O}\left(\frac{1}{n}\right)\right) \end{equation} where $\hat{\theta}:=\hat{\theta}(x^n)$ is the maximum likelihood estimate of $\theta$ for $x^n$. Continuing from (\ref{epsilon2tauEq}) for a constant $C>0$, we obtain \begin{align} &\epsilon+2^{-\gamma}\nonumber \\ &\geq \inf_{\theta\in\Theta_0}\mathbb{P}_{\theta}\left(\iota_{\overline{p}}(X^n)\geq k+\gamma \right) \nonumber \\ &\geq\inf_{\theta\in\Theta_0}\mathbb{P}_{\theta}\left(-\log{p_{\hat{\theta}}(X^n)}+\frac{d-1}{2}\log{n}+C\geq k+\gamma\right) \label{converseAEQ}\\ &=\inf_{\theta\in\Theta_0}\mathbb{P}_{\theta}\Bigg(\frac{-\log{p_{\hat{\theta}}(X^n)}-nH(p_{\theta})}{\sqrt{n}\sigma} \geq\frac{k+\gamma-\frac{d-1}{2}\log{n}-C-nH(p_{\theta})}{\sqrt{n}\sigma}\Bigg) \nonumber\\ &\geq Q\left(\frac{k+\gamma-\frac{d-1}{2}\log{n}-C-nH(p_{\theta})}{\sqrt{n}\sigma}\right) -\frac{A}{\sqrt{n}} \label{converseBerryEsseen} \end{align} where (\ref{converseAEQ}) is due to (\ref{overlineEq}) and the definition of $g(\cdot)$, while (\ref{converseBerryEsseen}) is from Lemma \ref{maxLikeBerr}. Setting $\gamma=\frac{1}{2}\log{n}$ and rearranging gives \begin{equation*} \frac{k}{n}\geq \inf_{\theta\in\Theta_0}H(p_{\theta})+\frac{\sigma(p_{\theta})}{\sqrt{n}}Q^{-1}\left(\epsilon+\frac{A+1}{\sqrt{n}}\right)+\left(\frac{d}{2}-1\right)\frac{\log{n}}{n}+\frac{C}{n}. \end{equation*} Recalling that $H(p_{\theta})+\frac{\sigma(p_{\theta})}{\sqrt{n}}Q^{-1}(\epsilon)$ is fixed at $\frac{J(\theta_0)}{n}$ for all $\theta\in\Theta_0$ and that $\frac{k}{n}=\max_{\theta\in\Theta_0}R_n(\epsilon,\phi,p_{\theta})$, the theorem follows. \section{Parametric Markov Class} \label{sec::ParMrk} We now consider extensions to the class of parametric Markov models. Let $\mathcal{M}$ be the exponential family of first-order, stationary, irreducible and aperiodic Markov sources, parametrized by a $d$-dimensional parameter vector $\theta\in\Theta_{\mathcal{M}}\subset\mathbb{R}^d$. The transition probabilities of the distribution $p_{\theta}\in\mathcal{M}$ have the following exponential structure \begin{equation} \label{parMrkBaseEq} p_{\theta}(x_{i}|x_{i-1})=2^{\langle \theta, \boldsymbol{\tau}(x_{i-1},x_{i})\rangle -\psi(\theta)} \end{equation} where $\boldsymbol{\tau}:\mathcal{X}\times\mathcal{X}\to \mathbb{R}^d$ is the vector of sufficient statistics. Similar to \cite{merhavVFvsFV}, we assume that the initial source symbol $x_0$ is fixed and known to both the encoder and the decoder. 
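In the same illustrative spirit as the earlier sketch, and anticipating the per-sequence statistic derived next as an average of per-transition statistics, the following minimal Python sketch shows how such a pairwise statistic could be accumulated when $x_0$ is known. Here \texttt{tau\_pair} is a hypothetical lookup table, not part of our formal development.
\begin{verbatim}
import numpy as np

# Illustrative sketch only: tau_pair is a hypothetical |X| x |X| x d array
# whose entry [a, b] is the sufficient-statistic vector of the transition
# from symbol a to symbol b.

def markov_sufficient_statistic(x, x0, tau_pair):
    # average of tau(x_{i-1}, x_i) over i = 1, ..., n, with x_0 known
    prev, total = x0, np.zeros(tau_pair.shape[-1])
    for xi in x:
        total += tau_pair[prev, xi]
        prev = xi
    return total / len(x)
\end{verbatim}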
From (\ref{parMrkBaseEq}), the probability of a sequence $x^n$ drawn according to the first-order Markov source $p_{\theta}\in\mathcal{M}$ in the exponential family takes the form \begin{align*} p_{\theta}(x^n)&=\prod_{i=1}^{n}{p_{\theta}\left(x_i|x_{i-1}\right)} \\ &= \prod_{i=1}^{n}{2^{\left\langle\theta,\boldsymbol{\tau}(x_{i-1},x_i)\right\rangle-\psi(\theta)}} \\ &=2^{n\left[\left\langle \theta,\boldsymbol{\tau}(x^n)\right\rangle - \psi\left(\theta\right)\right]} \end{align*} where $\boldsymbol{\tau}(x^n)=\frac{\sum_{i=1}^{n}\boldsymbol{\tau}(x_{i-1},x_i)}{n}\in\mathbb{R}^d$ is a minimal sufficient statistic. Through the same approach as in Section \ref{sec::TSC}, we partition the convex hull of the space of minimal sufficient statistics into cuboids of side length $\frac{s}{n}$ defined as in (\ref{cuboidEq}). We then characterize quantized type classes as in (\ref{typeClassDefEq}). Let \begin{equation} \label{entropyRate} H(p_{\theta})=\lim_{n\rightarrow\infty}\frac{1}{n}\mathbb{E}_{\theta}\left[\log{\frac{1}{p_{\theta}(X^n)}}\right] \end{equation} and \begin{equation} \label{varentropyRate} \sigma^2(p_{\theta})=\lim_{n\rightarrow\infty}\frac{1}{n}\mathbb{V}_{\theta}\left[\log{\frac{1}{p_{\theta}(X^n)}}\right] \end{equation} be the entropy rate and the varentropy rate of the Markov process parametrized by $\theta$, respectively. The following theorem characterizes the fundamental limits of universal one-to-one compression of parametric Markov sources, and asserts that the TS code is optimal up to the third-order term. \begin{Theorem} \label{markovTheorem} For any first-order, stationary, irreducible and aperiodic Markov exponential model class parametrized by $\Theta_{\mathcal{M}}$, \begin{equation*} \inf_{\phi}\sup_{\theta\in\Theta_{\mathcal{M}}}\left[R_n(\epsilon,\phi,p_{\theta})-H(p_{\theta})-\frac{\sigma(p_{\theta})}{\sqrt{n}}Q^{-1}(\epsilon)\right]= \left(\frac{d}{2}-1\right)\frac{\log{n}}{n}+\mathcal{O}\left(\frac{1}{n}\right)\nonumber \end{equation*} where the infimum is achieved by the quantized type class implementation of the TS code. \end{Theorem} \begin{proof} Let $Y_i=(X_{i-1},X_i)$ be a random vector defined by overlapping blocks of $\{X_n\}$. Since $\{X_n\}$ forms a Markov chain, so does $\{Y_n\}$. The proof follows the same lines as those in the proof for the parametric $i.i.d.$ class $\mathcal{P}$, with $\boldsymbol{\tau}(Y_n)$ playing the role of $\boldsymbol{\tau}(X_n)$. The only deviations from the memoryless proof occur in lines (\ref{stoneEq}), (\ref{dEq}) and (\ref{converseBerryEsseen}). As a counterpart of the $i.i.d.$ ratio limit theorem of (\ref{stoneEq}) for Markov sources, we may use Theorem 8 of \cite{korshunov}, which states that \begin{equation*} \mathbb{P}_{\hat{\theta}_c(x^n)}\left\{n\boldsymbol{\tau}(Y^n)\in nG(\boldsymbol{\tau}_c)\right\}= \frac{s^d}{\left(2\pi n\right)^{\frac{d}{2}}|\boldsymbol{\Sigma}|^{\frac{1}{2}}}e^{-\frac{\left\langle\left(n\boldsymbol{\tau}_c-n\boldsymbol{\mu}\right)\boldsymbol{\Sigma}^{-1},\,n\boldsymbol{\tau}_c-n\boldsymbol{\mu}\right\rangle}{2n}}+o\left(n^{-\frac{d}{2}}\right) \end{equation*} where $\boldsymbol{\Sigma}$ and $\boldsymbol{\mu}$ are the covariance and the mean associated with the stationary Markov chain, respectively. Finally, (\ref{dEq}) and (\ref{converseBerryEsseen}) can be derived from the Markov version of the normal approximation inequality stated below, whose proof is the same as in Appendix \ref{app::maxLikeBerr}. \begin{Lemma}[Asymptotic Normality of Information]\label{berryLemma} Fix a positive constant $\alpha$. 
For a first-order, stationary, irreducible and aperiodic Markov source, there exists a finite positive constant $A'$ such that for all $n\geq 1$ and $z$ such that $|z|\leq \alpha$, \begin{equation} \left|\mathbb{P}_{\theta^*}\left\{\frac{-\log{p_{\hat{\theta}(X^n)}(X^n)}-nH}{\sqrt{n}\sigma}>z\right\}-Q(z)\right|\leq \frac{A'}{\sqrt{n}} \end{equation} where $H:=H(p_{\theta^*})$ and $\sigma^2:=\sigma^2(p_{\theta^*})$ are the entropy rate and varentropy rate of the true model $p_{\theta^*}$, respectively. \end{Lemma} The rest of the proof is the same as in the $i.i.d.$ case and is omitted. \end{proof} \begin{Example} For the class of all first-order, stationary, irreducible and aperiodic Markov sources, $d=|\mathcal{X}|\left(|\mathcal{X}|-1\right)$, and Theorem \ref{markovTheorem} reduces to the result in \cite{nemat}. \end{Example} \section{Type Size Code with Point Type Classes} \label{sec::AltApprch} In this section we analyze the performance of the point type class implementation of the TS code. For a sequence $x^n\in\mathcal{X}^n$, define the point type class containing $x^n$ as \begin{equation} \label{merhavTypeClass} T_{x^n}=\left\{y^n\in\mathcal{X}^n: p_{\theta}(x^n)=p_{\theta}(y^n) \mbox{ for all } \theta\in\Theta\right\} \end{equation} the set of all $n$-length sequences $y^n\in\mathcal{X}^n$ equiprobable with $x^n$, simultaneously under all models in $\mathcal{P}$. Consequently, by (\ref{pNdimEq}), two sequences are in the same type class if and only if their minimal sufficient statistics are equal. Hence, from a geometric perspective, point type classes correspond to zero side length ($s=0$) in Figure \ref{fig::TClassPar}, i.e., type classes are single points in the space of minimal sufficient statistics. We first review the derivation of the size of a point type class from \cite{merhav}. We then provide upper and lower bounds for the asymptotic rate of the TS code with the point type class implementation, showing that the TS code performs strictly worse for $s=0$ in terms of third-order coding rate. Let $\boldsymbol{\tau}(x)[j]$, $j=1,\cdots,d$, be the $j$-th component of the $d$-dimensional vector $\boldsymbol{\tau}(x)$. For any index $j=1,\cdots,d$, there exist a fixed real number $\beta[j][0]$ and $r_j$ pairwise incommensurable real numbers $\beta[j][t]$, $t=1,\cdots,r_j$, such that, regardless of the observed sample $x\in\mathcal{X}$, $\boldsymbol{\tau}(x)[j]$ can be uniquely decomposed as \cite{merhav} \begin{equation} \label{decomEq} \boldsymbol{\tau}(x)[j] = \beta[j][0]+\sum_{t=1}^{r_j}\beta[j][t]\tilde{L}(x)[j][t] \end{equation} where $\tilde{L}(x)[j][t]$, $t=1,\cdots,r_j$, are integers depending on the sample $x$ through $\boldsymbol{\tau}(x)[j]$. The decomposition (\ref{decomEq}) defines a unique one-to-one mapping between the real-valued $\boldsymbol{\tau}(x)[j]$ and the $r_j$ integers $\tilde{L}(x)[j][t]$. Concatenating the corresponding unique integers $\tilde{L}(x)[\cdot][\cdot]$, each $d$-dimensional vector $\boldsymbol{\tau}(x)$ corresponds to a unique integer-valued vector $\tilde{\boldsymbol{L}}(x)\in\mathbb{Z}^{\sum_{j=1}^{d}r_j}$. For all $j=1,\cdots,d$, we may choose, without loss of generality, $\beta[j][0]=\boldsymbol{\tau}(1)[j]$. With this choice we always have $\tilde{\boldsymbol{L}}(1)=(0,\cdots,0)^T$. 
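To illustrate the decomposition (\ref{decomEq}) with a small example of our own (not taken from \cite{merhav}), suppose $d=1$, $\mathcal{X}=\{1,2,3\}$ and $\boldsymbol{\tau}(1)=0$, $\boldsymbol{\tau}(2)=1$, $\boldsymbol{\tau}(3)=\sqrt{2}$. Choosing $\beta[1][0]=\boldsymbol{\tau}(1)=0$, the remaining values are generated by the two pairwise incommensurable numbers $\beta[1][1]=1$ and $\beta[1][2]=\sqrt{2}$, so that $r_1=2$ and \begin{equation*} \tilde{\boldsymbol{L}}(1)=(0,0)^T,\qquad \tilde{\boldsymbol{L}}(2)=(1,0)^T,\qquad \tilde{\boldsymbol{L}}(3)=(0,1)^T. \end{equation*} Although the parametric dimension is $d=1$, the integer representation already requires two coordinates; as formalized next, the associated lattice dimension is $2$.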
Let $d'$, which is called the dimensionality of the type class in \cite{merhav}, be the rank of the matrix ${\mathbb{\tilde{L}}}=\begin{bmatrix} \tilde{\boldsymbol{L}}(2)-\tilde{\boldsymbol{L}}(1) & \cdots & \tilde{\boldsymbol{L}}(|\mathcal{X}|)-\tilde{\boldsymbol{L}}(1) \end{bmatrix}$. Therefore, there are $d'$ linearly independent rows in $\tilde{\mathbb{L}}$. Let the indices of the linearly independent rows be $i_1,\cdots,i_{d'}$. For any $x\in\mathcal{X}$, define the $d'$-dimensional vector $\boldsymbol{L}(x)$ by $\boldsymbol{L}(x)[j]=\tilde{\boldsymbol{L}}(x)[i_j]$ for $j=1,\cdots,d'$. Since the other rows are linear combinations of the independent rows, we can write this transformation as $\tilde{\mathbb{L}}=\boldsymbol{R}\mathbb{L}$, where $\boldsymbol{R}$ is a $\sum_{j=1}^{d}{r_j}\times d'$ matrix and $\mathbb{L}$ is the full-rank $d'\times(|\mathcal{X}|-1)$ matrix $\mathbb{L}=\begin{bmatrix} {\boldsymbol{L}}(2)-{\boldsymbol{L}}(1) & \cdots & {\boldsymbol{L}}(|\mathcal{X}|)-{\boldsymbol{L}}(1) \end{bmatrix}$. Since $\tilde{\boldsymbol{L}}(1)=\boldsymbol{L}(1)=\boldsymbol{0}$, there is a one-to-one correspondence between $\boldsymbol{L}(x)$ and $\tilde{\boldsymbol{L}}(x)$ and consequently between $\boldsymbol{L}(x)$ and $\boldsymbol{\tau}(x)$. Note that $d'\geq d$, and in many cases the inequality is strict. The main finding of this section is that $d'$ is the critical dimension for the behavior of the TS code under point type classes, rather than $d$. Since $d'$ may be larger than $d$, the performance of the TS code with point type classes may be strictly worse than that with quantized type classes. Let $\boldsymbol{\mathbf{b}}$ be the $d\times 1$ column vector containing the $\beta[j][0]$'s for $j=1,\cdots,d$, and let $\boldsymbol{\mathbb{A}}$ be the $d\times \sum_{j=1}^{d}{r_j}$ block-diagonal matrix containing the $\beta[j][t]$'s in (\ref{decomEq}). For a real-valued vector $\ell\in\mathbb{R}^{d'}$, let $\boldsymbol{\tau}(\ell)=\boldsymbol{\mathbf{b}}+\boldsymbol{\mathbb{A}}\boldsymbol{R}\ell$. For a constant $C>0$ to be defined later, define $f_0(\ell)$ as follows: \begin{align} f_0(\ell)&= -\frac{1}{n} \left(\left\langle \hat{\theta}(\boldsymbol{\tau}(\ell)),\boldsymbol{\tau}(\ell) \right\rangle - \psi\left(\hat{\theta}(\boldsymbol{\tau}(\ell))\right)\right) -\frac{d'}{2n}\log{2\pi n} +\frac{C}{n} \label{subnserf} \\ &=-\frac{1}{n} \left(\left\langle \hat{\theta}(\boldsymbol{\mathbf{b}}+\boldsymbol{\mathbb{A}} \boldsymbol{R} \ell),\boldsymbol{\mathbf{b}}+\boldsymbol{\mathbb{A}}\boldsymbol{R}\ell \right\rangle - \psi\left(\hat{\theta}\left(\boldsymbol{\mathbf{b}}+\boldsymbol{\mathbb{A}}\boldsymbol{R}\ell\right)\right)\right) -\frac{d'}{2n}\log{2\pi n} +\frac{C}{n} \label{secondContReal}. \end{align} For a sequence $x^n$, define $\boldsymbol{L}(x^n)$ similarly to (\ref{tauXnDef}) as \begin{equation} \label{LSumEq} \boldsymbol{L}(x^n)=\frac{\sum_{i=1}^{n}\boldsymbol{L}(x_i)}{n} \end{equation} and let $\mathcal{L}=\left\{\boldsymbol{L}(x^n):x^n\in\mathcal{X}^n\right\}$ be the set of lattice points. Throughout, $\boldsymbol{L}\in\frac{1}{n}\mathbb{Z}^{d'}$ denotes a lattice point, while $\ell\in\mathbb{R}^{d'}$ denotes a real-valued $d'$-dimensional vector. The size of a point type class is derived in \cite{merhav}, which we reproduce in Appendix \ref{app::pointProof} for completeness. Moreover, we show that the third-order term in their result is a constant, which yields the following lemma. 
\begin{Lemma} \label{PointTypeClasLemma} For large enough $n$, the size of the point type class containing $x^n$, with $\boldsymbol{L}(x^n)=\boldsymbol{L}$, is bounded as \begin{equation} \label{typefNotBnd} nf_0(\boldsymbol{L})-2C\leq \log{|T_{x^n}|} \leq nf_0(\boldsymbol{L}) \end{equation} where $C$ is the constant in (\ref{subnserf}) and (\ref{secondContReal}). \end{Lemma} \begin{proof} See Appendix \ref{app::pointProof}. \end{proof} The following is our main theorem for this section, characterizing the exact performance of the TS code with point type classes up to the third-order term. \begin{Theorem} \label{latticeTSCThm} Let $\phi_0$ be the point type class implementation of the TS code. The $\epsilon$-coding rate of $\phi_0$, for all $\theta\in\Theta$ is given by \begin{equation} R_n(\epsilon,\phi_0,{p_{\theta}})=H(p_{\theta})+\frac{\sigma(p_{\theta})}{\sqrt{n}}Q^{-1}(\epsilon) +\left(\frac{d'}{2}-1\right)\frac{\log{n}}{n}+\mathcal{O}\left(\frac{1}{n}\right). \label{pointMainEq} \end{equation} \end{Theorem} \begin{proof} The achievability proof is similar to that in Section \ref{sec::proofMain}; hence we only highlight the differences. Again, for simplicity, we denote by $H=H(p_{\theta^*})$ and $\sigma=\sigma(p_{\theta^*})$ the entropy and the varentropy of the underlying model $p_{\theta^*}$, respectively. Let \begin{equation} \label{gammaPrimeEq} \gamma'= H+\frac{\sigma}{\sqrt{n}}Q^{-1}\left(\epsilon-\frac{A}{\sqrt{n}}\right)-\frac{d'}{2n}\log\left(2\pi n\right) +\frac{C}{n}. \end{equation} We now show that for this choice of $\gamma'$, $p_{\gamma'}\leq \epsilon$, where $p_{\gamma'}$ is defined as in (\ref{pGammaFirst}). We have \begin{align} p_{\gamma'}&=\mathbb{P}_{\theta^*}\left[\log{|T_{X^n}|}>n\gamma'\right] \nonumber \\ &\leq\mathbb{P}_{\theta^*}\left[\frac{-\log{p_{\hat{\theta}(X^n)}(X^n)}-nH}{\sigma\sqrt{n}}>Q^{-1}\left(\epsilon-\frac{A}{\sqrt{n}}\right)\right] \label{typeLemmas} \\ &\leq Q\left(Q^{-1}\left(\epsilon-\frac{A}{\sqrt{n}}\right)\right)+\frac{A}{\sqrt{n}} \label{berApp} \\ &= \epsilon \nonumber \end{align} where (\ref{typeLemmas}) follows from (\ref{typefNotBnd}), (\ref{subnserf}) and (\ref{gammaPrimeEq}) by noticing that \begin{equation*} f_0(\boldsymbol{L})=-\frac{1}{n}\log{p_{\hat{\theta}(x^n)}(x^n)}-\frac{d'}{2n}\log{(2\pi n)}+\frac{C}{n} \end{equation*} for any $x^n$ with $\boldsymbol{L}(x^n)=\boldsymbol{L}$, and (\ref{berApp}) is an application of Lemma \ref{maxLikeBerr}. Recall that there is a one-to-one correspondence between $T_{x^n}$ and $\boldsymbol{L}(x^n)$; hence we can denote $T_{x^n}$ by $T_{\boldsymbol{L}(x^n)}$. Furthermore, when $x^n$ is clear from the context, we simply write $T_{\boldsymbol{L}}$. We can then reformulate the expression for $M(\epsilon)$ in (\ref{mEq}) for point type classes, simply by replacing $\boldsymbol{\tau}_c(X^n)$ with $\boldsymbol{L}(X^n)$ as the representative of the type class. We then bound $M(\epsilon)$ in (\ref{mEq}) with the choice of $\gamma'$ in (\ref{gammaPrimeEq}). Through the same approach as in Subsection \ref{subsec::Achiev}, one can show that \begin{equation} M(\epsilon)\leq \sum_{i=0}^{\infty}\left|\left\{\boldsymbol{L}\in\mathcal{L}:f_0(\boldsymbol{L})\in\mathcal{A}'_i\right\}\right| \cdot 2^{\left\{n\gamma'+2C-ni\Delta\right\}} \label{newmEpsilonUpp} \end{equation} where $\mathcal{A}'_i=\left(\gamma'+\frac{2C}{n}-(i+1)\Delta,\gamma'+\frac{2C}{n}-i\Delta\right]$ and $C$ is the constant in (\ref{typefNotBnd}). 
We now evaluate $\left|\left\{\boldsymbol{L}\in\mathcal{L}:f_0(\boldsymbol{L})\in\mathcal{A}'_i\right\}\right|$. Define a 2-norm ball of radius $r$ around a point $\ell_0\in\mathbb{R}^{d'}$ as \begin{equation} \label{ballsEq} B_{r}(\ell_0)=\left\{\ell\in\mathbb{R}^{d'}:\|\ell-\ell_0\|<r\right\}. \end{equation} In the sequel we use $\boldsymbol{L}$ for lattice points in $\mathcal{L}$, while we reserve the notation $\ell$ for points in the convex hull of $\mathcal{L}$, which we denote by $\mathfrak{L}=\text{conv}(\mathcal{L})$. Observe that for any two different points $\boldsymbol{L}_1,\boldsymbol{L}_2\in\mathcal{L}$, $\|\boldsymbol{L}_1-\boldsymbol{L}_2\|\geq \frac{1}{n}$, and therefore, $B_{\frac{1}{2n}}(\boldsymbol{L}_1)$ and $B_{\frac{1}{2n}}(\boldsymbol{L}_2)$ are disjoint. Since the convex hull $\mathfrak{L}$ is a $d'$-dimensional space, there exists a constant $C>0$ (its precise value is $\frac{\pi^{\frac{d'}{2}}}{2^{d'}\Gamma(\frac{d'}{2}+1)}$ \cite{ren}) such that \begin{equation} \label{volumeEquation} \text{Vol}\left(B_{\frac{1}{2n}}(\boldsymbol{L})\right)=\frac{C}{n^{d'}}. \end{equation} Therefore \begin{align} |\left\{\boldsymbol{L}\in\mathcal{L}:f_0(\boldsymbol{L})\in\mathcal{A}_i'\right\}|&=\sum_{\substack{\boldsymbol{L}\in\mathcal{L}\\f_0(\boldsymbol{L})\in\mathcal{A}_i'}}\frac{n^{d'}}{C}\text{Vol}\left(B_{\frac{1}{2n}}(\boldsymbol{L})\right) \nonumber \\ &=\frac{n^{d'}}{C}\text{Vol}\left(\bigcup_{\substack{\boldsymbol{L}\in\mathcal{L}\\f_0(\boldsymbol{L})\in\mathcal{A}'_i}}B_{\frac{1}{2n}}(\boldsymbol{L})\right) \label{ballDisjointness}\\ &\leq \frac{n^{d'}}{C}\text{Vol}\left(\bigcup_{\substack{\ell\in\mathfrak{L}\\f_0(\ell)\in\mathcal{A}'_i}}B_{\frac{1}{2n}}(\ell)\right) \nonumber \end{align} where (\ref{ballDisjointness}) follows from the disjointness of the balls. Proceeding as in Subsection \ref{subsec::Achiev}, it is straightforward to show that for a constant $C>0$, \begin{equation} \label{numberOfLs} |\left\{\boldsymbol{L}\in\mathcal{L}:f_0(\boldsymbol{L})\in\mathcal{A}'_i\right\}| \leq Cn^{d'-1}. \end{equation} The rest of the proof is similar to that in Subsection \ref{subsec::Achiev} and is omitted. We now provide a converse for the performance of the Type Size code with point type classes. We can rewrite the corresponding finite blocklength result (\ref{mEq}) for point type classes as \begin{equation} \label{secondMEpsilon} M(\epsilon)=\inf_{\gamma':p_{\gamma'}\leq \epsilon}v(\gamma'), \end{equation} where $p_{\gamma'}$ is defined as in (\ref{pGammaFirst}) and \begin{equation} v(\gamma')=\sum_{\substack{\boldsymbol{L}\in\mathcal{L}: \\ \frac{1}{n}\log{|T_{\boldsymbol{L}}|}\leq \gamma' }}{{|T_{\boldsymbol{L}}|}}. \end{equation} Notice that $v(\gamma')$ is a non-decreasing function of $\gamma'$, while $p_{\gamma'}$ is a non-increasing function of $\gamma'$. Therefore, if for some $\gamma'_0$, $p_{\gamma'_0}>\epsilon$, then one can conclude that \begin{equation} M(\epsilon)\geq v(\gamma_0'). \label{mEpsVRel} \end{equation} We then show that $p_{\gamma_0'}>\epsilon$ for the following choice of $\gamma_0'$: \begin{equation} \label{gammaPrimeZeroEq} \gamma'_0=H+\frac{\sigma}{\sqrt{n}}Q^{-1}\left(\epsilon+\frac{A+1}{\sqrt{n}}\right)-\frac{d'}{2n}\log{(2\pi n)}-\frac{C}{n} \end{equation} where $A$ is the constant in Lemma \ref{maxLikeBerr} and $C$ is the constant in (\ref{typefNotBnd}). 
Indeed, \begin{align} p_{\gamma'_0} &\geq\mathbb{P}_{\theta^*}\left[-\frac{1}{n}\log{p_{\hat{\theta}(X^n)}(X^n)}-\frac{d'}{2n}\log{(2\pi n)}-\frac{C}{n}>\gamma'_0\right] \label{firstLowLam} \\ &=\mathbb{P}_{\theta^*}\left[\frac{-\log{p_{\hat{\theta}(X^n)}(X^n)}-nH}{\sigma\sqrt{n}}>Q^{-1}\left(\epsilon+\frac{A+1}{\sqrt{n}}\right)\right] \label{secondLowLam} \\ &>\epsilon \label{thirdLowLam} \end{align} where (\ref{firstLowLam}) is from the type class size bound (\ref{typefNotBnd}) and the definition of $p_{\gamma'_0}$ in (\ref{pGammaFirst}), (\ref{secondLowLam}) is from the choice of $\gamma'_0$ in (\ref{gammaPrimeZeroEq}), and (\ref{thirdLowLam}) is a consequence of Lemma \ref{maxLikeBerr}. Continuing from (\ref{mEpsVRel}), we may write \begin{align} M(\epsilon)&\geq \sum_{\substack{\boldsymbol{L}\in\mathcal{L} \\ \frac{1}{n}\log{|T_{\boldsymbol{L}}|}\leq \gamma'_0}}{|T_{\boldsymbol{L}}|} \nonumber \\ &\geq \sum_{\substack{\boldsymbol{L}\in\mathcal{L} \\ f_0(\boldsymbol{L})\leq \gamma'_0}}{2^{nf_0(\boldsymbol{L})-2C}} \label{mfarSec} \end{align} where (\ref{mfarSec}) exploits the bounds for the type class size (\ref{typefNotBnd}). For $\Delta=\frac{1}{n}$, (\ref{mfarSec}) can be further lower bounded by restricting the summation to $\boldsymbol{L}\in\mathcal{A}_0$, where $\mathcal{A}_0=\{\boldsymbol{L}\in\mathcal{L}: \gamma'_0-\Delta< f_0(\boldsymbol{L})\le \gamma'_0\}$: \begin{equation} M(\epsilon)\geq \left|\mathcal{A}_0\right| \cdot 2^{n\gamma'_0-n\Delta-2C}. \label{newmEpsilon} \end{equation} We now provide a lower bound on $|\mathcal{A}_0|$. Let $\tilde{\mathcal{A}}_0=\{\ell\in\mathfrak{L}: \gamma'_0-\Delta< f_0(\ell)\le \gamma'_0\}$. \begin{Lemma} \label{disConLem} There exists a constant $C$ such that \begin{equation} \label{disConEq} \frac{\text{\emph{Vol}}\left(\bigcup_{\ell\in\tilde{\mathcal{A}}_0}B_{\frac{1}{2n}}(\ell)\right)}{\text{\emph{Vol}}\left(\bigcup_{\boldsymbol{L}\in\mathcal{A}_0}B_{\frac{1}{2n}}\left(\boldsymbol{L}\right)\right)}\leq C. \end{equation} \end{Lemma} \begin{proof} See Appendix \ref{app::disConLem}. \end{proof} We then have \begin{align} |\mathcal{A}_0| &=\sum_{\boldsymbol{L}\in\mathcal{A}_0}\frac{\text{Vol}\left(B_{\frac{1}{2n}}(\boldsymbol{L})\right)}{\text{Vol}\left(B_{\frac{1}{2n}}(\boldsymbol{L})\right)} \nonumber \\ &= C n^{d'}\sum_{\boldsymbol{L}\in\mathcal{A}_0}\text{Vol}\left(B_{\frac{1}{2n}}\left(\boldsymbol{L}\right)\right) \label{e1}\\ &= C n^{d'}\text{Vol}\left(\bigcup_{\boldsymbol{L}\in\mathcal{A}_0}B_{\frac{1}{2n}}\left(\boldsymbol{L}\right)\right) \label{e2}\\ &\geq n^{d'}\frac{\text{Vol}\left(\bigcup_{\ell\in\tilde{\mathcal{A}}_0}B_{\frac{1}{2n}}(\ell)\right)}{C} \label{e3} \end{align} where (\ref{e1}) follows from (\ref{volumeEquation}) (recall that the letter $C$ may denote different constants), (\ref{e2}) is due to the disjointness of the balls, and (\ref{e3}) is a consequence of Lemma \ref{disConLem}. Define \begin{equation} \rho_0(\lambda)=\text{Vol}\{\ell\in \mathfrak{L}:f_0(\ell)\leq \lambda\}. \end{equation} We need the following technical lemma, which we prove in Appendix \ref{sec::rhoZeroAppx}. \begin{Lemma} \label{rho0LowerDrive} There exists a positive constant $K_4$, such that for all $\gamma'_0-\Delta\leq\lambda \leq \gamma'_0$ we have \begin{equation*} \left|\frac{d}{d\lambda}\rho_0(\lambda)\right|\geq K_4. 
\end{equation*} \end{Lemma} Recalling the definition of $\tilde{\mathcal{A}}_0$, we may continue from (\ref{e3}) and write \begin{align} |\mathcal{A}_0| &\geq \frac{n^{d'}}{C}\text{Vol}\left(\cup_{\ell:\gamma_0'-\Delta<f_0(\ell)\leq\gamma_0'}B_{\frac{1}{2n}}(\ell)\right) \nonumber \\ &\geq \frac{n^{d'}}{C}\text{Vol}\left(\{\ell:f_0(\ell)\in\left(\gamma_0'-\Delta,\gamma_0'\right]\}\right) \label{eqqq1}\\ &=\frac{n^{d'}}{C}\left(\rho_0(\gamma_0')-\rho_0(\gamma_0'-\Delta)\right) \label{eqqq2} \\ &\geq \frac{n^{d'}}{C}K_4\Delta \label{eqqq3} \end{align} where (\ref{eqqq1}) is by lower bounding the volume of the ball-covering of $\tilde{\mathcal{A}}_0$ by the volume of $\tilde{\mathcal{A}}_0$ itself, (\ref{eqqq2}) is from the definition of $\rho_0$, and (\ref{eqqq3}) is from Lemma \ref{rho0LowerDrive}. Continuing from (\ref{newmEpsilon}), we have \begin{align} M(\epsilon) &\geq \frac{n^{d'}}{C}K_4\Delta \cdot 2^{n\gamma'_0-n\Delta-2C} \label{mepOne} \\ &=Cn^{d'-1}2^{n\gamma'_0-2C-1} \label{mepthree} \end{align} where (\ref{mepOne}) is from (\ref{eqqq3}), and (\ref{mepthree}) is from $\Delta=\frac{1}{n}$. Substituting $\gamma'_0$ from (\ref{gammaPrimeZeroEq}), we obtain \begin{equation*} \log{M(\epsilon)}\geq nH+\sigma\sqrt{n}Q^{-1}(\epsilon)+\left(\frac{d'}{2}-1\right)\log{n}+\mathcal{O}(1). \end{equation*} \end{proof} \section{Conclusion and Future Work} \label{sec::conclusion} We derived the fundamental limits for universal one-to-one coding of the $d$-dimensional memoryless as well as Markov exponential families of distributions. We proposed the quantized Type Size code, where type classes are associated with cuboids in the grid partitioning the space of minimal sufficient statistics. We showed that the quantized Type Size code achieves the optimal third-order term $\left(\frac{d}{2}-1\right)\log{n}$. Next, the naive point type class approach is considered, where two sequences are in the same type class if and only if they have the same probability under every distribution in the exponential family. In the point type class scenario, each point (rather than a cuboid) in the set of minimal sufficient statistics defines a type class. The third-order term of the point type class approach is shown to be exactly $(\frac{d'}{2}-1)\log{n}$, where $d'$ is the dimension of the lattice vector representation of the sufficient statistic. Since $d'$ is in general larger than $d$, our findings reveal that the model class dimension $d$ --- rather than the lattice dimension $d'$ --- is the relevant dimension for optimal performance. This is a more intuitive result, because it is much easier to understand the role of $d$ as opposed to $d'$. Moreover, $d$ is a more robust parameter than $d'$; changing the model parameters infinitesimally (e.g., from rational to irrational values) can change $d'$, but not $d$. For a more general parametric family without any information on the minimal sufficient statistics, one may partition the parameter space into cuboids and define two sequences to be in the same type class if and only if their maximum likelihood estimates belong to the same cuboid. One interesting future direction of this work is to analyze the performance of such an approach. As this work does not consider the computational complexity of implementing the compression algorithms, an alternative future direction is to consider the blocklength-storage-complexity tradeoff. Finally, the lossy version of this problem is also an interesting possible future direction. 
\begin{appendices} \section{Proof of Lemma \ref{maxLikeBerr}: Asymptotic Normality of Information}\label{app::maxLikeBerr} Define \begin{equation} \label{eeEqau} e(\boldsymbol{\tau}) = -\max_{\theta} \Big(\langle\theta,\boldsymbol{\tau}\rangle-\psi(\theta)+\langle\theta,\nabla\psi(\theta^*)\rangle\Big). \end{equation} Furthermore, denote $\boldsymbol{U}_i(x^n)=\boldsymbol{\tau}(x_i)-\boldsymbol{\mu}$ for $i=1,\cdots ,n$, where $\boldsymbol{\mu}=\mathbb{E}_{\theta^*}\left[\boldsymbol{\tau}(X)\right]$. Therefore, the $\boldsymbol{U}_i(X^n)$'s are zero-mean with finite covariance. First, observe that \begin{align} -\frac{1}{n}\log{p_{\hat{\theta}(x^n)}(x^n)}&= -\max_{\theta}\Big(\langle \theta,\boldsymbol{\tau}(x^n)-\boldsymbol{\mu} \rangle -\psi(\theta) +\langle \theta,\boldsymbol{\mu}\rangle\Big) \label{diffSum}\\ &=e\left(\frac{1}{n}\sum_{i=1}^{n}{\boldsymbol{U}_i(x^n)}\right) \label{sumUI} \end{align} where (\ref{diffSum}) is from (\ref{thetaHatEquation}), and since $\boldsymbol{\mu}=\nabla\psi(\theta^*)$ \cite{jordan}, (\ref{sumUI}) follows from (\ref{eeEqau}) and (\ref{tauXnDef}). We then show that $e(\mathbf{0})=H$. Setting the derivative with respect to $\theta$ of the expression inside the parentheses to zero, we find that $\theta^*$ is the maximizing parameter in (\ref{eeEqau}). Therefore \begin{align} e(\mathbf{0}) &= -\Big(-\psi(\theta^*)+\langle \theta^*,\nabla\psi(\theta^*) \rangle\Big) \label{thetaStarIsMax}\\ &= -\Big(-\psi(\theta^*)+\langle \theta^*,\mathbb{E}_{\theta^*}\left(\boldsymbol{\tau}(X)\right) \rangle \Big)\label{ExoProp}\\ &=-\mathbb{E}_{\theta^*}\Big(-\psi(\theta^*)+\langle \theta^*,\left(\boldsymbol{\tau}(X)\right)\rangle\Big) \nonumber\\ &=-\mathbb{E}_{\theta^*}\Big(\log{p_{\theta^*}(X)}\Big) \label{useFirst} \\ &=H\nonumber \end{align} where (\ref{thetaStarIsMax}) is from (\ref{eeEqau}), (\ref{ExoProp}) is an exponential family property \cite{jordan}, and (\ref{useFirst}) is from (\ref{pThetaEq}). Application of Proposition 1 in \cite{nemat} completes the proof. \section{Proof of Lemma \ref{apprxLemm}: Maximum Likelihood Approximation} \label{app::apprxLemm} We show that $\log p_{\hat{\theta}(x^n)}(x^n)$ is within a constant of $\log p_{\hat{\theta}_c(x^n)}(x^n)$. Recall that \begin{equation*} \log{p_{\hat{\theta}(x^n)}(x^n)}=n\max_{\theta}\Big[\langle\theta,\boldsymbol{\tau}(x^n)\rangle-\psi(\theta)\Big]. \end{equation*} For ease of notation, when it is clear from the context, we denote $\boldsymbol{\tau}_c(x^n)$ as $\boldsymbol{\tau}_c$, and similarly we remove the argument in $\hat{\theta}_c(x^n)$ and simply denote it as $\hat{\theta}_c$. Since $\boldsymbol{\tau}(x^n)$ is in a cuboid of side length $\frac{s}{n}$ with center $\boldsymbol{\tau}_c$, we have $\|\boldsymbol{\tau}(x^n)-\boldsymbol{\tau}_c\|\leq \frac{s\sqrt{d}}{2n}$. We hence have \begin{align} \left|\left\langle\hat{\theta}_c,\boldsymbol{\tau}(x^n)\right\rangle - \left\langle\hat{\theta}_c,\boldsymbol{\tau}_c\right\rangle\right|&=\left|\left\langle \hat{\theta}_c, \boldsymbol{\tau}(x^n)-\boldsymbol{\tau}_c\right\rangle\right| \nonumber \\ &\leq \|\hat{\theta}_c\| \|\boldsymbol{\tau}(x^n)-\boldsymbol{\tau}_c\| \nonumber \\ &\leq \wp\frac{s\sqrt{d}}{2n} = \frac{\kappa s}{n} \label{tauProdVSTauCProd} \end{align} where (\ref{tauProdVSTauCProd}) exploits the fact that $\|\theta\|\leq\wp$, for all $\theta\in\Theta$, including $\hat{\theta}_c$. 
Therefore \begin{align} \log{p_{\hat{\theta}_c(x^n)}(x^n)}&=n\left[\left\langle\hat{\theta}_c,\boldsymbol{\tau}(x^n)\right\rangle-\psi\left(\hat{\theta}_c\right)\right] \nonumber \\ &\geq n\left[\left\langle\hat{\theta}_c,\boldsymbol{\tau}_c(x^n)\right\rangle-\frac{\kappa s}{n}-\psi\left(\hat{\theta}_c\right)\right] \label{aaaEq}\\ &= n\max_{\theta}\Big[\left\langle\theta,\boldsymbol{\tau}_c(x^n)\right\rangle-\frac{\kappa s}{n}-\psi(\theta)\Big] \label{ThetaCResCor} \end{align} where (\ref{aaaEq}) follows from (\ref{tauProdVSTauCProd}) and (\ref{ThetaCResCor}) is from the definition of $\hat{\theta}_c$. Using the fact that for any two functions $g_1(\theta),g_2(\theta)$ \begin{equation} \max_{\theta}g_1(\theta)-\max_{\theta}g_2(\theta)\leq \max_{\theta}\Big((g_1-g_2)(\theta)\Big) \label{maxDiff} \end{equation} we obtain \begin{align} \log{p_{\hat{\theta}(x^n)}(x^n)}-\log{p_{\hat{\theta}_c}(x^n)} &\leq n\max_{\theta}\Big[\left\langle\theta,\boldsymbol{\tau}(x^n)\right\rangle-\psi(\theta)\Big] -n\max_{\theta}\left[\left\langle\theta,\boldsymbol{\tau}_c(x^n)\right\rangle-\frac{\kappa s}{n}-\psi(\theta)\right] \label{udePriorEq}\\ &\leq n\max_{\theta}\left[\left\langle\theta,\boldsymbol{\tau}(x^n)-\boldsymbol{\tau}_c\right\rangle+\frac{\kappa s}{n}\right] \label{fstLngEq} \end{align} where (\ref{udePriorEq}) exploits (\ref{ThetaCResCor}), and (\ref{fstLngEq}) is from (\ref{maxDiff}). Similarly to (\ref{tauProdVSTauCProd}), one can show that $\langle\theta,\boldsymbol{\tau}(x^n)-\boldsymbol{\tau}_c\rangle\leq \frac{\kappa s}{n}$ for all $\theta\in\Theta$. The lemma then follows. \section{Proof of Lemma \ref{fIsLipschitz}: Lipschitzness of $f(\cdot)$} \label{app:fLipschitz} Let \begin{equation} \label{lEq} l(\boldsymbol{\tau})=\max_\theta \left(\left\langle\theta,\boldsymbol{\tau}\right\rangle-\psi(\theta)\right). \end{equation} Noticing that $\|\nabla f(\boldsymbol{\tau})\|=\|\nabla l(\boldsymbol{\tau})\|$, in order to show the Lipschitzness of $f(\boldsymbol{\tau})$ in (\ref{fEqUpp2}), it suffices to show that $l(\boldsymbol{\tau})$ is a Lipschitz function of $\boldsymbol{\tau}$. We first show that $\|\nabla l(\boldsymbol{\tau})\|=\|\hat{\theta}(\boldsymbol{\tau})\|$. Due to (\ref{thetaHatEquation}), \begin{equation*} l(\boldsymbol{\tau})=\langle\hat{\theta}(\boldsymbol{\tau}),\boldsymbol{\tau}\rangle - \psi\left(\hat{\theta}(\boldsymbol{\tau})\right). \end{equation*} Hence, taking the gradient with respect to $\boldsymbol{\tau}$, \begin{align} \nabla l(\boldsymbol{\tau}) &=\left(\left(\nabla\hat{\theta}(\boldsymbol{\tau})\right) \boldsymbol{\tau}+\hat{\theta}(\boldsymbol{\tau})\right)-\nabla\hat{\theta}(\boldsymbol{\tau})\nabla_{\hat{\theta}}\psi\left(\hat{\theta}(\boldsymbol{\tau})\right) \nonumber \\ &= \left(\left(\nabla\hat{\theta}(\boldsymbol{\tau})\right) \boldsymbol{\tau}+\hat{\theta}(\boldsymbol{\tau})\right)-\nabla\hat{\theta}(\boldsymbol{\tau})\mathbb{E}_{\hat{\theta}(\boldsymbol{\tau})}(\boldsymbol{\tau}(X)) \label{ssstackrela} \\ &=\hat{\theta}(\boldsymbol{\tau}) \label{ssstackRellb} \end{align} where $(\ref{ssstackrela})$ follows from $\nabla_{\hat{\theta}}\psi\left(\hat{\theta}(\boldsymbol{\tau})\right)=\mathbb{E}_{\hat{\theta}(\boldsymbol{\tau})}\left(\boldsymbol{\tau}(X)\right)$ \cite{jordan}, and $(\ref{ssstackRellb})$ follows from $\mathbb{E}_{\hat{\theta}(\boldsymbol{\tau})}(\boldsymbol{\tau}(X))=\boldsymbol{\tau}$ (see the proof of Lemma \ref{TypeClSizeLem}). The lemma follows by recalling that $\|\hat{\theta}(\boldsymbol{\tau})\|\leq \wp$. 
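As a quick, purely illustrative sanity check of the identity $\nabla l(\boldsymbol{\tau})=\hat{\theta}(\boldsymbol{\tau})$, one may compare a finite-difference derivative of $l$ with the closed-form maximizer in a toy one-dimensional family. The sketch below assumes a binary alphabet with $\boldsymbol{\tau}(1)=0$, $\boldsymbol{\tau}(2)=1$ and the base-2 parameterization of (\ref{pThetaEq}), so that $\psi(\theta)=\log_2(1+2^{\theta})$ and $\hat{\theta}(\tau)=\log_2\frac{\tau}{1-\tau}$; the numerical bounds $(-20,20)$ merely stand in for the compact parameter set.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

psi = lambda t: np.log2(1.0 + 2.0 ** t)          # normalizer of the toy family

def l(tau):
    # l(tau) = max_theta <theta, tau> - psi(theta), computed numerically
    res = minimize_scalar(lambda t: -(t * tau - psi(t)),
                          bounds=(-20, 20), method="bounded")
    return -res.fun

theta_hat = lambda tau: np.log2(tau / (1.0 - tau))   # closed-form maximizer

tau0, h = 0.3, 1e-5
grad_numeric = (l(tau0 + h) - l(tau0 - h)) / (2 * h)
print(grad_numeric, theta_hat(tau0))             # the two values nearly coincide
\end{verbatim}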
\section{Proof of Lemma \ref{rhoIsLipschitz}: Lipschitzness of $\rho(\cdot)$} \label{app:hLipschitz} Let $\mathcal{K}=\{\boldsymbol{\tau}\in\mathcal{T}:f(\boldsymbol{\tau})\leq \lambda\}$ and $\mathcal{K}^c=\mathcal{T}\backslash\mathcal{K}$. We first show that $\mathcal{K}^c$ is a convex body. Since $f(\cdot)$ and $-l(\cdot)$, with $l(\cdot)$ defined in (\ref{lEq}), differ by a term that does not depend on $\boldsymbol{\tau}$, the complement $\mathcal{K}^c$ is a sub-level set of $l(\cdot)$. Therefore, it is enough to show that the sub-level sets of $l(\cdot)$ are convex. The maximum of linear functions of $\boldsymbol{\tau}$ is a convex function; therefore $l(\cdot)$ defined in (\ref{lEq}) is a convex function of $\boldsymbol{\tau}$. Since the sub-level sets of a convex function are convex, $\mathcal{K}^c$ is a convex body. In order to show that $\rho(\lambda)$ $\left(=\text{Vol}\left(\mathcal{K}\right)\right)$ is Lipschitz, we provide an upper bound for the absolute value of its derivative $|\frac{d}{d\lambda}\rho(\lambda)|$. Let us denote the surface area of a convex body $\mathcal{K}^c$ as \cite[Section 3.3]{hug} \begin{equation} \label{surfaceDef} S(\mathcal{K}^c)=\lim_{\epsilon\rightarrow0}\frac{V^{(d)}\Big(\mathcal{K}^c+B(\epsilon)\Big)-V^{(d)}(\mathcal{K}^c)}{\epsilon} \end{equation} where $V^{(d)}(\cdot)$ is the $d$-dimensional volume, $B(\epsilon)$ is the $d$-dimensional ball of radius $\epsilon$, and the addition in $\mathcal{K}^c+B(\epsilon)$ is the Minkowski sum \cite{hug}. Let us denote $\mathcal{K}^c_{\epsilon}=\{\boldsymbol{\tau}\in\mathcal{T}:f(\boldsymbol{\tau})> \lambda-\epsilon\}$. We have \begin{align} \frac{d}{d\lambda}\rho(\lambda) &= \lim_{\epsilon\rightarrow0}\frac{\rho(\lambda)-\rho(\lambda-\epsilon)}{\epsilon} \nonumber \\ &=\lim_{\epsilon\rightarrow0}\frac{\left(\text{Vol}(\mathcal{T})-\rho(\lambda-\epsilon)\right)-\left(\text{Vol}(\mathcal{T})-\rho(\lambda)\right)}{\epsilon} \nonumber\\ &= \lim_{\epsilon\rightarrow 0} \frac{\text{Vol}(\mathcal{K}^c_{\epsilon})-\text{Vol}(\mathcal{K}^c)}{\epsilon}. \label{hder} \end{align} Let us assume $\epsilon\rightarrow 0^+$; the case where $\epsilon\rightarrow 0^-$ is handled similarly. Let $\boldsymbol{\tau}_1\in\mathcal{K}^c_{\epsilon}$. From the Taylor series expansion of $f(\boldsymbol{\tau}_2)$ in the vicinity of $\boldsymbol{\tau}_1$ with distance at most $\|\boldsymbol{\tau}_2-\boldsymbol{\tau}_1\|\leq \sqrt{\epsilon}$, we obtain \begin{equation} f(\boldsymbol{\tau}_2)=f(\boldsymbol{\tau}_1)+ \langle \nabla f(\boldsymbol{\tau}_1),\boldsymbol{\tau}_2-\boldsymbol{\tau}_1\rangle+\Delta \end{equation} where $|\Delta|\leq C_f\|\boldsymbol{\tau}_1-\boldsymbol{\tau}_2\|^2$, for a constant $C_f$ independent of $n$. Let \begin{equation} \boldsymbol{\tau}_2=\boldsymbol{\tau}_1+\epsilon\frac{(1+C_f) \nabla f(\boldsymbol{\tau}_1)}{\|\nabla f(\boldsymbol{\tau}_1)\|^2}. \end{equation} With this choice of $\boldsymbol{\tau}_2$, we obtain \begin{align} f(\boldsymbol{\tau}_2)&=f(\boldsymbol{\tau}_1)+\epsilon (1+C_f)+\Delta \nonumber \\ &\geq f(\boldsymbol{\tau}_1)+\epsilon +\epsilon C_f- C_f\|\boldsymbol{\tau}_1-\boldsymbol{\tau}_2\|^2 \nonumber \\ &\geq f(\boldsymbol{\tau}_1)+\epsilon \label{epsilonDist} \\ &> \lambda-\epsilon+\epsilon \label{containRel} \\ &=\lambda \nonumber \end{align} where (\ref{epsilonDist}) follows from $\|\boldsymbol{\tau}_2-\boldsymbol{\tau}_1\|\leq \sqrt{\epsilon}$, and (\ref{containRel}) is a consequence of $\boldsymbol{\tau}_1\in\mathcal{K}^c_{\epsilon}$. Hence $\boldsymbol{\tau}_2\in\mathcal{K}^c$. 
Since $\boldsymbol{\tau}_1\in\mathcal{K}^c_{\epsilon}$ was arbitrary, we have $\mathcal{K}^c_{\epsilon}\subset \mathcal{K}^c+B\left(\frac{\epsilon (1+C_f)}{\|\nabla f(\boldsymbol{\tau}_1)\|}\right)$. Therefore, one can upper bound (\ref{hder}) in terms of the surface area (\ref{surfaceDef}) as follows: \begin{equation} \label{hPrimeBound1} \left|\frac{d}{d\lambda}\rho(\lambda)\right|\leq \frac{(1+C_f) S(\mathcal{K}^c)}{\|\nabla f(\boldsymbol{\tau}_1)\|} \:\:\:\mbox{ for all $\boldsymbol{\tau}_1\in\mathcal{K}_{\epsilon}$}. \end{equation} Since $\mathcal{K}^c,\mathcal{T}$ are convex bodies and $\mathcal{K}^c\subset \mathcal{T}$, consequently $S(\mathcal{K}^c)\leq S(\mathcal{T})$ \cite[Theorem 3.2.2]{hug}. Since $\mathcal{X}$ is finite, therefore $\mathcal{T}$ is a bounded set, which yields $S(\mathcal{K}^c)\leq S(\mathcal{T})<\infty$. From the proof of Lemma \ref{fIsLipschitz} in Appendix \ref{app:fLipschitz}, we have $\|\nabla f(\boldsymbol{\tau}_1)\|=\|\hat{\theta}(\boldsymbol{\tau}_1)\|$. That translates (\ref{hPrimeBound1}) into \begin{equation} \label{hPrimeBound} \left|\frac{d}{d\lambda}\rho(\lambda)\right|\leq \frac{(1+C_f) S(\mathcal{K}^c)}{\|\hat{\theta}(\boldsymbol{\tau}_1)\|} \:\:\:\mbox{ for all $\boldsymbol{\tau}_1\in\mathcal{K}_{\epsilon}$}. \end{equation} We finally show that $\|\hat{\theta}(\boldsymbol{\tau}_1)\|$ is bounded away from zero. Let $\boldsymbol{\tau}_u$ be such that $\hat{\theta}(\boldsymbol{\tau}_u)=(0,\cdots,0)$ (subscript $u$ stands for the uniform distribution.). Since $\omega=\frac{\log{|\mathcal{X}|}-H}{5}>0$ and $f(\boldsymbol{\tau}) = -\frac{1}{n}\log{p_{\hat{\theta}(\boldsymbol{\tau})}(x^n)}-\Theta\left(\frac{\log{n}}{n}\right)$, we have that \begin{equation} \label{fTauUEq} f(\boldsymbol{\tau}_u)\geq \log{|\mathcal{X}|}-\omega, \mbox{ for large enough $n$}. \end{equation} From boundedness of $\mathcal{T}$, we have \begin{equation*} T_{\max}:=\max\left\{\|\boldsymbol{\tau}\|:\boldsymbol{\tau}\in \mathcal{T}\right\}<\infty. \end{equation*} Therefore $\left\|\nabla\psi\left(\hat{\theta}({\boldsymbol{\tau}})\right)\right\| = \left\|\mathbb{E}_{\hat{\theta}}(\boldsymbol{\tau}(X))\right\|\leq T_{\max}$ is bounded. Hence $\psi\left(\hat{\theta}({\boldsymbol{\tau}})\right)$ is a Lipschitz function of $\hat{\theta}({\boldsymbol{\tau}})$ with Lipschitz constant $T_{\max}$. Hence if $\left\|\hat{\theta}({\boldsymbol{\tau}}) - \hat{\theta}({\boldsymbol{\tau}_u})\right\|\leq \frac{\omega}{T_{\max}}$, then $\left|\psi(\hat{\theta}({\boldsymbol{\tau}}))-\psi(\hat{\theta}({\boldsymbol{\tau}_u}))\right|\leq \omega$ and furthermore by the Cauchy-Schwarz inequality $\left|\left\langle\hat{\theta}(\boldsymbol{\tau})-\hat{\theta}(\boldsymbol{\tau}_u),\boldsymbol{\tau}\right\rangle\right|\leq \omega$. Therefore, if $\left\|\hat{\theta}({\boldsymbol{\tau}}) - \hat{\theta}({\boldsymbol{\tau}_u})\right\|\leq \frac{\omega}{T_{\max}}$, then \begin{align} \left|f(\boldsymbol{\tau})-f(\boldsymbol{\tau}_u)\right| &\leq \left|\left\langle \hat{\theta}({\boldsymbol{\tau}}),\boldsymbol{\tau}\right\rangle - \left\langle \hat{\theta}(\boldsymbol{\tau}_u),\boldsymbol{\tau}_u\right\rangle\right| +\left|\psi\left(\hat{\theta}(\boldsymbol{\tau})\right)-\psi\left(\hat{\theta}(\boldsymbol{\tau}_u)\right)\right|\nonumber \\ &\leq 2\omega \label{intermediateContEq} \end{align} where (\ref{intermediateContEq}) follows from $\hat{\theta}(\boldsymbol{\tau}_u)=(0,\cdots,0)$, $|\langle\hat{\theta}(\boldsymbol{\tau})-\hat{\theta}(\boldsymbol{\tau}_u),\boldsymbol{\tau}\rangle|\leq \omega$. 
Finally, for large enough $n$ and for all $\boldsymbol{\tau}_1\in\mathcal{K}_{\epsilon}$, it holds that \begin{align} f(\boldsymbol{\tau}_1)&\leq \lambda+\epsilon \nonumber \\ &< (H+\omega)+\omega \label{stackFirst} \\ &= \log{|\mathcal{X}|}-3\omega \label{lambdaLessthanEq} \end{align} where (\ref{stackFirst}) follows from $\lambda<H+\omega$ together with the fact that $\epsilon<\omega$ as $\epsilon\rightarrow 0$, and (\ref{lambdaLessthanEq}) is from the definition of $\omega$. From (\ref{fTauUEq}) and (\ref{lambdaLessthanEq}), we have $|f(\boldsymbol{\tau}_1)-f(\boldsymbol{\tau}_u)|>2\omega$ for all $\boldsymbol{\tau}_1\in\mathcal{K}_{\epsilon}$. Hence by (\ref{intermediateContEq}), we must certainly have $\left\|\hat{\theta}({\boldsymbol{\tau}}_1) - \hat{\theta}\left({\boldsymbol{\tau}_u}\right)\right\|>\frac{\omega}{T_{\max}}$. On the other hand, $\hat{\theta}({\boldsymbol{\tau}_u})=(0,\cdots,0)$, which entails that $\left\|\hat{\theta}({\boldsymbol{\tau}_1})\right\|> \frac{\omega}{T_{\max}}$. This yields a positive lower bound, independent of $n$, for the denominator in (\ref{hPrimeBound}). \section{Proof of Lemma \ref{PointTypeClasLemma}: Point Type Class Size} \label{app::pointProof} The one-to-one mapping between $\boldsymbol{\tau}(x)$ and $\boldsymbol{L}(x)$ subsequently defines a one-to-one mapping between $\boldsymbol{\tau}(x^n)$ and $\boldsymbol{L}(x^n)$, which consequently defines a one-to-one correspondence between the point type class $T_{x^n}$ and $\boldsymbol{L}(x^n)$. Therefore, for any parameter value $\theta\in\Theta$, it holds that \cite{merhav} \begin{equation} \label{typeClassSizeApprch} |T_{x^n}|=\frac{\mathbb{P}_{\theta}\{\boldsymbol{L}({X^n})=\boldsymbol{L}({x^n})\}}{p_{\theta}(x^n)}. \end{equation} Since $\boldsymbol{L}(x^n)$ can be written as a sum of integer (lattice) random vectors $\boldsymbol{L}(x_i)$ (Eq. (\ref{LSumEq})), exploiting the local limit theorem of \cite{borovkov} to bound the numerator in (\ref{typeClassSizeApprch}) yields \cite{merhav} \begin{equation} \log{|T_{x^n}|}=-\log{p_{\hat{\theta}(x^n)}(x^n)}-\frac{d'}{2}\log{2\pi n} -\frac{1}{2}\log{\det M\left[\hat{\theta}(x^n)\right]}+o(1)\label{weinbergerEq} \end{equation} where $\hat{\theta}(x^n)$ is the maximum likelihood estimate of $\theta$ for $x^n$ and $M[\theta]$ denotes the covariance matrix of the random vector $\boldsymbol{L}(X)$ where $X$ is drawn from $p_\theta$. We show that the absolute value of the third term in (\ref{weinbergerEq}), $\left|\frac{1}{2}\log{\det M\left[\hat{\theta}(x^n)\right]}\right|$, is upper bounded by a constant $C_M>0$ independent of $n$. A constant upper bound $C_{u}>0$ for $\det M\left[\hat{\theta}(x^n)\right]$ follows from Hadamard's inequality \cite[Corollary 7.8.3]{horn}. For the lower bound, since $\det M[\theta]$ is a continuous function of $\theta$ over a compact domain $\Theta$, it attains a minimum at a point in the parameter space, say $\ddot{\theta}\in \Theta$. Let $\ddot{\boldsymbol{P}}$ be a diagonal $(|\mathcal{X}|-1)\times (|\mathcal{X}|-1)$ matrix with diagonal entries $\ddot{\boldsymbol{P}}_{ii}=\mathbb{P}_{\ddot{\theta}}\left(X=i+1\right)$ for $i=1,\ldots,|\mathcal{X}|-1$, and let $\ddot{\boldsymbol{p}}$ be a column vector with $\ddot{\boldsymbol{p}}_i=\mathbb{P}_{\ddot{\theta}}\left(X=i+1\right)$ for $i=1,\ldots,|\mathcal{X}|-1$. 
We have \begin{align} M[\ddot{\theta}]&= \mathbb{E}_{\ddot{\theta}}\left(\left[\boldsymbol{L}(X)\right]\left[\boldsymbol{L}(X)\right]^T\right) - \mathbb{E}_{\ddot{\theta}}\left(\left[\boldsymbol{L}(X)\right]\right)\left(\mathbb{E}_{\ddot{\theta}}\left(\left[\boldsymbol{L}(X)\right]\right)\right)^T \nonumber \\ &=\sum_{x\neq1}p_{\ddot{\theta}}(x)\boldsymbol{L}(x)\boldsymbol{L}(x)^T -\left(\sum_{x\neq1}p_{\ddot{\theta}}(x)\boldsymbol{L}(x)\right)\left(\sum_{x\neq1}p_{\ddot{\theta}}(x)\boldsymbol{L}(x)\right)^T \label{ZeroLone} \\ &= \mathbb{L}\ddot{\boldsymbol{P}}\mathbb{L}^T- \left(\mathbb{L}\ddot{\boldsymbol{p}}\right)\left(\mathbb{L}\ddot{\boldsymbol{p}}\right)^T \nonumber \\ &=\mathbb{L}(\ddot{\boldsymbol{P}}-\ddot{\boldsymbol{p}}\ddot{\boldsymbol{p}}^T)\mathbb{L}^T \label{MDecompL} \end{align} where (\ref{ZeroLone}) follows recalling that $\boldsymbol{L}(1)=\boldsymbol{0}$. We then show that $\left(\ddot{\boldsymbol{P}}-\ddot{\boldsymbol{p}}\ddot{\boldsymbol{p}}^T\right)$ is non-singular. Observe that \begin{align} \text{det}(\ddot{\boldsymbol{P}}-\ddot{\boldsymbol{p}}\ddot{\boldsymbol{p}}^T) &= (1-\ddot{\boldsymbol{p}}^T\ddot{\boldsymbol{P}}^{-1}\ddot{\boldsymbol{p}}) \text{det} \ddot{\boldsymbol{P}} \label{matDetLem} \\ &=\Big(1-(p_{\ddot{\theta}}(2)+\cdots+p_{\ddot{\theta}}(|\mathcal{X}|))\Big)\text{det} \ddot{\boldsymbol{P}} \nonumber \\ &=p_{\ddot{\theta}}(1) \text{det} \ddot{\boldsymbol{P}} \nonumber \\ &=p_{\ddot{\theta}}(1) p_{\ddot{\theta}}(2)\cdots p_{\ddot{\theta}}(|\mathcal{X}|) \nonumber \\ &\geq p_{\text{min}} ^{|\mathcal{X}|} \label{pminCmp} \end{align} where (\ref{matDetLem}) is from the matrix determinant lemma \cite{harvill}, while the existence of a constant $p_{\text{min}}$ in (\ref{pminCmp}) such that $p_{\ddot{\theta}}(x)\geq p_{\text{min}}\:\:\forall x\in\mathcal{X}$ follows from the compactness of $\Theta$ and the structure of the exponential family (\ref{pThetaEq}). Since $\mathbb{L}$ is full rank and the rank of a matrix is invariant under multiplication by a non-singular matrix, (\ref{MDecompL}) implies $\det M\left[\ddot{\theta}\right]>0$. Positivity of $\det M\left[\ddot{\theta}\right]$ in turn provides a positive constant lower bound $C_l$ for $\det M\left[\hat{\theta}(x^n)\right]$. Let $C_M=\frac{1}{2}\max\{|\log{C_{l}}|,|\log{C_{u}}|\}$ and let $C=C_M+1$ be the constant in the lemma. Finally, the lemma follows by noticing that \begin{equation*} \log{p_{\hat{\theta}(x^n)}(x^n)}= n\left[\left\langle \hat{\theta}(\boldsymbol{\tau}(\boldsymbol{L})),\boldsymbol{\tau}(\boldsymbol{L}) \right\rangle - \psi\left(\hat{\theta}(\boldsymbol{\tau}(\boldsymbol{L}))\right)\right] \end{equation*} for any $x^n$ with $\boldsymbol{L}(x^n)=\boldsymbol{L}$. \section{Proof of Lemma \ref{disConLem}: Ratio of the Volumes} \label{app::disConLem} Similar to Appendix \ref{app:fLipschitz}, one can show that $f_0(\ell)$ is a Lipschitz function of $\ell$. Therefore, for a Lipschitz constant $K_5>0$, we have $|f_0(\ell)-f_0(\boldsymbol{L})|\leq K_5\|\ell-\boldsymbol{L}\|$. Let $R:=\sum\limits_{i=1}^{|\mathcal{X}|}\|\boldsymbol{L}(i)\|$. 
We first show that \begin{align} \mathcal{A}_{R}&:=\left\{\ell\in\mathfrak{L}: \gamma'_0-\Delta+\frac{K_5R}{n}<f_0(\ell)\leq\gamma'_0-\frac{K_5R}{n}\right\} \subseteq \bigcup\limits_{\boldsymbol{L}\in\mathcal{A}_0}{B_{\frac{R}{n}}(\boldsymbol{L})}.\label{aaaDef} \end{align} For an arbitrary $\ell\in\mathcal{A}_R$, since $\mathcal{A}_R$ is a subset of the convex hull of $\mathcal{L}$, one can find real non-negative numbers $a_{i}, i=1,...,|\mathcal{X}|$ such that \begin{equation} \sum_{i=1}^{|\mathcal{X}|}{a_i}=1 \end{equation} and \begin{equation} \ell=\sum_{i=1}^{|\mathcal{X}|}{a_i\boldsymbol{L}(i)} \label{ellFDefintiino} . \end{equation} For an index $j$, let $n_i=\lfloor na_i\rfloor$ for $i=1,...,j$ and $n_i=\lceil na_i\rceil$ for $i=j+1,...,|\mathcal{X}|$. We claim that one can choose the index $0\leq j \leq |\mathcal{X}|$ ($j=0$ corresponds to $n_i=\lceil na_i\rceil$ for all $i$) such that $\sum_{i=1}^{|\mathcal{X}|}n_i=n$. Observe that for $j=0$, we have $\sum_{i=1}^{|\mathcal{X}|}n_i\geq n$, while for $j=|\mathcal{X}|$, $\sum_{i=1}^{|\mathcal{X}|}n_i\leq n$. Incrementing $j$ by one decreases the integer $\sum_{i=1}^{|\mathcal{X}|}n_i$ by at most one. The claim then follows. It is clear that the $n_i$'s satisfy the following condition as well \begin{equation} \label{oneLeftLast} |n_i-na_i|< 1, \:\:\forall i=1,...,|\mathcal{X}|. \end{equation} Let $x^n\in\mathcal{X}^n$ be any sequence with empirical probability mass function $\left\{\frac{n_i}{n}\right\}$. Observe that \begin{equation*} \boldsymbol{L}(x^n)=\frac{1}{n}\sum_{i=1}^{|\mathcal{X}|}n_i\boldsymbol{L}(i)\in\mathcal{L}. \end{equation*} Therefore one obtains \begin{align} \|\ell-\boldsymbol{L}(x^n)\| &\leq \frac{1}{n}\sum_{i=1}^{|\mathcal{X}|}\left|n_i-na_i\right|\cdot \left\|\boldsymbol{L}(i)\right\| \label{cauchyEq} \\ &< \frac{1}{n}\sum_{i=1}^{|\mathcal{X}|}\left\|\boldsymbol{L}(i)\right\| \label{consEq} \\ &= \frac{R}{n} \label{yeksad} \end{align} where (\ref{cauchyEq}) follows from (\ref{ellFDefintiino}) and the triangle inequality, and (\ref{consEq}) follows from (\ref{oneLeftLast}). Therefore $\ell\in B_{\frac{R}{n}}(\boldsymbol{L}(x^n))$. We then show that $\boldsymbol{L}(x^n)\in\mathcal{A}_0$. From (\ref{yeksad}) and the Lipschitzness of $f_0(\cdot)$ we have \begin{equation} \label{togewtherEq} f_0(\ell)-\frac{K_5R}{n}\leq f_0\left(\boldsymbol{L}(x^n)\right)\leq f_0(\ell)+\frac{K_5R}{n}. \end{equation} From (\ref{togewtherEq},\ref{aaaDef}) and since $\ell\in\mathcal{A}_R$, we obtain \begin{equation} \gamma'_0-\Delta< f_0\left(\boldsymbol{L}(x^n)\right)\leq \gamma'_0 \end{equation} which confirms $\boldsymbol{L}(x^n)\in\mathcal{A}_0$. Since for an arbitrary $\ell\in\mathcal{A}_R$, we are able to find $\boldsymbol{L}(x^n)\in\mathcal{A}_0$ within a distance of $\frac{R}{n}$, (\ref{aaaDef}) follows. 
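
The rounding step in the preceding argument (choosing the index $j$ so that the floors and ceilings sum exactly to $n$ while satisfying (\ref{oneLeftLast})) can be made concrete with a few lines of code. The weights $a_i$ and the sample size used below are arbitrary illustrative inputs, not quantities from the analysis.
\begin{verbatim}
import math

def round_to_counts(a, n):
    """Integers n_i with sum(n_i) = n and |n_i - n*a_i| < 1 (a sums to one)."""
    m = len(a)
    for j in range(m + 1):                     # j = 0: ceiling everything
        counts = [math.floor(n * a[i]) if i < j else math.ceil(n * a[i])
                  for i in range(m)]
        if sum(counts) == n:                   # the sum drops by at most one per step of j
            return counts

a = [0.21, 0.34, 0.17, 0.28]                   # illustrative weights
n = 57
counts = round_to_counts(a, n)
print(counts, sum(counts), max(abs(c - n * ai) for c, ai in zip(counts, a)))
\end{verbatim}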
We continue by observing the following \begin{align} \text{Vol}\left(\bigcup_{\ell\in\mathcal{A}_R}B_{\frac{1}{2n}}(\ell)\right)&\leq \text{Vol} \left(\bigcup_{\boldsymbol{L}\in\mathcal{A}_0}B_{\frac{2R+1}{2n}}(\boldsymbol{L})\right) \label{ARSIcVol} \\ &\leq (2R+1)^{d'}\text{Vol} \left(\bigcup_{\boldsymbol{L}\in\mathcal{A}_0}B_{\frac{1}{2n}}(\boldsymbol{L})\right) \label{multPFac} \end{align} where (\ref{ARSIcVol}) is from (\ref{aaaDef}) and a geometric observation (the triangle inequality): if a point is within a distance $\frac{1}{2n}$ of a point in $\mathcal{A}_R$, it is certainly within a distance $\frac{R}{n}+\frac{1}{2n}$ of a point in $\mathcal{A}_0$. The bound (\ref{multPFac}) follows since scaling the radius of a ball by a constant changes its volume by a constant multiplicative factor. Given (\ref{multPFac}), to prove the lemma it is enough to show that for some constant $C>0$, \begin{equation} \frac{\text{Vol}\left(\bigcup_{\ell\in\tilde{\mathcal{A}}_0}B_{\frac{1}{2n}}(\ell)\right)}{\text{Vol}\left(\bigcup_{\ell\in\mathcal{A}_R}B_{\frac{1}{2n}}(\ell)\right)} \leq C. \label{thispneRep} \end{equation} Observe the following \begin{align} &\text{Vol}\left(\bigcup_{\ell\in\tilde{\mathcal{A}}_0}B_{\frac{1}{2n}}(\ell)\right)- \text{Vol}\left(\bigcup_{\ell\in\mathcal{A}_R}B_{\frac{1}{2n}}(\ell)\right) \label{firstEq} \\ &\leq\text{Vol}\left(\ell: f_0(\ell)\in \left(\gamma'_0-\Delta-\frac{K_5}{2n},\gamma'_0+\frac{K_5}{2n}\right]\right)\label{fnotConv}\\ &\hspace{0.25in}-\text{Vol}\left(\ell: f_0(\ell)\in \left(\gamma'_0-\Delta+\frac{K_5R}{2n},\gamma'_0-\frac{K_5R}{2n}\right]\right) \label{fNotAR} \\ &=\rho_0\left(\gamma'_0+\frac{K_5}{2n}\right)-\rho_0\left(\gamma'_0-\Delta-\frac{K_5}{2n}\right)+\rho_0\left(\gamma'_0-\Delta+\frac{K_5R}{2n}\right)-\rho_0\left(\gamma'_0-\frac{K_5R}{2n}\right) \label{rhonitDef}\\ &\leq \frac{C}{n} \label{finaleEw} \end{align} where (\ref{fnotConv}) is an upper bound for the first term in (\ref{firstEq}) obtained from the definition of $\tilde{\mathcal{A}}_0$ and the Lipschitzness of $f_0(\cdot)$, (\ref{fNotAR}) is from lower bounding the volume of the ball-covering of $\mathcal{A}_R$ (the second term in (\ref{firstEq})) by the volume of $\mathcal{A}_R$ itself, (\ref{rhonitDef}) is from the definition of $\rho_0(\cdot)$, and (\ref{finaleEw}) is from the Lipschitzness of $\rho_0(\cdot)$ and recalling the choice of $\Delta=\frac{1}{n}$. Therefore \begin{align} \frac{\text{Vol}\left(\bigcup_{\ell\in\tilde{\mathcal{A}}_0}B_{\frac{1}{2n}}(\ell)\right)}{\text{Vol}\left(\bigcup_{\ell\in\mathcal{A}_R}B_{\frac{1}{2n}}(\ell)\right)} &\leq \frac{\text{Vol}\left(\bigcup_{\ell\in\mathcal{A}_R}B_{\frac{1}{2n}}(\ell)\right)+\frac{C}{n}}{\text{Vol}\left(\bigcup_{\ell\in\mathcal{A}_R}B_{\frac{1}{2n}}(\ell)\right)} \nonumber \\ &=1+\frac{C}{n\text{Vol}\left(\bigcup_{\ell\in\mathcal{A}_R}B_{\frac{1}{2n}}(\ell)\right)} \nonumber \\ &\leq 1+\frac{C}{n\left(\rho_0\left(\gamma'_0-\frac{K_5R}{2n}\right)-\rho_0\left(\gamma'_0-\Delta+\frac{K_5R}{2n}\right)\right)} \label{bothCovIt} \\ &\leq 1+\frac{C}{K_4(K_5R+1)} \label{inAkhar} \end{align} where (\ref{bothCovIt}) is obtained by lower bounding the volume of the ball-covering of $\mathcal{A}_R$ by the volume of $\mathcal{A}_R$ itself, along with the definition of $\rho_0(\cdot)$, and (\ref{inAkhar}) is an application of Lemma \ref{rho0LowerDrive} together with the choice of $\Delta=\frac{1}{n}$. This proves (\ref{thispneRep}), and the lemma follows. 
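
As a numerical illustration of the type-class size approximation (\ref{weinbergerEq}) used in Appendix \ref{app::pointProof}, consider the full categorical family on $\mathcal{X}=\{1,\ldots,m\}$ with $\boldsymbol{L}(x)$ the one-hot encoding of the symbols $2,\ldots,m$, so that $\mathbb{L}$ is the identity and, by (\ref{MDecompL})--(\ref{pminCmp}), $\det M[\hat{\theta}(x^n)]=\prod_x \hat{p}(x)$ with $\hat{p}$ the empirical distribution. In this special case $|T_{x^n}|$ is a multinomial coefficient and $-\log p_{\hat{\theta}(x^n)}(x^n)=nH(\hat{p})$; the sketch below, with illustrative symbol counts, compares the exact count with the approximation.
\begin{verbatim}
import math
import numpy as np

counts = np.array([130, 55, 40, 25])          # illustrative counts of the m = 4 symbols
n, m = counts.sum(), len(counts)
p_hat = counts / n

log_T_exact = math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)

H_emp = -(p_hat * np.log(p_hat)).sum()        # = -(1/n) log p_{theta_hat}(x^n)
d_prime = m - 1
log_det_M = np.log(p_hat).sum()               # log det M[theta_hat] in this special case
log_T_approx = n * H_emp - 0.5 * d_prime * math.log(2 * math.pi * n) - 0.5 * log_det_M

print(log_T_exact, log_T_approx)              # close for moderately large n
\end{verbatim}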
\section{Proof of Lemma \ref{rho0LowerDrive}: Lower bound on $|\frac{d}{d\lambda}\rho_0(\lambda)|$} \label{sec::rhoZeroAppx} Denote $\mathcal{K}_0=\{\ell\in\mathfrak{L}: f_0(\ell)\leq \lambda\}$ and $\mathcal{K}_0^c=\mathfrak{L}\backslash\mathcal{K}_0$. Furthermore, let us denote $\mathcal{K}_{0,\epsilon}=\{\ell\in\mathfrak{L}:f_0(\ell)\leq \lambda+\epsilon\}$. We have \begin{align} \frac{d}{d\lambda}\rho_0(\lambda) &= \lim_{\epsilon\rightarrow 0}\frac{\rho_0(\lambda+\epsilon)-\rho_0(\lambda)}{\epsilon} \nonumber \\ &=\lim_{\epsilon\rightarrow0}\frac{\text{Vol}(\mathcal{K}_{0,\epsilon})-\text{Vol}(\mathcal{K}_0)}{\epsilon}.\label{rhonotPrime} \end{align} Let $\ell_1$ be an arbitrary point in $\mathcal{K}_0$. Let \begin{equation} \label{secondVicinityEq} \ell_2=\ell_1+\frac{\epsilon}{2}\frac{\nabla f_0(\ell_1)}{\|\nabla f_0(\ell_1)\|^2}. \end{equation} From the Taylor series expansion, we have \begin{equation} \label{ellTaylor} f_0(\ell_2)=f_0(\ell_1)+\left\langle\nabla f_0(\ell_1),\ell_2-\ell_1\right\rangle + \Delta_0 \end{equation} where $|\Delta_0|\leq C_{f_0}\|\ell_1-\ell_2\|^2$, for a constant $C_{f_0}$ which is independent of $n$. First observe from (\ref{secondVicinityEq}) that, since $\epsilon\rightarrow0$ is infinitesimal, $\ell_2$ resides in the vicinity of $\ell_1$ with distance at most \begin{equation} \label{vicinityEq} \|\ell_2-\ell_1\|< \sqrt{\frac{\epsilon}{2C_{f_0}}}. \end{equation} With the choice of $\ell_2$ in (\ref{secondVicinityEq}), we have \begin{align} f_0(\ell_2)&<f_0(\ell_1)+\frac{\epsilon}{2}+\frac{\epsilon}{2} \label{firtELL} \\ &\leq \lambda+\epsilon \label{secondEll} \end{align} where (\ref{firtELL}) follows from (\ref{secondVicinityEq},\ref{ellTaylor},\ref{vicinityEq}), and (\ref{secondEll}) is a consequence of $\ell_1$ being a point in $\mathcal{K}_0$. Therefore $\ell_2\in\mathcal{K}_{0,\epsilon}$. In conclusion, for all $\ell_1\in\mathcal{K}_0$, $\ell_1+\frac{\epsilon}{2}\frac{\nabla f_0(\ell_1)}{\|\nabla f_0(\ell_1)\|^2}\in \mathcal{K}_{0,\epsilon}$. That translates into the following subset relationship \begin{equation} \label{seubsetRelEq} \mathcal{K}_0+B\left(\frac{\epsilon}{2\|\nabla f_0(\ell_1)\|}\right)\subset \mathcal{K}_{0,\epsilon}. \end{equation} Continuing from (\ref{rhonotPrime}) we have \begin{align} \left|\frac{d}{d\lambda}\rho_0(\lambda)\right|&\geq \lim_{\epsilon\rightarrow 0}\frac{\text{Vol}\left(\mathcal{K}_0+B\left(\frac{\epsilon}{2\|\nabla f_0(\ell_1)\|}\right)\right)-\text{Vol}(\mathcal{K}_0)}{\epsilon} \label{yek} \\ &\geq \frac{S(\mathcal{K}_0)}{2\|\nabla f_0(\ell_1)\|} \label{do}\\ &= \frac{S(\mathcal{K}_0)}{2\|\hat{\theta}(\ell_1)\|} \label{se} \\ &\geq \frac{S(\mathcal{K}_0)}{2\wp} \label{char} \end{align} where (\ref{yek}) is a consequence of (\ref{seubsetRelEq}), (\ref{do}) is due to the definition of the surface area in (\ref{surfaceDef}), (\ref{se}) is derived similarly to (\ref{ssstackRellb}), and finally (\ref{char}) is from the fact that for all $\theta\in\Theta$, we have $\|\theta\|\leq \wp$. It remains to provide a positive constant lower bound for $S(\mathcal{K}_0)$ independent of $n$. We first show that in the range $\gamma'_0-\Delta\leq\lambda\leq\gamma'_0$, there exists a positive constant lower bound for $\text{Vol}(\mathcal{K}_0)$. 
Since \begin{align} \Upsilon(\ell)&:= -\left(\left\langle\hat{\theta}(\boldsymbol{\tau}(\ell)),\boldsymbol{\tau}(\ell)\right\rangle-\psi\left(\hat{\theta}\left(\boldsymbol{\tau}(\ell)\right)\right)\right) \label{fNotContEqI} \\ &=- \left(\left\langle \hat{\theta}(\boldsymbol{\mathbf{b}}+\boldsymbol{\mathbb{A}} \boldsymbol{R} \ell),\boldsymbol{\mathbf{b}}+\boldsymbol{\mathbb{A}}\boldsymbol{R}\ell \right\rangle - \psi\left(\hat{\theta}(\boldsymbol{\mathbf{b}}+\boldsymbol{\mathbb{A}}\boldsymbol{R}\ell)\right)\right) \label{fNotContEqII} \end{align} is a continuous function of $\ell$ over a compact domain $\mathfrak{L}$, it attains a minimum at a point, say $\ell^*\in \mathfrak{L}$. This minimum is certainly less than or equal to the minimum of $\Upsilon(\ell)$ over $\mathcal{L}$, which is attained at a point, say $\boldsymbol{L}^*$. For any ${\theta}\in\Theta$, we have \begin{align} \Upsilon(\ell^*) &\leq \Upsilon(\boldsymbol{L}^*) \nonumber \\ &\leq \sum_{x^n}{p_{\theta}(x^n)}\left(-\frac{1}{n}\log{p_{\hat{\theta}(x^n)}(x^n)}\right) \label{minExp}\\ &\leq \sum_{x^n}{p_{\theta}(x^n)}\left(-\frac{1}{n}\log{p_{\theta}(x^n)}\right) \label{maxTrueRe} \\ &=H(p_{\theta}) \label{iHaveto} \end{align} where (\ref{minExp}) follows since $\Upsilon(\boldsymbol{L}^*)=-\frac{1}{n}\log{p_{\hat{\theta}(y^n)}(y^n)}$ for some $y^n\in\mathcal{X}^n$ with $\boldsymbol{L}(y^n)=\boldsymbol{L}^*$, more precisely \begin{equation*} \Upsilon(\boldsymbol{L}^*)=\min_{x^n\in\mathcal{X}^n}{-\frac{1}{n}\log{p_{\hat{\theta}(x^n)}(x^n)}} \end{equation*} and the minimum value of a function is less than or equal to its weighted average with respect to any weighting; (\ref{maxTrueRe}) is from $p_{\hat{\theta}(x^n)}(x^n)\geq p_{\theta}(x^n)$. Recall that $H:=H(p_{\theta^*})$ is the entropy of the underlying model. We provide a positive lower bound, independent of $n$, for $\delta$ defined as follows: \begin{equation} \label{deltaAppros} \delta :=H-\Upsilon(\ell^*). \end{equation} We assume that the underlying model is not the lowest entropy model in the class, i.e., $H>\min_{\theta\in\Theta}H(p_{\theta})$. Since $H(p_{\theta})$ is a continuous function of $\theta$ over a compact domain, $\min_{\theta\in\Theta}H(p_{\theta})$ is achieved for a model in the class, say $\theta_{min}\in\Theta$. We then have \begin{align} \delta &\geq H-H(p_{\theta_{min}}) \label{minHandtheta} \\ &>0 \label{trueHandminH} \end{align} where (\ref{minHandtheta}) follows from (\ref{deltaAppros}) and the fact that (\ref{iHaveto}) holds for any ${\theta}\in\Theta$ including $\theta_{min}$, and (\ref{trueHandminH}) is from the assumption that $H>\min_{\theta\in\Theta}H(p_{\theta})$. Similar to Appendix \ref{app:fLipschitz}, one can show that $f_0(\ell)$ is a Lipschitz function of $\ell$ with Lipschitz constant $K_5>0$. 
For any $\ell\in\mathfrak{L}$ with $\|\ell-\ell^*\|\leq \frac{\delta}{2K_5}$, we have \begin{align} f_0(\ell)&\leq f_0(\ell^*)+K_5\cdot\frac{\delta}{2K_5} \label{f0Lipschits} \\ &= \Upsilon(\ell^*)-\frac{d'}{2n}\log{(2\pi n)} +\frac{C}{n} + \frac{\delta}{2} \label{fNotDefinition} \\ &= H-\delta -\frac{d'}{2n}\log{(2\pi n)} +\frac{C}{n} + \frac{\delta}{2} \label{ConsAbovEq} \\ &< H-\frac{\delta}{3} \label{largeEnough}\\ &<\gamma'_0 -\Delta \label{largeEnoughSecond} \\ &\leq \lambda \label{rangeOfLambda} \end{align} where (\ref{f0Lipschits}) follows from the Lipschitzness of $f_0(\cdot)$ with Lipschitz constant $K_5$, (\ref{fNotDefinition}) is from (\ref{fNotContEqI},\ref{subnserf}), (\ref{ConsAbovEq}) is from the definition of $\delta$ in (\ref{deltaAppros}), (\ref{largeEnough}) holds for large enough $n$, (\ref{largeEnoughSecond}) holds for large enough $n$ recalling the choices of $\gamma'_0$ in (\ref{gammaPrimeZeroEq}) and $\Delta=\frac{1}{n}$, and (\ref{rangeOfLambda}) is due to the range of $\lambda$. Therefore, from the definition of $\mathcal{K}_0$, we obtain the following relation \begin{equation*} \left\{\ell\in\mathfrak{L}:\|\ell-\ell^*\|\leq \frac{\delta}{2K_5}\right\}\subset \mathcal{K}_0. \end{equation*} Hence \begin{align} \text{Vol}(\mathcal{K}_0)&\geq \text{Vol}\left(\left\{\ell\in\mathfrak{L}:\|\ell-\ell^*\|\leq\frac{\delta}{2K_5}\right\}\right) \nonumber \\ &=C\left(\frac{\delta}{2K_5}\right)^{d'} \label{sphereSection} \\ &\geq C\left(\frac{H-H(p_{\theta_{min}})}{2K_5}\right)^{d'} \label{insertDFelra} \end{align} where (\ref{sphereSection}) is from the fact that the intersection of the ball $\|\ell-\ell^*\|\leq \frac{\delta}{2K_5}$ and $\mathfrak{L}$ is independent of $n$ and only depends on the constellation of $\mathcal{L}$, and (\ref{insertDFelra}) is from (\ref{minHandtheta}). Finally, since the ball has the smallest surface area among all bodies of a given volume, a positive constant lower bound on $\text{Vol}(\mathcal{K}_0)$ implies a positive constant lower bound on $S(\mathcal{K}_0)$. More precisely, recalling the expressions for the volume and the surface area of a $d'$-dimensional ball \cite[Eq. 1.5.1]{ren}, we have \begin{equation*} S(\mathcal{K}_0)\geq C\left(\frac{H-H(p_{\theta_{min}})}{2K_5}\right)^{d'} \frac{2\sqrt{\pi}\Gamma\left(\frac{d'}{2}+1\right)}{\Gamma\left(\frac{d'+1}{2}\right)}. \end{equation*} \end{appendices}
\section{Introduction} Aggregating knowledge across studies is a common practice for pooling multiple findings, increasing precision of common results and planning new studies. In this paper, we evaluate treatment effects after adjusting for potential confounding factors from various independent, existing studies. In doing so, we inform an efficient design of a follow-up validation analysis from these existing studies, and estimate the common parameters by aggregating summary information from each existing study with the validation data. The existing studies for instance can be pilot studies, preliminary analyses, or previously reported results, all studying the impact of the same treatment effect. Moreover, we work under a challenging framework in which the number of potential confounders might be large. Estimation of treatment effects in the presence of many possible confounders usually calls for a model selection procedure such as the LASSO \citep{tibshirani1996regression} in any individual study. Two practical considerations gain prominence in an estimation of the treatment effect from multiple studies. First, full data from the existing studies such as raw data for the confounders might be unavailable, due to either practical limitations in data-sharing, or due to legitimate privacy and confidentiality concerns \citep{wolfson2010datashield, cai2021individual}. Second, if feature selection is performed on each existing study, the post-hoc estimation of treatment effects based on the selected model tends to be biased. We thus ask: (i) whether and how data from two or more studies can be aggregated without the risk of selection bias; (ii) whether the former goal can be achieved without much loss of information and using only summary statistics sufficient for data aggregation. To address the above challenges, we consider an $\ell_1$-regularized framework assuming a parsimonious linear model for all the studies. In each existing study when the LASSO is used to select a set of covariates as confounding variables, the usual summary statistics, the first two moments of the data in the least squares analysis refitted for the selected model, are no longer sufficient for the next stage of data aggregation. It however turns out that the summary statistics required by our aggregation protocol assume a fairly simple and compressed form. An immediate by-product of this finding is a new guideline for what needs to be reported in publications of statistical analyses in order to facilitate an efficient aggregation of models selected by the LASSO. While data aggregation from existing studies can be performed on their own, a further validation study is often necessitated by a regulatory requirement, or pursued due to the need for higher statistical power. More generally, researchers in science and public health often find it more cost-efficient to design and carry out new studies by relying on a synthesis of prior research, in order to focus on a smaller set of confounding factors and to reduce variability of estimators from any single study. Such new studies play the role of validation studies in our framework. Informed by the selection from the existing studies, the validation study $\mathcal{V}$ collects data on all the potentially important confounding factors that are identified as active in any of these existing studies. 
We showcase how a recently developed technique, namely \textit{data carving}, can be utilized to efficiently pool the information from each existing study with the validation data for an unbiased estimation of the treatment effect in the selected models. Data carving borrows information that has not been fully exploited by the LASSO, and subsequently leads to higher statistical efficiency for estimating treatment effects than off-the-shelf alternatives like splitting, which only uses the (independent) data from the follow-up study during estimation. In contrast to the debiased LASSO estimator based on a one-step bias correction to the LASSO solution, the appeal of our estimator lies in the fact that it does not require estimating a high dimensional matrix (or its inverse) based on the existing data. Combining summary information from each existing study with the validation data through data carving, we call the new estimator a \textit{carved estimator}. A simple averaging of the carved estimators then aggregates the findings from the existing studies. We schematically depict our aggregation protocol in Figure \ref{fig:schematic}. Our new estimator, after combining each existing study $k$ with the validation data through data carving, is denoted by $\widehat{\alpha}_k^{\text{\; carve}}$, and the aggregated estimator is given by $\widetilde{\alpha}$. \begin{figure} \centering \includegraphics[width = 0.95\linewidth]{Flowchartnew.jpg} \caption{Schematic depiction of efficient and privacy-preserving data aggregation. The precise forms for the summary statistics and $\widetilde{\alpha}$ are provided in Section 2.} \label{fig:schematic} \end{figure} Our present work relates to multiple frontiers in the literature. In the context of high dimensional meta-analysis, \cite{tang2020distributed}, \cite{lee2017communication}, and \cite{maity2019communication} propose distributed inference procedures based on aggregating debiased LASSO estimators \citep{zhang2014confidence, van2014asymptotically} from multiple studies. In a similar spirit to \cite{lin2010relative}, \cite{cai2021individual} propose a more efficient inverse-variance weighted debiased LASSO estimator to accommodate cross-study heterogeneity. The debiased approaches in the aforementioned papers offer a valid way to avoid model selection bias, but lose statistical efficiency when the treatment assignment is correlated with the noise variables within the data. Such correlations are commonly expected in large observational studies, and a demonstration of this phenomenon is provided in \cite{wang2020debiased}. A simple estimator in our framework is a split-and-aggregate estimator, obtained by (i) refitting the validation sample with the selected model from each existing study, followed by (ii) averaging the $K$ split-based estimators. Because the data used for model selection is not used for estimation, the resulting estimator does not suffer from the issue of over-fitting bias. However, the estimator suffers from a loss of efficiency by discarding the data from the existing studies, which might be significant when the more carefully designed validation study is based on fewer samples relative to the existing studies. 
Drawing inspiration from recent efforts to reuse data from selection \citep{dwork2015generalization, tian2020prediction, chen2020valid, gao2020selective, rasines2021splitting}, utilizing residual information from the existing studies for an estimation of the treatment effect from the full data steers our proposal. The idea of conditioning upon the observed selection, previously explored in \cite{exact_lasso, tian2018selective, panigrahi2018scalable}, among others, discards bias from model selection through p-values, confidence intervals, credible intervals based on conditional probabilities. The estimator we propose here implements the principles of data carving \citep{fithian2014optimal, panigrahi2016integrative, panigrahi2019carving, schultheiss2021multicarving} within the conditional framework. Data carving resembles data-splitting in the selection stage, because model selection operates on only a subset of the full data. However, estimators based on data carving, unlike those based on splitting, use the entire data instead of the held-out portion alone. The remaining paper is organized as follows. Section \ref{Sec:framework} presents our framework and introduces our estimator based on data carving. Section \ref{Sec:example} begins with an illustrative case analysis to present the heuristics behind the construct of our new estimator. Section \ref{Sec:theory} provides details of the carved estimator in the general framework and proves asymptotic unbiasedness of our estimator. Numerical experiments that back up our asymptotic results are presented in Section \ref{Sec:simulation}. Section \ref{Sec:proofs} provides proofs for our main theory. Section \ref{Sec:conclusion} concludes our work with some summarizing remarks. \section{Methodology} \label{Sec:framework} \subsection{Framework} We denote the data from the $K$ independent studies by: $$\mathcal{D}_k = \left\{D_{k}, X_{k}, Y_{k}\right\}$$ for $k\in \{1,2,\cdots, K\}$, where each study measures $n_k$ independently and identically distributed triples; (i) $Y_{k}\in \mathbb{R}^{n_k \times 1}$ are the observed outcomes, (ii) $X_{k} \in \mathbb{R}^{n_k\times p_k}$ represents the high-dimensional covariates with $p_k$ potentially greater than $n_k$, and (iii) $D_{k}\in\mathbb{R}^{n_k\times s}$ denotes the $\mathbb{R}^s$-valued treatment variable. Fixing notations, we use $E_k$ to denote an index set of covariates with cardinality less than $p_k$ for any $k$. For any vector or matrix $A$ and an index set $E$, let $A_E$ be the sub-vector or sub-matrix containing the columns of $A$ indexed by the set $E$. Lastly, we use the symbol $1_k$ and $0_k$ to represent a $\mathbb{R}^k$-valued vector of all ones and all zeroes, respectively. We assume that the data collected in our studies are governed by the population model \begin{align} \label{popn:modelk} Y_k = D_k \alpha + X_{k,E_0}\beta_{E_0} + \varepsilon_k, \quad \mathbb{E}(\varepsilon_k | X_k) = 0,\quad k =1, \ldots, K, \end{align} where $E_0$ indicates the active covariates in the model, $\alpha \in R^s $ measures the average treatment effect, and $\beta_{E_0}$ are the unknown parameters that capture the effects of the covariates in the model, and $\varepsilon_k$ are random errors. We assume $\beta_{E_0}$ to be the same across the studies for the sake of simplicity, but note that our work generalizes easily to cases where the parameter vector $ \beta_{E_0}$ varies with $k$, i.e., the true confounding variables are allowed to be different across the studies. 
Later in the section, we discuss the extension of our framework to accommodate heterogeneity across studies. We can think of the first column of $X_k$ as a vector of all 1's to represent the intercept in the population model in \eqref{popn:modelk}; in practice, we center all the data vectors to proceed by simply assuming that $Y_k$ sums to zero and so does every column of $D_k$ and $X_k$. Each existing study in our framework includes an extensive collection of covariates and the treatment assignment is randomized conditional on these covariates. We further suppose that the observed covariates, $X_k$, in each study include the ones indexed by $E_{0, k}$, that is, the active covariates of interest in the population model are measured in each study. This is a rather strong assumption, requiring any unobserved confounding variables to be balanced in all the studies. Unobserved confounding effects in more general settings certainly deserve further attention. Because each study includes a large number of covariates, we assume that the treatment effect evaluation will rely on the LASSO, a common tool of choice in high dimensional data analyses. Fixing $E_k$ to be the active set of covariates selected by the LASSO, each study will report $E_k$ and a set of summary statistics which we specify explicitly in the next section. Under appropriate conditions, we have $E_0 \subset E_k$ with probability tending to one as the sample size $n_k$ increases. A validation study, $\mathcal{V} = \left\{ D, X, Y\right\}$, collects information on the response $Y \in R^n$, treatment variables $D \in R^{n \times s}$, and the covariates $X:=X_E \in R^{n \times q}$ from the same population model in \eqref{popn:modelk} and independent of $\mathcal{D}_k$, $k=1,\cdots, K$, where $E:=\cup_{k=1}^{K} E_k$ with cardinality equal to $q$. In this protocol, we note that the selection from the existing studies informs a careful design for the downstream validation study. In the rest of the paper, let (i) $N_k= n+n_k$, (ii) $r_k= n_k/N_k$, and (iii) $q_k=|E_k|$. Lastly, we use $\|v\|$ to represent the $\ell_2$ norm of a vector unless specified otherwise. We conclude the section by showing that our framework easily generalizes to accommodate some heterogeneity across studies. Let $E_{0,k}$ contain the active covariates in the existing study $k$, and $E_{0,0}$ contain the active covariates in the validation study. Suppose, the data in our existing studies is drawn from the following model: $$Y_k = D_k \alpha + X_{k,E_{0,k}}\beta_{E_{0,k}} + \varepsilon_k,$$ and the data in the validation study are generated from: $$Y_0 = D_0 \alpha + X_{0,E_{0,0}}\beta_{E_{0,0}} + \varepsilon_0,$$ where $\mathbb{E}(\varepsilon_k | X_k) = 0$ for $k =0, \ldots, K$. Now, let $\delta_{i,k}$ represent a dummy variable that assumes the value 1 if the observation $i$ is measured in study $k$ and assumes the value $0$ otherwise, for $k=0,1,\ldots, K$, i.e., $\sum_{k=0}^K \delta_{i,k} =1$. 
For an observation $v_i \in \mathbb{R}^k$, we have \[ \delta_{i,k}v_i = \begin{cases} v_i & \text{ if observation } i \in \text{ study } k, \\ 0_k & \text{ otherwise.} \end{cases} \] Given the above generating models, we can describe observation $i$ in the data from the validation and $K$ existing studies through the following unified model: \begin{align*} Y_i = &\ D'_i \alpha + Z'_{i, E_0}\gamma_{E_0} + \varepsilon_i, \end{align*} where $$\gamma_{E_0}= \begin{pmatrix} \beta_{E_{0,0}}' & \beta_{E_{0,1}}' &\cdots &\beta_{E_{0,K}}' \end{pmatrix}', \text{ and } Z_{i, E_0} =\begin{bmatrix} (\delta_{i,0} X_{i, E_{0,0}})' & (\delta_{i,1} X_{i, E_{0,1}})' & \cdots & (\delta_{i,K} X_{i, E_{0,K}})' \end{bmatrix}'.$$ Based on the above unified model, our proposal easily extends to two scenarios. First, when $E_{0,0} \subseteq \cap_{k=1}^K E_{0,k}$, the summary statistics reported in the existing studies are exactly the same as in the case under the model in \eqref{popn:modelk}, wherein the support sets coincide in all the studies. Second, when $E_{0,0} \subseteq \cup_{k=1}^K E_{0,k}$, the construct of our estimator follows the same recipe as proposed in the paper, except that now the unbiasedness of the estimator holds under the additional assumption that the union of the selected variables $E = \cup_{k=1}^K E_k$ is collected in all existing studies, and we proceed by fitting the union of the selected models $E$ to the combined data for an existing study and the validation study. In the second situation, we need to retrieve the summary statistics from the existing studies based on the union of the selected models $E$ instead. Because the unified model is cast into the same form as the simpler model in \eqref{popn:modelk}, we work with the simpler model in the rest of the paper for ease of presentation. Later in Section \ref{Sec:simulation}, we verify the validity of such an extension in a simulation study. \subsection{Efficient and privacy-preserving aggregation with data carving} \label{subsec:method-carving} Suppose, for each existing study, we use the LASSO to estimate: \begin{equation}\label{eq:lasso} \widehat{\gamma}^{\text{\;(L)}}_{k} = \begin{pmatrix} \widehat{\alpha}^{\text{\;(L)}}_{k} \\ \widehat{\beta}^{\text{\;(L)}}_{k}\end{pmatrix}=\underset{\alpha\in\mathbb{R}^s,\beta\in \mathbb{R}^{p_k} }{\arg\min}\left\{ \frac{1}{2r_k\sqrt{N_k}} ||Y_k - D_k \alpha - X_k \beta ||^2 + || \Lambda_k \beta ||_1 \right\}, \end{equation} $\widehat{\alpha}^{\text{\;(L)}}_{k}\in \mathbb{R}^s$, $\widehat{\beta}^{\text{\;(L)}}_{k} \in \mathbb{R}^{p_k}$ for $k=1,2,\cdots, K$, where $\Lambda_k \in \mathbb{R}^{p_k \times p_k}$ is a diagonal matrix with the tuning parameters of the LASSO penalty as its diagonal entries. Recall that $E_k$ is the support set of $\widehat{\beta}_k^{\text{\;(L)}}$. We first specify the summary statistics involved in the construct of our estimator: \begin{enumerate} \setlength\itemsep{1em} \item \textit{Summary from model selections}: the support set of the LASSO estimator $E_k$, the penalty weights $\Lambda_{ E_k}$, and the signs of the (nonzero) LASSO estimator for the selected covariates $s_{E_k} = \text{sign}(\widehat{\beta}_{k, E_k}^{\text{\;(L)}})$. 
\label{summary:sel} \item \textit{Summary based on first two moments}: the sample covariance matrices from the existing study $k$ based on the selected model $E_k$ and the observed sample of size $n_k$, that is, \begin{align*} \widehat{\xi}_k := \left[ \begin{array}{cc} \widehat{\xi}_{k, 01} \\ \widehat{\xi}_{k, 02} \end{array} \right] =& \frac{1}{n_k} \left[ \begin{array}{c} D_{k}'Y_k \\ X_{k, E_k}'Y_k \end{array} \right], \\ \widehat{\Xi}_{k} := \ \left[ \begin{array}{cc} \widehat{\Xi}_{k, 11} & \widehat{\Xi}_{k, 12}\\ \widehat{\Xi}_{k, 21} & \widehat{\Xi}_{k, 22} \end{array} \right] = & \frac{1}{n_k} \left[ \begin{array}{cc} D_k'D_k & D_k'X_{k, E_k}\\ X_{k, E_k}'D_k & X_{k, E_k}'X_{k, E_k} \end{array} \right]. \end{align*} Note that $\widehat{\xi}_k$ has two blocks, one $s$-dimensional and the other $q_k$-dimensional. In a similar fashion, $\widehat{\Xi}_{k} $ is partitioned along the same dimensional structure. \label{summary:data} \end{enumerate} Using these summary statistics from the existing study $k$ together with the validation study data $\mathcal{V} $, we introduce our carved estimator below. We begin by computing the least squares estimator \begin{align}\label{eq:pooled-estimate} \widehat{\gamma}_k :=\begin{pmatrix} \widehat{\alpha}_k\\ \widehat{\beta}_k \end{pmatrix} = \begin{pmatrix} n_k\widehat{\Xi}_{k, 11} + D'D & n_k\widehat{\Xi}_{k, 12} + D'X_{E_k}\\ n_k\widehat{\Xi}_{k, 21} +X'_{E_k}D & n_k\widehat{\Xi}_{k, 22} + X_{E_k}'X_{E_k} \end{pmatrix}^{-1} \begin{pmatrix} n_k \widehat{\xi}_{k, 01} + D'Y\\ n_k \widehat{\xi}_{k, 02} + X_{E_k}'Y \end{pmatrix}. \end{align} For a fixed vector $u\in \mathbb{R}^k$ and for $v\in \mathbb{R}^k$, let $\log(1+ 1/uv)$ be an $\mathbb{R}^k$-valued vector whose $j$-th component equals $$\log(1+ \frac{1}{u_j v_j}).$$ Letting \begin{equation} \widehat{\Sigma}_{k} := \frac{1}{N_k}\left(n_k \widehat{\Xi}_{k} + \begin{pmatrix} D & X_{E_k} \end{pmatrix}' \begin{pmatrix} D & X_{E_k} \end{pmatrix}\right), \label{sample:pooled:covariance} \end{equation} we then solve $\widehat{z}_k:=\begin{pmatrix} \widehat{z}'_{k,1} & \widehat{z}'_{k,2} \end{pmatrix}'$ from a convex optimization problem: \begin{equation} \label{optimizer} \begin{aligned} & {\text{arginf}}_{z_k\in\mathbb{R}^{s+q_k}} \Bigg\{ (1-r_k)^{-1}r_k\frac{1}{2}\left(\sqrt{N_k}z_k-\sqrt{N_k}\widehat{\gamma}_k + \widehat{\Sigma}_{k}^{-1}\begin{pmatrix} 0' & (\Lambda_{E_k} s_{E_k})'\end{pmatrix}'\right)'\widehat{\Sigma}_{k} \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \left(\sqrt{N_k}z_k-\sqrt{N_k}\widehat{\gamma}_k + \widehat{\Sigma}_{k}^{-1}\begin{pmatrix} 0' & (\Lambda_{E_k} s_{E_k})'\end{pmatrix}'\right)+ B_{s_{E_k}}\left(\sqrt{N_k}z_{k,2}\right)\Bigg\}, \end{aligned} \end{equation} where $$B_{s_{E_k}}\left(\sqrt{N_k}z_{k,2}\right)= 1_{q_k}'\log\left(1+ \frac{1}{s_{E_k} \sqrt{N_k}z_{k,2}}\right).$$ Then, our carved estimator for $\alpha$ from the existing study $k$ and the validation study is given by \begin{align}\label{eq:gamma-carve} \widehat{\alpha}_k^{\text{\; carve}} :=\widehat{\alpha}_{k} + (1- r_{k})^{-1}r_k \left(\widehat{\alpha}_k - \frac{1}{\sqrt{N_k}}e'_{1}\widehat{\Sigma}_{k}^{-1}\begin{pmatrix} 0' & (\Lambda_{E_k} s_{E_k})'\end{pmatrix}' - e'_{1}\widehat{z}_k\right). \end{align} Finally, we propose the aggregated estimator by taking a simple average of the $K$ carved estimators: \begin{align}\label{proposed estimate} \widetilde{\alpha} = \frac{1}{K}\sum_{k=1}^{K} \widehat{\alpha}_k^{\text{\; carve}} . 
\end{align} If the sample size for each existing study, $n_k$, varies with $k$, we can naturally replace the simple average in (\ref{proposed estimate}) by a weighted average, where the weights are proportional to $N_k$. As indicated in the above discussion, the summary statistics from each existing study include the first two moments needed for the least squares estimator, alongside the penalty weights and the signs of the LASSO solution for the selected covariates indexed by $E_k$. We note some natural comparisons with existing approaches to estimate the common treatment effects in the following remarks. \begin{remark}[A comparison with ``split-and-aggregate" strategy] An off-the-shelf option available for estimation in our context is a simple split-and-aggregate estimator, obtained by refitting the validation sample $\mathcal{V}$ with the selected model $E_k$ (call this estimator $\widehat{\alpha}_k^{\text{split}}$), followed by averaging these $K$ estimators. Because the data used for model selection is not used for estimation, the resulting estimator does not suffer from the issue of the over-fitting bias, but it can prove to be significantly less efficient than our proposed carved-and-aggregate strategy. \label{remark:split-and-aggregate} \end{remark} \begin{remark}[A comparison with the debiased LASSO strategy] Another legitimate estimator is the aggregated debiased LASSO estimator that can be viewed as the proposal in \cite{cai2021individual} customized to our setup. To be specific, we aggregate by averaging the debiased LASSO estimators from each site $\mathcal{D}_k$ and the validation dataset $\mathcal{V}$, denoted by $\widehat{\alpha}_k^{\text{debias}}$ and $\widehat{\alpha}^{\text{debias}}$, respectively. The debiased LASSO estimator reduces the penalization bias introduced by the high dimensional covariates. However, it tends to be statistically inefficient whenever there exist high correlations between the treatment variable and some of the covariates, as noted previously by \cite{wang2020debiased}. In a simulated setting under Section \ref{Sec:simulation} with highly correlated treatment and confounding variables, we observe a significant loss in efficiency for the debiased estimator. \label{remark:debias-and-aggregate} \end{remark} \begin{remark}[Overfitting bias correction in the carved estimator] We note that the refitted pooled estimate $\widehat{\alpha}_k$ defined in \eqref{eq:pooled-estimate} is a biased estimate of $\alpha$, because it uses the data twice: for model selection and for parameter estimation. To see this, whenever $E_{0,k}\subset E_k$, the refitted pooled estimate $\widehat{\alpha}_k$ satisfies $ \widehat{\alpha}_k - \alpha = e_1' \widehat{\Sigma}_{k}^{-1} ({D}_k', {X}_{k, E_k}' )'{\varepsilon}_k/(n + n_k)$. Due to the correlation between ${\varepsilon}_k$ and the selected set of covariates in $E_k$, we typically have $\mathbb{E}({\varepsilon}_k| {X}_{k, E_k}) \neq 0$, meaning that $\widehat{\alpha}_k$ is a biased estimate of $\alpha$. We refer to the resulting bias in $\widehat{\alpha}_k$ as the over-fitting bias. The proposed carved estimator $\widehat{\alpha}_k^{\text{\; carve}} $ defined in \eqref{eq:gamma-carve} applies a novel bias correction term to remove the over-fitting bias. In this key step, carving allows us to free our statistical estimate of model-selection bias, and the Rao-Blackwellization deployed on a carved likelihood improves the efficiency of this unbiased estimate. 
We shall see more detailed discussion of this appealing property of our carved estimator through a simple example in the next section. \end{remark} \section{Debiasing through data carving: an illustrative case analysis} \label{Sec:example} We present in this section an illustrative case analysis to introduce the debiasing approach of our carved estimator in (\ref{eq:gamma-carve}). Assuming access to the summary information (\ref{summary:sel}) and (\ref{summary:data}) for an existing study (i.e., $K=1$), we let $\mathcal{D}_1$ contain a real valued covariate $X_1\in\mathbb{R}$ and a treatment variable $D_1$ (i.e., $p_1=1$ and $s=1$). Both $X_1$ and $D_1$ in this example are standardized to have mean zero and variance one with correlation $\rho$. Consider observing $n_1$ and $n$ observations in $\mathcal{D}_1 $ and $\mathcal{V}$ respectively, from the population model \eqref{popn:modelk} with $\beta_{E_0}=0$ (i.e. $E_0 = \emptyset$) and with noise variance $\sigma^2=1$. Letting $n=n_1$, recall that the augmented sample size is $N_1= n +n_1$, and the ratio of the samples from the existing study is $r_1 = \frac{n_1}{N_1}$ which we set to be equal to $0.5$ in the current illustrative example. Suppose the model we select after solving the LASSO from our existing study is the full model $E_1= \{1\}$, i.e., the model includes the covariate $X_1$. Using the validation study, we proceed to evaluate the treatment effect in the overfitted model: \begin{equation} \label{eg:working:model} Y\ \lvert \ D, X_1 \sim N(\alpha D + \beta X_1, I); \end{equation} where $I$ is the identity matrix. Define $$\gamma_1 = \begin{pmatrix} \alpha & \beta \end{pmatrix}'.$$ Based on model \eqref{eg:working:model}, the likelihood is denoted as $$\ell(N_1, \gamma_1; \widehat\gamma_1) \propto \exp\left(-\frac{N_1}{2}(\widehat\gamma_1-\gamma_1)' \widehat{\Sigma}_1 (\widehat\gamma_1-\gamma_1)\right),$$ where $\widehat\gamma_1$ is the least squares estimator for $\gamma_1$ using the combined data. Following the notation in the previous sections, denote $$\zeta_1 = \begin{pmatrix} 0 & \lambda s_{E_1}\end{pmatrix}',$$ where $s_{E_1}$ is the sign of the LASSO estimator for the coefficient of $X_1$ and $\lambda :=\Lambda_{E_1}$ is the corresponding tuning parameter. To outline the construct of our estimator, we begin by defining the variable $\omega_1= \begin{pmatrix} \omega_{1,1} & \omega_{1,2} \end{pmatrix}'$ as \begin{equation*} \begin{aligned} & \dfrac{\partial}{\partial\gamma}\Bigg(\dfrac{1}{2\sqrt{N_1}}\|Y_{1} - \alpha D_{1} - \beta X_{1}\|^2 + \dfrac{1}{2\sqrt{N_1}}\|Y - \alpha D - \beta X\|^2\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-\dfrac{1}{2r_1\sqrt{N_1}}\|Y_{1} - \alpha D_{1} - \beta X_{1}\|^2\Bigg)\Bigg\lvert_{\widehat{\gamma}^{(L)}_1}, \end{aligned} \end{equation*} where $r_1=\frac{1}{2}$. We observe a simple equivalence of the Karush–Kuhn–Tucker (K.K.T.) mapping for \eqref{eq:lasso} with that from: \begin{equation} \label{lasso:randomized:rep} \text{minimize}_{\alpha, \beta} \;\; \dfrac{1}{2\sqrt{N_1}}\|Y -\alpha D - \beta X \|^2 + \dfrac{1}{2\sqrt{N_1}}\|Y_1 -\alpha D_1 - \beta X_1 \|^2- \omega_{1,1} \alpha -\omega_{1,2} \beta + \lambda|\beta|, \end{equation} when both mappings are evaluated at the LASSO estimator $\widehat{\gamma}^{(L)}_1$. We term $\omega_1$ as a randomization variable, so named because this variable is generated by using a random subsample of our i.i.d. observations (of size $n_1$) for selection. 
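
A short numerical sketch may help fix ideas. With $r_1=\frac{1}{2}$, the definition above reduces to the gradient of $\frac{1}{2\sqrt{N_1}}\|Y - \alpha D - \beta X\|^2 - \frac{1}{2\sqrt{N_1}}\|Y_1 - \alpha D_1 - \beta X_1\|^2$ evaluated at $\widehat{\gamma}^{(L)}_1$, and the equivalence of the two K.K.T. mappings can be checked directly. The data-generating choices, the fixed value of $\lambda$, and the simple coordinate-descent LASSO solver below are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n1 = n = 50
N, lam, rho = n1 + n, 0.5, 0.6

def draw(m):                                   # correlated treatment and covariate; beta = 0
    X = rng.standard_normal(m)
    D = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(m)
    return D, X, 1.0 * D + rng.standard_normal(m)

D1, X1, Y1 = draw(n1)
D, X, Y = draw(n)

# LASSO (eq. 2.2-style objective) on the existing study; alpha is unpenalized.
a, b, t = 0.0, 0.0, lam * 0.5 * np.sqrt(N)     # threshold = lambda * r_1 * sqrt(N_1)
for _ in range(500):
    a = D1 @ (Y1 - b * X1) / (D1 @ D1)
    u = X1 @ (Y1 - a * D1)
    b = np.sign(u) * max(abs(u) - t, 0.0) / (X1 @ X1)

res1, res = Y1 - a * D1 - b * X1, Y - a * D - b * X
omega = np.array([D1 @ res1 - D @ res, X1 @ res1 - X @ res]) / np.sqrt(N)

# Stationarity of the randomized program at the same solution:
kkt_alpha = -(D @ res + D1 @ res1) / np.sqrt(N) - omega[0]
kkt_beta = -(X @ res + X1 @ res1) / np.sqrt(N) - omega[1] + lam * np.sign(b)
print(omega, kkt_alpha, kkt_beta)              # last two are ~0 whenever b != 0
\end{verbatim}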
Under the selected model \eqref{eg:working:model}, we have as $N_1\to \infty$: \begin{enumerate} \setlength\itemsep{1em} \item \label{prop:1} $\widehat\gamma_1$ and $\omega_1$ jointly follow a Gaussian distribution such that $\omega_1$ is centered at $0$ with covariance $(1-r_1)(r_1)^{-1}\cdot \mathbb{E}[\widehat\Sigma_1]$; \item \label{prop:2} $\omega_1$ is independent of $\sqrt{N_1}(\widehat\gamma_1-\gamma_1)\sim N_2(0_2, (\mathbb{E}[\widehat{\Sigma}_1])^{-1})$. \end{enumerate} Using \eqref{lasso:randomized:rep}, we characterize the event that the LASSO selects the active set of covariates $E_1$ with signs $s_{E_1}$ in terms of the randomization variable and the least squares estimator as follows: \begin{equation} \label{cond:event:1} \left\{ \widehat{E}_1 = E_1, \text{sign}(\widehat{\beta}^{(L)}_1) =s_{E_1} \right\} = \left\{s_{E_1} (\omega_{1,2}-\rho\omega_{1,1})+s_{E_1} (1-\rho^2) \sqrt{N_1}\widehat{\beta}_1 -\lambda >0\right\}. \end{equation} Suppose for now that the limiting distributional properties listed under \ref{prop:1} and \ref{prop:2} hold exactly with $$\mathbb{E}[\widehat\Sigma_1] = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}.$$ Our carved estimator can be constructed in two steps. In the first step, we recognize an unbiased initial estimator that is independent of the selection event: $$\widehat\alpha_1^{\text{\;initial}}=\widehat{\alpha}_1 - (1-\rho^2)^{-1} \dfrac{1}{\sqrt{N_1}}\cdot (\omega_{1,1}- \rho \omega_{1,2}).$$ This leads us to observe that $\widehat\alpha_1^{\text{\;initial}}$ is unbiased for $\alpha$ conditional upon the event in \eqref{cond:event:1}. In the next step, we improve upon the initial estimator through Rao-Blackwellization by conditioning further upon the complete sufficient statistic for $\gamma_1$ in the conditional likelihood. In our case, this statistic is $\widehat{\gamma}_1$, because of the basic fact that conditioning preserves the complete sufficient statistic in the unconditional law \citep{fithian2014optimal}. Our estimator is thus given by: \begin{equation*} \begin{aligned} \widehat\alpha_1^{\text{\;carve}} &= \mathbb{E}\Big[ \widehat{\alpha}_1^{\text{initial}} \ \Big\lvert \widehat{\gamma}_1, \widehat{E}_1 = E_1, \text{sign}(\widehat{\beta}^{(L)}_1) =s_{E_1}\Big], \end{aligned} \label{simple:carved} \end{equation*} where we have conditioned upon $\widehat{\gamma}_1$ alongside the event of selection in \eqref{cond:event:1}. Simplifying the expression on the right-hand side of \eqref{simple:carved}, we have: \begin{equation*} \begin{aligned} \widehat\alpha_1^{\text{\;carve}} &= \widehat{\alpha}_1 - (1-\rho^2)^{-1} \dfrac{1}{\sqrt{N_1}} \cdot \mathbb{E}\Big[\omega_{1,1}- \rho \omega_{1,2} \; \lvert \; \widehat{\gamma}_1,\; s_{E_1}(\omega_{1,2}-\rho\omega_{1,1})\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+s_{E_1}(1-\rho^2) \sqrt{N_1}\widehat{\beta}_1 -\lambda>0\Big]. \end{aligned} \end{equation*} Consistent with the estimator in \eqref{eq:gamma-carve}, $\widehat\alpha_1^{\text{\;carve}}$ takes the form of an additive debiasing correction applied to the simple least squares estimator $\widehat{\alpha}_1$. 
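
The conditional expectation above is available in closed form (Proposition \ref{exact:UMVU:simple}), but it can also be approximated by brute force, which serves as a useful sanity check: since $\omega_1$ is independent of $\widehat{\gamma}_1$, conditioning on $\widehat{\gamma}_1$ and the selection event amounts to averaging $\omega_{1,1}-\rho\,\omega_{1,2}$ over draws of $\omega_1$ from its Gaussian law that satisfy \eqref{cond:event:1}. A minimal Monte Carlo sketch, with illustrative values of $\rho$, $\lambda$, $N_1$, and the observed $\widehat{\gamma}_1$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
rho, lam, N1, r1 = 0.4, 2.0, 200, 0.5
alpha_hat, beta_hat, s = 1.1, 0.3, 1.0         # observed refitted estimates and sign
Sigma = np.array([[1.0, rho], [rho, 1.0]])

omega = rng.multivariate_normal([0.0, 0.0], (1 - r1) / r1 * Sigma, size=200_000)
event = (s * (omega[:, 1] - rho * omega[:, 0])
         + s * (1 - rho**2) * np.sqrt(N1) * beta_hat - lam > 0)
debias = np.mean(omega[event, 0] - rho * omega[event, 1])

alpha_carve = alpha_hat - debias / ((1 - rho**2) * np.sqrt(N1))
print(alpha_carve)
\end{verbatim}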
Completing the details, we give the expression for $\widehat\alpha_1^{\text{\;carve}}$ in Proposition \ref{exact:UMVU:simple} and note: $$\mathbb{E}\left[\widehat{\alpha}_1^{\text{\; carve}}\; \lvert \; \widehat{E}_1 = E_1, \text{sign}(\widehat{\beta}^{(L)}_1) =s_{E_1}\right] = \alpha, \text{ and as a result: } \mathbb{E}\left[\widehat{\alpha}_1^{\text{\; carve}}\right] = \alpha.$$ To provide some numerical evidence for the effectiveness of our new estimator in debiasing the treatment effect from data re-use in model selection and parameter estimation, we depict in Figure \ref{fig:boxplot-example} the result of a simulation for $n=100$, $n_1 = 50$, and $K=1$, based on $1,000$ Monte Carlo samples. Our carved estimator takes the expression in Proposition \ref{exact:UMVU:simple}, which we compare against the two popular alternatives in Remarks \ref{remark:split-and-aggregate} and \ref{remark:debias-and-aggregate}. The generative scheme for the simulation and the implementation details for the other two alternatives under comparison are elaborated in Section \ref{Sec:simulation}. Notably, all three estimators are centered around the true value $\alpha = 1$, but the carved estimator has the smallest variability. \begin{figure} \centering \includegraphics[width=0.50\textwidth]{boxplot.jpeg} \caption{\label{fig:boxplot-example} Box-plots of the parameter estimates in the simulation experiment for Section \ref{Sec:example}.} \end{figure} \section{Carved estimator} \label{Sec:theory} In the present section, we generalize the debiasing approach from the illustrative analysis to the general framework of the paper. Our carved estimator $\widehat{\alpha}_k^{\text{\; carve}}$ is obtained by conditioning upon the observed event: \begin{equation} \label{sel:event:lasso} \left\{\widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}\right\}, \end{equation} which in turn allows us to construct a debiasing term for the treatment effect based only on summary statistics from the existing studies. We then prove the asymptotic unbiasedness of our estimator and identify the relative gains in variance for our estimator over splitting. \subsection{Our debiasing term} In line with the illustrative analysis in the preceding discussion, we define our randomization variable as follows: \begin{equation} \begin{aligned} \omega_k &= \begin{pmatrix} (\omega_{k,1})' & (\omega_{k,2})' \end{pmatrix}'\\ &=\dfrac{\partial}{\partial\gamma}\Bigg(\dfrac{1}{2\sqrt{N_k}}\|Y_{k} -D_{k} \alpha - X_{k}\beta\|^2 + \dfrac{1}{2\sqrt{N_k}}\|Y - D\alpha - X\beta\|^2\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-\dfrac{1}{2r_k\sqrt{N_k}}\|Y_{k} - D_{k}\alpha - X_{k}\beta\|^2\Bigg)\Bigg\lvert_{\widehat{\gamma}^{(L)}_k}; \; \omega_{k,1}\in \mathbb{R}^s, \omega_{k,2}\in \mathbb{R}^{p_k}, \label{randomization:gen} \end{aligned} \end{equation} based on the solution of \eqref{eq:lasso}, $\widehat{\gamma}^{(L)}_k$. Further, using $E_k^c$, the indices of covariates not selected by \eqref{eq:lasso}, consider the statistic: $$\widehat{\Gamma}_{k} = \dfrac{1}{N_k}\left\{X'_{k,E_k^c}(Y_k-D_k\widehat\alpha_k- X_{k, E_k}\widehat\beta_k) + X'_{E_k^c}(Y-D\widehat\alpha_k- X_{E_k}\widehat\beta_k) \right\}.$$ Note, we need not observe the covariates not selected by the LASSO in the validation study. The variables defined above only serve to assist a theoretical investigation of the debiasing term.
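As a computational aside, $\omega_k$ can be evaluated without the raw records of study $k$: second-moment summaries of the existing study, together with the raw validation data, suffice. The short sketch below (Python with NumPy) illustrates this point; the argument names, e.g. \texttt{ZtY\_k} for $Z_k'Y_k$ and \texttt{ZtZ\_k} for $Z_k'Z_k$ with $Z_k = \begin{pmatrix} D_k & X_k\end{pmatrix}$, are placeholders for this illustration and are not meant to specify the summary statistics in (\ref{summary:sel}) and (\ref{summary:data}).
\begin{verbatim}
import numpy as np

def randomization_variable_k(ZtY_k, ZtZ_k, Y, D, X, gamma_tilde, n_k):
    # omega_k from study-k summaries: the study-k score at the zero-padded
    # LASSO solution is Z_k'Y_k - Z_k'Z_k gamma_tilde; the validation score
    # is computed from the raw validation data (Y, D, X).
    Z = np.column_stack([D, X])
    N_k = len(Y) + n_k
    r_k = n_k / N_k
    score_k = ZtY_k - ZtZ_k @ gamma_tilde
    score_v = Z.T @ (Y - Z @ gamma_tilde)
    return ((1.0 / r_k - 1.0) * score_k - score_v) / np.sqrt(N_k)
\end{verbatim}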
In Lemma \ref{limiting:Gaussianity}, we state an asymptotic linear representation for the variables $\widehat\gamma_k$, $\widehat\Gamma_k$ and $\omega_k$, which implies that the variables are distributed as Gaussian variables in the limit as the sample size $N_k\to \infty$. Conditioning the limiting Gaussian law in this lemma upon the observed event of selection in \eqref{sel:event:lasso} gives us an asymptotic conditional distribution, which we simply call our conditional law. Theorem \ref{exact:UMVU} then provides the expression of an asymptotic debiasing term for the refitted least squares estimate with respect to this conditional law. The results in the section assume that we fit the selected model to the combined data from the existing and validation study: $$Y\ \lvert \ D, X_{E_k} \sim N(D\alpha + X_{E_k}\beta_{E_k}, \sigma^2 I).$$ Since we work under a fixed $\sigma^2$ setting, we let $\sigma^2=1$ in the remaining analysis. \begin{lemma} \label{limiting:Gaussianity} Fix $\gamma_k =\begin{pmatrix} \alpha' & \beta'_{E_k}\end{pmatrix}'$. Then, the following assertion holds: $$\sqrt{N_k}\begin{pmatrix} (\widehat\gamma_k-\gamma_k)' & \widehat\Gamma'_k & \omega'_k\end{pmatrix}'= \sqrt{N_k}\bar{T}_{N_k} + R_{N_k}, $$ where (i) $\bar{T}_{N_k}$ is the average of $N_k$ i.i.d. variables with mean equal to $0_{2p_k+ 2s}$, (ii) $R_{N_k}= o_p(1)$, (iii) $\widehat{\gamma}_k$, $\widehat{\Gamma}_{k}$ and $\omega_k$ are asymptotically independent, and (iv) $\omega_k$ is asymptotically centered at $0$ with the covariance $(1-r_k)r_k^{-1} \mathbb{E}[\mathfrak{G}_k]$, where $\mathfrak{G}_k = {N_k}^{-1}\left\{\begin{pmatrix} D & X \end{pmatrix}' \begin{pmatrix} D & X \end{pmatrix} + \begin{pmatrix} D_k & X_k \end{pmatrix}' \begin{pmatrix} D_k & X_k \end{pmatrix}\right\}.$ \end{lemma} Before stating Theorem \ref{exact:UMVU}, we let $$\mu_{\mathcal{H}_0}(\alpha_0, \Sigma_0)$$ denote the first moment of a Gaussian variable with mean $\alpha_0$ and covariance $\Sigma_0$ truncated to the region $\mathcal{H}_0$. Further, let $\Delta(\eta_0)$ be the associated log-partition function for the truncated Gaussian density at the natural parameter $$\eta_0= \Sigma_0^{-1}\alpha_0.$$ Finally, for $\widehat{\Sigma}_k$ defined in \eqref{sample:pooled:covariance}, let $\Sigma_k =\mathbb{E}[\widehat{\Sigma}_k]$ be the population covariance. \begin{theorem} \label{exact:UMVU} Define the region $$\mathcal{H} =\{z \in \mathbb{R}^{s+q_k}: \text{sign}(z_{2}) = s_{E_k}\}.$$ Then, the estimate \begin{equation*} \widehat\alpha_k + (1-r_k)^{-1}{r_k}\left(\widehat\alpha_k-\frac{1}{ \sqrt{N_k}}\cdot e'_1 {\Sigma}^{-1}_{k} \zeta_k - \frac{1}{ \sqrt{N_k}}\cdot e'_1\mu_{\mathcal{H}}\left(\sqrt{N_k}\widehat\gamma_k -\Sigma^{-1}_{k} \zeta_k, r_k^{-1}(1-r_k){\Sigma}^{-1}_k\right)\right) \end{equation*} is unbiased for $\alpha$ with respect to the conditional law derived from Lemma \ref{limiting:Gaussianity}. \end{theorem} The estimate in Theorem \ref{exact:UMVU}, however, does not admit direct computation, due to the lack of readily available expressions for the moments of a truncated Gaussian variable. To this end, we use the plug-in estimate $\widehat{\Sigma}_k$ and its inverse in place of $\Sigma_k$ and its inverse, respectively, and apply a Laplace-type approximation to facilitate a manageable calculation for our proposal \eqref{eq:gamma-carve} through an easy-to-solve optimization. Deferring the rigorous justification of asymptotic unbiasedness to the next discussion, we outline the approximate construct of our estimator.
For $$\eta_0:=(1-r_k)^{-1}r_k\widehat{\Sigma}_k(\sqrt{N_k}\widehat\gamma_k -\widehat\Sigma^{-1}_{k} \zeta_k),$$ a Laplace approximation grants us the following: \begin{equation*} \begin{aligned} & \Delta(\eta_0) \approx (1-r_k)^{-1}r_k \dfrac{1}{2} (\sqrt{N_k}\widehat\gamma_k - \widehat\Sigma^{-1}_{k} \zeta_k)' \widehat{\Sigma}_{k} (\sqrt{N_k}\widehat\gamma_k - \widehat\Sigma^{-1}_{k} \zeta_k) \\ &-\textstyle\inf_{\text{sign}(z_{k,2}) = s_{E_k}}\;\; (1-r_k)^{-1}r_k \dfrac{1}{2} (\sqrt{N_k}z_{k}- \sqrt{N_k}\widehat\gamma_k + \widehat\Sigma^{-1}_{k} \zeta_k)' \widehat{\Sigma}_{k} (\sqrt{N_k}z_{k}- \sqrt{N_k}\widehat\gamma_k + \widehat\Sigma^{-1}_{k} \zeta_k)\\ &=\textstyle\sup_{\text{sign}(z_{k,2}) = s_{E_k}}\;\; (1-r_k)^{-1}r_k (\sqrt{N_k}z_{k})' \widehat{\Sigma}_{k} (\sqrt{N_k}\widehat\gamma_k - \widehat\Sigma^{-1}_{k} \zeta_k)\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;- (1-r_k)^{-1}r_k \dfrac{1}{2} (\sqrt{N_k}z_{k})'\widehat{\Sigma}_{k} \sqrt{N_k} z_{k}. \end{aligned} \end{equation*} We replace the constrained optimization with the unconstrained version through a logarithmic barrier penalty to obtain: \begin{equation*} \begin{aligned} \Delta(\eta_0) &\approx \textstyle\sup_{z_k} \;\; (1-r_k)^{-1}r_k (\sqrt{N_k}z_{k})' \widehat{\Sigma}_{k} (\sqrt{N_k}\widehat\gamma_k - \widehat\Sigma^{-1}_{k} \zeta_k) \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;- (1-r_k)^{-1}r_k \dfrac{1}{2} (\sqrt{N_k}z_{k})'\widehat{\Sigma}_{k} \sqrt{N_k}z_{k}- B_{s_{E_k}}(\sqrt{N_k} z_{k,2}). \end{aligned} \end{equation*} Applying the outlined approximation to the asymptotic debiasing term in Theorem \ref{exact:UMVU} allows us to write \begin{equation*} \begin{aligned} \mu_{\mathcal{H}}\left(\sqrt{N_k}\widehat\gamma_k -\widehat\Sigma^{-1}_{k} \zeta_k, r_k^{-1}(1-r_k)\widehat{\Sigma}^{-1}_k\right) &=\nabla \Delta( (1-r_k)^{-1}r_k \widehat{\Sigma}_{k} (\sqrt{N_k}\widehat\gamma_k - \widehat\Sigma^{-1}_{k} \zeta_k))\\ & \approx \sqrt{N}_k\widehat{z}_k, \end{aligned} \end{equation*} where $\widehat{z}_k$ is the solution in \eqref{optimizer}. This gives us the expression of our carved estimator in \eqref{eq:gamma-carve}. \subsection{Asymptotic properties} Our main result in the section, Theorem \ref{unbiasedness}, establishes the asymptotic unbiasedness of our carved estimator. We first present the regularity and moment conditions on the data generating mechanism, required for the theoretical justification of our debiasing approach. Consider $N$ i.i.d. realizations of the response $\mathrm{y}\in \mathbb{R}$, the covariates collected across the existing studies $\mathrm{x}\in \mathbb{R}^{p_1+ \cdots+p_k}$ and the treatment by $\mathrm{d}\in \mathbb{R}^{s}$, such that $\mathrm{y}$ is drawn from the model in \eqref{popn:modelk} with $$\begin{pmatrix} \alpha' & \beta'_{E_0}\end{pmatrix}':=\begin{pmatrix} (\alpha^{(N)})' & (\beta^{(N)}_{E_0})'\end{pmatrix}'.$$ Each of our existing studies in particular observes $N_k$ samples of the outcome and the treatment along with the samples for a subset of covariates seen in all the studies. 
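Before proceeding, we record a small computational sketch (Python with SciPy; purely illustrative) of the approximation outlined in the preceding subsection. It assumes a logarithmic barrier of the form $B_{s_{E_k}}(v)=\sum_j \log\big(1+1/(s_{E_k,j} v_j)\big)$ on the coordinates of $v$ that correspond to the selected covariates, which is consistent with the Hessian term appearing in Lemma \ref{efficiency:gain:K1}; the authoritative definitions of the optimization \eqref{optimizer} and the estimator \eqref{eq:gamma-carve} are the ones given earlier in the paper.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def carved_estimate_k(gamma_hat, Sigma_hat, zeta, s_E, r_k, N_k, n_treat):
    # gamma_hat: refitted least squares estimate (treatment block first),
    # Sigma_hat: pooled covariance estimate, zeta: (0, Lambda_E * s_E),
    # s_E: signs of the selected coefficients, n_treat: dimension s.
    # Maximize the barrier-penalized objective in v = sqrt(N_k) * z and plug
    # the maximizer into the debiasing term of Theorem (exact:UMVU), with the
    # truncated-Gaussian mean replaced by its Laplace-type approximation.
    g = np.asarray(gamma_hat, dtype=float)
    w = r_k / (1.0 - r_k)
    m = Sigma_hat @ (np.sqrt(N_k) * g) - zeta

    def neg_obj(v):
        v2 = v[n_treat:]
        if np.any(s_E * v2 <= 0):
            return np.inf                     # outside the selected orthant
        barrier = np.sum(np.log(1.0 + 1.0 / (s_E * v2)))
        return -(w * (v @ m - 0.5 * v @ Sigma_hat @ v) - barrier)

    v0 = np.sqrt(N_k) * g
    v0[n_treat:] = s_E * np.maximum(np.abs(v0[n_treat:]), 1.0)  # feasible start
    v_hat = minimize(neg_obj, v0, method="Nelder-Mead").x

    correction = g - (np.linalg.solve(Sigma_hat, zeta) + v_hat) / np.sqrt(N_k)
    return g[:n_treat] + w * correction[:n_treat]
\end{verbatim}
A derivative-free solver is used here only to keep the sketch short; the objective is smooth and strongly concave on the feasible orthant, so gradient-based solvers with a sign-respecting starting point work equally well.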
For the remaining section, we consider the following parameters $$\sqrt{N}\begin{pmatrix} (\alpha^{(N)})' & (\beta^{(N)}_{E_0})'\end{pmatrix}' = a_N \begin{pmatrix} (\alpha_0)' & (\beta_{0,E_0})'\end{pmatrix}'$$ in our generation scheme, such that $\alpha_0$ and $\beta_{0,E_0}$ are constants and $a_N= O(\sqrt{N})$. The following condition allows us to control the bias that results from the use of a Laplace-type approximation towards a feasible expression for our debiasing factor. The first part of the condition assumes the existence of an exponential raw moment, which in turn is necessary to justify a Laplace-type approximation for a mean of $N$ i.i.d. variables, which in our case is $\bar{T}_{N_k}$ in Lemma \ref{limiting:Gaussianity} by setting $N:= N_k$. The second part of the condition is required to extend this approximation to asymptotically linear variables with an added $o_p(1)$ error term, as is seen in the left-hand side of the representation in Lemma \ref{limiting:Gaussianity}. \begin{condition} \label{moment:condition:0} Fix $$\mathrm{V}= ( \mathrm{y} - \alpha'\mathrm{d} -\beta'_{E_0}\mathrm{x}_{E_0})\cdot \begin{pmatrix} \mathrm{d}' & \mathrm{x}'\end{pmatrix}',$$ for a fixed subcollection of our covariates containing $E_0$. We assume that $$\mathbb{E}\left[\exp\left(\eta \|\mathrm{V}\|\right)\right] <\infty$$ for some $\eta\in \mathbb{R}^{+}$. For the observed set of covariates $E_k$ in the existing study $k$, consider the linearizable representation in Lemma \ref{limiting:Gaussianity}. We assume that the remainder term $R_{N_k}$ satisfies: \begin{equation*} \displaystyle\lim_{N_k\to \infty} \dfrac{1}{a_{N_k}^2}\cdot \log \mathbb{P}\left[a_{N_k}^{-1}\| R_{N_k} \| > \epsilon \right] =- \infty \ \ \ \ \ \ \ \text{ for every } \epsilon >0. \end{equation*} \end{condition} The probability of selection admits the following characterization in terms of $\sqrt{N_k}\begin{pmatrix} \widehat\gamma'_k & \widehat\Gamma'_k & \omega'_k\end{pmatrix}'$: $$ \mathbb{P}[\widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}]= \mathbb{P}[A\begin{pmatrix} \sqrt{N_k}\widehat\gamma'_k & \sqrt{N_k}\widehat\Gamma'_k & \omega'_k\end{pmatrix}' + o_p(1) \leq b], $$ where $A$ is a fixed matrix and $b$ is a fixed vector. This follows from the asymptotic polyhedral characterization of the selection event associated with the LASSO in previous work; we direct interested readers to \cite[Proposition 4.2][]{panigrahi2016integrative}. The next condition allows us to ignore the remainder term in the polyhedral characterization, which converges to $0$ in probability with increasing sample size. \begin{condition} \label{prob:sel:condition:0} We assume that the probability of selection satisfies $$\lim_{N_k \to \infty}\dfrac{1}{a_{N_k}^2}\Big\{\log \mathbb{P}[\widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}]- \log\mathbb{P}[A\begin{pmatrix} \sqrt{N_k}\widehat\gamma'_k & \sqrt{N_k}\widehat\Gamma'_k & \omega'_k\end{pmatrix}' \leq b] \Big\}=0.$$ \end{condition} The final condition bounds the deviation of the sample estimate $\widehat{\Sigma}_k$ from the population parameter $\mathbb{E}[\widehat{\Sigma}_k]$. \begin{condition} For the selected set $E_k$, we assume: $\mathbb{E}[\|\widehat\Sigma_{k}-\Sigma_k\|_{\text{op}}] = O(1)$, where $\| M \|_{\text{op}}$ denotes the operator norm of the matrix $M$.
\label{operator:norm:bdd:0} \end{condition} For the remaining section, consider the parameters $$\sqrt{N_k}\begin{pmatrix} \alpha' & \beta_{E_0}'\end{pmatrix}' = a_{N_k} \begin{pmatrix} (\alpha_0)' & (\beta_{0,E_0})'\end{pmatrix}'$$ in our generation scheme such that $\alpha_0$ and $\beta_{0,E_0}$ are constants and $a_{N_k}= O(\sqrt{N_k})$. \begin{theorem} \label{unbiasedness} Let $\widehat{\alpha}_k^{\text{\; carve}}$ be defined according to \eqref{eq:gamma-carve}. Under Conditions \ref{moment:condition:0}, \ref{prob:sel:condition:0} and \ref{operator:norm:bdd:0}, we have $$\mathbb{E}\left[ \|\sqrt{N_k}(\widehat{\alpha}_k^{\text{\; carve}} - \alpha) \|^2 \right]\leq a_{N_k}^{2} C,$$ where $C$ is a constant. \end{theorem} For the asymptotic variance of our estimator $\sqrt{N_k}\widehat{\alpha}_k^{\text{\; carve}}$, based on the Fisher information matrix, we have: $$\widehat{V}_{E_k}^{\; \text{carve}}= e_1'\left((1-r_k)^{-1}\widehat{\Sigma}^{-1}_{k} - (1-r_k)^{-2}r^2_k\left((1-r_k)^{-1}r_k\widehat{\Sigma}_{k} + \nabla^2 B_{s_{E_k}}(\sqrt{N_k} \widehat{z}_k)\right)^{-1}\right)e_1,$$ which we detail in Proposition \ref{Fisher:info} in Section \ref{Sec:proofs}. Lemma \ref{efficiency:gain:K1} then quantifies the relative gain in variance of our estimator over splitting within each study $k$. Here, $\widehat{V}^{\; \text{split}}_{E_k}$ denotes the estimated variance of the least squares estimate refitted on the validation data alone, and $\widehat{V}_{j,E_k}^{\; \text{carve}}$ and $\widehat{V}^{\; \text{split}}_{j,E_k}$ denote the $j$-th diagonal entries of these $s$-dimensional matrices, respectively. \begin{lemma} \label{efficiency:gain:K1} Let $r_k$ be the ratio $\frac{n_k}{N_k}$ and let $B_{\text{max}}$ be the maximum value of the $(\mathbb{R}^+)^{s+q_k}$-valued vector $$(s_{E_k} \sqrt{N_k}\widehat{z}_k)^{-2} - (1+s_{E_k} \sqrt{N_k}\widehat{z}_k)^{-2},$$ and let $\lambda_{\text{min}}$ be the smallest eigenvalue of $\widehat{\Sigma}_k$. Then, the following holds for $j\in \{1,\ldots, s\}$: $$\left(\widehat{V}^{\; \text{split}}_{j, E_k}\right)^{-1} (\widehat{V}^{\; \text{split}}_{j, E_k} - \widehat{V}^{\; \text{carve}}_{j, E_k}) \geq (1-r_k)^{-1} r^2_k \left((1-r_k)^{-1} r_k+ B_{\text{max}} \lambda_{\text{min}}^{-1} \right)^{-1}.$$ \end{lemma} Borrowing the summary information from the existing study $k$ after selection, our carved estimator $\widehat{\alpha}_k^{\text{\; carve}}$ dominates the split estimators in variance. The variance of the averaged carved estimator involves correlations between the refitted estimates from the selected models on any pair of existing studies in addition to the variances from each study. Heuristically, the correlation between a pair of split-based estimates using the validation data is expected to be larger than that between the carved counterparts, which further use the information from the independent samples of the associated existing studies. Consider, for example, the situation when each study reports the same model $$E_1=E_2=\cdots= E_K.$$ Let the bound on the right-hand side of Lemma \ref{efficiency:gain:K1} be equal to $B_k$.
Comparing the averaged carved estimator in \eqref{proposed estimate} with the splitting-based analog, the difference in the two variances from applying the proposal is bounded below by $$ \frac{1}{K^2}\left(\sum_{k=1}^K \frac{B_k}{1-B_k} \widehat{V}_{E_k}^{\; \text{carve}} + 2\sum_{k_1< k_2} \left(\frac{1}{\sqrt{(1-B_{k_1})(1-B_{k_2})}} -1\right) \sqrt{\widehat{V}_{E_{k_1}}^{\; \text{carve}}}\sqrt{\widehat{V}_{E_{k_2}}^{\; \text{carve}}}\right). $$ Note, this bound is strictly greater than $0$, since $B_k<r_k<1$. Our simulation studies in the next section empirically support the gain in efficiency we attain with our carved estimator through full use of the existing and validation data. \section{Simulation studies}\label{Sec:simulation} We report simulation results to evaluate the finite-sample performance of the proposed method. In our empirical analysis, the outcome variable is generated via a simple linear model: \begin{align}\label{eq:Simulation-no-D-model} &\nonumber Y_k = \alpha D_k+ X_{k, E_0}\beta_{E_0} + \varepsilon_k, \quad \varepsilon_k \sim N(0, \sigma^2_{\varepsilon} I_{n_k}), \quad \text{for }k=1, \ldots,K,\\ & Y = \alpha D+ X_{E_0}\beta_{E_0} + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2_{\varepsilon}I_{n}). \end{align} We set the coefficients $\alpha = 1$ and $\beta_{E_0} = (1.5, 1, 1, 1, 1)'$. That is, for each study $k$, $X_{k, E_0}$ consists of the first 5 columns of $X_k$. The dimension of the covariates varies with $k$ as $p_k = 400 + 20k$ and $p=500$. We use the sample sizes $n_1 = \ldots = n_K= 100$, where $K \in \{ 2,3,5,10\} $, and the validation study has $n=50$. Our data are generated under two signal-to-noise-ratio values based on $\sigma_{\varepsilon} \in \{ 2,4\}$. We summarize the performance of our carved estimator in three different settings. \begin{enumerate} \setlength\itemsep{1em} \item \label{setting:1} Setting I. \; In the first setting, there is a moderate degree of correlation between the treatment assignment and the covariates. We generate $(D_k, X_k)$ and $(D, X)$ from a multivariate normal distribution $ N(0,\Sigma)$ where $\Sigma = \big( \Sigma_{jl}\big)_{j,l=1}^{p_k+1}$ and $\Sigma_{jl} = 0.5^{|j-l|}$. \item \label{setting:2} Setting II. \; The second setting generates highly correlated treatment and covariates by first generating \begin{align}\label{eq:Simulation-D-model} & D = X_{ M_0}\gamma_{M_0} + \nu,\ \nu \sim N(0, \sigma_{\nu}^2I_{n}), \quad D_k = X_{k, M_0}\gamma_{M_0} + \nu_k,\ \nu_k \sim N(0, \sigma_{\nu}^2I_{n_k}), \end{align} followed by drawing the response according to \eqref{eq:Simulation-no-D-model}. In the model \eqref{eq:Simulation-D-model} for generating the treatment variable, we fix $M_0 = \{5,6\}$, $\gamma_{M_0} = (1,1)'$, and fix the noise variance $\sigma_{\nu}^2 = 0.1$. \item \label{setting:3} Setting III. \; In the final setting, we vary the coefficients of our covariates for generating the data in the existing and validation studies. We simulate the shift in covariate effects across the existing and validation data as follows. We draw our data according to the generating scheme described for Setting II, except now we set the value of covariate coefficients $\beta_{E_0} = (1.5, 1, 1, 1, 0)'$ in the model \eqref{eq:Simulation-no-D-model} to draw the response in our validation study. That is, the coefficient of one of the confounders has changed between all the existing studies and the validation study in the follow-up stage.
\end{enumerate} We compare the performance of the proposed carved estimator with the split-and-aggregate estimate and the aggregated debiased LASSO estimator discussed in Remarks \ref{remark:split-and-aggregate} and \ref{remark:debias-and-aggregate}. Given that the debiased LASSO and the post-double selection estimates are asymptotically equivalent \citep{wang2020debiased}, we implement the debiased LASSO estimator via the \texttt{R} package \texttt{hdm} for its computational speed. We found that in a small number of cases, the package \texttt{hdm} produced numerically unstable results, and therefore, we report only the median bias and the median squared errors, instead of the averages that would be heavily against the debiased LASSO estimator due to a few unstable cases. We report both the mean and median bias and squared errors for our carved estimator and the split-and-aggregate estimator. The comparison of our estimator with debiased LASSO should be made with respect to the corresponding medians, even though we provided mean for the carved and the split estimators for a relative comparison between these two estimators. \begin{table}[h!] \def~{\hphantom{0}} \centering \resizebox{\columnwidth}{!}{\begin{tabular}{cccccccccccccc} & & \multicolumn{5}{c}{Bias} & \multicolumn{5}{c}{MSE} \\ $\sigma_{\varepsilon} $ & $K$ & \multicolumn{2}{c}{Carved} & \multicolumn{2}{c}{Split} & Debiased & \multicolumn{2}{c}{Carved} & \multicolumn{2}{c}{Split} & Debiased \\ 4 & 3 & $0.107$ & $0.095$ & $0.119$ & $0.103$ & $-0.051$ & $0.109$ & $0.042$ & $0.248$ & $0.103$ & $0.044$ \\ 4 & 5 & $0.117$ & $0.122$ & $0.123$ & $0.109$ & $-0.028$ & $0.065$ & $0.032$ & $0.157$ & $0.077$ & $0.024$ \\ 4 & 10 & $0.127$ & $0.125$ & $0.124$ & $0.124$ & $-0.046$ & $0.043$ & $0.021$ & $0.083$ & $0.043$ & $0.014$ \\ 2 & 3 & $-0.003$ & $0.005$ & $0.010$ & $-0.001$ & $-0.076$ & $0.017$ & $0.006$ & $0.042$ & $0.017$ & $0.013$ \\ 2 & 5 & $0.002$ & $-0.002$ & $0.012$ & $0.009$ & $-0.061$ & $0.010$ & $0.004$ & $0.024$ & $0.009$ & $0.008$ \\ 2 & 10 & $0.004$ & $0.003$ & $0.010$ & $0.011$ & $-0.051$ & $0.004$ & $0.002$ & $0.012$ & $0.006$ & $0.005$ \\ \end{tabular}} \vspace{0.2cm} \caption{\normalfont{Simulation results under Setting I. The two columns under the Carved and Split estimators report the mean and median bias and squared errors respectively. The cells in the single column under Debiased report the median bias and squared errors, due to reasons indicated in the description.}} \label{tab:1} \end{table} \begin{table}[h!] 
\def~{\hphantom{0}} \centering \resizebox{\columnwidth}{!}{\begin{tabular}{cccccccccccccc} & & \multicolumn{5}{c}{Bias} & \multicolumn{5}{c}{MSE} \\ $\sigma_{\varepsilon} $ & $K$ & \multicolumn{2}{c}{Carved} & \multicolumn{2}{c}{Split} & Debiased & \multicolumn{2}{c}{Carved} & \multicolumn{2}{c}{Split} & Debiased \\ 4 & 3 & $0.066$ & $0.065$ & $0.088$ & $0.089$ & $-0.081$ & $0.024$ & $0.011$ & $0.056$ & $0.026$ & $0.059$ \\ 4 & 5 & $0.066$ & $0.072$ & $0.069$ & $0.063$ & $-0.083$ & $0.016$ & $0.007$ & $0.033$ & $0.013$ & $0.045$ \\ 4 & 10 & $0.070$ & $0.072$ & $0.079$ & $0.079$ & $-0.079$ & $0.011$ & $0.006$ & $0.022$ & $0.009$ & $0.016$ \\ 2 & 3 & $0.016$ & $0.012$ & $0.026$ & $0.023$ & $-0.089$ & $0.005$ & $0.002$ & $0.012$ & $0.005$ & $0.020$ \\ 2 & 5 & $0.016$ & $0.018$ & $0.019$ & $0.017$ & $-0.075$ & $0.005$ & $0.001$ & $0.007$ & $0.003$ & $0.013$ \\ 2 & 10 & $0.016$ & $0.015$ & $0.020$ & $0.021$ & $-0.061$ & $0.002$ & $0.001$ & $0.004$ & $0.002$ & $0.006$ \\ \end{tabular}} \vspace{0.2cm} \caption{\normalfont{Simulation results under Setting II. The two columns under the Carved and Split estimators report the mean and median bias and squared errors respectively. The cells in the single column under Debiased report the median bias and squared errors.}} \label{tab:2} \end{table} \begin{table}[h!] \def~{\hphantom{0}} \centering \resizebox{\columnwidth}{!}{\begin{tabular}{cccccccccccccc} & & \multicolumn{5}{c}{Bias} & \multicolumn{5}{c}{MSE} \\ $\sigma_{\varepsilon} $ & $K$ & \multicolumn{2}{c}{Carved} & \multicolumn{2}{c}{Split} & Debiased & \multicolumn{2}{c}{Carved} & \multicolumn{2}{c}{Split} & Debiased \\ 4 & 3 & $0.102$ & $0.103$ & $-0.179 $ & $-0.164$ & $-0.096$ & $0.053$ & $0.020$ & $0.084$ & $0.036$ & $0.067$ \\ 4 & 5 & $0.110$ & $0.112$ & $-0.170$ & $-0.167$ & $-0.091$ & $0.042$ & $0.015$ & $0.067$ & $0.033$ & $0.042$ \\ 4 & 10 & $0.092$ & $0.100$ & $-0.163$ & $-0.162$ & $-0.092$ & $0.023$ & $0.011$ & $0.042$ & $0.027$ & $0.026$ \\ 2 & 3 & $0.028$ & $0.024$ & $-0.156$ & $-0.151$ & $-0.079$ & $0.008$ & $0.003$ & $0.042$ & $0.024$ & $0.018$ \\ 2 & 5 & $0.024$ & $0.034$ & $-0.156$ & $-0.159$ & $-0.075$ & $0.016$ & $0.002$ & $0.034$ & $0.025$ & $0.013$ \\ 2 & 10 & $0.029$ & $0.029$ & $-0.152$ & $-0.151$ & $-0.067$ & $0.003$ & $0.001$ & $0.038$ & $0.023$ & $0.008$ \\ \end{tabular}} \vspace{0.2cm} \caption{\normalfont{Simulation results under Setting III. The two columns under the Carved and Split estimators report the mean and median bias and squared errors respectively. The cells in the single column under Debiased report the median bias and squared errors.}} \label{tab:3} \end{table} The cells in Tables \ref{tab:1}, \ref{tab:2}, and \ref{tab:3} summarize our findings in the three different settings. In all the tables, our proposed estimator is indicated as ``Carved", while the split-and-aggregate estimator is called ``Splitting'' and the aggregated debiased LASSO estimator is called ``Debiased". Based on Table \ref{tab:1}, we observe that when the noise level is low and the treatment variable is not highly correlated with the covariates, the selected model includes all the confounding variables with a high probability, and therefore, both the carved estimator and the split-and-aggregate estimator have little bias in estimating $\alpha$. When the noise level is higher, the two estimators tend to have slightly more bias than the debiased LASSO estimator. But, the bias of the estimators is still dominated by their variance in the mean squared errors. 
On the other hand, the carved estimator has smaller mean-squared error than the split estimator in all the considered cases, which is expected since the carved estimator integrates information from the existing studies with higher efficiency. We note that the aggregated debiased LASSO estimator loses much efficiency relative to the aggregated carved estimator when the treatment variable is highly correlated with some of the covariates, in Tables \ref{tab:2} and \ref{tab:3}. The empirical comparisons here are in line with what we expect from our theoretical investigation in the paper. \section{Proofs of main results} \label{Sec:proofs} \subsection{Carved Estimator: illustrative case analysis} We complete the details in Section \ref{Sec:example} to present the rationale behind our estimator in the illustrative example. Assuming that the limiting properties of $\omega_1$ and $\widehat{\gamma}_1$ in \ref{prop:1} and \ref{prop:2} hold exactly, we state \begin{proposition} \label{exact:UMVU:simple} Let $\widehat\alpha_1^{\text{\;carve}}$ assume the value \begin{equation*} \begin{aligned} & \widehat{\alpha}_1 + \Bigg\{ 1_{\mathbb{R}^+}(s_{E_1})\cdot (1-\rho^2)^{-1/2}\dfrac{\rho}{\sqrt{N_1}} \left(\bar{\Phi}\left(\dfrac{(\lambda-(1-\rho^2) \sqrt{N_1}\widehat{\beta}_1)}{\sqrt{(1-\rho^2)}}\right)\right)^{-1}{\phi\left(\dfrac{(\lambda-(1-\rho^2) \sqrt{N_1}\widehat{\beta}_1)}{\sqrt{(1-\rho^2)}}\right)} \\ & - 1_{\mathbb{R}^-}(s_{E_1}) \cdot (1-\rho^2)^{-1/2}\dfrac{\rho}{\sqrt{N_1}} \left(\Phi\left(\dfrac{(-\lambda-(1-\rho^2) \sqrt{N_1}\widehat{\beta}_1)}{\sqrt{(1-\rho^2)}}\right)\right)^{-1}{\phi\left(\dfrac{(\lambda+(1-\rho^2) \sqrt{N_1}\widehat{\beta}_1)}{\sqrt{(1-\rho^2)}}\right)}\Bigg\}. \end{aligned} \end{equation*} Then, the following holds $$\mathbb{E}\left[\widehat{\alpha}_1^{\text{\; carve}}\right] = \alpha.$$ \end{proposition} \begin{proof} Our proof proceeds in three steps. \smallskip \emph{Step 1}: \ \ Fix $$\widehat\alpha_1^{\text{\;initial}}=\widehat{\alpha}_1 - (1-\rho^2)^{-1} \dfrac{1}{\sqrt{N_1}}\cdot (\omega_{1,1}- \rho \omega_{1,2}).$$ First, we note that $$ \mathbb{E}\left[ \widehat\alpha_1^{\text{\; initial}}\; \Big\lvert \; \widehat{E}_1 = E_1, \;\text{sign}(\widehat{\beta}^{(L)}_1) = s_{E_1}\right] = \alpha. $$ This conclusion follows directly from writing our conditioning event as: \begin{equation} \label{cond:event:simple} \left\{ \widehat{E}_1 = E_1, \text{sign}(\widehat{\beta}^{(L)}_1) =s_{E_1} \right\} = \left\{s_{E_1} (\omega_{1,2}-\rho\omega_{1,1})+s_{E_1} (1-\rho^2) \sqrt{N_1}\widehat{\beta}_1 -\lambda >0\right\}. \end{equation} Observe now that the initial estimate is indeed orthogonal to: $$s_{E_1} (\omega_{1,2}-\rho\omega_{1,1})+s_{E_1} (1-\rho^2) \sqrt{N_1}\widehat{\beta}_1 -\lambda.$$ We thus note that $\widehat\alpha_1^{\text{\; initial}}$ is independent of our conditioning event, which then gives us the initial unbiased estimate. 
\emph{Step 2}: \ \ Conditioning further upon $\widehat{\gamma}_1$ provides us the estimate: \begin{equation*} \begin{aligned} &\widehat\alpha_1^{\text{\;carve}} = \mathbb{E}\Big[\widehat{\alpha}_1 -(1-\rho^2)^{-1}\dfrac{1}{\sqrt{N_1}}\cdot(\omega_{1,1}- \rho \omega_{1,2})\ \Big\lvert \widehat{\gamma}_1, s_{E_1} (\omega_{1,2}-\rho\omega_{1,1})\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\; +s_{E_1} (1-\rho^2) \sqrt{N_1}\widehat{\beta}_1 -\lambda>0\Big]\\ &= \widehat{\alpha}_1 - (1-\rho^2)^{-1} \dfrac{1}{\sqrt{N_1}}\cdot\mathbb{E}\Big[\omega_{1,1}- \rho \omega_{1,2} \ \lvert \ \widehat{\gamma}_1,\; s_{E_1}(\omega_{1,2}-\rho\omega_{1,1})\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+s_{E_1}(1-\rho^2) \sqrt{N_1}\widehat{\beta}_1 -\lambda>0\Big], \end{aligned} \end{equation*} an improvement over $\widehat\alpha_1^{\text{\; initial}}$ through Rao-Blackwellization. \emph{Step 3}: \ \ We are left to compute the conditional expectation: $$\mathbb{E}\Big[\omega_{1,1}- \rho \omega_{1,2} \ \lvert \ \widehat{\gamma}_1,\; s_{E_1}(\omega_{1,2}-\rho\omega_{1,1})+s_{E_1}(1-\rho^2) \sqrt{N_1}\widehat{\beta}_1 -\lambda>0\Big],$$ that yields us the expression for our estimate in our illustrative instance. By construct, we have: $$\mathbb{E}\left[\widehat{\alpha}_1^{\text{\; carve}}\;\Big\lvert \; \widehat{E}_1 = E_1, \;\text{sign}(\widehat{\beta}^{(L)}_1) = s_{E_1}\right]=\alpha.$$ Our claim in the Proposition now follows from the tower property of expectation. \end{proof} \subsection{Carved Estimator: general framework} We give the proofs of the main results in Section \ref{Sec:theory}. \begin{proof}[Proof of Lemma \ref{limiting:Gaussianity}] First, define the symbol: $$\widehat{\Sigma}_{-k,k} = \frac{1}{N_k}\left\{X_{E_k^c}' \begin{pmatrix} D & X_{E_k} \end{pmatrix} + X_{k,E_k^c}' \begin{pmatrix} D_k & X_{k, E_k} \end{pmatrix}\right\}$$ and let $\Sigma_{-k,k}=\mathbb{E}[\widehat\Sigma_{-k,k}]$. Further, let $\Sigma_k =\mathbb{E}[\widehat{\Sigma}_k]$ where $$\widehat{\Sigma}_{k} = \frac{1}{N_k}\left(n_k \widehat{\Xi}_{k} + \begin{pmatrix} D & X_{E_k} \end{pmatrix}' \begin{pmatrix} D & X_{E_k} \end{pmatrix}\right).$$ In the claimed representation, we have: \begin{equation*} \begin{aligned} \bar{T}_{N_k} &= \frac{1}{N_k} \begin{pmatrix} \Sigma^{-1}_k \begin{pmatrix} D & X_{E_k} \end{pmatrix}'(y-D \alpha - X_{E_k}\beta_{E_k}) \\ (X'_{E_k^c}-\Sigma_{-k,k}\Sigma^{-1}_{k}\begin{pmatrix} D & X_{E_k} \end{pmatrix}')(y-D \alpha - X_{E_k}\beta_{E_k})\\ -\begin{pmatrix} D & X \end{pmatrix}'(y-D \alpha - X_{E_k}\beta_{E_k}) \end{pmatrix}\\ &\;\;\;\;\;+ \frac{1}{N_k}\begin{pmatrix} \Sigma^{-1}_k \begin{pmatrix} D_k & X_{k,E_k} \end{pmatrix}'(y_k-D_k \alpha - X_{k,E_k}\beta_{E_k}) \\ (X'_{k,E_k^c}-\Sigma_{-k,k}\Sigma^{-1}_{k}\begin{pmatrix} D_k & X_{k,E_k} \end{pmatrix}')(y_k-D_k \alpha - X_{k,E_k}\beta_{E_k})\\ \frac{1}{r_k} \begin{pmatrix} D_k & X_{k} \end{pmatrix}'(y_k-D_k \alpha - X_{E_k}\beta_{E_k}) -\begin{pmatrix} D_k & X_{k} \end{pmatrix}' (y_k-D_k \alpha - X_{k,E_k}\beta_{E_k})\end{pmatrix}. \end{aligned} \end{equation*} The proof of the Lemma closely follows \cite[Proposition 4.1][]{panigrahi2016integrative} and hence we omit further details here. \end{proof} Towards the proof for Theorem \ref{exact:UMVU}, we state a useful Lemma next. 
Let $$\widetilde{\gamma}_k = \begin{pmatrix}(\widehat{\gamma}^{(L)}_k)' & 0'_{p_k-q_k}\end{pmatrix}' $$ be the LASSO solution based on \eqref{eq:lasso}, where $\widehat{\gamma}^{(L)}_k$ collects the non-zero components of the LASSO solution. Let $Z_k$ collect the components of $$\partial_{\widetilde{\gamma}_k} \|\beta\|_1,$$ the subgradient of the $\ell_1$ penalty at the solution, which are not present in the active set selected by the LASSO. \begin{lemma} As $N_k\to \infty$, the variables $\widehat{\gamma}_k$ and $\widehat{\gamma}^{(L)}_k$ are asymptotically independent of $\widehat{\Gamma}_k$ and $Z_k$. Further, the limiting Gaussian likelihood for $\widehat{\gamma}_k$ and $\widehat{\gamma}^{(L)}_k$ agrees with: \begin{equation*} \begin{aligned} & \exp\Bigg(-\frac{N_k}{2}(\widehat{\gamma}_k- \gamma_k)' \Sigma_k(\widehat{\gamma}_k- \gamma_k)\Bigg)\cdot \exp\Bigg(-(1-r_k)^{-1}r_k\frac{N_k}{2}\left(\widehat{\gamma}^{(L)}_k- \widehat\gamma_k + \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k\right)' \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \Sigma_k\left(\widehat{\gamma}^{(L)}_k- \widehat\gamma_k + \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k\right)\Bigg) \end{aligned} \end{equation*} up to a constant of proportionality. \label{lemma:limit:properties} \end{lemma} \begin{proof} Based on the randomization variable in \eqref{randomization:gen}, we consider the following optimization: \begin{equation} \label{lasso:randomized:rep:gen} \text{minimize}_{\alpha, \beta} \;\; \dfrac{1}{2\sqrt{N_k}}\|Y -D \alpha - X\beta \|^2 + \dfrac{1}{2\sqrt{N_k}}\|Y_k - D_k \alpha- X_k\beta \|^2- \omega_{k,1}^T \alpha -\omega_{k,2}^T \beta + \|\Lambda \beta\|_1. \end{equation} Fixing some symbols, we let $$\zeta_k = \begin{pmatrix} 0'_{s} & (\Lambda_{E_k} s_{E_k})' \end{pmatrix}',$$ $$\widehat{\Sigma}_{-k,k} = \frac{1}{N_k}\left\{X_{E_k^c}' \begin{pmatrix} D & X_{E_k} \end{pmatrix} + X_{k,E_k^c}' \begin{pmatrix} D_k & X_{k, E_k} \end{pmatrix}\right\}.$$ The above optimization is equivalent to the LASSO in \eqref{eq:lasso} in terms of the K.K.T. mapping at the solution $\widehat{\gamma}^{(L)}_k$. Then, the K.K.T. mapping of \eqref{lasso:randomized:rep:gen} is given by: \begin{equation} \label{stationary:map} \omega_{k}=\begin{bmatrix}-\widehat{\Sigma}_{k} & 0\\ -\widehat{\Sigma}_{-k,k} & -I \end{bmatrix}\sqrt{N_k}\begin{pmatrix} \widehat{\gamma}_k \\ \widehat{\Gamma}_{k} \end{pmatrix} + \begin{bmatrix} \widehat{\Sigma}_{k}& 0 \\ \widehat{\Sigma}_{-k,k} & \Lambda_{E_k^c}\end{bmatrix} \begin{pmatrix} \sqrt{N_k} \widehat{\gamma}^{(L)}_k \\ Z_k \end{pmatrix} + \begin{pmatrix} \zeta_k \\ 0 \end{pmatrix}, \end{equation} such that the variables $\widehat{\gamma}^{(L)}_k$ and $Z_k$ from solving the LASSO satisfy: $$\|Z_k \|_{\infty}< 1, \ \text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}.$$ Let $\Sigma_k =\mathbb{E}[\widehat{\Sigma}_k]$ and $\Sigma_{-k,k}=\mathbb{E}[\widehat\Sigma_{-k,k}]$, the limiting values of the sample estimates. Using \eqref{stationary:map} together with the fact that $\widehat\Sigma_{k}- \Sigma_k= o_p(1)$, $\widehat\Sigma_{-k,k}- \Sigma_{-k,k}=o_p(1)$ in our i.i.d.
framework, we deduce that the variables $\widehat{\gamma}^{(L)}_k$ and $Z_k$ in this map admit the asymptotic representation: \begin{equation} \label{aymp:rep} \begin{aligned} \sqrt{N}_k\widehat{\gamma}^{(L)}_k &=\Sigma^{-1}_{k}\omega_{k, E} + \sqrt{N_k}\widehat{\gamma}_k -\Sigma^{-1}_{k}\zeta_k + o_p(1);\\ Z_k &=(\Lambda_{E_k^c})^{-1}(\omega_{k, E^c} -\Sigma_{-k,k}\Sigma^{-1}_{k}\omega_{k, E} + \Sigma_{-k,k}\Sigma^{-1}_{k}\zeta_k +\sqrt{N}_k \widehat\Gamma_k) + o_p(1). \end{aligned} \end{equation} Using \eqref{aymp:rep}, we easily verify the claim in the Lemma. \end{proof} We are ready to provide the proof of Theorem \ref{exact:UMVU}. In particular, the limiting distribution in Lemma \ref{limiting:Gaussianity} yields us an asymptotic debiasing term for the refitted least squares estimate after we condition out the observed event of selection. All the expectations in the following proof are with respect to this limiting distribution. \begin{proof}\emph{Theorem \ref{exact:UMVU}}. We consider the following initial estimator: $$ \widehat\alpha_k^{\text{\; initial}} = (1-r_k)^{-1} \widehat\alpha_k - (1-r_k)^{-1} r_k e_1'\left( \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k +\widehat{\gamma}^{(L)}_{k} \right). $$ Observe, we have $$\mathbb{E}\left[ \widehat\alpha_k^{\text{\; initial}} \; \Big\lvert \; \widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}\right] = \alpha.$$ This is because: \begin{equation*} \begin{aligned} & \mathbb{E}\left[ \widehat\alpha_k^{\text{\; initial}} \; \Big\lvert \; \widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}\right]\\ &= \mathbb{E}\left[ (1-r_k)^{-1} \widehat\alpha_k - (1-r_k)^{-1} r_k e_1'\left\{ \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k +\widehat{\gamma}^{(L)}_{k}\right\}\right]\\ &= (1-r_k)^{-1} \mathbb{E}\left[ \widehat\alpha_k\right] - (1-r_k)^{-1} r_k e_1'\left\{\frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k + \mathbb{E}\left[ \mathbb{E}\left[\widehat{\gamma}^{(L)}_{k}\;\lvert \; \widehat\gamma_k\right]\right]\right\}\\ &= (1-r_k)^{-1} \mathbb{E}\left[ \widehat\alpha_k\right] - (1-r_k)^{-1} r_k e_1' \left\{\frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k + \mathbb{E}\left[ \widehat\gamma_k - \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k \right]\right\}\\ &=(1-r_k)^{-1} \mathbb{E}\left[ \widehat\alpha_k\right] - (1-r_k)^{-1} r_k e_1'\mathbb{E}\left[\widehat\gamma_k\right] = \alpha. \end{aligned} \end{equation*} Note, the first display uses the characterization of the underlying selection event directly in terms of the variables $Z_k$ and $\widehat{\beta}^{(L)}_k$ based on the K.K.T. mapping in \eqref{stationary:map}. That is, the conditional expectation of the initial estimate equals $$ \mathbb{E}\left[ (1-r_k)^{-1} \widehat\alpha_k - (1-r_k)^{-1} r_k e_1'\left\{ \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k +\widehat{\gamma}^{(L)}_{k} \right\} \; \Big\lvert \; \|Z_k \|_{\infty}\leq 1, \ \text{sign}(\widehat{\beta}^{(L)}_k)= s_{E_k}\right]. $$ The above expression is equal to: $$ \mathbb{E}\left[ (1-r_k)^{-1} \widehat\alpha_k - (1-r_k)^{-1} r_k e_1'\left\{ \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k +\widehat{\gamma}^{(L)}_{k} \right\}\right], $$ due to the independence between $Z_k$ and $\widehat\alpha_k^{\text{\; initial}}$ and between $\widehat{\gamma}^{(L)}_{k}$ and $\widehat\alpha_k^{\text{\; initial}}$ in the asymptotic limit, as proved in Lemma \ref{lemma:limit:properties}. The second display uses the tower property of expectation. 
Finally, using Lemma \ref{lemma:limit:properties} again, we have: $$\mathbb{E}\left[\widehat{\gamma}^{(L)}_{k}\;\Big\lvert \; \widehat\gamma_k\right]= \widehat\gamma_k - \frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k,$$ in the penultimate display. Conditioning further upon $\widehat\gamma_k$, the complete sufficient statistic, our estimate in the claim is equal to: \begin{equation} \label{UB:final} \widehat{\alpha}_k^{\text{\; carve}} = \mathbb{E}\left[ \widehat\alpha_k^{\text{\; initial}}\; \Big\lvert \; \widehat\gamma_k,\; \widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}\right]. \end{equation} Obtaining an expression for the right-hand side estimate in \eqref{UB:final}, we have \begin{equation*} \begin{aligned} & \mathbb{E}\left[ \widehat\alpha_k^{\text{\; initial}} \; \Big\lvert \; \widehat\gamma_k,\; \widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k} \right]\\ &= (1-r_k)^{-1}\cdot \widehat\alpha_k - (1-r_k)^{-1} r_k e_1'\frac{1}{\sqrt{N_k}}\Sigma^{-1}_{k} \zeta_k\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;- (1-r_k)^{-1} r_k\frac{1}{\sqrt{N_k}}\cdot e_1'\mathbb{E}\left[ \sqrt{N_k}\widehat{\gamma}^{(L)}_{k}\; \Big\lvert \; \widehat\gamma_k,\;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k} \right]. \end{aligned} \end{equation*} Because $$\mathbb{E}\left[ \sqrt{N_k}\widehat{\gamma}^{(L)}_{k}\; \Big\lvert \; \widehat\gamma_k,\;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k} \right]=\mu_{\mathcal{H}}\left(\sqrt{N_k}\widehat\gamma_k -\Sigma^{-1}_{k} \zeta_k, r_k^{-1}(1-r_k)\Sigma^{-1}_k\right),$$ we obtain the expression of the estimator in the claim. \end{proof} \subsection{Asymptotic properties of carved estimator} We give the technical details to justify the asymptotic debiasing factor for our estimator. \smallskip First, we present a limiting value for the probability of selection in Proposition \ref{conv:log:partition} setting $\gamma_k = \begin{pmatrix} \alpha' & \beta'_{E_k} \end{pmatrix}'$ such that $$\sqrt{N_k}\gamma_k = a_{N_k}\gamma_0,$$ this serves as a supporting result along establishing the asymptotic unbiasedness of our estimate. Below we use $C_0$ to denote a constant free of $\gamma_0$ and $N_k$. \begin{proposition} \label{conv:log:partition} Consider assumptions \ref{moment:condition:0} and \ref{prob:sel:condition:0}. Then, as a function of the parameters $\gamma_0$, the probability of selection assumes the following limiting value: \begin{equation*} \begin{aligned} &\lim_{N_k\to \infty} \frac{1}{(a_{N_k})^{2}}\log \mathbb{P}\left[\widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}\right]+\textstyle\inf_{\widetilde{\gamma}, \widetilde{z}}\Big\{ \dfrac{1}{2}(\widetilde{\gamma}- \gamma_0)' \Sigma_{k} (\widetilde{\gamma}- \gamma_0) \\ &+ \dfrac{1}{2}(1-r_k)^{-1}r_k (\widetilde{z}-\widetilde{\gamma}+ \frac{1}{a_{N_k}}\Sigma_{k}^{-1}\zeta_k)'\Sigma_{k} (\widetilde{z}-\widetilde{\gamma} + \frac{1}{a_{N_k}} \Sigma_{k}^{-1}\zeta_k)+ \dfrac{1}{(a_{N_k})^2}B_{s_{E_k}}(a_{N_k}\widetilde{z})\Big\} -C_0=0. \end{aligned} \end{equation*} \end{proposition} \begin{proof} We have $$\lim_{N_k\to \infty}\frac{1}{(a_{N_k})^{2}}\left\{\log \mathbb{P}[\widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}]-\log \mathbb{P}[ \|Z_k \|_{\infty}\leq 1, \ \text{sign}(\widehat{\beta}^{(L)}_k)= s_{E_k}]\right\}=0$$ from \eqref{stationary:map}. We use the representation of the variables $Z_k$ and $\widehat{\gamma}^{(L)}_k$ based on Lemma \ref{lemma:limit:properties}. 
Then, the conditions in \ref{moment:condition:0} and \ref{prob:sel:condition:0} allow us to write the term $$\lim_{N_k\to \infty}\frac{1}{(a_{N_k})^{2}} \log \mathbb{P}[ \|Z_k \|_{\infty}\leq 1, \ \text{sign}(\widehat{\beta}^{(L)}_k)= s_{E_k}]$$ on the left-hand side as: \begin{equation*} \begin{aligned} & \lim_{N_k\to \infty}\frac{1}{(a_{N_k})^{2}}\Big\{\log \mathbb{P}\left[ \text{sign}(e_2'(\Sigma^{-1}_{k}\omega_{k, E} + \sqrt{N_k}\widehat{\gamma}_k -\Sigma^{-1}_{k}\zeta_k)) = s_{E_k}\right]\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \log \mathbb{P}\left[ \|(\Lambda_{E_k^c})^{-1}(\omega_{k, E^c} -\Sigma_{-k,k}\Sigma^{-1}_{k}\omega_{k, E} + \Sigma_{-k,k}\Sigma^{-1}_{k}\zeta_k +\sqrt{N}_k \widehat\Gamma_k)\|_{\infty}\leq 1\right]\Big\}\\ &= \lim_{N_k\to \infty} T_{1,n} + T_{2,n} \end{aligned} \end{equation*} where, for $v_1 \in \mathbb{R}^s$ and $v_2 \in \mathbb{R}^{p_k}$, $e_2' \begin{pmatrix} v_1 & v_2 \end{pmatrix} = v_2$. Now, note that the limiting distribution of the variables in $T_{2,n}$ is free from $\gamma_0$, and that this limit exists by applying a large deviation limit for the probability; call the limiting value $\lim_{N_k\to \infty} T_{2,n}$ as $C_0$. Examining $\lim_{N_k\to \infty} T_{1,n} $, we have the following large-deviation limit $$\lim_{N_k\to \infty}\frac{1}{(a_{N_k})^{2}}\log \mathbb{P}\left[ \text{sign}(e_2'(\Sigma^{-1}_{k}\omega_{k, E} + \sqrt{N_k}\widehat{\gamma}_k -\Sigma^{-1}_{k}\zeta_k)) = s_{E_k}\right]$$ $$ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= -\textstyle\inf_{\widetilde{\gamma}, w_E: \text{sign}(e_2'(\Sigma^{-1}_{k}w_E + \widetilde{\gamma})) = s_{E_k}} \ \Big\{\dfrac{1}{2}(\widetilde{\gamma}- \gamma_0)' \Sigma_{k} (\widetilde{\gamma}- \gamma_0) + (1-r_k)^{-1}r_k\dfrac{1}{2}w_E' \Sigma^{-1}_{k} w_E\Big\} $$ up to an additive constant under the assumption \ref{moment:condition:0}. Applying a reparameterization in the optimizing variables $w_E\to \widetilde{z}$ such that $\widetilde{z} =\Sigma^{-1}_{k}w_{E} + \widetilde{\gamma} $, the limiting value of the probability of selection equals \begin{equation*} \begin{aligned} & -\textstyle\inf_{\widetilde{\gamma}, \widetilde{z}: \text{sign}(\widetilde{z}_2)=s_{E_k}} \ \Big\{\dfrac{1}{2}(\widetilde{\gamma}- \gamma_0)' \Sigma_{k} (\widetilde{\gamma}- \gamma_0) + \dfrac{1}{2}(1-r_k)^{-1}r_k (\widetilde{z}-\widetilde{\gamma})'\Sigma_{k} (\widetilde{z}-\widetilde{\gamma})\Big\} + C_0. \end{aligned} \end{equation*} Because of the convexity of the limiting optimization objective, we deduce: \begin{equation*} \begin{aligned} &\lim_{N_k\to \infty} \Big\{\frac{1}{(a_{N_k})^{2}}\log \mathbb{P}\left[\widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}\right]+\textstyle\inf_{\widetilde{\gamma}, \widetilde{z}}\Big\{ \dfrac{1}{2}(\widetilde{\gamma}- \gamma_0)' \Sigma_{k} (\widetilde{\gamma}- \gamma_0) \\ &+ \dfrac{1}{2}(1-r_k)^{-1}r_k (\widetilde{z}-\widetilde{\gamma}+ \frac{1}{a_{N_k}}\Sigma_{k}^{-1}\zeta_k)'\Sigma_{k} (\widetilde{z}-\widetilde{\gamma} + \frac{1}{a_{N_k}} \Sigma_{k}^{-1}\zeta_k)+ \dfrac{1}{(a_{N_k})^2}B_{s_{E_k}}(a_{N_k}\widetilde{z})\Big\} -C_0=0. 
\end{aligned} \end{equation*} \end{proof} \begin{proof}\emph{Theorem \ref{unbiasedness}.} To proceed with the proof, we define the following function $\widetilde{\mathcal{P}}_{N_k}(\cdot)$: \begin{equation} \label{P:D} \widetilde{\mathcal{P}}_{N_k}(\eta_0) = \textstyle\sup_{\widetilde\gamma} \widetilde\gamma' \eta_0 -D_{N_k}(\widetilde\gamma), \end{equation} which is the convex conjugate for \begin{equation*} \begin{aligned} D_{N_k}(\widetilde\gamma)&= \dfrac{1}{2}\widetilde\gamma' \widehat{\Sigma}_k \widetilde\gamma +\textstyle\inf_{\widetilde{z}} \Big\{\dfrac{1}{2}(1-r_k)^{-1}r_k (\widetilde{z}-\widetilde\gamma+ \frac{1}{a_{N_k}}\widehat{\Sigma}_{k}^{-1}\zeta_k)'\widehat{\Sigma}_{k} (\widetilde{z}-\widetilde\gamma + \frac{1}{a_{N_k}} \widehat{\Sigma}_{k}^{-1}\zeta_k) \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \dfrac{1}{(a_{N_k})^2}B_{s_{E_k}}(a_{N_k}\widetilde{z})\Big\}. \end{aligned} \end{equation*} Then, our carved estimator can be represented as: \begin{equation} \label{est:eqn} \frac{1}{a_{N_k}}\sqrt{N_k}\widehat{\alpha}_k^{\text{\; carve}} = e_1'\widehat\Sigma^{-1}_{k} \nabla D_{N_k}\left(\frac{1}{a_{N_k}}\sqrt{N_k} \widehat\gamma_k\right), \end{equation} where $\nabla D_{N_k}(\cdot)$ is the gradient of $D_{N_k}(\cdot)$. This is because: (i) \begin{equation*} \begin{aligned} \nabla D_{N_k}(\widetilde\gamma) = \widehat{\Sigma}_k \widetilde\gamma - (1-r_k)^{-1}r_k \widehat{\Sigma}_{k} \left(z^{*}(\widetilde\gamma)-\widetilde\gamma+ \frac{1}{a_{N_k}}\widehat{\Sigma}_{k}^{-1}\zeta_k\right), \end{aligned} \end{equation*} where \begin{equation*} \begin{aligned} z^*(\widetilde\gamma) &= \text{arginf}_{z}\; \Big\{\dfrac{1}{2}(1-r_k)^{-1}r_k \left(\widetilde{z}-\widetilde\gamma+ \frac{1}{a_{N_k}}\widehat{\Sigma}_{k}^{-1}\zeta_k\right)'\widehat{\Sigma}_{k} \left(\widetilde{z}-\widetilde\gamma + \frac{1}{a_{N_k}} \widehat{\Sigma}_{k}^{-1}\zeta_k\right) \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \dfrac{1}{(a_{N_k})^2}B_{s_{E_k}}(a_{N_k}\widetilde{z})\Big\}, \end{aligned} \end{equation*} and (ii) $z^*\left(\frac{1}{a_{N_k}}\sqrt{N_k} \widehat\gamma_k\right) = \frac{1}{a_{N_k}}\sqrt{N_k}\widehat{z}_k$, where $\widehat{z}_k$ is defined in \eqref{optimizer}. Next, from the definition of $\widetilde{\mathcal{P}}_{N_k}(\cdot)$, we observe that it is a strongly convex function with the index of convexity bounded from below by $C_0:= (1-r_k) \lambda^{-1}_{\text{max}}$ such that $\lambda_{\text{max}}$ is the largest eigen-value of $\widehat{\Sigma}_{k}$. 
Defining $$\widetilde{\mathcal{P}}_{0,N_k}(\eta_0) =\frac{1}{(a_{N_k})^{2}}\log \mathbb{P}\left[\widehat{E}_k = E_k, \;\text{sign}(\widehat{\beta}^{(L)}_k) = s_{E_k}\right] + \frac{1}{2}\eta'_0 \Sigma^{-1}_k\eta_0, $$ where $\eta_0= \Sigma_k\gamma_0$, we have: \begin{equation*} \begin{aligned} & \mathbb{E}\left[ \Big\|\frac{1}{a_{N_k}}\sqrt{N_k}(\widehat{\alpha}_k^{\text{\; carve}} - \alpha)\Big\|^2\Big\lvert \widehat{E}_k = E_k, \text{sign}(\widehat{\beta}^{(L)}_k) =s_{E_k}\right] \\ &\leq \dfrac{1}{\lambda^2_{\text{min}}(\widehat\Sigma_{k})}\mathbb{E}\left[ \Big\|\frac{1}{a_{N_k}}\sqrt{N_k}\widehat\Sigma_{k} \widehat{\alpha}_k^{\text{\; carve}} - \widehat\Sigma_{k} \alpha_0\Big\|^2\Big\lvert \widehat{E}_k = E_k, \text{sign}(\widehat{\beta}^{(L)}_k) =s_{E_k}\right]\\ &= \dfrac{1}{\lambda^2_{\text{min}}(\widehat\Sigma_{k})}\mathbb{E}\left[ \Big\|e_1' \nabla\widetilde{\mathcal{P}}_{N_k}^{-1}\left(\frac{1}{a_{N_k}}\sqrt{N_k} \widehat\gamma_k\right)- \widehat\Sigma_{k} \alpha_0\Big\|^2\Big\lvert \widehat{E}_k = E_k, \text{sign}(\widehat{\beta}^{(L)}_k) =s_{E_k}\right] \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &\leq \dfrac{1}{C_0^2\lambda^2_{\text{min}}(\widehat\Sigma_{k})}\mathbb{E}\left[ \Big\|\frac{1}{a_{N_k}}\sqrt{N_k} e_1' \widehat\gamma_k- \nabla\widetilde{\mathcal{P}}_{N_k}(\widehat\Sigma_{k} \alpha_0)\Big\|^2\Big\lvert \widehat{E}_k = E_k, \text{sign}(\widehat{\beta}^{(L)}_k) =s_{E_k}\right] \\ &\leq \dfrac{1}{C_0^2\lambda^2_{\text{min}}(\widehat\Sigma_{k})}\Bigg\{\mathbb{E}\left[ \Big\|\frac{1}{a_{N_k}}\sqrt{N_k} e_1' \widehat\gamma_k- \nabla\widetilde{\mathcal{P}}_{0,N_k}(\Sigma_{k} \alpha_0)\Big\|^2\Big\lvert \widehat{E}_k = E_k, \text{sign}(\widehat{\beta}^{(L)}_k) =s_{E_k}\right] \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \mathbb{E}\left[\Big\| \nabla\widetilde{\mathcal{P}}_{N_k}(\widehat\Sigma_{k} \alpha_0)- \nabla\widetilde{\mathcal{P}}_{N_k}(\Sigma_{k} \alpha_0)\Big\|^2\Big\lvert \widehat{E}_k = E_k, \text{sign}(\widehat{\beta}^{(L)}_k) =s_{E_k}\right] \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \Big\| \nabla\widetilde{\mathcal{P}}_{N_k}(\Sigma_{k} \alpha)- \nabla\widetilde{\mathcal{P}}_{0,N_k}(\Sigma_{k} \alpha_0)\Big\|^2\Bigg\} \end{aligned} \end{equation*} The second display uses \eqref{P:D} and \eqref{est:eqn} to deduce: \begin{equation*} \begin{aligned} e_1'\widehat\Sigma^{-1}_{k} \nabla\widetilde{\mathcal{P}}_{N_k}^{-1}\left(\frac{1}{a_{N_k}}\sqrt{N_k} \widehat\gamma_k\right) &= e_1'\widehat\Sigma^{-1}_{k} \nabla D_{N_k}\left(\frac{1}{a_{N_k}}\sqrt{N_k} \widehat\gamma_k\right)= \frac{1}{a_{N_k}}\sqrt{N_k}\widehat{\alpha}_k^{\text{\; carve}}. \end{aligned} \end{equation*} and the third display follows from using the Lipschitz-nature of the conjugate for the strongly-convex $\mathcal{P}_{0,N_k}( \cdot)$. Observe, the first term is $O(1)$. Taking a further expectation of the second term and using the Lipschitz property of $\nabla\widetilde{\mathcal{P}}_{N_k}(\cdot)$ with the assumption \ref{operator:norm:bdd:0}, we deduce that it is $O(1)$. The final term converges to $0$ due to convexity of $\mathcal{P}_{0,N_k}( \cdot)$ and the convergence established in Proposition \ref{conv:log:partition}. This completes our proof. \end{proof} We turn to deriving a feasible value for the observed Fisher information matrix in order to estimate the variance of the carved estimator. 
To this end, the limiting value for the probability of selection in Proposition \ref{conv:log:partition} gives rise to an approximate log-partition function in terms of $\eta_0= \Sigma_k\gamma_0$, the natural parameters in the asymptotic distribution of the refitted least squares estimation after conditioning on the selection event. Letting $$\sqrt{N}_k \eta = a_{N_k}\eta_0,$$ observe that the approximate log-partition function based on Proposition \ref{conv:log:partition} is given by: \begin{equation} \label{approx:part} \begin{aligned} & \frac{a^2_{N_k}}{2}\eta_0' \Sigma^{-1}_{k} \eta_0 -a_{N_k}^2\cdot\textstyle\inf_{\widetilde{\gamma}, \widetilde{z}}\Big\{ \dfrac{1}{2}(\widetilde{\gamma}- \Sigma^{-1}_{k} \eta_0)' \Sigma_{k} (\widetilde{\gamma}- \Sigma^{-1}_{k} \eta_0) \\ &\;\;\;\;\; + \dfrac{1}{2}(1-r_k)^{-1}r_k (\widetilde{z}-\widetilde{\gamma}+ \frac{1}{a_{N_k}}\Sigma_{k}^{-1}\zeta_k)'\Sigma_{k} (\widetilde{z}-\widetilde{\gamma} + \frac{1}{a_{N_k}} \Sigma_{k}^{-1}\zeta_k) + \dfrac{1}{(a_{N_k})^2}B_{s_{E_k}}(a_{N_k}z)\Big\}\\ &= \frac{N_k}{2}\eta' \Sigma^{-1}_{k} \eta -\textstyle\inf_{\gamma, z}\Big\{ \dfrac{1}{2}(\sqrt{N_k}\gamma- \sqrt{N_k}\Sigma^{-1}_{k} \eta)' \Sigma_{k} (\sqrt{N_k}\gamma- \sqrt{N_k}\Sigma^{-1}_{k} \eta) \\ &+ \dfrac{1}{2}(1-r_k)^{-1}r_k (\sqrt{N_k}z-\sqrt{N_k}\gamma+ \Sigma_{k}^{-1}\zeta_k)'\Sigma_{k} (\sqrt{N_k}z-\sqrt{N_k}\gamma + \Sigma_{k}^{-1}\zeta_k) \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ B_{s_{E_k}}(\sqrt{N_k}z)\Big\}. \end{aligned} \end{equation} The last display is derived by reparameterizing $a_{N_k} \widetilde\gamma = \sqrt{N_k}\gamma$, $a_{N_k} \widetilde{z} = \sqrt{N_k}z$. Call the approximate log-partition function in \eqref{approx:part} $\mathcal{P}_{N_k}(\eta)$. The fact that maximizing the approximate likelihood function based on the (approximate) log-partition function $\mathcal{P}_{N_k}(\eta)$ gives us the carved estimator yields an operational expression for the observed Fisher information matrix next. We plug in the sample estimate for the unknown covariance $\Sigma_k$ in the expression \eqref{approx:part} to estimate the value of the observed Fisher information matrix. Proposition \ref{Fisher:info} outlines the derivation of this estimator. The symbol $e_1$ in the claim denotes a $q_k + s$ dimensional block diagonal matrix, where the first $s$-dimensional matrix along the diagonal is the identity matrix with $s$ columns and the second $q_k$-dimensional matrix is a matrix of all zeros. \begin{proposition} \label{Fisher:info} Based on the approximate value for the log-partition function in \eqref{approx:part}, the inverse of the observed Fisher information matrix assumes the following expression: \begin{equation*} \begin{aligned} e_1'\left((1-r_k)^{-1}\widehat{\Sigma}^{-1}_{k} - (1-r_k)^{-2}r^2_k\left((1-r_k)^{-1}r_k\widehat{\Sigma}_{k} + \nabla^2 B_{s_{E_k}}(\sqrt{N_k} \widehat{z}_k)\right)^{-1}\right)e_1. 
\end{aligned} \end{equation*} \end{proposition} \begin{proof} Relying upon the approximate expression \eqref{approx:part} for the log-partition function, the observed Fisher information submatrix equals $$e_1' \widehat{\Sigma}_{k} \nabla^2 \mathcal{P}_{N_k}( \widehat\Sigma_k\widehat{\gamma}_k^{\text{\; carve}}) \widehat{\Sigma}_{k} e_1 = e_1' \widehat{\Sigma}_{k} \nabla \gamma^*\widehat{\Sigma}_{k} e_1$$ where $ \gamma^*$ equals \begin{equation} \label{opt:fi} \begin{aligned} & \textstyle\arg\sup_{\gamma} \sqrt{N_k}\gamma' \widehat\Sigma_k\sqrt{N_k}\widehat{\gamma}_k^{\text{\; carve}} -\dfrac{1}{2}\sqrt{N_k}\gamma' \widehat{\Sigma}_k \sqrt{N_k}\gamma \\ &\;\;\;\;\;\;-\textstyle\inf_{z} \Big\{\dfrac{1}{2}(1-r_k)^{-1}r_k (\sqrt{N_k}z-\sqrt{N_k}\gamma+ \widehat{\Sigma}_{k}^{-1}\zeta_k)'\widehat{\Sigma}_{k} (\sqrt{N_k}z-\sqrt{N_k}\gamma + \widehat{\Sigma}_{k}^{-1}\zeta_k) \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ B_{s_{E_k}}(\sqrt{N_k}z)\Big\}. \end{aligned} \end{equation} From solving \eqref{opt:fi}, we have: $$ \widehat\Sigma_k\sqrt{N_k}\widehat{\gamma}_k^{\text{\; carve}}= \widehat{\Sigma}_k \sqrt{N_k}\gamma^*+ (1-r_k)^{-1}r_k \widehat{\Sigma}_{k} \left(\sqrt{N_k}\gamma^*- \widehat{\Sigma}_{k}^{-1}\zeta_k- \sqrt{N_k}\widehat{z}_k\right),$$ where $$ (1-r_k)^{-1}r_k\widehat{\Sigma}_{k} \left(\sqrt{N_k} \widehat{z}_k-\sqrt{N_k}\gamma^*+ \widehat{\Sigma}_{k}^{-1}\zeta_k\right) + \nabla B_{s_{E_k}}(\sqrt{N_k} \widehat{z}_k)= 0. $$ Using the expression for our carved estimator, the former equations yields us $\gamma^*= \widehat{\gamma}_k$. Taking further derivatives, we obtain $$\nabla_{\gamma^*} \widehat{z}_k = \left((1-r_k)^{-1}r_k\widehat{\Sigma}_{k} + \nabla^2 B_{s_{E_k}}(\sqrt{N_k} \widehat{z}_k)\right)^{-1}(1-r_k)^{-1}r_k\widehat{\Sigma}_{k},$$ \begin{equation*} \begin{aligned} &\nabla \gamma^* = \left((1-r_k)^{-1} \widehat{\Sigma}_{k} - (1-r_k)^{-1}r_k\widehat{\Sigma}_{k}\nabla_{\gamma^*} \widehat{z}_k\right)^{-1}\\ &= \Big((1-r_k)^{-1} \widehat{\Sigma}_{k} - (1-r_k)^{-1}r_k\widehat{\Sigma}_{k} \left((1-r_k)^{-1}r_k\widehat{\Sigma}_{k} + \nabla^2 B_{s_{E_k}}(\sqrt{N_k} \widehat{z}_k)\right)^{-1}\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\cdot(1-r_k)^{-1}r_k\widehat{\Sigma}_{k}\Big)^{-1}. \end{aligned} \end{equation*} Then, the inverse for the Fisher information matrix is given by: \begin{equation*} \begin{aligned} (1-r_k)^{-1}\widehat{\Sigma}^{-1}_{k} - (1-r_k)^{-2}r^2_k\left((1-r_k)^{-1}r_k\widehat{\Sigma}_{k} + \nabla^2 B_{s_{E_k}}(\sqrt{N_k} \widehat{z}_k)\right)^{-1}. \end{aligned} \end{equation*} \end{proof} \section{Concluding Remarks} \label{Sec:conclusion} In this paper, we have: (i) developed a data carving approach for estimating common treatment effects from a synthesis of prior studies through an efficient use of the data after model selection; (ii) identified summary statistics necessary for data aggregation when the LASSO is used in each individual study to select confounding variables. As a result, we have laid out a data aggregation and validation analysis protocol aiming for statistical efficiency and privacy preservation of individual records. 
The summary statistics required by our estimator involve only the first two moments of the data in the (usual) least squares analysis refitted for each selected model, alongside some compressed information from the model selections. A major difference between our debiasing approach and the debiased LASSO techniques is that our estimator does not involve a high-dimensional matrix or its inverse. As opposed to some off-the-shelf alternatives such as data splitting, applying the principles of data carving in the construction of our estimator permits a principled re-use of the data in the existing studies towards improved statistical efficiency. The approach we take in the paper extends to realistic situations where the parameters for the confounding factors in the support set may vary across studies. \begin{funding} The first author is supported by NSF Grants DMS-1951980 and DMS-2113342. The second author is supported in part by NSF Grant DMS-2015325 and NIH Grant R01MH125746. The third author is supported in part by NSF Grants DMS-1914496 and DMS-1951980. \end{funding} \bibliographystyle{imsart-number}
\section{Introduction} Tunneling is a manifestation of quantum coherence: quantum systems are able to surmount barriers they energetically should not due to their wave-like properties~\cite{ankerhold2007quantum}. The range of tunneling systems is broad: elementary particles in nuclear matter~\cite{balantekin1998quantum}, electrons in conductors~\cite{tersoff1983theory,beenakker2008colloquium}, magnetization in nanomagnets~\cite{sangregorio1997quantum}, and superconducting phase in superconducting circuits~\cite{voss1981macroscopic}. Theoretically, tunneling can be understood as the quantum mechanical counterpart to classical thermal activation describing, for instance, chemical reactions~\cite{miller1975semiclassical,hanggi1990reaction,cao1996unified}. Another type of quantum coherence can be seen when a coherent exchange of energy between two quantum mechanical systems happens. A prime example of such coherent systems is polaritons which are the hybrid excitations of the vacuum electromagnetic field and molecular degrees of freedom. Recently, it has been suggested that the formation of such coherent systems could affect chemistry which is still poorly understood~\cite{garcia2021manipulating,wang2021roadmap,hertzog2019strong,ribeiro2018polariton}. In fact, a transition state theory calculation shows that all the polaritonic enhancements to the reaction rate scale as $1/N$ where $N$ is the number of molecules participating in the polariton~\cite{zhdanov2020vacuum,campos2020polaritonic,li2020origin}. This is often attributed to the fact that the coupling to light induces only two energetically different polaritonic states, separated in energy by the Rabi splitting proportional to $\sqrt{N}$, while $N-1$ molecular states, the so-called dark states, remain energetically the same. Motivated by the idea of polaritonic chemistry, I focus on a related question whether there can be a genuine polaritonic quantum tunneling effect. This question arises naturally as the light-matter coupling changes the coherence properties of the system at hand. It also induces collective behavior through the formation of polaritons. In fact, the $N-1$ dark states are superpositions over the molecular states even though their energy does not change. In this article, I present a model of $N$ metastable systems coupled to a cavity mode and investigate the effect of the common cavity mode on the low-temperature tunneling decay rate. For a simple model potential, I analytically solve the polaritonic rate modification using path integral techniques in the semiclassical approximation. Such solvable models are rare; there are only a few truly multidimensional problems in quantum tunneling that have been solved analytically~\cite{ankerhold2007quantum,rontani2012tunneling}. In the low-temperature regime, the tunneling decay rate is dominated by \emph{instantons}. I find the instanton solutions for the polaritonic system without friction. As the main result, I find the polaritonic rate modification as a function of the number~$N$ of metastable systems. The tunneling decay rate is modified by a factor proportional to the single-molecule coupling constant and not by the Rabi splitting. This shows that the cavity indeed induces a coherence effect but it is not a collective effect. Similar to the transition state theory calculation~\cite{zhdanov2020vacuum,campos2020polaritonic}, the polaritonic enhancements scale as $1/N$ if the Rabi splitting is fixed. 
Therefore, the practical route to realizing the cavity-induced coherence is not in the collective strong coupling regime with a large number of systems but rather in single systems with large couplings to the cavity. \section{Semiclassical approximation to tunneling} Consider a metastable system described by a potential \begin{align} V(q) = \begin{cases} \frac{1}{2} \omega_0^2 q^2, & q \leq a, \\ - \infty, & q > a, \end{cases} \label{eq:skijumppot} \end{align} where $a$ determines the energy of the potential barrier $E_b = \frac{1}{2}\omega_0^2 a^2$ as in Fig.~\ref{fig:potential}(a). The quadrature $q$ is defined here so that the conjugate momentum quadrature $p$ is given simply by $p = \dot q$. Although this potential has been used before~\cite{grabert1984quantum,altlandsimons}, it lacks a name, and so I call it the ski-jumping potential. I set $\hbar = 1$ everywhere. Next, consider $N$ identical metastable systems coupled to a single harmonic cavity mode whose position quadrature is $x$, normalized similarly to $q$. I assume that this coupling is directly between the quadratures $x$ and $q$. The total Hamiltonian of this polaritonic system is given by \begin{align} H = \frac{1}{2} {\dot x}^2 + \sum_{i=1}^N \frac{1}{2} {\dot q}_i^2 + V_\mathrm{tot}, \end{align} where \begin{align} V_\mathrm{tot} = \sum_{i=1}^N V(q_i) + \frac{1}{2} \omega_c^2 x^2 + \sum_{i=1}^N \lambda_i^2 x q_i. \end{align} Here, the apparent eigenfrequency of the cavity mode is $\omega_c$ while the light-matter coupling is encoded within $\lambda_i^2$. It can be related to the coupling constant $g_i$ obtained from a quantum electrodynamics calculation~\cite{walls2007quantum} by $\lambda_i^2 = \sqrt{\omega_c \omega_0} g_i$. If one considers the ski-jumping potential to be a simplistic model of a potential energy surface of a molecule, the exact value of each coupling constant depends on the orientation and position of the molecule within the cavity~\cite{walls2007quantum}. There are several methods to calculate the tunneling decay rate of a metastable system. Here, I use the so-called $\Im F$ method~\cite{langer1967theory} as it can straightforwardly be used for multidimensional systems and provides the possibility to extend the theory to include dissipation~\cite{ankerhold2007quantum,altlandsimons}. Physically, the idea is simple: the metastability of the ski-jumping potential means that there are no stationary states. This may be represented by the eigenenergies obtaining a finite imaginary part which is associated with a tunneling decay rate. Likewise, the partition function~$\mathcal{Z}$ defined in terms of the states within the ski-jumping potential acquires an imaginary part. Then, the tunneling decay rate $k$ at low temperature may be expressed as \begin{align} k = \frac{2}{\beta} \Im \ln \mathcal{Z}, \end{align} where $\beta = 1/k_B T$ is the inverse temperature~\cite{ankerhold2007quantum,kleinertbook}. The partition function~$\mathcal{Z}$ can be represented by an Euclidean path integral \begin{align} \mathcal{Z} = \int D(\phi) \exp{-S_E[\phi(\tau)]} \end{align} over $\beta$-periodic paths in imaginary time $\tau = it$. Here, $\phi = (x, q_1, \dots, q_N)^T$ represents a column vector of all the dynamical degrees of freedom and the Euclidean action is given by \begin{align} S_E = \int_{-\beta/2}^{\beta/2} \dd{\tau} \qty[\frac{1}{2}\dot{\phi}^T \dot{\phi} + V_\mathrm{tot}(\phi)]. \end{align} One can associate this Euclidean action with the classical action of systems moving in the inverted potential $-V_\mathrm{tot}$.
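For concreteness, the model is simple enough to encode in a few lines; the Python sketch below (with placeholder parameter values chosen purely for illustration and not tied to any particular physical system) implements the ski-jumping potential, the total potential $V_\mathrm{tot}$, and the relation $\lambda_i^2 = \sqrt{\omega_c \omega_0}\, g_i$ in the same units ($\hbar = 1$, unit mass).
\begin{verbatim}
import numpy as np

def V_ski(q, omega0, a):
    # Ski-jumping potential: harmonic for q <= a, drops to -infinity beyond the barrier.
    return np.where(q <= a, 0.5 * omega0**2 * q**2, -np.inf)

def V_tot(x, q, omega0, omega_c, lam2, a):
    # Total potential: N metastable systems bilinearly coupled to one cavity quadrature x.
    q = np.atleast_1d(q)
    return (V_ski(q, omega0, a).sum()
            + 0.5 * omega_c**2 * x**2
            + np.sum(lam2 * x * q))

omega0, omega_c, a = 1.0, 1.0, 3.0        # illustrative values; E_b = omega0^2 a^2 / 2
g = np.array([0.1, -0.1, 0.2])            # illustrative single-system couplings g_i
lam2 = np.sqrt(omega_c * omega0) * g      # lambda_i^2 = sqrt(omega_c*omega0) * g_i
print(V_tot(0.2, [0.1, -0.3, 0.5], omega0, omega_c, lam2, a))
\end{verbatim}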
For independent systems, the total potential energy can be written as a sum of the individual systems' potential energies. Thus, the partition function factorizes as $\mathcal{Z} = \mathcal{Z}_1^N$ for identical systems. Whatever the single-system tunneling decay rate $k_1$ is, the total rate is then $k = N k_1$. In general, solving the path integral exactly to obtain the partition function is difficult. Thus, I resort to the semiclassical approximation, which is valid when the barrier energy $E_b$ is large compared to the real part of the ground state energy (which is of the order of $\omega_0$)~\cite{kleinertbook}. I expand the path integral around the classical solutions and take into account only the quadratic fluctuations \begin{subequations} \begin{align} \mathcal{Z} &\approx \sum_\mu I_\mu e^{-S_E(\phi_\mu)} , \\ I_\mu &= \int D(r_\mu) \exp{- \frac{1}{2}r_\mu^T\qty[\partial_\tau^2 + \mathcal{V}(\phi_\mu)]r_\mu}. \end{align} \end{subequations} Here, $\phi_\mu$ represents one possible classical $\beta$-periodic path and $I_\mu$ the contribution of quadratic fluctuations, which may be expressed using a second derivative matrix $\mathcal{V}_{ij} = \partial^2 V_\mathrm{tot}/\partial \phi_i \partial \phi_j$ evaluated at the corresponding classical solution $\phi_\mu$. The integration variable $r_\mu$ is the deviation from $\phi_\mu$ with the boundary conditions $r_\mu(\pm\beta/2) = 0$. As the action $S_E$ is a real variable, the imaginary part of the partition function must be in the fluctuations $I_\mu$. The ski-jumping potential allows for the solution of classical paths in a general case but it complicates the evaluation of the fluctuations as the potential is discontinuous at $q_i = a$. These problems can mostly be avoided since the quadratic fluctuations can be expressed in terms of the classical solutions exactly in the case of a closed system~\cite{dashen1974nonperturbative,liang1992bounces}. \begin{figure} \centering \includegraphics{potentials_ver1.pdf} \caption{(a) Ski-jumping potential of Eq.~\eqref{eq:skijumppot}. It is obtained by a limiting process from a potential $E_b\qty[(q/a)^2 - \theta(q)(q/a)^n]$ with $\theta$ being the Heaviside step function and $n \rightarrow \infty$. The dotted line represents $n = 4$. (b) Inverted potential $-V(q)$. The arrow indicates the instanton solution in which the system moves from $q = 0$ to $q = a$ and back.} \label{fig:potential} \end{figure} \subsection{Solution of the Euclidean action} First, I solve the classical periodic paths in imaginary time. The problem is the same as solving classical motion in real time but in the inverted potential. Note that $q_i = a$ represents a wall in the inverted potential as in Fig.~\ref{fig:potential}(b). Thus, at this point, the velocity~$\dot{q}_i$ is discontinuous. Rather than trying to piece together solutions before and after hitting the wall, I extend the mathematical trick presented in Ref.~\onlinecite{grabert1984quantum} to the polaritonic system and take this discontinuity into account at the level of the equations of motion.
If a single quadrature~$q_1$ hits the wall at time~$\tau_1$, that is, $q_1(\tau = \tau_1) = a$, the dynamics in the inverted potential is determined by \begin{subequations}\label{eq:EOMS} \begin{align} - \ddot{x} + \omega_c^2 x + \sum_{i=1}^N \lambda_i^2 q_i &= 0, \\ - \ddot{q}_1 + \omega_0^2 q_1 + \lambda_1^2 x &= A \delta(\tau - \tau_1),\\ - \ddot{q}_i + \omega_0^2 q_i + \lambda_i^2 x &= 0, \qq{$i = 2,3,\dots N$.} \end{align} \end{subequations} The unknown constant~$A$ is determined from the condition $q_1(\tau = \tau_1) = a$. Since I am searching for periodic solutions, the way to proceed is to write all dynamical quantities as Fourier series. Here, I choose the convention $f(\tau) = \sum_m f_m e^{i \omega_m \tau}$ with $\omega_m = 2 \pi m/\beta$ being the bosonic Matsubara frequency. The inverse transformation is then $f_m = \frac{1}{\beta}\int\dd{\tau} f(\tau)e^{-i\omega_m\tau}$. By applying the latter definition to Eqs.~\eqref{eq:EOMS}, I find \begin{subequations} \begin{align} (\omega_c^2 + \omega_m^2) x_m + \sum_{i=1}^N\lambda_i^2 q_{i,m} &= 0, \\ (\omega_0^2 + \omega_m^2) q_{1,m} + \lambda_1^2 x_m &= \frac{A}{\beta} e^{- i \omega_m \tau_1}, \label{eq:solstep1b}\\ (\omega_0^2 + \omega_m^2) q_{i,m} + \lambda_i^2 x_m &= 0.\label{eq:solstep1c} \end{align} \end{subequations} This set of linear equations can be solved. The idea is first to find the dynamics of the cavity mode~$x$ which then gives the solutions of the individual quadratures~$q_i$. This is achieved by defining a collective variable $Q_m = \sum_{i=1}^N \frac{\lambda_i^2}{\expval{\lambda^2}}q_{i,m}$ with $\expval{\lambda^2} = \sum_i \lambda_i^2/N$ representing the average over the couplings. The dynamics of $Q$ can be determined from Eqs.~\eqref{eq:solstep1b}--\eqref{eq:solstep1c}, which allows for solving the dynamics of $x$. After a short calculation I find the solutions in Fourier space to be \begin{subequations} \begin{align} x_m &= - \frac{A}{\beta}\lambda_1^2 \chi_P(\omega_m) e^{-i \omega_m \tau_1}, \\ q_{i,m} &= \frac{A}{\beta} \frac{\lambda_1^2\lambda_i^2}{\omega_m^2 + \omega_0^2} \chi_P(\omega_m) e^{-i \omega_m \tau_1}, \\ q_{1,m} &= \frac{A}{\beta} \frac{1}{\omega_m^2 + \omega_0^2}\qty[1 + \lambda_1^4\chi_P(\omega_m)]e^{-i \omega_m \tau_1}, \end{align} \end{subequations} where I defined a short-hand notation describing the polaritonic response \begin{align} \chi_P(\omega_m) = \qty[(\omega_m^2 + \omega_0^2)(\omega_m^2 + \omega_c^2) - N \expval{\lambda^4}]^{-1}. \end{align} The cavity-mediated interaction can be seen in the fact that the dynamics of all the quadratures~$q_i$ depend on the coupling~$\lambda_1^2$ of the first quadrature. The abstract Fourier space solutions become clearer in the zero-temperature limit $\beta \rightarrow \infty$. Then, the Fourier series can be transformed to an integral which I evaluate using the residue theorem. 
Setting $\tau_1 = 0$ for brevity, this results in the imaginary-time paths \begin{subequations}\label{eq:instpath} \begin{align} x(\tau) &= A \frac{\lambda_1^2}{\sqrt{\expval{\lambda^4}}} \sqrt{\frac{1 - \delta^2}{4N}} \qty( \frac{e^{-\omega_+ \abs{\tau}}}{\omega_+} - \frac{e^{-\omega_- \abs{\tau}}}{\omega_-}), \\ q_i(\tau) &= A \frac{\lambda_1^2\lambda_i^2}{\expval{\lambda^4}} \frac{f(\tau)}{N}, \\ q_1(\tau) &= A \frac{e^{-\omega_0 \abs{\tau}}}{2 \omega_0} + A \frac{\lambda_1^4}{\expval{\lambda^4}} \frac{f(\tau)}{N}, \label{eq:inst1}\\ f(\tau) &= \frac{1+\delta}{2}\frac{e^{-\omega_+ \abs{\tau}}}{2\omega_+} + \frac{1-\delta}{2}\frac{e^{-\omega_- \abs{\tau}}}{2\omega_-} - \frac{e^{-\omega_0 \abs{\tau}}}{2 \omega_0} \end{align} \end{subequations} with further definitions of the polariton eigenfrequencies~$\omega_\pm$ without the rotating wave approximation and a detuning parameter~$\delta \in [-1,1]$ given by \begin{subequations} \begin{align} \omega_\pm &= \sqrt{\frac{\omega_0^2 + \omega_c^2}{2} \pm \frac{1}{2}\sqrt{4N \expval{\lambda^4} + (\omega_0^2 - \omega_c^2)^2}}, \\ \delta &= \frac{\omega_0^2 - \omega_c^2}{\omega_+^2 - \omega_-^2}. \end{align} \end{subequations} The Rabi splitting is typically defined as $\omega_+ - \omega_-$ when the cavity is on resonance $\omega_c = \omega_0$. Finally, $A$ is fixed by demanding that $q_1(\tau = \tau_1 = 0) = a$. It gives rise to a weighted harmonic average \begin{align} A = 2 a \frac{N \expval{\lambda^4}}{\frac{N \expval{\lambda^4} - \lambda_1^4}{\omega_0} + \lambda_1^4\qty(\frac{1+\delta}{2}\frac{1}{\omega_+} + \frac{1-\delta}{2}\frac{1}{\omega_-})} \equiv 2 a \omega_{H,1}. \label{eq:hf} \end{align} Here, the weights are the second-order coupling constants $\lambda_i^4$ and detuning factors $(1 \pm \delta)/2$. This expression already shows that, similarly to the discussion about dark states, the bare frequencies~$\omega_0$ are weighted with a factor proportional to $N - 1$ whereas the polariton frequencies~$\omega_\pm$ have a weight close to unity, independently of $N$. Thus, in general, $\omega_{H,1} \approx \omega_0$ for $N \gg 1$. If $\lambda_1^2 = 0$, then $\omega_{H,1} = \omega_0$. An example of the polaritonic instanton solution is shown in Fig.~\ref{fig:inst}. Initially, at $\tau \rightarrow -\infty$, all the quadratures are at zero. Very slowly, the first quadrature starts to evolve, pulling all the other systems with it. The exact direction the other systems are pulled towards depends on the relative signs of the coupling constants~$\lambda_i^2$. At time $\tau = 0$, the first quadrature is at the wall and bounces back. From this hitting time to $\tau \rightarrow \infty$, the reverse happens. The first quadrature starts to slow down and all the quadratures creep towards their initial positions. \begin{figure} \centering \includegraphics{instantonpaths_ver2.pdf} \caption{Polaritonic instanton solution for $N = 6$ on resonance $\omega_c = \omega_0$. The coupling constants are chosen so that $\lambda_1^2/\omega_0^2 = 0.1$ and $\lambda_{i\neq 1}^2/\omega_0^2 \in \qty{0, \pm 0.1, \pm 0.2}$. The dashed orange line in the middle graph also represents the second term in Eq.~\eqref{eq:inst1} that describes the modification to the bounce due to the light-matter coupling.} \label{fig:inst} \end{figure}
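As a consistency check of Eqs.~\eqref{eq:instpath} and \eqref{eq:hf}, the short Python sketch below evaluates $\omega_\pm$, $\delta$, and $\omega_{H,1}$ for the couplings of Fig.~\ref{fig:inst} (on resonance, with an illustrative barrier parameter $a$) and verifies that the instanton with $A = 2a\,\omega_{H,1}$ indeed reaches $q_1 = a$ at the hitting time and returns towards the origin far from the bounce.
\begin{verbatim}
import numpy as np

omega0 = omega_c = 1.0
a = 3.0                                    # illustrative barrier parameter
lam2 = np.array([0.1, -0.1, 0.2, 0.0, -0.2, 0.1])   # lambda_i^2 in units of omega_0^2
N = lam2.size
lam4_mean, lam4_1 = np.mean(lam2**2), lam2[0]**2

# Polariton frequencies and detuning parameter (no rotating wave approximation)
root = np.sqrt(4*N*lam4_mean + (omega0**2 - omega_c**2)**2)
w_plus = np.sqrt(0.5*(omega0**2 + omega_c**2) + 0.5*root)
w_minus = np.sqrt(0.5*(omega0**2 + omega_c**2) - 0.5*root)
delta = (omega0**2 - omega_c**2)/(w_plus**2 - w_minus**2)

# Weighted harmonic average and bounce amplitude, Eq. (hf)
harm = 0.5*(1 + delta)/w_plus + 0.5*(1 - delta)/w_minus
w_H1 = N*lam4_mean/((N*lam4_mean - lam4_1)/omega0 + lam4_1*harm)
A = 2*a*w_H1

def f(tau):
    return (0.5*(1 + delta)*np.exp(-w_plus*abs(tau))/(2*w_plus)
            + 0.5*(1 - delta)*np.exp(-w_minus*abs(tau))/(2*w_minus)
            - np.exp(-omega0*abs(tau))/(2*omega0))

def q1(tau):
    # Bouncing quadrature of Eq. (inst1)
    return A*np.exp(-omega0*abs(tau))/(2*omega0) + A*(lam4_1/lam4_mean)*f(tau)/N

print("omega_+, omega_-, omega_H1:", w_plus, w_minus, w_H1)
print("q_1 at the hitting time (should equal a):", q1(0.0))
print("q_1 far from the bounce (should be ~0):", q1(15.0/omega0))
\end{verbatim}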
The Euclidean action follows directly from the instanton solutions at any temperature. I obtain \begin{subequations} \label{eq:action:result} \begin{align} S_{E,1} &= \frac{1}{2} a^2 \qty[\frac{1}{\beta}\sum_m \frac{1 + \lambda_1^4 \chi_P(\omega_m)}{\omega_0^2 + \omega_m^2}]^{-1}\\ &\rightarrow 2 \frac{E_b}{\omega_0} \frac{\omega_{H,1}}{\omega_0} \equiv S_0 \frac{\omega_{H,1}}{\omega_0}, \qq{when $\beta \rightarrow \infty$.} \end{align} \end{subequations} In the low-temperature limit, the action is determined by two ratios: First, the barrier energy~$E_b$ is compared to the pseudo-eigenenergy~$\omega_0$. This is in contrast to the high-temperature result with $S_E = \beta E_b$. Second, the polaritonic effect is contained within the ratio of the harmonic mean frequency~$\omega_{H,1}$ and the bare frequency~$\omega_0$. This ratio is unity when there is no coupling, $\lambda_1^2 = 0$, and the action is just the bare action, $S_{E,1} = S_0$. A fully uncoupled system does not know about the polaritons, as expected. There are also configurations in which multiple quadratures hit the wall. Finding such solutions is a well-defined classical problem that can be tackled similarly to the case of the one-bounce instanton and is briefly discussed in Appendix~\ref{app:multibounce}. However, these multi-bounce paths do not contribute to the tunneling rate. I argue this in simple terms: Consider only two systems coupled to a cavity with equal coupling strengths $\lambda_i^2 = \lambda^2$. Then, it is clear that the two systems obey the same imaginary-time equations of motion, except when they bounce off the wall. Consequently, one of the possible multi-bounce solutions is that the two quadratures are exactly the same, $q_1(\tau) = q_2(\tau)$ for all $\tau$. However, this solution is not a saddle point of the Euclidean action. As the temperature increases, the instanton path shrinks to a point that is near the maximum of the total potential~$V_\mathrm{tot}$, around $q_1 = q_2 \approx a$. This point is not a saddle point of the potential~$V_\mathrm{tot}$ but the maximum --- the Hessian matrix of the potential has two negative eigenvalues. In the limit of no coupling, $\lambda^2 \rightarrow 0$, we arrive at a contradiction with the case of independent systems; only the saddle points contribute to the imaginary part of $\mathcal{Z}$ in the semiclassical approximation. It is thus hard to see how the ``coherent transition state theory picture'' of Ref.~\onlinecite{yang2021quantum} can be mapped to this low-temperature calculation. \subsection{Polaritonic tunneling rate modification} To get from the instanton solution~\eqref{eq:instpath} to the polaritonic tunneling decay rate, one needs to calculate the fluctuation factor~$I_\mu$. The program is somewhat cumbersome even in the one-dimensional case~\cite{kleinertbook}. The first derivative of the instanton solution happens to be a zero eigenvalue mode for the fluctuations and $I_\mu$ formally diverges. The existence of the zero mode also implies that there exists a negative eigenvalue mode which makes $I_\mu$ imaginary. A further complication is that one should include multiple sequential bounces. Such paths are obtained by essentially gluing instanton solutions together: The imaginary-time axis can be separated into $n$~partitions of length~$\beta/n$. Since the instanton paths change appreciably only for the imaginary time~$2/\omega_0$, using the instanton solution~\eqref{eq:instpath} for each partition of length~$\beta/n$ gives a path with $n$~bounces. The error of this process is exponentially small in $\beta$ when $2/\omega_0 \ll \beta$.
In these steps, I follow closely the one-dimensional treatment of Ref.~\onlinecite{liang1992bounces}. I do this as the relatively recent literature~\cite{milnikov2002decay,erakovic2020instanton} cannot be applied since the instanton solution~\eqref{eq:instpath} is not differentiable at the hitting time. The fluctuation factor of a single bounce can be obtained from a version of the Gelfand--Yaglom formula \cite{kleinertbook} \begin{align} I^{n=1}_1 \propto - i \beta \sqrt{S_{E,1}} \sqrt{\frac{\epsilon_1(\beta)}{D_1}}, \label{eq:onebounce:fluc} \end{align} where $D_1 = \abs{\det(\pdv{\phi_j(\beta/2)}{\dot \phi_i(-\beta/2)})}$ is the fluctuation determinant evaluated in the $\beta \rightarrow \infty$ limit and $\epsilon_1(\beta)$ provides a finite temperature correction to it. In these and the following expressions, I denote the number of bounces as a superscript whereas the subscript refers to the quadrature that hits the wall. I also choose not to keep track of the powers of $2\pi$; in the end, they are fixed by comparing to the non-interacting result. By expanding the method in Ref.~\onlinecite{gildener1977pseudoparticle} to the multidimensional system at hand, I find \begin{equation} \epsilon_1(\beta) \approx 2\frac{\dot \phi^T(-\beta/2) \ddot \phi(-\beta/2) - \dot \phi^T(\beta/2) \ddot \phi(\beta/2)}{\int_{-\infty}^\infty \dot\phi^T(\tau) \dot \phi(\tau) \dd{\tau}}, \label{eq:finiteTcorr} \end{equation} where $\phi$ refers to the vectorized form of the instanton solution~\eqref{eq:instpath}. The derivation of this result can be found in Appendix~\ref{app:ftc}. The factor $-i$ is the Maslov--Morse index, which takes into account the one negative eigenvalue mode. Mathematically, it follows from the singularity of the fluctuation determinant at the turning point of the classical solution (see e.g. Ref.~\onlinecite{kleinertbook}). Lastly, $\beta \sqrt{S_{E,1}}$ follows from the Faddeev--Popov method as the hitting time~$\tau_1$ is in fact a free parameter. By a change of integration variables from the zero mode proportional to the first derivative of the instanton solution to $\tau_1$ in $I_\mu$, one integrates $\tau_1$ over the whole range~$[-\beta/2, \beta/2]$ while the Jacobian of the transformation is $\sqrt{S_{E,1}}$~\cite{zinnjustin2005pi,kleinertbook,altlandsimons}. To connect two bounces, in principle, one needs to calculate the action with variable ending points. This is not feasible in practice. However, since the instanton paths reside mostly near $\phi = 0$, it is justified to expand the action. Thus, from the initial $\phi(-\beta/2) = \phi_- =0$ to an arbitrary point~$\tilde{\phi}$, the action can be expressed in terms of the final point $\phi(\beta/2) = \phi_+= 0$ as \begin{align} S_{E}[\phi_-, \tilde\phi] \approx S_{E,1}&[\phi_-,\phi_+] \label{eq:action:expansion}\\ & + \frac{1}{2}\qty(\phi_+ - \tilde\phi)^T \qty[\pdv{S_E}{\phi_i}{\phi_j}] \qty(\phi_+ - \tilde\phi). \notag \end{align} The Hessian matrix on the second row is calculated along the classical instanton path. The same structure is also obtained from a variable initial point and a fixed final point. Thus, the paths are connected by first dividing the path integral into two parts with a variable mid-point~$\tilde\phi$ and then integrating over it. These two parts are assumed to obey the instanton solutions individually. At this point, the multidimensional nature of the problem becomes relevant. To connect two bounces, I should take into account that the two bounces correspond to different quadratures. 
In the case of equal couplings, they are exactly the same. Thus, the integration over the mid-point $\tilde\phi$ is a Gaussian integral and the Hessian matrices in Eq.~\eqref{eq:action:expansion} are the same. In this case, the determinant arising from the integration is equal to the inverse of the fluctuation determinant $D_1$~\cite{dashen1974nonperturbative}. I assume here that this relation holds, at least to an approximation, also in the case of variable coupling constants. The extension from two to $n$~bounces does not require considerably more effort. One should note that there are now $n$ hitting times, which are all free parameters and for which the Faddeev-Popov method gives an additional factor of $1/n!$. Otherwise, the fluctuation factor is similar to Eq.~\eqref{eq:onebounce:fluc} for each bounce. Thus, the general $n$ bounce contribution to the partition function is \begin{align} I^n e^{-S_{E}^n} \propto \sqrt{\frac{1}{D}}\sum_{\qty{k}} \frac{(-i \beta)^n}{n!}\prod_{i=1}^n \sqrt{S_{E,k_i} \epsilon_{k_i}(\beta/n)} e^{-S_{E,k_i}}. \end{align} The vector $k$ enumerates which quadrature hits the wall in each bounce and the sum is taken over all the possible $n$-bounce configurations ($k_i \in \qty{1, \dots, N}$). There are $n$ independent sums and, thus, in total $N^n$ configurations. These sums can be alternatively written as \begin{align} I^n e^{-S_{E}^n} \propto \frac{(-i \beta)^n}{n!} N^n\expval{\sqrt{S_{E} \epsilon(\beta/n)} e^{-S_{E}}}^n. \label{eq:nbounce:contribution:mean} \end{align} Here, $\expval{\cdot}$ denotes the ensemble average over the coupling constants~$\lambda^2_i$. The remaining problem is to calculate the finite temperature correction~$\epsilon(\beta)$ and evaluate the sum over all bounces to arrive at the partition function~$\mathcal{Z}$. The strategy I use is to approximate $\epsilon(\beta/n)^n \approx C(\beta)\epsilon(0)^n$ with a prefactor $C(\beta)$. This approximation brings the partition function~$\mathcal{Z}$ to an exponential form, which gives the leading-order contribution in temperature to the tunneling rate. The function $C(\beta)$ plays no role in the rate as it becomes a real prefactor of the imaginary part in the partition function~$\mathcal{Z}$. This approximation is further discussed in Appendix~\ref{app:ftc}. Effectively, it leads in Eq.~\eqref{eq:nbounce:contribution:mean} to \begin{align} \epsilon_1(\beta/n) \rightarrow \epsilon_1(0) = 4 \omega_{A,1} \omega_{H,1}, \end{align} where I need to define the weighted arithmetic average \begin{align} \omega_{A,1} = \frac{(N\expval{\lambda^4} - \lambda_1^4)\omega_0 + \lambda_1^4\qty(\frac{1+\delta}{2}\omega_+ + \frac{1-\delta}{2}\omega_-)}{N\expval{\lambda^4}} \label{eq:arithmeticfreq} \end{align} with the same weights as in the harmonic average~$\omega_{H,1}$. Whenever the total coupling $N\expval{\lambda^4}$ is small compared to $\omega_c^2\omega_0^2$ (i.e., the rotating wave approximation is applicable), one always has $\omega_{A,1} \approx \omega_0$. In the limit $\lambda_1^2 \rightarrow 0$, one finds $\epsilon_1(\beta/n)^n = (2\omega_0)^{2n}$ without any approximations. Finally, the sum over all classical solutions and their quadratic fluctuations can be evaluated to arrive at the partition function~$\mathcal{Z}$. The important quantity here is the average modification~$r$ of the tunneling rate, defined as the ratio of the total tunneling rates with and without the coupling to the cavity, $r = k/k(\lambda=0)$.
I find \begin{align} r = \expval{\frac{\omega_H}{\omega_0}\sqrt{\frac{\omega_A}{\omega_0}} \exp[- S_0 \qty( \frac{\omega_H}{\omega_0} - 1)]}. \label{eq:ratemod} \end{align} This analytical result is for an arbitrary distribution of couplings. It directly shows that the most important polaritonic effects are contained in the harmonic frequency~$\omega_H$ defined in Eq.~\eqref{eq:hf} while the arithmetic mean frequency~$\omega_A$ of Eq.~\eqref{eq:arithmeticfreq} provides a small correction relevant only in the ultra-strong coupling regime. The rate modification~$r$ describes the total tunneling rate modification of an $N$-body polaritonic system. The light-matter coupling modifies the tunneling for each system and, thus, there must be an ensemble average over the coupling constants. To be more precise, the average is over the second-order couplings~$\lambda_i^4$ which are the weighting factors in the harmonic average~$\omega_H$. Using the expression $\lambda_i^2 = \sqrt{\omega_c \omega_0} g_i$, it is instructive to write \begin{align} \frac{\omega_H}{\omega_0} = \frac{1}{1 + \frac{g^2}{N\expval{g^2}}\qty(\frac{1+\delta}{2}\frac{\omega_0}{\omega_+} + \frac{1-\delta}{2}\frac{\omega_0}{\omega_-} - 1)} \end{align} in terms of the true coupling constants $g$. Thus, the relevant distribution is that of $g^2$. This is in contrast to our recent work focusing on bistable potentials in the thermal activation regime, where we found that the distribution of $g$ plays an important role~\cite{kansanen2022cavity}. \section{Analysis of the polaritonic rate modification} Let us consider the consequences of the rate modification~\eqref{eq:ratemod}. In the following, I assume that the rotating wave approximation holds and that $\omega_c \approx \omega_0$. The polariton frequencies are effectively redefined as \begin{align} \omega_\pm = \frac{\omega_c + \omega_0}{2} \pm \sqrt{N\expval{g^2} + (\omega_c - \omega_0)^2/4} \end{align} and $\omega_A/\omega_0 = 1$. The harmonic average simplifies as the relation \begin{align} \frac{1+\delta}{2}\frac{1}{\omega_+} + \frac{1-\delta}{2}\frac{1}{\omega_-} = \frac{\omega_c}{\omega_0\omega_c - N \expval{g^2}} \end{align} removes the need for the detuning parameter $\delta$. $N = 1$: For a single metastable system, the analysis is straightforward. The harmonic average is then over the polariton states, which favors the lower polariton state. By employing the rotating wave approximation, I have $\omega_H/\omega_0 = 1 - g^2/\omega_0 \omega_c$. Inserting this relation into Eq.~\eqref{eq:ratemod} gives \begin{align} r = \qty(1 - \frac{g^2}{\omega_c\omega_0})\exp(S_0\frac{g^2}{\omega_c\omega_0}). \label{eq:ratemodN1} \end{align} Whether the tunneling rate is increased or decreased depends on the bare action $S_0 = 2E_b/\omega_0$. Expanding to the lowest order in the coupling gives $r \approx 1 + (S_0 - 1)\frac{g^2}{\omega_c\omega_0}$. A high tunneling barrier is represented by $S_0 > 1$, in which case the rate always increases due to the presence of the cavity. The higher the barrier, the stronger the effect for a fixed coupling~$g$. This is visualized in Fig.~\ref{fig:rtm1}. It should be noted that $S_0 < 1$ is at odds with the semiclassical approximation and, thus, the result might not be accurate in such a case. The case of a single tunneling system coupled to a harmonic oscillator is relevant for experiments conducted in superconducting circuits~\cite{ankerhold2007quantum}.
The metastable quadrature could be, for instance, the superconducting phase difference of a Josephson junction in an electrical circuit. Then, Eq.~\eqref{eq:ratemodN1} predicts the tunneling rate change if this circuit is connected to an external resonator. \begin{figure} \centering \includegraphics{rtm1_ver1.pdf} \caption{Polaritonic tunneling rate modifications of a single system with different tunneling barriers~$E_b/\omega_0 = S_0/2$.} \label{fig:rtm1} \end{figure} $N \gg 1$ but $N \expval{g^2} < \omega_0 \omega_c$: The case of macroscopically large $N$ is the typical regime of polaritonic chemistry. The ensemble average in Eq.~\eqref{eq:ratemod} could be calculated numerically for some model distribution of couplings, but here I calculate it with the cumulant expansion to second order. This gives \begin{align} r &\approx \qty[\expval{\frac{\omega_H}{\omega_0}} - S_0 \var{\frac{\omega_H}{\omega_0}}] \\ &\qquad\times \exp[S_0 - S_0 \expval{\frac{\omega_H}{\omega_0}} + \frac{1}{2}S_0^2 \var{\frac{\omega_H}{\omega_0}}], \notag \end{align} where $\var{\cdot}$ refers to the ensemble variance, defined as $\var{x} = \expval{x^2} - \expval{x}^2$. Thus, in principle, the variance of the coupling constants can modify the observed rate modification. However, for $N \gg 1$, the variance is well-approximated by \begin{align} \var{\frac{\omega_H}{\omega_0}} &=\expval{\frac{\omega_H}{\omega_0}}^4 \frac{\var{g^2}}{(\omega_0 \omega_c - N\expval{g^2})^2}, \end{align} because the fluctuation of couplings is also suppressed by the factor $1/N$ in the harmonic mean. Now, if $g_i^2/\omega_c\omega_0 \ll 1$ for all $i$, which is a typical assumption in the collective coupling regime, the variance can be neglected as $\var{g^2/\omega_c\omega_0} \ll 1$. Consequently, the expectation value of $\omega_H/\omega_0$ is given by \begin{align} \expval{\frac{\omega_H}{\omega_0}} &= \frac{\omega_0 \omega_c - N \expval{g^2}}{\omega_0 \omega_c - (N - 1)\expval{g^2}} \approx 1 - \frac{1}{N}\frac{N\expval{g^2}}{\omega_c\omega_0}. \label{eq:hf:mean} \end{align} Here, it appears that $\omega_H/\omega_0$ is determined as a ratio of polariton frequencies~$(\omega_+ \omega_-)^2$, so that the polaritons in the denominator consist of $N-1$~systems and in the numerator of $N$~systems. The latter equation is an expansion in the leading order of $\expval{g^2}/\omega_0\omega_c$. Using this expanded form, I find \begin{align} r \approx \qty(1 - \frac{\expval{g^2}}{\omega_c\omega_0})\exp(S_0\frac{\expval{g^2}}{\omega_c\omega_0}), \label{eq:ratemodN} \end{align} which generalizes the single-system polaritonic rate modification of Eq.~\eqref{eq:ratemodN1}. In conclusion, there is no considerable collective tunneling effect, even if the collective coupling $\sqrt{N\expval{g^2}}$ is a sizable fraction of $\sqrt{\omega_c \omega_0}$. \subsection{Comparison to high-temperature escape rate} Thermal activation is the main mechanism in the escape from a metastable potential whenever the temperature is above a threshold temperature proportional to $\omega_0$~\cite{affleck1981quantum,hanggi1990reaction}. The instanton path shrinks to a single point in the limit of high temperature, $\beta \rightarrow 0$. This follows from the Matsubara frequencies $\omega_{m\neq0} \rightarrow \infty$. Thus, only $m = 0$ contributes to the Fourier series expressions. The action is in this case \begin{align} S_{E,i} = \beta E_b \frac{\omega_0 \omega_c - N \expval{g^2}}{\omega_0 \omega_c - (N\expval{g^2}- g_i^2)}.
\end{align} The similarity to the low-temperature action in Eq.~\eqref{eq:action:result} is evident: the bare action has changed from $S_0 = 2E_b/\omega_0$ to $\beta E_b$ while the polaritonic modification is expressed in a form similar to Eq.~\eqref{eq:hf:mean} instead of $\omega_{H,i}/\omega_0$. However, unlike in Eq.~\eqref{eq:hf:mean}, I have not used the rotating wave approximation here. In the high-temperature regime, one can also calculate the rate using the classical transition state theory. This approach gives the same exponent, but it allows for a straightforward solution of the prefactor (also called the attempt frequency). The prefactor is proportional to \begin{align} \omega_0 \sqrt{\frac{\omega_0 \omega_c - N \expval{g^2}}{\omega_0 \omega_c - (N\expval{g^2}- g_i^2)}}. \end{align} The structure of the classical escape rate is therefore different from the low-temperature one. Besides the change of the harmonic frequency~$\omega_H$ to the ratio of polariton frequencies (which coincide in the rotating wave approximation), the polaritonic modifications enter the action and the prefactor with different powers. With both the low- and high-temperature limits of the rate modification at hand, one can imagine the following set of experiments (see e.g. Ref.~\onlinecite{voss1981macroscopic}): One varies the temperature of the polaritonic system and measures the escape rate. Starting from a high temperature and lowering it, the rate drops and eventually saturates to the quantum tunneling rate. By repeating this measurement without the cavity, the polaritonic coherence effect should become visible. The results I obtained imply, however, that this is feasible only in single systems with a sizable light--matter coupling, because there is no collective enhancement of the rates. \section{Conclusion} The work presented here is rather technical and, in many ways, cumbersome. Next, I try to clarify what I think are the main ideas and results of the work. I show a simple, analytically solvable toy model for polaritonic tunneling. In principle, there are numerous calculation techniques in the literature but the multidimensionality of the polaritonic system and the ski-jumping potential require some adaptation. These techniques might prove useful, for instance, in investigations of macroscopic tunneling in superconducting circuit arrays or other interacting ensembles of metastable systems. Even if the main result, the polaritonic tunneling rate modification~\eqref{eq:ratemod}, is obtained in a ski-jumping potential that does not directly correspond to any potential seen in nature, it has value. As a first guess, the structure of the solution is likely similar for a different potential: the modification is determined by the bare action and the harmonic frequency~$\omega_H$. The formation of polaritons affects the coherence properties in such a way that the tunneling rate may be increased. At the same time, if $N \gg 1$, the dark states spoil the effect of the polaritons on the tunneling decay rate out of any metastable potential. Of course, I would prefer to be proven wrong. My work in the low-temperature regime coupled with the transition state results in Refs.~\onlinecite{zhdanov2020vacuum} and~\onlinecite{campos2020polaritonic} indicates that there is no collective polaritonic effect in the escape rate in the case of a large number $N$ of molecules. Thus, it seems that one should take into account the possibility of systems returning to the metastable state, or that there is no effect to be found in the first place.
This article considers only a truly metastable potential. An alternative system would be a bistable potential which we considered in the thermal activation limit~\cite{kansanen2022cavity}. For the low-temperature limit, the approach would have to be different than what I present here because there are no similar instantons. This is because these imaginary-time paths are at zero energy while the cavity changes the energies of the stationary states. Tunneling in bistable systems therefore requires another approach. I did not take into account the friction or dissipation the systems realistically have. On the level of the action this would be, in principle, a straightforward extension~\cite{caldeira1981influence,weiss2012quantum,altlandsimons,ankerhold2007quantum}. I expect dissipation to modify the tunneling rate modification: the modification should be larger for a nearly dissipationless cavity than for a bad cavity with a large dissipation rate. However, it cannot change, for instance, the $N$-scaling of the action. \begin{acknowledgements} I thank Tero Heikkilä for useful discussions and for coining the term ``ski-jumping potential''. This work has been supported by the Magnus Ehrnrooth foundation and the Academy of Finland (project numbers 317118 and 321982). \end{acknowledgements}
\section{Introduction} The spin Hall effect (SHE) has been attracting interest recently because it can produce a spin current without magnetism or a magnetic field. The research was triggered by the two theoretical proposals on the intrinsic mechanism of the SHE \cite{Mur,Sin}, and it has been intensively studied both theoretically and experimentally. There are various experiments on the SHE in doped semiconductors and in metals\cite{Kato,Wun,Saitoh,Valenzuela} by optical and electrical methods. In these observations in electronic systems, the spin current is seen as an effect summed over many electrons, while the motion of the individual electrons cannot be seen. Therefore, comparison between theory and experiments is sometimes indirect and not straightforward. An experimental method to see directly the electron trajectory is highly desired. At first sight it seems impossible because condensed materials have a huge number of electrons, which cannot be distinguished from each other. Apart from electronic systems, we have one example where one can observe directly the SHE as a trajectory of the particle: light \cite{M.Onoda1}. As the intrinsic SHE is induced by the Berry phase, it is not limited to electronic systems but also seen in other (even classical) wave phenomena such as light. In this SHE of light, the difference of the refractive indices at an interface of two different media plays the role of the ``electric field'' in the electronic SHE. The SHE of light at the interface was recently measured with a high accuracy of about 1$\mathrm{\AA}$ using weak measurement\cite{Hosten}. In this letter we theoretically propose a way to optically observe the trajectory of an elementary excitation driven by the SHE. We consider two candidates: transverse excitons in alkali halides and orthoexcitons in Cu$_{2}$O. We propose an experimental setup, and estimate the shift size due to the SHE, which turns out to be large enough for observation. In both systems, an electron-hole exchange coupling lifts the degeneracy of the excitonic states, which gives rise to the Berry curvature in $k$ space of the center-of-mass motion. It leads to the SHE, namely a spin-dependent trajectory of the excitons. After the radiative lifetime, these excitons emit light, whose circular polarization is determined by the exciton spins. Thus by spatially resolving the circular polarization of the emitted light, we can see how the excitons move in real space in a spin-dependent way. This is the first proposal of a real-space observation of the Berry-phase-driven SHE in electronic systems. \section{Spin Hall Effect of excitons in alkali halides} Due to the spin-orbit coupling, the exciton states with the lowest energy in alkali halides consist of an electron in the $\Gamma_{6}^{+}$ conduction band and a hole in the $\Gamma_{8}^{-}$ valence band, and these states are further classified into pure spin-triplet states (total angular momentum $J=2$) and spin singlet-triplet mixed states ($J=1$). The exchange interaction and the spin-orbit coupling lift the degeneracy among these states \cite{Ono}, and the energies of the $J=2$ excitons are lower than those of the $J=1$ due to the analytic exchange interaction. The $J=1$ excitons are allowed for optical dipolar transitions, and are suitable for real-space imaging of the SHE. Meanwhile, the $J=2$ states are dipolar forbidden. Hence we restrict ourselves to the $J=1$ excitons.
The nonanalytic exchange Hamiltonian with the basis $\{\ke{O_{x}},\ke{O_{y}},\ke{O_{z}}\}$ within the $J=1$ states is given by \cite{Cho} \begin{equation} H_{ex}(\vec{K})= \frac{\Delta_{\mathrm{LT}}}{K^{2}}(K^{2}- (\vec{K}\cdot \vec{S})^{2}), \end{equation} where $\vec{S}$ is the set of the spin-1 matrices. $\Delta_{\mathrm{LT}}$ is the longitudinal-transverse (L-T) splitting, which can be experimentally determined e.g. from polarization beating of the emission \cite{Langbein07}. We neglect higher order terms in $\vec{K}$. In addition, for simplicity, we assume that the analytic exchange (the splitting between $J=1$ and $J=2$) is much larger than the nonanalytic one $\Delta_{\mathrm{LT}}$. In the calculation of the Berry curvature, this assumption allows us to retain only the matrix elements within the $J=1$ states among the various matrix elements in the 8$\times$8 Hamiltonian in the space spanned by $J=1$ and $J=2$ states (see Ref.~\onlinecite{Cho} and Table 8 in Ref.~\onlinecite{Hoenerlage}). This Hamiltonian $H_{ex}$ is diagonalized by the eigenstates of the helicity $\lambda=(\vec{K}\cdot \vec{S})/K$ with eigenvalues $\lambda=\pm 1,0$. Hence, the eigenstates of $H_{ex}(\vec{K})$ are twofold degenerate transverse modes and a longitudinal mode, whose energies differ by $\Delta_{\mathrm{LT}}$. This L-T splitting gives rise to the Berry curvature for the $J=1$ excitons, leading to the SHE. When the eigenstates are degenerate, a wavepacket follows the semiclassical equations of motion\cite{Mur,Sundaram,Shindo,Niu2}: \begin{eqnarray} &&\dot{\vec{R}}_{c}=\frac{1}{\hbar }\frac{\partial \varepsilon_{n}(\vec{K}_{c})}{\partial \vec{K}_{c}}+\dot{\vec{K}}_{c}\times \eta^{\dag}\vec{{\cal F}}_{n}(\vec{K}_{c})\eta, \label{EOM-r}\\ &&\hbar \dot{\vec{K}}_{c}=-\frac{\partial V(\vec{R}_{c})}{\partial \vec{R}_{c}}, \ \ \ \ \dot{\eta}=-i\dot{\vec{K}}_{c}\cdot \vec{{\cal A}}_{n}(\vec{K}_{c}) \ \eta, \label{EOM} \end{eqnarray} where $\vec{R}_{c},\vec{K}_{c}$ are the center position and the wavevector of the wavepacket, $\varepsilon_{n}(\vec{K}_{c})$ is the energy dispersion of the $n$-th band, $V(\vec{R}_{c})$ is an external potential, and $\eta=(\eta_{1},\eta_{2})$ is the internal degree of freedom of the two degenerate transverse exciton bands. $\vec{{\cal A}}_{n}(\vec{K})$ and $\vec{{\cal F}}_{n}(\vec{K}_{c})$ are the Berry connection and Berry curvature, which are defined as \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle [{\cal A}_{n}^{\mu}(\vec{K})]_{ij}\equiv -i\avof{n_{i}(\vec{K})}{\frac{\partial}{\partial K_{\mu}}}{n_{j}(\vec{K})}, \\ \displaystyle {\cal F}_{n}^{\rho}(\vec{K})\equiv \epsilon_{\mu\nu\rho}\left( \frac{\partial {\cal A}_{n}^{\nu }(\vec{K})}{\partial K_{\mu}}+i{\cal A}_{n}^{\mu}(\vec{K}){\cal A}_{n}^{\nu}(\vec{K})\right), \end{array} \right. \label{BC} \end{eqnarray} where $\ke{n_{i}(\vec{K})}$ is an eigenstate of the $n$-th band and $i$ is the label for each eigenstate within the degenerate band. The term $\dot{\vec{K}}\times \eta^{\dag}\vec{{\cal F}}\eta $ in the equation of motion for $\dot{R}_{c}$ is called the anomalous velocity, which leads to the SHE. The Berry phase changes sign when the spin direction is reversed. Therefore, two wavepackets with opposite spins move in opposite directions. This mechanism is responsible for the SHE of electrons in p-type semiconductors \cite{Mur} and that of light\cite{M.Onoda1}.
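As a minimal numerical illustration (arbitrary units, using only the exchange Hamiltonian $H_{ex}(\vec{K})$ written above), the Python sketch below diagonalizes $H_{ex}$ for a generic direction of $\vec{K}$; it returns two degenerate transverse states and one longitudinal state split off by $\Delta_{\mathrm{LT}}$, and the transverse subspace carries helicity $\lambda=\pm 1$.
\begin{verbatim}
import numpy as np

# Spin-1 matrices in the Cartesian basis {|O_x>, |O_y>, |O_z>}: (S_i)_{jk} = -i*eps_{ijk}
Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Sy = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Sz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

def H_ex(K, Delta_LT=1.0):
    K = np.asarray(K, dtype=float)
    KS = K[0]*Sx + K[1]*Sy + K[2]*Sz
    K2 = K @ K
    return (Delta_LT/K2)*(K2*np.eye(3) - KS @ KS)

K = np.array([0.3, -0.7, 0.5])                     # arbitrary propagation direction
E, V = np.linalg.eigh(H_ex(K))
print("eigenvalues (expect 0, 0, Delta_LT):", np.round(E, 10))

# The two zero-energy (transverse) states carry helicity +-1:
helicity = (K[0]*Sx + K[1]*Sy + K[2]*Sz)/np.linalg.norm(K)
transverse = V[:, :2]
print("helicity eigenvalues in the transverse subspace:",
      np.round(np.linalg.eigvalsh(transverse.conj().T @ helicity @ transverse), 10))
\end{verbatim}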
The Berry curvature for the $J=1$ exciton states can be calculated from $H_{ex}$ in the same way as that in the SHE of light \cite{M.Onoda1}, because the two cases share the same feature of L-T splitting in the spin-1 systems. Therefore the Berry curvature of the transverse states with helicity $\lambda=\pm 1$ is calculated as \begin{equation} {\cal F}_{n}^{\rho}(\vec{K})=\lambda \frac{K_{\rho}}{K^{3}}. \end{equation} The longitudinal state ($\lambda=0$) has a vanishing Berry curvature, and it does not undergo a shift due to the SHE. We propose an experiment to detect the SHE in real space and evaluate the Hall shift. The SHE requires a nonzero $\dot{\vec{K}}_{c}$ as seen from Eq.~(\ref{EOM}). Namely, one should apply an external force to the exciton to see a shift due to the SHE. For electrons an electric field is sufficient, whereas an exciton cannot be accelerated by an electric field. Instead, a local strain gives rise to a potential gradient and accelerates excitons, inducing the SHE. Thus we propose the following setup: we prepare a transverse exciton wavepacket with momentum along the $z$ direction, and apply a uniaxial local strain, so that the excitons feel a force along the $x$ direction, as shown in Fig.~\ref{fig1}. \begin{figure}[htbp] \begin{center} \includegraphics[width=8cm]{fig1a.eps} \end{center} \caption{(a) Experimental setup for the detection of the SHE in alkali halides. The twofold degenerate wavepacket of the transverse excitons moves toward the center of the uniaxial trapping potential, along the $x$ direction. (b) Schematic figure of the spin Hall effect of excitons. The up- and down-spin wavepackets are deflected to the $\mp y$ directions, respectively, and they emit light with opposite circular polarizations. } \label{fig1} \end{figure} A strain-induced potential well has been developed for Cu$_2$O \cite{Naka}, but not for alkali halides to our knowledge. Therefore, we estimate the shift from existing data on alkali halides. From the data on thin-film RbI, for example, the effect of a uniaxial strain is 25--45 meV for 1 kbar \cite{Ohno,Nishimura,Itoh}. Because the crystal is easily broken by high uniaxial pressure, we take a lower value for a trapping potential, 4 meV for 0.1 kbar as an example. We assume the size of the trap to be several hundred micrometers, as developed for Cu$_2$O \cite{Naka}. Thus we consider a 200~$\mathrm{\mu m}$--4~meV configuration of the trapping potential. The force acting on the exciton wavepacket is $3.2\times 10^{-18}$N, and the corresponding rate of the wavevector change is $\dot{K}_{x}\simeq 3.0\times 10^{16}\mathrm{m^{-1}s^{-1}}$. When we take RbI for example, the typical wavenumber is $k_0=0.8\times 10^6$cm$^{-1}$. The magnitude of the Berry curvature is $F^{z}= k_{0}^{-2}\simeq 1.6\times 10^{-16}\mathrm{m^{2}}$. Therefore the anomalous velocity is $v_{\mathrm{a}}=\dot{K}_{x}F^{z}\simeq 4.8$m/s and the shift is $y_{\mathrm{a}}=v_{\mathrm{a}}\tau\simeq 8~\mathrm{nm}$, where $\tau=1.7$ns is the lifetime of the exciton in RbI, which is governed by the self-trapping process \cite{Tsu}. We note that this self-trapping instability can be reduced or avoided by choosing other materials such as III-V or II-VI compounds, AgBr, and TlBr, where the free state of the exciton is more stable than the self-trapped state. In these materials, the shift of the excitons could be much larger \cite{toyozawa}.
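The order-of-magnitude estimate quoted above can be reproduced by the following short calculation (SI units); it uses only the numbers given in the text.
\begin{verbatim}
hbar = 1.055e-34                 # J s
eV = 1.602e-19                   # J
U_trap, L_trap = 4e-3*eV, 200e-6 # 4 meV deep, 200 micrometer wide strain trap
F_ext = U_trap/L_trap            # force on the exciton, ~3.2e-18 N
Kdot = F_ext/hbar                # rate of change of the wavevector, ~3.0e16 m^-1 s^-1
k0 = 0.8e6*1e2                   # typical wavenumber of the RbI exciton, 0.8e6 cm^-1 in m^-1
F_berry = 1.0/k0**2              # |Berry curvature| ~ 1.6e-16 m^2
v_anom = Kdot*F_berry            # anomalous velocity, ~4.8 m/s
tau = 1.7e-9                     # exciton lifetime in RbI, s
print("force [N], v_a [m/s], shift [nm]:", F_ext, v_anom, v_anom*tau*1e9)
\end{verbatim}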
Because of the uncertainty principle, in order for the wavepacket to have a well-defined wavenumber, its width in $k$ space must be much smaller than the wavenumber, i.e., its size in real space must be much larger than the wavelength. Hence the ratio of the transverse shift to the size of the exciton wavepacket is small, and the direct observation of the SHE might be difficult. Nevertheless, a wavepacket deflected to the transverse direction is spin-polarized and emits circularly polarized light. Therefore, one can observe the SHE by detecting the spatial dependence of the circular polarization from the two wavepackets deflected in opposite directions. \section{Spin Hall Effect of orthoexciton in Cu$_{2}$O} In Cu$_{2}$O, the exciton state with the lowest energy, composed of the $\Gamma_{7}^{+}$ valence band and the conduction band, is the $1S$ exciton. Because the valence band and the conduction band share the same parity, radiative recombination of the $1S$ exciton is dipolar forbidden, and therefore this state has a long radiative lifetime. The four states in the $1S$ yellow excitons are classified into three $\Gamma_{5}^{+}$ orthoexciton states and one $\Gamma_{2}^{+}$ paraexciton state. The orthoexcitons are singlet-triplet mixed states, while the paraexciton is purely spin-triplet. Therefore the exchange interaction exists only in the singlet states, and affects only the energy of the orthoexcitons, while the paraexcitons remain intact. The energy splitting between ortho and paraexcitons due to the exchange interaction is about 12 meV. Furthermore, the degeneracy of the three orthoexciton states is lifted by (nonanalytic) exchange splitting. The matrix form of the exchange interaction among the orthoexciton states $\{\ke{O_{yz}},\ke{O_{zx}},\ke{O_{xy}}\}$ is given as \begin{widetext} \begin{eqnarray} H_{ex}(\vec{K})=\left[ \begin{array}{ccc} \Delta_{Q}\frac{K_{y}^{2}K_{z}^{2}}{K^{2}}+\Delta_{3}(3K_{x}^{2}-K^{2}) & (\Delta_{Q}\frac{K_{z}^{2}}{K^{2}}+\Delta_{5})K_{x}K_{y} & (\Delta_{Q}\frac{K_{y}^{2}}{K^{2}}+\Delta_{5})K_{z}K_{x} \\ (\Delta_{Q}\frac{K_{z}^{2}}{K^{2}}+\Delta_{5})K_{x}K_{y} & \Delta_{Q}\frac{K_{z}^{2}K_{x}^{2}}{K^{2}}+\Delta_{3}(3K_{y}^{2}-K^{2}) & (\Delta_{Q}\frac{K_{x}^{2}}{K^{2}}+\Delta_{5})K_{y}K_{z} \\ (\Delta_{Q}\frac{K_{y}^{2}}{K^{2}}+\Delta_{5})K_{z}K_{x} & (\Delta_{Q}\frac{K_{x}^{2}}{K^{2}}+\Delta_{5})K_{y}K_{z} & \Delta_{Q}\frac{K_{x}^{2}K_{y}^{2}}{K^{2}}+\Delta_{3}(3K_{z}^{2}-K^{2}) \\ \end{array} \right]. \label{ex-c} \end{eqnarray} \end{widetext} where the parameters are $\Delta_{Q}k_{0}^{2}=5.0\mathrm{\mu eV},\Delta_{3}k_{0}^{2}=-1.3\mathrm{\mu eV},\Delta_{5}k_{0}^{2}=2.0\mathrm{\mu eV} $\cite{dasbach} with the wavenumber $k_{0}\equiv 2.62\times 10^{7}\mathrm{m^{-1}}$, as obtained experimentally from the high resolution spectroscopy of polaritons \cite{dasbach}. \begin{figure}[htbp] \begin{center} \includegraphics[width=7.5cm]{fig7a.eps} \end{center} \caption{(a) Energy dispersion of the exchange interaction and (b) distribution of the Berry curvature $F_{n}^{\epsilon\mu}$ in Cu$_2$O. They are shown as a function of the polar angle $\theta $ of $\vec{K}$, with the azimuthal angle $\phi=45^{\circ}$. The strain $\epsilon_{yz}$ is set to be zero. } \label{fig2} \end{figure} The wave-vector dependence of the exchange interaction (\ref{ex-c}) is illustrated in Fig.~\ref{fig2}(a). The eigenenergies $E_{1}(\vec{K})$ and $E_{2}(\vec{K})$ are degenerate along the $[0\,0\,1]$ direction and $E_{2}(\vec{K})$ and $E_{3}(\vec{K})$ are degenerate along the $[1\,1\,1]$ direction.
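The degeneracy pattern described above can be checked directly from Eq.~(\ref{ex-c}) with the quoted parameters; a minimal Python sketch (energies in $\mathrm{\mu eV}$, wavevectors in units of $k_{0}$) is given below.
\begin{verbatim}
import numpy as np

DQ, D3, D5 = 5.0, -1.3, 2.0     # Delta_Q k0^2, Delta_3 k0^2, Delta_5 k0^2 in micro-eV

def H_ortho(K):
    Kx, Ky, Kz = K
    K2 = Kx**2 + Ky**2 + Kz**2
    H = np.empty((3, 3))
    H[0, 0] = DQ*Ky**2*Kz**2/K2 + D3*(3*Kx**2 - K2)
    H[1, 1] = DQ*Kz**2*Kx**2/K2 + D3*(3*Ky**2 - K2)
    H[2, 2] = DQ*Kx**2*Ky**2/K2 + D3*(3*Kz**2 - K2)
    H[0, 1] = H[1, 0] = (DQ*Kz**2/K2 + D5)*Kx*Ky
    H[2, 0] = H[0, 2] = (DQ*Ky**2/K2 + D5)*Kz*Kx
    H[1, 2] = H[2, 1] = (DQ*Kx**2/K2 + D5)*Ky*Kz
    return H

for label, K in [("[001]", (0.0, 0.0, 1.0)),
                 ("[111]", (1/np.sqrt(3),)*3),
                 ("generic", (0.3, 0.5, 0.9))]:
    print(label, np.round(np.linalg.eigvalsh(H_ortho(K)), 4))
# A twofold-degenerate level appears along [001] and [111] but not for a generic direction.
\end{verbatim}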
One possible experiment is to make a potential trap exert a force on the exciton, as we considered in alkali halides. In Cu$_2$O, however, the strain is typically of the order of meV, much larger than the exchange coupling ($\sim\mu$eV). Hence one cannot ignore the strain in the calculation of the Berry curvature. This local strain in general considerably reduces the Berry curvature stemming from the exchange coupling, because of its larger energy scale. To overcome this difficulty, we consider another type of strain: a shear strain $\epsilon\equiv\epsilon_{yz}$. The shear strain brings an additional term to the Hamiltonian, $H'_{ij}= \Lambda\epsilon_{yz}(\delta_{i2}\delta_{j3} +\delta_{i3}\delta_{j2})$. For simplicity we change the normalization of the dimensionless strain parameter $\epsilon\equiv \epsilon_{yz}$ by taking $\Lambda=8.1~\textrm{meV}$, which is the energy shift expected for a 5 kbar shear strain, as calculated from the data in \cite{Naka}. Using this, we consider the Berry curvature in the $\epsilon$--$\vec{K}$ hyperspace; the wavepacket then follows \cite{Sundaram} \begin{equation} \dot{R}_{\mu}=\frac{1}{\hbar} \frac{\partial E_{n}}{\partial K_{\mu}} -\dot{\epsilon} \eta^{\dag} {\cal F}_{n}^{\epsilon\mu}(\vec{K})\eta, \end{equation} with the Berry connection and Berry curvature defined as \begin{align} & [{\cal A}_{n}^{\epsilon}(\vec{K})]_{ij}\equiv -i\avof{n_{i}(\vec{K})}{\frac{\partial}{\partial \epsilon}}{n_{j}(\vec{K})}, \\ & {\cal F}_{n}^{\epsilon\mu}(\vec{K})\equiv \frac{\partial {\cal A}_{n}^{\mu }}{\partial \epsilon} -\frac{\partial {\cal A}_{n}^{\epsilon }}{\partial K_{\mu}} +i\left[{\cal A}_{n}^{\epsilon},\ {\cal A}_{n}^{\mu}\right]. \end{align} Because the Hamiltonian matrix $H_{ex}+H'$ is real, the eigenvectors can be chosen as real. The Berry curvature ${\cal F}^{\epsilon\mu}(\vec{K})$ is then pure imaginary and Hermitian. If the state considered is nondegenerate, the Berry curvature is scalar ($1\times 1$ matrix), and therefore it vanishes. On the other hand, when the state is twofold degenerate, as in the [001] or [111] directions, the Berry curvature is a 2$\times$2 matrix. It is therefore proportional to the Pauli matrix $\sigma_{y}$: \begin{equation} {\cal F}^{\epsilon\mu}(\vec{K})=F^{\epsilon\mu}(\vec{K})\sigma_{y}. \end{equation} Thus to see the SHE, the exciton states should be degenerate, which occurs along the high-symmetry lines. For concreteness, we hereafter focus on the twofold degeneracy along the $[0,0,1]$ direction ($\vec{K}\|\hat{z}$) as the degenerate bands in the semiclassical equation of motion (\ref{EOM}). Then the eigenstates $\ke{n_{1}(\vec{K})}$ and $\ke{n_{2}(\vec{K})}$ with eigenenergies $E_{1}(\vec{K})$ and $E_{2}(\vec{K})$ in Fig.~\ref{fig2}(a) are considered as pseudospin states. Along the $[0\,0\,1]$ direction, these states become $|O_{yz}\rangle$ and $|O_{zx}\rangle$. Figure \ref{fig2}(b) shows the distribution of $F^{\epsilon\mu}$. We note that $F^{\epsilon\mu}$ depends on gauge, even though the anomalous velocity does not, and Fig.~\ref{fig2}(b) is based on a particular gauge choice. The typical size of the Berry curvature is expected to be $F\sim(\Lambda/\Delta_{\mathrm{gap}})/k_{0}$ from consideration of relevant energy scales, where $\Delta_{\mathrm{gap}}$ denotes the energy gap between the ($\ke{n_1}$, $\ke{n_2}$) states and the $\ke{n_3}$ state. Because $\Lambda$ and $\Delta_{\mathrm{gap}}$ are of the order of meV and $\mu$eV, respectively, this estimate agrees with Fig.~\ref{fig2}(b).
In fact, for $\vec{K}\|\hat{z}$ the Berry curvature can be calculated analytically as ${\cal F}^{\epsilon x}(\vec{K})= (\Delta_5\Lambda)/(9\Delta_3 k_0^3)=4.06\times 10^{-5}\mathrm{m}$, and the other components are zero: ${\cal F}^{\epsilon y}(\vec{K})=0$, ${\cal F}^{\epsilon z}(\vec{K})=0$. The reason for the vanishing $y$ and $z$ components is the mirror symmetry with respect to the $yz$ plane, and the twofold rotational symmetry around the $z$ axis, respectively. Therefore, for $\vec{K}$ along the $[0\,0\,1]$ direction, the anomalous velocity is along the $x$ direction. Because the SU(2) Berry curvature ${\cal F}^{\mu}(\vec{K})$ is proportional to $\sigma_y$, we take the eigenvectors of $\sigma_{y}$, i.e. $\frac{1}{\sqrt{2}}\binom{1}{\pm i}$ (in the $\ke{n_{1}}$-$\ke{n_{2}}$ basis), and the semiclassical equations of motion (\ref{EOM}) are diagonalized. In this basis, the spin $\eta$ only acquires a U(1) phase during the time evolution, but does not change its direction. Therefore, for the wavenumber along the $[0\,0\,1]$ direction, the wavepackets for $(\ke{O_{zx}}\pm i\ke{O_{yz}})/\sqrt{2}$ have opposite anomalous velocities, and their spins are along $\pm z$, respectively. These excitons emit circularly polarized light depending on their spin states \cite{elliott}. This enables us to observe the spin Hall shift directly by an optical method. The anomalous velocity is proportional to $\dot{\epsilon}$. Therefore, in order to induce the SHE, the strain should be varied externally. One may consider adding an oscillating strain with frequency $\omega$. Then the typical size of the shift is $\epsilon(\Lambda/\Delta_{\mathrm{gap}})/k_{0}\sim (E_{\mathrm{str.}}/\Delta_{\mathrm{gap}})/k_{0}$, where $E_{\mathrm{str.}}(\sim\Lambda\epsilon)$ is the energy shift of excitons by strain. Thus a small strain of the order of $\mu$eV already gives rise to a shift of the order of the wavelength $\sim 600$nm. Although the radiative lifetime is $\tau_{\mathrm{rad.}}\sim 14 \mu\mathrm{s}$ \cite{Ohara}, the lifetime of the orthoexcitons is much shorter: $\tau\sim 3 \mathrm{ns}$, due to a rapid nonradiative conversion from orthoexcitons to paraexcitons. The oscillation of the strain $\epsilon$ should be faster than $1/\tau$, i.e. its frequency should be in the gigahertz range. The light emission from the orthoexciton may be reduced for several reasons. First, among all the orthoexcitons only a fraction $\tau/\tau_{\mathrm{rad.}}\sim 2\times 10^{-4}$ emits light. Nevertheless, detecting this emission should be well achievable, because the radiative decay rate of excitons has been measured in experiments \cite{Ohara}. Furthermore, when the density of the orthoexcitons exceeds a critical value ($\sim 10^{15}\mathrm{cm}^{-3}$), the spin exchange process between two orthoexcitons effectively converts orthoexcitons into paraexcitons on a short timescale ($\sim 100\mathrm{ps}$) \cite{Kubouchi}. A typical density of excitons created by a continuous-wave (CW) laser is $10^{13}$-$10^{14}\mathrm{cm}^{-3}$, well below the critical density, so this is not a problem for the proposed experiment. The interaction between orthoexcitons also leads to phase decoherence, but it does not affect the SHE, as Eqs.\ (\ref{EOM-r}) and (\ref{EOM}) remain unaffected. This situation is similar to that of electrons in solids, whose mean free path is much shorter than that of the excitons, but which still show the spin Hall effect.
This is because the spin Hall effect is the cumulative effect of the transverse motion of the particles, which does not require coherence of the process. \section{Summary and discussions} In conclusion, we have theoretically investigated the SHE of excitons in alkali halides and in Cu$_{2}$O. The exchange coupling lifts the threefold degeneracy of the orthoexcitons, while in some directions of the wavenumber a double degeneracy remains. This remaining double degeneracy gives rise to a nonzero SU(2) Berry curvature, leading to the SHE. This SHE can be observed as position-dependent circularly polarized light emitted from the orthoexcitons. Recently Yao and Niu \cite{Niu1} proposed an SHE for excitons in a GaAs quantum well. In their paper the main contribution to the Berry curvature comes from the heavy-hole light-hole mixing in the quantum well, whereas in the present paper the exchange coupling between the hole and electron spins is the main source of the Berry curvature. Because of the degeneracy of the energy spectrum, the Berry curvature is enhanced in our setup, so that the SHE becomes prominent. Furthermore, we propose in this paper a realistic setup with the target material specified. The proposed setup enables us to use modulation spectroscopy with high precision. This provides us with a space-time resolved measurement of the SHE. As a closely related subject, an optical SHE has been observed in an exciton-polariton system \cite{Leyder}, whose mechanism is entirely different from that of the SHE in the present paper. The two different mechanisms for the intrinsic SHE are (A) precession due to the $\mathbf{k}$-dependent (Zeeman-like) field acting on the spin, and (B) the anomalous velocity from the $\mathbf{k}$-space Berry curvature. Although they are often confused with each other, they are distinct. The mechanism (A) is used in the optical SHE in exciton-polaritons \cite{Leyder,Kavokin05,Glazov07,Glazov08}, and in the SHE in the Rashba system \cite{Sin}. In these cases the spin-orbit coupling is linear in the spin, which means that the spin-orbit splitting can be regarded as a ``Zeeman-like'' field, although the external magnetic field is zero. In these systems the mechanism (B) is absent because the contribution of the Berry curvature cancels between the two bands involved. On the other hand, the mechanism (B) causes the SHE in excitons in the present paper, as well as the SHE in the Luttinger model \cite{Mur}. This mechanism works even when the Hamiltonian is not linear in the spin operator. This effect due to (B) is enhanced when band crossings exist near the Fermi energy, e.g.\ in the SHE in platinum \cite{Guo}, while this is not the case for (A). Moreover, (B) gives an additional spin-dependent (anomalous) velocity and deflects the exciton trajectory, while (A) does not. Thus the mechanisms (A) and (B) are distinct, and the experiments proposed in the present paper allow a first real-space observation of the Berry-curvature mechanism (B) in electronic systems. \begin{acknowledgments} We are grateful to K. Yoshioka and M. Kuwata-Gonokami for fruitful discussions. This research is partly supported by Grants-in-Aid under the grant numbers 16076205, 17105002, 19019004, 19048008, 19048015, and 19740177 from the Ministry of Education, Culture, Sports, Science and Technology of Japan. \end{acknowledgments}
\section{Introduction\label{sec: Introduction}} Superconducting circuits based on Josephson junctions are currently one of the leading platforms for quantum information technology \cite{Makhlin2010,Ladd2010,Wendin2017}. Charge transport in such junctions exhibits a rich phenomenology due to the presence of both Cooper pairs and gapped Bogoliubov quasiparticles. The latter are a fundamental tool in quantum technologies employing hybrid nanostructures, enabling high resolution transport spectroscopy~\cite{Giaever1960,Grove2009,Dirks2009}. However, quasiparticle poisoning~\cite{Bespalov2016,Frombach2020} can hinder certain applications. Signatures of quasiparticles extend, e.g. by thermal excitation, also into the gap, where they contribute distinctive features \cite{Whan1996,Pfaller2013,Ratz2014,Gaass2014,Gramich2016}, in addition to the ones produced by Andreev processes~\cite{Andreev1964,Flensberg1988}. Adding an AC drive to nanostructures results in a plethora of additional effects and rich transport characteristics \cite{Grifoni1998,Platero2004}. The presence of a drive further opens up the possibility to manipulate the system properties using Floquet engineering \cite{Shirley1965,Oka2019}. For a nanostructure connected to superconducting leads, quasiparticle transport is modified by the possibility of photon assisted tunneling processes \cite{Tien1963,Kouwenhoven1994,Kouwenhoven1994b,Whan1996}. The resulting transport signatures have been measured e.g. in scanning tunneling microscope experiments with both a superconducting tip and a superconducting substrate \cite{Roychowdhury2015,Kot2020,Peters2020}, where it is shown that they allow discerning the nature of the charge carriers employing the separation between the AC-induced sidebands. For a Josephson junction the combination of a time-dependent bias voltage and a supercurrent further results in the appearance of Shapiro steps \cite{Shapiro1963}, a bichromatic effect arising from the interplay between the AC bias frequency and the intrinsic Josephson frequency of the junction. The microscopic origin of these steps has been investigated in several systems, most notably in quantum point contacts, where multiple Andreev reflections lead to subharmonic Shapiro steps~\cite{Dubos2001,Cuevas2002}. Recently, microwave-driven Josephson junctions have attracted interest as a way of detecting $4\pi-$periodic supercurrents, one of the key signatures of topological superconductors~\cite{Rokhinson2012,Wiedenmann2016,Laroche2019,Fischer2022}. \begin{figure} \centering \includegraphics[scale=1]{pictures/Schematic2.png} \caption{Schematic setup of a gated quantum dot (QD) coupled to two superconducting leads (labeled L and R). A bias voltage with DC and AC components is applied between the left and right superconductor. These are characterized by superconducting gaps $\Delta_{\T{L}}$ and $\Delta_{\T{R}}$, respectively. A gate voltage is applied capacitively to the quantum dot via an electric gate. } \label{fig: schematic} \end{figure} In this work we investigate the transport properties of a junction formed by an interacting quantum dot (QD) connected to two superconducting leads (an S-QD-S junction) in the presence of an AC drive. Quantum dots offer an ideal realization of the weak link required for a Josephson junction, providing an excellent platform to probe the relationship between superconducting correlations and interactions~\cite{Clerk2000,Jrgensen2007,MartinRodero2011}. 
One major characteristic of weakly coupled, small-sized quantum dots is the energy one has to pay to add extra charge to them. These charging effects due to the Coulomb repulsion are antagonistic to Cooper pairing. Thus, transfer of Cooper pairs is disfavoured compared to quasiparticle transport for strong interaction. Conversely, coherent processes involving the transport of Cooper pairs are dominant below the gap, a situation that has been studied extensively in the infinite gap limit~\cite{Pala2007,Governale2008,Hiltscher2012} and within other approximation schemes~\cite{Vecino2003,MartinRodero2011}. Several implementations of QD-based Josephson junctions have been devised, including semiconductor nanowires~\cite{vanDam2006} and carbon-based weak links such as nanotubes~\cite{Buitelaar2003,Vecino2004,Cleuziou2006,Eichler2007,Dirks2009, Grove2009,Pillet2010,Gaass2014}, fullerenes~\cite{Winkelmann2009}, and gated monolayers of graphene~\cite{Dirks2011}. QD-based Josephson junctions have moreover been proposed as a platform for quantum computing, employing Andreev spin qubits~\cite{Padurariu2010,Park2017,Pavesic2022,2205.03843}. From a theoretical point of view, non-linear transport properties of interacting S-QD-S weak links subject to both DC and AC biases have so far been scarcely investigated. This is partly due to the many-body character of the problem, together with the necessity to keep track of the number of Cooper pairs and quasiparticles transferred from one lead to another. To cope with such complexities, in our work we extend previous density operator-based treatments of transport through interacting S-QD-S junctions~\cite{Hiltscher2012,Pfaller2013,Governale2008,Pala2007,Ratz2014} to include the effects of a time-dependent AC bias. Superconducting correlations in the leads are treated within a particle-conserving approach to transport in Josephson junctions \cite{Josephson1962}, which allows one to naturally account for the non-equilibrium phenomena due to finite voltages and for the electronic correlations in the quantum dot. To this aim, we apply the Nakajima-Zwanzig projector operator formalism \cite{Nakajima1958,Zwanzig1960} to obtain an exact generalized master equation (GME) for the reduced density operator of the interacting quantum dot as well as an integral equation for the current. Moreover, formally exact expressions for both the current and the reduced density operator are derived which display the expected periodicity in both the intrinsic Josephson frequency and the frequency of the AC bias. With focus on the weak-tunneling limit, we retain only the lowest order terms in the coupling to the leads. This accounts for driven quasiparticle transport and recovers, from a microscopic theory, results similar to those found in phenomenological approaches \cite{Whan1996,Tien1963}. The full treatment of the photon-assisted tunneling processes considered in this work allows us to identify regions of the stability diagram where the strongly peaked density of states of the superconductors, together with the AC bias, results in dominant backward tunneling rates across the junction. For these regions our theory predicts total current inversion, in which the current flows against the applied DC voltage bias. The paper is structured as follows: In \cref{sec: Model} we introduce the model of the junction in a formalism preserving particle conservation in the superconducting leads.
We turn to a discussion of the transport theory for general AC-DC-driven junctions in \cref{sec: Transport for an AC-driven superconducting junction}. After truncating the resulting equations to second order in the tunnel coupling, we consider the case of a quantum dot junction in \cref{sec: Transport in the weak tunneling limit} and study the transport signatures for both the DC and AC-DC-driven situations. Finally, in \cref{sec: Conclusion} conclusions are drawn and further avenues of research are discussed. \section{Model\label{sec: Model}} We consider an S-QD-S junction, as exemplified by the setup of \cref{fig: schematic}, consisting of a gated quantum dot which is weakly coupled to two superconducting leads labeled by $l=\T{L},\T{R}$. The total Hamiltonian is of the form \begin{equation} \hat{H}(t)=\hat{H}_{\T{QD}}+\sum_{l}\hat{H}_{l}(t)+\hat{H}_{\T{T}}, \label{eq: 95} \end{equation} where $\hat{H}_{\T{QD}}$ and $\hat{H}_{l}$ are the Hamiltonians of, respectively, the QD and lead $l$. The tunneling Hamiltonian $\hat{H}_{\T{T}}$ accounts for the tunnel coupling between the leads and the QD. The latter is modeled by the single impurity Anderson model (SIAM) \cite{Anderson1961}, with \begin{equation} \hat{H}_{\T{QD}}=\sum_{\sigma}(\epsilon_{\sigma}-a_{\T{G}}eV_{\T{G}})\hat{d}^{\dagger}_{\sigma}\hat{d}_{\sigma}+U\hat{d}^{\dagger}_{\uparrow}\hat{d}_{\uparrow}\hat{d}^{\dagger}_{\downarrow}\hat{d}_{\downarrow}, \label{eq: 1} \end{equation} where $\sigma\in \lbrace \uparrow,\downarrow\rbrace$, $\epsilon_{\sigma}$ denotes the spin dependent single particle energy, $eV_{\T{G}}$ is the energy shift due to the gate voltage, multiplied by the coupling factor $a_{\T{G}}$, and $U$ is a Hubbard like interaction \cite{Hubbard1963}. In the following we will consider, without loss of generality, $a_{\T{G}}=-1$. The Fock space of the QD is spanned by the set $\{\ket{\chi}=\ket{0},\ket{\uparrow},\ket{\downarrow},\ket{2}\}$, comprising the empty, singly occupied with spin $\sigma$, and doubly occupied states, respectively. For the leads, we start by considering a Hamiltonian of the form \begin{align} \hat{H}_{l}(t) = & \sum_{\sigma,\bm{k}} (\xi_{l,\bm{k}}+\mu_{l}(t))\hat{c}^{\dagger}_{l,\bm{k},\sigma}\hat{c}_{l,\bm{k},\sigma}\nonumber \\+ & \sum_{\substack{\sigma,\sigma',\bm{q}\\\bm{k},\bm{k}'}} V_{l}(\bm{q}) \hat{c}^{\dagger}_{l,\bm{k}+\bm{q},\sigma}\hat{c}^{\dagger}_{l,\bm{k}'-\bm{q},\sigma'}\hat{c}_{l,\bm{k}',\sigma'}\hat{c}_{l,\bm{k},\sigma}. \label{eq: 62} \end{align} Here, $\hat{c}_{l,\bm{k},\sigma}$ is the annihilation operator for an electron from lead $l$ with momentum $\bm{k}$ and spin $\sigma$, with an associated energy $\xi_{l,\bm{k}}$ with respect to the chemical potential $\mu_{l}(t)$. We focus in the following on the case of isotropic dispersion such that $\xi_{l,\bm{k}}=\xi_{l,k}$ and a constant, attractive scattering potential $V_{l}(\bm{q})=-V_0$. Moreover, the chemical potentials read \begin{equation} \mu_{l}(t)=a_{l}[eV_{\T{DC}}-eV_{\T{AC}}\sin(\omega_{\T{AC}}t)], \label{eq: chem-pot} \end{equation} where $a_\T{L}-a_\T{R}=1$ and $0\leq a_\T{L}\leq 1$. In the DC-driven case, the transfer of a Cooper pair from the left to the right lead has an energy cost of $2e(\mu_\T{L} -\mu_\T{R})=2eV_\T{DC}$. In the AC case, the time-dependent part of the bias is also reflected in any transfer of Cooper pairs between the leads. As such, keeping track of the number of Cooper pairs is fundamental for a full description of non-equilibrium transport. 
Therefore, from here onward we will consider a particle-conserving approach to superconductivity, as first introduced by Josephson and independently by Bardeen~\cite{Josephson1962,Bardeen1962}. An attractive interaction in \cref{eq: 62} results in the formation of a Cooper pair condensate~\cite{Cooper1956} in which electrons are bound in time-reversed pairs. In the particle-conserving approach, the ground state is completely described by the number of Cooper pairs in the condensate $M_l$, so that we can denote it simply by $\ket{M_l}$~\cite{Leggett2008}. Cooper pairs can be broken (e.g. due to thermal effects), and the energy necessary to do so is twice the superconducting gap. As a result, the excitation spectrum of the superconductors is characteristically gapped. In order to describe these excited states, we rely on a mean-field description of the interaction. Absorbing the effect of the Hartree and Fock terms into the definition of the $\xi_{l,k}$, we are left with \begin{align} \hat{H}^{\text{MF}}_{l}(t) = & \sum_{\bm{k},\sigma} (\xi_{l,k}+\mu_{l}(t))\hat{c}^{\dagger}_{l,\bm{k},\sigma}\hat{c}_{l,\bm{k},\sigma} \nonumber \\ - & \sum_{\bm{k}} \left( \Delta_{l,\bm{k}} \hat{S}_{l} \hat{c}_{l,\bm{k},\uparrow}^\dagger\hat{c}_{l,\bar{\bm{k}},\downarrow}^\dagger + \text{H.c.}\right),\label{eq: HMF} \end{align} where $\Delta_{l,\bm{k}}=\sum_{\bm{k}^\prime}V_{l}(\bm{k}-\bm{k}^\prime)\langle \hat{S}^{\dagger}_{l} \hat{c}_{l,\bm{k}^\prime,\uparrow}\hat{c}_{l,\bar{\bm{k}}^\prime,\downarrow}\rangle$ is the superconducting gap for lead $l$. For a constant interaction, as assumed above, $\Delta_{l,\bm{k}}=\Delta_l$ and the gap is independent of $\bm{k}$, yielding s-type superconductivity. Moreover, we have employed the Cooper pair creation and annihilation operators \begin{align} \hat{S}_{l}^{\dagger}\ket{M_l}=\ket{M_l+1},\,\,\,\hat{S}_{l}\ket{M_l}=\ket{M_l-1}, \end{align} which change the number of Cooper pairs in the condensate. These operators fix particle conservation when going from \cref{eq: 62} to the mean-field description of \cref{eq: HMF}. They satisfy $\hat{S}_l^{\dagger}\hat{S}_l=1-\hat{P}_{l,0}$~\cite{Pfaller2013}, where $\hat{P}_{l,0}$ projects to the state without Cooper pairs in lead $l$. Their commutator is also given in terms of this projector operator as $\,[\hat{S}_l,\hat{S}_l^{\dagger}]=\hat{P}_{l,0}$. In the following, we shall consider macroscopic leads, such that the action of $\hat{P}_{l,0}$ can be neglected. The Hamiltonian of \cref{eq: HMF} can be diagonalized employing the (particle-conserving) Bogoliubov-Valatin transformations~\cite{Josephson1962,Bardeen1962} \begin{equation} \hat{c}_{l,\bm{k},\sigma}^{\dagger}=u_{l,k}\hat{\gamma}_{l,\bm{k},\sigma}^{\dagger}+\text{sgn}\left(\sigma\right)v_{l,k}^{*}\hat{S}_{l}^{\dagger}\hat{\gamma}_{l,\bar{\bm{k}},\bar{\sigma}}+\mathcal{O}\bigl(\hat{P}_{l,0}\bigr), \label{eq: bog-val} \end{equation} where \begin{align} u_{l,k} & =\sqrt{(1/2)\left(1+\xi_{l,k}/E_{l,k}\right)},\\ v_{l,k} & =e^{i\arg({\Delta_l})}\sqrt{\left(1/2\right)\left(1-\xi_{l,k}/E_{l,k}\right)}. \end{align} Here, we have introduced the quasiparticle excitation energy $E_{l,k}=\sqrt{\xi_{l,k}^2+|\Delta_l|^2}$. \cref{eq: bog-val} defines the Bogoliubov quasiparticle operators $\hat{\gamma}^{(\dagger)}_{l,\bm{k},\sigma}$, which describe the fermionic excitations of the system. 
These excitations are fermionic for macroscopic leads, since \begin{equation} \{\hat{\gamma}_{l,\bm{k},\sigma},\hat{\gamma}^\dagger_{l',\bm{k}',\sigma'}\} =\delta_{l,l'}[\delta_{\bm{k},\bm{k}'}\delta_{\sigma,\sigma'}+\mathcal{O}(\hat{P}_{l,0})].\label{eq: comm-rel} \end{equation} The ground state is the vacuum for the quasiparticles, since $\hat{\gamma}_{l,\bm{k},\sigma}\ket{M_l}=0,\,\forall l,\bm{k},\sigma$. In turn, for a given particle number $N_l$, the excitation spectrum can be obtained by applying the Cooper pair and quasiparticle operators (which commute, up to factors $\mathcal{O}\bigl(\hat{P}_{l,0}\bigr)$) to the ground state. E.g. the state with two excited quasiparticles is \begin{equation} \hat{S}_l\hat{\gamma}^\dagger_{l,\bm{k},\sigma}\hat{\gamma}^\dagger_{l,\bm{k}',\sigma'}\ket{M_l}. \end{equation} For arbitrary $N_l$, the Fock space is spanned by states of the form $\ket{M_l,\left\{\nu_{l,\bm{k},\sigma}\right\}}$, where $\nu_{l,\bm{k},\sigma}$ is the occupation of a given quasiparticle mode. Employing these properties, it can further be shown that $[\hat{N}_l,\hat{S}^\dagger_l]=2\hat{S}^\dagger_l$, with $\hat{N}_l$ the fermion number operator~\cite{Josephson1962}. Once the Hamiltonian \cref{eq: 62} is diagonalized, we may split it into quasiparticle and Cooper pair parts $\hat{H}_{l}=\hat{H}_{\T{QP},l}+\hat{H}_{\T{CP},l}$. Here \begin{equation} \hat{H}_{\T{QP},l}(t)=\sum_{\bm{k},\sigma}(E_{l,k}+\mu_{l}(t))\hat{\gamma}^{\dagger}_{l,\bm{k},\sigma}\hat{\gamma}_{l,\bm{k},\sigma}, \label{eq: 2} \end{equation} describes the quasiparticle excitations, and \begin{equation} \hat{H}_{\T{CP},l}(t)=\mu_{l}(t)\sum_{\bm{k},\sigma}\big(\hat{c}^{\dagger}_{l,\bm{k},\sigma}\hat{c}_{l,\bm{k},\sigma}-\hat{\gamma}^{\dagger}_{l,\bm{k},\sigma}\hat{\gamma}_{l,\bm{k},\sigma}\big), \label{eq: 3} \end{equation} accounts for the Cooper pairs in lead $l$. The coupling between the dot and the leads is mediated by the tunneling Hamiltonian \begin{align} \hat{H}_{\T{T}}= &\sum_{l,\bm{k},\sigma}\left(t_l\hat{c}^\dagger_{l,\bm{k},\sigma}\hat{d}_\sigma + \text{H.c.}\right) \nonumber \\ = & \sum_{l,\bm{k},\sigma}\left[t_l\left(u_{l,k}\hat{\gamma}^\dagger_{l,\bm{k},\sigma} + \text{sgn}(\sigma)v^*_{l,k}\hat{S}^\dagger_l\hat{\gamma}_{l,\bar{\bm{k}},\bar{\sigma}}\right)\hat{d}_\sigma + \text{H.c.}\right]. \end{align} Introducing a Fock index $p=\pm$, such that $\hat{f}^{+}:=\hat{f}^\dagger$ and $\hat{f}^- := \hat{f}$ (as well as $h^{+}:=h^*,h^-:=h$ for complex numbers), the tunneling Hamiltonian can be written as \begin{align} \hat{H}_{\T{T}}= & \sum_{\substack{l,\bm{k},\sigma,p}}pt_{l}^{p}\hat{c}^{p}_{l,\bm{k},\sigma}\hat{d}^{\bar{p}}_{\sigma} \nonumber \\= & \sum_{l,\bm{k},\sigma,p}pt_{l}^{p}(u^{\bar{p}}_{l,k}\hat{\gamma}^{p}_{l,\bm{k},\sigma}+\T{sgn}(\sigma) v^{p}_{l,k}\hat{S}^{p}_{l}\hat{\gamma}^{\bar{ p}}_{l,\bar{\bm{k}},\bar{\sigma}})\hat{d}^{\bar{p}}_{\sigma}. \label{eq: 4} \end{align} The first and second terms of \cref{eq: 4} correspond to normal and anomalous tunneling, respectively. The anomalous tunneling term, in particular, involves simultaneously quasiparticles and Cooper pairs in the process. \section{Transport theory for an AC-driven superconducting junction\label{sec: Transport for an AC-driven superconducting junction}} In this section a transport theory is presented to study the superconducting junction introduced in the previous section. 
The formalism extends former works on S-QD-S systems employing a generalized master equation (GME) for the reduced density operator, derived through a real-time Keldysh approach or the Nakajima-Zwanzig formalism \cite{Governale2008,Hiltscher2012,Pfaller2013,Gaass2014,Ratz2014}. The strength of the GME approach is to allow one to consider the interaction inside the QD exactly, while treating the tunnel coupling between the leads and the QD perturbatively. In this work we include both DC and AC biases as well as anomalous and normal contributions arising at finite $|\Delta_{l}|$. The formalism reproduces results of previous works in the appropriate parameter regimes. See e.g. \cite{Pfaller2013,Ratz2014,Hiltscher2012} and references therein. \subsection{Current} We will be mainly concerned with studying the current through the junction. As a convention, we take the current to be the flow of charge from the dot into the left lead \footnote{Due to particle conservation in this formalism, any displacement currents vanish in the time average. Therefore, alternative conventions differ in at most a phase(sign) for the AC(DC) part of the current.}. The current operator is then given by \begin{equation} \hat{I}_{\T{L}}=e\dot{\hat{N}}_{\T{L}}=\frac{ie}{\hbar}[\hat{H}(t),\hat{N}_{\T{L}}]=\frac{e}{i\hbar}\sum_{\substack{\bm{k},\sigma,p}}t_{\T{L}}^{p} \hat{c}^{p}_{\T{L},\bm{k},\sigma}\hat{d}^{\bar{p}}_{\sigma}, \label{eq: 37} \end{equation} where $e$ is the negative charge of an electron. The expectation value of the current can be obtained as \begin{equation} I_{\T{L}}(t)=\T{Tr}\lbrace\hat{I}_{\T{L}}\hat{\rho}_{\T{tot}}(t)\rbrace,\label{eq: exp val of I} \end{equation} where $\hat{\rho}_{\T{tot}}(t)$ is the density operator for the junction, which satisfies the Liouville-von-Neumann equation \begin{equation} \frac{d}{dt}\oprho_{\T{tot}}(t)=\frac{1}{i\hbar}[\hat{H}(t),\hat{\rho}_{\T{tot}}(t)]=\mathcal{L}_{\T{tot}}(t)\hat{\rho}_{\T{tot}}(t), \label{eq: Liouvillian} \end{equation} where $i\hbar\mathcal{L}_{\T{tot}}(t)\hat{O}= [\hat{H}(t),\hat{O}]$ defines the Liouvillian superoperator. Similarly, we define $\mathcal{L}_{\T{QD}},\mathcal{L}_{\T{CP}}(t),\mathcal{L}_{\T{QP}}(t)$ and $\mathcal{L}_{\T{T}}$ by restricting $\hat{H}$ to the respective part of the Hamiltonian in \cref{eq: Liouvillian}. Josephson junctions are characterized by the DC and AC Josephson effects \cite{Josephson1962,Josephson1974}. At a constant voltage bias $V_\T{DC}$, the steady state current through the junction will be, in general, periodic in time, with the periodicity given by the Josephson frequency $\omega_\T{J} = 2eV_\T{DC}/\hbar$. In our case, it is more convenient to employ instead the Josephson frequency associated to the coupling to either lead in vectorial form \begin{equation} \bm{\omega}_\T{DC}=\frac{2eV_\T{DC}}{\hbar}(a_\T{L},a_\T{R}). \end{equation} In order to treat the periodicity coming from the AC Josephson effect and the one coming from the AC voltage bias on the same formal footing, we perform the following unitary transformation on the density operator \begin{equation} \begin{aligned} \oprho_{\T{tot}}'(t)=\mathcal{U}(t)\oprho_{\T{tot}}(t)=\exp(-\int_{t_0}^{t}dt'\mathcal{L}_{\T{CP}}(t'))\oprho_{\T{tot}}(t). \label{eq: 9} \end{aligned} \end{equation} We denote the transformed operators with a prime from here onward. 
The Liouville-von Neumann equation for the transformed density operator reads \begin{equation} \frac{d}{dt}\oprho'_{\T{tot}}(t)=[\mathcal{L}_{\T{QD}}+\mathcal{L}_{\T{QP}}(t)+\mathcal{L}_{\T{T}}'(t)]\hat{\rho}_{\T{tot}}'(t). \label{eq: Liouvillian-trans} \end{equation} The transformation removes the Cooper pair Liouvillian at the cost of rendering the tunneling Liouvillian time dependent. Meanwhile, $\mathcal{L}_{\T{QD}}$ and $\mathcal{L}_{\T{QP}}(t)$ are unchanged, since they commute with the Cooper pair operators. On the other hand, employing that $\mathcal{L}_{\T{CP}}(t)\hat{S}^p_l=2p\mu_l(t)\hat{S}^p_l$, the tunneling Liouvillian becomes \begin{align} \mathcal{L}_{\T{T}}^\prime(t)= &\sum_{l,\bm{k},\sigma,p,\alpha} \frac{pt^{p}_{l}}{i\hbar} \bigg[ u^{\bar{p}}_{l,k}\hat{\gamma}^{p,\alpha}_{l,\bm{k},\sigma}\nonumber \\ + & \T{sgn}(\sigma) v^{p}_{l,k}\hat{\gamma}^{\bar{p} ,\alpha}_{l,\bar{\bm{k}},\bar{\sigma}}e^{\frac{2ip}{\hbar}\int_{t_0}^{t}dt'\mu_l\left(t'\right)}\hat{S}_{l}^{p,\alpha}\bigg] \hat{d}^{\bar{p},\alpha}_{\sigma}, \label{eq: 12} \end{align} where we introduced the superoperator index $\alpha$, defined by \begin{equation} \hat{X}^{\alpha}\hat{Y}=\begin{cases} \hat{X}\hat{Y}, & \T{if }\, \alpha=+,\\ \hat{Y}\hat{X}, &\T{if }\, \alpha=-. \end{cases} \end{equation} This enables us to write the action of the Liouvillian in the compact form \begin{align} i\hbar\mathcal{L}_{q}(t)\hat{O}=\sum_{\alpha}\alpha\hat{H}_q^{\alpha}(t)\hat{O}, \end{align} for $q=$ QD, QP, CP, and T. Notice that the time integration over the lead chemical potentials in \cref{eq: 12} leads to a function of time which is periodic in both $\bm{\omega}_\T{DC}$ and $\omega_\T{AC}$. This property can be made explicit by employing the Jacobi-Anger expansion \cite{Abramowitz1965} to find \begin{align} e^{\frac{2ip}{\hbar}\int_{t_0}^{t}dt'\mu_{l}\left(t'\right)} =\sum_{n}i^{n}J_{n}\left(p\,a_l\epsilon_\text{AC}\right)e^{i\left(p\bm{u}_{l}\cdot\bm{\omega}_{\text{DC}}+n\omega_{\text{AC}}\right)t}, \label{eq: integral-mu} \end{align} where $\epsilon_\text{AC}=2eV_{\T{AC}}/(\hbar\omega_{\T{AC}})$ is a parameter quantifying the strength of the drive, $J_n(z)$ is the $n$th order Bessel function of the first kind, $\bm{u}_l=(\delta_{l,\T{L}},\delta_{l,\T{R}})$, and we chose $t_{0}=\pi/(2\omega_{\text{AC}})$ for convenience. Any other choice is identical up to an additional phase factor absorbed into $v_{l,k}$. \cref{eq: Liouvillian-trans} is the starting point to derive a generalized master equation (GME) for the reduced density operator which we will employ to describe the transport properties of the system. \subsection{Generalized master equation\label{subsec: Identifying the proper reference state of the bath}} The density operator describes the full coherent dynamics of the QD and the leads, with the latter including the infinite degrees of freedom of the quasiparticles and the Cooper pairs. In turn, the current formula \cref{eq: exp val of I} involves the total trace of the product of the current and density operator. We conveniently write it as \begin{equation} \T{Tr}\lbrace\cdots\rbrace=\T{Tr}_\T{sys}\lbrace\T{Tr}_\T{QP}\lbrace\cdots\rbrace\rbrace, \end{equation} where $\T{Tr}_\T{QP}\lbrace\cdots\rbrace$ is the trace over the quasiparticle degrees of freedom, while $\T{Tr}_\T{sys}\lbrace\cdots\rbrace$ is the trace over the rest (i.e. quantum dot and Cooper pairs).
This opens up the possibility to drastically reduce the number of degrees of freedom that have to be considered by introducing the reduced density operator \begin{equation} \oprho'(t)=\T{Tr}_{\T{QP}}\lbrace\hat{\rho}_{\T{tot}}'(t)\rbrace. \label{eq: def rho} \end{equation} \cref{eq: Liouvillian-trans} leads to the generalized master equation for the reduced density operator \begin{equation} \frac{d}{dt}\oprho'(t)=\mathcal{L}_{\T{QD}}\oprho'(t) +\int_{0}^{t}ds\mathcal{K}'_{\T{T}}(t,s)\oprho'(s), \label{eq: 43} \end{equation} where $\mathcal{K}'_{\T{T}}(t,s)$ denotes the tunneling kernel superoperator and we assumed a factorized initial state of system and quasiparticles. The tunneling kernel superoperator includes the effect of the quasiparticles and introduces irreversibility in the time evolution. The current can also be given in an integral form \begin{equation} I_{\T{L}}(t)=\int_{0}^{t}ds\T{Tr}_\T{sys}\lbrace\mathcal{K}^\prime_{\T{I},\text{L}}(t,s)\oprho'(s)\rbrace, \label{eq: 49} \end{equation} where we have introduced the current kernel $\mathcal{K}'_{\T{I},\text{L}}$. The integral form of $\mathcal{K}'_{j}(t,s)$, where $j=\T{T},(\T{I,L})$ stands respectively for the tunneling and current kernels, is derived in \cref{App: Derivatio of the kernels}. The kernel superoperator can be expanded as a power series in the tunneling amplitudes. In the weak tunneling limit, only the lowest order contribution in the tunneling coupling is accounted for. Due to particle conservation, this corresponds to terms $\propto\mathcal{L}_\T{T}^{\prime2}(t)$. The kernel is then given by \begin{align} \mathcal{K}^{\prime(2)}_{\T{T}}(t,s)\oprho'(s)=\T{Tr}_{\T{QP}}\lbrace \mathcal{L}_{\T{T}}'(t)\mathcal{G}'_0(t,s)\mathcal{L}_{\T{T}}'(s)\oprho'(s)\otimes\oprho_{\T{QP}}\rbrace, \label{eq: Tunneling Kernel} \end{align} where \begin{align} \mathcal{G}'_0(t,s)=\exp(\int_s^t ds' \mathcal{L}_{\T{QD}}+\mathcal{L}_{\T{QP}}(s')), \label{eq: freeprop} \end{align} is the free propagator and $\oprho_{\T{QP}}$ is the grand canonical density operator of the quasiparticles at thermal equilibrium (see \cref{App: Derivatio of the kernels}). The time-dependence of the chemical potential translates here into a time-dependence of $\mathcal{L}_{\T{QP}}(t)$ (c.f. \cref{eq: 2}). This, together with the time-dependence of $\mathcal{L}_{\T{T}}'(t)$ (i.e. \cref{eq: 12}), breaks time translation invariance and results in a kernel which is a function of two time variables. Similarly, the current kernel is given by \begin{align} \mathcal{K}^{\prime(2)}_{\T{I},\T{L}}(t,s)\oprho'(s)=\T{Tr}_\T{QP}\lbrace \hat{I}'_{\T{L}}(t)\mathcal{G}'_0(t,s)\mathcal{L}_{\T{T}}'(s)\oprho'(s)\otimes\oprho_{\T{QP}}\rbrace. \label{eq: Currentkernel2} \end{align} Note that the current operator is also transformed by \cref{eq: 9}. The similarity between the current and tunneling kernels will allow us to treat them in the same manner in the following. \subsection{Dynamics in the Cooper pair number representation \label{subsec: Eliminating the Cooper pairs}} The reduced density operator operates on both the Cooper pair and quantum dot Fock spaces. The kernel, similarly, also acts on these degrees of freedom due to the Cooper pair operators appearing in the tunneling Liouvillian (c.f.~\cref{eq: 12}). This is true even if the Cooper pair Liouvillian no longer appears explicitly due to the unitary transformation (e.g. in \cref{eq: freeprop}). To understand its complex evolution, we consider first the dynamics of the Cooper pair degrees of freedom. 
We start by writing $\hat{\rho}^{\prime}$ explicitly as \begin{equation} \hat{\rho}^{\prime}\left(t\right)=\sum_{\substack{\bm{M},\Delta\bm{M}\\\chi,\chi'}}\varrho^{\prime}_{\chi,\chi'}(\Delta\bm{M},\bm{M};t)\ketbra{\chi,\bm{M}+\Delta\bm{M}}{\chi',\bm{M}}, \label{eq: structure of rho} \end{equation} where the ket and bra parts have the same total number of particles. This accounts for the appropriate superselection rules discussed in \cref{App: Selection rules}. This is the Cooper pair number representation. The vector $\bm{M}=(M_{\T{L}},M_{\T{R}})$ labels the Cooper pair numbers of the two condensates, while $\Delta\bm{M}=(\Delta M_{\T{L}}, \Delta M_{\T{R}})$ measures the difference of the particle content of the condensates in the bra and ket parts of the density operator. Similarly, we write the action of the kernels explicitly in Cooper pair space as \begin{align} \mathcal{K}'_{j}\left(t,s\right)\hat{\mathcal{O}} & =\sum_{\bm{N}^{+},\bm{N}^{-}}\kappa'_{j}\left(\bm{N}^{+},\bm{N}^{-};t,s\right) \bm{\hat{S}}^{\bm{N}^{+}}\hat{\mathcal{O}}\bm{\hat{S}}^{\bm{N}^{-}}, \label{eq: action of K} \end{align} where we use the shorthand notation $\bm{\hat{S}}^{\bm{N}} =\hat{S}_\T{L}^{N_\T{L}}\hat{S}_\T{R}^{N_\T{R}}$, $\hat{S}_l^{N}=(\hat{S}_l^{\T{sign}(N)})^{|N|}$. In \cref{eq: action of K} we used explicitly that, in the transformed frame, the kernel acts on the condensates only through the number of Cooper pair operators applied to the left and to the right of its argument. Inserting \cref{eq: structure of rho,eq: action of K} into \cref{eq: 49} yields \begin{align} I_{\text{L}}(t)=&\int_0^t ds \sum_{\bm{N}^+,\bm{N}^-}\sum_{\substack{\bm{M},\Delta\bm{M}}}\sum_{\chi,\chi'}\varrho^{\prime}_{\chi,\chi'}(\Delta\bm{M},\bm{M};s)\nonumber\label{eq: Current cyclic property}\\ \times&\T{Tr}_\T{QD}\bigg\lbrace \kappa^\prime_{\T{I}}(\bm{N}^+,\bm{N}^-;t,s)\ket{\chi}\bra{\chi'}\bigg\rbrace\nonumber\\ \times&\sum_{\bm{M}'} \bra{\bm{M}'}\bm{S}^{\bm{N}^+}\ketbra{\bm{M}+\Delta\bm{ M}}{\bm{M}}\bm{S}^{\bm{N}^-}\ket{\bm{M}'}\nonumber\\ =&\int_0^t ds \sum_{\bm{N}^+,\bm{N}^-}\sum_{\bm{M}}\sum_{\chi,\chi'}\varrho^{\prime}_{\chi,\chi'}(-\bm{N}^+-\bm{N}^-,\bm{M};s)\nonumber\\ \times&\T{Tr}_\T{QD}\bigg\lbrace \kappa^\prime_{\T{I}}(\bm{N}^+,\bm{N}^-;t,s)\ket{\chi}\bra{\chi'}\bigg\rbrace, \end{align} where we used the cyclic property of the trace. This form of the current allows us to define \begin{align} \kappa'_{j}\left(\bm{N};t,s\right) & =\sum_{\bm{N}^{+}}\kappa'_{j}\left(\bm{N}^{+},\bm{N}-\bm{N}^{+};t,s\right), \label{eq: simplified KI}\\ \hat{\varrho}'(\Delta \bm{M};t) &= \sum_{\bm{M},\chi,\chi'}\varrho'_{\chi,\chi'}(\Delta \bm{M},\bm{M};t)\ketbra{\chi}{\chi'}, \label{eq: simplified rho} \end{align} where the $\hat{\varrho}'(\Delta \bm{M};t)$ are now operators in the space of the QD only. They are a generalization of the partial trace over the CP degrees of freedom. In particular, $\hat{\varrho}'(\bm{0};t)$ is the partial trace proper. That is, it is the sum over the diagonal elements of the density matrix in the CP space. Similarly, the remaining $\hat{\varrho}'(\Delta \bm{M};t)$ are sums over the subdiagonals or supradiagonals of the density matrix. Furthermore, in \cref{eq: simplified KI} $\bm{N}=\bm{N}^++\bm{N}^-$ is the total Cooper pair imbalance added between time $s$ and time $t$.
In terms of these partial reduced operators and kernels, the current formula takes the form \begin{align} I_{\text{L}}(t)=&\int_0^tds\sum_{\Delta\bm{M}}\T{Tr}_\T{QD}\bigg\lbrace \kappa_{\T{I,L}}^\prime(-\Delta\bm{ M};t,s)\hat{\varrho}^\prime(\Delta\bm{M};s)\bigg\rbrace. \label{eq: Current simplified} \end{align} \cref{eq: Current simplified} is extremely convenient. It reduces the transport problem to the evaluation of coupled reduced operators which act only in the QD space, with the Cooper pair imbalance $\Delta\bm{M}$ acting as a parameter. The GME for $\hat{\varrho}^\prime(\Delta\bm{M};t)$ is obtained by taking the matrix element $\bra{\Delta\bm{M}+\bm{M}}\hat{O}\ket{\bm{M}}$ on either side of \cref{eq: 43} and summing over $\bm{M}$ in much the same way as for the current. We find \begin{align} &\frac{d}{dt}\hat{\varrho}^\prime(\Delta\bm{ M};t)=\mathcal{L}_{\T{QD}}\hat{\varrho}^\prime(\Delta\bm{ M};t)\nonumber\\+&\sum_{\Delta \bm{M}'}\int_0^t ds\kappa_{\T{T}}^\prime(\Delta\bm{ M}-\Delta\bm{ M}';t,s)\hat{\varrho}^\prime(\Delta\bm{ M}';s),\label{eq: EOM simplified} \end{align} which is of Toeplitz form in the Cooper pair number $\Delta\bm{M}$. This suggests solving \cref{eq: EOM simplified} by going to a representation which uses the conjugate variable of $\Delta\bm{M}$, as we discuss next. \subsection{Dynamics in the phase representation \label{subsec: CPpart2}} The adjoint variable to the Cooper pair imbalance $\Delta\bm{M}$ from the last section has the form of a phase vector $\bm{\varphi}=(\varphi_{\T{L}},\varphi_{\T{R}})$. Notice that $\varphi_\T{L}$ and $\varphi_\T{R}$ are not the absolute phases of the superconducting condensates known in the BCS mean field framework. We use the symbol $\circ$ to denote quantities in the phase representation and omit the prime in order not to clutter the notation further. The kernels and the reduced QD operators are given in the phase representation by \begin{align}\kappa_{j}^{\circ}(\bm{\varphi};t,s) &= \sum_{\Delta\bm{M}}e^{i\Delta\bm{M}\cdot\bm{\varphi}}\kappa_{j}^{\prime}(\Delta\bm{M};t,s),\label{eq:kernel-ph}\\ \hat{\varrho}^{\circ}(\bm{\varphi};t)& =\sum_{\Delta\bm{M}}e^{i\Delta\bm{M}\cdot\bm{\varphi}} \hat{\varrho}^{\prime}(\Delta\bm{M};t).\label{eq:dm-ph} \end{align} We transform the GME into the phase representation by multiplying \cref{eq: EOM simplified} with $\exp(i\Delta\bm{M}\cdot\bm{\varphi})$ throughout and summing over $\Delta\bm{M}$, which after a short calculation yields \begin{equation} \frac{d}{dt}{\hat{\varrho}}^{\circ}(\bm{\varphi};t)=\mathcal{L}_{\text{QD}}{\hat{\varrho}}^{\circ}(\bm{\varphi};t)+\int_{0}^{t}ds{\kappa}_{\text{T}}^{\circ}(\bm{\varphi};t,s){\hat{\varrho}}^{\circ}(\bm{\varphi};s). \label{eq: GME in phase final} \end{equation} Importantly, in \cref{eq: GME in phase final} the kernel is diagonal in phase and instead of an infinite coupled set of equations, we recover a continuous family of uncoupled GMEs for the reduced QD operators $\hat{\varrho}^{\circ}$ in phase representation. This simplified form is possible because the kernel involves only CP operators, which are diagonal in the phase representation.
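The step leading to \cref{eq: GME in phase final} is simply the diagonalization of a discrete convolution by a Fourier series. As a generic numerical illustration of this point (a toy example with scalar components and a single periodic imbalance index, not tied to the actual kernels or operators of this work), one can verify that a kernel coupling the components only through differences $\Delta\bm{M}-\Delta\bm{M}'$ acts as a pointwise multiplication in the conjugate phase variable:
\begin{verbatim}
# Toy check: a Toeplitz/convolution coupling in the imbalance DM becomes
# a pointwise product in the conjugate phase variable.
import numpy as np

N = 64
rng = np.random.default_rng(1)
kappa = rng.normal(size=N) + 1j*rng.normal(size=N)  # kernel components kappa(DM)
rho   = rng.normal(size=N) + 1j*rng.normal(size=N)  # reduced components rho(DM)

# Number representation: sum_DM' kappa(DM - DM') rho(DM')
conv = np.array([sum(kappa[(dm - dmp) % N]*rho[dmp] for dmp in range(N))
                 for dm in range(N)])

# Phase representation: f(phi_j) = sum_DM exp(i DM phi_j) f(DM), phi_j = 2 pi j/N
phases = np.exp(1j*np.outer(2*np.pi*np.arange(N)/N, np.arange(N)))
to_phase = lambda f: phases @ f

# The coupled set of equations decouples: convolution -> pointwise product
print(np.allclose(to_phase(conv), to_phase(kappa)*to_phase(rho)))   # True
\end{verbatim}
In the actual problem the components are operators on the QD Fock space and the imbalance is a two-component integer vector, but the decoupling mechanism is identical.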
In order to evaluate the current, we also bring \cref{eq: Current simplified} into the phase representation \begin{align} I_{\text{L}}(t) =&\int_0^tds\sum_{\Delta\bm{M},\Delta\bm{M}'}\int_{\square}d\bm{\varphi} e^{i(\Delta\bm{M}+\Delta\bm{M}')\cdot{\bm{\varphi}}}\nonumber\\ \times&\T{Tr}_\T{QD}\bigg\lbrace\kappa_{\T{I,L}}^\prime(\Delta\bm{ M}';t,s)\hat{\varrho}^\prime(\Delta\bm{M};s)\bigg\rbrace\nonumber\\ =&\int_{0}^{t}ds\int_{\square}d\bm{\varphi}\text{Tr}_{\text{QD}}\bigg\lbrace{\kappa}_{\text{I},\text{L}}^{\circ}(\bm{\varphi};t,s){\hat{\varrho}}^{\circ}(\bm{\varphi};s)\bigg\rbrace, \label{eq: Current in phase rep} \end{align} where we write $\int_{\square}d\bm{\varphi}=\frac{1}{4\pi^2}\int_{0}^{2\pi}d\varphi_{\T{L}}\int_{0}^{2\pi}d\varphi_{\T{R}}$ as a shorthand. Hence, the trace over the Cooper pair imbalance becomes an integral over the phase variable in the phase representation. Since the integral extracts the constant component in the phases, and given \cref{eq:kernel-ph,eq:dm-ph}, its effect is indeed to select the diagonal component precisely as the trace would do. \subsection{The steady state} The form of the kernels in \cref{eq: Current in phase rep,eq: GME in phase final} contains two time variables, and accounts for both the effect of the time-periodic elements and the memory time of the leads. Hence, in order to recover the time-periodic behaviour in the steady state, it is convenient to write the kernels in a form which displays these two time dependences separately. Such a representation can be made explicit by employing the fact that in \cref{eq: 12} each $\hat{S}$ operator is associated with a phase factor of the form given in \cref{eq: integral-mu}. After performing the integral in \cref{eq: freeprop}, we expand the terms in the constituents of the kernels as \begin{align} \mathcal{L}_{\T{T}}'(t)=&\sum_{n,\bm{m}}\ell_{\T{T};n,\bm{m}} e^{i(n\omega_\T{AC}+\bm{m}\cdot\bm{\omega}_\T{DC})t}e^{i\bm{m}\cdot\hat{\bm{\varphi}}}, \label{eq: expansions1}\\ \hat{I}_{\text{L}}^{\prime}\left(t\right)=&\sum_{n,\bm{m}}\hat{j}_{\text{L};n,\bm{m}}e^{i\left(n\omega_{\text{AC}}+\bm{m}\cdot\bm{\omega}_{\text{DC}}\right)t}e^{i\bm{m}\cdot\hat{\bm{\varphi}}}, \label{eq: expansions2}\\ \mathcal{G}'_0(t,s)=&\sum_n g'_{0,n}(t-s)e^{in\omega_\T{AC}s}, \label{eq: expansions3} \end{align} where in \cref{eq: expansions1} $\bm{m}$ is restricted to $0,\pm\bm{u}_l$. The kernels to second order only contain contributions of the form of \cref{eq: expansions1,eq: expansions2,eq: expansions3} and thus one can bring them into the form \begin{align} {\kappa}^\circ_{j}\left(\bm{\varphi};t,s\right) =& \sum_{n,\bm{m}} k_{j,n,\bm{m}}\left(t-s\right) e^{i[\bm{m}\cdot(\bm{\omega}_{DC}s+\bm{\varphi})+n\omega_{AC}s]}, \label{eq: Fourierdecomposed K} \end{align} which manifestly separates into time-translational invariant operators $k_{\T{T};n,\bm{m}}(t-s)$ and periodic factors. However, this property also holds to all orders as higher order kernels at most contain the convolutions of terms of the form \cref{eq: expansions1,eq: expansions2,eq: expansions3} taken at different times. For any order in the coupling one can then extend the periodic phase factors to the appropriate boundaries of the convolution and collect them, thus again arriving at \cref{eq: Fourierdecomposed K}. The time-translational invariant parts decay exponentially at long times, ensuring the existence of a steady state for the reduced QD operator.
In \cref{eq: Fourierdecomposed K} $\bm{m}$ has turned into a Fourier index in both the phase and the Josephson frequency. The bichromatic nature of the junctions dynamics is reflected in the presence of both $\omega_\T{AC}$ and $\bm{\omega}_\T{DC}$. Due to the dependence of the kernel components $k_{\T{T};n,\bm{m}}(t-s)$ on a single time variable with finite memory time, \cref{eq: GME in phase final} can be turned into an algebraic equation through a Laplace transform $\tilde{F}(\lambda)=\int_0^\infty dt e^{-\lambda t} F(t)$. We denote with a tilde the terms in Laplace space from here onward. Applying this transformation to \cref{eq: GME in phase final} yields \begin{align} & 0 =(\mathcal{L}_{\T{QD}} - \lambda)\hat{\tilde{\rho}}^\circ(\bm{\varphi};\lambda)+\hat{\rho}^\circ(\bm{\varphi};0) \label{eq: GME-Laplace} \\& +\sum_{n,\bm{m}}e^{i\bm{m}\cdot\bm{\varphi}}\tilde{k}_{\T{T};n,\bm{m}}\left(\lambda\right)\hat{\tilde{\rho}}^\circ(\bm{\varphi};\lambda-in\omega_{\text{AC}}-i\bm{m}\cdot\bm{\omega}_{\text{DC}}).\nonumber \end{align} Note that the second term in the right side is the reduced dot operator at initial time. Hence, it is a real time quantity (i.e. not a Laplace space quantity). Knowledge of $\hat{\tilde{\rho}}^\circ(\bm{\varphi};\lambda)$ in turn allows one to find the steady state solution through the application of the final value theorem generalized to periodic functions~\cite{Grifoni1998}. In particular, the reduced QD operator has the following asymptotic form \begin{equation} \hat{\rho}^{\circ\infty}\left(\bm{\varphi};t\right)=\sum_{n,\bm{m}}\hat{\varrho}^\circ_{n,\bm{m}}(\bm{\varphi})e^{i(n\omega_{\text{AC}}+\bm{m}\cdot\bm{\omega}_{\text{DC}})t},\label{eq: time-periodic-DO} \end{equation} where we have defined the operatorial Fourier coefficients of the quasiperiodic reduced QD operator \begin{align} \hat{\varrho}^\circ_{n,\bm{m}}(\bm{\varphi})= & \lim_{\lambda\to in\omega_{\text{AC}}+i\bm{m}\cdot\bm{\omega}_{\text{DC}}} \nonumber \\ & (\lambda-in\omega_{\text{AC}}-i\bm{m}\cdot\bm{\omega}_{\text{DC}})\hat{\tilde{\rho}}^\circ\left(\bm{\varphi};\lambda\right). \label{eq: Fourier-comp-DO} \end{align} In order to obtain the $(n,\bm{m})$ Fourier component of the steady state reduced QD operator as in \cref{eq: time-periodic-DO}, the residua of $\hat{\tilde{\rho}}^\circ(\bm{\varphi};\lambda)$ at its poles have to be extracted by taking a limit in \cref{eq: GME-Laplace}. That is, one multiplies both sides of \cref{eq: GME-Laplace} by $\lambda-in\omega_{\text{AC}}-i\bm{m}\cdot\bm{\omega}_{\text{DC}}$ and takes $\lambda\to in\omega_{\text{AC}}+i\bm{m}\cdot\bm{\omega}_{\text{DC}}$. The GME then reduces to an algebraic equation for the Fourier components of the steady state reduced QD operator as \begin{widetext} \begin{align} & 0 = (\mathcal{L}_{\T{QD}} -in\omega_{\text{AC}}-i\bm{m}\cdot\bm{\omega}_{\text{DC}}) \hat{\varrho}^\circ_{n,\bm{m}}(\bm{\varphi}) + \sum_{n',\bm{m}'}e^{i\bm{m}'\cdot\bm{\varphi}} \tilde{k}_{\T{T};n',\bm{m}'}(in\omega_{\text{AC}}+i\bm{m}\cdot\bm{\omega}_{\text{DC}}) \hat{\varrho}^\circ_{n-n',\bm{m}-\bm{m}'}(\bm{\varphi}). \label{eq: GME-intermediate} \end{align} \end{widetext} Solving the phase dependent part of \cref{eq: GME-intermediate} results in \begin{align} \hat{\varrho}^\circ_{n,\bm{m}}(\bm{\varphi})= e^{i\bm{m}\cdot\bm{\varphi}}F(\bm{\varphi})\hat{\varrho}_{n,\bm{m}}. 
\label{eq: Fourier-comp-DO-phase} \end{align} Here, the $F(\bm{\varphi})$ factor cannot be determined directly from \cref{eq: GME-intermediate} as its exact form depends on the initial preparation. By assuming the QD and the superconducting leads to be uncoupled at initial time $t=0$, this phase envelope is fixed to $F(\bm{\varphi})=1$. This is discussed in more detail in \cref{App: initial conditions}. Once the phase dependent part of the reduced QD operator is determined, we are left with solving for the $\hat{\varrho}_{n,\bm{m}}$. Substituting \cref{eq: Fourier-comp-DO-phase} in \cref{eq: GME-intermediate} results in \begin{widetext} \begin{align} 0 & =(\mathcal{L}_{\T{QD}} -in\omega_{\text{AC}}-i\bm{m}\cdot\bm{\omega}_{\text{DC}})\hat{\varrho}_{n,\bm{m}} +\sum_{n',\bm{m}'}\tilde{k}_{\T{T};n',\bm{m}'}\left(in\omega_{\text{AC}}+i\bm{m}\cdot\bm{\omega}_{\text{DC}}\right)\hat{\varrho}_{n-n',\bm{m}-\bm{m}'}. \label{eq: GME-final} \end{align} \end{widetext} This expression describes exactly the steady state limit of the GME, with the intricate integro-differential form of \cref{eq: 43} reduced to a set of coupled algebraic equations for the Fourier components $\hat{\varrho}_{n,\bm{m}}$. So far, this is a complete reformulation of the Liouville-von Neumann equation, \cref{eq: Liouvillian}, in the limit $t\to\infty$. In fact, if we know the structure of the kernel at arbitrary $\lambda$, this can be solved to yield the full quasiperiodic solution of the reduced QD operator. The same arguments leading to \cref{eq: GME-final} can be employed to simplify the integral form of the current given in \cref{eq: 49}. In general, the current in the steady state $I_{\T{L},\infty}$ will also be quasiperiodic, with the form \begin{equation} I_{\T{L},\infty}(t)=\sum_{n,\bm{m}}j_{\T{L},n,\bm{m}}e^{i(n\omega_{\T{AC}}+\bm{m}\cdot\bm{\omega}_{\T{DC}})t}. \label{eq: cur-qp} \end{equation} The Fourier components of the current are defined, following the same arguments outlined below \cref{eq: Fourier-comp-DO}, as \begin{equation} j_{\T{L};n,\bm{m}}=\lim_{\lambda\to in\omega_{\text{AC}}+i\bm{m}\cdot\bm{\omega}_{\text{DC}}}(\lambda-in\omega_{\text{AC}}-i\bm{m}\cdot\bm{\omega}_{\text{DC}})\tilde{I}_\T{L}\left(\lambda\right). \label{eq: Fourier-comp-current} \end{equation} They can then be related, employing the current kernel, to the Fourier components of the steady state reduced QD operator via \begin{align} &j_{\T{L},n,\bm{m}} = \sum_{n',\bm{m}'}\int_{\square}d\bm{\varphi} F\left(\bm{\varphi}\right)e^{i\bm{m}\cdot\bm{\varphi}} \nonumber\\ & \times \T{Tr}_\T{QD}\bigg\lbrace \tilde{k}_{\T{I,L},n',\bm{m}'}(in\omega_{\T{AC}}+i\bm{m}\cdot\bm{\omega}_\T{DC})\hat{\varrho}_{n-n',\bm{m}-\bm{m}'}\bigg\rbrace.\label{eq: 22} \end{align} The current will be, in general, quasiperiodic in both the Josephson frequency $\omega_\T{J}=(1,-1)\cdot\bm{\omega}_\T{DC}$ and $\omega_\T{AC}$. This is a result of the superselection rules discussed in \cref{App: Selection rules}. Under certain circumstances the motion will be strictly periodic (i.e. not quasiperiodic). This will clearly be the case whenever $\omega_\T{J} = \omega_\T{AC}$, but even if this equality is not exact, phase-locking forces $\omega_\T{J}$ to adjust dynamically to $\omega_\T{AC}$~\cite{Kautz1996}, resulting in the appearance of Shapiro steps. In the weak coupling limit that we will be considering next, the current is also periodic in $\omega_\text{AC}$, although for a different reason.
Namely, that the Fourier components in $\bm{\omega}_\text{DC}$ other than $\bm{m}=0$ will start appearing in the next perturbative order (barring resonances). Regardless, in the following we will focus exclusively on the DC component of the current, corresponding to $I_{\T{L},0,0}$. For the DC component, the current is independent of $F(\bm{\varphi})$ due to \cref{eq:trace-f}. It reads \begin{equation} j_{\text{L},0,0}=\text{Tr}_{\text{QD}}\bigg\lbrace\sum_{n',\bm{m}'}\tilde{k}_{\text{I,L};n',\bm{m}'}\left(0\right)\hat{\varrho}_{-n',-\bm{m}'}\bigg\rbrace. \end{equation} In the following we will focus exclusively on this quantity. \section{Sequential tunneling through a QD Josephson junction}\label{sec: Transport in the weak tunneling limit} After discussing the formalism in its general form, in this section we will restrict ourselves to the weak coupling limit. We will study first the DC case in \cref{subsec: DC case} before turning to the general situation of an AC-driven junction in \cref{subsec: I-V characteristics}. We start by giving an analytical form of the kernels in the weak coupling limit, where only the lowest order in the tunneling amplitudes $|t_l|$ is considered. As previously discussed, it is sufficient to deal with the tunneling kernel and reconstruct from it the current kernel in the end. We therefore consider \cref{eq: Tunneling Kernel}, which after inserting the explicit form of the tunneling Liouvillian given in \cref{eq: 12} and some minor manipulations reads \begin{widetext} \begin{align} \mathcal{K}'&^{(2)}_{\T{T}}(t,s)\oprho(s)=\T{Tr}_{\T{QP}}\bigg\lbrace \sum_{\substack{l,l',\bm{k},\bm{k}',\sigma,\sigma'\\p,p',\alpha,\alpha'}}\frac{-\alpha\alpha'pp't^{p}_{l}t^{p'}_{l'}}{(i\hbar)^2} \hat{d}^{\bar{p},\alpha}_{\sigma}\mathcal{G}'_0(t,s) \hat{d}^{\bar{p}',\alpha'}_{\sigma'}\bigg[ u^{\bar{p}}_{l,k}e^{\int_s^t ds' \frac{ip}{\hbar}(E_{l,k}+\mu_l(s'))}\hat{\gamma}^{p,\alpha}_{l,\bm{k},\sigma}+ v^{p}_{l,k}e^{\frac{2ip}{\hbar}\int_{t_0}^{t}dt'\mu_l\left(t'\right)}\nonumber\\ \times&\T{sgn}(\sigma)e^{\int_s^t ds' \frac{i\bar{p}}{\hbar}(E_{l,k}+\mu_l(s'))}\hat{S}_{l}^{p,\alpha}\hat{\gamma}^{\bar{p},\alpha}_{l,\bar{\bm{k}},\bar{\sigma}}\bigg]\times\bigg[ u^{\bar{p}'}_{l',k'}\hat{\gamma}^{p',\alpha'}_{l',\bm{k}',\sigma'}+\T{sgn}(\sigma') v^{p'}_{l',k'}e^{\frac{2ip'}{\hbar}\int_{t_0}^{s}dt'\mu_{l'}\left(t'\right)}\hat{S}_{l'}^{p',\alpha'}\hat{\gamma}^{\bar{p}' ,\alpha'}_{l',\bar{\bm{k}}',\bar{\sigma}'}\bigg] \oprho(s)\otimes\oprho_{\T{QP}}\bigg\rbrace, \label{eq:kernel-too-big} \end{align} \end{widetext} where we used \begin{align} &\hat{\gamma}^{p,\alpha}_{l,\bm{k},\sigma}\mathcal{G}'_0(t,s)\nonumber \\ &=\mathcal{G}_0(t,s)\exp(\int_s^t ds' \frac{ip}{\hbar}(E_{l,k}+\mu_l(s')))\hat{\gamma}^{p,\alpha}_{l,\bm{k},\sigma}. \label{eq: G0 comm} \end{align} Thanks to \cref{eq: G0 comm}, we can now perform the trace over the quasiparticles directly, yielding \begin{align} \text{Tr}_{\text{QP}}\left\{ \hat{\gamma}_{l,\boldsymbol{k},\sigma}^{p,\alpha}\hat{\gamma}_{l',\boldsymbol{k}',\sigma'}^{p',\alpha'}\hat{\rho}_{\text{QP}}\right\} & =\delta_{p\bar{p}'}\delta_{ll'}\delta_{\sigma\sigma'}\delta_{\boldsymbol{k},\boldsymbol{k}'}f^{p\alpha'}\left(E_{l,k}\right), \label{eq:trace-2nd-order} \end{align} From \cref{eq:kernel-too-big}, the terms after tracing can have either no Cooper pairs, one Cooper pair or two Cooper pairs from the same lead and opposite hermiticity. These two operators would compensate each other, but here they do not necessarily share the same superoperator index. 
The possible contributions are visualized in a diagrammatic form in \cref{fig: diag: kernel 2 diagrams}. \begin{figure} \centering \includegraphics{pictures/2Odiagrams.pdf} \caption{Diagrams contributing to the second order kernel. Besides quasiparticle vertices (dots), also combined vertices (crosses) containing both Cooper pair and quasiparticle creation or annihilation are possible.} \label{fig: diag: kernel 2 diagrams} \end{figure} Identifying the terms of the kernel according to \cref{eq: action of K,eq: simplified KI}, the situation simplifies considerably, as the superoperator index is effectively rendered irrelevant. Thus, we are able to sum up the outer two and the central two diagrams in \cref{fig: diag: kernel 2 diagrams}, leaving us with only two types of non-vanishing contributions to the tunneling kernel. These two contributions represent the normal and anomalous tunneling kernels. After converting the sums over momenta $\bm{k}$ to integrals over energies $E$ these kernels are given by \begin{widetext} \begin{align} \kappa'&^{(2)}_{\T{T}}\left(\bm{0};t,s\right)=\sum_{l,\sigma,p,\alpha,\alpha'}\frac{\alpha\alpha'|t_{l}|^2}{(i\hbar)^2}\hat{d}^{\bar{p},\alpha}_{\sigma}\mathcal{G}_\T{QD}(t,s)\hat{d}^{p,\alpha'}_{\sigma}\int_{-\infty}^\infty dE D_l(E) L_W(E) e^{\frac{i}{\hbar}\int_s^t ds' (E+p\mu_l(s'))}f^{\alpha'}(E),\label{eq: time domain kappa normal} \end{align} \begin{align} \kappa'^{(2)}_{\T{T}}\left(p\bm{u}_l;t,s\right)=&\sum_{\sigma,\alpha,\alpha'}\frac{p\alpha\alpha'|t_{l}|^2}{(i\hbar)^2} \T{sgn}(\sigma)e^{-ip\phi_l} \hat{d}^{\bar{p},\alpha}_{\sigma}\mathcal{G}_\T{QD}(t,s)\hat{d}^{\bar{p},\alpha'}_{\bar{\sigma}} e^{\frac{ip}{\hbar}\int_{0}^{s}dt'2\mu_l\left(t'\right)}\nonumber\\ \times& \int_{-\infty}^\infty dE \tilde{D}_l(E)\T{sgn}(E)e^{\frac{i}{\hbar}\int_s^t ds' (E+p\mu_l(s'))}f^{\alpha'}(E),\label{eq: time domain kappa anom} \end{align} \end{widetext} where we introduced ${\mathcal{G}'_\T{QD}(t,s)={\exp(\mathcal{L}_{\T{QD}}(t-s))}}$, ${f^q(E)={1/(1+e^{q\beta E})}}$, ${\phi_l={\arg(\Delta_l t_l^2)}+{\frac{2}{\hbar}\int_0^{t_0}ds\mu_l(s)}}$, with $D_{l}(E)$ and $\widetilde{D}_{l}(E)$ the normal and anomalous densities of states (DOS), respectively. In the wide band limit, the latter are given by \begin{align} D_{l}(E)= & D^{0}\T{Re}\bigg\lbrace \sqrt{\frac{E^{2}}{E^{2}-|\Delta_{l}|^{2}+i\gamma}}\bigg\rbrace, \label{eq:DOS-n}\\ \widetilde{D}_{l}(E)= & D^{0}\T{Re}\bigg\lbrace\sqrt{\frac{|\Delta_{l}|^{2}}{E^{2}-|\Delta_{l}|^{2}+i\gamma}}\bigg\rbrace. \label{eq:DOS-sc} \end{align} where $D^0$ is the DOS in the normal state at the Fermi level, and $\gamma$ is a Dynes parameter~\cite{Yeyati1997,Dynes1978} which accounts for a finite broadening of the peaks of the superconducting DOS at $E=\pm|\Delta_l|$. Note that the broadening is introduced here as a phenomenological parameter. See, e.g. Ref.~\cite{Yeyati1997}, where this is discussed in the case of a non-interacting dot. Moreover, we have introduced a Lorentzian $L_W(E)=W^{2}/(W^{2}+E^2)$ with bandwidth $W$ in \cref{eq: time domain kappa normal} in order to regularize the integral. In physical terms, it corrects the ultraviolet divergence due to the non-vanishing density of states at large energies in the wide band limit. This is unnecessary in \cref{eq: time domain kappa anom} as the anomalous DOS decays rapidly enough. The differences between \cref{eq: time domain kappa normal} and \cref{eq: time domain kappa anom} reflect the physical origin of these terms. 
The normal kernel corresponds to quasiparticle transport through the junction. In that sense, it is equivalent to a non-superconducting lead with a particular DOS, given by \cref{eq:DOS-n}. The anomalous kernel corresponds to the tunneling of quasiparticles together with Cooper pairs, including Andreev reflections. It is the source of the proximity effect. That is, to the appearance of superconducting correlations in the dot $\propto\hat{S}_l^\dagger\hat{d}_\downarrow \hat{d}_\uparrow,\hat{S}_l\hat{d}^\dagger_\uparrow \hat{d}^\dagger_\downarrow$. Upon performing the integrals over time in \cref{eq: time domain kappa normal,eq: time domain kappa anom}, we can use the Jacobi-Anger expansion introduced in \cref{eq: integral-mu}, to find \begin{widetext} \begin{align} \kappa'_{\T{T}}\left(\bm{0};t,s\right)&=\sum_{\substack{l,\sigma,p,\alpha,\alpha',n}}\frac{\alpha\alpha'|t_{l}|^2p^n}{i\hbar} \hat{d}^{\bar{p},\alpha}_{\sigma}Y^{\alpha'}_{l,n}(p\mu_l(0)-i\hbar\mathcal{L}_{\T{QD}},t-s)\hat{d}^{p,\alpha'}_{\sigma}e^{in\omega_{\T{AC}}s}, \\ \kappa'_{\T{T}}\left(p\bm{u}_l;t,s\right)&=\sum_{\substack{\sigma,\alpha,\alpha',n}}\frac{\alpha\alpha'|t_{l}|^2p^{n+1}}{i\hbar} \T{sgn}(\sigma)e^{-ip\phi'_l}\hat{d}^{\bar{p},\alpha}_{\sigma}Z^{\alpha'}_{l,n}(p\mu_l(0)-i\hbar\mathcal{L}_{\T{QD}},t-s)\hat{d}^{\bar{p},\alpha'}_{\bar{\sigma}} e^{in\omega_{\T{AC}}s}e^{ip\omega_{\T{DC},l}s}, \label{eq: long expressions} \end{align} with the associated integrals \begin{align} Y^q_{l,n}(\nu,\tau)&=\frac{(-1)^n}{i\hbar} J_n\bigg[a_l\epsilon_{\T{AC}}\sin\bigg(\frac{\omega_{\T{AC}}}{2}\tau\bigg)\bigg]\int_{-\infty}^\infty dE D_l(E) L_W(E) f^{q}(E)e^{i\frac{E+\nu+(n\hbar\omega_{\T{AC}}/2)}{\hbar}\tau},\label{lap1} \\ Z^q_{l,n}(\nu,\tau)&=\frac{(i)^n}{i\hbar} J_n\bigg[a_l\epsilon_{\T{AC}}\cos\bigg(\frac{\omega_{\T{AC}}}{2}\tau\bigg)\bigg]\int_{-\infty}^\infty dE \tilde{D}_l(E)\T{sgn}(E)f^{q}(E)e^{i\frac{E+\nu+(n\hbar\omega_{\T{AC}}/2)}{\hbar}\tau}.\label{lap2} \end{align} \end{widetext} In phase space the tunneling kernel takes the form \begin{align} \kappa^{\circ(2)}(\bm{\varphi};t,s)&=\kappa'^{(2)}_{\T{T}}\left(\bm{0};t,s\right)+\sum_{p,l}e^{ip\varphi_l}\kappa'^{(2)}_{\T{T}}\left(p\bm{u}_l;t,s\right), \label{eq: seq tun kernel in phase} \end{align} which is an admixture of the normal and anomalous contributions. From this form one can directly identify the expansion coefficients $k_{\T{T},n,\bm{m}}(t-s)$ defined in \cref{eq: Fourierdecomposed K}. Instead of giving them explicitly, we first note that all dependence on the time difference $t-s$ is contained in the functions in \cref{lap1,lap2}. Therefore, we conveniently perform the Laplace transformation on them and introduce \begin{align} \int_0^\infty d\tau e^{-\lambda\tau}Y^q_{l,n}(\nu,\tau)=\tilde{Y}^q_{l,n}(\nu+i\hbar\lambda)\label{eq: normal integral Laplace}, \end{align} where $\tilde{Y}^{q}_{l,n}(\nu)=\int_0^\infty d\tau Y^{q}_{l,n}(\nu,\tau)$. Analogously, we call $\widetilde{Z}$ the Laplace transform of the function $Z$. 
In the resulting final form of the components of the kernel we can again identify the contribution from the normal ($\bm{m}=0$) sequential tunneling kernel \begin{widetext} \begin{align} \widetilde{k}_{\T{T},n,0}(\lambda) = & \sum_{ l,\sigma,p,\alpha,\alpha'}\frac{p^{n}}{i\hbar}|t_{l}|^{2}\alpha\alpha' \hat{d}^{\bar{p},\alpha}_{\sigma} \widetilde{Y}^{\alpha'}_{l,n}(i\hbar(\lambda-\mathcal{L}_{\T{QD}})+p\mu_{l}(0))\hat{d}^{p,\alpha'}_{\sigma}, \label{eq: 74} \end{align} and contributions from the anomalous ($\bm{m} = \pm \bm{u}_l$) sequential tunneling kernel given by \begin{align} \widetilde{k}_{\T{T},n,p\bm{u}_l}(\lambda) = & \sum_{\sigma,\alpha,\alpha'}\frac{p^{n+1}}{i\hbar}|t_{l}|^{2}\T{sgn}(\sigma)\alpha\alpha' e^{-ip\phi_{l}}e^{-ipa_l\frac{\epsilon_{\T{AC}}}{2}} \hat{d}^{\bar{p},\alpha}_{\sigma} \widetilde{Z}^{\alpha'}_{l,n}(i\hbar(\lambda-\mathcal{L}_{\T{QD}})+p\mu_{l}(0))\hat{d}^{\bar{p},\alpha'}_{\sigma}. \label{eq: 75} \end{align} \end{widetext} The strength of the contribution from the kernels can be estimated by introducing the rate of tunneling of normal electrons in and out of lead $l$ \begin{equation} \Gamma_l=\frac{2\pi}{\hbar}|t_l|^2 D^0. \end{equation} Near coherence peaks the density of states increases by a factor $|\Delta_l|/\sqrt{2\gamma}$. The weak coupling limit is justified provided that $\hbar\Gamma_l|\Delta_l|/\sqrt{2\gamma}$ is the smallest energy scale in the system (for $l=\T{L},\T{R}$). Comparing \cref{eq: 4,eq: 37} we find that, due to their definitions in \cref{eq: Tunneling Kernel,eq: Currentkernel2}, the expressions for the current kernels follow from the tunneling kernels in \cref{eq: 74,eq: 75} by adding a term $ep$, fixing $\alpha=+$ and $l=\T{L}$. \subsection{DC case \label{subsec: DC case}} We start by discussing the DC case, corresponding to $V_{\T{AC}}=0$. This limit has been investigated in previous works, however so far either without the anomalous contributions \cite{Pfaller2013,Gaass2014} or in the limit $|\Delta_l|\rightarrow\infty$ \cite{Governale2008,Hiltscher2012}, in which the quasiparticle degrees of freedom can be disregarded. Here, we will consider the general situation of DC transport through the dot in the weak coupling limit. As we will see, outside of certain parameter regions where the Cooper pairs induce resonant transitions between the $\ket{0}$ and $\ket{2}$ states of the quantum dot \cite{Kamp2021}, the main effect of the superconducting correlations appearing due to the anomalous kernel (i.e. the proximity effect) is to renormalize the tunneling rates. First, let us consider which terms in the quasiperiodic expansion of the reduced QD operator, \cref{eq: time-periodic-DO}, are relevant. Clearly, in the DC case only the terms with $n=0$ have to be considered. Moreover, only certain contributions in the expansion in $\bm{\omega}_\text{DC}$ are required in the weak tunneling limit. For an arbitrary term with $\bm{m}\neq 0$ we have, from \cref{eq: GME-final} \begin{align} \hat{\varrho}_{0,\bm{m}}= & \frac{1}{i\bm{m}\cdot\bm{\omega}_\T{DC}-\mathcal{L}_{\T{QD}}-\widetilde{k}_{\T{T},0,0}(i\bm{m}\cdot\bm{\omega}_\T{DC})} \nonumber \\ \times & \sum_{p,l}\widetilde{k}_{\T{T},0,\bar{p}\bm{u}_l} (i\bm{m}\cdot\bm{\omega}_\T{DC})\hat{\varrho}_{0,\bm{m} +p\bm{u}_l}, \label{eq: 104} \end{align} since, as discussed above, only terms in the kernel which vary $\bm{m}$ by $\pm \bm{u}_l$ are allowed in the weak coupling limit. 
In the spirit of the secular approximation \cite{Koller2010}, we consider first the situation outside of the resonance \begin{equation} |\bm{m}\cdot\bm{\omega}_\T{DC}+\omega_\T{QD}| \gtrsim \Gamma_l,\,\,\,\,\,l=\T{L},\T{R}, \label{eq:cond-res} \end{equation} where $\omega_\T{QD}$ is a Rabi frequency associated with the action of $\mathcal{L}_{\T{QD}}$ in the denominator of \cref{eq: 104}; in the absence of a Zeeman splitting, $\omega_\T{QD}=0,\pm(2eV_\T{G}+U)$. As discussed in \cref{App: Selection rules}, the $\bm{m}=0$ mode describes the population of the QD, as in the case of normal leads, while the terms $\bm{m}=\pm\bm{u}_l$ will contain coherences between the states $\ket{0}$ and $\ket{2}$, with the Cooper pairs fixing particle conservation. Due to conservation of probability, $\text{Tr}_\text{QD}\lbrace\hat{\varrho}_{0,0}\rbrace=1$. In other words, the terms in $\hat{\varrho}_{0,0}$ are of order $1$ in an expansion in $\Gamma_l$. Since $\hat{\varrho}_{0,0}$ is the only term whose size is fixed by the trace property, we see that \cref{eq: 104} imposes a hierarchy. For any two modes $\hat{\rho}_{0,\bm{m}'}$ and $\hat{\rho}_{0,\bm{m}}$ with $\sum_l|m'_l|=\sum_l|m_l|+1$, $\hat{\rho}_{0,\bm{m}'}$ will be smaller than $\hat{\rho}_{0,\bm{m}}$ by a factor $\sim\Gamma_l/|\bm{m}\cdot\bm{\omega}_\T{DC}+\omega_\T{QD}|$. Then, in order to calculate the reduced QD operator up to order $\Gamma_l$, we need only the contributions $\bm{m} = 0,\pm \bm{u}_l$, provided that \cref{eq:cond-res} is satisfied. However, for the current to order $\Gamma_l$ we need only the terms $\bm{m}=0$ of the reduced QD operator since the current kernel is already of order $\Gamma_l$. The opposite situation corresponds to the resonance where \cref{eq:cond-res} is not satisfied. Then, the hierarchy of Fourier components ``skips'' an order and all the terms are one order in $\Gamma_l$ less than outside of the resonances (except for $\bm{m}=\bm{0}$, which is still of order 1). This means, in particular, that the contributions $\bm{m}=\pm \bm{u}_l$ (corresponding to coherences with one Cooper pair from either lead) are now of the same order as the populations ($\bm{m}=\bm{0}$). Moreover, the next terms in the hierarchy given by $\bm{m}=\pm(-1,1)$ are now of order $\Gamma_l$ (instead of $\Gamma_l^2$ as before) and necessary to calculate the reduced QD operator up to order $\Gamma_l$. Therefore, the current up to order $\Gamma_l$ at the resonances depends also on the contributions $\bm{m}=\pm\bm{u}_l$. Due to this, considering the full second order kernel will give contributions to the current which are not necessarily linear in $\Gamma$. Nonetheless, this correction is negligible for small enough coupling. In the following calculations, we keep all rates included in the normal and anomalous kernels as indicated above and calculate the Cooper pair coherences explicitly. With this clarified, we turn now to the calculation of the current. If the QD is described by the SIAM of \cref{eq: 1}, the current is given, in general, by the expression \begin{widetext} \begin{align} I_{\T{L}}=-e\bigg\lbrace\sum_{\sigma} \bigg[\Gamma_{\T{L}}^{\sigma,0}P_{0}+(\Gamma_{\T{L}}^{2,\sigma}-\Gamma_{\T{L}}^{0,\sigma})P_{\sigma}-\Gamma_{\T{L}}^{\sigma,2}P_{2}\bigg] +2\Gamma_{\T{L}}^{2,0}P_{0}-2\Gamma_{\T{L}}^{0,2}P_{2}\bigg\rbrace, \label{eq: 29} \end{align} \end{widetext} where $P_\chi$ denotes the occupation of state $\chi$ and $\Gamma_l^{\chi',\chi}$ are the tunneling rates from population $\chi$ to population $\chi'$, mediated by lead $l$.
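The structure of \cref{eq: 29} can be made concrete with a short numerical sketch (Python, purely illustrative). It solves the stationary rate equation for the populations $P_\chi$ of the four dot states and then evaluates \cref{eq: 29}; the rate dictionaries are placeholders to be filled with the matrix elements of the kernels in \cref{eq: 74,eq: 75} (or with the renormalized rates introduced below), and the state labels and data layout are assumptions of the sketch.
\begin{verbatim}
import numpy as np

# Dot states ordered as (0, up, down, 2); rates[(chi_p, chi)] is the total
# rate chi -> chi_p (summed over leads), to be supplied by the user.
STATES = ("0", "up", "down", "2")

def stationary_populations(rates):
    n = len(STATES)
    M = np.zeros((n, n))
    for i, chi_p in enumerate(STATES):
        for j, chi in enumerate(STATES):
            if i == j:
                continue
            g = rates.get((chi_p, chi), 0.0)
            M[i, j] += g      # gain of chi_p from chi
            M[j, j] -= g      # corresponding loss of chi
    # Solve M P = 0 together with the normalization sum_chi P_chi = 1.
    A = np.vstack([M, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(STATES, P))

def current_left(gL, P, e=1.0):
    # Eq. (29), with gL[(chi_p, chi)] containing the left-lead rates only.
    I = 0.0
    for s in ("up", "down"):
        I += (gL.get((s, "0"), 0.0) * P["0"]
              + (gL.get(("2", s), 0.0) - gL.get(("0", s), 0.0)) * P[s]
              - gL.get((s, "2"), 0.0) * P["2"])
    I += 2 * gL.get(("2", "0"), 0.0) * P["0"] - 2 * gL.get(("0", "2"), 0.0) * P["2"]
    return -e * I
\end{verbatim}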
\Cref{eq: 29} has a natural interpretation as the difference of rates by which charges enter and exit the left lead. In the normal case, these rates can be related to elements of the tunneling kernel by $\Gamma_l^{\chi',\chi}=(\widetilde{k}_{\T{T},l,0,0}(0^+))^{\chi',\chi}_{\chi',\chi}$, where the $l$ indicates a restriction to the terms stemming from the tunneling to lead $l$. Here we denoted the matrix elements of the kernel by $(\widetilde{k}_{\T{T},l,n,\bm{m}}(\lambda))^{\chi,\chi'}_{\chi'',\chi'''}=\bra{\chi}[\widetilde{k}_{\T{T},l,n,\bm{m}}(\lambda)\ketbra{\chi''}{\chi'''}]\ket{\chi'}.$ In the superconducting case, \cref{eq: 29} is still valid. We may write the components $\hat{\varrho}_{0,\pm\bm{u}_l} $ in terms of the $\hat{\varrho}_{0,0}$ as in \cref{eq: 104}. With this, the current can be expressed as in \cref{eq: 29}, substituting the $\Gamma_l^{\chi',\chi}$ by the renormalized rates \begin{widetext} \begin{align} \Gamma_{\T{sec},l}^{\chi',\chi}= & (\widetilde{k}_{\T{T},l,0,0}(0^{+}))^{\chi',\chi}_{\chi',\chi} - 2\T{Re}\bigg[\frac{i\hbar}{2eV_\T{G}+U-\hbar\bm{u}_l\cdot\bm{\omega}_{\T{DC},l}+\Sigma_{l}}(\widetilde{k}_{\T{T},0,\bm{u}_l}(0))^{\chi',2}_{\chi',0}(\widetilde{k}_{\T{T},0,-\bm{u}_l}(-i\bm{u}_l \cdot\bm{\omega}_{\T{DC}}))^{2,\chi}_{0,\chi}\bigg], \label{eq: 26} \end{align} \end{widetext} where we have introduced the self-energy \begin{align} \Sigma_{l}= i\hbar(\widetilde{k}_{\T{T},0,0}(-i\bm{u}_l\cdot\bm{\omega}_{\T{DC}}))^{2,2}_{0,0}, \label{eq: 27} \end{align} and used symmetries to simplify the correction term (see \cref{App: Selection rules}). This renormalization accounts for the back-action of the $\hat{\varrho}_{0,\pm \bm{u}_l} $ terms into $\hat{\varrho}_{0,0}$. The second term in \cref{eq: 26} is $\propto|t_l|^4\sim\Gamma^2_l$ provided that \cref{eq:cond-res} is satisfied, but is relevant inside the resonances. Similarly, the $\hat{\varrho}_{0,\pm \bm{u}_l} $ yield an additional pair tunneling process (i.e. $\Gamma_{l}^{2,0}$ and $\Gamma_{l}^{0,2}$) which is, again, of next order in the tunneling amplitudes except in the resonances. Hence, we see that the main effect of the coherences in the DC current is to renormalize the normal tunneling rates and introduce a new Cooper-pair enabled pair tunneling process. \begin{figure} \includegraphics[width=\columnwidth]{pictures/figDC1.pdf} \caption{Transport in the DC case. (a) Current near the 1-0 charge degeneracy point for the DC case ($V_\T{AC}=0$), showing areas of Coulomb blockade (clear colors) and of current flow in both directions. (b) Cascade plot for the same parameters and different values of the DC voltage amplitude, from $V_\T{DC}=0.08$ (blue) to $2\,\T{mV}$ (green), showing clearly current peaks due to thermally excited quasiparticles. (c) Diagrams summarizing the three common transport situations in this setup: (1) Normal transport, where $eV_\T{DC}$ is larger than $2\Delta$ and the chemical potential of the QD is aligned so that a current can flow. (2) Transport blockade inside the gap, where the state of the QD lies inside the superconducting gap. (3) Thermal transport, where transport is mediated by thermally excited quasiparticles above the gap. \textit{Parameters}: $U=15\,\T{meV}$, $T=1.2\,\T{K}$, $\Delta_{\T{L}}=\Delta_{\T{R}}=0.32\,\T{meV}$, $\gamma=10\,(\mu\T{eV})^2$ and $2\pi D^{0}|t_{\T{L}}|^{2}=2\pi D^{0}|t_{\T{R}}|^{2}=93\,\T{neV}$, which results in $2\pi D^{0}|t_{\T{L}}|^{2}\beta=0.9$.
\label{fig:DC-case}} \end{figure} In \cref{fig:DC-case}~(a), we have represented the current as a function of the gate voltage $V_\T{G}$ and the DC component of the bias voltage $V_\T{DC}$ (the stability diagram of the QD) near $eV_\T{G}=0$, where the empty and the singly occupied states of the dot are degenerate (i.e. the 1-0 charge degeneracy point). We assume in the following that the two gaps are identical $\Delta_l=\Delta$, unless noted. For a QD-based junction in the weak coupling limit, the current flows provided that, at the chemical potential of the dot, there are occupied states in one lead (the source) and empty states in the other (the drain). In the absence of thermal fluctuations, this can only occur when the magnitude of the DC bias $|V_\T{DC}|$ is larger than $2\Delta$. This possibility is exemplified by point~(1) in \cref{fig:DC-case}~(a). A diagram illustrating the level alignment at this point is represented in \cref{fig:DC-case}~(c.1). At non-zero temperature, transport can also occur by transferring thermally excited quasiparticles that occupy states above the gap~\cite{Pfaller2013,Ratz2014}. At the low temperatures considered here, this subgap thermal current is small compared to the contribution from quasiparticles below the gap, and is hard to observe in the stability diagram of \cref{fig:DC-case}~(a). For that reason, we have also represented in \cref{fig:DC-case}~(b) a cascade plot of the current for different values of $V_\T{DC}=0.08,\ldots,2\,\T{mV}$ capped at a relatively small value of the current, showing clearly the appearance of a set of peaks (marked as (3) both here and in \cref{fig:DC-case}~(a)). An asymmetry due to the different degeneracies of the empty and singly occupied states of the quantum dot is appreciable. The energy level diagram corresponding to this process is represented in \cref{fig:DC-case}~(c.3). Between these two points, there is a region where current does not flow, of size $2\Delta$. This situation is represented schematically in \cref{fig:DC-case}~(c.2). Regarding the limitations of \cref{eq:cond-res}, we note that for the small values of $\Gamma_l$ considered here, the results presented in \cref{fig:DC-case} are valid in general for all regions of interest in the stability diagram. Around the resonances where \cref{eq:cond-res} is not satisfied, the reduced QD operator up to order 1 is calculated exactly, while outside of it we also include the corrections of order $\Gamma_l$ due to the Cooper pair coherences. The only issues arise when these corrections are larger than the sequential tunneling current (i.e. the current $\propto\Gamma_l$), which can occur in the region of Coulomb blockade. However, in this case the current is already quite small and thus the effect of the coherences can be at most a change of sign of the current, which is clearly an artifact. The current in these regions can be calculated considering higher order terms, which fall beyond the scope of this work. In particular, it can be shown~\cite{PicoCortes2022} that the next order contribution (containing terms $\propto\Gamma^2_l$, in what is called the cotunneling approximation) accounts correctly for the reduced QD operator up to order $\Gamma_l^2$ for all regions of the stability diagram where \cref{eq:cond-res} is not simultaneously broken for two or more values of $\bm{m}\cdot\bm{\omega}_\T{DC}$ or of $\omega_\T{QD}$. Before continuing, we remark that the GME can be solved analytically in this case.
The normal and anomalous integrals as defined in \cref{eq: normal integral Laplace} can be expressed for $V_\T{AC}=0$ in terms of a sum over the Matsubara frequencies and are given in \cref{App: sequential tunneling integrals}. Moreover, the populations $P_\chi$ can be obtained explicitly in terms of the renormalized rates of \cref{eq: 26}. The corresponding expressions for the populations are given in \cref{App: Analytic solution for the populations}. \subsection{AC Case\label{subsec: I-V characteristics}} We turn now to the general situation of $V_\T{AC}\neq0$. While for the $\bm{m}$ modes, the sequential tunneling kernel could only change $\bm{m}$ by at most $\pm \bm{u}_l$, the kernel connects all $\hat{\varrho}_{n,\bm{m}}$ in \cref{eq: GME-final}, regardless of $n$, which makes the problem difficult to solve. Therefore, we want to again restrict the range $n$ as we did in \cref{subsec: DC case} for $\bm{m}$. Let us consider first the extension of \cref{eq:cond-res} to non-zero AC voltages, namely \begin{equation} |\bm{m}\cdot\bm{\omega}_\T{DC}+n\omega_\T{AC}+\omega_\T{QD}| \gtrsim \Gamma_l,\,\,\,\,\,l=\T{L},\T{R}, \label{eq:cond-res-ac} \end{equation} which amounts to staying sufficiently far from any photon assisted resonance between the unoccupied and the doubly occupied states (mediated by Cooper pairs). Studying the DC current, we can immediately make the observation that, given \cref{eq:cond-res-ac}, any contribution coming from $\hat{\varrho}_{n,\bm{m}}$ with both $n,\bm{m}\neq 0$ will be at least $\propto|t_l|^4\sim\Gamma^2_l$ and can therefore be neglected in the sequential tunneling approximation. Combined with the discussion in \cref{subsec: DC case}, this enables us to focus on $\bm{m}=0,n\in\mathbb{Z}$. For these indices, $\omega_\T{QD}=0$ due to the selection rules discussed in \cref{App: Selection rules}. Following the treatment in the DC case, we write \begin{align} \hat{\varrho}_{n,0}= & \frac{1}{in\omega_\T{AC}-\widetilde{k}_{\T{T},0,0}(in\omega_\T{AC})} \sum_{n'}\widetilde{k}_{\T{T},n',0}(in\omega_\T{AC})\hat{\varrho}_{n-n',0}. \label{eq: hierarchy AC} \end{align} \Cref{eq: hierarchy AC} in the region of validity of \cref{eq:cond-res-ac} again describes a hierarchy of contributions to the reduced QD operator, where any term $\hat{\rho}_{\pm|n|,0}$ will be smaller than $\hat{\rho}_{\pm|n|\mp1,0}$ by a factor $\Gamma_l/\omega_\T{AC}$. By the same arguments employed in \cref{subsec: DC case}, this enables us to neglect all $n\neq0$ contributions. This is the \textit{high frequency approximation} on which we will focus in the rest of the text. Note that this approximation requires only that the photon energy is large compared to $\Gamma_l$, not to all other energy scales of the problem. For the remaining non-vanishing contribution, $n=0$, we can expand \begin{equation} \begin{aligned} J_{0}\big[a_l\epsilon_\text{AC}\sin\big(\omega_{\T{AC}}t/2\big)\big]=\sum_{k=-\infty}^{\infty}J_{k}^{2}\big(a_l\epsilon_\text{AC}/2\big)e^{ik\omega_{\T{AC}}t}. \end{aligned} \label{eq: 92} \end{equation} Using this, we find that the integrals in \cref{eq: normal integral Laplace} are given by \begin{equation} \begin{aligned} \widetilde{Y}^{q}_{l,n}(\nu)\rightarrow\delta_{n,0}\sum_{k=-\infty}^{\infty}J_{k}^{2}\big(a_l\epsilon_\text{AC}/2\big)\widetilde{Y}^{q}_{l,\T{DC}}(\nu+k\hbar\omega_{\T{AC}}), \end{aligned} \label{eq: 93} \end{equation} which is in agreement with previous works \cite{Whan1996}. 
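As a minimal illustration of \cref{eq: 93}, the following sketch (Python, not part of the derivation) assembles a photon assisted rate from a user-supplied DC-limit integral, for instance the Matsubara-sum expression of \cref{App: sequential tunneling integrals}; the function name, the placeholder arguments and the truncation $k_\text{max}$ are assumptions of the sketch.
\begin{verbatim}
import numpy as np
from scipy.special import jv

def photon_assisted_rate(Y_dc, nu, eps_ac, hbar_omega_ac, a_l=1.0, k_max=20):
    # n = 0 component of Eq. (93): the DC-limit integral Y_dc (user supplied)
    # is sampled at the side-band energies nu + k*hbar*omega_AC and weighted
    # by J_k^2(a_l*eps_ac/2); k_max is chosen so that higher-order Bessel
    # functions are negligible.
    ks = np.arange(-k_max, k_max + 1)
    weights = jv(ks, a_l * eps_ac / 2.0) ** 2
    return sum(w * Y_dc(nu + k * hbar_omega_ac) for k, w in zip(ks, weights))
\end{verbatim}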
In \cref{eq: 93}, each term corresponds to a photon assisted rate weighted by a factor $J_{k}^{2}\big(\epsilon_\text{AC}/2\big)$ representing the process associated with the absorption or emission of $k$ photons. For fixed $\epsilon_\text{AC}$, since the higher order Bessel functions decay rapidly with $|k|$, it is always possible to truncate the sums in \cref{eq: 93} to $|k|<k_\text{max}$. \Cref{eq: 93} hints at a connection of the resulting expressions for high frequency to the well-known Tien-Gordon theory of photon assisted sidebands in tunneling~\cite{Tien1963}. A naive application of the Tien-Gordon model suggests instead the replacement \begin{equation} I(V_{\T{DC}},V_\T{g})\rightarrow \sum_{k=-\infty}^{\infty}J_{k}^{2}\big(\epsilon_\text{AC}/2\big)I(V_{\T{DC}}-k\hbar\omega_{\T{AC}}/e,V_\T{g}), \label{eq: 94} \end{equation} upon adding the AC drive. However, the change in the rates \cref{eq: 93} not only affects the current, but also the steady state solution, via \cref{eq: 74,eq: 75}. As such, this naive version of the Tien-Gordon model yields qualitatively different results from the correct expression. See \cref{App: Comparison with Tien-Gordon} for details. We further remark that this series describes photon assisted processes and not the formation of Shapiro steps, which would yield contributions of the form~\cite{Hamilton1970} \begin{equation} I_\T{c}(0,V_\T{g}) \sum_{k=-\infty}^{\infty}|J_{k}\big(a_l\epsilon_\text{AC}\big)|\delta(V_{\T{DC}}-k\hbar\omega_{\T{AC}}/e), \label{eq:shapiro} \end{equation} according to the semiclassical RCSJ model~\cite{Kautz1996}. We expect these contributions to arise from higher order tunneling processes and beyond the conditions established in \cref{eq:cond-res-ac}. \begin{figure} \includegraphics[width=\columnwidth]{pictures/figAC1.pdf} \caption{Transport in the AC case. (a) Current near the 1-0 charge degeneracy point for the AC case with $\epsilon_\text{AC}=b_{0,1}$, where $b_{m,n}$ is the $n$th zero of the $m$th Bessel function of the first kind. New areas of current flow appear here as compared to \protect\cref{fig:DC-case} due to the possibility of photon assisted transport. (b) Diagrams exemplifying several transport situations in this setup: (1) Sideband transport, where current flows even if the chemical potential of the dot lies within the gap of any of the leads. (2) Subgap transport, where current flows despite $V_\T{DC}<2\Delta$. (3) Current inversion, where the backward rates are larger than the forward rates, and the net current flows from the lead at lower to the one at higher (average) chemical potential (i.e. from drain to source). (c) Current as a function of the DC bias $V_\T{DC}$ for $V_\T{G}=0$, $\epsilon_\text{AC}=b_{0,1}$ (top) and $\epsilon_\text{AC}=b_{0,1}/2$ (bottom), corresponding to the cases where two and one photon assisted processes show current inversion, respectively. The color filling indicates the sign of the current (blue for negative, red for positive sign). \textit{Parameters}: Same as in \protect\cref{fig:DC-case}. The AC energy $\hbar\omega_{\T{AC}}$ is set to $0.5\,\T{meV}$ and the considered range of $n=-20,\ldots,20$ with $m=0,\pm1$ is in accordance with the discussion in \protect\cref{subsec: DC case}. \label{fig:AC-case}} \end{figure} We note that, in general, the high frequency approximation breaks down for a bichromatic drive if the \textit{difference} of the frequencies becomes comparable to the inverse time scale of the dynamics of the system \cite{Ho1983,GomezLeon2020}.
With $\omega_{\T{DC}}$ depending on the position in the stability diagram, this condition of non-degeneracy translates into \cref{eq:cond-res-ac}, again highlighting the bichromatic nature of the junction due to the presence of Cooper pairs. The respective integrals as defined according to \cref{eq: normal integral Laplace} are solved analytically in \cref{App: sequential tunneling integrals}. Another situation where \cref{eq: GME-final} simplifies considerably is given by the linear response in the strength of the AC drive. This case is briefly discussed in \cref{App: Linear response}. In \cref{fig:AC-case}~(a) we have represented the current as a function of the gate voltage $V_\T{G}$ and the DC component of the bias voltage $V_\T{DC}$ for an AC bias with amplitude $eV_\T{AC}=b_{0,1}\hbar\omega_\T{AC}$, where $b_{m,n}$ is the $n$th zero of the $m$th Bessel function of the first kind, again near the 1-0 degeneracy point. The resulting stability diagram exhibits features reminiscent of the DC case of \cref{fig:DC-case} (parameters are otherwise the same) but replicated and displaced by integers of $\hbar\omega_\T{AC}/e$. These replicas arise due to the photon assisted rates in \cref{eq: 93}. Their non-trivial nature is expected, since the rates enter in a decisively non-linear manner in the GME. (See, for instance, the analytic result of \cref{App: Analytic solution for the populations}, which corresponds to the much simpler DC case.) We have indicated in \cref{fig:AC-case}~(a) exemplary points which reflect the effect of the ac bias. Point~(1) corresponds to a situation in which the DC voltage is larger than $2\Delta$ but the chemical potential of the dot is not aligned as to lead to current flow. Nonetheless, photon assisted transitions still allow for a non-zero current. This is represented schematically in \cref{fig:AC-case}~(b.1). Here, dashed lines separated by $\pm\hbar\omega_\T{AC}$ from the chemical potential of the dot, represented by a full line, illustrate a tunneling process accompanied by the absorption/emission of a photon. Note, however, that this representation is only for illustrative purposes, since the AC voltage is applied to the leads and not to the QD. On the other hand, point~(2) corresponds to subgap transport, in which the DC bias voltage is not large enough to overcome the superconducting gap, but a current still flows due to AC-induced sidebands, as can be seen in the diagram of \cref{fig:AC-case}~(b.2). The superconducting case differs from the normal conducting case by also containing regions of current inversion, where the current flows in the opposite direction of the DC bias. This occurs, for instance, at the point labeled (3) in \cref{fig:AC-case}~(a). This effect is known to appear as a result of a non-flat DOS \cite{Kostur2008}. For point~(3), transport without photon absorption nor emission is suppressed since the chemical potential of the dot lies in the gap of both leads, while photon assisted processes are allowed. In this particular configuration, the backward photon assisted rate transfers charges from the peak in the DOS of the right lead, while the forward rate transfers charges from the flat region of the DOS of the left lead (as represented schematically in \cref{fig:AC-case}~(b.3)). As such, the backward rate is larger than the forward one (by a factor $\sim\Delta/\sqrt{2\gamma}$, at most), resulting in a net current flow against the applied DC-bias. 
\Cref{fig:AC-case}~(c) showcases this at two values of the AC bias amplitude, $\epsilon_\text{AC}=b_{0,1}$ (top) and $\epsilon_\text{AC}=b_{0,1}/2$ (bottom). For the latter, current inversion occurs only near $eV_\T{DC}=\pm\Delta$, following the same process as described above. For the former, a further zone of current inversion occurs near $eV_\T{DC}=\pm(2\hbar\omega_\T{AC}+\Delta)$ well inside the regions of current flow in the DC case. \begin{figure} \includegraphics[width=\columnwidth]{pictures/figLZS.pdf} \caption{(a) Differential conductance as a function of the DC and AC amplitudes of the bias voltage, $V_\T{DC}$ and $V_\T{AC}$, respectively, for $V_\T{G}=0$. The fan-like pattern reflects the appearance of multiple sidebands as $V_\T{AC}$ increases. (b) Cuts of Fig.~(a) for $V_\T{AC}=0$ (blue), 0.125 (green) and $0.25\,\T{mV}$ (light red). (c) Full lines: Cuts of Fig.~(a) for values of $V_\T{DC}$ corresponding to the best matches for the resonances with $n=0$--$3$ photons (blue, light green, green and light red, respectively). Dashed lines: squared Bessel functions $J_n^2(\epsilon_\text{AC}/2)$ for the same values of $n$. Note the close match for most of the parameter range. \textit{Parameters}: Same as in \protect\cref{fig:AC-case} but with $\hbar\omega_{\T{AC}}=25\,\mu\T{eV}$. \label{fig:AC-fan}} \end{figure} The asymmetry between the left and right sides of the degeneracy point, which can be easily observed in \cref{fig:AC-case}~(a), was already mentioned in the discussion of \cref{fig:DC-case}. It arises from the spin-degeneracy of the singly occupied states. Note that said asymmetry cannot be understood within the Tien-Gordon theory for the current \cite{Tien1963}, but is a consequence of the expression for the rates in \cref{eq: 93}. Overall particle-hole symmetry is conserved as the stability diagram at the 1-2 degeneracy point (i.e. the point where $\ket{\sigma}$ and $\ket{2}$ are degenerate at $eV_\T{G} = U$) is a mirror copy of this one. \begin{figure*} \includegraphics[width=1.74\columnwidth]{pictures/fig2C.pdf} \caption{Comparison between the normal and superconducting cases. (a) Current near the 1-0 charge degeneracy point for the case of normal conducting leads with $\epsilon_\text{AC}=b_{0,1}$ and $\hbar\omega_\T{AC}=0.32\,\T{meV}$. (b) Same, for the superconducting case, with $\hbar\omega_\T{AC}=\Delta$. (c) Cut for $V_\T{G}=0.15\,\T{mV}$, along the arrows marked in (a) and~(b), showing current inversion. We have further represented the case $\hbar\omega_\T{AC}=2\Delta$ for the same cut, showing an even larger signature of inversion. (d) Sign of the current for the same parameters as Fig.~(b). \textit{Parameters}: Same as in \protect\cref{fig:AC-case}.} \label{fig:2Col} \end{figure*} \Cref{fig:AC-fan} shows the emergence of the photon assisted sidebands as a function of $V_{\T{AC}}$ for $\hbar\omega_{\T{AC}}=25\,\mu\T{eV}$. The resulting fan-like pattern has a spacing of $2\hbar\omega_{\T{AC}}/e$ in $V_\T{DC}$ between the individual peaks, in agreement with \cref{eq: 93}. As the AC voltage increases, AC-induced subgap transport at lower voltages becomes possible. Compared to the normal (i.e. non-superconducting) case, here we obtain \textit{two} fans corresponding to the states at the two sides of the gap. The conductance itself changes sign at the gap edges, as can be seen in \Cref{fig:AC-fan}~(a) and more clearly in \Cref{fig:AC-fan}~(b). This is a well-known result of the peaked DOS of superconductors~\cite{Doh2008}.
The resulting conductance peaks at the different resonances follow nonetheless a Bessel-like pattern, shown in \Cref{fig:AC-fan}~(c). We have represented, together with the conductance, the associated squared Bessel function $J_n^2(\epsilon_\text{AC}/2)$ (employing dashed lines). Since the conductance is evaluated along the peak of the respective rate contribution, this contribution dominates over the other rates. Hence, the result expected from the Tien-Gordon model is recovered partially, especially at large values of $V_\T{AC}$. Similar features have been observed in recent experiments in scanning tunneling microscopy with superconducting tip and substrate \cite{Kot2020,Peters2020}. The complex nature of the stability diagram of \cref{fig:AC-case} is a result of having two incommensurate energy scales. The stability diagram under an AC voltage for the normal case (i.e. non-superconducting leads) is comparatively simple, as can be seen in \cref{fig:2Col}~(a) for $\hbar\omega_\T{AC}=0.32\,\text{meV}$, since only the AC frequency comes into play. Similarly, for superconducting leads with $\hbar\omega_\T{AC}=\Delta=0.32\,\text{meV}$, as represented in \cref{fig:2Col}~(b), the current also exhibits a simpler structure as compared to \cref{fig:AC-case}, since the two energy scales are commensurate. In this situation, the AC bias pumps quasiparticles from below the gap so that a non-zero current can flow even for arbitrarily small $V_\T{AC},V_\T{DC}$. For the normal case, the DOS is flat and there is no possibility for current inversion, as the backward and forward photon assisted rates will be equal. As a result, the current always flows in the direction of the DC bias. \Cref{fig:2Col}~(c) shows a small vertical cut of the stability diagram for both the normal (blue) and the superconducting case (green, the lighter color corresponding to $\hbar\omega_\T{AC}=\Delta$ and the darker color corresponding to $\hbar\omega_\T{AC}=2\Delta$), showing current inversion in the latter. In \cref{fig:2Col}~(d) we have represented the sign of the current for the same parameters as in \cref{fig:2Col}~(b), in the region close to $V_\T{DC}=0$, in order to showcase more clearly the pattern resulting from current inversion. In the two diamond-like current inversion regions near $V_\T{G}=0$, both the $k=0$ and the $k=\pm1$ rates are blockaded by the gap and the $k=\pm2$ rates are dominant. Current inversion then corresponds to the region where the $k=2$ rate is large due to the peak of the DOS and the $k=-2$ is smaller as it samples the flat region of the DOS (and vice versa). Apart from these diamonds, current inversion occurs along $V_\T{G}=0$ in intervals separated by $\hbar\omega_\T{AC}=\Delta$ but now has a conic shape. The explanation of their origin is nonetheless the same (i.e. higher $|k|$ rates being dominant over the ones with lower $|k|$). The small stripe of current inversion near $V_\T{DC}=0$ for $V_\T{G}<0$ is a numerical error appearing in the region of zero bias, where the model fails according to \cref{eq:cond-res-ac}. \section{Conclusion\label{sec: Conclusion}} In this work we have studied the transport through a quantum dot Josephson junction in the presence of a periodic driving. By generalizing the reduced density matrix approach to transport to the case of multiple driving frequencies, we were able to give an expression for the long time behavior of the current and the relevant operator describing the QD.
We have given general expressions valid at all perturbative orders for the generalized master equation satisfied by the reduced QD operator and, from there, obtained the expressions for the weak tunneling limit. Within this limit, we have derived the conditions of validity for the sequential tunneling approximation. For the case of high frequency compared to the coupling strength, we were able to derive analytical expressions for the sequential tunneling rates in the presence of photon assisted tunneling. The photon assisted tunneling rates determine the current in a highly non-linear manner, and although the features of the current can often be understood based on the appearance of sidebands, other effects are less trivial. Among them, we predict the emergence of total current inversion, in which the current flows in the opposite direction of the DC voltage bias for certain regions of the stability diagram. We explained its origin as being due to the non-flat density of states and the dominance of backward photon assisted tunneling rates. The formalism presented here serves as a counterpoint to non-equilibrium Green's function techniques~\cite{Yeyati2017}, which describe tunneling to all orders in the coupling, but can only treat the interaction approximately. In a similar manner, density operator methods, which treat the interaction exactly, can be extended to the intermediate coupling regime via diagrammatic summation techniques~\cite{Konig1996b,Kern2013}. A similar idea has been implemented for the case of infinite gap~\cite{Pala2007,Governale2008}, where the sequential tunneling anomalous kernel, as presented here, is the dominant term, and the intricate cotunneling and higher order contributions to the kernels can be neglected. In that direction, future research may relax the weak coupling condition employed throughout this work and consider higher orders in the tunneling. With the supercurrent arising at the next higher order for the same formalism~\cite{Glazman1989}, this lays the groundwork for a microscopic theory of the physics of AC-driven Josephson junctions. Among the effects that are expected to arise beyond the weak coupling limit is the appearance of Shapiro steps, which are understood within the semiclassical picture~\cite{Kautz1996} to originate from the characteristic non-linear nature of the supercurrent term, together with relaxation and the AC drive. This, in fact, may result in the appearance of chaotic dynamics under specific conditions. Most of the elements necessary to describe this situation are included in the model as described here, and therefore a connection to the quantum mechanical microscopic behaviour would be of great interest. Moreover, recent experiments on topological Josephson junctions hint at signatures of Majorana zero modes in the pattern of Shapiro steps for junctions made from topological superconductors, which makes such a microscopic description highly desirable. Although there exist several treatments based on the semiclassical picture~\cite{Dominguez2012,PicoCortes2017,Park2021}, there is surprisingly little literature~\cite{Virtanen2013,Li2018,Galaktionov2021} on the problem from a microscopic point of view, even though a detailed characterization is fundamental in order to differentiate trivial and topological modes~\cite{Fischer2022}.
The methodology as presented here could shed light on the properties of the fractional Josephson effect and the associated even-odd pattern in the Shapiro steps, beyond what is understood from the semiclassical treatment. This is particularly interesting in the interacting case, where previous works have shown that the periodicity of the Josephson effect can be highly non-trivial~\cite{Zhang2014,Peng2016}. In that direction, the formalism as presented here seems particularly well suited to tackle this problem. \begin{acknowledgments} We thank G. Platero and A. Donarini for fruitful discussions and acknowledge DFG funding through projects B04 and B09 of SFB 1277 Emerging Relativistic Phenomena in Condensed \mbox{Matter} and support from CSIC Research Platform PTI-001 as well as (MICINN) via Grant No. PID2020-117787GB-100. \end{acknowledgments}
\section{Introduction}\label{sec1} Empirical Bayes methods, though of increasing use, still suffer from an uncertain theoretical basis, enjoying neither the safe haven of Bayes theorem nor the steady support of frequentist optimality. Their rationale is often reduced to inserting more or less obvious estimates into familiar Bayesian formulas. This conceals the essential empirical Bayes task: learning an appropriate prior distribution from ongoing statistical experience, rather than knowing it by assumption. Efficient learning requires both Bayesian and frequentist modeling strategies. My plan here is to discuss such strategies in a mathematically simplified framework that, hopefully, renders them more transparent. The development proceeds with some methodological discussion supplemented by numerical examples. A wide range of empirical Bayes applications have the following structure: repeated sampling from an unknown prior distribution $g\pthe $ yields unseen realizations \begin{equation} \Theta_1,\Theta_2,\ldots,\Theta_N. \label{11} \end{equation} Each $\Theta_k$ in turn provides an observation $X_k\sim f_{\Theta _k}\pdot$ from a known probability family $f_\theta(x)$, \begin{equation} X_1,X_2,\ldots,X_N. \label{12} \end{equation} On the basis of the observed sample \eqref{12}, the statistician wishes to approximate certain Bayesian inferences that would be directly available if $g\pthe$ were known. This is the empirical Bayes framework developed and named by \citet{robbins}. Both $\Theta$ and $X$ are usually one-dimensional variates, as they will be in our examples, though that is of more applied than theoretical necessity. A central feature of empirical Bayes estimation is that the data arrives on the $x$ scale but inferences are calculated on the $\theta$ scale. Two main strategies have developed: modeling on the $\theta$ scale, called \textit{$g$-modeling} here, and modeling on the $x$ scale, called \textit{$f$-modeling}. $G$-modeling has predominated in the theoretical empirical Bayes literature, as in \citet{laird}, \citet {morris}, \citet{zhang}, and \citet{jiang}. Applications, on the other hand, from \citet{robbins} onward, have more often relied on $f$-modeling, recently as in \citeauthor{2010} (\citeyear{2010,2011}) and \citet{brown}. We begin Section~\ref{sec2} with a discretized statement of Bayes theorem that simplifies the nonparametric\vadjust{\goodbreak} $f$-modeling development of Section~\ref{sec3}. Parameterized $f$-modeling, necessary for efficient empirical Bayes estimation, is discussed in Section~\ref{sec4}. Section~\ref{sec5} introduces an exponential family class of $g$-modeling procedures. Classic empirical Bayes applications, an $f$-modeling stronghold (including Robbins' Poisson formula, the James--Stein estimator and false discovery rate methods), are the subject of Section~\ref{sec6}. The paper concludes with a brief discussion in Section~\ref{sec7}. Several numerical examples, both contrived and genuine, are carried through in Sections~\ref{sec2} through \ref{sec7}. The comparison is never one-sided: as one moves away from the classic applications, $g$-modeling comes into its own. Trying to go backward, from observations on the $x$-space to the unknown prior $g\pthe$, has an ill-posed computational flavor. Empirical Bayes calculations are inherently fraught with difficulties, making both of the modeling strategies useful. An excellent review of empirical Bayes methodology appears in Chapter~3 of \citet{newcarlin}. 
There is an extensive literature, much of it focusing on rates of convergence, concerning the ``deconvolution problem,'' that is, estimating the distribution $g\pthe$ from the observed $X$ values. A good recent reference is \citet{butucea}. Empirical Bayes inference amounts to estimating certain nonlinear functionals of $g\pdot$, whereas linear functionals play a central role for the deconvolution problem, as in \citet{cavalier}, but the two literatures are related. The development in this paper employs discrete models that avoid rates of convergence difficulties. Empirical Bayes analyses often produce impressive-looking estimates of posterior $\theta$ distributions. The main results in what follows are a series of computational formulas---Theorems \ref{th1} through \ref{th4}---giving the accuracy of both $f$-model and $g$-model estimates. Accuracy can be poor, as some of the examples show, and in any case accuracy assessments are an important part of the analysis. \section{A Discrete Model of Bayesian Inference}\label{sec2} In order to simplify the $f$-modeling computations, we will assume a model in which both the parameter vector $\theta$ and the observed data set $x$ are confined to finite discrete sets: \begin{eqnarray}\label{21} \theta\in\bthe&=&(\theta_1,\theta_2,\ldots, \theta_j,\ldots,\theta_m)\quad\mbox{and} \nonumber \\[-8pt] \\[-8pt] \nonumber x\in\mathbf{x}&=&(x_1,x_2, \ldots,x_i,\ldots,x_n) \end{eqnarray} with $m<n$. The prior distribution $\bg$ puts probability $g_j$ on $\theta_j$, \begin{equation} \bg=(g_1,g_2,\ldots,g_j, \ldots,g_m)'. \label{22} \end{equation} This induces a marginal distribution $\bmf$ on $\mathbf{x}$, \begin{equation} \bmf=(f_1,f_2,\ldots,f_i, \ldots,f_n)', \label{23} \end{equation} with $f_i=\Pr\{x=x_i\}$. Letting $\{p_{ij}\}$ represent the sampling probabilities \begin{equation} p_{ij}=\Pr\{x_i|\theta_j\}, \label{24} \end{equation} the $n\times m$ matrix \begin{equation} P=(p_{ij}) \label{25} \end{equation} produces $\bmf$ from $\bg$ according to \begin{equation} \bmf=P\bg. \label{26} \end{equation} \begin{figure*} \includegraphics{455f01.eps} \caption{\emph{Top:} Discrete model: prior $g\pthe, \theta =\operatorname {seq}(-3,3,0.2)$; $g$ is equal mixture of $\caln(0,0.5^2)$ and density $\propto|\theta|$. \emph{Bottom:} Corresponding $f(x)$: assuming $\caln(\theta,1)$ sampling, $x=\operatorname{seq}(-4.4,5.2,0.05)$. Note the different scales.} \label{fig1} \end{figure*} In the example of Figure~\ref{fig1}, we have \begin{equation} \bthe=(-3,-2.8,\ldots,3)\quad (m=31), \label{27} \end{equation} with $g\pthe$ an equal mixture of a discretized $\caln(0,0.5^2)$ density and a density proportional to $|\theta|$. The sampling probabilities $p_{ij}$ are obtained from the normal translation model $\varphi(x_i-\theta_j)$, $\varphi$ the standard normal density function, and with \begin{equation} \mathbf{x}=(-4.4,-4.35,\ldots,5.2)\quad (n=193). \label{28} \end{equation} Then $\bmf=P\bg$ produces the triangular-shaped mar\-ginal density $f(x)$ seen in the bottom panel. Looking ahead, we will want to use samples from the bottom distribution to estimate functions of the top. In the discrete model \eqref{21}--\eqref{26}, Bayes rule takes the form \begin{equation} \Pr\{\theta_j|x_i\}=p_{ij}g_j/f_i. 
\label{29} \end{equation} Letting $\bp_i$ represent the $i$th row of matrix $P$, the $m$-vector of posterior probabilities of $\theta$ given $x=x_i$ is given by \begin{equation} \diag(\bp_i)\bg/\bp_i\bg, \label{210} \end{equation} where $\diag(\mathbf{v})$ indicates a diagonal matrix with diagonal elements taken from the vector $\mathbf{v}$. Now suppose $t\pthe$ is a parameter of interest, expressed in our discrete setting by the vector of values \begin{equation} \bt=(t_1,t_2,\ldots,t_j, \ldots,t_m)'. \label{211} \end{equation} The posterior expectation of $t\pthe$ given $x=x_i$ is then \begin{eqnarray} \label{212} E \{t\pthe|x_i \}&=&\sum _{j=1}^mt_jp_{ij}g_j\Big/f_i \nonumber \\[-8pt] \\[-8pt] \nonumber &=&\bt'\diag(\bp_i)\bg/\bp_i\bg. \end{eqnarray} The main role of the discrete model \eqref{21}--\eqref{26} is to simplify the presentation of $f$-modeling begun in Section~\ref{sec3}. Basically, it allows the use of familiar matrix calculations rather than functional equations. $G$-modeling, Section~\ref{sec5}, will be presented in both discrete and continuous forms. The prostate data example of Section~\ref {sec6} shows our discrete model nicely handling continuous data. \section{Bayes Rule in Terms of \lowercase{$\mathbf{f}$}}\label{sec3} Formula \eqref{212} expresses $E\{t\pthe|x_i\}$ in terms of the prior distribution $\bg$. This is fine for pure Bayesian applications, but in empirical Bayes work, information arrives on the $x$ scale and we may need to express Bayes rule in terms of $\bmf$. We begin by inverting \eqref{26}, $\bmf=P\bg$. For now assume that the $n\times m$ matrix $P$ \eqref{24}--\eqref{25} is of full rank $m$. Then the $m\times n$ matrix \begin{equation} A=\bigl(P'P\bigr)^{-1}P' \label{31} \end{equation} carries out the inversion, \begin{equation} \bg=A\bmf. \label{32} \end{equation} Section~\ref{sec4} discusses the case where rank($P$) is less than~$m$. Other definitions of $A$ are possible; see the discussion in Section~\ref{sec7}. With $\bp_i$ denoting the $i$th row of $P$ as before, let \begin{equation}\quad \mathbf{u}'=(\cdots t_jp_{ij}\cdots)= \bt'\diag(\bp_i), \quad\mathbf{v}'= \bp_i \label{33} \end{equation} and \begin{equation} \bU'=\mathbf{u}'A, \quad\bV'=\mathbf{v}'A, \label{34} \end{equation} $\bU$ and $\bV$ being $n$-vectors. (Here we are suppressing the subscript in $\mathbf{u}=\mathbf{u}_i$, etc.) Using \eqref{32}, the Bayes posterior expectation $E\{t|x_i\}$ \eqref{212} becomes \begin{equation} E\{t|x_i\}=\frac{\mathbf{u}'\bg}{\mathbf{v}'\bg}=\frac{\bU'\bmf}{\bV '\bmf}, \label{35} \end{equation} the latter being \textit{Bayes rule in terms of $\bmf$}. Notice that $\bU$ and $\bV$ do not depend on $\bg$ or $\bmf$. The denominator $\bV '\bmf$ equals $f(x_i)$ in \eqref{35}, but not in the regularized versions of Section~\ref{sec4}. In a typical empirical Bayes situation, as in Section~6.1 of \citet {2010}, we might observe independent observations $X_1,X_2,\ldots,X_N$ from the marginal density $f(x)$, \begin{equation} X_k\iid f\pdot,\quad k=1,2,\ldots,N, \label{36} \end{equation} and wish to estimate $E=E\{t|x_i\}$.
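As a numerical illustration (not part of the original text), the following Python sketch sets up the discrete example of Figure~\ref{fig1} and evaluates $E\{t\pthe|x=2.5\}$ both from $\bg$ as in \eqref{212} and from $\bmf$ as in \eqref{35}. An SVD-based pseudo-inverse with a small singular-value cutoff stands in for \eqref{31}, since the exact inverse is numerically unstable here; this is precisely the issue taken up in Section~\ref{sec4}, so the two routes agree closely but not exactly.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Discrete model of Section 2, as in the Figure 1 example.
theta = np.arange(-3, 3 + 1e-9, 0.2)            # m = 31 support points
x = np.arange(-4.4, 5.2 + 1e-9, 0.05)           # n = 193 observation points

g1 = norm.pdf(theta, 0, 0.5); g1 /= g1.sum()    # discretized N(0, 0.5^2)
g2 = np.abs(theta); g2 /= g2.sum()              # density proportional to |theta|
g = 0.5 * g1 + 0.5 * g2                         # prior g, equal mixture

P = norm.pdf(x[:, None] - theta[None, :])       # p_ij from phi(x_i - theta_j)
P /= P.sum(axis=0, keepdims=True)               # columns are sampling probabilities
f = P @ g                                       # marginal f = P g

# Bayes rule in terms of f, for t(theta) = theta (parameter (1)) and x = 2.5.
i = np.argmin(np.abs(x - 2.5))
t = theta.copy()
A = np.linalg.pinv(P)                           # pseudo-inverse, small d_j cut off
u, v = t * P[i, :], P[i, :]                     # u' and v' of (3.3)
U, V = A.T @ u, A.T @ v                         # U' = u'A, V' = v'A of (3.4)
E_from_g = (u @ g) / (v @ g)                    # posterior expectation via g
E_from_f = (U @ f) / (V @ f)                    # the same quantity via f
\end{verbatim}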
For the discrete model \eqref{21}, the vector of counts $\by=(y_1,y_2,\ldots,y_n)'$, \begin{equation} y_i=\#\{X_k=x_i\}, \label{37} \end{equation} is a nonparametric sufficient statistic; $\by$ follows a multinomial distribution on $n$ categories, $N$ draws, probability vector $\bmf$, \begin{equation} \by\sim\operatorname{Mult}_n(N,\bmf), \label{38} \end{equation} having mean vector and covariance matrix \begin{equation} \by\sim\bigl(N\bmf,ND(\bmf) \bigr), \quad D(\bmf)\equiv\diag(\bmf )-\bmf \bmf'. \label{39} \end{equation} The unbiased estimate of $\bmf$, \begin{equation} \hbf=\by/N, \label{310} \end{equation} gives a nonparametric estimate $\hate$ of $E\{t|x_i\}$ by substitution into \eqref{35}, \begin{equation} \hate=\bU'\hbf/\bV'\hbf. \label{311} \end{equation} Using $\hbf\sim(\bmf,D(\bmf)/N)$, a standard differential argument yields the approximate ``delta method'' frequentist standard error of $\hate$. Define \begin{equation} U_f=\sum_{i=1}^nf_iU_i,\quad V_f=\sum_{i=1}^nf_iV_i \label{312} \end{equation} and \begin{equation} \bW=\frac{\bU}{U_f}-\frac{\bV}{V_f}. \label{313} \end{equation} (Notice that $\sum f_iW_i=0$.) \begin{thm}\label{th1} The delta-method approximate standard deviation of $\hate=\bU'\hbf /\bV '\hbf$ is \begin{equation} \sd(\hate)=\frac{1}{\sqrt{N}}|E|\cdot\sigma_f(W), \label{314} \end{equation} where $E=\bU'\bmf/\bV'\bmf$ and \begin{equation} \sigma_f^2(W)=\sum_{i=1}^nf_iW_i^2. \label{315} \end{equation} The approximate coefficient of variation $\sd(\hate)/|E|$ of $\hate$ is \begin{equation} \cv(\hate)=\sigma_f(W) /\sqrt{N}. \label{316} \end{equation} \end{thm} \begin{pf} From \eqref{35} we compute the joint moments of $\bU'\hbf$ and $\bV '\hbf$, \begin{eqnarray}\label{317} \qquad&&\pmatrix{\bU'\hbf \cr \bV'\hbf} \nonumber \\[-8pt] \\[-8pt] \nonumber &&\quad\sim\biggl( \pmatrix{U_f \cr V_f},\frac{1}{N} \pmatrix{\sigma_f^2(U)&\sigma_f(U,V) \cr \sigma_f(U,V)&\sigma_f^2(V) } \biggr), \end{eqnarray} with $\sigma_f^2(U)=\sum f_i(U_i-U_f)^2, \sigma_f(U,V)= \sum f_i(U_i-U_f)(V_i-V_f)$, and $\sigma_f^2(V)=\sum f_i(V_i-V_f)^2$. Then \begin{eqnarray} \label{318} \hate=\frac{\bU'\hbf}{\bV'\hbf}&=&E\cdot\frac{1+\hdel _U}{1+\hdel_V}\quad \nonumber\\ &\doteq& E\cdot(1+\hdel_U-\hdel_V ),\\ \eqntext{\displaystyle\biggl[\hdel_U=\frac{\bU'\hbf-U_f}{U_f}, \hdel_V= \frac{\bV'\hbf -V_f}{V_f} \biggr]} \end{eqnarray} so $\sd(\hate)^2\doteq E^2\var(\hdel_U-\hdel_V)$, which, again using~\eqref{39}, gives \tref{th1}. \end{pf} The trouble here, as will be shown, is that $\sd(\hate)$ or $\cv (\hate )$ may easily become unmanageably large. Empirical Bayes methods require sampling on the $x$ scale, which can be grossly inefficient for estimating functions of $\theta$. Hypothetically, the $X_k$'s in \eqref{36} are the observable halves of pairs $(\Theta,X)$, \begin{equation}\qquad (\Theta_k,X_k)\ind g\pthe f_\theta(x),\quad k=1,2, \ldots,N. \label{319} \end{equation} If the $\Theta_k$'s \textit{had} been observed, we could estimate $\bg$ directly as $\bar{\bg}=(\barg_1,\barg_2,\ldots,\barg_m)'$, \begin{equation} \barg_j=\#\{\Theta_k=\theta_j\}/N, \label{320} \end{equation} leading to the \textit{direct Bayes estimate} \begin{equation} \bare=\mathbf{u}'\bar{\bg}/\mathbf{v}'\bar{\bg}. \label{321} \end{equation} $\bare$ would usually be less variable than $\hate$ \eqref{311} (and would automatically enforce possible constraints on $E$ such as monotonicity in $x_k$). A version of \tref{th1} applies here.
Now we define \begin{eqnarray} u_g&=&\sum_{j=1}^mg_ju_j,\quad v_g=\sum_{j=1}^mg_jv_j\quad \mbox{and} \nonumber \\[-8pt] \\[-8pt] \nonumber \mathbf{w}&=&\mathbf{u} /u_g-\mathbf{v}/v_g. \label{322} \end{eqnarray} \begin{thm}\label{th2} For direct Bayes estimation \eqref{321}, the delta-method approximate standard deviation of $\bare$ is \begin{equation} \sd(\bare)=\frac{1}{\sqrt{N}}|E|\cdot\sigma_g(w), \label{323} \end{equation} where \begin{equation} \sigma_g^2(w)=\sum_{j=1}^mg_jw_j^2; \label{324} \end{equation} $\bare$ has approximate coefficient of variation \begin{equation} \cv(\bare)=\sigma_g(w) /\sqrt{N}. \label{325} \end{equation} \end{thm} The proof of \tref{th2} is the same as that for \tref{th1}. \begin{table} \tabcolsep=0pt \caption{Standard deviation and coefficient of variation of $E\{t\pthe |x=2.5\}$ (for $N=1$); for the three parameters \protect\eqref{326}, with $g$ and $f$ as in Figure~\protect\ref{fig1}; sdf from \protect\tref{th1} \protect\eqref{314}; sdd for direct Bayes estimation, \protect\tref{th2} \protect\eqref{323}; sdx from the regularized $f$-modeling of Section~\protect\ref{sec4}, \protect\tref{thm3} \protect\eqref{48} \label{tab1} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill }}lcd{2.2}d{2.2}d{2.2}ccc@{}} \hline &&\multicolumn{3}{c}{$\bolds{N^{1/2}}$ \textbf{sd}}& \multicolumn{3}{c}{$\bolds{N^{1/2}}$ \textbf{cv}}\\[-6pt] &&\multicolumn{3}{c}{\hrulefill}& \multicolumn{3}{c@{}}{\hrulefill}\\ $\bolds{t\pthe}$ &\multicolumn{1}{c}{$\bolds{E\{t|x=2.5\} }$}&\multicolumn{1}{c}{\textbf{sdf}}& \multicolumn{1}{c}{\textbf{sdd}} & \multicolumn{1}{c}{\textbf{sdx}} & \multicolumn{1}{c}{\textbf{cvf}} & \multicolumn{1}{c}{\textbf{cvd}}& \multicolumn{1}{c@{}}{\textbf {cvx}}\\ \hline Parameter (1)& 2.00 & 8.74& 3.38& 2.83& 4.4& 1.7& 1.4\\ Parameter (2)& 4.76 & 43.4& 13.7& 10.4& 9.1& 2.9& 2.2\\ Parameter (3)& 0.03 & 43.9& 0.53& 1.24& 1371& 16& 39\\ \hline \end{tabular*} \end{table} Table~\ref{tab1} concerns the estimation of $E\{t\pthe|x=2.5\}$ for the situation shown in Figure~\ref{fig1}.\vadjust{\goodbreak} Three different parameters $t\pthe$ are considered: \begin{eqnarray}\label{326} (1)\quad &t\pthe=\theta, \nonumber\\ (2)\quad &t\pthe=\theta^2, \\ (3)\quad &t\pthe= \cases{1,&$\mbox{if }\theta\leq0,$\vspace*{2pt} \cr 0,&$\mbox{if }\theta>0$.}\nonumber \end{eqnarray} In the third case, $E\{t\pthe|x\}=\Pr\{\theta\leq0|x\}$. \textitt{Cvf} is $\sqrt{N}\cv(\hate)$ \eqref{316} so cvf$/\sqrt{N}$ is the approximate coefficient of variation of $\hate$, the nonparametric empirical Bayes estimate of $E\{t\pthe|x=2.5\}$. \textitt{Cvd} is the corresponding quantity \eqref{325}, available only if we could directly observe the $\Theta_k$ values in \eqref{319}, while \textitt{cvx} is a regularized version of $\hate$ described in the next section. Suppose we wish to bound $\cv(\hate)$ below some prespecified value $c_0$, perhaps $c_0=0.1$. Then according to \eqref{316}, we need $N$ to equal \begin{equation} N=(\cv_1/c_0)^2, \label{327} \end{equation} where $\cv_1$ is the numerator $\sigma_f(W)$ of \eqref{316}, for example, cvf in Table~\ref{tab1}. For the three parameters \eqref{326} and for $c_0=0.1$, we would require $N=1936$, 8281 and 187 million, respectively. 
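Continuing the sketch above, the accuracy formulas behind Table~\ref{tab1} take only a few more lines. Because the pseudo-inverse used there truncates the small singular values at a different point than the rule adopted in Section~\ref{sec4}, the resulting values track, but need not reproduce, the cvf and cvd entries of the table.
\begin{verbatim}
# Theorem 1: f-scale (empirical Bayes) accuracy, Eqs. (3.12)-(3.16).
U_f, V_f = f @ U, f @ V
W = U / U_f - V / V_f
cvf = np.sqrt(f @ W**2)            # sigma_f(W); cv(E_hat) = cvf / sqrt(N)

# Theorem 2: direct (theta-scale) accuracy, Eqs. (3.22)-(3.25).
u_g, v_g = g @ u, g @ v
w = u / u_g - v / v_g
cvd = np.sqrt(g @ w**2)            # sigma_g(w); cv(E_bar) = cvd / sqrt(N)

# Sample size needed for a target coefficient of variation c0, Eq. (3.27):
c0 = 0.1
N_needed = (cvf / c0) ** 2
\end{verbatim}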
\begin{figure*} \includegraphics{455f02.eps} \caption{$\bW$ vector \protect\eqref{313} for $f$-Bayes estimation of $\Pr\{ \theta\leq0|x=2.5\}$ for the model of Figure~\protect\ref{fig1} (actually $\bW_{12}$ as in Section~\protect\ref{sec4}; dashed curve is $\bW_9$).} \label{fig2} \end{figure*} The vector $\bW$ for parameter (3) is seen to take on enormous values in Figure~\ref{fig2}, resulting in $\sigma_f(W)=1370.7$ for \eqref{316}. The trouble stems from the abrupt discontinuity of $t_3$ at $\theta=0$, which destabilizes $\bU$ in \eqref{313}. Definition \eqref{34} implies $\bU'P=\mathbf{u}'$. This says that $\bU'$ must linearly compose $\mathbf {u}'$ from the rows of $P$. But in our example the rows of $P$ are smooth functions of the form $\varphi(x_i-\theta_j)$, forcing the violent cycling of $U$ seen in Figure~\ref{fig2}. Section~\ref{sec4} discusses a regularization method that greatly improves the accuracy of using ``Bayes rule in terms of $\bmf$.'' Table~\ref{tab1} shows that if we \textit{could} sample on the $\theta$ scale, as in \eqref{320}, we would require ``only'' 25,600 $\Theta_k$ observations to achieve coefficient of variation 0.1 for estimating $\Pr \{\theta\leq0|x=2.5\}$; direct sampling is almost always more efficient than $f$ sampling, but that is not the way empirical Bayes situations present themselves. The efficiency difference is a factor of 86 for parameter (3), but less than a factor of 3 for parameter (1), $t\pthe =\theta$. The latter is a particularly favorable case for empirical Bayes estimation, as discussed in Section~\ref{sec6}. The assumption of independent sampling, \eqref{36} and \eqref{319}, is a crucial element of all our results. Independence assumptions (often tacitly made) dominate the empirical Bayes literature, as in \citet {muralidharan}, \citet{zhang}, \citet{morris}, and \citet{1975morris}. Nonindependence effectively reduces the effective sample size $N$; see Chapter~8 of \citet{2010}. This point is brought up again in Section~\ref{sec6}. \section{Regularized \lowercase{$f$}-Modeling}\label{sec4} Fully nonparametric estimation of $E=E\{t\pthe|x\}$ is sometimes feasible, but, as seen in Table~\ref{tab1} of Section~\ref{sec3}, it can become unacceptably noisy. Some form of regularization is usually necessary. A promising approach is to estimate $\bmf$ parametrically according to a smooth low-dimensional model. Suppose then that we have such a model, yielding $\hbf$ as an estimate of $\bmf$ \eqref{23}, with mean vector and covariance matrix \begin{equation} \hbf\sim\bigl(\bmf,\Delta(\bmf)/N \bigr). \label{41} \end{equation} In the nonparametric case \eqref{39} $\Delta(\bmf)=D(\bmf)$, but we expect that we can reduce $\Delta(\bmf)$ parametrically. In any case, the delta-method approximate coefficient of variation for $\hate=\bU '\hbf/\bV'\hbf$ \eqref{311} is given in terms of $\bW$~\eqref{313}: \begin{equation} \cv(\hate)= \bigl\{\bW'\Delta(\bmf)\bW/N \bigr\}^{1/2}. \label{42} \end{equation} This agrees with \eqref{316} in the nonparametric situation~\eqref{39} where $\Delta(\bmf)=\diag(\bmf)-\bmf\bmf'$. The verification of \eqref {42} is almost identical to that for \tref{th1}. Poisson regression models are convenient for the smooth parametric estimation of $\bmf$. 
Beginning with an $n\times p$ structure matrix $\bX$, having rows $\mathbf{x}_i$ for $i=1,2,\ldots,n$, we assume that the components of the count vector $\by$ \eqref{37} are independent Poisson observations, \begin{eqnarray}\label{43} y_i\ind\operatorname{Poi}(\mu_i),\quad \mu_i=e^{\mathbf{x}_i\alpha} \nonumber \\[-8pt] \\[-8pt] \eqntext{\mbox{for }i=1,2,\ldots,n,} \end{eqnarray} where $\alpha$ is an unknown vector of dimension $p$. Matrix~$\bX$ is assumed to have as its first column a vector of 1's. Let $\mu_+=\sum_1^n\mu_i$ and $N=\sum_1^ny_i$, and define \begin{equation} f_i=\mu_i/\mu_+\quad \mbox{for }i=1,2,\ldots,n. \label{44} \end{equation} Then a well-known Poisson/multinomial relationship says that the conditional distribution of $\by$ given $N$ is \begin{equation} \by|N\sim\operatorname{Mult}_n(N,\bmf) \label{45} \end{equation} as in \eqref{38}. Moreover, under mild regularity conditions, the estimate $\hbf=\by/N$ has asymptotic mean vector and covariance matrix (as $\mu_+\to\infty$) \begin{equation} \hbf\,\dot\sim\,\bigl(\bmf,\Delta(\bmf)/N \bigr), \label{46} \end{equation} where \begin{eqnarray}\label{47} \Delta(\bmf)=\diag(\bmf)\bX G_f^{-1}\bX' \diag(\bmf) \nonumber \\[-8pt] \\[-8pt] \eqntext{\bigl[G_f=\bX'\diag(\bmf)\bX\bigr];} \end{eqnarray} Equations~\eqref{46}--\eqref{47} are derived from standard generalized linear model calculations. Combining \eqref{42} and~\eqref{46} gives a Poisson regression version of \tref{th1}. \begin{thm}\label{th3} The delta-method coefficient of variation for $\hate=\bU'\hbf/\bV '\hbf$ under Poisson model \eqref{43} is \begin{equation}\quad \cv(\hate)= \bigl\{\bigl(\bW'\bX\bigr)_f\bigl( \bX'\bX\bigr)_f^{-1}\bigl(\bW'\bX \bigr)_f'/N \bigr\}^{1/2}, \label{48} \end{equation} where \begin{eqnarray}\label{49} \bigl(\bW'\bX\bigr)_f&=&\bW'\diag(\bmf)\bX\quad \mbox{and} \nonumber \\[-8pt] \\[-8pt] \nonumber \bigl(\bX'\bX\bigr)_f&=&\bX'\diag( \bmf)\bX, \end{eqnarray} with $\bW$ as in \eqref{313}. \label{thm3} \end{thm} The bracketed term in \eqref{48}, times $N$, is recognized as the length$^2$ of the projection of $\bW$ into the $p$-dimensional space spanned by the columns of $\bX$, carried out using inner product $\langle a,b\rangle_f=\sum f_ia_ib_i$. In the nonparametric case, $\bX$ equals the identity $I$, and \eqref{48} reduces to \eqref{316}. As in \eqref{314}, $\sd(\hate)$ is approximated by $|E|\cv(\hate)$. [\textit {Note}: \tref{thm3} remains valid as stated if a multinomial model for $\hbf$ replaces the Poisson calculations in \eqref{47}.] \textitt{Cvx} in Table~\ref{tab1} was calculated as in \eqref{48}, with $N=1$. The structure matrix $\bX$ for the example in Figure~\ref{fig1} was obtained from the R natural spline function $ns(x,df=5)$; including a column of 1's made $\bX193\times6$. The improvements over \textitt{cvf}, the nonparametric coefficients of variation, were by factors of 3, 5 and 100 for the three parameters \eqref{326}. The regularization in \tref{thm3} takes place with respect to $\bmf$ and $\hbf$. Good performance also requires regularization of the inversion process $\hbg=A\hbf$ \eqref{32}. Going back to the beginning of Section~\ref{sec3}, let \begin{equation} P=LDR' \label{410} \end{equation} represent the singular value decomposition of the $n\times m$ matrix $P$, with $L$ the $n\times m$ orthonormal matrix of left singular vectors, $R$ the $m\times m$ orthonormal matrix of right singular vectors, and $D$ the $m\times m$ diagonal matrix of singular values, \begin{equation} d_1\geq d_2\geq\cdots\geq d_m. 
\label{411} \end{equation} Then it is easy to show that the $m\times n$ matrix \begin{equation} A=RD^{-1}L' \label{412} \end{equation} is the \textit{pseudo-inverse} of $P$, which is why we could go from $\bmf=P\bg$ to $\bg=A\bmf$ at \eqref{32}. [Other pseudo-inverses exist; see \eqref{71}.] Definition \eqref{412} depends on $P$ being of full rank~$m$, equivalently having $d_m>0$ in \eqref{411}. Whether or not this is true, very small values of $d_j$ will destabilize $A$.\vadjust{\goodbreak} The familiar cure is to truncate representation~\eqref{412}, lopping off the end terms of the singular value decomposition. If we wish to stop after the first $r$ terms, we define $R_r$ to be the first $r$ columns of $R$, $L_r$ the first $r$ columns of $L$, $D_r$ the $r\times r$ diagonal matrix $\diag(d_1,d_2,\ldots,d_r)$, and \begin{equation} A_r=R_rD_r^{-1}L_r'. \label{413} \end{equation} In fact, $r=12$ was used in Figure~\ref{fig2} and Table~\ref{tab1}, chosen to make \begin{equation} \sum_{r+1}^md_j^2 \bigg/\sum_1^md_j^2<10^{-10}. \label{414} \end{equation} As in \eqref{31}--\eqref{313}, let \begin{equation} \bU_r'=\mathbf{u}'A_r,\quad \bV_r'=\mathbf{v}'A_r \label{415} \end{equation} [$\mathbf{u}$ and $\mathbf{v}$ stay the same as in (\ref{33})], \begin{equation} E_r=\frac{\bU_r'\bmf}{\bV_r'\bmf},\quad \hate_r=\frac{\bU_r'\hbf }{\bV _r'\hbf} \label{416} \end{equation} and \begin{equation} \bW_r=\frac{\bU_r}{\sum f_iU_{ri}}-\frac{\bV_r}{\sum f_iV_{ri}}. \label{417} \end{equation} \tref{thm3} then remains valid, with $\bW_r$ replacing $\bW$. \textit {Note}: Another regularization method, which will not be pursued here, is the use of ridge regression rather than truncation in the inversion process \eqref{32}, as in \citet{hall}. \begin{table}[b] \tabcolsep=0pt \caption{Coefficient of variation and standard deviation ($N=1$), for $E\{t|x=2.5\}$ as in \protect\ref{tab1}; now using Poisson regression in \protect\tref {thm3}, with $\bX$ based on a natural spline with 5 degrees of freedom. Increasing choice of $r$, \protect\eqref{413}--\protect\eqref{417}, decreases bias but increases variability of $\hate$ for parameter (3); $g$ error from \protect\eqref{420}}\label{tab2} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccccd{6.1}d{5.1}@{}} \hline &&\multicolumn{3}{c}{\textbf{Parameter (1)}}& \multicolumn{3}{c}{\textbf{Parameter (3)}}\\[-6pt] &&\multicolumn{3}{c}{\hrulefill}&\multicolumn{3}{c}{\hrulefill}\\ $\bolds{r}$&$\bolds{g}$ \textbf{error}& $\bolds{E_r}$ & \textbf {cvx} & \textbf{sdx}&$\bolds{E_r}$&\multicolumn{1}{c}{\textbf{cvx}} & \multicolumn{1}{c@{}}{\textbf{sdx}}\\ \hline \phantom{0}3& 0.464& 1.75& 1.00& 1.75& 0.021& 3.6& 0.1\\ \phantom{0}6& 0.254& 2.00& 1.34& 2.68& 0.027& 4.6& 0.1\\ \phantom{0}9& 0.110& 2.00& 1.36& 2.73& 0.031& 8.2& 0.3\\ 12& 0.067& 2.00& 1.41& 2.83& 0.032& 38.6& 1.2\\ 15& 0.024& 2.00& 1.39& 2.78& 0.033& 494.0& 16.1\\ 18& 0.012& 2.00& 1.39& 2.78& 0.033& 23\mbox{,}820.8& 783.8\\ 21& 0.006& 2.00& 1.40& 2.80& 0.033&960\mbox{,}036.4&31\mbox{,}688.8\\ \hline \end{tabular*} \end{table} \begin{figure*} \includegraphics{455f03.eps} \caption{Approximation $g_r$ \protect\eqref{418} with $r=6,9,12$ for $g$ in Figure~\protect\ref {fig1}; heavy blue curve is $g$.} \label{fig3} \end{figure*} Reducing $r$ reduces $\bW_r$, hence reducing \eqref{49} and the approximate coefficient of variation of $\hate_r$. The reduction can be dramatic. $W_9$ almost disappears compared to $W_{12}$ in Figure~\ref{fig2}. 
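The truncation \eqref{413}--\eqref{417} is also simple to reproduce numerically. The Python sketch below is only an illustration (it is not the code behind Table~\ref{tab2}); \texttt{P} is a hypothetical $n\times m$ sampling matrix with $n\geq m$, \texttt{u} and \texttt{v} are the vectors of \eqref{33}, and \texttt{fhat} is an estimate of $\bmf$.
\begin{verbatim}
import numpy as np

def truncated_estimate(P, u, v, fhat, r):
    # keep the first r terms of the singular value decomposition of P
    L, d, Rt = np.linalg.svd(P, full_matrices=False)    # P = L diag(d) R', d decreasing
    A_r = Rt[:r].T @ np.diag(1.0 / d[:r]) @ L[:, :r].T  # A_r = R_r D_r^{-1} L_r', (4.13)
    U_r, V_r = A_r.T @ u, A_r.T @ v                     # U_r' = u'A_r, V_r' = v'A_r, (4.15)
    return (U_r @ fhat) / (V_r @ fhat)                  # the estimate of E_r, (4.16)
\end{verbatim}
The cutoff rule \eqref{414} then amounts to choosing the smallest \texttt{r} with \texttt{(d[r:]**2).sum()/(d**2).sum()} below the tolerance.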
Table~\ref{tab2} compares various choices of $r$ for parameters (1) and (3) \eqref{326}. The choice turns out to be unimportant for parameter (1) and crucial for parameter (3). Why not always choose a small value of $r$? The trouble lies in possible bias for the estimation of $E=E\{t|x\}$. Rather than the crucial inverse mapping $\bg=A\bmf$ \eqref{32}, we get an approximation \begin{eqnarray} \label{418} \bg_r&=&A_r \bmf=A_rP\bg \nonumber \\[-8pt] \\[-8pt] \nonumber &=&R_rD_r^{-1}L_r'LDR' \bg=R_rR_r'\bg \end{eqnarray} [the last step following from $LDR'=L_rD_rR_r'+L_{(r)}D_{(r)}R_{(r)}'$, with $L_{(r)}$ indicating the last $m-r$ columns of $L$, etc.; Equation~\eqref {418} says that $\bg_r$ is the projection of $\bg$ into the linear space spanned by the first $r$ columns of $R$]. Then, looking at \eqref {415}--\eqref{416}, \begin{equation} E_r=\frac{\bU_r'\bmf}{\bV_r'\bmf}=\frac{\mathbf{u}'\bg_r}{\mathbf {v}'\bg_r}, \label{419} \end{equation} possibly making $\hate_r$ badly biased for estimating $E=\mathbf{u}'\bg /\mathbf{v}'\bg$. The $E_r$ columns of Table~\ref{tab2} show that bias is a problem only for quite small values of $r$. However, the example of Figure~\ref{fig1} is ``easy'' in the sense that the true prior $\bg$ is smooth, which allows $\bg_r$ to rapidly approach $\bg$ as $r$ increases, as pictured in Figure~\ref {fig3}. The $g_{\mathrm{error}}$ column of Table~\ref{tab2} shows this numerically in terms of the absolute error \begin{equation} g_{\mathrm{error}}=\sum_{i=1}^m|g_{ri}-g_i|. \label{420} \end{equation} A more difficult case is illustrated in Figure~\ref{fig4}. Here $\bg$ is a mixture: 90\% of a delta function at $\theta=0$ and 10\% of a uniform distribution over the 31 points $\theta_j$ in $\bthe=(-3,-2.8,\ldots ,3)$; $P$ and $\mathbf{x}$ are as before. Now $g_{\mathrm{error}}$ exceeds 1.75 even for $r=21$; $\bg_r$ puts too small a weight on $\theta=0$, while bouncing around erratically for $\theta\neq0$, often going negative. \begin{figure*} \includegraphics{455f04.eps} \caption{True $g=0.90\cdot\delta(0)+0.10$ uniform (heavy curve); approximation $g_r$ \protect\eqref{418} for $r=6,9,12,15,18,21$, as labeled.} \label{fig4} \end{figure*} We expect, correctly, that empirical Bayes estimation of $E\{t\pthe|x\} $ will usually be difficult for the situation of Figure~\ref{fig4}. This is worrisome since its $\bg$ is a reasonable model for familiar false discovery rate analyses, but see Section~\ref{sec6}. Section~\ref{sec5} discusses a different regularization approach that ameliorates, without curing, the difficulties seen here. \section{Modeling the Prior Distribution \lowercase{$\mathbf{g}$}}\label{sec5} The regularization methods of Section~\ref{sec4} involved modeling $\bmf$, the marginal distribution \eqref{23} on the $x$-space, for example, by Poisson regression in Table~\ref{tab2}. Here we discuss an alternative strategy: modeling $\bg$, the prior distribution \eqref{22} on the $\theta$-space. This has both advantages and disadvantages, as will be discussed. We begin with an $m\times q$ model matrix $Q$, $j$th row $Q_j$, which determines $\bg$ according to \begin{equation}\qquad \bg(\alpha)=e^{Q\alpha-\bone_m\phi(\alpha)} \quad\Biggl[\phi(\alpha )=\log\sum _1^me^{Q_j\alpha} \Biggr]. \label{51} \end{equation} [For $\mathbf{v}=(v_1,v_2,\ldots,v_m), e^{\mathbf{v}}$ denotes a vector with components $e^{v_j}$; $\bone_m$ is a vector of $m$ 1's, indicating in \eqref{51} that $\phi(\alpha)$ is subtracted from each component of $Q\alpha$.]
Here $\alpha$ is the unknown $q$-dimensional natural parameter of exponential family \eqref{51}, which determines the prior distribution $\bg=\bg(\alpha)$. In an empirical Bayes framework, $\bg$ gives $\bmf=P\bg$ \eqref{26}, and the statistician then observes a multinomial sample $\by$ of size $N$ from $\bmf$ as in \eqref{38}, \begin{equation} \by\sim\mathrm{Mult}_n \bigl(N,P\bg(\alpha) \bigr), \label{52} \end{equation} from which inferences about $\bg$ are to be drawn. Model \eqref{51}--\eqref{52} is not an exponential family in $\by$, a~theoretical disadvantage compared to the Poisson modeling of \tref {thm3}. [It is a \textit{curved exponential family}, \citet{1975}.] We can still pursue an asymptotic analysis of its frequentist accuracy. Let \begin{equation} D(\bg)\equiv\diag(\bg)-\bg\bg', \label{53} \end{equation} the covariance matrix of a single random draw $\Theta$ from distribution $\bg$, and define \begin{equation} Q_\alpha=D \bigl(\bg(\alpha) \bigr)Q. \label{54} \end{equation} \begin{lem} The Fisher information matrix for estimating $\alpha$ in model \eqref {51}--\eqref{52} is \begin{equation} \mathcal{I}=NQ_\alpha'P'\diag\bigl(1/\bmf( \alpha) \bigr)PQ_\alpha, \label{55} \end{equation} where $P$ is the sampling density matrix \eqref{25}, and $\bmf(\alpha )=P\bg(\alpha)$. \label{lem1} \end{lem} \begin{pf} Differentiating $\log\bg$ in \eqref{51} gives the $m\times q$ derivative matrix $d\log g_i/d\alpha_k$, \begin{equation} \frac{d\log\bg}{d\alpha}= \bigl[I-\bone_m\bg(\alpha)' \bigr]Q, \label{56} \end{equation} so \begin{eqnarray} \label{57} \frac{d\bg}{d\alpha}&=&\diag\bigl(\bg(\alpha) \bigr) \frac{d\log \bg }{d\alpha} \nonumber \\[-8pt] \\[-8pt] \nonumber &=&D \bigl(\bg(\alpha) \bigr)Q=Q_\alpha. \end{eqnarray} This yields $d\bmf/d\alpha=PQ_\alpha$ and \begin{equation} \frac{d\log\bmf}{d\alpha}=\diag\biggl(\frac{1}{\bmf(\alpha )} \biggr)PQ_\alpha. \label{58} \end{equation} \begin{figure*} \includegraphics{455f05.eps} \caption{\emph{Top:} Standard deviation of $E\{t|x\}$ as a function of $x$, for parameter (1) $t\pthe=\theta$ (with $N=1$); $f$-modeling (solid), $g$-modeling (dashed). \emph{Bottom:} Now for parameter (3), $t\pthe=1$ or 0 as $\theta\leq0$ or $>0$; using natural spline models, $df=6$, for both calculations.} \label{fig5} \end{figure*} The log likelihood from multinomial sample \eqref{52} is \begin{equation} l_\alpha(\by)=\by'\log\bmf(\alpha)+\mathrm{ constant}, \label{59} \end{equation} giving score vector \begin{equation} \frac{dl_\alpha(\by)}{d\alpha}=\by'\frac{d\log\bmf}{d\alpha}. \label{510} \end{equation} Since $\by$ has covariance matrix $N(\diag\bmf-\bmf\bmf')$ \eqref{39}, $\mathcal{I}$, the covariance matrix of the score vector, equals \begin{eqnarray} \label{511} \mathcal{I}&=&NQ_\alpha'P' \diag(1/\bmf) \bigl(\diag\bmf-\bmf\bmf'\bigr)\nonumber\\ &&{}\cdot \diag(1/\bmf )PQ_\alpha \\ &=&NQ_\alpha'P' \bigl(\diag(1/\bmf)- \bone_n\bone_n' \bigr)PQ_\alpha.\nonumber \end{eqnarray} Finally, \begin{equation} \bone_n'PQ_\alpha=\bone_m'D \bigl(g(\alpha)\bigr)Q=\bzer'Q=0 \label{512} \end{equation} (using the fact that the columns of $P$ sum to 1), and \eqref{511} yields the lemma. \end{pf} Standard sampling theory says that the maximum likelihood estimate (MLE) $\halp$ has approximate covariance matrix $\mathcal{I}^{-1}$ and that $\hbg=\bg(\halp)$ has approximate covariance, from \eqref{57}, \begin{equation} \cov(\hbg)=Q_\alpha\mathcal{I}^{-1}Q_\alpha'. 
\label{513} \end{equation} \begin{lem}\label{lem2} The approximate covariance matrix for the maximum likelihood estimate $\bg(\halp)$ of $\bg$ in model \eqref{51}--\eqref{52} is \begin{eqnarray}\label{514}\qquad &&\cov(\hbg) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\quad=\frac{1}{N}Q_\alpha\bigl[Q_\alpha'P' \diag\bigl(1/\bmf(\alpha) \bigr)PQ_\alpha\bigr]^{-1}Q_\alpha'. \end{eqnarray} \end{lem} If we are interested in a real-valued parameter $\tau=T(\bg)$, the approximate standard deviation of its MLE $\hat\tau=T(g(\halp))$ is \begin{equation} \sd(\hat\tau)= \bigl[\dot{T}'\cov(\hbg)\dot{T} \bigr]^{1/2}, \label{515} \end{equation} where $\dot{T}$ is the gradient vector $dT/d\bg$, evaluated at $\hbg$. When $T(\bg)$ is the conditional expectation of a parameter $t\pthe$ \eqref{35}, \begin{equation} T(\bg)=E \{t\pthe|x=x_i \}=\mathbf{u}'\bg/ \mathbf{v}'\bg, \label{516} \end{equation} we compute \begin{equation} \dot{T}(\bg)=\mathbf{w}=(\mathbf{u}/u_g)-(\mathbf{v}/v_g) \label{517} \end{equation} \eqref{322}, and get the following. \begin{thm}\label{th4} Under models \eqref{51}--\eqref{52}, the MLE $\hate$ of $E\{t\pthe |x=x_i\}$ has approximate standard deviation \begin{equation} \sd(\hate)=|E| \bigl[\mathbf{w}'\cov(\hbg)\mathbf{w} \bigr]^{1/2}, \label{518} \end{equation} with $\mathbf{w}$ as in \eqref{517} and $\cov(\hbg)$ from \eqref{514}. \label{thm4} \end{thm} We can now compare $\sd(\hate)$ from $\bgg$-modeling \eqref{518}, with the corresponding $\bmff$-modeling results of \tref{thm3}. Figure~\ref{fig5} does this with parameters (1) and (3) \eqref{326} for the example of Figure~\ref{fig1}. \tref{thm3}, modified as at \eqref{417} with $r=12$, represents $\bmff$-modeling, now with $X$ based on $ns(\mathbf{x},6)$, a natural spline with six degrees of freedom. Similarly for $\bgg $-modeling, $Q=ns(\bthe,6)$ in \eqref{51}; $\alpha$ was chosen to make $\bg(\alpha)$ very close to the upper curve in Figure~\ref{fig1}. (Doing so required six rather than five degrees of freedom.) The upper panel of Figure~\ref{fig5} shows $\bmff$-modeling yielding somewhat smaller standard deviations for parameter (1), $t\pthe=\theta$. This is an especially favorable case for $\bmff$-modeling, as discussed in Section~\ref {sec6}. However, for parameter (3), $E=\Pr\{t\leq0|x\}$, $\bgg$-modeling is far superior. \textit{Note:} in exponential families, curved or not, it can be argued that the effective degrees of freedom of a model equals its number of free parameters; see Remark D of \citet{2004}. The models used in Figure~\ref{fig5} each have six parameters, so in this sense the comparison is fair. Parametric $g$-space modeling, as in \eqref{51}, has several advantages over the $f$-space modeling of Section~\ref{sec4}: \textit{Constraints}. $\hbg=\exp(Q\halp-\bone_m\phi(\halp))$ has all coordinates positive, unlike the estimates seen in Figure~\ref{fig4}. Other constraints such as monotonicity or convexity that may be imposed on $\hbf=P\hbg$ by the structure of $P$ are automatically enforced, as discussed in Chapter~3 of \citet{newcarlin}. \textit{Accuracy}. With some important exceptions, discussed in Section~\ref {sec6}, $g$-modeling often yields smaller values of $\sd(\hate)$, as typified in the bottom panel of Figure~\ref{fig5}. This is particularly true for discontinuous parameters $t\pthe$, such as parameter (3) in Table~\ref{tab1}. \textit{Simplicity}. 
The bias/variance trade-offs involved with the choice of $r$ in Section~\ref{sec4} are avoided and, in fact, there is no need for ``Bayes rule in terms of $\bmf$.'' \begin{table*} \caption{Estimating $E=\Pr\{\theta=0|x\}$ in the situation of Figure~\protect\ref {fig4}; using $g$-modeling \protect\eqref{51} with $Q$ equal $ns(x,5)$ augmented with a column putting a delta function at $\theta=0$. Sd is $\sd(\hate)$ \protect\eqref{525}, cv is the coefficient of variation $\sd/E$. (For sample size $N$, divide entries by $N^{1/2}$.)}\label{tab3} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill }}ld{2.2}d{2.2}d{2.2}d{2.2}d{2.2}d{2.2}d{2.2}d{2.2}d{2.2}@{}} \hline $\bolds{x}$& \multicolumn{1}{c}{$\bolds{-4}$}& \multicolumn{1}{c}{$\bolds{-3}$}& \multicolumn{1}{c}{$\bolds{-2}$} & \multicolumn{1}{c}{$\bolds{-1}$}& \multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{1}}& \multicolumn{1}{c}{\textbf{2}}& \multicolumn{1}{c}{\textbf{3}}& \multicolumn{1}{c@{}}{\textbf{4}}\\% \hline $E$& 0.04& 0.32& 0.78& 0.94& 0.96& 0.94& 0.78& 0.32& 0.04\\[3pt] $N^{1/2}\cdot$ sd& 0.95& 3.28& 9.77& 10.64& 9.70& 10.48& 9.92& 3.36& 0.75\\ $N^{1/2}\cdot$ cv& 24.23& 10.39& 12.53& 11.38& 10.09& 11.20& 12.72& 10.65& 19.21\\ \hline \end{tabular*} \end{table*} \textit{Continuous formulation}. It is straightforward to translate $g$-modeling from the discrete framework \eqref{21}--\eqref{24} into more familiar continuous language. Exponential family model \eqref{51} now becomes \begin{eqnarray}\label{519} g_\alpha\pthe=e^{\bq\pthe\alpha-\phi(\alpha)} \nonumber \\[-8pt] \\[-8pt] \eqntext{\displaystyle \biggl[\phi (\alpha)=\log\int e^{\bq\pthe\alpha} \,d\theta\biggr], } \end{eqnarray} where $\bq\pthe$ is a smoothly defined $1\times q$ vector function of $\theta$. Letting $f_\theta(x)$ denote the sampling density of $x$ given $\theta$, define \begin{eqnarray}\label{520} h(x)=\int f_\theta(x)g\pthe(\bq\pthe-\bar{\bq} ) \,d\theta \nonumber \\[-8pt] \\[-8pt] \eqntext{\displaystyle\biggl [\bar{ \bq}=\int g\pthe\bq\pthe \,d\theta\biggr].} \end{eqnarray} Then the $q\times q$ information matrix $\mathcal{I}$ \eqref{55} is \begin{eqnarray}\label{521} \mathcal{I}=N\int\biggl[\frac{h(x)'h(x)}{f(x)^2} \biggr]f(x) \,dx \nonumber \\[-8pt] \\[-8pt] \eqntext{\displaystyle\biggl[f(x)=\int g \pthe f_\theta(x) \,dx \biggr].} \end{eqnarray} A posterior expectation $E=E\{t\pthe|x\}$ has MLE \begin{equation}\qquad\hspace*{2pt} \hate=\int t\pthe f_\theta(x)g_{\halp}\pthe \,d\theta\Big/\int f_\theta(x)g_{\halp}\pthe \,d\theta. \label{522}\hspace*{-6pt} \end{equation} An influence function argument shows that $E$ has gradient \begin{equation} \frac{dE}{d\alpha}=E\int z\pthe g_\alpha\pthe(\bq\pthe-\bar{\bq } ) \,d \theta, \label{523} \end{equation} with \begin{eqnarray}\label{524} z\pthe&=&\frac{t\pthe f_\theta(x)g_\alpha\pthe}{\int t(\varphi )f_\varphi (x)g_\alpha(\varphi) \,d\varphi} \nonumber \\[-8pt] \\[-8pt] \nonumber &&{}-\frac{f_\theta(x)g_\alpha\pthe }{\int f_\varphi(x)g_\alpha(\varphi) \,d\varphi}. \end{eqnarray} Then the approximate standard deviation of $\hate$ is \begin{equation} \sd(\hate)= \biggl(\frac{dE}{d\alpha}\mathcal{I}^{-1} \frac {dE}{d\alpha }' \biggr)^{1/2}, \label{525} \end{equation} combining \eqref{521}--\eqref{524}. [Of course, the integrals required in \eqref{525} would usually be done numerically, implicitly returning us to discrete calculations!] \textit{Modeling the prior}. Modeling on the $g$-scale is convenient for situations where the statistician has qualitative knowledge concerning the shape of the prior~$\bg$. 
As a familiar example, large-scale testing problems often have a big atom of prior probability at $\theta=0$, corresponding to the null cases. We can accommodate this by including in model matrix $Q$ \eqref{51} a column $\be_0=(0,0,\ldots,0,1,0,\ldots,0)'$, with the 1 at $\theta=0$. Such an analysis was carried out for the situation in Figure~\ref{fig4}, where the true $\bg$ equaled $0.9\be_0+0.1\cdot\operatorname{uniform}$. $Q$ was taken to be the natural spline basis $ns(\bthe,5)$ augmented by column $\be_0$, a $31\times6$ matrix. Table~\ref{tab3} shows the results for $\mathbf{t}=\be_0$, that is, for \begin{equation} E=E\{t|x\}=\Pr\{\theta=0|x\}. \label{526} \end{equation} The table gives $E$ and $\sd(\hate)$ \eqref{518} for $x=-4,-3,\break \ldots ,4\ (N=1)$, as well as the coefficient of variation $\sd(\hate)/E$. The results are not particularly encouraging: we would need sample sizes $N$ on the order of 10,000 to expect reasonably accurate estimates $\hate$ \eqref{327}. On the other hand, $f$-modeling as in Section~\ref{sec4} is hopeless here. Section~\ref{sec6} has more to say about false discovery rate estimates \eqref{526}. \begin{figure*}[b] \includegraphics{455f06.eps} \caption{MLE nonnull distribution, estimated from a sample of $N=5000\ X$ values from $f$ corresponding to true $g$ in Figure~\protect\ref {fig4}; estimated atom at $\theta=0$ was 0.92.} \label{fig6} \end{figure*} A random sample of $N=5000\ X$ values was drawn from the distribution $\bmf=P\bg$ corresponding to the true $\bg$ in Figure~\ref{fig4} [with $P$ based on the normal density $\varphi(x_i-\theta_j)$ as before], giving count vector $\by$ \eqref{37}. Numerical maximization yielded $\halp$, the MLE in model \eqref{51}--\eqref{52}, $Q$ as in Table~\ref{tab3}. The estimate $\hbg=\bg(\halp)$ put probability 0.920 at $\theta=0$, compared to true value 0.903, with nonnull distribution as shown in Figure~\ref{fig6}. The nonnull peaks at $\theta=\pm2$ were artifacts of the estimation procedure. On the other hand, $\hbg$ correctly put roughly equal nonnull probability above and below 0. This degree of useful but crude inference\vadjust{\goodbreak} should be kept in mind for the genuine data examples of Section~\ref{sec6}, where the truth is unknown. Our list of $g$-modeling advantages raises the question of why $f$-modeling has dominated empirical Bayes applications. The answer---that a certain class of important problems is more naturally considered in the $f$ domain---is discussed in the next section. Theoretically, as opposed to practically, $g$-modeling has played a central role in the empirical Bayes literature. Much of that work involves the nonparametric maximum likelihood estimation of the prior distribution $g\pthe$, some notable references being \citet{laird}, \citet{zhang} and \citet{jiang}. Parametric $g$-modeling, as discussed in \citet{morris} and \citet{casella}, has been less well developed. A large part of the effort has focused on the ``normal-normal''\vadjust{\goodbreak} situation, normal priors with normal sampling errors, as in \citet{1975morris}, and other conjugate situations. Chapter~3 of \citet{newcarlin} gives a nice discussion of parametric empirical Bayes methods, including binomial and Poisson examples. 
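Purely as an illustration of \eqref{51}, \eqref{52} and \eqref{59} (the fits reported above used the R maximizer \texttt{nlm}), a minimal numerical sketch of the $g$-model MLE might look as follows in Python; the matrices \texttt{Q} and \texttt{P}, the count vector \texttt{y} and the starting value \texttt{alpha0} are assumed given.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def g_model_mle(Q, P, y, alpha0):
    # Q: m x q model matrix of (5.1); P: n x m sampling matrix; y: bin counts of (5.2)
    def neg_loglik(alpha):
        eta = Q @ alpha
        g = np.exp(eta - np.log(np.exp(eta).sum()))   # g(alpha) = exp(Q alpha - phi(alpha))
        f = P @ g                                     # f(alpha) = P g(alpha)
        return -(y * np.log(f)).sum()                 # minus the log likelihood (5.9)
    alpha_hat = minimize(neg_loglik, alpha0, method="Nelder-Mead").x
    eta = Q @ alpha_hat
    return alpha_hat, np.exp(eta - np.log(np.exp(eta).sum()))
\end{verbatim}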
\section{Classic Empirical Bayes Applications}\label{sec6} Since its post-war emergence (\cite{robbins}, \cite{good}, \cite{james}), empirical Bayes methodology has focused on a small set of specially structured situations: ones where certain Bayesian inferences can be computed simply and directly from the marginal distribution of the observations on the $x$-space. There is no need for $g$-modeling in this framework or, for that matter, any calculation of $\hbg$ at all. False discovery rates and the James--Stein estimator fall into this category, along with related methods discussed in what follows. Though $g$-modeling is unnecessary here, it will still be interesting to see how it performs on the classic problems. Robbins' Poisson estimation example exemplifies the classic empirical Bayes approach: independent but not identically distributed Poisson variates \begin{equation} X_k\ind\operatorname{Poi}(\Theta_k),\quad k=1,2,\ldots,N, \label{61} \end{equation} are observed, with the $\Theta_k$'s notionally drawn from some prior $g\pthe$. Applying Bayes rule with the Poisson kernel $e^{-\theta }\theta ^x/x!$ shows that \begin{equation} E\{\theta|x\}=(x+1)f_{x+1}/f_x, \label{62} \end{equation} where $\bmf=(f_1,f_2,\ldots)$ is the marginal distribution of the $X$'s. [This is an example of \eqref{35}, Bayes rule in terms of $\bmf$; defining $\be_i=(0,0,\ldots,1,0,\ldots,0)'$ with 1 in the $i$th place, $\bU=(x+1)\be_{x+1}$, and $\bV=\be_x$.] Letting $\hbf=(\hatf _1,\hatf _2,\ldots)$ be the nonparametric MLE \eqref{310}, Robbins' estimate is the ``plug-in'' choice \begin{equation} \hate\{\theta|x\}=(x+1)\hatf_{x+1}/\hatf_x \label{63} \end{equation} as in \eqref{311}. \citet{brown} use various forms of semiparametric $f$-modeling to improve on \eqref{63}. \begin{figure*} \includegraphics{455f07.eps} \caption{\textit{Prostate data}. Left panel shows estimates of $E\{ \theta|x\}$ from Tweedie's formula (solid curve), $f$-modeling (circles) and $g$-modeling (dots). Right panel compares standard deviations of $\hate\{\theta|x\}$, for Tweedie estimates (dots), $f$-modeling (dashed curve) and $g$-modeling (solid curve); reversals at far right are computational artifacts.} \label{fig7} \end{figure*} The prehistory of empirical Bayes applications notably includes the \textit{missing species problem}; see Section~11.5 of \citet{2010}. This has the Poisson form \eqref{61}, but with an inference different than \eqref{62} as its goal. \citet{fisher} employed parameterized $f$-modeling as in Section~\ref{sec4}, with $f$ the negative binomial family. Section~3.2.1 of \citet{newcarlin} follows the same route for improving Robbins' estimator \eqref{63}.\vadjust{\goodbreak} \textit{Tweedie's formula} (\citep{2011}) extends Robbins-type estimation of $E\{\theta|x\}$ to general exponential families. For the normal case \begin{equation} \theta\sim g\pdot\quad\mbox{and}\quad x|\theta\sim\caln(\theta,1), \label{64} \end{equation} Tweedie's formula is \begin{eqnarray}\label{65} E\{\theta|x\}=x+l'(x) \nonumber \\[-8pt] \\[-8pt] \eqntext{\displaystyle\mbox{where }l'(x)= \frac{d}{dx}\log f(x), } \end{eqnarray} with $f(x)$ the marginal distribution of $X$. As in \eqref{62}, the marginal distribution of $X$ determines $E\{\theta|x\}$, without any specific reference to the prior $g\pthe$. 
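As a concrete illustration (ours, not part of the original text), Robbins' rule \eqref{62}--\eqref{63} takes only a few lines of Python; the integer array \texttt{X} holds the observed counts $X_1,\ldots,X_N$, and the function returns $\hate\{\theta|x\}$ for $x=0,1,\ldots,\max_k X_k$.
\begin{verbatim}
import numpy as np

def robbins(X):
    X = np.asarray(X)
    N = len(X)
    fhat = np.bincount(X, minlength=X.max() + 2) / N    # nonparametric MLE of f, (3.10)
    x = np.arange(X.max() + 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return (x + 1) * fhat[x + 1] / fhat[x]          # (x+1) fhat_{x+1} / fhat_x, (6.3)
\end{verbatim}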
Given observations $X_k$ from model \eqref{64}, \begin{equation} X_k\sim\caln(\Theta_k,1)\quad\mbox{for }k=1,2,\ldots,N, \label{66} \end{equation} the empirical Bayes estimation of $E\{\theta|x\}$ is conceptually straightforward: a smooth estimate $\hatf(x)$ is obtained from the $X_k$'s, and its logarithm $\hat{l}(x)$ differentiated to give \begin{equation} \hate\{\theta|x\}=x+\hat{l}'(x), \label{67} \end{equation} again without explicit reference to the unknown $g\pthe$. Modeling here is naturally done on the $x$-scale. [It is not necessary for the $X_k$'s to be independent in \eqref{66}, or \eqref{61}, although dependence decreases the accuracy of $\hate$; see Theorem 8.4 of \citet{2010}.] Figure~\ref{fig7} concerns an application of Tweedie's formula to the \textit {prostate data}, the output of a microarray experiment comparing 52 prostate cancer patients with 50 healthy controls (\cite{2010}, Section~2.1). The genetic activity of $N=6033$ genes was measured for each man. Two-sample tests comparing patients with controls yielded $z$-values for each gene, $X_1,X_2,\ldots,X_N$, theoretically satisfying \begin{equation} X_k\sim\caln(0,1) \label{68} \end{equation} under the null hypothesis that gene $k$ is equally active in both groups. Of course, the experimenters were searching for activity \textit {differences}, which would manifest themselves as unusually large values $|X_k|$. Figure~2.1 of \citet{2010} shows the histogram of the $X_k$ values, looking somewhat like a long-tailed version of a $\caln (0,1)$ density. The ``smooth estimate'' $\hatf(x)$ needed for Tweedie's formula \eqref {67} was calculated by Poisson regression, as in \eqref{43}--\eqref {47}. The 6033 $X_k$ values were put into 193 equally spaced bins, centered at $x_1,x_2,\ldots,x_{193}$, chosen as in \eqref{28} with $y_i$ being the number in bin $i$. A~Poisson generalized linear model \eqref {43} then\vadjust{\goodbreak} gave MLE $\hbf=(\hatf_1,\hatf_2,\ldots,\hatf_{193})$. Here the structure matrix $\bX$ was the natural spline basis $ns(\mathbf{x},df=5)$ augmented with a column of 1's. Finally, the smooth curve $\hatf(x)$ was numerically differentiated to give $\hat{l}'(x)=\hatf'(x)/\hatf(x)$ and $\hate=x+\hat{l}'(x)$. Tweedie's estimate $\hate\{\theta|x\}$ \eqref{67} appears as the solid curve in the left panel of Figure~\ref{fig7}. It is nearly zero between $-2$ and 2, indicating that a large majority of genes obey the null hypothesis \eqref{68} and should be estimated to have $\theta=0$. Gene 610 had the largest observed $z$-value, $X_{610}=5.29$, and corresponding Tweedie estimate 4.09. For comparison, $\hate\{\theta|x\}$ was recalculated both by $f$-modeling as in Section~\ref{sec4} and $g$-modeling as in Section~\ref{sec5} [with discrete sampling distributions \eqref{24}--\eqref{26} obtained from $X_k\sim\caln(\Theta_k,1)$, $\Theta_k$ being the ``true effect size'' for gene $k$]; $f$-modeling used $\bX$ and $\hbf$ as just described, giving $\hate_f=U_r'\hbf/V_r'\hbf$, $U_r$ and $V_r$ as in \eqref{419}, $r=12$; $g$-modeling took $\bthe=(-3,-2.8,\ldots,3)$ and $Q=(ns(\bthe ,5),\bone)$, yielding $\hbg=\bg(\halp)$ as the MLE from \eqref {51}--\eqref{52}. [The R nonlinear maximizer \texttt{nlm} was used to find $\halp$; some care was needed in choosing the control parameters of \texttt{nlm}. We are paying for the fact that the $g$-modeling likelihood \eqref{52} is not an exponential family.] Then the estimated posterior expectation $\hate_g$ was calculated applying Bayes rule with prior $\hbg$.
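The differentiation step just described can be sketched as follows (an illustration only, not the code used for Figure~\ref{fig7}); \texttt{x} holds the 193 bin centers and \texttt{fhat} the fitted Poisson-regression values $\hatf(x_i)$.
\begin{verbatim}
import numpy as np

def tweedie(x, fhat):
    lhat = np.log(fhat)                 # l(x) = log f(x)
    return x + np.gradient(lhat, x)     # E{theta | x} approx x + l'(x), formula (6.7)
\end{verbatim}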
Both $\hate_f$ and $\hate_g$ closely approximated the Tweedie estimate. Standard deviation estimates for $\hate_f$ [dashed curve, from \tref {thm3} with $\hbf$ replacing $\bmf$ in \eqref{49}] and $\hate_g$ (solid curve, from \tref{thm4}) appear in the right panel of Figure~\ref{fig7}; $f$-modeling gives noticeably lower standard deviations for $E\{\theta |x\}$ when $|x|$ is large. The large dots in the right panel of Figure~\ref{fig7} are bootstrap standard deviations for the Tweedie estimates $\hate\{\theta|x\}$, obtained from $B=200$ nonparametric bootstrap replications, resampling the $N=6033$ $X_k$ values. These closely follow the $f$-modeling standard deviations. In fact, $\hate_f^*$, the bootstrap replications of $\hate _f$, closely matched $\hate^*$ for the corresponding Tweedie estimates on a case-by-case comparison of the 200 simulations. That is, $\hate_f$ is numerically just about the same as the Tweedie estimate, though it is difficult to see analytically why this is the case, comparing formulas \eqref{416} and \eqref{67}. Notice that the bootstrap results for $\hate_f$ verify the accuracy of the delta-method calculations going into \tref{thm3}. \begin{table*} \caption{Local false discovery rate estimates for the prostate data; $\hufdr$ and its standard deviation estimates sdf obtained from $f$-modeling; $\hfdr$ and sdg from $g$-modeling; sdf is substantially smaller than sdg}\label{tab4} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccccccc@{}} \hline $\bolds{x}$& $\bolds{-4}$& $\bolds{-3}$& $\bolds{-2}$& $\bolds{-1}$& \textbf{0}& \textbf{1}& \textbf{2}& \textbf{3}& \textbf{4}\\ \hline $\hufdr$&0.060& 0.370& 0.840&1.030&1.070&1.030& 0.860& 0.380& 0.050\\[3pt] sdf& 0.014& 0.030& 0.034& 0.017& 0.013& 0.021& 0.033& 0.030& 0.009\\ sdg& 0.023& 0.065& 0.179& 0.208& 0.200& 0.206& 0.182& 0.068& 0.013\\[3pt] $\hfdr$& 0.050& 0.320& 0.720& 0.880& 0.910& 0.870& 0.730& 0.320& 0.040\\ \hline \end{tabular*} \end{table*} Among empirical Bayes techniques, the James--Stein estimator is certainly best known. Its form, \begin{eqnarray}\label{69} \hthe_k=\bar{X}+ \bigl[1-(N-3)/S \bigr] (X_k-\bar{X} ) \nonumber \\[-8pt] \\[-8pt] \eqntext{\displaystyle\Biggl [S=\sum _1^N (X_k-\bar{X} )^2 \Biggr],} \end{eqnarray} again has the ``classic'' property of being estimated directly from the marginal distribution on the $x$-scale, without reference to $g\pthe$. The simplest application of Tweedie's formula, taking $\bX$ in our previous discussion to have rows $(1,x_i,x_i^2)$, leads to formula \eqref{69}; see Section~3 of \citet{2011}. Perhaps the second most familiar empirical Bayes application relates to \citeauthor{benjamini}'s (\citeyear{benjamini}) theory of false discovery rates. Here we will focus on the \textit{local false discovery rate} (fdr), which best illustrates the Bayesian connection. We assume that the marginal density of each observation $X_k$ has the form \begin{equation} f(x)=\pi_0\varphi(x)+(1-\pi_0)f_1(x), \label{610} \end{equation} where $\pi_0$ is the prior probability that $X_k$ is null, $\varphi(x)$ is the standard $\caln(0,1)$ density $\exp(-\frac{1}2 x^2)/\break \sqrt {2\pi}$, and $f_1(x)$ is an unspecified nonnull density, presumably yielding values farther away from zero than does the null density $\varphi$.
Having observed $X_k$ equal to some value $x$, $\fdr(x)$ is the probability that $X_k$ represents a null case \eqref{68}, \begin{equation} \fdr(x)=\Pr\{\mathrm{null}|x\}=\pi_0\varphi(x)/f(x), \label{611} \end{equation} the last equality being a statement of Bayes rule. Typically $\pi_0$, the prior null probability, is assumed to be near 1, reflecting the usual goal of large-scale testing: to reduce a vast collection of possible cases to a much smaller set of particularly interesting ones. In this case, the \textit{upper false discovery rate}, \begin{equation} \ufdr(x)=\varphi(x)/f(x), \label{612} \end{equation} setting $\pi_0=1$ in \eqref{611}, is a satisfactory substitute for $\fdr (x)$, requiring only the estimation of the marginal density $f(x)$. Returning to the discrete setting \eqref{29}, suppose we take the parameter of interest $t\pthe$ to be \begin{equation} \bt=(0,0,\ldots,0,1,0,\ldots,0)', \label{613} \end{equation} with ``1'' at the index $j_0$ having $\theta_{j_0}=0$ [$j_0=16$ in~\eqref{27}]. Then $E\{t\pthe|x_i\}$ equals $\fdr(x_i)$, and we can assess the accuracy of a $g$-model estimate $\hfdr(x_i)$ using \eqref {518}, the corollary to \tref{thm4}. This was done for the prostate data, with the data binned as in Figure~\ref {fig7}, and $Q=(ns(\bthe,5),\bone)$ as before. \tref{thm4} was applied with $\bthe$ as in \eqref{27}. The bottom two lines of Table~\ref{tab4} show the results. Even with $N=6033$ cases, the standard deviations of $\hfdr (x)$ are considerable, having coefficients of variation in the 25\% range. $F$-model estimates of fdr fail here, the bias/variance trade-offs of Table~\ref{tab2} being unfavorable for any choice of $r$. However, $f$-modeling is a natural choice for ufdr, where the only task is estimating the marginal density $f(x)$. Doing so using Poisson regression \eqref{43}, with $\bX=(ns(\mathbf{x},5),\bone)$, gave the top two lines of Table~\ref{tab4}. Now the standard deviations are substantially reduced across the entire $x$-scale. [The standard deviation of $\hufdr $ can be obtained from \tref{thm3}, with $\bU=\varphi(x_i)\bone$ and $\bV$ the coordinate vector having 1 in the $i$th place.] The top line of Table~\ref{tab4} shows $\hufdr(x)$ exceeding~1 near $x=0$. This is the penalty for taking $\pi_0=1$ in~\eqref{612}. Various methods have been used to correct $\hufdr$, the simplest being to divide all of its values by their maximum. This amounts to taking $\hpi _0=1/$maximum, \begin{equation} \hpi_0=1/1.070=0.935 \label{614} \end{equation} in Table~\ref{tab4}. [The more elaborate $f$-modeling program \texttt {locfdr}, described in Chapter~6 of \citet{2010}, gave $\hpi_0=0.932$.] By comparison, the $g$-model MLE $\hbg$ put probability $\hpi_0=0.852$ on $\theta=0$. \begin{table} \caption{$f$-modeling permits familiar and straightforward fitting methods on the $x$ scale but then requires more complicated computations for the posterior distribution of $\theta$; the situation is reversed for $g$-modeling}\label{tab5} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcc@{}} \hline &\multicolumn{1}{c}{\textbf{Model fitting}}& \multicolumn{1}{c@{}}{\textbf{Bayesian computations}}\\ \hline $f$-modeling&{direct}&{indirect}\\ $g$-modeling&{indirect}&{direct}\\ \hline \end{tabular*} \end{table} \begin{figure*}[b] \includegraphics{455f08.eps} \caption{$g$-modeling estimates of $\Pr\{|\theta|\geq1.5|x\}$ for the prostate data. 
Dashed bars indicate $\pm$ one standard deviation, from \protect\tref{thm4}.} \label{fig8} \end{figure*} \section{Discussion}\label{sec7} The observed data $X_1,X_2,\ldots,X_N$ from the empirical Bayes structure \eqref{11}--\eqref{12} arrives on the $x$ scale but the desired Bayesian posterior distribution $g(\theta|x)$ requires computations on the $\theta$ scale. This suggests the two contrasting modeling strategies diagrammed in Table~\ref{tab5}: modeling on the $x$ scale, ``$f$-modeling,'' permits the application of direct fitting methods, usually various forms of regression, to the $X$ values, but then pays the price of more intricate and less stable Bayesian computations. We pay the price up front with ``$g$-modeling,'' where models such as \eqref{52} require difficult nonconvex maximum likelihood computations, while the subsequent Bayesian computations become straightforward. The comparative simplicity of model fitting on the $x$ scale begins with the nonparametric case: $f$-modeling needs only the usual vector of proportions $\hbf$ \eqref{310}, while $g$-modeling requires \citeauthor{laird}'s (\citeyear{laird}) difficult nonparametric MLE calculations. In general, $g$-models have a ``hidden'' quality that puts more strain on parametric assumptions; $f$-modeling has the advantage of fitting directly to the observed data. There is a small circle of empirical Bayes situations in which the desired posterior inferences can be expressed as simple functions of $f(x)$, the marginal distribution of the $X$ observations. These are the ``classic'' situations described in Section~\ref{sec6}, and account for the great bulk of empirical Bayes applications. The Bayesian computational difficulties of $f$-modeling disappear here. Not surprisingly, $f$-modeling dominates practice within this special circle. ``Bayes rule in terms of $f$,'' Section~\ref{sec2}, allows us to investigate how well $f$-modeling performs outside the circle. Often not very well seems to be the answer, as seen in the bottom panel of Figure~\ref{fig5}, for example. $G$-modeling comes into its own for more general empirical Bayes inference questions, where the advantages listed in Section~\ref{sec5} count more heavily. Suppose, for instance, we are interested in estimating $\Pr\{|\theta|\geq1.5|x\}$ for the prostate data. Figure~\ref{fig8} shows the $g$-model estimates and their standard deviations from \tref {thm4}, with $Q=ns(\bthe,6)$ as before. Accuracy is only moderate here, but, nonetheless, some useful information has been extracted from the data (while, as usual for problems involving discontinuities on the $\theta$ scale, $f$-modeling is ineffective). Improved $f$-modeling strategies may be feasible, perhaps making better use of the kinds of information in Table~\ref{tab2}. A reader has pointed out that pseudo-inverses of $P$ other than $A$ \eqref{31} are available, of the form \begin{equation} \bigl(P'BP\bigr)^{-1}P'B. \label{71} \end{equation} Here the matrix $B$ might be a guess for the inverse covariance matrix of $\hbf$, as motivated by generalized least squares estimation. So far, however, situations like that in Figure~\ref{fig8} seem inappropriate for $f$-modeling, leaving $g$-modeling as the only game in town. Theorems \ref{th3} and \ref{th4} provide accuracy assessments for $f$-modeling and $g$-modeling estimates. These can be dishearteningly broad. 
In the bottom panel of Figure~\ref{fig5}, the ``good'' choice, $g$-modeling, would still require more than $N=20\mbox{,}000$ independent observations $X_k$ to get the coefficient of variation down to $0.1$ when $x$ exceeds 2. More aggressive $g$-modeling, reducing the degrees of freedom for $Q$, improves accuracy, at the risk of increased bias. The theorems act as a reminder that, outside of the small circle of its traditional applications, empirical Bayes estimation has an ill-posed aspect that may call for draconian model choices. [The ultimate choice is to take $g\pthe$ as known, that is, to be Bayesian rather than empirical Bayesian. In our framework, this amounts to tacitly assuming an enormous amount ``$N$'' of relevant past experience.] Practical applications of empirical Bayes methodology have almost always taken $\Theta_k$ and $X_k$ in \eqref{11}--\eqref{12} to be real-valued, as in all of our examples. This is not a necessity of the theory (nor of its discrete implementation in Section~\ref{sec2}). Modeling difficulties mount up in higher dimensions, and even studies as large as the prostate investigation may not carry enough information for accurate empirical Bayes estimation. There are not many big surprises in the statistics literature, but empirical Bayes theory, emerging in the 1950s, had one of them: that parallel experimental structures like \eqref{11}--\eqref{12} carry within themselves their own Bayesian priors. Essentially, the other $N-1$ cases furnish the correct ``prior'' information for analyzing each $(\Theta_k,X_k)$ pair. How the statistician extracts that information in an efficient way, an ongoing area of study, has been the subject of this paper. \section*{Acknowledgments} I am grateful to Omkar Muralidharan, Amir Najmi and Stefan Wager for many helpful discussions. Research supported in part by NIH Grant 8R37 EB002784 and by NSF Grant DMS-12-08787.
\section{Introduction} Extracting information from large amounts of data and understanding its global structure can be an immensely challenging and time consuming task. When the input data is huge, many traditionally `efficient' algorithms are no longer practical. The framework of property testing aims at addressing this problem. Property testing algorithms (\emph{testers}, for short) are given oracle access to the inputs, and their goal is to distinguish between inputs which have a given property $\mathbf P$ or are structurally \emph{far} from having $\mathbf P$ with high probability correctly. This can be seen as a relaxation of the classical yes/no decision problem for $\mathbf P$. Testers make these decisions by exploring only a small number of local parts of the input which are randomly chosen. They come with probabilistic guarantees on the quality of the answer. Typically, only a constant number of small local parts are explored and the algorithms often run in constant or sublinear time. This speed up in running time, whilst sacrificing some accuracy, can be crucial for dealing with large inputs. In particular it can be useful for a quick exploration of newly obtained data (e.\,g.\ biological networks). Based on the outcome of the exploration, a decision can then be taken whether to use a more time consuming exact algorithm in a second step. A \emph{property} is simply an isomorphism-closed class of graphs or relational databases. For example, each Boolean database query $q$ defines a property $\mathbf P_q$, the class of all databases satisfying $q$. In the bounded degree graph model~\cite{goldreich2002property}, a uniform upper bound $d$ on the degree of the graphs is assumed. For a small $\epsilon\in (0,1]$, two graphs $\mathcal{G}$ and $\mathcal{H}$, both on $n$ vertices, are $\epsilon$-\emph{close}, if at most $\epsilon dn$ edge modifications (deletions or insertions in $\mathcal{G}$ or $\mathcal{H}$) are necessary to make $\mathcal{G}$ and $\mathcal{H}$ isomorphic. If $\mathcal{G}$ and $\mathcal{H}$ are not $\epsilon$-\emph{close}, then they are called $\epsilon$-\emph{far}. A graph $\mathcal{G}$ is called $\epsilon$-\emph{close} to a property $\mathbf P$, if $\mathcal{G}$ is $\epsilon$-\emph{close} to a member of $\mathbf P$, and $\mathcal{G}$ is $\epsilon$-\emph{far} from $\mathbf P$ otherwise. The natural generalisation of this model to relational databases of bounded degree (where a database has degree at most $d$ if each element in its domain appears in at most $d$ tuples) was studied in~\cite{adler2018property}, where two databases $\mathcal D$ and $\mathcal D'$, both with $n$ elements in the domain, are $\epsilon$-\emph{close}, if at most $\epsilon dn$ tuple modifications (deletions from relations or insertions to relations) are necessary to make $\mathcal D$ and $\mathcal D'$ isomorphic, and $\mathcal D$ and $\mathcal D'$ are $\epsilon$-\emph{far} otherwise. We call this model for bounded degree relational databases the $\operatorname{BDRD}$ model. \textbf{Our contributions.} In this paper we propose a new model for property testing on bounded degree relational databases, which we call the $\operatorname{BDRD}_{+/-}$ model, with a distance measure that allows both tuple deletions and insertions, and \emph{deletion and insertion of elements of the domain}. On graphs, this translates to edge insertions and deletions, and \emph{vertex insertions and deletions}. We argue that this yields a natural distance measure. 
Indeed, take any (sufficiently large) graph $\mathcal{G}$, and let $\mathcal{H}$ be obtained from $\mathcal{G}$ by adding an isolated vertex. Then $\mathcal{G}$ and $\mathcal{H}$ are $\epsilon$-far for every $\epsilon \in (0,1]$ under the classical distance measure, although they only differ in one vertex. In contrast, our distance measure allows for a small number of vertex modifications. While comparing graphs on different numbers of vertices by adding isolated vertices was done implicitly as part of the study of the testability of outerplanar graphs~\cite{babu2016every}, to the best of our knowledge, such a distance measure has not been considered before as part of a model in property testing, which seems surprising to us. Formally, in the $\operatorname{BDRD}_{+/-}$ model, two databases $\mathcal D$ and $\mathcal D'$ are $\epsilon$-\emph{close}, if they can be made isomorphic by at most $\epsilon dn$ \emph{modifications}, where a modification is either (1) removing a tuple from a relation, (2) inserting a tuple to a relation, (3) removing an element from the domain (and, as a consequence, any tuple containing that element is removed), or (4) inserting an element into the domain. Here $n$ is the minimum of the sizes of the domains of $\mathcal D$ and $\mathcal D'$. In Section~\ref{sec: the model} we give the full details of our model. We note that the $\operatorname{BDRD}_{+/-}$ model differs from the $\operatorname{BDRD}$ model only in the choice of the distance measure. While we work in the setting of relational databases, we would like to emphasize that our results carry over to (undirected and directed) graphs, as these can be seen as special instances of relational databases. It is known that in the bounded degree graph model, every minor-closed property is testable \cite{benjamini2010every}, and, more generally, every hyperfinite graph property is testable \cite{newman2013every} with constant query complexity. However, no bound on the running time can be obtained in these general settings. Indeed, there exist hyperfinite properties (of edgeless graphs) that are uncomputable. In~\cite{adler2018property}, Adler and Harwath ask which conditions guarantee both low query complexity \emph{and} efficient running time. They prove a meta-theorem stating that, on classes of databases (or graphs) of bounded degree and bounded tree-width, every property that can be expressed by a sentence of monadic second-order logic with counting (CMSO) is testable with \emph{constant} query complexity and \emph{polylogarithmic} running time in the $\operatorname{BDRD}$ model. Treating many algorithmic problems simultaneously, this can be seen as an algorithmic \emph{meta-theorem} within the line of research inspired by Courcelle's famous theorem \cite{courcelle1990graph} that states that each property of relational databases which is definable in CMSO is decidable in linear time on relational databases of bounded tree-width. CMSO extends first-order logic (FO) and hence properties expressible in FO (e.g. subgraph/sub-database freeness) are also expressible in CMSO. Other examples of graph properties expressible in CMSO include bipartiteness, colourability, even-hole-freeness and Hamiltonicity. Rigidity (i.\,e.\ the absence of a non-trivial automorphism) cannot be expressed in CMSO (cf.~\cite{courcelle2012graph} for more details).
Our main theorem (Theorem~\ref{thm: CMSO testability}) shows that in the $\operatorname{BDRD}_{+/-}$ model, on classes of databases (or graphs) of bounded degree and bounded tree-width, every property that can be expressed by a sentence of monadic second-order logic with counting (CMSO) is testable with \emph{constant} query complexity and \emph{constant} running time. The question whether constant running time can also be achieved in the $\operatorname{BDRD}$ model remains open. We show that the $\operatorname{BDRD}_{+/-}$ model is in fact stronger than the $\operatorname{BDRD}$ model: Any property testable in the $\operatorname{BDRD}$ model is also testable in the $\operatorname{BDRD}_{+/-}$ model with the same query complexity and running time (Lemma \ref{lemma: comparing models 1}), but there are examples that show that the converse is not true (Lemma \ref{lemma: comparing models 2}). In the future, it would be interesting to obtain a characterisation of the properties that are (efficiently) testable in the $\operatorname{BDRD}_{+/-}$ model. \textbf{Our techniques.} To prove our main theorem, we give a general condition under which properties are testable in constant time in the $\operatorname{BDRD}_{+/-}$ model whereas the fastest known testers for such properties in the $\operatorname{BDRD}$ model run in polylogarithmic time. To describe this condition let us first briefly introduce some definitions. A property $\mathbf{P}$ is \emph{hyperfinite} on a class of databases $\mathbf{C}$ if every database in $\mathbf{P}$ can be partitioned into connected components of constant size by removing only a constant fraction of the tuples such that the resulting partitioned database is in $\mathbf{C}$. Let $r \in \mathbb{N}$. Given an element $a$ in the domain of a database $\mathcal{D}$, the \emph{$r$-neighbourhood type} of $a$ in $\mathcal{D}$ is the isomorphism type of the sub-database of $\mathcal{D}$ induced by all elements that are at distance at most $r$ from $a$ in the underlying graph of $\mathcal{D}$, expanded by $a$. The \emph{$r$-histogram} of a bounded degree database $\mathcal{D}$, denoted by $\operatorname{h}_r(\mathcal{D})$, is a vector indexed by the $r$-neighbourhood types, where the component corresponding to the $r$-neighbourhood type $\tau$ contains the number of elements in $\mathcal{D}$ that realise $\tau$. The \emph{$r$-neighbourhood distribution} of $\mathcal{D}$ is the vector $\operatorname{h}_r(\mathcal{D})/n$, where $\mathcal{D}$ is on $n$ elements. We show that for any property $\mathbf{P}$ and input class $\mathbf{C}$, if $\mathbf{P}$ is hyperfinite on $\mathbf{C}$ and the set of $r$-histograms of the databases in $\mathbf{P}$ is semilinear, then $\mathbf{P}$ is testable on $\mathbf{C}$ in constant time (Theorem \ref{thm: constant time tester}). As a corollary we then obtain our main theorem, that every property definable by a CMSO sentence is testable on the class of databases with bounded degree and bounded tree-width in constant time (Theorem~\ref{thm: CMSO testability}). Alon \cite[Proposition 19.10]{lovasz2012large} proved that for every bounded degree graph $\mathcal{G}$ there exists a constant size graph $\mathcal{H}$ that has a similar neighbourhood distribution to $\mathcal{G}$. However, the proof is based on a compactness argument and does not give an explicit upper bound on the size of $\mathcal{H}$. Finding such a bound was suggested by Alon as an open problem \cite{indyk2011open}.
We ask under which conditions on a given property $\mathbf P$, for every member of $\mathbf P$ there exists a constant size database with a similar neighbourhood distribution which is also in $\mathbf P$. We show that for any property $\mathbf{P}$ which is hyperfinite on the input class $\mathbf{C}$ and whose $r$-histograms are semilinear, if a database $\mathcal{D}$ is in $\mathbf{P}$ then there exists a constant size database $\mathcal{D'}$ in $\mathbf{P}$ with a similar neighbourhood distribution, but this is not true for databases in $\mathbf{C}$ that are far from $\mathbf{P}$. Furthermore, we obtain upper and lower bounds on the size of $\mathcal{D'}$. We can then use this result to construct constant time testers. We first use the algorithm $\operatorname{EstimateFrequencies}_{r,s}$ (given in \cite{newman2013every} and adapted to databases in \cite{adler2018property}) to approximate the neighbourhood distribution of the input database. Then we only have to check if the estimated distribution is close to the neighbourhood distribution of a constant size database in the property. As a corollary (Corollary~\ref{cor:alon}), we obtain an explicit bound on the size of the graphs $\mathcal{H}$ from Alon's theorem for `semilinear' properties, i.\,e.\ properties where the histogram vectors of the neighbourhood distributions form a semilinear set. \subparagraph*{Further related work.} Other than the work already mentioned in \cite{adler2018property} there are only a handful of results on relational databases that utilise models from property testing. Chen and Yoshida \cite{chen2019testability} study a model which is close to the general graph model (cf.\ e.\,g. \cite{alon2008testing}) in which they study the testability of homomorphism inadmissibility. Ben-Moshe et al. \cite{ben2011detecting} study the testability of near-sortedness (a property of relations that states that most tuples are close to their place in some desired order). Our model differs from both of these, as it relies on a degree bound and uses different types of oracle access. Explicit bounds for Alon's theorem restricted to high-girth graphs were given in~\cite{FichtenbergerPS15}. Obtaining a characterisation of constant query testable properties is a long-standing open problem. Ito et al.~\cite{ito2020characterization} give a characterisation of the 1-sided error constant query testable monotone and hereditary graph properties in the bounded degree (directed and undirected) graph model. Fichtenberger et al.~\cite{fichtenberger2019every} show that every constant query testable property in the bounded degree graph model is either finite or contains an infinite hyperfinite subproperty. \subparagraph*{Organisation.} In Section \ref{sec: prelims} we introduce relevant notions used throughout the paper. In Section \ref{sec: the model} we introduce our property testing model for bounded degree relational databases and we compare it to the classical model. In Section \ref{sec: main results} we prove our main theorems. Due to space constraints the proofs of statements labelled $(\ast)$ are deferred to the appendix. \section{Preliminaries}\label{sec: prelims} We let $\mathbb{N}$ be the set of natural numbers including $0$, and $\mathbb{N}_{\geq 1} = \mathbb{N} \setminus \{0\}$. For each $n \in \mathbb{N}_{\geq 1}$, we let $[n] = \{1,2,\dots,n\}$. \subparagraph*{Databases.} A \emph{schema} is a finite set $\sigma = \{R_1,\dots,R_{|\sigma|}\}$ of relation names, where each $R\in \sigma$ has an \emph{arity} ar$(R) \in \mathbb{N}_{\geq 1}$.
A \emph{database} $\mathcal{D}$ of schema $\sigma$ ($\sigma$-db for short) is of the form $\mathcal{D} = (D, R_1^{\mathcal{D}}, \dots, R_{|\sigma|}^{\mathcal{D}})$, where $D$ is a finite set, the set of \emph{elements} of $\mathcal{D}$, and $R_i^{\mathcal{D}}$ is an ar$(R_i)$-ary relation on $D$. The set $D$ is also called the \emph{domain} of $\mathcal{D}$. An \emph{(undirected) graph} $\mathcal{G}$ is a tuple $\mathcal{G} =(V(\mathcal{G}),E(\mathcal{G}))$ where $V(\mathcal{G})$ is a set of \emph{vertices} and $E(\mathcal{G})$ is a set of $2$-element subsets of $V(\mathcal{G})$ (the \emph{edges} of $\mathcal G$). An undirected graph can be seen as a $\{E\}$-db, where $E$ is a binary relation name, interpreted by a symmetric, irreflexive relation. We assume that all databases are linearly ordered or, equivalently, that $D=[n]$ for some $n\in \mathbb N$ (similar to \cite{KazanaS11}). We extend this linear ordering to a linear order on the relations of $\mathcal{D}$ via lexicographic ordering. The \emph{Gaifman graph} of a $\sigma$-db $\mathcal D$ is the undirected graph $\mathcal{G}(\mathcal{D})=(V,E)$, with vertex set $V:=D$ and an edge between vertices $a$ and $b$ whenever $a\neq b$ and there is an $R\in \sigma$ and a tuple $(a_1,\ldots,a_{\text{ar}(R)})\in R^{\mathcal D}$ with $a,b\in\{a_1,\ldots,a_{\text{ar}(R)}\}$. The \emph{degree} deg$(a)$ of an element $a$ in a database $\mathcal{D}$ is the total number of tuples in all relations of $\mathcal D$ that contain $a$. We say the \emph{degree} deg$(\mathcal{D})$ of a database $\mathcal{D}$ is the maximum degree of its elements. A class of databases $\mathbf{C}$ has \emph{bounded degree}, if there exists a constant $d\in\mathbb N$ such that for all $\mathcal{D} \in \mathbf{C}$, deg$(\mathcal{D}) \leq d$. (We always assume that classes of databases are closed under isomorphism.) Let us remark that $\deg(\mathcal{D})$ and the (graph-theoretic) degree of $\mathcal{G}(\mathcal{D})$ only differ by at most a constant factor (cf.\ e.\,g.~\cite{durand2007first}). Hence both measures yield the same classes of relational structures of bounded degree. We define the \emph{tree-width} of a database $\mathcal D$ as the tree-width of its Gaifman graph. (See e.\,g.\ \cite{Flum:2006:PCT:1121738} for a discussion of tree-width in this context.) A class $\mathbf{C}$ of databases has \emph{bounded tree-width}, if there exists a constant $t\in \mathbb N$ such that all databases $\mathcal{D} \in \mathbf{C}$ have tree-width at most~$t$. Let $\mathcal D$ be a $\sigma$-db, and $M\subseteq D$. The sub-database of $\mathcal{D}$ \emph{induced by} $M$ is the database $\mathcal{D}[M]$ with domain $M$ and $R^{\mathcal{D}[M]}:=R^{\mathcal{D}}\cap M^{\text{ar}(R)}$ for every $R\in \sigma$. An \emph{$(\epsilon, k)$-partition} of a $\sigma$-db $\mathcal{D}$ on $n$ elements is a $\sigma$-db $\mathcal{D'}$ formed by removing at most $\epsilon n$ many tuples from $\mathcal{D}$ such that every connected component in $\mathcal{D'}$ contains at most $k$ elements. A class of $\sigma$-dbs $\mathbf{C} \subseteq \mathbf{D}$ is \emph{$\rho$-hyperfinite} on $\mathbf{D}$ if for every $\epsilon \in (0,1]$ and $\mathcal{D} \in \mathbf{C}$ there exists an $(\epsilon, \rho(\epsilon))$-partition $\mathcal{D'} \in \mathbf{D}$ of $\mathcal{D}$. We call $\mathbf{C}$ \emph{hyperfinite} on $\mathbf{D}$ if there exists a function $\rho$ such that $\mathbf{C}$ is $\rho$-hyperfinite on $\mathbf{D}$.
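For concreteness, the following Python-style sketch (an illustration only; the representation of relations as sets of tuples over the domain $[n]$ is an assumption of the illustration and not part of the formal development) computes the Gaifman graph, the degree of an element and an induced sub-database of a $\sigma$-db.
\begin{verbatim}
from itertools import combinations

# A sigma-db is modelled as a dictionary mapping each relation name to a
# set of tuples over the domain [n] = {1, ..., n}.

def gaifman_graph(db):
    """Edge set of the Gaifman graph: {a, b} whenever a != b share a tuple."""
    edges = set()
    for tuples in db.values():
        for tup in tuples:
            for a, b in combinations(set(tup), 2):
                edges.add(frozenset((a, b)))
    return edges

def degree(db, a):
    """deg(a): the total number of tuples, over all relations, containing a."""
    return sum(1 for tuples in db.values() for tup in tuples if a in tup)

def induced_subdb(db, M):
    """Sub-database induced by the element set M."""
    M = set(M)
    return {R: {tup for tup in tuples if set(tup) <= M}
            for R, tuples in db.items()}

# Small example with a single binary relation E.
db = {"E": {(1, 2), (2, 3), (3, 4)}}
assert degree(db, 2) == 2
assert frozenset((1, 2)) in gaifman_graph(db)
assert induced_subdb(db, {1, 2, 3}) == {"E": {(1, 2), (2, 3)}}
\end{verbatim}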
\subparagraph*{Logics.} We shall only briefly introduce first-order logic (FO) and monadic second-order logic with counting (CMSO). Detailed introductions can be found in \cite{libkin2013elements} and \cite{courcelle2012graph}. Let \textbf{var} be a countably infinite set of \emph{variables}, and fix a relational schema $\sigma$. The set $\operatorname{FO}[\sigma]$ is built from \emph{atomic formulas} of the form $x_1=x_2$ or $R(x_1, \dots, x_{\textup{ar}(R)})$, where $R \in \sigma$ and $x_1,\dots,x_{\textup{ar}(R)} \in \textbf{var}$, and is closed under Boolean connectives ($\lnot, \lor,\land,\rightarrow, \leftrightarrow$) and existential and universal quantifications ($\exists, \forall$). \emph{Monadic second-order logic} (MSO) is the extension of first-order logic that also allows quantification over subsets of the domain. CMSO extends MSO by allowing first-order modular counting quantifiers $\exists^m$ for every integer $m$ (where $\exists^m \phi$ is true in a $\sigma$-db if the number of its elements for which $\phi$ is satisfied is divisible by $m$). A \emph{free variable} of a formula is an (individual or set) variable that does not appear in the scope of a quantifier. A formula without free variables is called a \emph{sentence}. For a $\sigma$-db $\mathcal{D}$ and a sentence $\phi$ we write $\mathcal{D} \models \phi$ to denote that $\mathcal{D}$ satisfies $\phi$. \begin{proviso} For the rest of the paper, we fix a schema $\sigma$ and numbers $d,t \in \mathbb{N}$ with $d \geq 2$. From now on, all databases are $\sigma$-dbs and have degree at most $d$, unless stated otherwise. We use $\mathbf{C}_d$ to denote the class of all $\sigma$-dbs with degree at most $d$, $\mathbf{C}_d^t$ to denote the class of all $\sigma$-dbs with degree at most $d$ and tree-width at most $t$ and finally we use $\mathbf{C}$ to denote a class of $\sigma$-dbs with degree at most $d$. \end{proviso} \subparagraph*{Property testing.} Adler and Harwath~\cite{adler2018property} introduced the model of property testing for bounded degree relational databases, which is a straightforward extension of the model for bounded degree graphs~\cite{goldreich2002property}. We call this model the \emph{$\operatorname{BDRD}$ model} for short, which we shall discuss below. Property testing algorithms do not have access to the whole input database. Instead, they are given access via an \emph{oracle}. Let $\mathcal{D}$ be an input $\sigma$-db on $n$ elements. A property testing algorithm receives the number $n$ as input, and it can make \emph{oracle queries}\footnote{Note that an oracle query is not a database query.} of the form $(R,i,j)$, where $R \in \sigma$, $i \leq n$ and $j \leq \text{deg}(\mathcal{D})$. The answer to $(R,i,j)$ is the $j^{\text{th}}$ tuple in $R^{\mathcal{D}}$ containing the $i^{\text{th}}$ element\footnote{According to the assumed linear order on $D$.} of $\mathcal{D}$ (if such a tuple does not exist then it returns $\bot$). We assume oracle queries are answered in constant time. Let $\mathcal{D},\mathcal{D'}$ be two $\sigma$-dbs, both having $n$ elements. In the $\operatorname{BDRD}$ model the \emph{distance} between $\mathcal{D}$ and $\mathcal{D'}$, denoted by dist$(\mathcal{D}, \mathcal{D'})$, is the minimum number of tuples that have to be inserted or removed from relations of $\mathcal{D}$ and $\mathcal{D'}$ to make $\mathcal{D}$ and $\mathcal{D'}$ isomorphic.
For $\epsilon \in [0,1]$, we say $\mathcal{D}$ and $\mathcal{D'}$ are \emph{$\epsilon$-close} if dist$(\mathcal{D}, \mathcal{D'}) \leq \epsilon d n$, and $\mathcal{D}$ and $\mathcal{D'}$ are \emph{$\epsilon$-far} otherwise. A \emph{property} is simply an isomorphism-closed class of databases. Note that every CMSO sentence $\phi$ defines a property $\mathbf{P}_{\phi}=\{\mathcal D\mid \mathcal D \models \phi\}$. We call $\mathbf{P}_{\phi}\cap \mathbf{C}$ the property \emph{defined by $\phi$ on $\mathbf{C}$}. A $\sigma$-db $\mathcal{D}$ is \emph{$\epsilon$-close} to a property $\mathbf{P}$ if there exists a database $\mathcal{D'} \in \mathbf{P}$ that is $\epsilon$-close to $\mathcal{D}$, otherwise $\mathcal{D}$ is \emph{$\epsilon$-far} from $\mathbf{P}$. Let $\mathbf{P} \subseteq \mathbf{C}$ be a property and $\epsilon \in (0,1]$ be the proximity parameter. An \emph{$\epsilon$-tester} for $\mathbf{P}$ on $\mathbf{C}$ is a probabilistic algorithm which is given oracle access to a $\sigma$-db $\mathcal{D} \in \mathbf{C}$ and it is given $n:=|D|$ as auxiliary input. The algorithm does the following: \begin{enumerate} \item If $\mathcal{D} \in \mathbf{P}$, then the tester accepts with probability at least ${2}/{3}$. \item If $\mathcal{D}$ is $\epsilon$-far from $\mathbf{P}$, then the tester rejects with probability at least ${2}/{3}$. \end{enumerate} The \emph{query complexity} of a tester is the maximum number of oracle queries made. A tester has \emph{constant} query complexity, if the query complexity does not depend on the size of the input database. We say a property $\mathbf{P} \subseteq \mathbf{C}$ is \emph{uniformly testable} in time $f(n)$ on $\mathbf{C}$, if for every $\epsilon \in (0,1]$ there exists an $\epsilon$-tester for $\mathbf{P}$ on $\mathbf{C}$ which has constant query complexity and whose running time on databases on $n$ elements is $f(n)$. Note that this tester must work for all $n$. \subparagraph*{Neighbourhoods.} For a $\sigma$-db $\mathcal D$ and $a,b \in D$, the \emph{distance} between $a$ and $b$ in $\mathcal D$, denoted by dist$_{\mathcal D}(a,b)$, is the length of a shortest path between $a$ and $b$ in $\mathcal{G}(\mathcal{D})$. Let $r \in \mathbb{N}$. For an element $a\in D$, we let $N^{\mathcal D}_r(a)$ denote the set of all elements of $\mathcal{D}$ that are at distance at most $r$ from $a$. The \emph{$r$-neighbourhood} of $a$ in $\mathcal D$, denoted by $\mathcal{N}^{\mathcal D}_r(a)$, is the tuple $(\mathcal{D}[N_r(a)], a)$ where $a$ is called the \emph{centre}. We omit the superscript and write $N_r(a)$ and $\mathcal{N}_r(a)$, if $\mathcal D$ is clear from the context. Two $r$-neighbourhoods, $\mathcal{N}_r(a)$ and $\mathcal{N}_r(b)$, are \emph{isomorphic} (written $\mathcal{N}_r(a) \cong \mathcal{N}_r(b)$) if there is an isomorphism between $\mathcal{D}[N_r(a)]$ and $\mathcal{D}[N_r(b)]$ which maps $a$ to $b$. An $\cong$-equivalence-class of $r$-neighbourhoods is called an \emph{$r$-neighbourhood type} (or \emph{$r$-type} for short). We let $T_{r}^{\sigma, d}$ denote the set of all $r$-types with degree at most $d$, over schema $\sigma$. Note that for fixed $d$ and $\sigma$, the cardinality $|T_{r}^{\sigma, d}|=:\operatorname{c}(r)$ is a constant, only depending on $r$ and $d$. We say that an element $a\in D$ \emph{has $r$-type $\tau$}, if $\mathcal{N}_r^{\mathcal D}(a) \in \tau$. 
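To illustrate how such neighbourhoods can be explored using only local access, the following Python-style sketch (the exact signature of the oracle function is an assumption of the illustration) computes $N_r(a)$ and the tuples of the induced sub-database by a breadth-first search on the Gaifman graph; the number of oracle queries it makes depends only on $\sigma$, $d$ and $r$.
\begin{verbatim}
def r_neighbourhood(oracle, schema, d, a, r):
    """Return (N_r(a), tuples of D[N_r(a)]) for an element a, where
    oracle(R, b, j) answers the oracle query (R, b, j) and returns
    None if the answer is "bottom"."""
    elements = {a}
    frontier = {a}
    for _ in range(r):                      # r rounds of BFS on the Gaifman graph
        discovered = set()
        for b in frontier:
            for R in schema:
                for j in range(1, d + 1):
                    tup = oracle(R, b, j)   # j-th tuple of R containing b
                    if tup is None:
                        break
                    discovered.update(tup)
        frontier = discovered - elements
        elements |= frontier
    induced = set()
    for b in elements:                      # keep only tuples lying inside N_r(a)
        for R in schema:
            for j in range(1, d + 1):
                tup = oracle(R, b, j)
                if tup is None:
                    break
                if set(tup) <= elements:
                    induced.add((R, tup))
    return elements, induced
\end{verbatim}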
For $r\in \mathbb N$, the \emph{$r$-histogram} of a database $\mathcal{D}$, denoted by $\operatorname{h}_r(\mathcal{D})$, is the vector with $\operatorname{c}(r)$ components, indexed by the $r$-types, where the component corresponding to type $\tau$ contains the number of elements of $\mathcal{D}$ of $r$-type $\tau$. The \emph{$r$-neighbourhood distribution} of $\mathcal{D}$, denoted by $\operatorname{dv}_r(\mathcal{D})$, is the vector $\operatorname{h}_r(\mathcal{D})/n$ where $|D|=n$. For a class of $\sigma$-dbs $\mathbf{C}$ and $r \in \mathbb{N}$, we let $\operatorname{h}_r(\mathbf{C}) := \{\operatorname{h}_r(\mathcal{D}) \mid \mathcal{D} \in \mathbf{C} \}$. A set $M \subseteq \mathbb{N}^c$ is \emph{linear} if $M = \{\bar{v}_0 + a_1 \bar{v}_1 + \dots + a_k \bar{v}_k \mid a_1 ,\dots, a_k \in \mathbb{N}\}$, for some $\bar{v}_0 ,\dots, \bar{v}_k \in \mathbb{N}^c$. A set is \emph{semilinear} if it is a finite union of linear sets. From a result in \cite{fischer2004spectra} about many-sorted spectra of CMSO sentences it can be derived that the set of $r$-histograms of properties defined by a CMSO sentence on $\mathbf{C}_d^t$ is semilinear. \begin{lemma}[\cite{adler2018property,fischer2004spectra}]\label{lemma:CMSO semilinear} For each $r \in \mathbb{N}$ and each property $\mathbf{P} \subseteq \mathbf{C}_d^t$ definable by a CMSO sentence on $\mathbf{C}_d^t$, the set $\operatorname{h}_r(\mathbf{P})$ is semilinear. \end{lemma} \subparagraph*{Model of computation.} We use Random Access Machines (RAMs) and a uniform cost measure when analysing our algorithms, i.\,e.\ we assume all basic arithmetic operations including random sampling can be done in constant time, regardless of the size of the numbers involved. \section{The Model}\label{sec: the model} We shall now introduce our property testing model for bounded degree relational databases, which is an extension of the $\operatorname{BDRD}$ model discussed in Section \ref{sec: prelims}. The notions of oracle queries, properties, $\epsilon$-tester, query complexity and uniform testability remain the same but we have an alternative definition of distance and $\epsilon$-closeness. In our model, which we shall call the \emph{$\operatorname{BDRD}_{+/-}$ model} for short, we can add and remove elements as well as tuples and can therefore compare databases that are on a different number of elements. \begin{definition}[Distance and $\epsilon$-closeness] Let $\mathcal{D}, \mathcal{D'} \in \mathbf{C}_d$ and $\epsilon \in [0,1]$. The distance between $\mathcal{D}$ and $\mathcal{D'}$ (denoted by $\operatorname{dist}_{+/-}(\mathcal{D}, \mathcal{D'})$) is the minimum number of modifications we need to make to $\mathcal{D}$ and $\mathcal{D'}$ to make them isomorphic where a modification is either (1) inserting a new element, (2) deleting an element (and as a result deleting any tuple that contains that element), (3) inserting a tuple, or (4) deleting a tuple. We then say $\mathcal{D}$ and $\mathcal{D'}$ are $\epsilon$-close if $\operatorname{dist}_{+/-}(\mathcal{D}, \mathcal{D'}) \leq \epsilon d \operatorname{min}\{|D|,|D'|\}$ and are $\epsilon$-far otherwise. \end{definition} The following example illustrates the difference between the distance measure of the $\operatorname{BDRD}$ and the distance measure of the $\operatorname{BDRD}_{+/-}$ model. \begin{example}\label{example: model comparison} Let $\mathbf{P}=\{\mathcal{G}_{n,m} \mid n,m \in \mathbb{N}_{>1}\}$ where $\mathcal{G}_{n,m}$ is an $n$ by $m$ grid graph as shown in Figure \ref{fig: grids example}.
Let us consider the graph $\mathcal{H}_{n,m}$ for some $n,m \in \mathbb{N}$ which is formed from $\mathcal{G}_{n,m}$ by removing a corner vertex. In the $\operatorname{BDRD}_{+/-}$ model the distance between $\mathcal{H}_{n,m}$ and $\mathcal{G}_{n,m}$ is 1 (we remove a corner vertex from $\mathcal{G}_{n,m}$ to get $\mathcal{H}_{n,m}$) and therefore $\mathcal{H}_{n,m}$ is at distance 1 from $\mathbf{P}$ in the $\operatorname{BDRD}_{+/-}$ model. In the $\operatorname{BDRD}$ model if two graphs are on a different number of vertices then the distance between them is infinity. Therefore if $nm-1$ is a prime number then $\mathcal{H}_{n,m}$ is at distance infinity from $\mathbf{P}$ in the $\operatorname{BDRD}$ model. \begin{figure} \begin{center} \begin{tikzpicture} [scale=.6,auto=left,every node/.style={circle,fill=black!,scale=.6}] \def5{5} \def5{5} \foreach \x in {0,...,5}{ \foreach \y in {0,...,5}{ \ifthenelse{\x = 3 \OR \y=2} {} {\node at (\x,\y) {};}; \ifthenelse{\x=2 \AND \NOT \y =2}{ \draw[loosely dotted, line width=1pt] (\x,\y) -- (\x+2,\y);} {}; \ifthenelse{\y=3 \AND \NOT \x =3}{ \draw[loosely dotted, line width=1pt] (\x,\y) -- (\x,\y -2);} {}; \ifthenelse{ \x=5 \OR \x=2 \OR \x=3 \OR \y=2}{}{ \draw[line width=.5pt] (\x,\y) -- (\x +1,\y);}; \ifthenelse{ \y=5 \OR \y=1 \OR \y=2 \OR \x=3}{}{ \draw[line width=.5pt] (\x,\y) -- (\x,\y +1);}; }} \draw [<->, line width=.7pt] ( -.75,0) -- (-.75,5); \node[style={fill=none, scale=1.7},rotate=90] at ( -1.1,2.5) {$n$}; \draw [<->, line width=.7pt] (0 ,5.75) -- (5 ,5.75); \node[style={fill=none, scale=1.7}] at (2.5 ,6) {$m$}; \foreach \x in {8,...,13}{ \foreach \y in {0,...,5}{ \ifthenelse{\x = 11 \OR \y=2 \OR \(\x=13 \AND \y=0\)} {} {\node at (\x,\y) {};}; \ifthenelse{\x=10 \AND \NOT \y =2}{ \draw[loosely dotted, line width=1pt] (\x,\y) -- (\x+2,\y);} {}; \ifthenelse{\y=3 \AND \NOT \x =11}{ \draw[loosely dotted, line width=1pt] (\x,\y) -- (\x,\y -2);} {}; \ifthenelse{ \x=13 \OR \x=10 \OR \x=11 \OR \y=2 \OR \(\x=12 \AND \y=0\)}{}{ \draw[line width=.5pt] (\x,\y) -- (\x +1,\y);}; \ifthenelse{ \y=5 \OR \y=1 \OR \y=2 \OR \x=11 \OR \(\x=13 \AND \y=0\)}{}{ \draw[line width=.5pt] (\x,\y) -- (\x,\y +1);}; }} \draw [<->, line width=.7pt] ( 7.25,0) -- (7.25,5); \node[style={fill=none, scale=1.7},rotate=90] at ( 6.9,2.5) {$n$}; \draw [<->, line width=.7pt] (8 ,5.75) -- (13 ,5.75); \node[style={fill=none, scale=1.7}] at (10.5 ,6) {$m$}; \end{tikzpicture} \end{center} \caption{The graphs $\mathcal{G}_{n,m}$ and $\mathcal{H}_{n,m}$ (respectively) of Example \ref{example: model comparison}.} \label{fig: grids example} \end{figure} \end{example} We now show that if a property is testable in the $\operatorname{BDRD}$ model then it is also testable in the $\operatorname{BDRD}_{+/-}$ model but the converse is not true. This allows for more testable properties in the $\operatorname{BDRD}_{+/-}$ model. \begin{lemma}[$\ast$]\label{lemma: comparing models 1} Let $\mathbf{P} \subseteq \mathbf{C}$. If $\mathbf{P}$ is uniformly testable on $\mathbf{C}$ in time $f(n)$ in the $\operatorname{BDRD}$ model then $\mathbf{P}$ is also uniformly testable on $\mathbf{C}$ in time $f(n)$ in the $\operatorname{BDRD}_{+/-}$ model. \end{lemma} \begin{theorem}[\cite{goldreich2002property}]\label{thrm: bipartite tester} In the bounded degree model, bipartiteness cannot be tested with query complexity $o(\sqrt{n})$, where $n$ is the number of vertices of the input graph. 
\end{theorem} \begin{lemma}\label{lemma: comparing models 2} There exists a class $\mathbf{C}$ of $\sigma$-dbs and a property $\mathbf{P} \subseteq \mathbf{C}$ such that $\mathbf{P}$ is trivially testable on $\mathbf{C}$ in the $\operatorname{BDRD}_{+/-}$ model but is not testable on $\mathbf{C}$ in the $\operatorname{BDRD}$ model. \end{lemma} \begin{proof} Let $\mathbf{C}$ be the class of all graphs with degree at most $d$. Let $\mathbf{P} = \mathbf{P_1} \cup \mathbf{P_2} \subseteq \mathbf{C}$ be the property where $\mathbf{P_1}$ contains all bipartite graphs in $\mathbf{C}$ and $\mathbf{P_2}$ contains all graphs in $\mathbf{C}$ that have an odd number of vertices. In the $\operatorname{BDRD}_{+/-}$ model every $\mathcal{G} \in \mathbf{C}$ is $\epsilon$-close to $\mathbf{P}$ if $|V(\mathcal{G})|\geq 1/(\epsilon d) $ and hence $\mathbf{P}$ is trivially testable on $\mathbf{C}$ in the $\operatorname{BDRD}_{+/-}$ model (the tester accepts if $|V(\mathcal{G})|\geq 1/(\epsilon d) $ and does a full check of the input otherwise). In the $\operatorname{BDRD}$ model, if the input graph has an even number of vertices then it is far from $\mathbf{P_2}$ and so we have to test for $\mathbf{P_1}$. By Theorem \ref{thrm: bipartite tester}, bipartiteness is not testable (with constant query complexity) in the $\operatorname{BDRD}$ model. In particular, in the proof of Theorem \ref{thrm: bipartite tester}, Goldreich and Ron show that for any even $n$ there exist two families, $\mathcal{G}_1 \subseteq \mathbf{C}$ and $\mathcal{G}_2 \subseteq \mathbf{C}$, of $n$-vertex graphs such that every graph in $\mathcal{G}_1 $ is bipartite and almost all graphs in $\mathcal{G}_2 $ are far from being bipartite, but any algorithm that performs $o(\sqrt{n})$ queries cannot distinguish between a graph chosen randomly from $\mathcal{G}_1 $ and a graph chosen randomly from $\mathcal{G}_2 $. Therefore $\mathbf{P}$ is not testable on $\mathbf{C}$ in the $\operatorname{BDRD}$ model. \end{proof} Note that the underlying general principle of the above proof can be applied to obtain further examples of properties that are testable in the $\operatorname{BDRD}_{+/-}$ model but not testable in the $\operatorname{BDRD}$ model. It is known that every hyperfinite property is `local' (Theorem~\ref{thrm: local-global}), where `local' means that if a $\sigma$-db $\mathcal{D}$ has a similar $r$-histogram to some $\sigma$-db (with the same domain size) that has the (hyperfinite) property, then $\mathcal{D}$ must be $\epsilon$-close to the property~\cite{newman2013every,adler2018property}. This is summarised in Theorem~\ref{thrm: local-global} below. We use Theorem~\ref{thrm: local-global} to prove a similar result in the $\operatorname{BDRD}_{+/-}$ model (Lemma~\ref{lemma: locality}). Lemma~\ref{lemma: locality} is essential for the proof of Theorem \ref{thrm: small dbs}. \begin{theorem}[\cite{newman2013every,adler2018property}]\label{thrm: local-global} Let $\epsilon \in (0,1]$ and let $\mathbf{C}$ be closed under removing tuples.
If a property $\mathbf{P} \subseteq \mathbf{C}$ is hyperfinite on $\mathbf{C}$ then there exists $\lambda_{\ref{thrm: local-global}} := \lambda_{\ref{thrm: local-global}} (\epsilon) \in (0,1]$ and $r_{\ref{thrm: local-global}} := r_{\ref{thrm: local-global}}(\epsilon) \in \mathbb{N}$ such that for each $\mathcal{D} \in \mathbf{P}$ and $ \mathcal{D'} \in \mathbf{C}$ with the same number $n$ of elements, if $\|h_{r_{\ref{thrm: local-global}}}(\mathcal{D})- h_{r_{\ref{thrm: local-global}}}(\mathcal{D'})\|_1 \leq \lambda_{\ref{thrm: local-global}} n$, then $\mathcal{D'}$ is $\epsilon$-close to $\mathbf{P}$ in the $\operatorname{BDRD}$ model. \end{theorem} \begin{lemma}\label{lemma: locality} Let $\epsilon \in (0,1]$ and let $\mathbf{C}$ be closed under removing tuples. If a property $\mathbf{P} \subseteq \mathbf{C}$ is hyperfinite on $\mathbf{C}$ then there exists $\lambda := \lambda (\epsilon) \in (0,1]$ and $r := r(\epsilon) \in \mathbb{N}$ such that for each $\mathcal{D} \in \mathbf{P}$ and $ \mathcal{D'} \in \mathbf{C}$, on $|D|$ and $|D'|$ elements respectively, if $\|\operatorname{h}_r(\mathcal{D})- \operatorname{h}_r(\mathcal{D'})\|_1 \leq \lambda \operatorname{min}\{|D|,|D'|\}$, then $\mathcal{D'}$ is $\epsilon$-close to $\mathbf{P}$ in the $\operatorname{BDRD}_{+/-}$ model. \end{lemma} \begin{proof} Let $r =r_{\ref{thrm: local-global}}(\epsilon/4)$ and let $\lambda = \frac{\epsilon\lambda_{\ref{thrm: local-global}} (\epsilon /4)}{1+d^{r+1}}$. Let us assume that $\|\operatorname{h}_r(\mathcal{D})- \operatorname{h}_r(\mathcal{D'})\|_1 \leq \lambda \operatorname{min}\{|D|,|D'|\}$ and $\mathbf{P}$ is hyperfinite on $\mathbf{C}$. If $|D|=|D'|$ then by Theorem~\ref{thrm: local-global} and the choice of $\lambda$, $\mathcal{D'}$ is $\epsilon$-close to $\mathbf{P}$. So let us assume that $|D|\neq|D'|$. Let $\mathcal{D}_1$ be the $\sigma$-db on $|D|$ elements formed from $\mathcal{D'}$ by either removing $|D'|-|D|$ elements if $|D| < |D'|$ or adding $|D|-|D'|$ new elements if $|D'| < |D|$. Note that as $\|\operatorname{h}_r(\mathcal{D})- \operatorname{h}_r(\mathcal{D'})\|_1 \leq \lambda \operatorname{min}\{|D|,|D'|\}$ and by definition $\|\operatorname{h}_r(\mathcal{D})- \operatorname{h}_r(\mathcal{D'})\|_1 = \sum_{i=1}^{\operatorname{c}(r)}|\operatorname{h}_r(\mathcal{D})[i]- \operatorname{h}_r(\mathcal{D'})[i]|$ we have $\big| |D|-|D'| \big| \leq \lambda \operatorname{min}\{|D|,|D'|\}$. When an element $a$ is removed, the $r$-type of any element in $N_r(a)$ may change. As $|N_r(a)| \leq d^{r+1}$ (cf.\ e.\,g.\ Lemma 3.2 (a) of \cite{berkholz2018answering}) and $\big| |D|-|D'| \big| \leq \lambda \operatorname{min}\{|D|,|D'|\}$, we have $\|\operatorname{h}_r(\mathcal{D'})- \operatorname{h}_r(\mathcal{D}_1)\|_1 \leq \lambda \operatorname{min}\{|D|,|D'|\}d^{r+1}$. Therefore \[\|\operatorname{h}_r(\mathcal{D})- \operatorname{h}_r(\mathcal{D}_1)\|_1 \leq \lambda \operatorname{min}\{|D|,|D'|\}(1 +d^{r+1}) \leq \lambda_{\ref{thrm: local-global}} (\epsilon /4) |D|\] by the choice of $\lambda$. By Theorem \ref{thrm: local-global}, in the $\operatorname{BDRD}$ model $\mathcal{D}_1$ is $\epsilon /4$-close to $\mathbf{P}$. Hence there exists a $\sigma$-db $\mathcal{D}_{2} \in \mathbf{P}$ such that $|D_2|=|D|$ and $\operatorname{dist}(\mathcal{D}_1, \mathcal{D}_2) \leq \epsilon d |D|/4$.
By the definition of the two distance measures $\operatorname{dist}$ and $\operatorname{dist}_{+/-}$, we have $\operatorname{dist}_{+/-}(\mathcal{D}_1, \mathcal{D}_2) \leq \operatorname{dist}(\mathcal{D}_1, \mathcal{D}_2)\leq \epsilon d |D|/4$ and by the choice of $\mathcal{D}_1$ we have $\operatorname{dist}_{+/-}(\mathcal{D'}, \mathcal{D}_1) \leq \lambda \operatorname{min}\{|D|,|D'|\}$. Therefore \[\operatorname{dist}_{+/-}(\mathcal{D'}, \mathcal{D}_2) \leq \frac{\epsilon d |D|}{4} + \lambda \operatorname{min}\{|D|,|D'|\} \leq \epsilon d \operatorname{min}\{|D|,|D'|\},\] as $|D| \leq \operatorname{min}\{|D|,|D'|\} + \lambda \operatorname{min}\{|D|,|D'|\} \leq 2 \operatorname{min}\{|D|,|D'|\} $ and $\lambda \leq \epsilon d /2$. Hence in the $\operatorname{BDRD}_{+/-}$ model $\mathcal{D}'$ is $\epsilon$-close to $\mathbf{P}$. \end{proof} \section{Main Results}\label{sec: main results} We begin this section with the first of our main theorems (Theorem~\ref{thrm: small dbs}). We show that for any property $\mathbf{P}$ which is hyperfinite on the input class $\mathbf{C}$, if the set of $r$-histograms of $\mathbf{P}$ is semilinear, then for every $\sigma$-db $\mathcal{D}$ in $\mathbf{P}$ there exists a constant size $\sigma$-db in $\mathbf{P}$ with a neighbourhood distribution similar to that of $\mathcal{D}$, but this is not true for $\sigma$-dbs in $\mathbf{C}$ that are far from $\mathbf{P}$. We then use this result to prove that such properties are testable in constant time in the $\operatorname{BDRD}_{+/-}$ model (Theorem~\ref{thm: constant time tester}). As a corollary we obtain that CMSO definable properties on $\sigma$-dbs of bounded tree-width and bounded degree are testable in constant time (Theorem~\ref{thm: CMSO testability}). \begin{theorem}\label{thrm: small dbs} Let $\epsilon \in (0,1]$ and let $r := r(\epsilon)$ be as in Lemma \ref{lemma: locality}. Let $\mathbf{C}$ be closed under removing tuples and let $\mathbf{P} \subseteq \mathbf{C}$ be a property that is hyperfinite on $\mathbf{C}$ such that the set $\operatorname{h}_r(\mathbf{P})$ is semilinear. There exist $n_{\text{min}}:=n_{\text{min}}(\epsilon),n_{\text{max}}:=n_{\text{max}}(\epsilon) \in \mathbb{N}$ and $f:=f(\epsilon), \mu:=\mu(\epsilon) \in (0,1)$ such that for every $\mathcal{D} \in \mathbf{C}$ with $|D| > n_{\text{max}}$, \begin{enumerate} \item if $\mathcal{D} \in \mathbf{P}$, then there exists a $\mathcal{D'} \in \mathbf{P}$ such that $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$ and $\|\operatorname{dv}_r(\mathcal{D})-\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f - \mu $, and \item if $\mathcal{D}$ is $\epsilon$-far from $\mathbf{P}$ (in the $\operatorname{BDRD}_{+/-}$ model), then for every $\mathcal{D'} \in \mathbf{P}$ such that $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$, we have $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 > f + \mu$. \end{enumerate} \end{theorem} \begin{proof} Let $\lambda := \lambda(\epsilon)$ be as in Lemma \ref{lemma: locality} and $c:=\operatorname{c}(r)$ (the number of $r$-types). First note that if $\mathbf{P}$ is empty then for any choice of $n_{\text{min}}$, $n_{\text{max}}$, $f$ and $\mu$, both 1. and 2. in the theorem statement are true and hence we shall assume that $\mathbf{P}$ is non-empty. 
As $\operatorname{h}_r(\mathbf{P})$ is a semilinear set we can write it as follows, $\operatorname{h}_r(\mathbf{P}) = M_1 \cup M_2 \cup \dots \cup M_m$ where $m \in \mathbb{N}$ and for each $i \in [m]$, $M_i = \{ \bar{v}_0^i + a_1 \bar{v}_1^i + \dots + a_{k_i} \bar{v}_{k_i}^i \mid a_1,\dots,a_{k_i} \in \mathbb{N} \}$ is a linear set where $\bar{v}_0^i,\dots,\bar{v}_{k_i}^i \in \mathbb{N}^{c}$ and for each $j \in [k_i]$, $\|\bar{v}_j^i\|_1 \neq 0$. Let $k:=\max_{i \in [m]}k_i + 1$ and $v:=\max_{i \in [m]} \Big(\max_{j \in [0,k_i]}\|\bar{v}_j^i\|_1\Big)$ (note that $v >0$ as $\mathbf{P}$ is non-empty). Let $n_{\text{min}}:= n_0 - kv$, $n_{\text{max}}:= n_0 + kv$, $f := \frac{\lambda}{3c}$, and $\mu := \frac{\lambda}{6c}$ where \[n_0:=kv\Big(\frac{3ckv}{f-\mu} + 1 \Big).\] Note that $n_{\text{min}} > 0$ by the choice of $n_0$, $f$ and $\mu$. (Proof of 1.) Assume $\mathcal{D} \in \mathbf{P}$ and $|D|=n > n_{\text{max}}$. Then there exists some $i \in [m]$ and $a_1^{\mathcal{D}},\dots,a_{k_i}^{\mathcal{D}} \in \mathbb{N}$ such that $\operatorname{h}_r(\mathcal{D}) = \bar{v}_0^i + a_1^{\mathcal{D}} \bar{v}_1^i + \dots + a_{k_i}^{\mathcal{D}}\bar{v}_{k_i}^i$ (note that $n = \|\bar{v}_0^i\|_1+ \sum_{j \in [k_i]} a_j^{\mathcal{D}} \|\bar{v}_j^i\|_1 $). Let $\mathcal{D'}$ be the $\sigma$-db with $r$-histogram $\bar{v}_0^i + a_1^{\mathcal{D'}} \bar{v}_1^i + \dots + a_{k_i}^{\mathcal{D'}}\bar{v}_{k_i}^i \in M_i$ where $a_j^{\mathcal{D'}}$ is the nearest integer to $a_j^{\mathcal{D}} n_{0} /n$, and hence $ a_j^{\mathcal{D}} n_0/n -1/2 \leq a_j^{\mathcal{D'}} \leq a_j^{\mathcal{D}} n_0/n + 1/2$. Note that since $\bar{v}_0^i + a_1^{\mathcal{D'}} \bar{v}_1^i + \dots + a_{k_i}^{\mathcal{D'}}\bar{v}_{k_i}^i \in \operatorname{h}_r(\mathbf{P})$, $\mathcal{D'}$ exists and $\mathcal{D'} \in \mathbf{P}$. We need to show that $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$ and $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f - \mu$. \begin{claim}[$\ast$]\label{claim: lower bound on size} $|D'| \geq n_{\text{min}}$. \end{claim} \begin{claim}[$\ast$]\label{claim:upper bound on size} $|D'| \leq n_{\text{max}}$. \end{claim} \begin{claim} $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f - \mu$. \end{claim} \begin{claimproof} By definition, $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 = \sum _{j \in [c]}|\operatorname{dv}_r(\mathcal{D})[j]-\operatorname{dv}_r(\mathcal{D'})[j]|$. First recall that $0 < n_0 -kv \leq |D'| \leq n_0 +kv < n$ and note that for every $\ell \in [k_i]$, $a_{\ell}^{\mathcal{D}} \leq n$ (since $\|\bar{v}_{\ell}^i\|_1 \neq 0$). 
Then for every $j \in [c]$, by the choice of $a_{\ell}^{\mathcal{D'}}$ for $\ell \in [k_i]$, \begin{align*} &\operatorname{dv}_r(\mathcal{D})[j]-\operatorname{dv}_r(\mathcal{D'})[j] = \bar{v}_0^i[j]\Big(\frac{1}{n} - \frac{1}{|D'|}\Big) + \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D}}}{n} - \frac{a_{\ell}^{\mathcal{D'}}}{|D'|}\Big) \\ &\leq \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D}}}{n} - \frac{a_{\ell}^{\mathcal{D}} n_0}{n|D'|} + \frac{1}{2|D'|}\Big) = \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D}}}{n}\Big(\frac{|D'| - n_0}{|D'|}\Big) + \frac{1}{2|D'|}\Big) \\ &\leq \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{n}{n}\Big(\frac{kv + n_0 - n_0}{n_0-kv}\Big) + \frac{1}{2(n_0-kv)}\Big) = \Big(\frac{2kv + 1}{2(n_0-kv)}\Big) \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j] \\ &\leq \frac{kv(2kv + 1)}{n_0-kv}. \end{align*} On the other hand, \begin{align*} &\operatorname{dv}_r(\mathcal{D})[j]-\operatorname{dv}_r(\mathcal{D'})[j] \geq -\frac{\bar{v}_0^i[j]}{|D'|} + \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D}}}{n}\Big(\frac{|D'| - n_0}{|D'|}\Big) - \frac{1}{2|D'|}\Big) \\ &\geq -\frac{\bar{v}_0^i[j]}{|D'|} + \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D}}}{n}\Big(\frac{-kv + n_0 - n_0}{|D'|}\Big) - \frac{1}{2|D'|}\Big) \\ &= -\frac{\bar{v}_0^i[j]}{|D'|} - \sum_{\ell \in [k_i]} \bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D}}kv}{n|D'|} + \frac{1}{2|D'|}\Big) \\ & \geq -\frac{\bar{v}_0^i[j]}{n_0 -kv} - \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{nkv}{n(n_0 -kv)} + \frac{1}{2(n_0 -kv)}\Big) \\ &= -\frac{\bar{v}_0^i[j]}{n_0 -kv} - \Big(\frac{2kv +1}{2(n_0 -kv)}\Big) \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j] \geq -\frac{kv(2kv + 1)}{n_0-kv}. \end{align*} Hence, \[|\operatorname{dv}_r(\mathcal{D})[j]-\operatorname{dv}_r(\mathcal{D'})[j]| \leq \frac{kv(2kv + 1)}{n_0-kv} \leq \frac{3(kv)^2}{n_0-kv} =\frac{f - \mu}{c}\] by the choice of $n_0$. Therefore, \[\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 = \sum _{j \in [c]}|\operatorname{dv}_r(\mathcal{D})[j]-\operatorname{dv}_r(\mathcal{D'})[j]| \leq f - \mu\] as required. \end{claimproof} (Proof of 2.) Assume $\mathcal{D}$ is $\epsilon$-far from $\mathbf{P}$ and $|D|=n > n_{\text{max}}$. For a contradiction let us assume there does exist a $\sigma$-db $\mathcal{D'} \in \mathbf{P}$ such that $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$ and $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f + \mu$. As $\mathcal{D'} \in \mathbf{P}$ there exists some $i \in [m]$ and $a_1^{\mathcal{D'}},\dots,a_{k_i}^{\mathcal{D'}} \in \mathbb{N}$ such that $\operatorname{h}_r(\mathcal{D'}) = \bar{v}_0^i + a_1^{\mathcal{D'}} \bar{v}_1^i + \dots + a_{k_i}^{\mathcal{D'}}\bar{v}_{k_i}^i$. Let $\mathcal{D''}$ be the $\sigma$-db with $r$-histogram $\bar{v}_0^i + a_1^{\mathcal{D''}} \bar{v}_1^i + \dots + a_{k_i}^{\mathcal{D''}}\bar{v}_{k_i}^i \in M_i$ where $a_j^{\mathcal{D''}}$ is the nearest integer to $a_j^{\mathcal{D'}} n /|D'|$. Note as $\bar{v}_0^i + a_1^{\mathcal{D''}} \bar{v}_1^i + \dots + a_{k_i}^{\mathcal{D''}}\bar{v}_{k_i}^i \in \operatorname{h}_r(\mathbf{P})$, $\mathcal{D''}$ exists and $\mathcal{D''} \in \mathbf{P}$. \begin{claim}\label{claim: contradiction} $\mathcal{D}$ is $\epsilon$-close to $\mathbf{P}$. 
\end{claim} \begin{claimproof} First note that as $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f+ \mu$ and $\operatorname{h}_r(\mathcal{D'}) = \bar{v}_0^i + a_1^{\mathcal{D'}} \bar{v}_1^i + \dots + a_{k_i}^{\mathcal{D'}}\bar{v}_{k_i}^i$, for every $j \in [c]$ \begin{align*} &\frac{\bar{v}_0^i[j] + \sum_{\ell \in [k_i]}a_{\ell}^{\mathcal{D'}} \bar{v}_{\ell}^i[j]}{|D'|} - f- \mu\leq \operatorname{dv}_r(\mathcal{D})[j] \leq \frac{\bar{v}_0^i[j] + \sum_{\ell \in [k_i]}a_{\ell}^{\mathcal{D'}} \bar{v}_{\ell}^i[j]}{|D'|} + f + \mu \end{align*} and therefore \begin{align*} &n\Big(\frac{\bar{v}_0^i[j] + \sum_{\ell \in [k_i]}a_{\ell}^{\mathcal{D'}} \bar{v}_{\ell}^i[j]}{|D'|} - f - \mu\Big) \leq \operatorname{h}_r(\mathcal{D})[j]\leq n \Big( \frac{\bar{v}_0^i[j] + \sum_{\ell \in [k_i]}a_{\ell}^{\mathcal{D'}} \bar{v}_{\ell}^i[j]}{|D'|} + f + \mu \Big). \end{align*} Hence, by the choice of $a_{\ell}^{\mathcal{D''}}$ for $\ell \in [k_i]$, \begin{align*} &\operatorname{h}_r(\mathcal{D})[j] - \operatorname{h}_r(\mathcal{D''})[j] \leq \bar{v}_0^i[j]\Big(\frac{n}{|D'|} - 1\Big) + \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D'}} n}{|D'|} - a_{\ell}^{\mathcal{D''}}\Big) + fn + \mu n\\ &\leq \bar{v}_0^i[j]\frac{n}{|D'|} + \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D'}} n}{|D'|} - \Big(\frac{a_{\ell}^{\mathcal{D'}}n}{|D'|} - \frac{1}{2}\Big)\Big) + fn + \mu n\\ &=\bar{v}_0^i[j]\frac{n}{|D'|} + \frac{1}{2}\sum_{\ell \in [k_i]} \bar{v}_{\ell}^i[j] +fn+ \mu n. \end{align*} Similarly, by the choice of $a_{\ell}^{\mathcal{D''}}$ for $\ell \in [k_i]$ and as $n > |D'|$, \begin{align*} &\operatorname{h}_r(\mathcal{D})[j] - \operatorname{h}_r(\mathcal{D''})[j] \geq \bar{v}_0^i[j]\Big(\frac{n}{|D'|} - 1\Big) + \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D'}} n}{|D'|} - a_{\ell}^{\mathcal{D''}}\Big) - fn - \mu n\\ &\geq -\bar{v}_0^i[j]\frac{n}{|D'|} + \sum_{\ell \in [k_i]}\bar{v}_{\ell}^i[j]\Big(\frac{a_{\ell}^{\mathcal{D'}} n}{|D'|} - \Big(\frac{a_{\ell}^{\mathcal{D'}}n}{|D'|} + \frac{1}{2}\Big)\Big) - fn - \mu n\\ &=-\bar{v}_0^i[j]\frac{n}{|D'|} - \frac{1}{2}\sum_{\ell \in [k_i]} \bar{v}_{\ell}^i[j] -f n- \mu n. \end{align*} Therefore, \begin{align*} &|\operatorname{h}_r(\mathcal{D})[j] - \operatorname{h}_r(\mathcal{D''})[j]| \leq \bar{v}_0^i[j]\frac{n}{|D'|} + \frac{1}{2}\sum_{\ell \in [k_i]} \bar{v}_{\ell}^i[j] +fn + \mu n\\ &\leq \frac{n}{|D'|}\sum_{0 \leq \ell \leq k_i} \bar{v}_{\ell}^i[j]+f n + \mu n \leq \frac{nkv}{|D'|} + fn + \mu n \\ &= n\Big(\frac{kv}{|D'|} + \frac{\lambda}{3c} + \frac{\lambda}{6c} \Big) \leq n\Big(\frac{\lambda}{18c} + \frac{\lambda}{3c} + \frac{\lambda}{6c}\Big) =\frac{5 \lambda n}{9c} \end{align*} by the choice of $f$ and $\mu$ and as \[|D'| \geq n_{\text{min}} = \frac{3c(kv)^2}{f- \mu} =\frac{18(ckv)^2}{\lambda} \geq \frac{18ckv}{ \lambda}.\] To apply Lemma \ref{lemma: locality} we need to show that $\|\operatorname{h}_r(\mathcal{D}) - \operatorname{h}_r(\mathcal{D''})\|_1 \leq \lambda \min\{n,|D''|\}$. If $|\operatorname{h}_r(\mathcal{D})[j] - \operatorname{h}_r(\mathcal{D''})[j]| \leq \frac{\lambda}{c} \min\{n,|D''|\}$ then $\|\operatorname{h}_r(\mathcal{D}) - \operatorname{h}_r(\mathcal{D''})\|_1 \leq \lambda \min\{n,|D''|\}$. Clearly, $\frac{5 \lambda n}{9c} < \frac{\lambda n}{c}$. 
We also have \begin{align*} &|D''| = \|\bar{v}_0^i\|_1 + \sum_{\ell \in [k_i]}a_{\ell}^{\mathcal{D''}} \|\bar{v}_{\ell}^i\|_1 \geq \|\bar{v}_0^i\|_1 + \sum_{\ell \in [k_i]}\Big(\frac{a_{\ell}^{\mathcal{D'}}n}{|D'|}-\frac{1}{2}\Big) \|\bar{v}_{\ell}^i\|_1 \\ &=\|\bar{v}_0^i\|_1 -\frac{1}{2} \sum_{\ell \in [k_i]}\|\bar{v}_{\ell}^i\|_1 + \frac{n}{|D'|}\sum_{\ell \in [k_i]}a_{\ell}^{\mathcal{D'}}\|\bar{v}_{\ell}^i\|_1 \geq -kv +\frac{n}{|D'|}(|D'| -\|\bar{v}_0^i\|_1) \\ &\geq - \frac{n}{18}+\frac{17}{18}n \geq \frac{5n}{9} \end{align*} as \[|D'| \geq \frac{18ckv}{ \lambda} \geq 18v \geq 18\|\bar{v}_0^i\|_1 \text{ and } kv \leq \frac{(ckv)^2}{\lambda} = \frac{n_{\text{min}}}{18} \leq \frac{n}{18}.\] Therefore, $\frac{5 \lambda n}{9c} \leq \frac{\lambda |D''|}{c}$ and hence $\|\operatorname{h}_r(\mathcal{D}) - \operatorname{h}_r(\mathcal{D''})\|_1 \leq \lambda \min\{n,|D''|\}$. By Lemma \ref{lemma: locality}, $\mathcal{D}$ is $\epsilon$-close to $\mathbf{P}$. \end{claimproof} Claim~\ref{claim: contradiction} gives us a contradiction and therefore for every $\mathcal{D'} \in \mathbf{P}$ such that $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$, we have $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 > f + \mu$ as required. \end{proof} As mentioned in the introduction, Alon~\cite[Proposition 19.10]{lovasz2012large} proved that on bounded degree graphs, for any graph $\mathcal{G}$, radius $r$ and $\epsilon>0$ there always exists a graph $\mathcal{H}$ whose size is independent of $|V(\mathcal{G})|$ and whose $r$-neighbourhood distribution vector satisfies $\|\operatorname{dv}_r(\mathcal{G})-\operatorname{dv}_r(\mathcal{H})\|_1\leq \epsilon$. However, the proof is only existential and does not provide an explicit bound on the size of $\mathcal{H}$. As a corollary to the proof of Theorem~\ref{thrm: small dbs}, we immediately obtain explicit bounds for classes of graphs and relational databases of bounded degree whose histogram vectors form a semilinear set. \begin{corollary}\label{cor:alon} Let $\epsilon \in (0,1]$, $r \in \mathbb{N}$ and $\mathcal{D}$ be a $\sigma$-db that belongs to a class of $\sigma$-dbs $\mathbf{C}$ such that the set $\operatorname{h}_r(\mathbf{C})$ is semilinear, i.e. $\operatorname{h}_r(\mathbf{C}) = M_1 \cup M_2 \cup \dots \cup M_m$ where $m \in \mathbb{N}$ and for each $i \in [m]$, $M_i = \{ \bar{v}_0^i + a_1 \bar{v}_1^i + \dots + a_{k_i} \bar{v}_{k_i}^i \mid a_1,\dots,a_{k_i} \in \mathbb{N} \}$ is a linear set where $\bar{v}_0^i,\dots,\bar{v}_{k_i}^i \in \mathbb{N}^{\operatorname{c}(r)}$. Then there exists a $\sigma$-db $\mathcal{D}_0$ such that \[\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D}_0)\|_1 \leq \epsilon \text{ and } |D_0| \leq kv\Big(\frac{3ckv}{\epsilon} + 2 \Big)\] where $c:=\operatorname{c}(r)$, $k:=\max_{i \in [m]}k_i + 1$ and $v:=\max_{i \in [m]} \Big(\max_{j \in [0,k_i]}\|\bar{v}_j^i\|_1\Big)$. \end{corollary} Our aim is to construct constant time testers for hyperfinite properties whose set of $r$-histograms are semilinear. If we can approximate the $r$-neighbourhood distribution of a $\sigma$-db then by Theorem \ref{thrm: small dbs} we only need to check whether this distribution is close or not to the $r$-neighbourhood distribution of some small constant size $\sigma$-db. We let $\operatorname{EstimateFrequencies}_{r,s}$ be the algorithm that, given oracle access to an input $\sigma$-db $\mathcal{D}$, samples $s$ many elements uniformly and independently from $D$ and computes their $r$-type. 
The algorithm then returns the $r$-neighbourhood distribution vector of the sample. \begin{lemma}[\cite{adler2018property}]\label{lemma: neighbourhood distribution} Let $\mathcal{D} \in \mathbf{C}_d$ be a $\sigma$-db on $n$ elements, $\mu\in (0,1)$ and $r \in \mathbb{N}$. If $s \geq \operatorname{c}(r)^2/\mu^2 \cdot \operatorname{ln}(20\operatorname{c}(r))$, with probability at least ${9}/{10}$ the vector $\bar{v}$ returned by the algorithm $\operatorname{EstimateFrequencies}_{r,s}$ on input $\mathcal{D}$ satisfies $\| \bar{v} - \operatorname{dv}_{r}(\mathcal{D}) \|_1 \leq \mu$. \end{lemma} \begin{theorem}\label{thm: constant time tester} Let $\mathbf{C}$ be closed under removing tuples and let $\mathbf{P} \subseteq \mathbf{C}$ be a property that is hyperfinite on $\mathbf{C}$. If for each $r \in \mathbb{N}$ the set $\operatorname{h}_r(\mathbf{P})$ is semilinear, then $\mathbf{P}$ is uniformly testable on $\mathbf{C}$ in constant time in the $\operatorname{BDRD}_{+/-}$ model. \end{theorem} \begin{proof} Let $\epsilon \in (0,1]$. Let $r:=r(\epsilon)$ be as in Lemma \ref{lemma: locality}, let $n_{\text{min}}:=n_{\text{min}}(\epsilon)$, $n_{\text{max}}:=n_{\text{max}}(\epsilon)$, $f:=f(\epsilon)$ and $\mu:=\mu(\epsilon)$ be as in Theorem \ref{thrm: small dbs} and let $s = \operatorname{c}(r)^2/\mu^2 \cdot \operatorname{ln}(20\operatorname{c}(r))$. Assume that the set $\operatorname{h}_r(\mathbf{P})$ is semilinear. Given oracle access to a $\sigma$-db $\mathcal{D} \in \mathbf{C}$ and $|D|=n$ as an input, the $\epsilon$-tester proceeds as follows: \begin{enumerate} \item If $n \leq n_{\text{max}}$, do a full check of $\mathcal{D}$ and decide if $\mathcal{D} \in \mathbf{P}$. \item Run $\operatorname{EstimateFrequencies}_{r,s}$ and let $\bar{v}$ be the resulting vector. \item If there exists a $\mathcal{D'} \in \mathbf{P}$ where $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$ and $\|\bar{v} -\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f$ then accept, otherwise reject. \end{enumerate} The running time and query complexity of the above tester are constant as $n_{\text{max}}$ is a constant (it only depends on $\mathbf{P}$, $d$ and $\epsilon$) and $\operatorname{EstimateFrequencies}_{r,s}$ runs in constant time and makes a constant number of queries. For correctness, first assume $\mathcal{D} \in \mathbf{P}$. By Theorem \ref{thrm: small dbs} there exists a $\sigma$-db $\mathcal{D'} \in \mathbf{P}$ such that $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$ and $\|\operatorname{dv}_r(\mathcal{D})-\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f - \mu$. By Lemma \ref{lemma: neighbourhood distribution} with probability at least $9/10$, $\|\bar{v} -\operatorname{dv}_r(\mathcal{D})\|_1 \leq \mu$ and therefore $\|\bar{v} -\operatorname{dv}_r(\mathcal{D'})\|_1 \leq f$. Hence with probability at least $9/10$ the tester will accept. Now assume $\mathcal{D}$ is $\epsilon$-far from $\mathbf{P}$. By Theorem \ref{thrm: small dbs} for every $\mathcal{D'} \in \mathbf{P}$ with $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$, we have $\|\operatorname{dv}_r(\mathcal{D}) -\operatorname{dv}_r(\mathcal{D'})\|_1 > f + \mu$. By Lemma \ref{lemma: neighbourhood distribution} with probability at least $9/10$, $\|\bar{v} -\operatorname{dv}_r(\mathcal{D})\|_1 \leq \mu$ and therefore for every $\mathcal{D'} \in \mathbf{P}$ with $n_{\text{min}} \leq |D'| \leq n_{\text{max}}$, $\|\bar{v} -\operatorname{dv}_r(\mathcal{D'})\|_1 > f$. Hence with probability at least $9/10$ the tester will reject.
\end{proof} Combining Theorem \ref{thm: constant time tester}, Lemma \ref{lemma:CMSO semilinear}, and the fact that $\mathbf{C}_d^t$ is hyperfinite~\cite{HassidimKNO09,AlonST90} (and so any property is hyperfinite on $\mathbf{C}_d^t$), we obtain the following as a corollary. \begin{theorem}\label{thm: CMSO testability} Every property $\mathbf{P}$ definable by a CMSO sentence on $\mathbf{C}_d^t$ is uniformly testable on $\mathbf{C}_d^t$ with constant time complexity in the $\operatorname{BDRD}_{+/-}$ model. \end{theorem}
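For illustration, the following Python-style sketch outlines $\operatorname{EstimateFrequencies}_{r,s}$ and the $\epsilon$-tester used in the proof of Theorem~\ref{thm: constant time tester}; the helper routines passed as arguments (the computation of $r$-types, the enumeration of the constant-size databases in $\mathbf{P}$, the exact membership test and the distribution of a small database) are assumptions of the illustration and are not specified here.
\begin{verbatim}
import random
from collections import Counter

def estimate_frequencies(oracle, schema, d, n, r, s, r_type):
    """EstimateFrequencies_{r,s}: sample s elements uniformly and
    return the empirical r-neighbourhood distribution of the sample."""
    counts = Counter()
    for _ in range(s):
        a = random.randint(1, n)
        counts[r_type(oracle, schema, d, a, r)] += 1
    return {tau: c / s for tau, c in counts.items()}

def l1_distance(p, q):
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def epsilon_tester(oracle, schema, d, n, r, s, f, n_max,
                   small_dbs_in_P, dv_r, full_check, r_type):
    """Sketch of the tester: full check for constant-size inputs, otherwise
    compare the estimated distribution with the distributions of the
    constant-size databases in P."""
    if n <= n_max:
        return full_check(oracle, n)                                   # step 1
    v_bar = estimate_frequencies(oracle, schema, d, n, r, s, r_type)   # step 2
    return any(l1_distance(v_bar, dv_r(D)) <= f                        # step 3
               for D in small_dbs_in_P)
\end{verbatim}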
\section{Introduction} \label{section_intro} In order to meet the rapid growth of wireless data rates, the Terahertz (THz) band with ultra-broad bandwidth has gained increasing attention~\cite{Nagatsuma2016Advances}. In the THz band, the ultra-wide bandwidth comes at the cost of huge propagation loss, which drastically limits the communication distance~\cite{8387211}. Fortunately, the sub-millimeter wavelength allows the design of arrays consisting of a large number of antennas at transceivers, e.g., 1024, to enable THz ultra-massive MIMO (UM-MIMO) systems~\cite{AKYILDIZ201646}. By utilizing the ultra-massive antennas, the beamforming technology can provide a high array gain to compensate for the path loss and combat the distance problem. Meanwhile, multiple data streams can be supported to offer a multiplexing gain and further improve the spectral efficiency of THz UM-MIMO systems~\cite{THz_HBF_WCM_2021}. In THz UM-MIMO systems, many hardware constraints preclude the use of conventional digital beamforming, which, instead, motivates the appealing hybrid beamforming~\cite{9398864,9411813}. The hybrid beamforming divides signal processing into the digital baseband domain and analog radio-frequency (RF) domain, which can achieve high spectral efficiency while maintaining low hardware complexity. The fully-connected (FC) and array-of-subarrays (AoSA) architectures are two widely-studied hybrid beamforming architectures~\cite{7436794,9139316,1,7397861,7913599,7445130}. The FC architecture achieves high spectral efficiency while consuming high power. On the contrary, as illustrated in Fig.~\ref{architecture_DS_FPS}(a), the complexity and power consumption of AoSA are noticeably reduced, while the spectral efficiency is largely sacrificed. As a result, the \textit{energy efficiency}, which is defined as the ratio between the spectral efficiency and power consumption, of both architectures is unsatisfactory and needs to be enhanced. Moreover, in THz UM-MIMO systems, due to the large number of antennas and the high carrier frequency, full channel state information (CSI) is difficult to acquire. To address these two practical concerns, we develop a novel energy-efficient hybrid beamforming architecture for THz UM-MIMO systems based on partial CSI in this work. \subsection{Related Work} \subsubsection{Energy-efficient hybrid beamforming architecture} There have been many studies aiming at improving the energy efficiency of hybrid beamforming systems. The authors in~\cite{9205899} propose to jointly optimize the hybrid beamforming matrices and the resolution of DAC/ADC to enhance the energy efficiency. Another promising direction is utilizing low-cost switches to provide dynamic connections. In~\cite{8778669,Aryan_paper1}, the RF chains of the hybrid beamforming architecture are dynamically deactivated to reduce the power consumption and enhance the energy efficiency. Besides RF chain selection, there have also been many efforts on the dynamic connections between phase shifters and antennas~\cite{DAoSA_JSAC_2020,9026753,7880698,8642953,9219133,9110865}. By inserting switches in the FC architecture, a dynamic AoSA (DAoSA) architecture~\cite{DAoSA_JSAC_2020} and a fully-adaptive-connected (FAC) architecture~\cite{9026753} were proposed, where some of the phase shifters are deactivated to reduce power consumption. However, the number of remaining active phase shifters is usually larger than the number of antennas, which causes high power consumption.
A dynamic hybrid beamforming (DHB) architecture was proposed~\cite{7880698,8642953}, as shown in Fig.~\ref{architecture_DS_FPS}(b). Through the switch network, each antenna dynamically selects one RF chain to connect with, which enhances the spectral efficiency. Moreover, the number of phase shifters in the DHB architecture equals the number of antennas, which is less than in the DAoSA and FAC architectures and leads to higher energy efficiency. \begin{figure*} \centering \includegraphics[scale=0.32]{image/total_architecture.pdf} \captionsetup{font={footnotesize}} \caption{The analog part of the architecture of hybrid beamforming. (a) AoSA architecture (b) DS architecture (c) The proposed DS-FPS architecture. } \label{architecture_DS_FPS} \vspace{-9.5mm} \end{figure*} One remaining problem of the DHB architecture is that the phase shifters are assumed to have high or even infinite resolution, which is impractical and power-hungry. To address this problem, the authors of~\cite{9219133,9110865} proposed to use low-resolution phase shifters in the DHB architecture. However, both the above high-resolution and low-resolution phase shifters in~\cite{DAoSA_JSAC_2020,9026753,7880698,8642953,9219133,9110865} are still \textit{adjustable phase shifters}, i.e., the selected phase is adjustable, which incurs high power consumption in the THz band. Fortunately, a \textit{fixed phase shifter} (FPS) can be adopted, whose phase remains fixed and non-adjustable. Compared to adjustable phase shifters, the THz FPS has substantially lower power consumption and is thus more practical. The authors of~\cite{7387790,8310586} proposed to use FPSs, instead of adjustable phase shifters, in the hybrid beamforming architecture, where each antenna is connected with multiple FPSs through switches. As a result, the number of closed switches is usually several times the number of antennas, which is unbearably large in THz UM-MIMO systems and causes huge power consumption. Therefore, to utilize low-cost FPSs while keeping low power consumption, we propose a novel DS-FPS architecture, as shown in Fig.~\ref{architecture_DS_FPS}(c). In the proposed DS-FPS architecture, each RF chain connects with multiple FPSs. Each antenna can dynamically select one FPS through one switch such that the number of closed switches equals the number of antennas, which is much smaller than the number of closed switches in~\cite{7387790,8310586}. As a result, the energy efficiency of the proposed DS-FPS architecture is improved. \subsubsection{Partial CSI} Most of the existing hybrid beamforming studies assume that full CSI is known at both transmitter and receiver. However, the full CSI is hard to obtain in THz UM-MIMO systems due to the large-dimensional channel matrix caused by ultra-massive antennas. To tackle this problem, the authors of~\cite{7737056} considered designing the hybrid beamforming based on partial elements of the channel matrix. Since the overall dimension of the channel matrix in THz UM-MIMO systems is prohibitively large, acquiring partial elements of the channel matrix is still difficult. The authors of~\cite{8642953,7572969,9107073} proposed hybrid beamforming solutions considering the statistical information of the channel. However, due to the lack of a well-known general statistical MIMO channel model in the THz band, it is also challenging to know the statistical channel information.
A more practical partial CSI was considered in~\cite{7908940,6522603}, where only the directions and the amplitude of the path gains of the multipath are known. However, the hybrid beamforming algorithms in~\cite{7908940,6522603} were proposed for the FC architecture with adjustable phase shifters and did not consider the use of low-cost FPSs, and thus cannot be applied to the proposed DS-FPS architecture in this work. Consequently, by considering partial CSI, novel hybrid beamforming algorithms need to be proposed for the DS-FPS architecture. \subsection{Our Contributions} In this work, we propose an energy-efficient DS-FPS architecture, by utilizing the low-cost FPS and dynamic switch network, as shown in Fig.~\ref{architecture_DS_FPS}(c). Moreover, we consider the practical partial CSI, i.e., only the directions and the amplitude of the path gains of the multipath are known. Since the number of multipath components of the THz channel is usually around $5$~\cite{6998944}, the number of required parameters in the considered partial CSI is very limited. Furthermore, by considering partial CSI, we propose two CSI-robust hybrid beamforming algorithms for the THz DS-FPS architecture. In the prior and shorter version of this work~\cite{DS_GC_2020}, we concisely investigated the DS-FPS architecture, while the consideration of partial CSI, the corresponding hybrid beamforming algorithms, and the performance comparisons with existing work in terms of spectral and energy efficiencies were not thoroughly studied. The distinctive features of this work are summarized as follows. \begin{itemize} \item \textbf{We propose an energy-efficient DS-FPS architecture, by using the low-cost FPS and switch network.} With a fixed and nonadjustable phase, the FPS is more practical and consumes less power than the adjustable phase shifter in the THz band, which, however, brings spectral efficiency loss. To address this problem, we further design a switch network to enable dynamic connections between the antennas and FPSs. Each antenna can intelligently select one FPS with the proper phase from all FPSs to adapt to the THz UM-MIMO channel, which enhances the spectral efficiency. \item \textbf{By considering partial CSI, i.e., the directions and amplitude of path gains of multipath propagation, we formulate the hybrid beamforming problem for the DS-FPS architecture and propose two CSI-robust hybrid beamforming algorithms.} Specifically, we first propose a row-successive-decomposition (RSD) algorithm. The key idea is to derive an approximate form of the spectral efficiency, which only relies on partial CSI, and then to optimize each row of the switch network matrix successively. Furthermore, to reduce the computational complexity brought by the successive design, we propose a row-by-row (RBR) algorithm, which decomposes the optimization of each row of the switch network matrix into multiple uncorrelated sub-problems and solves them in parallel. \item \textbf{We evaluate the performance of the proposed DS-FPS architecture with the RSD and RBR algorithms and analyze the computational complexity.} Specifically, we show that the DS-FPS architecture achieves significantly higher energy efficiency than the existing architectures. Moreover, with partial CSI, the spectral efficiency of the RSD and RBR algorithms is similar to the case of full CSI and is robust to the CSI error. Furthermore, we analyze the computational complexity of the RBR algorithm, which is much lower than that of the existing algorithms.
Compared to the RBR algorithm, the RSD algorithm yields a higher spectral efficiency, at the cost of increased computational complexity. \end{itemize} The remainder of this paper is organized as follows. In Sec.~\ref{section_channel_system_model}, we present the channel model and system model, and formulate the hybrid beamforming problem for the THz DS-FPS architecture. Then, an RSD algorithm and a low-complexity RBR algorithm are proposed to solve the DS-FPS hybrid beamforming problem in Sec.~\ref{section_RSD_algorithm} and Sec.~\ref{section_RBR_algorithm}, respectively. Furthermore, simulation results are provided in Sec.~\ref{section_simulation}. Finally, the conclusion is drawn in Sec.~\ref{section_conclusion}. \textbf{Notations}: $\textbf{A}$ is a matrix, $\textbf{a}$ is a vector, $a$ is a scalar. $\textbf{I}_{N}$ denotes an $N$-dimensional identity matrix. $(\cdot)^T$, $(\cdot)^*$, and $(\cdot)^{H}$ represent transpose, conjugate, and conjugate transpose. $\lVert\cdot\rVert_{p}$ is the $p$-norm of the vector. $\lVert\cdot\rVert_{F}$ is the Frobenius norm of the matrix. ${\rm Tr}(\cdot)$ and ${\rm Re}(\cdot)$ denote the trace and real part of the matrix. ${\rm blkdiag}(\cdot)$ denotes the block diagonal matrix. $\odot$ is the element-wise product. $\otimes$ represents the Kronecker product. \section{Channel Model and System Model of THz DS-FPS Hybrid Beamforming} \label{section_channel_system_model} In this section, we first introduce the THz channel model and the consideration of partial CSI. Then, we investigate the system model with the DS-FPS architecture. Based on the considered partial CSI, we formulate the hybrid beamforming design problem of the DS-FPS architecture. \subsection{Channel Model and the Practical Partial CSI} We consider a wideband multi-carrier THz UM-MIMO system, where $k=1, 2, \ldots, K$ denotes the index of the sub-carrier. $N_t$ and $N_r$ represent the numbers of antennas at the transmitter and receiver, respectively. The THz channel is usually very sparse, i.e., with limited multipath components~\cite{6998944}. Hence, we adopt a multipath model which is usually used for sparse MIMO channels as follows. For the $k^{\rm th}$ sub-carrier whose frequency and wavelength are $f_k$ and $\lambda_k$, the channel matrix $\textbf{H}[k]\in\mathbb{C}^{N_r\times N_t}$ can be written as~\cite{6998944,9398864} \begin{subequations} \begin{align} \textbf{H}[k]&=\sum\nolimits_{i=1}^{N_p}\!\alpha_{i}[k] \textbf{a}_{ri}[k]\textbf{a}_{ti}[k]^{H}\label{channel_model_planar_1}\\ &=\textbf{A}_{r}[k]\bm{\Lambda}[k]\textbf{A}_{t}[k]^H \label{channel_model_planar_2}\\ &=\textbf{A}_{r}[k]\left(\bm{\bar\Lambda}[k]\odot e^{j\bm{\dot\Lambda}[k]}\right)\textbf{A}_{t}[k]^H, \label{channel_model_planar_3} \end{align} \label{channel_model_planar}% \end{subequations} where $N_p$ is the number of propagation paths, $\alpha_{i}[k]$ describes the complex path gain of the $i^{\rm th}$ path, $\textbf{a}_{ri}[k]$ and $\textbf{a}_{ti}[k]$ denote the received and transmitted array response vectors for the $i^{\rm th}$ path of the $k^{\rm th}$ sub-carrier, which are the $i^{\rm th}$ column of $\textbf{A}_r[k]$ and $\textbf{A}_t[k]$, respectively. Additionally, $\bm{\Lambda}[k]\in\mathbb{C}^{N_p\times N_p}$ is a diagonal matrix, whose element at the $i^{\rm th}$ row and $i^{\rm th}$ column is $\alpha_{i}[k]$. We use $\bm{\bar\Lambda}[k]$ and $\bm{\dot\Lambda}[k]$ to denote the amplitude and the phase of $\bm{\Lambda}[k]$, respectively. In this work, we consider a THz system at 0.3 THz.
Specifically, we use the ray-tracing method to generate the directions, interactions with objects, and propagation distances of each propagation path, as shown in Sec.~\ref{section_simulation}-A. Then, we compute the path gains according to our previous THz channel work~\cite{6998944}, by considering the THz-specific propagation characteristics, including i) the high spreading loss, ii) the strong molecular absorption phenomenon, which renders severe frequency selectivity and the resulting temporal broadening effects, and iii) the strong penetration loss and rough-surface scattering, among others. As a result, the considered THz channel is different from mmWave channels. We describe the expression for $\textbf{a}_{ri}[k]$ in \eqref{steering_vector_UPA}, while the expression for $\textbf{a}_{ti}[k]$ is obtained analogously. For an $L\times W$-element uniform planar array in the $yz$-plane, $\textbf{a}_{ri}[k]$ can be written as \begin{equation} \textbf{a}_{ri}[k]=\big[1, ... ,e^{j\frac{2\pi}{\lambda_k}d(L-1){\rm sin}(\phi_{ri}){\rm sin}(\theta_{ri})}\big]^T\otimes\big[1, ... ,e^{j\frac{2\pi}{\lambda_k}d(W-1){\rm cos}(\theta_{ri})}\big]^T, \label{steering_vector_UPA}% \end{equation} where $\phi_{ri}$ and $\theta_{ri}$ are the azimuth and elevation direction of arrival (DoA) of the $i^{\rm th}$ path. For $\textbf{a}_{ti}[k]$, the angles $\phi_{ti}$ and $\theta_{ti}$ denote the azimuth and elevation direction of departure (DoD). Moreover, $d$ is the antenna spacing, which is half of the wavelength at the central frequency. \subsubsection{Partial CSI} The channel matrix $\textbf{H}[k]$ is determined by the DoA, DoD, and path gain of each path. There have been many studies that jointly estimate the DoA, DoD, and the path gain at either the transmitter or the receiver~\cite{7400949}. After the feedback, both the transmitter and receiver know the DoA, DoD, and path gain. In this work, we consider the partial CSI scenario, where the transmitter knows the DoD and the amplitude of the path gains, i.e., $\textbf{A}_t[k]$ and $\bm{\bar\Lambda}[k]$ in~\eqref{channel_model_planar_3}, while the receiver knows the DoA and the amplitude of the path gains, i.e., $\textbf{A}_r[k]$ and $\bm{\bar\Lambda}[k]$. There have been multiple low-complexity methods~\cite{7914742,8949442} that can estimate the DoA at the receiver and the DoD at the transmitter, respectively. During the estimation of the DoA and DoD, the amplitude of the path gains can also be acquired. Consequently, the partial CSI considered in this work is practical. For sparse channels, e.g., the THz channel in this work, the DoD and DoA are the key information required for hybrid beamforming design. For rich-scattering channels, it has been shown in~\cite{8647690} that the DoD and DoA can be mapped to the channel correlation matrix, which is a realistic requirement for the hybrid beamforming design to achieve high spectral efficiency. \subsection{System Model of DS-FPS Hybrid Beamforming} Most of the existing mmWave hybrid beamforming studies~\cite{1,7397861,7913599,7445130,9026753,7880698,8642953,9219133,9110865} have used adjustable phase shifters, whose quantity is usually proportional to the number of antennas. For communications in the THz band, due to the higher frequency and larger number of antennas, the power consumption of adjustable phase shifters becomes prohibitively high and thus impractical. To address this, we use low-cost FPSs to construct a DS-FPS architecture, as shown in Fig.~\ref{architecture_DS_FPS}(c).
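Before detailing the DS-FPS transceiver model, we make the channel construction and the assumed partial CSI concrete with the following minimal numpy sketch, which assembles $\textbf{H}[k]$ of a single sub-carrier from \eqref{channel_model_planar} and \eqref{steering_vector_UPA}. It is an illustration only: the angles and path gains are drawn at random here, whereas the channels used in Sec.~\ref{section_simulation} are generated by ray tracing and the path-gain model of~\cite{6998944}, and all variable and function names are illustrative rather than taken from any existing toolbox.
\begin{verbatim}
import numpy as np

def steering_vector(L, W, d_over_lambda, phi, theta):
    # UPA response in the yz-plane, cf. the expression of a_ri[k]
    ay = np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(L)
                * np.sin(phi) * np.sin(theta))
    az = np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(W)
                * np.cos(theta))
    return np.kron(ay, az)                        # length L * W

rng = np.random.default_rng(0)
L = W = 32                                        # 32 x 32 UPA, 1024 antennas
N_p = 5                                           # sparse THz multipath
phi_t, theta_t = rng.uniform(0.1, 1.4, (2, N_p))  # DoD in rad (illustrative)
phi_r, theta_r = rng.uniform(0.1, 1.4, (2, N_p))  # DoA in rad (illustrative)
alpha = (rng.normal(size=N_p) + 1j * rng.normal(size=N_p)) / np.sqrt(2)

A_t = np.stack([steering_vector(L, W, 0.5, p, t)
                for p, t in zip(phi_t, theta_t)], axis=1)   # N_t x N_p
A_r = np.stack([steering_vector(L, W, 0.5, p, t)
                for p, t in zip(phi_r, theta_r)], axis=1)   # N_r x N_p
H = A_r @ np.diag(alpha) @ A_t.conj().T           # H[k] = A_r Lambda A_t^H

# Partial CSI: the transmitter keeps only A_t and |alpha|, the receiver
# keeps only A_r and |alpha|; the phases of alpha are treated as unknown.
bar_Lambda = np.abs(alpha)
\end{verbatim}
A wideband system simply repeats this construction for each sub-carrier with the corresponding wavelength $\lambda_k$.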
As will be shown in Sec.~\ref{section_simulation}, the DS-FPS architecture has much lower power consumption and higher energy efficiency than the existing architectures based on adjustable phase shifters, which indeed tackles the power consumption challenge of the THz band, although its spectral efficiency is lower than that of some existing architectures, e.g., the FC architecture. However, in mmWave systems, since the power consumption of adjustable phase shifters is still acceptable, as considered by most existing studies~\cite{7445130,9026753,9219133,9110865}, it may be unnecessary to use the DS-FPS architecture for enhancing energy efficiency at the cost of spectral efficiency degradation. Hence, the DS-FPS architecture is THz-specific. We set the number of RF chains to $L_t$, and each RF chain connects to $Q$ FPSs. The provided phases of the FPSs are fixed as $\Phi_1$, $\Phi_2$, ..., $\Phi_Q$, respectively. One drawback of the FPS is that its phase is fixed, while the phase required to steer beams varies with the channel, which compromises the spectral efficiency. To overcome this drawback, we propose a switch network to enable dynamic connections, where each antenna can connect to any one of the $L_tQ$ FPSs, i.e., it dynamically selects one proper phase from $\Phi_1$, $\Phi_2$, ..., $\Phi_Q$ to perform analog beamforming. Since the required phase of analog beamforming at each antenna may be an arbitrary value in $[0,2\pi)$ as the channel varies, we set the phases of the FPSs uniformly in $[0,2\pi)$, i.e., $\Phi_{i}=\frac{2\pi(i-1)}{Q}$ for $i=1,2,\ldots,Q$, so that the phase error of the beamforming weight at each antenna is no larger than $\frac{\pi}{Q}$. The system model of the DS-FPS hybrid beamforming architecture at the $k^{\rm th}$ sub-carrier can be expressed as \begin{equation} \textbf{y}[k] = \textbf{W}[k]^H\textbf{H}[k]\textbf{S}\textbf{F}\textbf{D}[k]\textbf{s}[k] + \textbf{W}[k]^H\textbf{n}[k], \label{system_model_DS} \end{equation} where $\textbf{s}[k]\in\mathbb{C}^{N_s\times1}$ and $\textbf{y}[k]\in\mathbb{C}^{N_s\times1}$ denote the transmitted and received signals, and $N_s$ is the number of data streams. $\textbf{n}[k]\in\mathbb{C}^{N_r\times 1}$ is the noise vector. $\textbf{W}[k]\in\mathbb{C}^{N_r\times N_s}$ is the combining matrix at the receiver. The $N_t\times L_tQ$-dimensional binary matrix $\textbf{S}$ and the $L_tQ\times L_t$-dimensional matrix $\textbf{F}$ represent the switch network matrix and the phase matrix of the FPS network, respectively. The phase of each FPS and the state of each switch are identical across sub-carriers. Hence, the frequency index $[k]$ is omitted in $\textbf{F}$ and $\textbf{S}$. As a result, $\textbf{F}$ can be written as \begin{equation} \textbf{F}={\rm blkdiag}(\underbrace{\textbf{f},\ldots,\textbf{f}}_{L_t}), \label{structure_FPS_network} \end{equation} where $\textbf{f}=[e^{j\Phi_1},e^{j\Phi_2},\ldots,e^{j\Phi_Q}]^T$ represents the phase vector generated by the $Q$ FPSs of each RF chain. Since each antenna only selects one FPS from all $L_tQ$ FPSs through one closed switch, each row of $\textbf{S}$ has only one `1' and the other elements are `0's, i.e., $\lVert\textbf{S}_i\rVert_{0}=1, i=1,2,\ldots,N_t$, where $\textbf{S}_{i}$ denotes the $i^{\rm th}$ row of $\textbf{S}$. $\textbf{D}[k]\in\mathbb{C}^{L_t\times N_s}$ represents the digital beamforming matrix.
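As a minimal illustration of this structure (a sketch only, with illustrative values $L_t=4$ and $Q=8$; the variable names are ours), the following numpy snippet builds $\textbf{F}$ in \eqref{structure_FPS_network} and a feasible one-hot $\textbf{S}$, and checks that every row of the resulting analog beamformer $\textbf{S}\textbf{F}$ contains exactly one unit-modulus entry drawn from the $Q$ fixed phases.
\begin{verbatim}
import numpy as np

L_t, Q, N_t = 4, 8, 1024                   # RF chains, FPSs per chain, antennas
Phi = 2 * np.pi * np.arange(Q) / Q         # fixed phases, uniform in [0, 2*pi)
f = np.exp(1j * Phi).reshape(-1, 1)        # Q x 1 phase vector of one RF chain
F = np.kron(np.eye(L_t), f)                # (L_t*Q) x L_t = blkdiag(f, ..., f)

rng = np.random.default_rng(1)
sel = rng.integers(0, L_t * Q, size=N_t)   # FPS index chosen by each antenna
S = np.zeros((N_t, L_t * Q))
S[np.arange(N_t), sel] = 1.0               # one closed switch per row of S

A = S @ F                                  # N_t x L_t analog beamformer
assert all(np.count_nonzero(A[i]) == 1 for i in range(N_t))
assert np.allclose(np.abs(A[A != 0]), 1.0) # unit-modulus fixed-phase weights
# With a digital beamformer D[k] of size L_t x N_s, the transmit precoder
# used in the system model is S @ F @ D[k].
\end{verbatim}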
The transmit power constraint is enforced on $\textbf{D}[k]$ as $\sum_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert^{2}_{F}=\rho$, where $\rho$ is the transmit power of the DS-FPS architecture. \subsection{Design Problem: Maximize the Spectral Efficiency with Partial CSI} The spectral efficiency of the DS-FPS architecture can be expressed as~\cite{7389996} \begin{align} SE\!=\!\frac{1}{K}\sum\nolimits_{k=1}^{K}\!\!\!\!{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_r}\!+\!\frac{1}{\sigma^{2}_{k}}\textbf{W}[k](\textbf{W}[k]^H\textbf{W}[k])^{-1}\textbf{W}[k]^H\textbf{H}[k]\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H\textbf{H}[k]^{H}\Big\rvert\Big), \label{SE_formulation} \end{align} where $\sigma^{2}_{k}$ denotes the noise power of the $k^{\rm th}$ sub-carrier. To focus on the analysis of the DS-FPS architecture at the transmitter, we consider that the receiver is equipped with optimal digital combining, i.e., $\textbf{W}[k]$ is the first $N_s$ columns of the left singular matrix of $\textbf{H}[k]$. It has been studied in~\cite{7389996,7445130} that, when designing the transmitter to maximize the spectral efficiency, the influence of the combining matrix $\textbf{W}[k]$ at the receiver can be decoupled. Hence, the design problem of $\textbf{S}$ and $\textbf{D}[k]$ can be formulated as~\cite{7389996,7445130} \begin{subequations} \begin{align} &\mathop{\rm max\ }\limits_{\textbf{S},\textbf{D}[k]}\frac{1}{K}\sum\nolimits_{k=1}^{K}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_r}+\frac{1}{\sigma^{2}_{k}}\textbf{H}[k]\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H\textbf{H}[k]^{H}\Big\rvert\Big) \label{design_problem_objective}\\ &\mathrm{s.t.}\ \textbf{S}_{i,l}\in\{0,1\}, \lVert\textbf{S}_{i}\rVert_{0}=1,\forall i,l \label{design_problem_constraints_1}\\ &\quad \ \ \sum\nolimits_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert^{2}_{F}=\rho, \label{design_problem_constraints_2} \end{align} \label{design_problem}% \end{subequations} where $\textbf{S}_{i,l}$ denotes the element at the $i^{\rm th}$ row and the $l^{\rm th}$ column of $\textbf{S}$. In this work, we focus on the solution to~\eqref{design_problem} at the transmitter, while the design of the DS-FPS architecture at the receiver side is similar, by rewriting~\eqref{design_problem} into the form of combining matrices and applying the algorithms proposed in the following. \section{Row-successive-decomposition (RSD) Algorithm} \label{section_RSD_algorithm} In this section, we propose an RSD algorithm to solve the design problem. One main challenge is that the objective function is related to the full CSI $\textbf{H}[k]$, while the transmitter only knows $\textbf{A}_t[k]$ and $\bm{\bar\Lambda}[k]$. To tackle this problem, we first derive an approximate form of the objective function~\eqref{design_problem_objective}, which only relies on $\textbf{A}_t[k]$ and $\bm{\bar\Lambda}[k]$ and excludes the unknown $\textbf{H}[k]$. Then, we decompose the rows of the switch network matrix $\textbf{S}$ successively to transform the intractable design problem into multiple tractable sub-problems, which overcomes the non-convex binary constraint of $\textbf{S}$. \subsection{Approximate Form of~\eqref{design_problem_objective}} As analyzed in~\eqref{channel_model_planar}, $\textbf{H}[k]=\textbf{A}_{r}[k]\bm{\Lambda}[k]\textbf{A}_{t}[k]^H=\textbf{A}_{r}[k]\left(\bm{\bar\Lambda}[k]\odot e^{j\bm{\dot\Lambda}[k]}\right)\textbf{A}_{t}[k]^H.
Hence, the objective function~\eqref{design_problem_objective} can be rearranged as \begin{subequations} \begin{align} &\quad \ \frac{1}{K}\sum\nolimits_{k=1}^{K}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_r}+(1/\sigma_k^2)\textbf{H}[k]\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H\textbf{H}[k]^{H}\Big\rvert\Big)\\ &=\frac{1}{K}\sum\nolimits_{k=1}^{K}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_r}+(1/\sigma_k^2)\textbf{A}_{r}[k]\bm{\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{G}[k]\textbf{A}_{t}[k]\bm{\Lambda}[k]^H\textbf{A}_{r}[k]^H\Big\rvert\Big)\label{expression_SE_approx_1}\\ &=\frac{1}{K}\sum\nolimits_{k=1}^{K}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(1/\sigma_k^2)\bm{\Lambda}[k]^H\textbf{A}_{r}[k]^H\textbf{A}_{r}[k]\bm{\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{G}[k]\textbf{A}_{t}[k]\Big\rvert\Big)\label{expression_SE_approx_2}\\ &\approx\frac{1}{K}\sum\nolimits_{k=1}^{K}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]^2\textbf{A}_{t}[k]^H\textbf{G}[k]\textbf{A}_{t}[k]\Big\rvert\Big)\label{expression_SE_approx_3}\\ &=\frac{1}{K}\sum\nolimits_{k=1}^{K}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H\textbf{A}_{t}[k]\bm{\bar\Lambda}[k]^H\Big\rvert\Big), \label{expression_SE_approx_4} \end{align} \end{subequations} where $\textbf{G}[k]=\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H$ is included in~\eqref{expression_SE_approx_1}. Moreover, \eqref{expression_SE_approx_2} comes from the property of the determinant that ${\rm log}_2(\lvert\textbf{I}+\textbf{X}\textbf{Y}\rvert)={\rm log}_2(\lvert\textbf{I}+\textbf{Y}\textbf{X}\rvert)$, where $\textbf{X}=\textbf{A}_{r}[k]\bm{\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{G}[k]\textbf{A}_{t}[k]$ and $\textbf{Y}=\bm{\Lambda}[k]^H\textbf{A}_{r}[k]^H$. In addition, \eqref{expression_SE_approx_3} follows the \textit{Approximation 1} as below, i.e., $\textbf{A}_r[k]^H\textbf{A}_r[k]\approx N_r\textbf{I}_{N_p}$. Consequently, we have $\bm{\Lambda}[k]^H\textbf{A}_{r}[k]^H\textbf{A}_{r}[k]\bm{\Lambda}[k]\approx N_r\bm{\Lambda}[k]^H\bm{\Lambda}[k]$. Moreover, since $\bm{\Lambda}[k]$ is a diagonal and square matrix as analyzed in~\eqref{channel_model_planar}, we have $\bm{\Lambda}[k]^H\bm{\Lambda}[k]=\bm{\bar\Lambda}[k]^2$. Last, \eqref{expression_SE_approx_4} follows the property of the determinant similar to~\eqref{expression_SE_approx_2}. \textit{Approximation 1:} In THz UM-MIMO systems, $\textbf{A}_r[k]^H\textbf{A}_r[k]\approx N_r\textbf{I}_{N_p}$, for $k=1$, $2$, ..., $K$. \textit{Proof:} The $i^{\rm th}$ column of $\textbf{A}_{r}[k]$ is $\textbf{a}_{ri}[k]$ in~\eqref{steering_vector_UPA}. The element of $\textbf{A}_{r}[k]^H\textbf{A}_{r}[k]$ at the $i^{\rm th}$ row and the $l^{\rm th}$ column can be represented as $\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]$. Hence, \textit{Approximation 1} is equivalent to \begin{equation} \frac{1}{N_r}\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\approx\mathbbm{1}(i=l), \label{equivalent_approx_1} \end{equation} where $\mathbbm{1}(i=l)$ is the indicator function that $\mathbbm{1}=1$ when $i=l$ and $\mathbbm{1}=0$ when $i\neq l$. According to the structure of $\textbf{a}_{ri}[k]$ in~\eqref{steering_vector_UPA}, when $i=l$, $\frac{1}{N_r}\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]$ is indeed equal to $1$. 
Next, we show that $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert\approx0$ when $i\neq l$, which leads to $\frac{1}{N_r}\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\approx0$. Specifically, $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert$ can be expressed as \begin{subequations} \begin{align} &\qquad\qquad\quad \ \begin{aligned}\frac{1}{N_r}\Big\lvert\Big(\big[1&, ... ,e^{-j\frac{2\pi}{\lambda_k}d(L-1){\rm sin}(\phi_{ri}){\rm sin}(\theta_{ri})}\big]\otimes\big[1, ... ,e^{-j\frac{2\pi}{\lambda_k}d(W-1){\rm cos}(\theta_{ri})}\big]\Big)\\&\times\Big(\big[1, ... ,e^{j\frac{2\pi}{\lambda_k}d(L-1){\rm sin}(\phi_{rl}){\rm sin}(\theta_{rl})}\big]^T\otimes\big[1, ... ,e^{j\frac{2\pi}{\lambda_k}d(W-1){\rm cos}(\theta_{rl})}\big]^T\Big)\Big\lvert \end{aligned} \label{approx_leq_1}\\ &\qquad\qquad\ \! =\frac{1}{N_r}\left\lvert\sum\nolimits_{a=0}^{L-1} e^{j\frac{2\pi d}{\lambda_k}a({\rm sin}(\phi_{rl}){\rm sin}(\theta_{rl})-{\rm sin}(\phi_{ri}){\rm sin}(\theta_{ri}))}\sum\nolimits_{b=0}^{W-1} e^{j\frac{2\pi d}{\lambda_k}b({\rm cos}(\theta_{rl})-{\rm cos}(\theta_{ri}))}\right\rvert \label{approx_leq_2}\\ &\qquad\qquad\ \!=\frac{1}{N_r}\left\lvert\frac{{\rm sin}(\frac{\pi d}{\lambda_k}L\Psi_1)}{{\rm sin}(\frac{\pi d}{\lambda_k}\Psi_1)}\times\frac{{\rm sin}(\frac{\pi d}{\lambda_k}W\Psi_2)}{{\rm sin}(\frac{\pi d}{\lambda_k}\Psi_2)}\right\rvert \label{approx_leq_3}\\ &\qquad\qquad\ \!\leq\frac{1}{N_r}\frac{1}{\big\lvert{\rm sin}(\frac{\pi d}{\lambda_k}\Psi_1){\rm sin}(\frac{\pi d}{\lambda_k}\Psi_2)\big\rvert}, \label{approx_leq_4} \end{align} \end{subequations} where~\eqref{approx_leq_2} follows the mixed-product property of the Kronecker product. $\Psi_1={\rm sin}(\phi_{rl}){\rm sin}(\theta_{rl})-{\rm sin}(\phi_{ri}){\rm sin}(\theta_{ri})$, $\Psi_2={\rm cos}(\theta_{rl})-{\rm cos}(\theta_{ri})$ in~\eqref{approx_leq_3}. \eqref{approx_leq_4} comes from the fact that ${\rm sin}(\frac{\pi d}{\lambda_k}L\Psi_1)\leq1$ and ${\rm sin}(\frac{\pi d}{\lambda_k}W\Psi_2)\leq1$. Since $i\neq l$, we have $\Psi_1\neq0$, $\Psi_2\neq0$, and $\lvert{\rm sin}(\frac{\pi d}{\lambda_k}\Psi_1){\rm sin}(\frac{\pi d}{\lambda_k}\Psi_2)\rvert\neq0$. Therefore, in THz UM-MIMO systems with ultra-massive antennas, e.g., $N_r\geq1024$, $\frac{1}{N_r}\lvert\textbf{a}_{ri}^H\textbf{a}_{rl}\rvert\leq\frac{1}{N_r}\frac{1}{\lvert{\rm sin}(\frac{\pi d}{\lambda_k}\Psi_1){\rm sin}(\frac{\pi d}{\lambda_k}\Psi_2)\rvert}$ is usually very close to $0$ and the approximation $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert\approx0$ holds. The special case is that when the directions of the $i^{\rm th}$ path and the $l^{\rm th}$ path are similar, $\Psi_1$ and $\Psi_2$ are small such that $\lvert{\rm sin}(\frac{\pi d}{\lambda_k}\Psi_1){\rm sin}(\frac{\pi d}{\lambda_k}\Psi_2)\rvert$ is close to $0$ and the approximation error of $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert\approx0$ is large. \begin{figure*} \setlength{\belowcaptionskip}{0pt} \centering \captionsetup{font={footnotesize}} \subfigure[Formula~\eqref{approx_leq_1} versus $\theta_{rl}$ and $\phi_{rl}$.]{ \includegraphics[scale=0.26]{image/approximations/Approx_1.pdf}} \subfigure[Vertical view of Fig.~\ref{fig_approx}(a).]{ \includegraphics[scale=0.26]{image/approximations/Approx_2_p.pdf}} \caption{$\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert$ in~\eqref{approx_leq_1} versus $\theta_{rl}$ and $\phi_{rl}$, where $\theta_{ri}=60^{\circ}$ and $\phi_{ri}=30^{\circ}$. 
$L=W=32$, $N_r=L\times W=1024$, $\lambda_k=1$ mm ($0.3$ THz), $d=0.5\lambda_k$.} \label{fig_approx} \vspace{-7.5mm} \end{figure*} We further assess the approximation error in Fig.~\ref{fig_approx}. The direction of the $i^{\rm th}$ path is $\phi_{ri}=30^{\circ}$ and $\theta_{ri}=60^{\circ}$. As shown in Fig.~\ref{fig_approx}(a) and Fig.~\ref{fig_approx}(b), we plot $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert$ versus the direction of the $l^{\rm th}$ path, i.e., $\phi_{rl}$ and $\theta_{rl}$. Due to the symmetry of the sinusoidal functions, we only consider the cases where $0^{\circ}\leq\phi_{rl}\leq90^{\circ}$ and $0^{\circ}\leq\theta_{rl}\leq90^{\circ}$, while the remaining cases are analogous. For most angles of $\phi_{rl}$ and $\theta_{rl}$, which lie in the blue region, $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert\approx0$. When $\phi_{rl}$ and $\theta_{rl}$ are very close to $\phi_{ri}$ and $\theta_{ri}$, i.e., the $l^{\rm th}$ path lies in the highlighted region $\{(\phi_{rl},\theta_{rl})\big\lvert\phi_{ri}-4^{\circ}<\phi_{rl}<\phi_{ri}+4^{\circ}\cap\theta_{ri}-3^{\circ}<\theta_{rl}<\theta_{ri}+3^{\circ}\}$, $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert$ is not close to $0$. The blue region accounts for a very large proportion of the space, i.e., more than $99\%$. Moreover, the THz channel is usually sparse, e.g., the number of multipath components in the spatial domain is around $5$~\cite{6998944}. In light of this, the probability that the $l^{\rm th}$ path lies in the highlighted region is very small. Therefore, the approximation $\frac{1}{N_r}\lvert\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\rvert\approx0$, i.e., $\frac{1}{N_r}\textbf{a}_{ri}[k]^H\textbf{a}_{rl}[k]\approx0$, holds reasonably well. \hfill $\blacksquare$ \subsection{Design of Digital Beamforming Matrix {\rm \textbf{D}[{\it k}]}} So far, we have obtained an approximate form~\eqref{expression_SE_approx_4} of the original objective function~\eqref{design_problem_objective}, which relies only on the partial CSI $\bm{\bar\Lambda}[k]$ and $\textbf{A}_t[k]$. Next, we present how to use~\eqref{expression_SE_approx_4} to design $\textbf{D}[k]$ and $\textbf{S}$. We first analyze the solution of $\textbf{D}[k]$ by assuming that $\textbf{S}$ has been determined. The maximization of the original objective~\eqref{design_problem_objective} can be transformed into the maximization of~\eqref{expression_SE_approx_4} as \begin{equation} \mathop{\rm max\ }\limits_{\textbf{S},\textbf{D}[k]}\frac{1}{K}\sum\nolimits_{k=1}^{K}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+\frac{N_r}{\sigma^{2}_{k}}\bm{\bar\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H\textbf{A}_{t}[k]\bm{\bar\Lambda}[k]^H\Big\rvert\Big), \label{design_problem_1} \end{equation} where the constraints of $\textbf{S}$ and $\textbf{D}[k]$ are the same as in~\eqref{design_problem}.
Assuming that $\textbf{S}$ has been determined and omitting the transmit power constraint temporarily, the solution of $\textbf{D}[k]$ to maximize \eqref{design_problem_1} is \begin{equation} \textbf{D}[k]=\widetilde{\textbf{V}}_{N_s}[k]\widetilde{\bm{\Gamma}}[k], \label{solution_D_RSD} \end{equation} where $\widetilde{\textbf{V}}_{N_s}[k]$ is the first $N_s$ columns of $\widetilde{\textbf{V}}[k]$, which comes from the singular value decomposition (SVD) of $\textbf{H}_e[k]=\bm{\bar\Lambda}[k]\textbf{A}_t[k]^H\textbf{S}\textbf{F}$ given by $\textbf{H}_e[k]=\widetilde{\textbf{U}}[k]\widetilde{\bm{\Sigma}}[k]\widetilde{\textbf{V}}[k]^H$. Moreover, $\widetilde{\bm{\Gamma}}[k]$ is the power allocation matrix, for which the water-filling allocation is the optimal. Despite so, to reduce the computational complexity, we consider the more practical equal-power allocation such that $\widetilde{\bm{\Gamma}}[k]=\textbf{I}_{N_s}$. The consideration of water-filling power allocation and the corresponding analog and digital beamforming solution can be considered in the future work. \subsection{Design of Switch Network Matrix {\rm \textbf{S}}} Next, we successively decompose the rows of $\textbf{S}$ to enable the design of $\textbf{S}$ for the maximization of~\eqref{design_problem_1}. By substituting the solution of $\textbf{D}[k]$ in \eqref{solution_D_RSD} into \eqref{design_problem_1}, the $k^{\rm th}$ term of~\eqref{design_problem_1} is derived as \begin{subequations} \begin{align} &\quad \ {\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H\textbf{A}_{t}[k]\bm{\bar\Lambda}[k]^H\Big\rvert\Big) \label{relaxation_SE_1}\\ &={\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\textbf{H}_e[k]\widetilde{\textbf{V}}_{N_s}\textbf{I}_{N_s}\textbf{I}_{N_s}^H\widetilde{\textbf{V}}_{N_s}^H\textbf{H}_e[k]^H\Big\rvert\Big) \label{relaxation_SE_2}\\ &={\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\widetilde{\textbf{U}}\widetilde{\bm{\Sigma}}\widetilde{\textbf{V}}^H\widetilde{\textbf{V}}_{N_s}\widetilde{\textbf{V}}_{N_s}^H\widetilde{\textbf{V}}\widetilde{\bm{\Sigma}}^H\widetilde{\textbf{U}}^H\Big\rvert\Big) \label{relaxation_SE_3}\\ &={\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\widetilde{\bm{\Sigma}}^H\widetilde{\textbf{U}}^H\widetilde{\textbf{U}}\widetilde{\bm{\Sigma}}[\widetilde{\textbf{V}}_{N_s},\widetilde{\textbf{V}}_{\epsilon} ]^H\widetilde{\textbf{V}}_{N_s}\widetilde{\textbf{V}}_{N_s}^H[\widetilde{\textbf{V}}_{N_s},\widetilde{\textbf{V}}_{\epsilon}]\Big\rvert\Big) \label{relaxation_SE_4}\\ &={\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_s}+(N_r/\sigma_k^2)\widetilde{\bm{\Sigma}}_{N_s}^2\Big\rvert\Big) \label{relaxation_SE_5}\\ &\leq{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{L_t}+(N_r/\sigma_k^2)\widetilde{\bm{\Sigma}}_{L_t}^2\Big\rvert\Big) \label{relaxation_SE_6}\\ &={\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{S}\textbf{F}\textbf{F}^H\textbf{S}^H\textbf{A}_{t}[k]\bm{\bar\Lambda}[k]^H\Big\rvert\Big). \label{relaxation_SE_8} \end{align} \end{subequations} From~\eqref{relaxation_SE_2} to~\eqref{relaxation_SE_8}, for $\widetilde{\textbf{U}}[k]$, $\widetilde{\bm{\Sigma}}[k]$, $\widetilde{\textbf{V}}[k]$, and $\widetilde{\textbf{V}}_{N_s}[k]$, we omit the index $[k]$ for simplicity. 
\eqref{relaxation_SE_3} is the result of applying the SVD of $\textbf{H}_e[k]$ given by $\textbf{H}_e[k]=\widetilde{\textbf{U}}\widetilde{\bm{\Sigma}}\widetilde{\textbf{V}}^H$. \eqref{relaxation_SE_4} comes from the property that ${\rm log}_2(\lvert\textbf{I}+\textbf{X}\textbf{Y}\rvert)={\rm log}_2(\lvert\textbf{I}+\textbf{Y}\textbf{X}\rvert)$, where $\textbf{X}=\widetilde{\textbf{U}}\widetilde{\bm{\Sigma}}\widetilde{\textbf{V}}^H\widetilde{\textbf{V}}_{N_s}\widetilde{\textbf{V}}_{N_s}^H\widetilde{\textbf{V}}$ and $\textbf{Y}=\widetilde{\bm{\Sigma}}^H\widetilde{\textbf{U}}^H$. Moreover, $\widetilde{\textbf{V}}_{N_s}$ and $\widetilde{\textbf{V}}_{\epsilon}$ denote the first $N_s$ columns and the remaining columns of $\widetilde{\textbf{V}}$, respectively. \eqref{relaxation_SE_5} follows from the properties of the SVD that $\widetilde{\textbf{U}}^H\widetilde{\textbf{U}}=\textbf{I}$, $\widetilde{\textbf{V}}_{N_s}^H\widetilde{\textbf{V}}_{N_s}=\textbf{I}$, and $\widetilde{\textbf{V}}_{\epsilon}^H\widetilde{\textbf{V}}_{N_s}=\textbf{0}$, where $\widetilde{\bm{\Sigma}}_{N_s}$ represents the first $N_s$ rows and $N_s$ columns of $\widetilde{\bm{\Sigma}}$. In the hybrid beamforming system, the number of RF chains $L_t$ is usually slightly larger than or equal to the number of data streams $N_s$, since the spectral efficiency enhancement brought by additional RF chains is negligible~\cite{7397861}. Therefore, we use \eqref{relaxation_SE_6} as an upper bound on \eqref{relaxation_SE_5}, where $\widetilde{\bm{\Sigma}}_{L_t}$ represents the first $L_t$ rows and $L_t$ columns of $\widetilde{\bm{\Sigma}}$ and the equality holds when $N_s=L_t$. The dimension of $\textbf{H}_e[k]$ is $N_p\times L_t$ such that $\textbf{H}_e[k]$ has at most $L_t$ non-zero singular values, which suggests that $\widetilde{\bm{\Sigma}}_{L_t}$ contains all the non-zero singular values of $\textbf{H}_e[k]$. According to the property of the SVD, we obtain ${\rm{log}}_2(\lvert\textbf{I}_{L_t}+(N_r/\sigma_k^2)\widetilde{\bm{\Sigma}}_{L_t}^2\rvert)={\rm{log}}_2(\lvert\textbf{I}_{N_p}+(N_r/\sigma_k^2)\textbf{H}_e[k]\textbf{H}_e[k]^H\rvert)$, which equals \eqref{relaxation_SE_8}. So far, we have derived the upper bound \eqref{relaxation_SE_8} on~\eqref{relaxation_SE_1}, which does not depend on $\textbf{D}[k]$, so that the coupling between $\textbf{D}[k]$ and $\textbf{S}$ is removed. Next, we propose to design $\textbf{S}$ to maximize \eqref{relaxation_SE_8} rather than directly maximizing \eqref{relaxation_SE_1}. One main difficulty in solving for $\textbf{S}$ is the binary constraint~\eqref{design_problem_constraints_1} on each row. The authors in~\cite{7445130} proposed to successively decompose the hybrid beamforming matrix in a column-wise manner to enable the design. Inspired by this, to tackle the row-wise constraint of $\textbf{S}$, we propose to decompose each row of $\textbf{S}$ successively. Although the idea of successive decomposition is similar, the detailed constraints and derivations of the row-wise decomposition in this work are quite different from those of the column-wise decomposition in~\cite{7445130}. Moreover, some important elements of the RSD algorithm, including the derived approximate spectral efficiency~\eqref{expression_SE_approx_4}, the solution of $\textbf{D}[k]$ as well as the derived upper bound~\eqref{relaxation_SE_8} on the approximate spectral efficiency, and the solution of each row of $\textbf{S}$, have not been studied in~\cite{7445130}.
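Before carrying out the row-wise decomposition, we note that the two quantities derived so far, namely the digital beamformer in \eqref{solution_D_RSD} and the approximate spectral efficiency in \eqref{expression_SE_approx_4}, can be evaluated from the partial CSI alone. The following minimal numpy sketch illustrates this (the function names are ours, and $\textbf{S}$, $\textbf{F}$, $\bm{\bar\Lambda}[k]$, and $\textbf{A}_t[k]$ are assumed to be available, e.g., from the sketches in Sec.~\ref{section_channel_system_model}).
\begin{verbatim}
import numpy as np

def rsd_digital_beamformer(bar_Lambda, A_t, S, F, N_s):
    # Solution of D[k] with equal-power allocation (Gamma[k] = I_Ns):
    # take the first N_s right singular vectors of H_e[k] = bar_Lambda A_t^H S F.
    H_e = np.diag(bar_Lambda) @ A_t.conj().T @ S @ F      # N_p x L_t
    _, _, Vh = np.linalg.svd(H_e, full_matrices=True)
    return Vh.conj().T[:, :N_s]                           # L_t x N_s

def approx_se_term(bar_Lambda, A_t, S, F, D, N_r, sigma2):
    # k-th term of the approximate spectral efficiency, partial CSI only
    X = np.diag(bar_Lambda) @ A_t.conj().T @ S @ F @ D    # N_p x N_s
    M = np.eye(bar_Lambda.size) + (N_r / sigma2) * (X @ X.conj().T)
    return np.log2(np.linalg.det(M).real)
\end{verbatim}
Averaging \texttt{approx\_se\_term} over the $K$ sub-carriers gives the objective of \eqref{design_problem_1} that the row-wise design of $\textbf{S}$ below seeks to maximize.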
To start with, \eqref{relaxation_SE_8} can be rewritten as \begin{subequations} \begin{align} &\quad \ {\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}\!+\!(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]\textbf{A}_{t}[k]^H\textbf{S}\textbf{F}\textbf{F}^H\textbf{S}^H\textbf{A}_{t}[k]\bm{\bar\Lambda}[k]^H\Big\rvert\Big) \label{RSD_SIC_1}\\ &= {\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}\!+\!(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]^2\textbf{A}_{t}[k]^H\textbf{S}\textbf{F}\textbf{F}^H\textbf{S}^H\textbf{A}_{t}[k]\Big\rvert\Big) \label{RSD_SIC_2}\\ &={\rm{log}}_2\bigg(\bigg\lvert\textbf{I}_{N_p}\!+\!\frac{N_r}{\sigma^{2}_{k}}\bm{\bar\Lambda}[k]^2\!\Big[\textbf{C}_{1:N_t-1}[k],\textbf{C}_{N_t}[k]\Big]\!\!\bigg[\!\!\begin{array}{c}\textbf{S}_{1:N_t-1}\\\textbf{S}_{N_t}\end{array}\!\!\!\bigg]\!\textbf{F}\textbf{F}^H\![\textbf{S}_{1:N_t-1}^H,\textbf{S}_{N_t}^H]\!\bigg[\!\!\begin{array}{c}\textbf{C}_{1:N_t-1}[k]^H\\\textbf{C}_{N_t}[k]^H\end{array}\!\!\bigg]\bigg\rvert\bigg) \label{RSD_SIC_3}\\ &={\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}\!+\!(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]^2(\textbf{A}[k]\textbf{A}[k]^H\!+\!\textbf{B}[k]\textbf{A}[k]^H\!+\!\textbf{A}[k]\textbf{B}[k]^H\!+\!\textbf{B}[k]\textbf{B}[k]^H\!)\Big\rvert\Big) \label{RSD_SIC_4}\\ &\ \begin{aligned} ={\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}&\!+(N_r/\sigma_k^2)\bm{\bar\Lambda}[k]^2\textbf{A}[k]\textbf{A}[k]^H\Big\rvert\Big)\\&+{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}\!+\!(N_r/\sigma_k^2)\textbf{T}[k]^{-1}\bm{\bar\Lambda}[k]^2(\textbf{B}[k]\textbf{A}[k]^H\!+\!\textbf{A}[k]\textbf{B}[k]^H\!+\!\textbf{B}[k]\textbf{B}[k]^H)\Big\rvert\Big). \end{aligned} \label{RSD_SIC_5} \end{align} \end{subequations} In \eqref{RSD_SIC_3}, we use $\textbf{C}[k]$ to represent $\textbf{A}_t[k]^H$, where $\textbf{C}_{1:p}[k]$ and $\textbf{C}_{q}[k]$ denote the first $p$ columns and the $q^{\rm th}$ column of $\textbf{C}[k]$, respectively. $\textbf{S}_{1:p}$ and $\textbf{S}_{q}$ denote the first $p$ rows and the $q^{\rm th}$ row of $\textbf{S}$, respectively. In \eqref{RSD_SIC_4}, $\textbf{A}[k]=\textbf{C}_{1:N_t-1}[k]\textbf{S}_{1:N_t-1}\textbf{F}$ and $\textbf{B}[k]=\textbf{C}_{N_t}[k]\textbf{S}_{N_t}\textbf{F}$. \eqref{RSD_SIC_5} comes from the property of the determinant that $\lvert\textbf{I}+\textbf{X}+\textbf{Y}\rvert=\lvert\textbf{I}+\textbf{X}\rvert\cdot\lvert\textbf{I}+(\textbf{I}+\textbf{X})^{-1}\textbf{Y}\rvert$, where $\textbf{X}=\frac{N_r}{\sigma^{2}_{k}}\bm{\bar\Lambda}[k]^2\textbf{A}[k]\textbf{A}[k]^H$, $\textbf{Y}=\frac{N_r}{\sigma^{2}_{k}}\bm{\bar\Lambda}[k]^2(\textbf{B}[k]\textbf{A}[k]^H+\textbf{A}[k]\textbf{B}[k]^H+\textbf{B}[k]\textbf{B}[k]^H)$, and $\textbf{T}[k]=\textbf{I}_{N_p}+\frac{N_r}{\sigma^{2}_{k}}\bm{\bar\Lambda}[k]^2\textbf{A}[k]\textbf{A}[k]^H$. We observe that, by substituting $\textbf{A}[k]=\textbf{C}_{1:N_t-1}[k]\textbf{S}_{1:N_t-1}\textbf{F}$, the first term in~\eqref{RSD_SIC_5} can be further expressed as ${\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+\frac{N_r}{\sigma^{2}_{k}}\bm{\bar\Lambda}[k]^2\textbf{C}_{1:N_t-1}[k]\textbf{S}_{1:N_t-1}\textbf{F}\textbf{F}^H\textbf{S}_{1:N_t-1}^H\textbf{C}_{1:N_t-1}[k]^H\Big\rvert\Big)$, which has a similar structure to \eqref{RSD_SIC_2}. Therefore, we can continue to decompose the first term of~\eqref{RSD_SIC_5} following the same procedure from~\eqref{RSD_SIC_2} to~\eqref{RSD_SIC_5}.
After $N_t$ such decompositions, \eqref{RSD_SIC_5} can be represented as \begin{equation} \begin{aligned} \sum\nolimits_{i=1}^{N_t}{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}+\frac{N_r}{\sigma^{2}_{k}}\textbf{T}_{i}[k]^{-1}\bm{\bar\Lambda}[k]^2\big(\textbf{B}_{i}[k]\textbf{A}_{i}[k]^H\!+\!\textbf{A}_{i}[k]\textbf{B}_{i}[k]^H\!+\!\textbf{B}_{i}[k]\textbf{B}_{i}[k]^H\big)\Big\rvert\Big), \end{aligned} \label{RSR_SIC_expression} \end{equation} where $\textbf{A}_{i}[k]=\textbf{C}_{1:i-1}[k]\textbf{S}_{1:i-1}\textbf{F}$, $\textbf{B}_{i}[k]=\textbf{C}_{i}[k]\textbf{S}_{i}\textbf{F}$, and $\textbf{T}_{i}[k]=\textbf{I}_{N_p}+\frac{N_r}{\sigma^{2}_{k}}\bm{\bar\Lambda}[k]^2\textbf{A}_{i}[k]\textbf{A}_{i}[k]^H$. When $i=1$, $\textbf{A}_{1}[k]=\textbf{0}$ and $\textbf{T}_{1}[k]=\textbf{I}_{N_p}$. We have now transformed~\eqref{relaxation_SE_1}, which is the $k^{\rm th}$ term of~\eqref{design_problem_1}, into~\eqref{RSR_SIC_expression}. Recall that we aim to design $\textbf{S}$ to maximize~\eqref{design_problem_1}. Hence, designing $\textbf{S}$ to maximize~\eqref{design_problem_1} is transformed into designing $\textbf{S}$ to maximize the summation of~\eqref{RSR_SIC_expression} over $k$, which is given by \begin{equation} \begin{aligned} \frac{1}{K}\!\sum\nolimits_{i=1}^{N_t}\!\underbrace{\sum\nolimits_{k=1}^{K}\!\!\!{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}\!\!+\!\!\frac{N_r}{\sigma^{2}_{k}}\textbf{T}_{i}[k]^{-1}\bm{\bar\Lambda}[k]^2\big(\textbf{B}_{i}[k]\textbf{A}_{i}[k]^H\!\!+\!\textbf{A}_{i}[k]\textbf{B}_{i}[k]^H\!\!+\!\textbf{B}_{i}[k]\textbf{B}_{i}[k]^H\big)\Big\rvert\Big).}_{(\star)} \label{RSR_SIC_expression_1_2} \end{aligned} \end{equation} Note that finding the optimal $\textbf{S}$ to maximize~\eqref{RSR_SIC_expression_1_2} is still difficult due to the summation operation. Fortunately, for each $i$, we observe that $\textbf{T}_i[k]$, $\textbf{A}_i[k]$, and $\textbf{B}_i[k]$ are only related to the first $i$ rows of $\textbf{S}$ and not to the remaining rows. According to this property, we propose the following $N_t$-stage method to design $\textbf{S}$ to maximize~\eqref{RSR_SIC_expression_1_2}. At the first stage, we design $\textbf{S}_1$ to maximize $(\star)$ with $i=1$. At the second stage, with the determined $\textbf{S}_1$ at the first stage, we design $\textbf{S}_2$ to maximize $(\star)$ with $i=2$. Following this procedure, at the $N_t^{\rm th}$ stage, the first $N_t-1$ rows of $\textbf{S}$ have been determined, and we design $\textbf{S}_{N_t}$ to maximize $(\star)$ with $i=N_t$. After that, all rows of $\textbf{S}$, i.e., the whole matrix $\textbf{S}$, are determined. \begin{table} \centering \footnotesize \begin{tabular}{p{230pt}} \hline \textbf{Algorithm 1: RSD algorithm} \\ \textbf{Input:} $\bm{\bar\Lambda}[k]$, $\textbf{A}_t[k]$, and $\textbf{F}$, $k=1$, $2$, ..., $K$\\ \quad01:\quad \textbf{for} $i=1:{N_t}$\\ \quad02:\quad\quad Design $\textbf{S}_i$ by maximizing \eqref{expression_u}\\ \quad03:\quad \textbf{end for}\\ \quad04:\quad Calculate $\textbf{D}[k]$ through \eqref{solution_D_RSD}, for $k=1$, $2$, ..., $K$\\ \quad05:\quad Normalize each $\textbf{D}[k]$ as $\textbf{D}[k]\leftarrow\sqrt{\frac{\rho}{\sum_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert_F^2}}\textbf{D}[k]$\\ \textbf{Output:} $\textbf{S}$ and $\textbf{D}[k]$, $k=1$, $2$, ..., $K$\\ \hline \end{tabular} \vspace{-9.5mm} \end{table} Next, we present how to design $\textbf{S}_i$ to maximize $(\star)$ for each stage. Without loss of generality, we use the $i_*^{\rm th}$ stage as an illustration as follows.
\begin{subequations} \begin{align} &\mathop{\rm max \ }\limits_{\textbf{S}_{i_*}} \sum\nolimits_{k=1}^{K}\!\!\!{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}\!\!+\!\!\frac{N_r}{\sigma^{2}_{k}}\textbf{T}_{i_*}[k]^{-1}\bm{\bar\Lambda}[k]^2\big(\textbf{B}_{i_*}[k]\textbf{A}_{i_*}[k]^H\!\!+\!\textbf{A}_{i_*}[k]\textbf{B}_{i_*}[k]^H\!\!+\!\textbf{B}_{i_*}[k]\textbf{B}_{i_*}[k]^H\big)\!\Big\rvert\Big) \label{problem_RSD_obj}\\ &\mathrm{s.t.}\ \textbf{S}_{i_*,l}\in\{0,1\}, \lVert\textbf{S}_{i_*}\rVert_{0}=1,\forall l. \label{problem_RSD_cons} \end{align} \end{subequations} Due to the structure of $\textbf{F}$ in \eqref{structure_FPS_network}, all the diagonal elements of $\textbf{F}\textbf{F}^{H}$ are $1$. There is only one `1' in $\textbf{S}_{i_*}$ while the other elements are `0's. As a result, regardless of how we design $\textbf{S}_{i_*}$, $\textbf{S}_{i_*}\textbf{F}\textbf{F}^{H}\textbf{S}_{i_*}^H$ equals $1$, i.e., $\textbf{B}_{i_*}[k]\textbf{B}_{i_*}[k]^H=\textbf{C}_{i_*}[k]\textbf{S}_{i_*}\textbf{F}\textbf{F}^{H}\textbf{S}_{i_*}^H\textbf{C}_{i_*}[k]^H=\textbf{C}_{i_*}[k]\textbf{C}_{i_*}[k]^H$. Consequently, by substituting $\textbf{B}_{i_*}[k]=\textbf{C}_{i_*}[k]\textbf{S}_{i_*}\textbf{F}$ and $\textbf{B}_{i_*}[k]\textbf{B}_{i_*}[k]^H=\textbf{C}_{i_*}[k]\textbf{C}_{i_*}[k]^H$ into \eqref{problem_RSD_obj}, \eqref{problem_RSD_obj} can be expressed as \begin{align} \sum_{k=1}^{K}\!{\rm{log}}_2\Big(\Big\lvert\textbf{I}_{N_p}\!\!+\!\!\frac{N_r}{\sigma^{2}_{k}}\textbf{T}_{i_*}[k]^{-1}\!\bm{\bar\Lambda}[k]^2\!\big(\textbf{C}_{i_*}\![k]\textbf{S}_{i_*}\textbf{F}\textbf{A}_{i_*}[k]^H\!\!\!+\!\!\textbf{A}_{i_*}[k]\textbf{F}^H\textbf{S}_{i_*}^H\textbf{C}_{i_*}[k]^H\!\!\!+\!\textbf{C}_{i_*}[k]\textbf{C}_{i_*}[k]^H\big)\!\Big\rvert\Big). \label{expression_u} \end{align} We point out that, since there is only one `1' in $\textbf{S}_{i_*}$, there are $L_tQ$ possible choices of $\textbf{S}_{i_*}$, where $L_tQ$ is the number of FPSs in the DS-FPS architecture. As will be shown in the numerical results in Sec.~\ref{section_simulation}, the number of FPSs is usually limited, e.g., 32. Therefore, it is reasonable and efficient to use the exhaustive search method to find the optimal $\textbf{S}_{i_*}$ that maximizes \eqref{expression_u}. The flow and pseudocode of the RSD algorithm are presented in \textbf{Algorithm 1}. \section{Low Complexity Row-by-row (RBR) Algorithm} \label{section_RBR_algorithm} In the previous section, we proposed an RSD algorithm to design $\textbf{S}$ and $\textbf{D}[k]$ with partial CSI. In the RSD algorithm, the rows of $\textbf{S}$ need to be optimized successively, which incurs high complexity. In this section, we propose a low-complexity RBR algorithm, which transforms the design problem into multiple parallel sub-problems and optimizes each row of $\textbf{S}$ in parallel, thereby effectively reducing the computational complexity. As analyzed in Sec.~\ref{section_RSD_algorithm}-B, the original design problem can be transformed into problem~\eqref{design_problem_1}.
By treating $\textbf{S}\textbf{F}\textbf{D}[k]$ as a whole beamforming matrix and considering the low-complexity equal-power allocation, the optimal unconstrained beamforming matrix that maximizes~\eqref{design_problem_1} is $\textbf{P}[k]=\widehat{\textbf{V}}_{N_s}[k]$, where $\widehat{\textbf{V}}_{N_s}[k]$ is the first $N_s$ columns of $\widehat{\textbf{V}}[k]$, and $\widehat{\textbf{V}}[k]$ is derived from the SVD of $\bm{\bar\Lambda}[k]\textbf{A}_t[k]^H$ such that $\bm{\bar\Lambda}[k]\textbf{A}_t[k]^H=\widehat{\textbf{U}}[k]\widehat{\bm{\Sigma}}[k]\widehat{\textbf{V}}[k]^H$. To reduce the computational complexity, rather than directly solving $\textbf{S}$ and $\textbf{D}[k]$ to maximize~\eqref{design_problem_1}, we propose to design $\textbf{S}$ and $\textbf{D}[k]$ to make $\textbf{S}\textbf{F}\textbf{D}[k]$ close to the optimal unconstrained beamforming matrix $\textbf{P}[k]$, as \begin{subequations} \begin{align} &\mathop{\rm min\ }\limits_{\textbf{S},\textbf{D}[k]}\sum\nolimits_{k=1}^{K}\lVert \textbf{P}[k]-\textbf{S}\textbf{F}\textbf{D}[k]\rVert_{F}^2 \label{problem_SFD_Euclidean_objective}\\ &\mathrm{s.t.}\ \textbf{S}_{i,l}\in\{0,1\}, \lVert\textbf{S}_{i}\rVert_{0}=1,\forall i,l, \sum\nolimits_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert^{2}_{F}=\rho. \label{problem_SFD_Euclidean_constraints_2} \end{align} \label{problem_SFD_Euclidean}% \end{subequations} The transformation from design problem~\eqref{design_problem_1} to problem~\eqref{problem_SFD_Euclidean} reduces the complexity, at the cost of an acceptable spectral efficiency loss, as will be shown in the numerical results in Sec.~\ref{section_simulation}. However, it is still difficult to solve problem~\eqref{problem_SFD_Euclidean} due to the non-convex binary constraint and the coupling between $\textbf{S}$ and $\textbf{D}[k]$. Hence, we propose the RBR algorithm to alternately design $\textbf{D}[k]$ and $\textbf{S}$, i.e., to fix one and optimize the other in an alternating manner, as follows. \subsection{Design of Digital Beamforming Matrix {\rm \textbf{D}[{\it k}]}} To begin with, we design $\textbf{D}[k]$ when $\textbf{S}$ is fixed. A semi-unitary digital beamforming matrix can mitigate the interference among the data streams and enhance the spectral efficiency~\cite{7397861,7579557}. Inspired by this, we enforce a semi-unitary constraint on the digital beamforming matrix, given by $\textbf{D}[k]^{H}\textbf{D}[k]=\textbf{I}_{N_s}$. By fixing the switch network matrix $\textbf{S}$ and omitting the transmit power constraint temporarily, the problem~\eqref{problem_SFD_Euclidean} can be reformulated as \begin{equation} \begin{aligned} &{\mathop{\rm min\ }\limits_{\textbf{D}[k]}}\sum\nolimits_{k=1}^{K}\lVert \textbf{P}[k]-\textbf{S}\textbf{F}\textbf{D}[k]\rVert_{F}^2\\ &\mathrm{s.t.}\ \textbf{D}[k]^{H}\textbf{D}[k]=\textbf{I}_{N_s}. \end{aligned} \label{problem_SFD_Euclidean_D} \end{equation} The solution to~\eqref{problem_SFD_Euclidean_D}, which is an orthogonal Procrustes problem, is given by~\cite{7397861,7579557} \begin{equation} \textbf{D}[k]=\ddot{\textbf{V}}_{N_s}[k]\ddot{\textbf{U}}[k]^{H}, \label{solution_D_RBR} \end{equation} where the $L_t\times L_t$-dimensional $\ddot{\textbf{V}}[k]$ and the $N_s\times N_s$-dimensional $\ddot{\textbf{U}}[k]$ are obtained from the SVD of $\textbf{P}[k]^{H}\textbf{S}\textbf{F}$, i.e., $\textbf{P}[k]^{H}\textbf{S}\textbf{F}=\ddot{\textbf{U}}[k]\ddot{\bm \Sigma}[k]\ddot{\textbf{V}}[k]^{H}$, and $\ddot{\textbf{V}}_{N_s}[k]$ is the first $N_s$ columns of $\ddot{\textbf{V}}[k]$.
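A minimal numpy sketch of \eqref{solution_D_RBR} is given below (the function name is ours; $\textbf{P}[k]$, $\textbf{S}$, and $\textbf{F}$ are assumed to be available). The returned matrix is semi-unitary by construction.
\begin{verbatim}
import numpy as np

def rbr_digital_beamformer(P, S, F):
    # Orthogonal-Procrustes solution: from the SVD P^H S F = U Sigma V^H,
    # take D[k] = V_Ns U^H, which satisfies D^H D = I_Ns.
    U, _, Vh = np.linalg.svd(P.conj().T @ S @ F, full_matrices=True)
    V = Vh.conj().T                        # L_t x L_t
    N_s = P.shape[1]
    D = V[:, :N_s] @ U.conj().T            # L_t x N_s
    assert np.allclose(D.conj().T @ D, np.eye(N_s))
    return D
\end{verbatim}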
\subsection{Design of Switch Network Matrix {\rm \textbf{S}}} Then, we design $\textbf{S}$ to solve the problem \eqref{problem_SFD_Euclidean}, with fixed $\textbf{D}$. By omitting the transmit power constraint temporarily, solving $\textbf{S}$ to minimize \eqref{problem_SFD_Euclidean_objective} is rearranged as \begin{subequations} \begin{align} &\mathop{\rm min\ }\limits_{\textbf{S}}\sum\nolimits_{k=1}^{K}\lVert \textbf{P}[k]-\textbf{S}\textbf{F}\textbf{D}[k]\rVert_{F}^2 \label{subproblem_SFD_Euclidean_objective}\\& \mathrm{s.t.}\ \textbf{S}_{i,l}\in\{0,1\}, \ \lVert\textbf{S}_{i}\rVert_{0}=1, \forall i,l, \label{subproblem_SFD_Euclidean_constraint_1} \end{align} \label{subproblem_SFD_Euclidean}% \end{subequations} where~\eqref{subproblem_SFD_Euclidean} is an integer programming problem associated with a matrix variable, which is inefficient to solve. To make the problem more tractable, we rewrite the $k^{\rm th}$ term of \eqref{subproblem_SFD_Euclidean_objective} as \begin{subequations} \begin{align} &\quad \ \left\lVert \textbf{P}[k]-\textbf{S}\textbf{F}\textbf{D}[k]\right\rVert_{F}^2\label{trace_expression_1}\\ &={\rm Tr}\left((\textbf{P}[k]-\textbf{S}\textbf{F}\textbf{D}[k])(\textbf{P}[k]-\textbf{S}\textbf{F}\textbf{D}[k])^H\right)\label{trace_expression_2}\\ &={\rm Tr}\left(\textbf{P}[k]\textbf{P}[k]^H\right)+{\rm Tr}\left(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{D}[k]^H\textbf{F}^H\textbf{S}^H\right)-2{\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)\label{trace_expression_3}\\ &={\rm Tr}\left(\textbf{P}[k]\textbf{P}[k]^H\right)+{\rm Tr}\bigg(\textbf{S}\textbf{F}\textbf{K}[k]\bigg[\!\!\begin{array}{cc} \textbf{I}_{N_s}&\\ &\bm{0}\\ \end{array}\!\!\bigg]\textbf{K}[k]^H\textbf{F}^H\textbf{S}^H\bigg)-2{\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)\label{trace_expression_4}\\ &={\rm Tr}\left(\textbf{P}[k]\textbf{P}[k]^H\right)+{\rm Tr}\bigg(\bigg[\!\!\begin{array}{cc} \textbf{I}_{N_s}&\\ &\bm{0}\\ \end{array}\!\!\bigg]\textbf{K}[k]^H\textbf{F}^H\textbf{S}^H\textbf{S}\textbf{F}\textbf{K}[k]\bigg)-2{\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)\label{trace_expression_5}\\ &\leq{\rm Tr}\left(\textbf{P}[k]\textbf{P}[k]^H\right)+{\rm Tr}(\textbf{K}[k]\textbf{K}[k]^H\textbf{F}^H\textbf{S}^H\textbf{S}\textbf{F})-2{\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)\label{trace_expression_6}\\ &={\rm Tr}\left(\textbf{P}[k]\textbf{P}[k]^H\right)+{\rm Tr}(\textbf{F}^H\textbf{S}^H\textbf{S}\textbf{F})-2{\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right), \label{trace_expression_7} \end{align} \end{subequations} where $\textbf{K}[k]\bigg[\!\!\begin{array}{cc} \textbf{I}_{N_s}&\\ &\bm{0}\\ \end{array}\!\!\bigg]\textbf{K}[k]^H$ in \eqref{trace_expression_4} is the SVD of $\textbf{D}[k]\textbf{D}[k]^H$ since we have $\textbf{D}[k]^H\textbf{D}[k]=\textbf{I}_{N_s}$. \eqref{trace_expression_5} comes from the property of matrix trace. The inequality \eqref{trace_expression_6} follows that the diagonal elements of the Hermitian matrix $\textbf{K}[k]^H\textbf{F}^H\textbf{S}^H\textbf{S}\textbf{F}\textbf{K}[k]$ are no smaller than zero and the equality holds when $\textbf{D}[k]$ is a square matrix, i.e., $L_t=N_s$. Therefore, the $k^{\rm th}$ term of~\eqref{subproblem_SFD_Euclidean_objective} can be relaxed as \eqref{trace_expression_7}, where ${\rm Tr}\left(\textbf{P}[k]\textbf{P}[k]^H\right)$ is known and fixed. 
According to the structure of $\textbf{F}$ in \eqref{structure_FPS_network} and the constraint $\lVert\textbf{S}_i\rVert_{0}=1$, regardless of how we design $\textbf{S}$, ${\rm Tr}(\textbf{F}^H\textbf{S}^H\textbf{S}\textbf{F})=\lVert\textbf{S}\textbf{F}\rVert_F^2$ is a constant equal to $N_t$. Hence, minimizing~\eqref{trace_expression_7} is equivalent to maximizing ${\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)$. Note that~\eqref{trace_expression_7} is a relaxed form of the $k^{\rm th}$ term of~\eqref{subproblem_SFD_Euclidean_objective}. Therefore, minimizing~\eqref{subproblem_SFD_Euclidean_objective} can be relaxed to maximizing the summation of ${\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)$ over $k$, given by $\sum\nolimits_{k=1}^{K}{\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)$. According to the property of the matrix trace, $\sum\nolimits_{k=1}^{K}{\rm Tr}\left({\rm Re}(\textbf{S}\textbf{F}\textbf{D}[k]\textbf{P}[k]^H)\right)$ is equivalent to \begin{equation} \sum\nolimits_{i=1}^{N_t}\sum\nolimits_{k=1}^{K}{\rm Re}(\textbf{S}_i\textbf{F}\textbf{D}[k]\textbf{P}_i[k]^H), \label{third_term_1} \end{equation} where $\textbf{S}_i$ and $\textbf{P}_i[k]$ are the $i^{\rm th}$ row of $\textbf{S}$ and $\textbf{P}[k]$, respectively. According to~\eqref{third_term_1}, the objective is decomposed into $N_t$ uncorrelated parts, one for each row of $\textbf{S}$. As a result, designing $\textbf{S}$ to maximize~\eqref{third_term_1} is equivalent to separately designing $\textbf{S}_i$ to maximize $\sum\nolimits_{k=1}^{K}{\rm Re}(\textbf{S}_i\textbf{F}\textbf{D}[k]\textbf{P}_i[k]^H)$, for $i=1$, $2$, ..., $N_t$, which can be stated as \begin{subequations} \begin{align} &\mathop{\rm max\ }\limits_{\textbf{S}_i}\textbf{S}_i\sum\nolimits_{k=1}^{K}{\rm Re}(\textbf{F}\textbf{D}[k]\textbf{P}_i[k]^H) \label{subproblem_SFD_2_obj} \\& \mathrm{s.t.}\ \textbf{S}_{i,l}\in\{0,1\},\ \lVert\textbf{S}_{i}\rVert_{0}=1, \forall l. \end{align} \label{subproblem_SFD_2}% \end{subequations} Following the binary property of the row vector $\textbf{S}_i$, i.e., $\textbf{S}_{i,l}\in\{0,1\}, \forall l$ and $ \lVert\textbf{S}_{i}\rVert_{0}=1$, maximizing \eqref{subproblem_SFD_2_obj} is equivalent to finding the position of the maximal element of the column vector $\sum\nolimits_{k=1}^{K}{\rm Re}(\textbf{F}\textbf{D}[k]\textbf{P}_i[k]^H)$, which is a simple selection problem that can be solved efficiently by sorting. Consequently, by denoting $p_{\rm max}$ as the position of the maximal element, the optimal solution of $\textbf{S}_i$ to the problem \eqref{subproblem_SFD_2} is \begin{equation} \textbf{S}_i=[\ \underbrace{0,...,0}_{p_{\rm max}-1},\ 1,\underbrace{0,...,0}_{L_tQ-p_{\rm max}}]. \label{solution_S_RBR} \end{equation} Then, by solving problem \eqref{subproblem_SFD_2} with $i=1$, ..., $N_t$ in parallel, the solution of $\textbf{S}$ to problem \eqref{subproblem_SFD_Euclidean} is obtained. Based on the aforementioned procedures, in the proposed RBR algorithm, we alternately solve $\textbf{D}[k]$ and $\textbf{S}$ via \eqref{solution_D_RBR} and \eqref{solution_S_RBR} until convergence. After that, we enforce the transmit power constraint on $\textbf{D}[k]$ such that $\textbf{D}[k]\leftarrow\sqrt{\frac{\rho}{\sum_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert_F^2}}\textbf{D}[k]$. The pseudocode of the RBR algorithm is described in \textbf{Algorithm 2}.
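A minimal sketch of the row update \eqref{solution_S_RBR} and of the alternating loop of Algorithm~2 is given below (the function names are ours; \texttt{rbr\_digital\_beamformer} refers to the sketch of \eqref{solution_D_RBR} above, the convergence check is replaced by a fixed number of passes, and the final power normalization is omitted for brevity).
\begin{verbatim}
import numpy as np

def rbr_update_S(F, D_list, P_list, N_t):
    # Each row of S independently picks the FPS index that maximizes
    # sum_k Re(F D[k] P_i[k]^H); the rows can be processed in parallel.
    S = np.zeros((N_t, F.shape[0]))
    for i in range(N_t):
        v = sum(np.real(F @ D @ P[i, :].conj())
                for D, P in zip(D_list, P_list))
        S[i, np.argmax(v)] = 1.0     # p_max: position of the maximal element
    return S

def rbr(P_list, F, N_t, n_passes=10, seed=0):
    # Alternate the digital and switch updates, cf. Algorithm 2
    rng = np.random.default_rng(seed)
    S = np.zeros((N_t, F.shape[0]))
    S[np.arange(N_t), rng.integers(0, F.shape[0], N_t)] = 1.0  # random init
    for _ in range(n_passes):
        D_list = [rbr_digital_beamformer(P, S, F) for P in P_list]
        S = rbr_update_S(F, D_list, P_list, N_t)
    return S, D_list
\end{verbatim}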
\begin{table} \centering \footnotesize \begin{tabular}{p{260pt}} \hline \textbf{Algorithm 2: RBR algorithm} \\ \textbf{Input:} $\bm{\bar\Lambda}[k]$, $\textbf{A}_t[k]$, and $\textbf{F}$, $k=1$, $2$, ..., $K$\\ \quad01:\quad Calculate each $\textbf{P}[k]=\widehat{\textbf{V}}_{N_s}[k]$ and initialize $\textbf{S}$ randomly\\ \quad02:\quad \textbf{Repeat}\\ \quad03:\quad Solve $\textbf{D}[k]$ via~\eqref{solution_D_RBR}, for $k=1$, $2$, ..., $K$\\ \quad04:\quad\ \ \textbf{For} $i=1:{N_t}$\\ \quad05:\quad\quad\ \ Solve $\textbf{S}_i$ via~\eqref{solution_S_RBR} \\ \quad06:\quad\ \ \textbf{end for}\\ \quad07:\quad \textbf{Until convergence} \\ \quad08:\quad Normalize each $\textbf{D}[k]$ as $\textbf{D}[k]\leftarrow\sqrt{\frac{\rho}{\sum_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert_F^2}}\textbf{D}[k]$\\ \textbf{Output:} $\textbf{S}$ and $\textbf{D}[k]$, $k=1$, $2$, ..., $K$\\ \hline \end{tabular} \vspace{-9.5mm} \end{table} \section{Simulation Results and Analysis} \label{section_simulation} In this section, we evaluate the performance of the proposed DS-FPS architecture as well as the RSD and RBR algorithms. The simulation setup is given in Sec.~\ref{section_simulation}-A. We first evaluate the spectral efficiency and energy efficiency of the DS-FPS architecture with the RSD and RBR algorithms in Sec.~\ref{section_simulation}-B. Then, we analyze the impact of CSI on the spectral efficiency for the DS-FPS architecture in Sec.~\ref{section_simulation}-C. Furthermore, we analyze the computational complexity and convergence of the RSD and RBR algorithms in Sec.~\ref{section_simulation}-D. \subsection{Simulation Setup} \subsubsection{\textbf{Generation of THz UM-MIMO Channel}} The operating frequency is $0.3$ THz, with $5$~GHz bandwidth. The number of sub-carriers is $10$, and the noise power is $-87$ dBm for each sub-carrier. We consider a typical outdoor street scenario as shown in Fig.~\ref{fig_channel_setup}. The transmitter (TX) is placed on the roof of a building at a height of $30$~m. The height of the receiver (RX) is $1.5$~m. There are five RX positions, marked as red points, for which the LoS distances $D$ equal $40$~m, $70$~m, $100$~m, $130$~m, and $160$~m. The numbers of antennas at TX and RX are equal and are set to $128$, $256$, $512$, and $1024$. The ray-tracing tool Wireless InSite, which characterizes the multipath components with high accuracy, is adopted to calculate the parameters of each propagation path from TX to RX, e.g., the DoD and DoA~\cite{8304810}. Then, we compute the path gains according to our previous THz channel work~\cite{6998944}, by considering the THz-specific propagation characteristics. For illustration, the lines with different colors in~Fig.~\ref{fig_channel_setup} denote the propagation paths and path gains with a $100$~m LoS separation at $0.3$ THz. Paths whose gains are more than $50$~dB weaker than that of the LoS path are omitted, since their contributions to the channel are negligible. Substituting the path gains, DoA, and DoD into~\eqref{channel_model_planar}, we can obtain the THz UM-MIMO channel. \begin{figure} \setlength{\belowcaptionskip}{0pt} \centering \includegraphics[scale=0.3]{image/simulations/paths1.pdf} \captionsetup{font={footnotesize}} \caption{The environment of THz channels.
} \label{fig_channel_setup} \vspace{-9.5mm} \end{figure} \subsubsection{\textbf{Competitors of the Proposed DS-FPS Architecture}} We compare our scheme, i.e., the proposed DS-FPS architecture with the RSD and RBR algorithms, with the following schemes: i) the FC architecture with the hybrid beamforming algorithm in~\cite{7913599}, ii) the DHB architecture with the algorithm in~\cite{9110865}, iii) the FPS group-connected (FPS-GC) architecture with the FPS-AltMin algorithm in~\cite{8310586}, iv) the AoSA architecture with the SIC algorithm in~\cite{7445130}, v) the subconnected phase shifter network with fully connected switch networks (SPSF) architecture with the algorithm in~\cite{8295113}, and vi) the subconnected architecture with a reduced number of phase shifters and phase shifter selection (SRPS) with the algorithm in~\cite{8295113,8382230}. For the FPS-GC architecture, the number of groups is set to 2 to achieve high energy efficiency. The key idea of the SPSF and SRPS architectures is to use switches to dynamically turn off part of the antennas that have relatively small contributions to the spectral efficiency. The number of phase shifters is $\frac{N_t}{\beta}$, where $N_t$ is the number of antennas and $1-\frac{1}{\beta}$ is the fraction of antennas that are turned off. Specifically, in the SPSF architecture, each phase shifter connects to $\frac{N_t}{L_t}$ adjacent antennas through $\frac{N_t}{L_t}$ switches, where $L_t$ is the number of RF chains. Among these $\frac{N_t}{L_t}$ antennas, $\frac{N_t}{L_t}(1-\frac{1}{\beta})$ antennas are turned off. In the SRPS architecture, each phase shifter connects to $\beta$ adjacent antennas through a 1-to-$\beta$ switch and retains the one antenna with the largest contribution to the spectral efficiency. In this work, we set $\beta=2$ for both the SPSF and SRPS architectures. The major difference between our DS-FPS architecture and the SPSF and SRPS architectures is that the SPSF and SRPS use adjustable phase shifters while the DS-FPS uses low-cost FPSs. Moreover, the switches in the DS-FPS architecture are used to allow each antenna to select one proper FPS such that all antennas remain active, while in the SPSF and SRPS architectures part of the antennas are turned off. Owing to the different hardware architectures, the algorithms in this work are also different from those in~\cite{8295113,8382230}. For all architectures, the number of RF chains at TX and RX is $L_t=4$. The number of data streams is $N_s=4$. In our scheme, we adopt the DS-FPS architecture at both the transmitter and the receiver with the same system parameters. For fair comparison, the counterpart architectures are also used at both sides.
\subsubsection{\textbf{Energy Efficiency Model}} \begin{table} \centering \footnotesize \captionsetup{font={footnotesize}} \caption{Power consumption at the transmitter of different architectures.} \begin{tabular}{cc} \hline Architecture&Power consumption\\ \hline Proposed DS-FPS&${\rm P_{common}}+{\rm P_{SW}}N_t+{\rm P_{FPS}}{\rm N_{FPS}^{a}}$\\ FC&${\rm P_{common}}+{\rm P_{PS}}N_{t}L_t$\\ AoSA&${\rm P_{common}}+{\rm P_{PS}}N_{t}$\\ DHB&${\rm P_{common}}+{\rm P_{SW}}N_{t}+{\rm P_{PS}}N_{t}$\\ FPS-GC&${\rm P_{common}}+{\rm P_{SW}}{\rm N_{SW}}+{\rm P_{FPS}}{\rm N_{FPS}}$\\ SPSF \& SRPS&${\rm \widehat{P}_{common}}+{\rm P_{PS}}N_{t}/\beta+{\rm P_{SW}}N_{t}/\beta$\\ \hline \end{tabular} \label{table_power_existing_architectures} \vspace{-7.5mm} \end{table} The energy efficiency $EE$ is defined as the ratio between the spectral efficiency and the total power consumption at the transmitter ($P_{TX}$) and the receiver ($P_{RX}$), i.e., $EE=\frac{SE}{P_{TX}+P_{RX}}$. We adopt the power consumption model of the hybrid beamforming studies~\cite{9110865,9374093}, according to which the power consumption of the DS-FPS architecture at the transmitter can be expressed as \begin{equation} P_{TX}={\rm P_{common}}+\underbrace{{\rm P_{SW}}N_t+{\rm P_{FPS}}{\rm N_{FPS}^a}}_{\rm P_{analog}}, \label{eq_power_consumption_DS_FPS} \end{equation} where ${\rm P_{common}}={\rm P_{BB}}+{\rm P_{DAC}}L_t+{\rm P_{RF}}L_t+{\rm P_{PA}}N_t+\rho$. ${\rm P_{BB}}$, ${\rm P_{DAC}}$, ${\rm P_{RF}}$, ${\rm P_{PA}}$, ${\rm P_{SW}}$, and ${\rm P_{FPS}}$ denote the power consumption of the baseband, DAC, RF chain, power amplifier, switch, and FPS, respectively. The corresponding multipliers denote the quantities of these devices used in the architecture. $\rho$ is the transmit power. ${\rm P_{common}}$ is usually the same for different hybrid beamforming architectures. ${\rm P_{analog}}$ is the analog beamforming part of the power consumption, which differs among hybrid beamforming architectures according to the hardware components used to realize the analog beamforming. For the DS-FPS architecture, ${\rm P_{analog}}$ is composed of the power consumed by the switches and FPSs, as expressed in~\eqref{eq_power_consumption_DS_FPS}. In the DS-FPS architecture, the total number of FPSs is $L_tQ$. Each antenna connects to all FPSs through $L_tQ$ switches. As analyzed in Sec.~\ref{section_channel_system_model}-B, each antenna selects one FPS with the proper phase from all FPSs to generate the beamforming weight. For one antenna, only one of the $L_tQ$ switches is closed, while the others are open and do not consume power. Hence, the number of switches that consume power equals the number of antennas $N_t$. In this work, $N_t$, $L_t$, and $\rho$ are not design variables, such that ${\rm P_{common}}+{\rm P_{SW}}N_t$ can be regarded as static terms that are not related to the hybrid beamforming design. We denote the FPSs that are selected by at least one switch as the active FPSs and the others as inactive FPSs, which do not consume power. The number of active FPSs ${\rm N_{FPS}^a}$ equals the number of non-zero columns of $\textbf{S}$. Hence, ${\rm P_{FPS}}{\rm N_{FPS}^a}$ is related to the design variable $\textbf{S}$ and is a dynamic term. For the FC, AoSA, DHB, and FPS-GC architectures, the common part is the same as in~\eqref{eq_power_consumption_DS_FPS}, while the analog beamforming part needs to be changed to account for their own devices.
In the FPS-GC architecture, ${\rm N_{SW}}$ and ${\rm N_{FPS}}$ should be determined by the FPS-AltMin algorithm~\cite{8310586}. For the SPSF and SRPS architectures, only $N_t/\beta$ antennas are active, such that ${\rm \widehat{P}_{common}}={\rm P_{BB}}+{\rm P_{DAC}}L_t+{\rm P_{RF}}L_t+{\rm P_{PA}}N_t/\beta+\rho$. For the analog beamforming part, the number of phase shifters is $N_t/\beta$. Moreover, each phase shifter has only one closed switch that consumes power, so the number of closed switches is $N_t/\beta$. Hence, the power consumed by the analog beamforming part is ${\rm P_{PS}}N_{t}/\beta+{\rm P_{SW}}N_{t}/\beta$. We summarize the power consumption at the transmitter, i.e., $P_{TX}$, of the different architectures in TABLE~\ref{table_power_existing_architectures}. For each architecture, the expression of $P_{RX}$ is similar to that of $P_{TX}$, obtained by substituting the DAC and power amplifier with the ADC and low noise amplifier and removing $\rho$. We use the typical power consumption values reported by existing THz studies, mainly around 0.3 THz, in units of mW as follows. The power consumption of the DAC, ADC, power amplifier, low noise amplifier, switch, RF chain, and baseband is ${\rm P_{DAC}}=110$~\cite{7019002}, ${\rm P_{ADC}}=158.6$~\cite{8951264}, ${\rm P_{PA}}=49$~\cite{5518016}, ${\rm P_{LNA}}=53$~\cite{5619646}, ${\rm P_{SW}}=9$~\cite{9223920}, ${\rm P_{RF}}=43$~\cite{8733134}, and ${\rm P_{BB}}=200$~\cite{7436794}, respectively. It has been reported in~\cite{8058787} that ${\rm P_{PS}}=52$, $39$, and $26$ for 3-, 2-, and 1-bit phase shifters, respectively. The FC~\cite{7913599}, AoSA~\cite{7445130}, SPSF~\cite{8295113}, and SRPS~\cite{8382230} architectures use infinite-resolution phase shifters, while the DHB architecture~\cite{9110865} uses 2-bit or 1-bit phase shifters. Since the infinite-resolution phase shifter is ideal, we use the power consumption of the 3-bit phase shifter to represent its power consumption. Hence, for the FC, AoSA, SPSF, and SRPS architectures, ${\rm P_{PS}}=52$, while for the DHB architecture, ${\rm P_{PS}}=39$ for the 2-bit phase shifter and ${\rm P_{PS}}=26$ for the 1-bit phase shifter. The FPS is usually realized by a passive delay element, which provides a fixed phase adjustment with no power consumption, and an amplifier with small gain to keep the output amplitude and power of the FPSs with different phases at the same level~\cite{9426936}. Since the passive delay elements with different phases in~\cite{9426936} have about 3.3 dB output power difference, we consider an amplifier with about 4.8 dB gain, which consumes 16.8 mW~\cite{4633188}, to keep the output amplitude and power of the FPSs equal. Hence, the power consumption of the FPS is ${\rm P_{FPS}}=16.8$. In this work, we consider the use of full-bit DACs/ADCs. It has been shown in~\cite{9205899} that the power consumption of a DAC/ADC with more than 8 bits can represent the power consumption of a full-bit DAC/ADC, for which we use the power consumption of the 9-bit DAC in~\cite{7019002}, i.e., ${\rm P_{DAC}}=110$ mW, and the 12-bit ADC in~\cite{8951264}, i.e., ${\rm P_{ADC}}=158.6$ mW. The use of low-bit DACs/ADCs may be a potential research direction. The joint optimization of the hybrid beamforming matrices and the bit resolution of the DACs/ADCs has been analyzed in~\cite{9205899}, and a channel estimation algorithm for hybrid beamforming with low-bit ADCs has been proposed in~\cite{8553303}.
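As a quick sanity check of these values, the following snippet (an illustrative sketch, not the code used to generate the figures) plugs the quoted component powers into the expressions of TABLE~\ref{table_power_existing_architectures}; the number of active FPSs for DS-FPS is set to 32 as an assumption for the example.
\begin{verbatim}
# Illustrative comparison of transmitter power (all values in mW),
# using the component powers quoted above and the expressions of the
# power-consumption table.  The 32 active FPSs are an assumption.
P_BB, P_DAC, P_RF, P_PA = 200.0, 110.0, 43.0, 49.0
P_SW, P_FPS, P_PS3, P_PS2 = 9.0, 16.8, 52.0, 39.0
N_t, L_t, beta, rho = 1024, 4, 2, 100.0      # rho = 20 dBm = 100 mW

P_common = P_BB + P_DAC * L_t + P_RF * L_t + P_PA * N_t + rho
P_tx = {
    "DS-FPS (32 active FPSs)": P_common + P_SW * N_t + P_FPS * 32,
    "FC":          P_common + P_PS3 * N_t * L_t,
    "AoSA":        P_common + P_PS3 * N_t,
    "DHB (2-bit)": P_common + P_SW * N_t + P_PS2 * N_t,
    "SPSF/SRPS":   (P_BB + P_DAC * L_t + P_RF * L_t + P_PA * N_t / beta
                    + rho + P_PS3 * N_t / beta + P_SW * N_t / beta),
}
for name, p in P_tx.items():
    print(f"{name}: {p / 1000:.1f} W")
\end{verbatim}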
\subsection{Spectral Efficiency and Energy Efficiency of the Proposed DS-FPS Architecture} In the following simulations, the DS-FPS architecture with the RSD and RBR algorithms only knows partial CSI. For the other architectures and algorithms, full CSI is assumed to be known. Fig.~\ref{fig_SE_EE_architecture}(a) and Fig.~\ref{fig_SE_EE_architecture}(b) evaluate the spectral efficiency and energy efficiency of the proposed DS-FPS architecture. As shown in Fig.~\ref{fig_SE_EE_architecture}(a), for the DS-FPS architecture, the RSD algorithm yields higher spectral efficiency than the low-complexity RBR algorithm, e.g., by $1.4$ bits/s/Hz when $\rho=20$~dBm. Furthermore, the spectral efficiency of the DS-FPS architecture with either the RSD or the RBR algorithm is higher than that of the FPS-GC and AoSA architectures, e.g., by $2$ bits/s/Hz and $6$ bits/s/Hz when $\rho=20$~dBm. The SPSF and SRPS architectures aim to reduce the power consumption by deactivating some antennas, which decreases the spectral efficiency at the same time. As a result, the spectral efficiency of the DS-FPS architecture is about 9~bits/s/Hz and 11~bits/s/Hz higher than that of the SPSF and SRPS architectures when $\rho=20$~dBm. The SPSF has higher spectral efficiency than the SRPS due to its more flexible switch connections. Furthermore, the spectral efficiencies of the DS-FPS architecture with the RSD and RBR algorithms are similar to that of the DHB architecture with 2-bit phase shifters and lower than that of the FC architecture. Therefore, the proposed DS-FPS architecture with the RSD and RBR algorithms can achieve good spectral efficiency. \begin{figure} \setlength{\belowcaptionskip}{0pt} \centering \captionsetup{font={footnotesize}} \subfigure[Spectral efficiency versus transmit power $\rho$.]{ \includegraphics[scale=0.28]{image/simulations/SE_rho.pdf}} \subfigure[Energy efficiency versus spectral efficiency, $\rho=20$~dBm.]{ \includegraphics[scale=0.28]{image/simulations/EE_SE.pdf}} \subfigure[Energy efficiency versus spectral efficiency with another group of power consumption values, $\rho=20$~dBm.]{ \includegraphics[scale=0.28]{image/simulations/EE_SE_another_group.pdf}} \caption{The spectral efficiency and energy efficiency of the DS-FPS architecture. Communication distance is $D=40$m, $N_t=N_r=1024$. The number of FPSs in DS-FPS and FPS-GC is $32$.} \label{fig_SE_EE_architecture} \vspace{-9.5mm} \end{figure} Fig.~\ref{fig_SE_EE_architecture}(b) jointly evaluates the energy efficiency and the spectral efficiency. The energy efficiencies of the DS-FPS architecture with the RSD and RBR algorithms are significantly higher than those of the other architectures, due to the low power consumption of the FPSs and switches. The power consumed by the switches is only a small part of the overall power consumption of the DS-FPS architecture, e.g., 15\% in Fig.~\ref{fig_SE_EE_architecture}(b). Although the spectral efficiency of the DS-FPS architecture is lower than that of the FC architecture, its energy efficiency is remarkably higher, i.e., more than $3$ times that of the FC. The DHB architecture aims to use low-cost 2-bit and 1-bit phase shifters to reduce the power consumption and achieve a better trade-off between energy efficiency and spectral efficiency. However, the power consumption of 2-bit and 1-bit phase shifters is still higher than that of the FPS in this work. Moreover, the number of phase shifters in the DHB architecture equals $N_t$, i.e., $1024$, which is much larger than the number of FPSs in the DS-FPS architecture, i.e., $32$.
Hence, the power consumption of the DS-FPS architecture is much lower than that of the DHB architecture. Compared to the DHB architecture with 2-bit phase shifters, the proposed DS-FPS architecture can achieve similar spectral efficiency with a $60$\% energy efficiency enhancement. Both the spectral efficiency and energy efficiency of the DS-FPS architecture are higher than those of the DHB architecture with 1-bit phase shifters. The spectral efficiency and energy efficiency of the proposed DS-FPS architecture are also noticeably higher than those of the FPS-GC and AoSA architectures. With $\beta=2$, the SPSF and SRPS architectures deactivate half of the antennas, which reduces the power consumption substantially. However, with only half of the antennas, the spectral efficiency is also low, so the energy efficiency is lower than that of the DS-FPS architecture. Therefore, by using the RSD and RBR algorithms, the proposed DS-FPS architecture can achieve substantially improved energy efficiency and good spectral efficiency at the same time, compared to the existing competitors. In Fig.~\ref{fig_SE_EE_architecture}(c), we evaluate the impact of a different choice of power consumption values (unit:~mW) on the energy efficiency, compared to the values provided in Sec.~\ref{section_simulation}-A-3). We set ${\rm P_{DAC}}=110$, ${\rm P_{ADC}}=200$, ${\rm P_{PA}}=16$, ${\rm P_{LNA}}=30$, ${\rm P_{RF}}=43$, and ${\rm P_{BB}}=243$, following the literature~\cite{8733134}. We set ${\rm P_{SW}}=5$ and ${\rm P_{PS}}=50$, $20$, $10$ for infinite-resolution, 2-bit, and 1-bit phase shifters, respectively, following the literature~\cite{9110865}. Moreover, we set ${\rm P_{FPS}}=6.8$~\cite{9360389}. Although the absolute values of the energy efficiency change substantially, the energy efficiency of the DS-FPS architecture remains much higher than that of the others, similarly to Fig.~\ref{fig_SE_EE_architecture}(b). In Fig.~\ref{fig_SE_Nt} and Fig.~\ref{fig_EE_Nt}, we evaluate the spectral efficiency and energy efficiency versus the number of antennas, respectively. On the one hand, for all architectures, the spectral efficiency grows with the number of antennas, due to the higher array gain offered by more antennas. Moreover, the spectral efficiencies of the DS-FPS architecture with the RSD and RBR algorithms are always similar to that of the DHB architecture with 2-bit phase shifters and higher than those of the FPS-GC, AoSA, SPSF, and SRPS architectures, as shown in Fig.~\ref{fig_SE_Nt}. On the other hand, with more antennas, the power consumption is higher, so the energy efficiencies of all architectures decrease, as shown in Fig.~\ref{fig_EE_Nt}. In particular, for various numbers of antennas, the superiority of the DS-FPS architecture in terms of energy efficiency stands out clearly. \begin{figure} \centering \begin{tabular}{cc} \begin{minipage}[t]{0.32\linewidth} \includegraphics[width = 1\linewidth]{image/simulations/SE_Nt.pdf} \captionsetup{font={footnotesize}} \caption{Spectral efficiency versus number of antennas. $D=40$m, $\rho=20$ dBm. Number of FPSs is $32$.} \label{fig_SE_Nt} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \includegraphics[width = 1\linewidth]{image/simulations/EE_Nt.pdf} \captionsetup{font={footnotesize}} \caption{Energy efficiency versus number of antennas. $D=40$m, $\rho=20$ dBm. Number of FPSs is $32$.} \label{fig_EE_Nt} \end{minipage} \begin{minipage}[t]{0.34\linewidth} \includegraphics[width = 1\linewidth]{image/simulations/SE_EE_Q.pdf} \captionsetup{font={footnotesize}} \caption{Energy efficiency versus spectral efficiency with different numbers of FPSs.
$D=40$m, $\rho=20$ dBm, $N_t=N_r=256$.} \label{fig_SE_EE_FPS} \end{minipage} \end{tabular} \vspace{-9.5mm} \end{figure} In Fig.~\ref{fig_SE_EE_FPS}, we evaluate the impact of the number of FPSs on the spectral efficiency and energy efficiency of the proposed DS-FPS architecture. The number of FPSs from left to right is $12$, $16$, $20$, $24$, $32$, $40$, $56$, $72$, $96$, and $120$, respectively. The spectral efficiencies of the DS-FPS architecture with the RSD and RBR algorithms increase with the number of FPSs, since the number of phases provided by the FPS network increases. When the number of FPSs exceeds $32$, the enhancement of the spectral efficiency is negligible. By contrast, the energy efficiency of the DS-FPS architecture first increases and then decreases as the number of FPSs increases. The highest energy efficiency of the DS-FPS architecture is achieved with $32$ FPSs. The reason is that, when the number of FPSs is small, e.g., less than $32$, increasing the number of FPSs provides higher spectral efficiency, which leads to higher energy efficiency. In contrast, when the number of FPSs is too large, the increase in spectral efficiency is negligible, while the additional power consumption dominates and reduces the energy efficiency. Therefore, $32$ FPSs are appropriate for the considered DS-FPS architecture to achieve both high spectral efficiency and high energy efficiency. \subsection{Impact of Partial and Inaccurate CSI} We evaluate the impact of partial and inaccurate CSI on the spectral efficiency of the DS-FPS architecture. The proposed RSD and RBR algorithms are designed for the case of partial CSI. With the following adjustment, the RSD and RBR algorithms can also work for the case of full CSI: $\bm{\bar\Lambda}[k]\textbf{A}_{t}[k]^H$ in~\eqref{design_problem_1} should be replaced by the full channel matrix $\textbf{H}[k]$, and the associated steps of the RSD and RBR algorithms need to be changed accordingly. Fig.~\ref{fig_SE_partial_CSI} presents the spectral efficiency versus communication distance of the DS-FPS architecture with partial CSI and full CSI. With partial CSI, both the proposed RSD and RBR algorithms can achieve spectral efficiency similar to that of the full-CSI case, e.g., 97\% at a distance of $40$ m. \begin{figure} \centering \begin{tabular}{cc} \begin{minipage}[t]{0.32\linewidth} \includegraphics[width = 1\linewidth]{image/simulations/SE_CSI_distance.pdf} \captionsetup{font={footnotesize}} \caption{Spectral efficiency with full CSI and partial CSI. $N_t=N_r=1024$,~$Q=8$,~$\rho=20$ dBm.} \label{fig_SE_partial_CSI} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \includegraphics[width = 1\linewidth]{image/simulations/SE_CSI_xi.pdf} \captionsetup{font={footnotesize}} \caption{Spectral efficiency versus $\xi$. $N_t=N_r=1024$, $Q=8$, $\rho=20$ dBm, communication distance is $70$m.} \label{fig_SE_inaccurate_CSI} \end{minipage} \begin{minipage}[t]{0.315\linewidth} \includegraphics[width = 1\linewidth]{image/simulations/converge_RBR.pdf} \captionsetup{font={footnotesize}} \caption{Convergence of the RBR algorithm. $N_t=N_r=1024$, $Q=8$. Communication distance is $40$m.} \label{fig_convergence_RBR} \end{minipage} \end{tabular} \vspace{-9.5mm} \end{figure} In Fig.~\ref{fig_SE_inaccurate_CSI}, we evaluate the spectral efficiency of the RSD and RBR algorithms under inaccurate partial CSI at TX and RX, i.e., the known $\bm{\bar{\Lambda}}[k]$, $\textbf{A}_{t}[k]$, and $\textbf{A}_{r}[k]$ are inaccurate. We use $\xi\in[0,1]$ to represent the level of accuracy.
The inaccurate $\bm{\bar{\Lambda}}[k]$ can be represented as \begin{align} \bm{\bar{\Lambda}}_{\hbar}[k]=\xi\bm{\bar{\Lambda}}[k]+\sqrt{1-\xi^2}\textbf{E}_1\odot\bm{\bar{\Lambda}}[k], \label{eq_inaccurate_CSI} \end{align} where $\textbf{E}_1$ denotes the error matrix with elements following an i.i.d. $\mathcal{N}(0,1)$ distribution. For the inaccurate $\textbf{A}_{t}[k]$ and $\textbf{A}_{r}[k]$, the expressions are similar to~\eqref{eq_inaccurate_CSI}, but the inaccuracy is imposed on the phase, since the amplitude of the elements of these two matrices is fixed to one, as stated in~\eqref{steering_vector_UPA}. Fig.~\ref{fig_SE_inaccurate_CSI} shows the spectral efficiency of the RSD and RBR algorithms for varying values of $\xi$. The dashed lines represent the case of full CSI, which is accurate and independent of $\xi$. Compared to the case of full CSI, the spectral efficiency loss of the RSD and RBR algorithms under inaccurate partial CSI grows as $\xi$ decreases. Note that the spectral efficiency loss of both the RSD and RBR algorithms remains less than 15\% when $\xi\geq 0.7$, which is acceptable and shows that the proposed RSD and RBR algorithms are robust to CSI errors. \subsection{Convergence and Computational Complexity Analysis} Fig.~\ref{fig_convergence_RBR} shows the convergence performance of the RBR algorithm with partial CSI. We evaluate the objective function of the RBR algorithm, i.e., $\frac{1}{K}\sum\nolimits_{k=1}^{K}\lVert \textbf{P}[k]-\textbf{S}\textbf{F}\textbf{D}[k]\rVert_{F}^2$, versus the number of iterations. We run the RBR algorithm 1000 times and take the average. The simulation results show that the RBR algorithm converges quickly, within about 15 iterations. We analyze the computational complexities of the RSD and RBR algorithms, as listed in TABLE~\ref{table_computational_complexity}. In THz UM-MIMO systems, the number of antennas $N_t$ is usually very large, e.g., 1024. The number of RF chains $L_t$, the number of data streams $N_s$, the number of channel multipaths $N_p$, and the number of FPSs $Q$ within each RF chain are small, e.g., $L_t=N_s=4$, $N_p=6$, and $Q=8$ in our simulations. Without loss of generality, we have ${\mathcal{O}}(L_t)\approx{\mathcal{O}}(N_s)\approx{\mathcal{O}}(N_p)\approx{\mathcal{O}}(Q)\ll {\mathcal{O}}(N_t)$. The RSD algorithm does not involve iterations; its overall computational complexity is ${\mathcal{O}}(KN_tN_p^3L_tQ)$. The RBR algorithm is iterative, and we denote the number of iterations by $M$. The overall computational complexity of the RBR algorithm is ${\mathcal{O}}(KMN_tL_tQ)$. Specifically, for the calculation of $\textbf{P}[k]$, the RBR algorithm needs to calculate the $N_s$-truncated SVD of $\bm{\bar\Lambda}[k]\textbf{A}_t[k]^H$ to obtain the first $N_s$ columns of the right singular matrix, whose complexity is ${\mathcal{O}}((N_t+N_p)N_s^2)$~\cite{8438554}. Through the simulation in Fig.~\ref{fig_convergence_RBR}, we observe that the RBR algorithm converges within about $15$ iterations. Therefore, $M$ is usually much smaller than $N_p^3$. Consequently, although the RBR algorithm has lower spectral efficiency than the RSD algorithm, as analyzed before, its computational complexity is also lower than that of the RSD algorithm.
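As a rough worked example (a sketch based only on the leading-order terms above, not a full operation count), the ratio of the two complexities with the simulation parameters can be evaluated as follows; the number of subcarriers $K$ cancels in the ratio.
\begin{verbatim}
# Back-of-the-envelope comparison of the leading-order complexities of
# the RSD and RBR algorithms with the simulation parameters above.
N_t, L_t, N_p, Q, M, K = 1024, 4, 6, 8, 15, 1  # K cancels in the ratio

ops_rsd = K * N_t * N_p**3 * L_t * Q           # O(K N_t N_p^3 L_t Q)
ops_rbr = K * M * N_t * L_t * Q                # O(K M N_t L_t Q)

print(f"RSD / RBR leading-order ratio: {ops_rsd / ops_rbr:.1f}")
# ~14x: with M ~ 15 iterations, RBR is roughly an order of magnitude
# cheaper than RSD at the leading order, consistent with the text.
\end{verbatim}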
\begin{table} \centering \footnotesize \captionsetup{font={footnotesize}} \caption{Computational complexity analysis of the proposed RSD and RBR algorithms.} \begin{tabular}{ll} \hline \multicolumn{2}{c}{RSD algorithm}\\ \hline Operation&Complexity\\ Design $\textbf{S}_i$ via maximizing~\eqref{expression_u} for $i=1$, $2$, ..., $N_t$ &${\mathcal{O}(KN_tN_p^3L_tQ)}$\\ Calculate $\textbf{D}[k]$ through~\eqref{solution_D_RSD}, for $k=1$, $2$, ..., $K$&${\mathcal{O}}(K(N_tL_t^2Q+(N_p+L_t)N_s^2))$\\ Normalize each $\textbf{D}[k]$ as $\textbf{D}[k]=\frac{\rho}{\sum_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert_F^2}\textbf{D}[k]$&$\mathcal{O}(KN_tL_tN_s)$\\ \textbf{Overall} &$\bm{ {\mathcal{O}}(KN_tN_p^3L_tQ)}$\\ \hline \multicolumn{2}{c}{RBR algorithm}\\ \hline Operation&Complexity\\ Calculate $\bm{\bar\Lambda}[k]\textbf{A}_t[k]^H$, for $k=1$, $2$, ..., $K$&$\mathcal{O}(KN_tN_p^2)$\\ Calculate each $\textbf{P}[k]$ through $N_s$-truncated SVD of $\bm{\bar\Lambda}[k]\textbf{A}_t[k]^H$, for $k=1$, $2$, ..., $K$ &$\mathcal{O}(K(N_t+N_p)N_s^2)$\\ Calculate each $\textbf{D}[k]$ via~\eqref{solution_D_RBR}&$\mathcal{O}(K(N_tL_tN_s+(L_t+N_s)N_s^2))$\\ Design $\textbf{S}_i$ via~\eqref{solution_S_RBR} for $i=1$, $2$, ..., $N_t$&$\mathcal{O}(KN_tL_tQ)$\\ Normalize each $\textbf{D}[k]$ as $\textbf{D}[k]=\frac{\rho}{\sum_{k=1}^{K}\lVert\textbf{S}\textbf{F}\textbf{D}[k]\rVert_F^2}\textbf{D}[k]$&$\mathcal{O}(KN_tL_tN_s)$\\ \textbf{Overall ($M$ iterations)} &$\bm {{\mathcal{O}}(KMN_tL_tQ)}$\\ \hline \end{tabular} \label{table_computational_complexity} \vspace{-9.5mm} \end{table} The computational complexities of the algorithm in~\cite{7913599} for FC, the algorithm in~\cite{9110865} for DHB, the SIC algorithm~\cite{7445130} for AoSA, and the FPS-AltMin algorithm~\cite{8310586} for FPS-GC are $\mathcal{O}(KN_t^3)$, $\mathcal{O}(KN_t^3L_t)$, $\mathcal{O}(KN_t^2L_t^2)$, and $\mathcal{O}(KN_tL_t^2N_s^2Q\cdot{\rm log}_2(KN_tL_t^2Q))$, respectively. The computational complexity of the RBR algorithm is much lower than those of these existing algorithms. Although the RSD algorithm has higher complexity than the RBR algorithm, it also has higher spectral efficiency. The digital beamforming of the algorithms for the SRPS and SPSF architectures is the same, with computational complexity $\mathcal{O}(N_t^2)$~\cite{8295113,8382230}. The analog beamforming of these two algorithms is based on the first $L_t$ columns of the right singular matrix of the channel, but with different strategies for turning off antennas. The computational complexity of the analog beamforming of these two algorithms is proportional to $\mathcal{O}(N_t)$. Hence, the computational complexity of the algorithms for the SPSF and SRPS architectures is proportional to $\mathcal{O}(N_t^2)$, which is higher than that of the RSD and RBR algorithms in this work, whose complexity is linear in $N_t$. \section{Conclusion} \label{section_conclusion} In this paper, we have proposed an energy-efficient DS-FPS architecture for THz hybrid beamforming systems, using low-cost FPSs. Moreover, to account for practical partial CSI, we have proposed an RSD algorithm to design the hybrid beamforming matrices for the DS-FPS architecture. To further reduce the computational complexity of the RSD algorithm, an RBR algorithm has been developed. Extensive simulation results have shown that the DS-FPS architecture achieves remarkably higher energy efficiency than the existing architectures.
Moreover, by using the RSD and RBR algorithms, the DS-FPS architecture based on partial CSI can achieve $97\%$ of the spectral efficiency obtained with full CSI. When the partial CSI is inaccurate, the spectral efficiency loss compared to full CSI is still less than $15\%$, which is acceptable and shows that the proposed RSD and RBR algorithms are robust to the incompleteness and inaccuracy of CSI. Furthermore, we have analytically shown that the computational complexity of the RBR algorithm is linear in the number of antennas, which is much lower than that of the existing algorithms. Compared to the RBR algorithm, the RSD algorithm yields higher spectral efficiency, at the cost of increased computational complexity. \bibliographystyle{IEEEtran}
1,314,259,992,699
arxiv
\section{Introduction} \label{sec : Introduction} Digel, de Geus, \& Thaddeus (1994) found more than 10 molecular clouds possibly beyond the optical disk of our Galaxy in the direction of the Perseus arm (l $\sim$ 130$^\circ$). Their Galactic radius ($R_{\rm g}$) is estimated at more than 20 kpc and as much as 28 kpc (see also Digel et al. 1996; Heyer \& Terebey 1998). Because the distributions of Population I and Population II stars in the Galaxy have sharp cutoffs at around 18--20 kpc and 14 kpc, respectively (Digel et al. 1994, and references therein), these distant molecular clouds are potentially very interesting sites for investigating the star-forming process away from the Galactic disk with little or no perturbation from the spiral arms. In such an outermost Galaxy region, the molecular gas surface density is much smaller than in spiral arms (Heyer et al. 1998; Heyer \& Terebey 1998; Digel et al. 1996) and the \ion{H}{1} surface density is one fifth to one tenth of that in the spiral arms (e.g., Wouterloot et al. 1990). Thus, the global star formation environment in the outermost Galaxy region is quite different from that in the spiral arms. Also, the metallicity is very low in such a region. The metal abundance at $R_{\rm g}$ = 20 kpc is estimated at 12 + log(O/H) $\sim$ 8.0, assuming the standard abundance curve (e.g., Smartt \& Rolleston 1997). This metallicity is comparable to that of dwarf irregular galaxies or some damped Ly$\alpha$ systems of higher metallicity (see Ferguson, Gallagher, \& Wyse 1998, and references therein). Therefore, studies of star formation in the outermost Galaxy may reveal the details of the star formation process in an environment similar to that thought to exist during the early stage of the formation of the Galactic disk. Of the 11 distant molecular clouds found by Digel et al. (1994, 1996), Cloud 2 has the largest kinematic $R_{\rm g}$ of 28 kpc. If this kinematic $R_{\rm g}$ is correct, Cloud 2 is located far beyond the optical disk of our Galaxy and at the edge of the \ion{H}{1} gas disk (around $\sim$30 kpc, e.g., Kulkarni, Blitz, \& Heiles 1982). Cloud 2 has a high CO luminosity, corresponding to a molecular mass of M$_{\rm CO}$ $\sim$ 3.7 $\times$ 10$^4$ $M_\odot$, which suggests star formation activity in this cloud (Digel et al. 1994). Indeed, de Geus et al. (1993) found extended H$\alpha$ emission that has the same radial velocity as Cloud 2 ($V_{\rm LSR}$ = $-$103 km s$^{-1}$). They concluded that the H$\alpha$ traces an \ion{H}{2} region associated with Cloud 2, and they proposed an early B-type star near Cloud 2 (``MR-1'': Muzzio \& Rydgren 1974; see also Smartt et al. 1996) as the photoionizing source. However, no star-forming activities like those in the nearby star-forming molecular clouds have been reported thus far. Smartt et al. (1996) obtained high spectral resolution optical spectra of MR-1 and found $R_{\rm g}$ $\sim$ 15 to 19 kpc for MR-1 based on the atmospheric parameters from their spectra and the photometry by Muzzio \& Rydgren (1974)\footnote{Smartt et al. (1996) derived an $R_{\rm g}$ of 15 kpc (heliocentric distance of 8.2 kpc) based on an LTE model of the optical spectrum. They suggested that a non-LTE model could increase it to as much as 19 kpc (heliocentric distance of 12 kpc). Because the non-LTE model is more likely for stars like MR-1 with high effective temperatures, as described by Smartt et al. (1996), we assume $R_{\rm g}$ $=$ 19 kpc and a heliocentric distance of 12 kpc hereafter. The $R_{\rm g}$ of Earth is assumed to be 8.5 kpc.}.
They suggest that MR-1 is probably physically related to the \ion{H}{2} region (de Geus et al. 1993) because the radial velocity of MR-1 ($-$90 $\pm$ 13 km s$^{-1}$) is in reasonable agreement with the nebular velocity ($-$103 km s$^{-1}$). If this is the case, Cloud 2 is located close to the edge of the optical disk ($R_{\rm g}$ $\sim$ 20 kpc) rather than far beyond the optical disk. However, it still remains one of the most distant molecular clouds/\ion{H}{2} regions known to date. The metal abundance of MR-1 is estimated at 12 $+$ log(O/H) $\sim$ 8.3 (Smartt \& Rolleston 1997), which is comparable to that of irregular dwarf galaxies (e.g., $\sim$8.4 for the Large Magellanic Cloud; Arnault et al. 1988). Here we report the discovery of young stellar objects (YSOs) associated with Cloud 2, made during our near-infrared studies. These sources could shed light on the star formation processes in such a low-density and low-metallicity environment, as well as on the distance to Cloud 2. We have made comprehensive near-infrared observations of Cloud 2 that include a wide-field survey, spectroscopy of detected infrared sources, and deep imaging for the purpose of detecting low-mass YSOs. The details of our study will be reported in subsequent papers. \section{Observations and Results} \label{sec : Observations and Results} In October 1997, we made an initial near-infrared survey of Cloud 2 with the University of Hawaii's QUIST (Quick Infrared Survey Telescope) mounted at the UH 0.6 m telescope atop Mauna Kea. QUIST consists of the University of Hawaii's QUIRC (Quick Infrared Camera), a near-infrared camera with a 1024$\times$1024 HgCdTe HAWAII array, and a 25.4 cm Cassegrain telescope that provides a 25$\arcmin$ field of view with a 1.5$\arcsec$ pixel$^{-1}$ scale.\footnote{The 0.6-m telescope is not used. The QUIST telescope is attached to its equatorial mount.} The observing was done remotely from the Institute for Astronomy in Honolulu. The observations were partly affected by intermittent cirrus. Several standard stars from Elias et al. (1982) were observed at several airmasses for photometric calibration. We obtained images of a field centered on Cloud 2 in three near-infrared bands, $J$ (1.25\,$\mu$m), $H$ (1.65\,$\mu$m), and $K$ (2.2\,$\mu$m). The total integration times for the three bands were 36 min, 36 min, and 45 min, respectively. We detected seven red sources associated with Cloud 2 with QUIST (Fig. 1). The coordinates and near-infrared magnitudes of all sources and MR-1 (Muzzio \& Rydgren 1974) are summarized in Table 1. All of the near-infrared sources are associated with $IRAS$ sources in Cloud 2: IRAS 02450+5816 for IRS 1; IRAS 02447+5811 for IRS 2, 3, 4, and 5; and IRAS 02455+5808 for IRS 6 \& 7 (Fig. 2a and Table 2). {\it JHK} photometry was performed using IRAF APPHOT tasks.\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} An aperture of 18$\arcsec$ was employed. The resultant $J-H$ vs. $H-K$ color-color diagram is shown in Figure 3. We made follow-up $K$-band spectroscopy of IRS 1 and IRS 2 with the near-infrared spectrograph CGS4 at UKIRT\footnote{The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council.} in December 1997.
IRS 1 and 2 are two bright sources near the northern and southern clumps of the molecular cloud, respectively (see Fig. 2). A 40 grooves mm$^{-1}$ grating that provides a spectral resolution of $\lambda/\Delta\lambda$ $=$ 900 was used. Because the seeing was excellent, we used a narrow (0\farcs 6) slit with the tip-tilt secondary. Observing conditions were photometric. To achieve sufficient sampling, we took three exposures with a one-third pixel shift between exposures. After basic reductions (e.g., sky subtraction and flattening), one-dimensional spectra were extracted with standard IRAF tasks. The standard star HR831 was used for the correction of atmospheric extinction and flux calibration. The Br$\gamma$ absorption line in the standard spectrum was removed by interpolation before the extinction correction. The results are shown in Fig. 4. When observing IRS 1, the humidity was so low and stable that we could clearly detect emission lines in spectral regions of significant telluric water vapor absorption. The details of this spectroscopy will be described in a separate paper with results from our new CGS4 spectroscopy of additional Cloud 2 sources (Kobayashi \& Tokunaga 1999a). \section{Discussion} \label{sec : Discussion} \subsection{Near-infrared Sources} \label{subsec : Near-infrared Sources} Among all the detected sources in the observed field, the seven red sources are distinctly red, as shown in the true-color pictures (Fig. 1) and in the ($J-H$) vs. ($H-K$) color-color diagram (Fig. 3). All of the red sources are associated with Cloud 2, and {\it no other bright red sources were found apart from Cloud 2 in the surveyed field} (Fig. 1). All sources except IRS 2 appeared to be point sources at the QUIST spatial resolution ($\sim$2$\arcsec$). IRS 2 is discussed in more detail in \S\ref{subsec : IRS 2}. The Five College Radio Astronomy Observatory (FCRAO) $^{12}$CO survey data (Heyer et al. 1998)\footnote{We obtained the FCRAO data electronically from the NASA Astronomical Data Center at http://adc.gsfc.nasa.gov/.} show no foreground clouds in the direction of Cloud 2. However, a number of small foreground clouds lie around Cloud 2 in the surveyed field: a small cloud in the local arm 5$\arcmin$ to the south, another small cloud in the local arm 10$\arcmin$ to the west, a small cloud associated with the Perseus arm 10$\arcmin$ to the east, and a large cloud associated with the Perseus arm 20$\arcmin$ to the north. In spite of the existence of many foreground clouds in the surveyed field, we detected red sources only in the small area centered on Cloud 2. This result strongly suggests that all the red sources are physically associated with Cloud 2. All seven sources show a large $H-K$ excess of more than 0.8. As shown in Figure 3, the $H-K$ excesses of the sources except IRS 4 are not due to interstellar extinction but are intrinsic. YSOs, highly obscured late-type stars (e.g., OH/IR stars and protoplanetary nebulae (PPNs)), and active galactic nuclei (AGNs) are known to show large intrinsic $H-K$ excesses from dust emission (e.g., Lada \& Adams 1992 for YSOs; Garc\'\i a-Lario et al. 1997 for late-type stars; and Hunt et al. 1997 for AGNs). Although it is difficult to distinguish between these three classes of objects solely from near-infrared colors, the red sources are most likely YSOs in view of their association with the molecular cloud. The two brightest red sources, IRS 6 and 7, which, among the red sources, are most distant from Cloud 2 on the sky (Fig.
2a), might be foreground stars in view of their brightness (a few magnitudes brighter than the other sources in Cloud 2; see Table 1) and their relatively large angular distance from Cloud 2 (7$\arcmin$--8$\arcmin$ from the CO peaks; see Fig. 2a). Also, they are located at the edge of one of the foreground molecular clouds in the Perseus arm. These two sources are associated with the bright $IRAS$ point source, IRAS 02455+5808, but are not resolved within the $IRAS$ beam (1 $\sigma$ ellipse of 37$''$ $\times$ 10$''$ with PA $=$ 59$^\circ$). The $IRAS$ color is typical for various kinds of objects (e.g., galaxies, YSOs, planetary nebulae) and does not clearly reveal the nature of IRS 6 and 7. The pointlike appearance and extremely red near-infrared color ($H-K$ $>$ 1.5) suggest that they are at least Galactic stars. Further study is necessary to clarify the nature of these sources. MR-1 is located on a reddening vector from early-type stars. The visual extinction of MR-1 is estimated at about $A_V =$ 3 to 4 mag from this color-color diagram. This is consistent with the estimate from $B$ and $V$ photometry of $A_V =$ 3.1 mag (Muzzio \& Rydgren 1974). \subsection{IRS 1} \label{subsec : IRS 1} The $K$-band spectrum of IRS 1 (Fig. 4) shows three strong hydrogen recombination lines: Pa$\alpha$ (1.875\,$\mu$m), Br$\delta$ (1.945\,$\mu$m), and Br$\gamma$ (2.166\,$\mu$m). These lines show a blueshift of about 100 to 200 km s$^{-1}$, suggesting that IRS 1 is not an extragalactic object. Also, our $K$-band spectrum shows that IRS 1 is unlikely to be an OH/IR star or a PPN because these objects do not usually show hydrogen emission lines. OH/IR stars show strong CO/H$_2$O absorption lines (Nagata 1999) and PPNs usually show hydrogen {\it absorption} lines (Oudmaijer et al. 1995; Hrivnak, Kwok, \& Geballe 1994). Although a few PPNs, possibly more evolved than most PPNs, are known to show near-infrared hydrogen emission lines like planetary nebulae (e.g., Aspin et al. 1993 for M 1-16; Thronson 1981 for AFGL 618), it is highly unlikely that such a rare source is located near an $IRAS$ source in a molecular cloud (Fig. 2a). Instead, it is highly plausible that the near-infrared emission lines are signatures of an \ion{H}{2} region around YSOs. For the reasons above, we conclude that IRS 1 is a YSO physically associated with Cloud 2. Assuming a Galactic radius of 19 kpc for IRS 1 (heliocentric distance = 12 kpc), the $K$-band absolute magnitude without any correction for extinction is $M_K$ = $-$2.4 mag. This is comparable to the values for high- to intermediate-mass YSOs such as Herbig Ae/Be stars (e.g., Hillenbrand et al. 1992). We roughly estimate the spectral type of IRS 1 to be mid to late B from the $K$-band apparent magnitudes and distances of the Herbig Ae/Be samples in Hillenbrand et al. (1992). \subsection{IRS 2} \label{subsec : IRS 2} IRS 2 is located at the southern peak of the CO molecular cloud as well as at the center of the error ellipse of $IRAS$ 02447+5811 (Figs. 2a, 2b). IRS 2, 3, 4, and 5 form a cluster of red sources near the southern CO peak (Figs. 2a, 2b); IRS 2 is the brightest. The near-infrared color of IRS 2 shows that it is as highly extinguished ($A_V$ $\sim$ 10 mag) as IRS 1. IRS 2 appeared to be extended in the QUIST image with a FWHM of $\sim$7$\arcsec$ (Fig. 1b). We recently obtained deep $JHK$ images of Cloud 2 with higher spatial resolution and found that IRS 2 is a cluster of more than 20 red pointlike sources (Kobayashi \& Tokunaga 1999b).
This morphology strongly suggests that IRS 2 is a star cluster in or behind the molecular cloud. Further observations are necessary to clarify the nature of IRS 2 as well as of IRS 3/4/5. \subsection{Star Formation in Cloud 2} \label{subsec : Star Formation in Cloud 2} The ionized gas traced by H$\alpha$ emission extends from MR-1 toward Cloud 2. The peaks of the H$\alpha$ emission are between the molecular cloud peaks and MR-1 (de Geus et al. 1993; see also Fig. 2a). Since IRS 1 is located at the center of the H$\alpha$ emission, it could also be one of the major ionizing sources of this \ion{H}{2} region. However, MR-1 is likely to dominate the ionization of the entire \ion{H}{2} region because the number of ionizing photons from IRS 1 is expected to be much lower than that from MR-1, assuming a spectral type of mid- to late-B and B0--1, respectively, for IRS 1 and MR-1. The $IRAS$ source associated with IRS 1 (02450+5816) is located between the H$\alpha$ peak and the northern CO peak of Cloud 2 (Figs. 2a, 2b). This pattern is typical for Galactic \ion{H}{2} regions (e.g., Gatley et al. 1979 for M17): young OB stars photoionize the surface of an associated molecular cloud and produce a warm dust region that is traced by the $IRAS$ 60/100\,$\mu$m flux. IRAS 02450+5816 is not detected at 12 or 25\,$\mu$m but only at 60 and 100\,$\mu$m. Its [60]-[100] color temperature (about 30 K; assuming emissivity $\epsilon_\lambda$ $\sim$ $\lambda^{-2}$) is significantly lower than those for stars, planetary nebulae, single YSOs, or active galaxies (see, e.g., Walker et al. 1989). Also, IRAS 02450+5816 is cataloged with a ``small-scale structure flag,'' which denotes an association with a confirmed extended source (IRAS Point Source Catalogue 1988). These characteristics suggest that the $IRAS$ source is a warm extended region adjacent to a molecular cloud rather than a single object with compact far-infrared emission. Judging from the low dust temperature ($\sim$30 K), the warm region is not a prominent photodissociation region (PDR) in a young star cluster (e.g., S140: Timmermann et al. 1996) but a less energetic one in a dark cloud such as $\rho$ Oph (Liseau et al. 1998). This is consistent with the suggestion by de Geus et al. (1994) that Cloud 2 is more like a dark cloud (e.g., the Taurus dark cloud) than a large star-forming complex with an OB star cluster (e.g., the Orion molecular cloud complex). The bolometric luminosity of IRAS 02450+5816 is estimated to be $L_{\rm IR}$ $\sim$ 1000 $L_\odot$ from the $IRAS$ flux densities, assuming a 12 kpc heliocentric distance (Emerson 1988; Tokunaga 1999). Assuming a spectral type of B0V--B1V, the expected luminosity is $L_{\rm IR}$ $\sim$ 10$^4$ $L_\odot$ if all the photons emitted by MR-1 were entirely absorbed by the molecular cloud (see Fig. 2 in MacLeod et al. 1998). If we assume that the northern peak of Cloud 2 covers only 10\% of the sphere centered at MR-1 (a cone of 60$^\circ$ apex angle), the observed $IRAS$ luminosity can be explained by the radiation from MR-1. Although it is hard to estimate a precise solid angle from the current data, it is likely that MR-1 is the major ionizing source exciting the \ion{H}{2} region and the PDR. The $IRAS$ source associated with the southern CO peak (IRAS 02447+5811) appears to have a small offset from the CO peak toward MR-1, as is the case for the northern peak (Fig. 2b).
Since the geometry of the ionizing source, ionized gas, $IRAS$ source, and molecular cloud is similar to that of the northern peak, IRAS 02447+5811 could also be a PDR associated with Cloud 2. The near-infrared sources IRS 1--5 are located between the H$\alpha$ peaks and the molecular cloud peaks, near the $IRAS$ sources (Figs. 2a, 2b). This geometry suggests that the photoionization by MR-1 triggered the formation of the near-infrared sources in Cloud 2. Thus, the star formation in Cloud 2 seems to be dominated by the single early B-type star MR-1. It is also interesting to consider how a single B star, MR-1, was formed in the outermost Galaxy, but this is beyond the scope of this paper. \section{Conclusion} \label{sec : Conclusion} We have conducted a wide-field near-infrared search for YSOs associated with Cloud 2, as designated by Digel et al. (1994). This cloud is one of the most distant molecular clouds from the Galactic center known thus far; its Galactic radius is estimated to be 15--19 kpc (Smartt et al. 1996). Although extended H$\alpha$ emission is associated with this cloud, ongoing star-forming activity like that in the nearby star-forming molecular clouds has not been previously reported. We have discovered seven very red near-infrared sources in and around Cloud 2 with wide-field imaging in the $J$ (1.25\,$\mu$m), $H$ (1.65\,$\mu$m), and $K$ (2.2\,$\mu$m) bands. Although foreground clouds in the Perseus and local spiral arms lie around Cloud 2 on the sky, we did not detect any red sources apart from Cloud 2 within the total surveyed area of roughly 900 square arcminutes. Therefore, the detected red sources are very likely to be members of Cloud 2. Most of the sources show a large $H-K$ excess ($H-K$ $>$ 0.8), indicating their YSO nature. We have also obtained $K$-band (1.85--2.45\,$\mu$m) spectra of two of the infrared sources, IRS 1 and IRS 2, which are near the two CO peaks in Cloud 2. Strong hydrogen emission lines (Br$\gamma$, Br$\delta$, and Pa$\alpha$) with a slight blueshift were detected for IRS 1, while no emission or absorption lines were detected for IRS 2 within the uncertainties. In view of the cloud association and the emission-line spectrum, we conclude that IRS 1 is a YSO physically associated with Cloud 2. IRS 1 is associated with an $IRAS$ point source with an extended feature (IRAS 02450+5816) near the northern CO peak of Cloud 2. This $IRAS$ source has a low color temperature ($\sim$30 K) and is located between an H$\alpha$ peak and the CO peak, suggesting that it is a photodissociation region. IRS 2 is associated with IRAS 02447+5811 on the southern CO peak of Cloud 2. IRS 3, 4, and 5 are located around this $IRAS$ source. The overall distribution of the ionized gas, $IRAS$ sources, molecular cloud, and near-infrared sources suggests that MR-1, an early B-type star near Cloud 2, triggered the formation of the near-infrared sources in Cloud 2. \acknowledgements We are grateful to Mike Nassir, Jim Deane, and Richard Wainscoat, and to the staff of the University of Hawaii 2.2 m telescope, for help during our QUIST remote observing run. We thank the UKIRT support scientist John Davis and the UKIRT staff for their kind help during our CGS4 observing run. Special thanks go to Tom Kerr of UKIRT for providing special processing of the CGS4 data. Lastly, NK thanks Miwa Goto for her useful comments on the first manuscript. NK was supported by a JSPS overseas fellowship.
1,314,259,992,700
arxiv
\section{State of the Art} \label{sec:sota} \subsection{Scene and Scenario Descriptions} In recent years, knowledge graphs have found their way into a wide variety of domains. They are used to store knowledge in a structured and extensible graph representation and enable agents to query them for all kinds of information. Ulbrich et al.~\cite{Ulbrich2014GraphbasedCR} present an approach for representing and modeling context and environment in driving scenarios. It comprises various layers for describing information about lanes, traffic rules, participants, and the overall situation. Henson et al.~\cite{DBLP:conf/semweb/HensonSTK19} describe a semantic model with the most important concepts in a driving scene, such as \emph{sequence}, \emph{scene}, and \emph{participant}, and their inter-relationships. A knowledge-graph based approach for representing and fusing heterogeneous types of information of traffic scenes is presented in~\cite{Halilaj2021AKnowledge}. The integrated knowledge is then used along with graph neural networks for the classification of driving situations. \subsection{Motion Prediction} A general survey of motion prediction for automated driving can be found in~\cite{Leon2019ARO}. A prominent method that reasons jointly about the 3D scene layout of intersections as well as the location and orientation of objects in the scene is proposed in~\cite{Geiger20143DTS}. An approach that includes high-level semantic information in a spatial grid, combined with a CNN to model complex scene context for trajectory prediction, is described in~\cite{Hong2019RulesOT}. Casas et al.~\cite{Casas2018IntentNetLT} present IntentNet, a one-stage detector and forecaster based on 3D point clouds of a LiDAR sensor and dynamic maps of the environment. MultiPath~\cite{Chai2019MultiPathMP} uses state-sequence anchors that correspond to modes of trajectory distributions. At inference, the model predicts a discrete distribution over the anchors and regresses offsets from anchor waypoints along with uncertainties. These yield a Gaussian mixture at each time step. Zhao et al.~\cite{Zhao2020TNTTT} present the Target-driveN Trajectory (TNT) framework, which consists of three steps: predicting the agent's potential future target states, generating trajectory state sequences conditioned on the targets, and estimating trajectory likelihoods to obtain final compact trajectory predictions. \subsection{Graph-based Motion Predictions} A survey of deep learning-based vehicle behavior prediction for automated driving is provided by~\cite{Mozaffari2019DeepLV}. Diehl et al.~\cite{Diehl2019GraphNN} compare the trajectory prediction performance of Graph Convolutional Networks (GCN)~\cite{Kipf2017SemiSupervisedCW} with Graph Attention Networks (GAT)~\cite{DBLP:conf/iclr/VelickovicCCRLB18} and propose modifications for the task of vehicle behavior prediction. Attention-based spatio-temporal Graph Neural Networks (GNNs)~\cite{Scarselli2009TheGN} for pedestrian trajectory prediction are proposed in~\cite{Zhou2021ASTGNNAA,Yu2020SpatioTemporalGT,Haddad2019SituationAwarePT}. Li et al.~\cite{Li2019GRIPGI} present a graph-based interaction-aware trajectory prediction (GRIP) approach, which models the interaction between the vehicles using a GCN and graph operations. Mo et al.~\cite{Mo2021GraphAR} describe a GNN-RNN based approach for trajectory prediction, where the vehicles are modelled by an RNN and the interactions between them as a directed graph using a GNN.
The output of the model is further processed by a Long Short-Term Memory (LSTM)~\cite{hochreiter_long_1997}. An enhanced version~\cite{Li2019GRIPEG} uses both fixed and dynamic graphs for trajectory prediction. Frenet coordinate systems are well investigated for motion planning and trajectory generation~\cite{Werling2010OptimalTG}. The Frenet representation is adopted for multi-agent interaction-aware trajectory prediction by a GNN-based solution \cite{ma_multi-agent_2021}. Instead of a Cartesian coordinate system and global context images, it uses the pairwise relations between agents and pairwise context information. Fang et al.~\cite{Fang2019OntologybasedRA} propose an approach for long-term behavior prediction using ontology-based reasoning that considers interactions between traffic participants. The most likely behavior is inferred by a Markov logic network. Gao et al.~\cite{Gao2020VectorNetEH} present VectorNet, a vectorized definition of the scene where unified representations are learned from the vectorized form. The graphic extent of a road feature can be a point, a polygon, or a curve in geographic coordinates. A GNN is then used to incorporate the set of vectors, where each vector is treated as a node in the graph. Li et al.~\cite{Li2021SpatioTemporalGD} present a spatio-temporal graph dual-attention network for multi-agent prediction and tracking. It incorporates relational inductive biases and a kinematic constraint layer, and leverages both trajectory and scene context information. \section{Introduction} In road traffic, many factors influence how to drive safely from one point to another. While a number of these factors can be considered static, for example the road infrastructure, the behavior of the various traffic participants changes dynamically over time. Therefore, the behavior of nearby traffic participants is important in determining the appropriate action of an autonomous vehicle, e.g., steering or braking. Traffic participants might react differently depending on their particular situation, so a model of their intentions, indicating whether they will brake soon, would provide added value. To obtain such a model, in addition to the positions of traffic participants, knowledge about how they relate to other traffic participants is crucial. This is particularly relevant for autonomous driving, where the autonomous agent needs to detect and evaluate the current traffic situation (scene) in order to generate a trajectory. Human drivers derive an understanding of the underlying traffic situation directly from their perceived environment and their knowledge about traffic rules, as well as experience from previous social interactions and behaviors. In this work, we incorporate explicit spatial information about relations among the various traffic participants, and clearly demonstrate the importance of this information. For autonomous agents, it is desirable to have an explicit description of the road environment or driving context in order to evaluate the influences of individual factors and ensure the safety requirements of the autonomous vehicle. To this end, we use semantic scene graphs as a way to describe the driving context of an autonomous agent, including the relations to nearby traffic participants, independently of the scene type (constellation of traffic participants) or road geometry. This description is represented in a form that can be used as contextual information for the motion planning of an autonomous agent.
This paper is structured as follows: In \Cref{sec:sota}, we provide a review of existing traffic scenario descriptions and related work in the context of motion prediction. \Cref{sec:scene_description} gives a short overview of the scene representation model used and describes how certain graph attributes are derived. In \Cref{sec:implementation}, our implementation with regard to the datasets used is discussed. Moreover, two network architectures for extracting information from a scene graph are proposed. The results of the models are presented and evaluated in \Cref{sec:experiments}. Finally, in \Cref{sec:conclusion} we conclude this contribution. \section{Conclusion} \label{sec:conclusion} In this work, we present two models that are capable of predicting the movement of traffic participants based on spatial semantic scene graphs. By means of the scene graphs, semantic relationships among traffic participants are encoded. Through the ablation analysis, it is shown that these relationships play an important role in the behavior of the traffic participants. In addition, the temporal model empirically shows that the preceding scenes provide important contextual information for the prediction task and lead to significantly better results. With the inclusion of previous scenes, up to 46\% better results are achieved for the PandaSet and up to 73\% better results are achieved for the INTERACTION dataset compared to the baseline. In the future, more semantic relation types will be investigated. While the presented model is able to represent various types of traffic participants and their different types of relations, pedestrians, for example, are not included yet, and only spatial relations are considered in our experiments. In the upcoming work, the influence of other traffic participants will be investigated. In addition, the scene graph used is limited to a few attributes, which will be extended in the future with further semantic information, possibly from external sources. Moreover, we will also investigate to what extent the contextual information of the scene extracted by the presented model can be used to improve a state-of-the-art image-based trajectory predictor. \section{Experiments} \label{sec:experiments} In this section, we evaluate the results of the two proposed network architectures. With the experiments, we want to show that the scene graphs can help to provide contextual information of a traffic scene with respect to behavior prediction. For each entity $i$ in $G(t)$, the acceleration $a_i$ is set as the respective label. The acceleration is calculated using the ground-truth trajectory velocities $vel_i$ of entity $i$. \begin{align} a^{label}_i(t) = \frac{vel_i(t+\delta_t) - vel_i(t)}{\delta_t} \label{eq:acceleration} \end{align} It should be noted that acceleration prediction is only a single use case of the proposed approach. It is intended to demonstrate the applicability and to verify the functionality. In our experiments, we set $\delta_t$ to 10 time steps, which corresponds to a time period of $1\,s$ in the datasets used. We evaluate how well our model can imitate acceleration using two different motion datasets.
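As an illustration of the label construction in \Cref{eq:acceleration}, the following minimal sketch (not the code used in our experiments) computes the acceleration targets from the ground-truth speed of a single entity sampled at 10\,Hz.
\begin{verbatim}
# Sketch of the label construction in Eq. (eq:acceleration): the target
# acceleration of an entity at time t is the change of its ground-truth
# speed over the next delta_t = 10 steps (1 s at 10 Hz).
import numpy as np

def acceleration_labels(speeds, delta_steps=10, dt=0.1):
    """speeds: array of shape (T,) with ground-truth speeds in m/s."""
    delta_t = delta_steps * dt                 # 10 steps -> 1 s
    return (speeds[delta_steps:] - speeds[:-delta_steps]) / delta_t

# Example: a vehicle braking linearly from 10 m/s to rest over 4 s
speeds = np.linspace(10.0, 0.0, 41)
print(acceleration_labels(speeds)[:3])         # approx. -2.5 m/s^2 each
\end{verbatim}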
We compare our model against a set of simple baselines for two reasons: first, we want to measure the benefit of a spatial semantic scene graph and, second, to the best of our knowledge, there are no comparable approaches based on our input data. Many recent approaches use a rasterized bird's-eye image of the scenes~\cite{bansal_chauffeurnet_2018,Chai2019MultiPathMP} or at least the coordinates of traffic participants as input. The work of Ma et al.~\cite{ma_multi-agent_2021} is the most closely related to our approach, but it also uses image information for the edge attributes. Therefore, two straightforward baselines were used: Baseline \emph{Mean} emulates a model that always outputs the mean value over the whole test dataset. Baseline \emph{Zero} emulates a model that always outputs zero as the acceleration value, regardless of the input. \subsection{Training} We trained our model using Adam~\cite{kingma2014adam} with an initial learning rate of $10^{-3}$ and a batch size of 1. As a learning rate scheduler, we employed the \emph{ReduceLROnPlateau} method with a factor of 0.1 and a patience value of 10. The number of epochs is set to a maximum of 200 with an early stopping mechanism after 25 epochs without validation improvement. In our experiments, training stopped at roughly 75 epochs, depending on the model and dataset. Furthermore, we clipped the gradients to a global gradient norm of 1. Our model is trained on the INTERACTION dataset with 13603 graphs, corresponding to 59972 entities. The model is subsequently tested with 11871 unseen graphs, i.e., the behavior of 57413 traffic participants. The training set of the PandaSet dataset consists of 4977 graphs and the testing set consists of 2923 graphs. During the training phase, we used the loss defined in \Cref{eq:loss_all} for the INTERACTION dataset and the loss defined in \Cref{eq:loss_pandaset} for the PandaSet dataset. \begin{align} Loss = \frac{1}{n}\sum_{i\in V}\left| a_i - a^{label}_i \right| \label{eq:loss_all} \end{align} \begin{align} Loss_{pandaset} = \left| a_0 - a^{label}_0 \right| \label{eq:loss_pandaset} \end{align} Due to the inaccurate vehicle tracking described in \Cref{sec:preprocessing}, only the acceleration of the measurement vehicle $a_0$ is predicted for the PandaSet dataset and used for training. \subsection{Results} The results are reported in \Cref{tab:error_table} for each model as the linear error (L1 loss) and the squared error (MSE loss). Compared to the baseline, the Single Step model (see \Cref{fig:nnconv_net_architecture}) is about 20\% better at predicting accelerations. The Single Step model with no edge data serves as an ablation: only the node information is fed into the neural network, and all edge information is set to zero. Compared to this ablation model, our model performs about 12\% better. The temporal network architecture performs significantly better. Compared to the baseline, results that are between 70\% and 73\% better are achieved, depending on the length of the sequence under consideration. Sequences with 5, 10 and 15 scenes are considered. The longer the time history, the better the acceleration of the individual traffic participants can be predicted. However, in our experiments there are only minimal changes in the errors when considering 10 or 15 scenes per sequence. To make the results comparable, we assume the calculated acceleration to be constant for future time steps and calculate a trajectory based on the ground truth path of a traffic participant.
This describes the deviation of the driven distance, which is comparable to the \emph{Final Displacement Error} (FDE) of a trajectory comparison, as follows: \begin{align} \textrm{FDE}_t = \left| a_i - a^{label}_i \right| \frac{t^2}{2} \end{align} With our Recurrent15 model on the INTERACTION dataset, this would give an $\mathrm{FDE}_3 =0.765\,m$. However, it should be kept in mind that so far our model only predicts the longitudinal motion component (accelerating/braking) of a traffic participant and neglects the lateral component (steering). If we compare the predictive performance of our model with that of state-of-the-art models \cite{interpret_challenge}, we perform in the lower range of common predictors. However, there are obvious reasons for this, since we optimize only for an acceleration value and not for a future trajectory, i.e., the future course of the road is not included in the model. The curvature of the road and regulating road markings, such as a stop line, are also not considered yet. Instead, we show that relational information in a traffic scene helps to enable more meaningful predictions in general. Our approach complements the previous work and is not intended to replace it. \begin{table}[htbp] \centering \begin{tabular}{lccc} \hline Model & Dataset & L1 & MSE \\ \hline Single Step & INTERACTION & 0.494 & 0.451 \\ Single Step no edge data & INTERACTION & 0.552 & 0.538 \\ Recurrent5 & INTERACTION & 0.271 & 0.14 \\ Recurrent10 & INTERACTION & 0.188 & 0.098 \\ Recurrent15 & INTERACTION & \textbf{0.170} & \textbf{0.085} \\ Baseline Mean & INTERACTION & 0.654 & 0.740\\ Baseline Zero & INTERACTION & 0.599 & 0.622\\ \\ Single Step & PandaSet & 0.332 & 0.378 \\ Recurrent5 & PandaSet & 0.259 & 0.294 \\ Recurrent10 & PandaSet & 0.178 & 0.241 \\ Recurrent15 & PandaSet & \textbf{0.158} & \textbf{0.206} \\ Baseline Mean & PandaSet & 0.347 & 0.489 \\ Baseline Zero & PandaSet & 0.34 & 0.489 \\ \hline \end{tabular} \caption{Evaluation results of the tested models.} \label{tab:error_table} \end{table} The relative trend of the prediction models between the INTERACTION and the PandaSet dataset is similar (see \Cref{tab:error_table}). However, comparatively worse results are achieved with PandaSet. One possible explanation is that the PandaSet dataset contains about 20 times fewer training samples than the INTERACTION dataset, because in the latter a future acceleration is estimated for each traffic participant and not only for the ego vehicle. In addition, the PandaSet contains significantly larger numbers of participants per traffic scene. On average, it contains about 24 traffic participants, each with 12 relations to its neighbors. In the INTERACTION dataset, there are on average about 5 vehicles in a scene and each object has about 2 relations. This reduces noise and allows the model to focus more precisely on individual traffic participants. \section{Scene Representation} \label{sec:scene_description} In this section, we describe the procedure for creating the scene description for a concrete application example. To analyze realistic traffic scenes and the resulting behaviors, we use openly available datasets, which we describe in detail in \Cref{sec:dataset}. As the foundation for representing relations between traffic participants and describing the underlying traffic scenes, we apply the \emph{Semantic Scene Model} (SSM), which is based on our previous work \cite{zipfl2021traffic}.
This model establishes relations between the traffic participants in the scene based on the given road topology. The traffic scene is mapped to a graph, in which the traffic participants are described by nodes and their relations by edges. \begin{figure}[htbp] \centering \includegraphics[width=0.8\columnwidth]{images/Interaction_Dataset_Visualization_roundabout_example.png} \caption{Traffic scene seen from bird's eye view.} \label{fig:interaction_roundabout_example} \end{figure} \begin{figure}[htbp] \centering \def\columnwidth{0.7\columnwidth} \input{images/graph_Dataset_Visualization_roundabout_example.pdf_tex} \caption{Scene graph topology of the traffic scene of \Cref{fig:interaction_roundabout_example}.} \label{fig:graph_interaction_roundabout_example} \end{figure} \Cref{fig:interaction_roundabout_example} illustrates an exemplary traffic scene from the INTERACTION dataset \cite{interactiondataset} and its corresponding topology of the derived scene graph is depicted in \Cref{fig:graph_interaction_roundabout_example}. Each of the seven traffic participants is represented as a node. Each node $v$ contains information about its classification (car, pedestrian, truck, ...) and its velocity. Edges between the nodes are of three different types: longitudinal, lateral and intersecting. This can be seen for example at the nodes or vehicles 10, 13, 14 and 15. If we consider the road layout in \Cref{fig:interaction_roundabout_example}, vehicles 13 and 15 follow vehicle 10. Vehicle 14 also follows vehicle 10, but at the same time, vehicle 13 and vehicle 15 have an intersecting relation with vehicle 14, since they drive on two roads that will merge eventually. This situation can be illustrated well by the generic graph description (\Cref{fig:graph_interaction_roundabout_example}). When converting the scene into the scene graph, the individual traffic participants are assigned to lanes. This allows the traffic scene to be viewed in the \emph{Frenet} space, with the roads being the curves. In addition to the classification of the edge type, its probability and the distance along the roadway, between the two respective entities, is also included as an edge attribute (see \cref{tab:distance_table}). When assigning every entity to a lane, the distance to the centerline of the lane and the angular difference between the entity's pose and the lane is calculated. These two values can be used to calculate the assignment probability $P(i)$ of an entity $i$ to a lane. We define the probability $P(e_{ij})$ of an edge $e_{ij}$ as follows: \begin{align} P(e_{ij}) = P(i) \cdot P(j). \end{align} A more detailed overview of the calculation of the probability of each entity, especially with respect to the assignment to multiple lanes, is given in our previous work~\cite{zipfl2021traffic}. An example of the distance attributes, a schematic traffic scene and the corresponding edges is presented in \Cref{fig:distance_scheme}. If the relation between two entities is longitudinal (\texttt{lon}), the distance along the center line of the road to the projection of the entity is calculated, so that a lateral shift of the entity does not affect the distance (see $d_{2,4}$). The principle remains the same for merging lanes. Here, the distance along the road is also determined. The point where the two roads merge is called $p_{int}$. This means that the distance between vehicle (3) and vehicle (4) is noted as $d_{3,p}+d_{p,4}$. 
For two parallel roads, the distance between the two entities is determined similarly to the longitudinal calculation. In this example, vehicle (2) is projected to the same lane as vehicle (1). Then the distance between both traffic participants is measured (compare $d_{1,2}$). For intersecting entities (\texttt{int}), only the distance from the tail-entity to the intersection point (here $p_{int}$) is considered (see $d_{2,p}, d_{3,p}$).
\begin{figure}[htbp]
\centering
\def\columnwidth{0.9\columnwidth}
\input{images/intersection_distances.pdf_tex} \\
\caption{Schematic traffic scene with distances between traffic participants}
\label{fig:distance_scheme}
\end{figure}
\begin{table}[htbp]
\centering
\begin{tabular}{lccc}
\hline
edge & edge probability & classification & distance \\ \hline
$e_{12}=(1,2)$ & $P(e_{12})$ & \texttt{lat} & $d_{1,2}$ \\
$e_{21}=(2,1)$ & $P(e_{21})$ & \texttt{lat} & $-d_{1,2}$ \\
$e_{23}=(2,3)$ & $P(e_{23})$ & \texttt{int} & $d_{2,p}$ \\
$e_{32}=(3,2)$ & $P(e_{32})$ & \texttt{int} & $d_{3,p}$ \\
$e_{24}=(2,4)$ & $P(e_{24})$ & \texttt{lon} & $d_{2,4}$ \\
$e_{34}=(3,4)$ & $P(e_{34})$ & \texttt{lon} & $d_{3,p}+d_{p,4}$ \\ \hline
\end{tabular}
\caption{Structure of the edge attributes of the scene described in \Cref{fig:distance_scheme}.}
\label{tab:distance_table}
\end{table}
\section{Implementation and Training}
\label{sec:implementation}
\subsection{Dataset}
\label{sec:dataset}
Two different datasets are investigated: first, the PandaSet dataset \cite{xiao2021pandaset}, which provides short sequences (8s). Driving scenes are captured by a car equipped with multiple cameras and sensors. The captured scenes are divided into single images that depict the state of a scene at a specific point in time. The dataset comprises complex scenarios typically happening in urban areas, like dense traffic and construction sites, as well as a variety of times of day and lighting conditions. Second, the INTERACTION dataset \cite{interactiondataset}, which is recorded by a drone at intersections and roundabouts. This dataset focuses on the behavior of vehicles on roads in different countries. In addition to the object tracks, the corresponding roads are provided as HD\footnote{high-definition} maps. Both datasets are recorded at a frequency of $10\,Hz$.
\subsection{Preprocessing}
\label{sec:preprocessing}
We initially convert both datasets to the same data format. The output data format is the same as in the INTERACTION dataset. Here, all tracks of each traffic participant for each point in time are stored in a csv\footnote{comma-separated values}-file. Each state of each traffic participant is uniquely defined by the track-id and the timestamp. In addition, the pose, the classification $\{Car, Pedestrian, Truck,...\}$, the velocity vector and, in the PandaSet, the motion-state $\{Parked, Stopped, Driving\}$ are stored. The focus of the PandaSet dataset is on object detection using Lidar sensors. Nevertheless, object lists with the individual tracked entities with their pose and classification can be read out. An important input variable is the velocity of the individual traffic participants. In the PandaSet dataset, this is calculated retrospectively based on the change in position over time, followed by a low pass filter to compensate for measurement inaccuracies.
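As an illustration, this retrospective velocity estimation could look roughly as follows (a minimal Python sketch; the exact filter used in our preprocessing is not detailed here, so a simple exponential moving average serves as a placeholder):
\begin{verbatim}
import numpy as np

def estimate_velocity(positions, dt=0.1, alpha=0.3):
    """Finite-difference velocity followed by a first-order low-pass filter.

    positions: (N, 2) array of x/y coordinates sampled at 10 Hz (dt = 0.1 s).
    alpha:     smoothing factor of the exponential moving average (assumed).
    """
    raw = np.diff(positions, axis=0) / dt    # backward differences, shape (N-1, 2)
    raw = np.vstack([raw[0], raw])           # pad so the output length matches the input
    smoothed = np.empty_like(raw)
    smoothed[0] = raw[0]
    for k in range(1, len(raw)):             # exponential moving average as low pass
        smoothed[k] = alpha * raw[k] + (1 - alpha) * smoothed[k - 1]
    return smoothed
\end{verbatim}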
The original dataset contains partially duplicate annotations for a given entity at a given timestamp. In order to remove the duplicates, we represent each entity by a rotated rectangular polygon and calculate the \emph{intersection over union} (IoU) between possible duplicates. For an IoU of 0.2 and above, we consider the two entities as conflicting. To identify the set of duplicates, we interpret conflicts as edges and the involved entities as nodes in a conflict graph $G_{conf}$. We then search $G_{conf}$ for a maximum independent set of nodes, i.e., a largest set of mutually unconnected nodes, and delete all other entities. Furthermore, we removed all vehicles farther away than 80 meters from the ego vehicle, since the measurement inaccuracy for distant vehicles leads to extreme velocities and accelerations.
\subsection{Net Architecture}
For modelling the traffic scenes, we use a graph representation $G = (V,E)$ on which we can leverage a message passing approach. This allows for displaying specific traffic participants and their relations in a machine-readable format. For further calculations, the graph is described by three data blocks. The node attributes are described in an $n\times\left|v\right|$ matrix, where $n$ describes the number of nodes in the graph $G$ and $\left|v\right|$ the number of attributes of a node $v\in V$. The edge information is stored in an $m\times\left|e\right|$ matrix, where $m$ represents the number of edges and $\left|e\right|$ represents the number of edge attributes. The graph topology information, i.e., which nodes $i,j$ are connected by which edge $e_{ij}\in E$, is stored in a $2\times m$ matrix in the COO\footnote{Coordinate list}-format. \\ \newline \textbf{Single Step Network} In this process, the main goal is to capture the behavior of a traffic participant based on its immediate (dynamic) environment. A graph convolution approach is used to learn the representation of the participant's environment, where each participant is denoted by a node. Thus, nodes are updated depending on the outgoing edges and the corresponding neighbors. In general, the graph convolution approach consists of two steps: the message step (\Cref{eq:message}) and the update step (\Cref{eq:update}). In \Cref{fig:nnconv_net_architecture}, a schematic illustration of our graph convolution architecture is depicted. Our message passing approach is based on the work of Gilmer et al.~\cite{gilmer_neural_2017}.
\begin{align} m_i^{s+1} = \frac{1}{\left|N(i)\right|}\sum_{j\in N(i)} h_{j}^s \cdot \theta_{edge}(e_{ij}) \label{eq:message} \end{align}
\begin{align} h_i^{s+1} = h_i^s\cdot \Theta+m_i^{s+1} \label{eq:update} \end{align}
A message $m_i^{s+1}$ for each node $i$ is generated from the neighbor nodes' states $h^s_j$ and the corresponding edge attributes $e_{ij}$, for all neighbors $j\in N(i)$. This is done by the propagation module P (see \Cref{fig:nnconv_net_architecture}), in which all outgoing edge attributes $h_{e_{ij}}$ of the root node $i$ are extracted together with the corresponding initial attributes $h^0_j$ of the neighbor node $j$. The edge attributes of $e_{ij}$ are fed through a neural network $\theta_{edge}$, which consists of two fully connected layers and an interposed activation function. Subsequently, the current hidden state of the root node is transformed by a learnable weight matrix $\Theta$. Adding this transformed state and the message $m_i^{s+1}$ generated in the previous step yields the new hidden state $h^{s+1}_i$.
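For concreteness, the message and update steps of \Cref{eq:message} and \Cref{eq:update} could be implemented along the following lines (a minimal PyTorch sketch under simplifying assumptions, not our exact implementation; the edge-index convention and layer sizes are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    """One step of the message/update equations: mean-aggregated,
    edge-conditioned messages plus a linear root-node update."""
    def __init__(self, node_dim, edge_dim, hidden_dim=32):
        super().__init__()
        # theta_edge maps edge attributes to a (node_dim x node_dim) mixing matrix
        self.theta_edge = nn.Sequential(
            nn.Linear(edge_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, node_dim * node_dim))
        self.Theta = nn.Linear(node_dim, node_dim, bias=False)

    def forward(self, h, edge_index, edge_attr):
        # h: (n, d); edge_index: (2, m) with rows (target i, source j); edge_attr: (m, |e|)
        i, j = edge_index
        d = h.size(1)
        W = self.theta_edge(edge_attr).view(-1, d, d)             # (m, d, d)
        msg = torch.bmm(h[j].unsqueeze(1), W).squeeze(1)          # h_j . theta_edge(e_ij)
        agg = torch.zeros_like(h).index_add_(0, i, msg)           # sum over neighbours
        deg = torch.zeros(h.size(0), device=h.device).index_add_(
            0, i, torch.ones(i.size(0), dtype=h.dtype, device=h.device)).clamp(min=1)
        return self.Theta(h) + agg / deg.unsqueeze(1)             # update step
\end{verbatim}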
\begin{figure}[htbp] \centering \def\columnwidth{\columnwidth} \input{images/net_architecture_diagram.pdf_tex} \caption{Graph convolution architecture} \label{fig:nnconv_net_architecture} \end{figure} Our network is constructed in such a way that several of these message passing steps $(0 \leq s \leq S)$ can be connected in series. This propagates the information of the individual nodes further into the graph. In this work, the GNN architecture contains only one step of message passing $(s=S)$ to demonstrate the principle function of the model and to keep it as simple as possible. After the message passing step, each node in the graph has a hidden state depending on its neighbors and the corresponding relations. Finally, each $h_i^S$ is mapped to a single floating point number which represents the predicted acceleration $a_i$ via a neural network $\phi(\cdot)$, which consists of two fully connected layers with an interposed activation function. \begin{align} a_i = \phi(h_i^S) \end{align} In our implementation $\theta_{edge}$ is an MLP with a hidden size of 32. Hidden states $h$ have the size of 64. $\phi$ is also an MLP with a hidden size of 128. All MLPs have an interposed ReLU activation function. All hyperparameters were optimized manually and empirically. \\ \newline \textbf{Recurrent Network} In addition to considering only a single traffic scene, we examine the influence of a temporal sequence of scenes on motion prediction. To capture the temporal information, we use an LSTM architecture (see \Cref{fig:rgnn_net_architecture}). Our temporal architecture is based on the works of Taheri et al.~\cite{taheri_predictive_2019}. A sequence can contain up to $T$ previous scenes. Hereby, $T$ is practically only limited by the dataset and the scenes occurring in it. It should be noted that only the acceleration of the traffic participants that are in the scene at the starting point $t=0$ are predicted. Traffic participants that are in the sequence but leave the scene earlier are still considered for the calculation of the historical states. At first, we use the graph convolution $GConv$ architecture described above to generate a hidden state for each traffic participant ($H^1(t) = \{h^1_0(t),...,h^1_n(t)\}$) with respect to its neighbors at a given time $t$ (compare \Cref{fig:nnconv_net_architecture} and \Cref{eq:message}, \Cref{eq:update}). These states are fed into an LSTM block \cite{hochreiter_long_1997}, which updates the LSTM encoding $H^{LSTM}$. This step is then repeated for every scene in the sequence to continuously update the LSTM encoding. The final encoding $H^{LSTM}(t)$ is then mapped to the corresponding acceleration values $a = \{a_0, a_1, ..., a_n\}$ for each node using $\phi$. \begin{figure}[htbp] \centering \def\columnwidth{\columnwidth} \input{images/rgnn_net_architecture_diagram.pdf_tex} \caption{Recurrent graph neural network architecture} \label{fig:rgnn_net_architecture} \end{figure}
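For illustration, the recurrent variant could be assembled from the graph convolution step, an LSTM cell and the read-out network $\phi$ roughly as follows (again a simplified PyTorch sketch; it reuses the \texttt{MessagePassingStep} module from the previous sketch and assumes, for brevity, a fixed set of nodes over the sequence):
\begin{verbatim}
import torch
import torch.nn as nn

class RecurrentGNN(nn.Module):
    """GConv per scene -> LSTM over the scene sequence -> MLP read-out
    producing one acceleration value per node."""
    def __init__(self, node_dim, edge_dim, hidden_dim=64):
        super().__init__()
        self.gconv = MessagePassingStep(node_dim, edge_dim)   # from the sketch above
        self.lstm = nn.LSTMCell(node_dim, hidden_dim)
        self.phi = nn.Sequential(nn.Linear(hidden_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, sequence):
        # sequence: list of (node_attr, edge_index, edge_attr), oldest scene first
        state = None
        for h0, edge_index, edge_attr in sequence:
            h1 = self.gconv(h0, edge_index, edge_attr)         # H^1(t)
            state = self.lstm(h1, state)                       # update the LSTM encoding
        return self.phi(state[0]).squeeze(-1)                  # accelerations a_0 ... a_n
\end{verbatim}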
\section{Introduction} The prototypical examples of noninteracting topological states of matter are categorized by quantized invariants, corresponding to (sets of) energy bands that are separated by gaps from the rest of the band structure. A conceptual step forward in the topological characterization of materials was the definition of topological invariants for systems in which energy gaps vanish and bands touch~\cite{volovikbook}. For example, in the presence of inversion and time-reversal, two-dimensional spinless graphene exhibits a quantized topological invariant. In the vicinity of isolated points in the Brillouin zone, where Dirac nodes occur, the topological invariant is obtained by integrating the Berry potential over a closed line encircling these points. Similarly, in three dimensions, nodes may appear in pairs of opposite chirality, i.e., as sources and sinks of Berry flux~\cite{Nielsen1981a,Nielsen1981b,Nielsen1983, volovikbook, Fang2003}. The two nodes in each pair can be pushed apart in reciprocal space by breaking the product of time reversal and inversion symmetries. The low-energy theory describing electrons at such a nodal point is encapsulated in the Weyl equation. When the chemical potential crosses or is close to these nodal points in a material, the latter is called a Weyl semimetal (WSM)~\cite{Wan2011,Xu2011,Balents2011,Bernevig2015}. Unlike Dirac nodes in graphene, Weyl nodes cannot be gapped or otherwise removed from the band structure by small translation-symmetry preserving perturbations. When a closed Fermi surface (FS) patch encloses only one Weyl node, one can define a FS Chern number, which is equal to the topological charge of the node~\cite{Turner2013,Wang2012,Wang2013a,Weng2014}. Recently, experimental evidence for the discovery of Weyl fermions in TaAs and NbAs was provided by angle-resolved photoemission spectroscopy~\cite{Lv2015a,Lv2015b,Xu2015a,Xu2015b,Yang2015} and (magneto)transport measurements~\cite{Zhang2015,Zhang2015a}. The theory that guided the discovery~\cite{Weng2015,Huang2015c} attracted immediate attention, because the materials are stoichiometric and therefore easy to synthesize. The prediction of a second type of WSMs rendered another two compounds, WTe${}_2$ and MoTe${}_2$, promising candidates for realization~\cite{Soluyanov2015,Sun2015,Wang2015}. One of the most interesting hallmarks of a WSM is the presence of open constant-energy contours in its surface band structure called Fermi arcs~\cite{Wan2011,Xu2011}. The existence of the corresponding surface states is a direct consequence of the nonzero topological charge associated with a Weyl node. Since they pertain solely to the surface, these previously elusive FS features are also amenable to observation via scanning tunneling spectroscopy (STS). An analysis of quasiparticle-interference (QPI) patterns in the Fourier-transformed local density of states (FTLDOS) at the boundary of a material can yield important properties of surface quantum states~\cite{McElroy2003,Aynajian2012,Roushan2009,Zhang2009,Seo2010,Okada2011,Alpichshev2010,Alpichshev2011,Fang2013,Zhang2014a}. The potential for detecting Fermi arcs with STS was recognized in earlier theoretical work~\cite{Hosur2012,Hofmann2013}, but the QPI fingerprints of Fermi arcs remain theoretically and experimentally unresolved. The purpose of the present manuscript is to determine the unique signatures of Fermi arcs in the QPI patterns obtained by STS measurements at the surface of a WSM.
First, we identify the most elementary QPI pattern shapes in the presence of a single Fermi arc and define criteria for their unambiguous experimental observation. Since both discovered and candidate WSMs host two or more pairs of Weyl nodes and will hence have more than one Fermi arc on a given surface, we examine the fundamental QPI features when several arcs coexist on the same surface. In the case of type-2 WSMs, the boundary FS will comprise both Fermi arcs and electron and hole pockets. We therefore study the fate of the nontrivial characteristics in QPI when surface modes are allowed to scatter into states originating from the bulk. We then pinpoint all aforementioned signatures in QPI patterns obtained from both generic tight-binding models and density functional theory (DFT) calculations for MoTe${}_2$ and TaAs.
\section{Theory of QPI at the surface of Weyl semimetals}
\subsection{Definition of QPI response}
The FTLDOS obtained from STS measurements can be generally expressed as~\cite{Capriotti2003,Derry2015}
\begin{align} &F(\bm{q},E) = \frac{i}{2\pi}[\Lambda(\bm{q},E)-\Lambda(-\bm{q},E)^*], \label{eq:ftldos}\\ &\Lambda(\bm{q},E) = \int \mathrm{d}\bm{k}\, \text{Tr}[G(\bm{k}+\bm{q},E)T(\bm{k}+\bm{q},\bm{k};E)G(\bm{k},E)] \,, \end{align}
where $G(\bm{k},E)$ is the retarded Green's function for a clean sample and $T(\bm{k},\bm{k}';E)$ is the $T$-matrix associated with disorder~\cite{Mahan}. On heuristic grounds, the power spectrum $|F(\bm{q},E)|$ is commonly approximated by the autocorrelation of the spectral functions~\cite{Hoffman2002,Simon2007,Roushan2009}
\begin{align} &J_{\nu}(\bm{q},E) = \int\mathrm{d}\bm{k} \, \text{Tr}[A_{\nu}(\bm{k}+\bm{q},E) A_{\nu}(\bm{k},E)], \label{eq:J} \\ &A_{\nu}(\bm{k},E) = (i/2\pi)\text{Tr}_{\bar{\nu}}[G(\bm{k},E)-G(\bm{k},E)^\dag], \label{eq:A} \end{align}
where $\nu$ stands for the set of inner degrees of freedom that is preserved in the scattering (e.g. spin in spin-preserving scattering), and Tr$_{\bar{\nu}}$ stands for the partial trace over all inner degrees of freedom other than $\nu$, such that $A_{\nu}$ is a reduced density matrix in terms of $\nu$. In this work, we will consider two types of autocorrelations: the joint density of states (JDOS) $J_0$, with $\nu$ being an empty set, and the spin-dependent scattering probability (SSP) $J_s$, with $\nu$ being solely the electron spin. The JDOS is particularly important in studying a WSM that lacks any symmetry --- this is the most generic WSM, although it is still to be found experimentally; the SSP includes suppressions due to the symmetries of the eigenstates and is hence important for WSMs that respect time-reversal symmetry --- the case for all confirmed WSMs. The JDOS ignores all matrix-element effects inherent in the FTLDOS and takes into account all energetically allowed scattering wavevectors on equal footing, whereas the SSP includes only the scattering suppression that comes from the spin content of the wavefunction. Approximating the FTLDOS with the JDOS / SSP amounts to replacing the impurity landscape with a single scattering center, which can be easily treated within band theory. Even though the rationale behind evaluating the JDOS / SSP instead of the full FTLDOS is clear, it is not always straightforward to rigorously connect one to the other~\cite{Derry2015}. For this reason, we have verified that our key findings based on JDOS / SSP calculations are qualitatively the same in the full FTLDOS of our tight-binding models~\cite{suppl}.
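As a practical aside, once $A_{\nu}(\bm{k},E)$ is available on a regular grid covering the surface Brillouin zone, the autocorrelation in Eq.~\eqref{eq:J} can be evaluated efficiently with fast Fourier transforms. A minimal sketch for the spinless case $J_0$ (Python/NumPy, assuming periodic boundary conditions on the grid) reads:
\begin{verbatim}
import numpy as np

def jdos(A):
    """J_0(q) = sum_k A(k+q) A(k) for a real spectral function A sampled
    on a periodic (Nx, Ny) grid covering the surface Brillouin zone."""
    F = np.fft.fft2(A)
    J = np.fft.ifft2(F * np.conj(F)).real   # Wiener-Khinchin: autocorrelation via FFT
    return np.fft.fftshift(J)               # place q = 0 at the centre of the array
\end{verbatim}
The SSP can be obtained analogously by summing the corresponding FFT-based cross-correlations of the components of the spin-resolved reduced density matrix.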
\subsection{Phenomenology} Let us now consider the JDOS patterns most broadly associated with Fermi arcs. First, we illustrate the key points phenomenologically, by assuming that the Fermi arcs have a constant curvature and a constant spectral density. The Fermi level is supposed to cross the bulk band structure only at the nodal points, so that only boundary modes are visible in the surface FS. The spectral function of an individual arc at a fixed energy can be parametrized as \begin{equation} A(\bm{k};\bm{k}_1,r_1,\gamma_1,\varphi_1) = \int\displaylimits_{\varphi_1}^{\varphi_1+\gamma_1} \mathrm{d}\varphi \, \delta( \bm{k} - \bm{k}_1 - r_1 (\cos\varphi,\sin\varphi) ) \,,\label{eq:arc} \end{equation} where $\bm{k}_1$ is the offset of the circle center from the origin, $r_1$ the circle radius and $\gamma_1$ the angle subtended by the arc. The endpoints of the arc are located at $r_1 (\cos\varphi_1,\sin\varphi_1)$ and $r_1 (\cos(\varphi_1+\gamma_1),\sin(\varphi_1+\gamma_1))$. The JDOS generated solely by this single arc is independent of $\bm{k}_1$, while $r_1$ and $\varphi_1$ change only its size and orientation, respectively. The only parameter that affects the shape of the arc is $\gamma_1$. This is shown in Fig.~\ref{fig:toy}(a-d) for three idealized, perfectly circular, cases. Figs.~\ref{fig:toy}(e,f) illustrate the autocorrelation of a FS that includes a second arc. Apart from the feature that arises from the autocorrelations of the two arcs, which is exactly like that of Fig.~\ref{fig:toy}(b), there are now cross-correlation patterns at finite momenta, corresponding to scattering between arcs. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{toy} \caption{(a-d) Single Fermi arc (a) parametrized by Eq.~\eqref{eq:arc} with $r_1=0.45$, $\bm{k}_1 = 0$, $\gamma_1 = - 2 \varphi_1$ and the shape of its corresponding JDOS from Eq.~\eqref{eq:J} for (b) $\varphi_1 = - \pi/4$, (c) $\varphi_1 = - \pi/2$ and (d) $\varphi_1 = - 3\pi/4$. (e) Two Fermi arcs with $\gamma_1 = \gamma_2 = 2\pi/3$ and (f) the shape of the corresponding JDOS. In (a-d), dashed lines encircle the pinch points; dotted lines are described in the text.} \label{fig:toy} \end{figure} The most distinctive feature is the presence of a pinch point at $\bm{q}=0$ for arcs with $\gamma_1 \leq \pi$. This is a unique characteristic of an open contour in the surface BZ and can be interpreted as follows: a pinch point exists as long as scattering within a FS contour vanishes at all wavevectors along a specific direction. When such a pinch point exists in the QPI pattern, then the contour that generates it must be open. Consider a translation of the spectral function of an arc defined as ${\cal T}_{\epsilon \bm{v}} A(\bm{k}) = A(\bm{k}+\epsilon \bm{v})$, with $\bm{v}$ a unitary vector defining a direction in $\bm{k}$-space and $\epsilon\in \mathbb{R}$. A pinch point exists if there is a $\bm{v}$ such that $A(\bm{k}) {\cal T}_{\epsilon \bm{v}} A(\bm{k}) = 0$ for any $\epsilon\not=0$, so that, from Eq.~\eqref{eq:J}, $J_0(\bm{q}=\epsilon\bm{v})=0$. The directions $\bm{v}$ for which this property holds are revealed by the orientation of the resulting pattern in $J_0$. This is illustrated by the examples in Figs.~\ref{fig:toy}(a-c): a translation of the arc with $\gamma_1 = \pi/2$ [shown in black in Fig.~\ref{fig:toy}(a)] along either of the two dotted lines in Fig.~\ref{fig:toy}(a) leads to $A(\bm{k}) {\cal T}_{\epsilon \bm{v}} A(\bm{k}) = 0$. 
Translated to the origin, the same lines cross the autocorrelation pattern Fig.~\ref{fig:toy}(b) only at the pinch point. For $\gamma_1 = \pi$, the above holds only for $\bm{v} = \hat{\bm{x}}$, the unitary vector in the $x$ direction. For $\gamma_1 > \pi$, this property does not hold: $A(\bm{k}) {\cal T}_{\epsilon \bm{v}} A(\bm{k}) \not= 0$ for small $\epsilon$ along any $\bm{v}$. Nonetheless, a pinch point can still be found in the autocorrelation of an arc with $\gamma_1 > \pi$: one can simply split it into two arcs, the first one with $\gamma_1'=\pi$ and a second one with the residual angle $\gamma_1-\gamma_1'$. The autocorrelation of the first part generates the pattern in Fig.~\ref{fig:toy}(c), while the autocorrelation of the residue is similar to Fig.~\ref{fig:toy}(b) with a pinch point at $\bm{q}=0$. The pinch point in this case, however, is on top of the pattern stemming from $\gamma_1'$ and the cross-correlation between the two parts [see Fig.~\ref{fig:toy}(d)]. Even though for the purpose of illustration we employed circular arcs, the translation condition for the presence of a pinch point in $J_\nu$ is general and can be used regardless of the arc shape. We shall recover this feature in both tight-binding and DFT calculations below. We remark that, even though the $\bm{q}\simeq0$ region may be difficult to resolve in QPI experiments, identification of the figure-eight pattern at larger $\bm{q}$, like the ones in Figs.~\ref{fig:toy}(b,c,f), indicates a pinch point at $\bm{q}=0$. \subsection{Tight-binding formulation} The simplest tight-binding formalism for WSMs is given by the Hamiltonian \begin{equation} {\cal H} = \sum_{\bm{k}} \psi_{\bm{k}}^\dagger H({\bm{k}}) \psi_{\bm{k}} \,, \end{equation} where $\psi_{\bm{k}} = (c_{\bm{k},A,\uparrow} \ c_{\bm{k},A,\downarrow} \ c_{\bm{k},B,\uparrow} \ c_{\bm{k},B,\downarrow} )^{\mathsf{T}}$ is a fermionic spinor containing electronic annihilation operators $c_{\bm{k},s,\sigma}$, with $s=A,B$ orbital/sublattice and $\sigma=\uparrow,\downarrow$ spin indices respectively, and $\psi_{\bm{k}}^\dagger$ its hermitian conjugate. Let us first ignore the spin degree of freedom. In this case, we can write a minimal (two-component) tight-binding model describing a WSM with only two Weyl nodes as \begin{subequations} \begin{equation} H_{2\times2}(\bm{k}) = \bm{g}(\bm{k}) \cdot \boldsymbol\tau + g_{0}(\bm{k}) \tau_0 \,, \end{equation} where $\boldsymbol\tau$ is the vector of Pauli matrices and $\tau_0$ the $2\times2$ unity matrix in orbital/sublattice space, $\bm{g} = ( g_{1}, g_{2}, g_{3} )$ and \begin{align} g_{0}(\bm{k}) =&{\ } 2d (2 - \cos k_x - \cos k_y) \,,\\ g_{1}(\bm{k}) =&{\ } a \sin k_x \,,\\ g_{2}(\bm{k}) =&{\ } a \sin k_y \,,\\ g_{3}(\bm{k}) =&{\ } m + t \cos k_z + 2b ( 2 - \cos k_x - \cos k_y ) \,, \end{align}\label{eq:tbm1}% \end{subequations}% with $a, b, d, m, t$ real parameters $(a,t\ne 0)$. With $b=d=0$ and $|m| < |t|$, the energy spectrum has 8 Weyl nodes at points given by $k_{x/y}=0,\pi$ and $k_z = \pm\arccos\frac{m}{t}$. A finite $b$ can gap the nodes with $k_{x/y}=\pi$, so that for $|m+4b|>|t|$ there are exactly two Weyl nodes at $(0,0,\pm\arccos\frac{m}{t})$. If one introduces a boundary, a Fermi arc connects the projections of the nodal points on the boundary FS and $d$ controls the curvature of the arc. 
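As a quick numerical sanity check, the two-band Bloch Hamiltonian of Eqs.~\eqref{eq:tbm1} can be set up and diagonalized in a few lines of Python (a sketch, with the illustrative parameter choice $m=0$, $a=b=t=1$, $d=0$, for which the two Weyl nodes sit at $k_z=\pm\pi/2$); it simply evaluates the direct bulk gap at a nodal point and at a generic point:
\begin{verbatim}
import numpy as np

TAU = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def hamiltonian(kx, ky, kz, a=1.0, b=1.0, d=0.0, m=0.0, t=1.0):
    """H(k) = g0(k) tau_0 + g(k) . tau for the two-band model."""
    g0 = 2 * d * (2 - np.cos(kx) - np.cos(ky))
    g = [a * np.sin(kx),
         a * np.sin(ky),
         m + t * np.cos(kz) + 2 * b * (2 - np.cos(kx) - np.cos(ky))]
    return g0 * np.eye(2) + sum(gi * ti for gi, ti in zip(g, TAU))

def direct_gap(kx, ky, kz):
    e = np.linalg.eigvalsh(hamiltonian(kx, ky, kz))
    return e[1] - e[0]

print(direct_gap(0.0, 0.0, np.pi / 2))   # ~0: Weyl node
print(direct_gap(0.0, 0.0, 0.0))         # finite gap away from the node
\end{verbatim}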
To investigate inter-arc scattering that is subject to time-reversal symmetry, we use the four-spinor $\psi_{\bm{k}}$ and construct the following Hamiltonian
\begin{align}\label{eq:tbm2} H_{4\times4}(\bm{k}) =&{\ }g_{1}(\bm{k}) \tau_1 \sigma_3 + g_{2}(\bm{k}) \tau_2 \sigma_0 + g_{3}(\bm{k}) \tau_3 \sigma_0 \nonumber\\ &{\ } + g_{0}(\bm{k}) \tau_0 \sigma_0 + \beta \tau_2 \sigma_2 + \alpha \sin k_y \tau_1 \sigma_2 \,, \end{align}
where $\alpha, \beta$ are real parameters, $\sigma_0$ and $\sigma_1, \sigma_2, \sigma_3$ are the $2\times2$ identity and Pauli matrices spanning the spin degree of freedom, and a tensor product between $\tau$ and $\sigma$ matrices is assumed. This model produces four Weyl nodes and two Fermi arcs per surface in a finite parameter regime.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{tbm} \caption{(a,c) Fermi surfaces ($E=0$) projected to the (010) surface; (b) JDOS for (a); (d) SSP for (c). The model for (a) and (b) is Eqs.~\eqref{eq:tbm1} with $a=b=t=1$, $m=0.5$, $d=0.8$; the model for (c) and (d) is Eq.~\eqref{eq:tbm2} with $a=b=1$, $t=1.5$, $d=m=0$, $\beta=0.9$ and $\alpha=0.3$. The JDOS of (c), not shown here, is similar to (d) but shows significantly stronger inter-arc scattering intensity.} \label{fig:tbm1} \end{figure}
Our results for $J_0$ and $J_s$, for one and two Fermi arcs yielded by Eqs.~\eqref{eq:tbm1} and Eq.~\eqref{eq:tbm2} respectively, are shown in Fig.~\ref{fig:tbm1}~\cite{suppl}. The characteristic ``figure-eight'' encountered in the previous section is evident here as well, but its intensity is modulated in accordance with the Fermi-arc DOS, which causes a fading of the pattern at larger $\bm{q}$. In the case of $H_{4\times4}(\bm{k})$, the suppression due to the spin texture of the two Fermi arcs has been taken into account. As can be seen in the resulting QPI pattern, Fig.~\ref{fig:tbm1}(d), there is no qualitative change to the intra-arc scattering intensity, whereas now inter-arc cross-correlation patterns are present [cf. Fig.~\ref{fig:toy}(f)], even though the spin content of the wavefunction causes their partial suppression.
\subsection{Density functional theory}
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{mote2} \caption{(a) Surface FS and (b) SSP for the (001) surface of MoTe${}_2$ at $E=-0.05$~eV; (c) JDOS for the surface DOS at $\bm{k}$-points where the intensity is not lower than 10\% of the maximum, i.e., keeping only the Fermi arcs shown in the inset of (c); (d) SSP for the inset of (c).} \label{fig:mote2} \end{figure}
Finally, we present results for QPI in the experimentally discovered WSMs based on density-functional theory (DFT). First, we focus on MoTe${}_2$, which was recently proposed as a candidate for a type-2 WSM \cite{Sun2015,Wang2015}. The band structure obtained in \emph{ab initio} calculations features four Weyl nodes at points $(\pm 0.1011 ,\pm 0.0503 , 0)$ in units of reciprocal lattice vectors. This renders the plane $k_y=0$ topologically $Z_2$ nontrivial, exhibiting a quantum spin Hall (QSH) effect. As a result of this QSH effect, two Fermi arcs arise per surface. By its definition, a type-2 WSM will have a surface DOS that comprises both Fermi arcs and bulk states projected to the boundary, which is shown in Fig.~\ref{fig:mote2}(a). As depicted in Fig.~\ref{fig:mote2}(b), contributions to the JDOS from both types of features are superimposed.
Nevertheless, because the states of the Fermi arcs are more localized on the surface and have a larger intensity than the bulk states that participate in the surface DOS, we recover a clear signature of the Fermi arcs in the JDOS in the form of an ``X''-shaped scar. To positively identify this signature, in Fig.~\ref{fig:mote2}(c) we show the JDOS obtained if we ``mask out'' all the bulk signal in the surface DOS. The resulting pattern, which matches the ``X''-shaped feature in Fig.~\ref{fig:mote2}(b) perfectly, closely resembles Figs.~\ref{fig:toy}(f) and~\ref{fig:tbm1}(d). Taking spin suppression into account [see Fig.~\ref{fig:mote2}(d)] does not alter this result significantly: both intra- and inter-arc features are present in the QPI pattern, although the inter-arc part is weaker. This observation shows that it is possible to distill the contribution of Fermi arcs in the surface QPI spectrum, especially for large Fermi arcs, even if the latter comprises overlapping patterns stemming from arcs and other FS features.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{taas} \caption{(a) Surface FS and (b) SSP for the (001) surface of TaAs at $E=0.12$~eV; (c-e) autocorrelation of DOS features numbered in (a) --- cf. Figs.~\ref{fig:toy}(b) and~\ref{fig:tbm1}(b); (f) SSP close to $\bm{q}=0$; (g) SSP close to $\bm{q}=0$ minus SSPs centered at $(\pm2\pi,0)$ and $(0,\pm2\pi)$~\cite{suppl}; (h) sum of autocorrelations of features numbered 3 and 4 and their symmetric partners. The intensity of feature 4 is more than two times that of feature 3, so the pattern in (h) is mostly due to the former.} \label{fig:taas} \end{figure}
Next, we investigate the calculated QPI patterns for TaAs. This material has a more complex surface band structure with several Fermi arcs on the (001) surface~\cite{Weng2015,Huang2015c,Sun2015a}. The surface DOS obtained from DFT and the corresponding QPI patterns for the first BZ are presented in Fig.~\ref{fig:taas}. At $E=0.12$~eV, bulk contributions to the surface DOS are almost completely suppressed. The FS comprises 12 Fermi arcs (features 2, 3, 4, 6 and their symmetric copies in Fig.~\ref{fig:taas}) and a smaller number of other, non-topological surface features~\cite{note1}. The bow-tie shaped arcs numbered 2 and 6 extend into the second BZ. With sufficiently high resolution data on a high quality sample, all the contributions of the arcs to the QPI should be observable and comparable to our theory. Here, as the $\gamma_1$ angle of the weaker spoon-like features 3 and 4 is less than $\pi$, we focus on identifying the signatures associated with their intra-arc scattering. We can partially isolate their contributions close to $\bm{q}=0$ using only the SSP, as described in the Supplemental Material~\cite{suppl}. With this procedure, we can resolve the figure-eight pattern and pinch point on top of the bow-tie contributions, as shown in Figs.~\ref{fig:taas}(g,h). However, the small spoon features observed in our calculation may be obscured by long wavelength variations that typically complicate the analysis of STM QPI data at small $\bm{q}$.
\section{Conclusion}
In conclusion, we have identified signatures of Fermi arcs in quasiparticle interference at the surface of WSMs.
We have observed a characteristic figure-eight shape with a pinch point in its middle in both general tight-binding models and realistic DFT calculations, which, in addition to the detailed comparison that can be done for the QPI from the Fermi arcs, is a hallmark of scattering between Fermi arcs. Finally, we have demonstrated that the trademark of a Fermi arc can be distinguished even in cases where the QPI pattern is a superposition of bulk and surface contributions, provided that the Fermi arc has a prominent surface DOS. Our results suggest that an unequivocal observation of Fermi-arc signatures in STS experiments is possible. \textit{Note} --- Recently, an article with results on quasiparticle interference in Weyl semimetals appeared~\cite{Mitchell2016}. While the results of Ref.~\cite{Mitchell2016} on surface projections of nontrivial bulk topology are complementary to our own, the results on QPI of Fermi arcs show heavy suppression of intra-arc scattering. \begin{acknowledgments} This work was supported by NSF CAREER DMR-0952428, ONR-N00014-11-1-0635, MURI-130-6082, the Packard Foundation, and the Keck grant. SK acknowledges financial support by the ICAM branch contributions. JL acknowledges support from the Swiss National Science Foundation. SK is grateful to A.~G.~Grushin for enlightening discussions. \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} \label{sec:introductionRe} One of the many skills involved in learning how to play the clarinet is to control the attack of a new note. Tonguing is an important aspect of a clear and precise attack, but the evolution of the mouth pressure during the first instants of the note is also seen to affect the attack considerably. Moreover, in some particular situations, tonguing may not be involved in starting a new note. Hence there is both scientific and practical interest in the question: what combinations of tonguing and evolution of the blowing pressure produce sharp and precise attacks? As a self-sustained musical instrument, the clarinet can be seen as a dynamic system in which the oscillation is controlled by input parameters from the musician. Two of the most important \cite{McIntyre83:JASA,WilJASA1974,schum1981} are the blowing pressure and the lip force upon the reed. Models predict the range of parameter values that allow for the production of a musical note \cite{OllivActAc2005}. Other useful predictions include the dependence of the amplitude of oscillation on these two parameters, period-doubling bifurcation points \cite{KergoCFA2004,NonLin_Tail_2010}, and the parameter regions where the reed touches the lay of the mouthpiece \cite{KergoActa2000,dalmont:3294}. More complex models exist that can simulate the reed oscillation in the time domain \cite{Keefe1992} or with harmonic balance methods \cite{farner2006contribution}. They provide more accurate predictions, but their complexity makes it hard to grasp the causal relation between the parameters and their consequences on the oscillatory behaviour. In previous studies in which mouth pressure was gradually increased at constant rates, oscillations appeared at a much higher mouth pressure threshold than that predicted assuming a constant mouth pressure. High thresholds were observed in an artificially blown clarinet \cite{Jasa2013BBergeot} and even higher in numerical simulations \cite{ThFabSil}. Analytical reasoning~\cite{BergeotNLD2012} based on dynamic bifurcation theory \cite{Baesens1991,FruchaScaf2003} predicts a delay in the threshold of oscillation for a linearly increasing mouth pressure, but the exact value of mouth pressure at which it occurs is only valid for simulations performed with very high precision. The threshold observed with normal precision simulations can only be explained with a modified theory \cite{BergeotNLD2012b} using stochastic perturbations \cite{Baesens1991}. This article extends previous studies by the present authors by switching the focus from the threshold of oscillation to a complete description of the amplitude of oscillation. A simplified model of a note attack is a linear increase in the mouth pressure (as used in previous articles) that stops at a defined value and remains constant thereafter. The effect of ceasing the pressure increase is studied analytically to develop a full recipe for estimating the envelope of the attack. This recipe is then explored by comparing it to actual simulations of the Raman model. In section \ref{sec:clari-theo}, the model of the clarinet used in this work is briefly presented, as well as some of its known properties. The remainder of this section provides a brief overview of the key concepts that are needed for the present article (most of these concepts are described with more details in two articles by the authors \cite{BergeotNLD2012, BergeotNLD2012b}).
Section \ref{sec:envel-after-dynam} describes the calculation of the envelope of the oscillations relative to the invariant curve, firstly in an ideal case with infinite precision, then with limited precision or noise (section~\ref{sec:traj-syst-affect}). To some extent, these methods were already employed in previous articles \cite{BergeotNLD2012,BergeotNLD2012b} to determine a dynamic threshold of oscillation. Here they are extended to calculate the envelope before this threshold is reached. Section \ref{sec:interr-ramps-contr} presents a method to take into account a discontinuity in the time derivative of the mouth pressure. In section \ref{sec:examples}, the models are applied to particular examples and simulations, analysing the consequences in terms of expected evolution of the sound. A list of the symbols used in this article is provided in Appendix~\ref{sec:notation-table}.
\section{Elements of clarinet theory} \label{sec:clari-theo}
\subsection{The clarinet model} \label{sec:elementary-modelRe}
For an elementary analysis, the clarinet can be described using a version of the lossless Raman model \cite{OllivActAc2004}, originally used for the bowed string. The system is described by two state variables $p$ and $u$, made non-dimensional by dividing them respectively by the minimum pressure that closes the reed in steady-state, and the maximum flow allowed by the reed valve. A non-linear function $u=F(p)$ relates the pressure difference between the mouth and the mouthpiece ($\Delta p = \gamma - p$, where $\gamma$ is the mouth pressure) to the volume flow of air past the reed ($u$). The derivation of this formula is given for instance by Chaigne and Kergomard~\cite{Cha08Belin}.
\begin{subnumcases}{\label{nonlin_carac_2eq_ad}F(p)=} \zeta \left(\Delta p - 1 \right)\sqrt{-\Delta p} \hspace{0.5cm} \text{if} \ \Delta p <0; \\ \zeta \left(1-\Delta p \right)\sqrt{\Delta p} \hspace{0.35cm} \text{if} \ \Delta p \in [0,1]; \\ 0 \hspace{3.1cm} \text{if} \ \Delta p> 1.\label{carNL_beatreed} \end{subnumcases}
The control parameters of the system are the mouth pressure $\gamma$ and the embouchure parameter $\zeta=\frac{\rho c}{S_{res}} S\sqrt{\frac{2P_M}{\rho}}\frac{1}{P_M}$. $\zeta$ is related to the lip force via the opening area of the reed at rest $S$ and is proportional to the characteristic impedance at the resonator input $\frac{\rho c}{S_{res}}$. Three examples of the function $F$ (Fig.~\ref{rep_FRe}) show that smaller values of $\zeta$ bring the characteristic function closer to that of a stopped pipe ($u=0$). Increasing $\gamma$ shifts the curve along the $p$-axis. The reed-mouthpiece system drives the resonator, to which it is linked by the acoustic variables $p$ and $u$ found in Eq. \eqref{nonlin_carac_2eq_ad}. For a time-domain description it is usually simpler to describe the resonator using two non-dimensional traveling wave variables $x$ and $y$, respectively the outgoing and incoming pressure waves:
\begin{eqnarray} p(t)=x(t)+y(t),\nonumber \\ u(t)=x(t)-y(t). \label{eq:3} \end{eqnarray}
The incoming wave $y(t)$ at the bore input is the opposite of the delayed outgoing wave, $y(t)=-x(t-\tau)$, since no losses in the propagation or reflection are considered\footnote{$x$ and $y$ are usually written as $p^+$ and $p^-$ in the literature; the shorter notation $x$, $y$ is used in this article for conciseness.}. In practice, only one value of $x(t)$ is calculated in each round-trip of the wave, with a duration of $\tau=2l/c$, where $l$ is the resonator length and $c$ the speed of sound.
All the variables can thus be discretized, $x_n$ meaning the value of a variable $x$ at time $n\tau$.
\begin{figure}[t] \centering \subfigure[$F$ function]{\includegraphics[width=0.8\columnwidth,keepaspectratio=true]{FunctF.pdf}\label{rep_FRe}} \subfigure[$G$ function]{\includegraphics[width=0.8\columnwidth,keepaspectratio=true]{FunctG.pdf}\label{rep_GRe}} \caption{Non-linear characteristics in $u=F(p)$ representation (a) and $x=G(y)$ representation (b) for 3 different parameter values} \end{figure}
The behaviour of the whole instrument can then be described by a single iterative equation:
\begin{equation} x_n=G\left(x_{n-1},\gamma\right). \label{DiffEq_G_InflGamm} \end{equation}
Function $G$ can be obtained by replacing $p(t)$ and $u(t)$ in function $F$ with Eq. \eqref{eq:3}. An explicit formulation for $G$ is given by Taillard \emph{et al.} \cite{NonLin_Tail_2010}, for $\zeta<1$. Fig. \ref{rep_GRe} shows that the change from coordinates $(p,u)$ to $(x,y)$ can be performed graphically as a mirror about the axis $p=0$ and a $45^\circ$ rotation about the origin. Like $F$, $G$ also depends on the control parameters $\gamma$ and $\zeta$. To keep the notation simple, the parameters will be omitted when constant. $\gamma$ will be included as an argument to the function when it varies with time. In most works on the clarinet, functions $F$ and $G$ are studied in a static-parameter regime, referring to a case where the instrument is blown at a constant pressure with a constant force applied on the lip. This article focuses on a case where the mouth pressure $\gamma$ varies over time, a situation referred to hereafter as the dynamic-parameter regime, or simply dynamic regime. The curves of Fig. \ref{rep_GRe} thus change over time.
\subsection{Invariant manifolds and non-oscillating solutions} \label{sec:stat-regime-with}
In a static-parameter regime, there is a value of $\gamma=\gamma_{st}$ establishing the transition between non-oscillating and oscillating solutions. This is called the \emph{static oscillation threshold}. Above this value, the clarinet system can oscillate, and will indeed oscillate for most initial values $x_0$. However, for particular sets of initial conditions (in the scope of this paper, sets of $\gamma_0$ and $x_0$), the solution is non-oscillating. These sets correspond to the ``invariant manifolds''. If $\gamma$ does not vary with time, the invariant manifold is called a fixed point, as the variable $x$ will remain constant ($x=x_0$). The fixed point $x^*$ can be found by solving:
\begin{equation} x^{*}=G\left(x^{*}\right). \label{fixed_point_G_def} \end{equation}
$x^*$ is a function of $\gamma$, $x^*=x^*(\gamma)$. When $\gamma$ varies with time, the invariant manifold cannot correspond to a single fixed point, but is also time-dependent, corresponding to an ``invariant curve''. Perhaps surprisingly, it is not the set of values $x^*(\gamma_n)$. The invariant curve is defined as the set of values $(x, \gamma)$ such that, during the planned time-variation of $\gamma$, this set of values is always followed, independently of the particular value in which the system is initiated.
The following equation is a defining condition for this curve: \begin{equation} \phi_\epsilon(\gamma)=G\left(\phi_\epsilon(\gamma-\epsilon),\gamma\right). \label{eqdiff_1ArtRec} \end{equation} A method for calculating the invariant curve for the clarinet system is given in a previous article \cite{BergeotNLD2012}. In appendix \ref{sec:perturb} simpler expressions for the invariant curve are given by using the characteristic curve expressed as $u=F(p)$ instead of function $G$. The invariant curve depends on how the parameter $\gamma$ varies in time, i.~e., it is different for different rates of variation of $\gamma$ (different $\epsilon$ values). \subsection{Local stability of non-oscillating solutions} \label{sec:stab-non-osc} In both static and dynamic cases, the non-oscillating solutions can be either stable or unstable, depending on the behaviour of the system initialized close to the invariant manifold. If initialized with a value $x_0$ close to a stable invariant manifold, the state variable $x$ will approach it exponentially. Conversely, the state variable is repelled exponentially by an unstable manifold while in its vicinity. The distance to a fixed point (in a static-parameter case and while $x_n$ is sufficiently close to the fixed point) is an exponential function of time (expressed as iteration number $n$) \cite{Cha08Belin}: \begin{equation} x_n-x^*\approx (x_0-x^*) \left[G'\left(x^{*}\right)\right]^n. \label{sol_stat} \end{equation} where $G'\left(x^{*}\right)$ is the derivative of the iterative function at the fixed point. When this value exceeds $1$, the fixed point is unstable and the oscillation grows. Due to the non-linear nature of the system, the oscillation cannot grow forever, of course, and it stabilises in a periodic solution. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth,keepaspectratio=true]{EDAEnvCI009ArtRec_all.pdf} \caption{Time evolution of the outgoing pressure $x$, solution of Eq.~\eqref{DiffEq_G_InflGamm} for different values of its initial value $x_0=x^*+w_0$. From left to right: $w_0=0.01$ \mbox{(\textcolor{gray}{\textbf{\color{black!20}-----}})}, $w_0=10^{-5}$ \mbox{(\textcolor{gray}{\textbf{\color{black!45}-----}})} and $w_0=10^{-10}$ \mbox{(\textcolor{gray}{\textbf{\color{black!90}-----}})}. \mbox{(\textcolor{black}{\color{blue}\textbf{- - -}})} Exponential envelope deduced from the function~\eqref{sol_stat}. The following parameters are used: $\gamma=0.42$ (constant) and $\zeta=0.5$.} \label{figEDA:ExTrasCI1ArtRec1} \end{figure} In a static-parameter context, the oscillation would eventually stabilise in an oscillatory regime between values given by the 2-branch part of the static bifurcation diagram (an extensive discussion is given by Taillard \emph{et al.} \cite{NonLin_Tail_2010}). For time-varying parameters, the evolution of the system can be interpreted as a dynamic bifurcation diagram. In this case, it is observed that the system still follows closely the invariant curve $\phi_{\epsilon}$ even after it becomes unstable (see for instance Fig.~\ref{zoom1}). Eventually an oscillation appears at a value of mouth pressure much higher than the static oscillation threshold, so that we speak of a \emph{bifurcation delay}. The new threshold is called the \emph{dynamic oscillation threshold}. Above this threshold, a periodic regime is established whose amplitude is given approximately by the 2-branch part of the static bifurcation diagram. 
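For illustration, the bifurcation delay can be reproduced by iterating Eq.~\eqref{DiffEq_G_InflGamm} directly: given $x_{n-1}$, the incoming wave is $y_n=-x_{n-1}$ and $x_n$ is obtained by solving $x_n-y_n=F(x_n+y_n)$. The following Python sketch uses a standard root finder instead of the explicit form of $G$ (for $\zeta<1$ the equation has a unique solution, so a bracketing root finder is sufficient):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def F(p, gamma, zeta=0.5):
    """Nonlinear characteristic of the reed valve: flow u versus pressure p."""
    dp = gamma - p
    if dp < 0:
        return zeta * (dp - 1) * np.sqrt(-dp)
    if dp <= 1:
        return zeta * (1 - dp) * np.sqrt(dp)
    return 0.0                                 # beating reed: valve closed

def step(x_prev, gamma, zeta=0.5):
    """One round trip: y = -x_prev, then solve x - y = F(x + y, gamma) for x."""
    y = -x_prev
    return brentq(lambda x: x - y - F(x + y, gamma, zeta), -5.0, 5.0)

eps, gamma0, n_steps = 1e-3, 0.0, 900          # linearly increasing mouth pressure
x, trace = 0.0, []
for n in range(n_steps):
    gamma = gamma0 + eps * n
    x = step(x, gamma)
    trace.append((gamma, x))
# with ordinary double precision the oscillation detaches from the invariant
# curve well above the static threshold gamma_st = 1/3 (bifurcation delay)
\end{verbatim}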
The article focuses on providing the necessary elements to calculate the amplitude envelopes in different conditions, including when the time-variation of a parameter abruptly changes rate. \subsection{Similarities and differences between static and dynamic parameter cases} The duration of the transient is mainly characterized by two aspects: \begin{itemize} \item The time constant of the exponential approach or departure from the invariant manifold, which is proportional to $\log\left(G'(x^*)\right)$, as shown by Eq.~\eqref{sol_stat} \item the value of the initial condition of $x$, or how far it is from the invariant curve or the fixed point. \end{itemize} Figure \ref{figEDA:ExTrasCI1ArtRec1} illustrates how, for a similar exponential time constant (and parameters that are constant in time), it is possible to obtain very different transient times by changing the value of the initial conditions. These are important results for understanding the behaviour of the system in a situation where the parameters change. The differences in dynamic parameter contexts are: \begin{itemize} \item If the parameter starts increasing at a value below the static oscillation threshold, the system will first undergo an approach to the invariant curve, and only beyond this value will it start the departure phase. In fact the approach can be so dramatic that a visible oscillation is only observed far beyond the static threshold. \item The exponential time-constant varies throughout the growth of the parameter, but it is not simply given by $\log\left(G'(x^*(t))\right)$ at each time $t$. \end{itemize} In realistic experimental situations, however, stochastic fluctuations prevent the system from coming too close to the invariant curve in the approach phase, and this can reduce the bifurcation delay. \section{Envelopes for dynamic-parameter regimes} \label{sec:envel-after-dynam} This section provides a method to describe the oscillation amplitude in the particular case of a clarinet model system in which the blowing pressure parameter increases with time at a small constant nondimensional rate $\epsilon \ll 1$: \begin{subnumcases}{\label{dynsys_ppAR}} x_n=G\left(x_{n-1},\gamma_n\right)\label{dynsys_ppAR_a}\\ \gamma_{n}=\epsilon n+\gamma_0. \end{subnumcases} \subsection{Unlimited precision (noiseless)} \label{sec:ideal-case-} First, the case with an arbitrarily high precision is analysed. $x_n$ is the state variable of the system described in section~\ref{sec:elementary-modelRe}. With the knowledge of $x_n$ and its previous value $x_{n-1}$ all remaining variables of the system can be calculated. In \cite{BergeotNLD2012} it is shown that during a significant part of a slow transient, $x_n$ is close to the invariant curve $\phi_{\epsilon}(\gamma)$ described above. As seen in the previous section, for a constant parameter, the envelope is well described by an exponential envelope (Eq.~\eqref{sol_stat}), as long as the state variable $x$ remains sufficiently close to the fixed point $x^*$ so that function $G$ is well approximated by its tangent line. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth,keepaspectratio=true]{ArRecExSec4_v2.pdf} \caption{(black points) Numerical simulation of the system~(\ref{dynsys_ppAR}). (dashed black line) Invariant curve $\phi_{\epsilon}(\gamma)$. (blue line) Curve of fixed points $x^*(\gamma)$. 
$\zeta=0.5$, $\epsilon=10^{-3}$ and $\gamma_0=0$.} \label{zoom1} \end{figure}
Fig.~\ref{zoom1} suggests that when the parameter $\gamma$ varies over time, $x_n$ follows the invariant curve more closely than the curve of fixed points ($x^*(\gamma)$). Instead of following the distance to the fixed point as in Eq.~\eqref{sol_stat}, a new variable $w_n$ is therefore defined:
\begin{equation} w_n = x_n - \phi_\epsilon(\gamma_n). \label{eq:6} \end{equation}
Note that, when the parameter is constant, the definition \eqref{eq:6} reverts to $x-x^*$ of Eq.~\eqref{sol_stat}, as can be verified by substituting $\epsilon=0$ in the perturbation approximation to $\phi_\epsilon$ (see Appendix~\ref{sec:perturb}, Eq.~\eqref{invcurve_series}). For small amplitudes $w_n$, the function $G$ in Eq.~\eqref{dynsys_ppAR_a} can be expanded as a first-order Taylor series around the invariant curve. The advantage of switching to this description is that future values of the oscillation amplitude $|w_n|$ can be approximated using a simple function $w(\gamma)$ relating to an initial iteration $w_0$:
\begin{multline}
|w_n|=w(\gamma_n)\approx \\
|w_0|\exp\left(\frac{1}{\epsilon}\underbrace{\int_{\gamma_0+\epsilon}^{\gamma_n+\epsilon}\ln\left| G'\left(\phi_\epsilon(\gamma'-\epsilon),\gamma'\right)\right|d\gamma'}_{I(\gamma_n+\epsilon)-I(\gamma_0+\epsilon)}\right)
\label{exact_solution_w3Rec}
\end{multline}
Eq.~\eqref{exact_solution_w3Rec} is the equivalent of Eq.~\eqref{sol_stat} for variable parameters (see \cite{BergeotNLD2012} for details). Function $I$ is defined by:
\begin{equation} I(\gamma) = \int_{\gamma_{st}}^{\gamma}\ln\left| G'\left(\phi_\epsilon(\gamma'-\epsilon),\gamma'\right)\right| d\gamma'. \label{in} \end{equation}
In the applications shown in this article, $I$ is always used as a definite integral. As a consequence, the integration constant, or equivalently one of the bounds of the integral $I$, can be defined arbitrarily. $\gamma_{st}$ is used in this article as a reference point close to the minimum amplitude, although for $\epsilon\ne 0$ the minimum is attained at a slightly lower pressure. The discrete equivalent of Eq. \eqref{exact_solution_w3Rec} is:
\begin{eqnarray} |w_n|&=& |w_0|\exp\left(\sum_{i=1}^{n}\ln\left| \partial_xG\left(\phi(\gamma_i-\epsilon),\gamma_i\right)\right|\right),\nonumber\\ &=& |w_0|\prod_{i=1}^{n} \left|\partial_xG\left(\phi(\gamma_i-\epsilon),\gamma_i\right)\right|. \label{prodint_w3Rec} \end{eqnarray}
The ``product form''~\eqref{prodint_w3Rec} highlights that when $G'$ is smaller than $1$ in modulus, which happens before the static threshold $\gamma_{st}$ is reached, $x_n$ approaches the invariant curve. Beyond this threshold, $x_n$ moves away from the invariant curve, but initially at a very slow pace, because the logarithm remains close to $0$.
Although $I(\gamma)$ is not easy to calculate analytically, for small values of the increase rate $\epsilon$, the derivative $G'\left(\phi_\epsilon(\gamma'-\epsilon),\gamma'\right)$ can be approximated by its value at the fixed point, $G'\left(x^*(\gamma'),\gamma'\right)$, and the integral $I(\gamma)$ written in the form:
\begin{equation} \tilde{I}(\gamma) = \int_{\gamma_{st}}^{\gamma}\ln\left| G'\left(x^*(\gamma'),\gamma'\right)\right| d\gamma'. \label{approx_in} \end{equation}
The error in $I(\gamma)$ committed in this approximation is observed to be smaller than $\epsilon$ (the difference between $I(\gamma)$ and $\tilde{I}(\gamma)$ in Fig.~\ref{fig:wn05} is much smaller than $\epsilon$). For the clarinet model, the expressions involved in the calculation of the derivative $G'$ are too complicated if function $G$ is used in its explicit form. However, they can be obtained in a simple form (see Appendix \ref{sec:perturb}, Eq.~\eqref{eq:17}) from the definition of $F$ in coordinates $(p,u)$, providing simpler expressions for a numerical calculation of the integral. In the rest of this paper we use the approximate form \eqref{approx_in}.
\begin{figure}[ht!] \centering \includegraphics[width=0.9\columnwidth,keepaspectratio=true]{BaseCurveBB.pdf} \caption{Integral $I(\gamma)$ calculated using Eq.~\eqref{in} (dashed line) and approximately using Eq.~\eqref{approx_in} (solid line). $\zeta=1/2$, $\epsilon=1/20$.} \label{fig:wn05} \end{figure}
The predicted amplitude $\widetilde{w}(\gamma)$ is calculated as a distance to the invariant curve:
\begin{equation} \widetilde{w}(\gamma)=|w_0|\exp\left(\frac{\tilde{I}(\gamma+\epsilon)-\tilde{I}(\gamma_0+\epsilon)}{\epsilon}\right). \label{eq:Wfunction} \end{equation}
At iteration $n$, $|w_n| \approx \widetilde{w}(\gamma_n)$. The graph in Fig.~\ref{fig:wn05} can be used to predict the qualitative behavior of the system: starting at a value $\gamma_0$, the distance to the invariant curve is a monotonic function of $\tilde{I}(\gamma+\epsilon)$. Whenever $\tilde{I}(\gamma+\epsilon)<\tilde{I}(\gamma_0+\epsilon)$, the amplitude is smaller than the starting value. Conversely, when $\tilde{I}(\gamma+\epsilon)>\tilde{I}(\gamma_0+\epsilon)$ the amplitude is higher. $\tilde{I}(\gamma+\epsilon)=\tilde{I}(\gamma_0+\epsilon)$ corresponds to the \emph{dynamic oscillation threshold}, as defined in \cite{BergeotNLD2012}. The curve described by Eq.~\eqref{eq:Wfunction} is often a good approximation of the envelope for most of the range of the growth parameter, except for large values of $w_n$, which typically arise in two situations:
\begin{itemize}
\item At the beginning of the transient, where the iterate $x_0$ can be far from the invariant curve, depending on the initial conditions. Note that the invariant curve usually diverges for small values of $\gamma$, so that even for reasonable values of $x_0$, the amplitude $w_0$ can be very large. The region where this curve diverges depends on $\zeta$, but is usually well below the static threshold $\gamma_{st}$ (see Appendix \ref{sec:pert-terms}).
\item At the end of the transient, where $x_n$ finally escapes from the invariant curve.
\end{itemize}
In practice these two situations can be avoided by carefully choosing the time interval of interest. For example, a few initial iterations may be calculated exactly using the recursive relation $x_n=G(x_{n-1})$ until they become sufficiently close to the invariant curve.
At the end of the transient the envelope would not be valid for other reasons, in particular because the linear approximation in Eq.~\eqref{eq:Wfunction} is not valid (otherwise the envelope would grow indefinitely). The prediction $\widetilde{w_n}$ is valid until a few (3 or 4) iterations before the envelope starts stabilising in the oscillating branch of the bifurcation diagram. \subsection{Remarks on very low amplitudes} \label{sec:infl-prec-or} The curve $\widetilde{w}(\gamma)$ in Eq. \eqref{eq:Wfunction} often reaches very small values if $\epsilon$ is sufficiently small. As a quick example of application, consider a simulation started at a value of $\gamma$ close to $0$. For this case, Fig. \ref{fig:wn05} shows that the value of the amplitude at $\gamma_{st}$ is $\widetilde{w}(\gamma_{st})\simeq |w_0|\exp\left(-\frac{0.3}{\epsilon}\right)$. In this simulation, $0.3$ is the difference between the minimum of $I$ (at $\gamma\simeq \gamma_{st} = 1/3$) and the starting value of $I$. For $\epsilon=1/100$, this means that the minimum amplitude will be $\exp(-30)\simeq 10^{-13}$. Reducing the increase rate by a factor of ten ($\epsilon=1/1000$) brings the minimum amplitude down to the surprisingly low value of $\exp(-300)\simeq 5\times10^{-131}$. In general, the minimum amplitude reached by the system can be roughly calculated with: \begin{equation} \label{eq:4} w_{\textrm{min}} = |w_0| \exp\left(\frac{\tilde{I}(\gamma_{st}) - \tilde{I}(\gamma_0)}{\epsilon}\right) \end{equation} A few remarks are suggested by these extremely low values. First, such values cannot be computed using ordinary machine precision. In this article, the calculations are performed with a \emph{Python} library (\emph{MPMath}) that simulates arbitrary precision on an ordinary machine. Fig. \ref{dyn_bif_digramArtRecNo} shows how three different values of the precision produce very different envelopes. For certain values of $\gamma$ the errors are many orders of magnitude higher than the precision of the calculations. Beyond a certain value of the precision, the envelope is not greatly affected, only producing ``microscopic'' errors, which are of the same magnitude as the precision. In practice, the precision $a$ used to simulate the system should be finer than the minimum amplitude $w_\textrm{min}$ reached by the system (i.e.\ $a<w_\textrm{min}$). This ensures that the difference between the simulation and the exact system never exceeds $a$; otherwise, larger differences are expected because of the shift in the dynamic threshold. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth,keepaspectratio=true]{DiagBifDynArtRec3.pdf} \caption{Static and dynamic bifurcation for $\zeta=0.5$. Dynamic diagram is obtained with $\epsilon=10^{-3}$ and $\gamma_0=0$, from numerical simulations performed with three different numerical precisions: $a=10^{-12}$, $a=10^{-45}$ and $a=10^{-200}$.} \label{dyn_bif_digramArtRecNo} \end{figure} Second, even if the simulations are performed using sufficient precision, the amplitudes $w$ can only be seen relative to an accurately calculated invariant curve $\phi$. The estimation of $\phi_\epsilon$ requires a precision $a_\text{IC}<w_\textrm{min}$ so that it can be used as an accurate reference for determining the amplitudes $w$. In this paper, the invariant curve is calculated approximately using a perturbation series (see Appendix \ref{sec:perturb}), whose precision depends on the number of perturbation terms.
Assuming that the perturbation terms $\phi_i(\gamma)$ all have the same magnitude (which as shown in Fig. \ref{fig:icterms} is true for $\gamma>1/10$), the biggest influence on precision comes from the powers of $\epsilon$ that multiply each term in Eq.~\eqref{invcurve_series}. Using this simple reasoning, a number of terms $n$ is required for an invariant curve with precision $a_\text{IC}$: \begin{equation} \epsilon^n\approx a_\text{IC} \hspace{0.3cm} \Longleftrightarrow \hspace{0.3cm} n \approx \frac{\log_{10}(a_\text{IC})}{\log_{10}(\epsilon)}. \label{eq:CondNumPeTer} \end{equation} Returning to the previous example, for $\epsilon=1/100$, $n=7$ perturbation terms are required to correctly observe the envelope $w$ at very low amplitudes, whereas for $\epsilon=1/1000$ the number of terms is $n=65$. However, even though the invariant curve requires a lengthy calculation in order to serve as a reference for the observation of $w$, a direct estimation $\tilde{w}$ can be obtained with a much cruder approximation of the invariant curve, as shown below. Note that the previous argument is typically valid for high values of $\gamma$. For low values, some of the perturbation terms can reach values higher than 1, especially for high values of $\zeta$. The argument seems valid in general above the static threshold (see appendix~\ref{sec:pert-terms} and Fig.~\ref{fig:icterms}). In real systems, the problem of precision does not apply. However, experimental systems are very often affected by noise from different sources. The major source of noise in the clarinet is turbulence, which cannot be avoided even with a very precise control of the pressure. Noisy situations, as well as finite precision situations, can be analysed by introducing a stochastic variable into the iterative system (Eq.~\eqref{dynsys_ppAR}). \subsection{Trajectory of the system affected by noise} \label{sec:traj-syst-affect} If numerical simulations are run with a precision coarser than the $w_\textrm{min}$ calculated through Eq.~\eqref{eq:4}, the previous formul{\ae} must be extended. The limited precision (i.e.\ coarser than $w_\textrm{min}$) used in simulations is modelled as a stochastic variable (with a standard deviation of $\sigma=a$) in the system. This case is studied in~\cite{BergeotNLD2012b}. A ``squared average'' trajectory $<w_n^2>$ is described by: \begin{multline} <w_n^2> \ \approx \underbrace{\widetilde{w}(\gamma_n)^2}_{A(\gamma_n)}+ \underbrace{\frac{\sigma^2}{\epsilon} \int_{\gamma_0+\epsilon}^{\gamma_n+\epsilon} \left(\frac{\widetilde{w}(\gamma_n)}{\widetilde{w}(\gamma')}\right)^2d \gamma'}_{B(\gamma_n)}. \label{eq:2} \end{multline} The two terms of the right-hand side of Eq.~\eqref{eq:2} are functions of the parameter $\gamma$. The term labelled $A(\gamma)$ corresponds to the approximation of the trajectory in the absence of noise, the same as in Eq.~\eqref{exact_solution_w3Rec}. $B(\gamma)$ is the expected value of the additional distance to the invariant curve due to the presence of noise. In practice, when the noise level is sufficiently high or the precision low (relative to the estimation of Eq.~\eqref{eq:CondNumPeTer}), only the term $B(\gamma)$ is relevant, i.e. the trajectory of the system is described by: $ \sqrt{<w_n^2>} \ \approx \sqrt{B(\gamma_n)} $ with \begin{equation} \label{eq:bn} B(\gamma) = \frac{\sigma^2}{\epsilon}\int_{\gamma_0+\epsilon}^{\gamma+\epsilon}\left(\frac{w(\gamma)}{w(\gamma')}\right)^2 d\gamma'.
\end{equation} For $\epsilon$ sufficiently small, since $\tilde{I}>0$ by definition and considering Eq.~\eqref{eq:Wfunction}, the ratio $\widetilde{w}(\gamma)/\widetilde{w}(\gamma')$ in the integrand is very large for $\gamma'$ close to $\gamma_{st}$ (keeping in mind that $\widetilde{w}$ depends exponentially on $\tilde{I}/\epsilon$, with $\epsilon$ small in this article, and that the minima of $\tilde{I}$ and $\widetilde{w}$ are close to $\gamma_{st}$), so that the integral is dominated by the neighbourhood of $\gamma_{st}$ and the contribution of all remaining values of $\gamma'$ is negligible. This allows a simplification of the expression for $B(\gamma)$ as described below. According to the shape of $\tilde{I}(\gamma)$ (see Fig.~\ref{fig:wn05}), a second-order Taylor expansion of $\tilde{I}(\gamma)$ around the static oscillation threshold $\gamma_{st}$ is used to simplify its expression (for details, see Appendix \ref{secan:Itilde}): \begin{equation} \label{eq:wngst} \tilde{I}(\gamma) \approx 3\sqrt{3}\frac{\zeta}{2} (\gamma-\gamma_{st})^2. \end{equation} Using approximation~\eqref{eq:wngst}, the expression of $B(\gamma)$ can be simplified to: \begin{equation} \label{eq:bngstwn1} B(\gamma) = \sigma^2\sqrt{\frac{\pi}{3\sqrt{3}\zeta\epsilon}}\exp\left(2\frac{\tilde{I}(\gamma+\epsilon)}{\epsilon}\right). \end{equation} Details of the calculations of the simplified expression \eqref{eq:bngstwn1} are given in Appendix~\ref{sec:detailBn}. This amplitude $B(\gamma)$ does not depend on the starting amplitude $w_0$, and is also independent of the starting value of $\gamma$. It is interesting to notice that according to \eqref{eq:Wfunction}, expression~\eqref{eq:bngstwn1} can also be written: \begin{multline} \label{eq:bngstwn2} B(\gamma)= \sigma^2\sqrt{\frac{\pi}{3\sqrt{3}\zeta\epsilon}}\\ \times \exp\left(2\frac{\tilde{I}(\gamma_0+\epsilon)}{\epsilon}\right) \left(\frac{w(\gamma)}{w_0}\right)^2. \end{multline} In this form, Eq.~\eqref{eq:bngstwn2} shows that, in the presence of noise and far beyond the static threshold, the envelope followed by the system has the same shape as without noise, but with a different amplitude, i.e.\ in this case we have: \begin{equation} \label{eq:bngstwn3} \sqrt{<w_n^2>} \ \approx \sqrt{B(\gamma_n)} \approx K \ w(\gamma_n), \end{equation} where $K$ is a constant deduced from Eq.~\eqref{eq:bngstwn2}. As a remark, a different calculation with similar objectives is made in a previous article~\cite{BergeotNLD2012b} to determine the dynamic thresholds in the presence of noise. The approximation \eqref{eq:wngst} was used formally to integrate $\tilde{I}(\gamma_n+\epsilon)$ in Eq.~\eqref{eq:bngstwn1}. The result is an explicit expression for $B(\gamma)$, and therefore for the dynamic oscillation threshold. Here, $\tilde{I}(\gamma_n+\epsilon)$ is numerically integrated, keeping its precise expression given by Eq.~\eqref{approx_in}. This leads to a better estimation of the envelope, but that envelope does not have an analytic expression. \section{Interrupted variation of the mouth pressure parameter} \label{sec:interr-ramps-contr} This section describes the behaviour of the system for an example profile consisting of a limited linear growth of the parameter at a constant rate $\epsilon$ followed by a constant value $\gamma_M$ for an indefinite period of time. The parameter is therefore formally defined as: \begin{subnumcases}{\label{Gamma_prof}\gamma_n=} \epsilon n+\gamma_0 \ \text{if} \ n \le M\\ \gamma_M\ \text{if} \ n > M. \end{subnumcases} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{articleRecette_TIKZ-figure1.pdf} \caption{Algorithm for determination of the envelope.}
\label{fig:algo} \end{figure} Due to the change in increase rate at $n=M$ the growth phase and the static phase are studied independently. An amplitude envelope $\tilde{w}^-(\gamma)$ is computed for the growth phase and another $\tilde{w}^+(\gamma)$ for the static phase. The two envelopes are connected at $n=M$ since the initial value $\tilde{w}^+(\gamma_M)$ is deduced from $\tilde{w}^-(\gamma_M)$. The method is described in the next sections and summarised in Fig.~\ref{fig:algo}. \subsection{Amplitude envelope of the growing phase: $w^-$} As explained in section \ref{sec:envel-after-dynam}, the first few ($N_{\text{lin}}$) iterations must usually be performed explicitly. These correspond to an ``approach phase'' that brings the system close enough to the invariant curve so that the assumption of linearity is valid. At iteration $N_{\text{lin}}$ the state of the system is given by: \begin{eqnarray} \label{eq:10} n &=& N_{\text{lin}} \nonumber \\ x_{N_{\text{lin}}} &=& G^{N_{\text{lin}}}\left(x_0\right) = \underbrace{G \circ G \circ \ldots \circ G}_{N_{\text{lin}} \text{times}}(x_0) \nonumber \\ \gamma_{N_{\text{lin}}} &=& \gamma_0 + N_{\text{lin}} \epsilon\nonumber\\ w^-_{N_{\text{lin}}}&=&x_{N_{\text{lin}}}-\phi_{\epsilon}(\gamma_{N_{\text{lin}}}) \end{eqnarray} The number of iterations required for the approach phase depends on the starting value of $\gamma$ and the increase rate $\epsilon$. In practice the state of the system is simulated iteratively until it reaches an amplitude $w_n<\epsilon$. A complication to this view arises when $\gamma$ goes through a superstable point $\gamma_{ss}$ defined by $G'\left(x^*(\gamma_{ss}),\gamma_{ss}\right)=0$. At this point the iterations can approach the invariant curve arbitrarily closely. Although this situation can be analysed under some simplifying assumptions, this is not done in this article, and the reader is referred to Baesens \cite{Baesens1991} for a detailed description of this case or to Bergeot \cite{ThesisBBergeot} in the context of the clarinet. A simple way of circumventing this problem is to force $N_\textrm{lin}$ to bring $\gamma$ beyond the super-stable point. A few explicit iterations (usually less than 5) allow the calculation of the amplitude $w(\gamma_{N_{\text{lin}}})$. Iteration $n=N_\text{lin}$ is used as a safe starting point for the analytic determination of the envelope: \begin{multline} \widetilde{w}^-(\gamma)=\left|w^-_{N_{\text{lin}}}\right|\\ \times \exp\left(\frac{\tilde{I}(\gamma+\epsilon)-\tilde{I}(\gamma_{N_{\text{lin}}} +\epsilon)}{\epsilon}\right). \label{eq:WfunctionPl} \end{multline} When the simulations are performed with a lower precision than that required for simulating the exact system (see Eq.~\eqref{eq:CondNumPeTer}), the initial value $\gamma_0$ does not affect the growth of oscillations. In this case, an average squared amplitude is given by Eq.~\eqref{eq:bngstwn1} starting from $\gamma_{st}$. Therefore, the envelope is given by: \begin{equation} \label{eq:rec-noise} \widetilde{w}^-(\gamma) = \sigma \left(\frac{\pi}{3\sqrt{3}\zeta\epsilon}\right)^{1/4}\exp\left(\frac{\tilde{I}(\gamma+\epsilon)}{\epsilon}\right). \end{equation} This approximation is valid for $\gamma>\gamma_{st}$, which is the usual region of interest. Below $\gamma_{st}$ the oscillations are mostly random, with an average level that remains close to the standard deviation of the stochastic perturbation $\sigma$.
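The procedure described in this subsection (approach phase followed by the analytic envelope of Eq.~\eqref{eq:WfunctionPl}) can be sketched in a few lines of Python. The functions \texttt{G(x, gamma)}, \texttt{phi(gamma)} and \texttt{Itilde(gamma)} are assumed to be provided by the model (the last one, for instance, by numerical integration of Eq.~\eqref{approx_in}); they are placeholders used only for this illustration, and the sketch ignores the superstable-point complication discussed above.
\begin{verbatim}
import numpy as np

def growth_phase_envelope(x0, gamma0, eps, gammas, G, phi, Itilde):
    """Approach phase (explicit iterations) followed by Eq. (eq:WfunctionPl)."""
    # approach phase: iterate the exact map until |w_n| < eps
    x, n = x0, 0
    while abs(x - phi(gamma0 + n * eps)) >= eps and n < 100000:
        n += 1
        x = G(x, gamma0 + n * eps)
    w_lin = x - phi(gamma0 + n * eps)   # w^-_{N_lin}
    g_lin = gamma0 + n * eps            # gamma_{N_lin}
    # analytic envelope for the requested values of gamma
    return [abs(w_lin) * np.exp((Itilde(g + eps) - Itilde(g_lin + eps)) / eps)
            for g in gammas]
\end{verbatim}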
\subsection{Amplitude envelope of the static phase: $w^+$} \label{secAR:wplus} At $n=M$, $\gamma$ becomes constant and the oscillation undergoes an exponential growth (provided that $\gamma_M> \gamma_{st}$), given by Eq.~\eqref{sol_stat} where the initial value $w_0$ is replaced by the value $w_M^+$ deduced from the previous study of the growing phase: \begin{equation} \label{eq:1} \widetilde{w}_n^+ = \left|w_M^+ \left[G'(x^*(\gamma_M),\gamma_M)\right]^{(n-M)}\right|. \end{equation} The starting amplitude $w_M^+$ for the static phase is given by continuity of $x$: \begin{equation} \label{eq:14} w_M^+ = \widetilde{w}^-(\gamma_M) + \phi_\epsilon(\gamma_M) - x^*(\gamma_M), \end{equation} due to the change in invariant manifold from the invariant curve $\phi_{\epsilon}(\gamma)$ to $x^*(\gamma_M)$. As a remark, when the amplitude $\widetilde{w}^-(\gamma_M)$ is sufficiently small (i.e.\ $\widetilde{w}^-(\gamma_M)\ll|x^*(\gamma_M)-\phi_\epsilon(\gamma_M)|$), the starting amplitude can be given simply by the difference between the invariant curve and the curve of fixed points: \begin{equation} \label{eq:141} w_M^+ = \phi_\epsilon(\gamma_M) - x^*(\gamma_M). \end{equation} In such a situation, the transient time is roughly given by the time until the slope discontinuity in the blowing pressure profile, plus a delay corresponding to the time needed for the oscillations to grow from $w_M^+$ (independently of $w_M^-$) to the final amplitude. Since the starting amplitude and the exponential coefficient ($G'(x^*)$ in Eq.~\eqref{sol_stat}) are independent of the slope of the growth phase, so is the duration of the transient resulting from the interruption in the growth. This matches observations on real instruments blown artificially \cite{Jasa2013BBergeot}. In any case, the oscillation usually starts very close to the fixed point $x^*(\gamma_M)$. This ensures that the linear approximation is valid on a large part of the transient (see Fig.~\ref{figEDA:ExTrasCI1ArtRec1}). \section{Examples} \label{sec:examples} A few examples of simulations are presented in this section, together with predictions based on the previous sections, and their limitations. The ``actual envelopes'' corresponding to the absolute distance between the iterated values and the invariant curve are plotted together with the estimation of the envelopes (Eq.~\eqref{eq:2}). In the examples presented in sections~\ref{sec:small-gamma_m} and \ref{sec:large-gamma_m} the numerical precision is finer than the minimum amplitude reached by the system (Eq.~\eqref{eq:4}). The effect of introducing a stochastic variable in the system, which plays a similar role to performing simulations with low precision~\cite{BergeotNLD2012b}, is shown in the example of section~\ref{sec:EffectNoise}. \subsection{Interruption below dynamic oscillation threshold} \label{sec:small-gamma_m} In Fig.~\ref{fig:sim06}, the increase in mouth pressure $\gamma$ is stopped at a relatively small value of the parameter. As a consequence, the amplitude of the oscillations is considerably smaller when the increase is interrupted. A jump in the relative amplitude is observed when $\gamma=\gamma_M$, in a logarithmic plot (see Fig.~\ref{fig:sim06}(b)). This jump arises because $w$ is the distance to the invariant curve $\phi_\epsilon$ before $\gamma_M$ and to the fixed point $x^*$ after. In this example, 6 iterations ($N_{\text{lin}}$) are used to reach the linear approximation.
Moreover, $\widetilde{w}^-(\gamma_M)\ll|x^*(\gamma_M)-\phi_\epsilon(\gamma_M)|$, so that the starting amplitude for the constant parameter phase ($w_M^+$) is deduced from Eq.~\eqref{eq:141}. The envelope is then computed following the method described above (see Fig.~\ref{fig:algo}). Fig.~\ref{fig:sim06}(b) also shows that the prediction is slightly ahead of the actual envelope. The reason is that $\tilde{I}(\gamma)$ is calculated using a severe approximation $\phi_\epsilon(\gamma-\epsilon)\approx x^*(\gamma)$ (see Eq.~\eqref{approx_in}). For small values of $\epsilon$ the approximation is satisfactory. The advantage of using this approximation is that a single curve $\tilde{I}(\gamma)$ can be used for any small value of the growth rate. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth,keepaspectratio=true]{SimJump.pdf} \caption{Simulation of the system in Eq.~\eqref{eqdiff_1ArtRec} with unlimited precision. The invariant curve (Eq.~\eqref{invcurve_1A}) is calculated with 8 perturbation terms and envelope predictions given by Eq.~\eqref{eq:Wfunction}. $\epsilon=0.01$, $\zeta=1/2$, $\gamma_M=0.6$, $\gamma_0=1/10000$, $x_0=0.5$.} \label{fig:sim06} \end{figure} \subsection{Interruption near the dynamic oscillation threshold} \label{sec:large-gamma_m} In Fig.~\ref{fig:sim09}, $\gamma$ reaches a higher stable value. This results in higher values of the amplitude $w_n$ when the parameter stops increasing. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth,keepaspectratio=true]{SimuNoJump.pdf} \caption{Simulation of the system in Eq.~\eqref{eqdiff_1ArtRec} with unlimited precision. The invariant curve (Eq.~\eqref{invcurve_1A}) is calculated with 8 perturbation terms and envelope predictions given by Eq.~\eqref{eq:Wfunction}. $\epsilon=0.005$, $\zeta=1/2$, $\gamma_M=0.9$, $x_0=0.5$, $\gamma_0=1/10000$.} \label{fig:sim09} \end{figure} The envelopes during the growing phase are estimated using the same method as in the previous example. At iteration $M$, since the system is estimated to have an amplitude that is higher than the difference $|\phi(\gamma)-x^*(\gamma_M)|$, the new amplitude $w^+_M$ is the distance to the invariant curve of the growing phase $\widetilde{w}^-(\gamma_M)$. The remaining envelope is the exponential growth of Eq.~\eqref{eq:1}, as in the example above. The biggest difficulty in estimating the amplitude of the static phase arises when the amplitude $\widetilde{w}^-(\gamma_M)$ is similar to $|\phi(\gamma)-x^*(\gamma_M)|$. In this case, the iterate at $n=M$ can be either very close to the fixed point curve or at twice the distance between the two reference curves, which will imply very different amplitudes for the static phase. Finally, in Fig.~\ref{fig:sim09}(b), the jump in relative amplitude at the beginning of the static phase exists but it is not clearly visible because the amplitude of the oscillation at $\gamma=\gamma_M$ is large compared with the case shown in the previous example (see Fig.~\ref{fig:sim06}(b)). The disagreement between the iterates and the prediction for $10<n<130$ may appear to suggest that the prediction is not good here, whereas in fact it is the ``actual envelope'' that is incorrect. This is due to an inaccurate determination of the invariant curve. In fact, the number of terms needed for the invariant curve (Eq. \eqref{eq:CondNumPeTer}) makes its analytical computation too complicated.
This situation is thus different from the numerical precision problem outlined in Fig.~\ref{dyn_bif_digramArtRecNo}, where the iterates are in some cases very different from those of the ideal system simulated with infinite precision due to the shift in dynamic threshold. The prediction is valid for most of the simulation between $n=4$ and $n=160$, and it matches the envelope whenever the invariant curve is valid (in particular above $n=130$). This shows that the envelope and the dynamic threshold can be fairly well predicted, even with an inaccurate approximation. \subsection{Simulations with noise} \label{sec:EffectNoise} In the example of Fig.~\ref{fig:sim-noise}, the simulation is performed by adding a stochastic variable to $\gamma_n$ with a uniform probability distribution having a standard deviation $\sigma=10^{-4}$. This is roughly equivalent to a simulation without noise but with a numerical precision fixed to 4 significant digits (i.e. $a=10^{-4}$)~\cite{BergeotNLD2012b}. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth,keepaspectratio=true]{SimuNoise_eps005_gm8_g01_x5.pdf} \caption{Simulation of the system in Eq.~\eqref{DiffEq_G_InflGamm} with linearly increasing $\gamma$ and added noise. Invariant curve (Eq.~\eqref{invcurve_1A}) and envelope predictions given by Eq.~\eqref{eq:bngstwn3}. Unlimited precision, $\epsilon=0.01$, $\zeta=1/2$, $x_0=0.5$, $\gamma_0=1/10$, $\sigma=10^{-4}$.} \label{fig:sim-noise} \end{figure} The sequence $B(\gamma_n)$ (Eq. \eqref{eq:bngstwn1}) is calculated based on this value of $\sigma$, and its square root is plotted as the envelope prediction. The prediction is valid for $\gamma>\gamma_{st}=1/3$, where $B(\gamma_n)$ reaches its minimum value. The departure of the oscillations occurs earlier when compared to the case without noise. When $\gamma<\gamma_\text{st}$ the amplitude is roughly that of the noise, because the noise is added at each new iteration to a value smaller than that of the non-linear function applied to the previous iteration. Because of the reduction in the bifurcation delay, the oscillations are seen to depart much earlier than the cessation of the increase of the parameter $\gamma$. After the end of the exponential increase in amplitude, the envelope increases with the parameter, following the two-state oscillation given by the static bifurcation diagram. A discontinuity similar to that observed in Fig.~\ref{fig:sim06} can also arise in noisy conditions. However, because the envelope curve $w(\gamma)$ multiplies a bigger value at $\gamma_\text{st}$, the discontinuity is seen only for smaller values of $\gamma_M$. Due to the random nature of the system, the prediction should not be interpreted as an approximation to the exact envelope, but rather as the envelope followed on average by a series of runs of the simulation. In fact, in this case, for a series of runs with different noise samples, the actual envelope was seen to shift towards the right or the left by about 4 iterations.
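A minimal sketch of such a noisy simulation is given below, again assuming that the iteration function \texttt{G(x, gamma)} of the model is available (the name is a placeholder for this illustration). The half-width of the uniform distribution is $\sqrt{3}\sigma$, so that its standard deviation is $\sigma$, as in the example above.
\begin{verbatim}
import numpy as np

def simulate_with_noise(x0, gamma0, eps, n_steps, sigma, G, seed=0):
    """Iterate x_n = G(x_{n-1}, gamma_n) with uniform noise of std sigma on gamma_n."""
    rng = np.random.default_rng(seed)
    x, xs = x0, []
    for n in range(1, n_steps + 1):
        gamma_n = gamma0 + n * eps + rng.uniform(-np.sqrt(3) * sigma,
                                                 np.sqrt(3) * sigma)
        x = G(x, gamma_n)
        xs.append(x)
    return np.array(xs)
\end{verbatim}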
\section{Discussion} The method can in principle be extended to include frequency-independent losses \cite{dalmont:3294}, although this may be hard to achieve analytically. The invariant curve cannot be calculated directly from the simple expression of $F$, as in Appendix~\ref{sec:perturb}, requiring the use of the much more complicated expressions of $G$ and its derivatives. More complex models of clarinets with frequency-dependent losses are known to give rise to long attack transients with similar envelope shapes \cite{ThFabSil}, and the envelope estimation used in the present article may be similar in models with small dispersion in the reflection function \cite{Almeida2014}. When the mouth pressure grows linearly over time, the logarithm of the amplitude is proportional to a predetermined curve, which we call $I(\gamma)$. The proportionality factor depends on the inverse of the growth rate $\epsilon$, whereas the offset depends on one of these two factors: \begin{itemize} \item the initial amplitude (starting distance to the invariant curve), when the precision is high enough (see Eq.~\eqref{eq:Wfunction}) or \item the stochastic level $\sigma$ when the simulation is imprecise or the system is noisy (Eq.~\eqref{eq:bngstwn3}). \end{itemize} A stop in the linear growth of the mouth pressure may occur while the system is still oscillating with low amplitude. In this case, when the pressure stops increasing, the oscillation resumes exponentially from a higher amplitude, which is given by the distance between the invariant curve and the fixed point at the particular value of the mouth pressure. A discontinuity in the amplitude envelope is observed if the mouth pressure stops increasing while the amplitude is still at a value lower than the distance between the invariant curve and the fixed point. Bifurcation delay has also been observed in a real instrument. So far it has been hard to relate the amplitude envelope to the value of the mouth pressure. In interrupted ramps of the mouth pressure, however, the oscillations seem to be triggered close to the inflection point of the blowing pressure~\cite{Jasa2013BBergeot}. In this case, an exponential amplitude growth then resumes. This is as expected for low values of $\gamma_M$, as shown in the example in section \ref{sec:small-gamma_m}. The values of $\gamma$ at the start of the oscillation depend on the rate of growth of the mouth pressure, an indication that the system is determined by the stochastic fluctuations in the mouth pressure. For a constantly increasing parameter, the \emph{dynamic oscillation threshold} $\gamma_{dt}$~\cite{BergeotNLD2012,BergeotNLD2012b} gives the approximate value of the mouth pressure parameter for which an audible sound appears, or, in other terms, for which the distance $w$ from the invariant curve becomes ``macroscopic''. When the linear growth of the mouth pressure is suddenly stopped at $n=M$ and the pressure is then kept constant at a value $\gamma_M$, two situations must be distinguished: \begin{enumerate}[$\bullet$] \item $\gamma_M < \gamma_{dt}$: a growing exponential envelope starts at $\gamma=\gamma_{M}$ with a fixed starting amplitude, which only depends on the value of $\gamma_M$ (see section~\ref{sec:small-gamma_m}). Audible sound occurs at a fixed time interval from the stop in pressure increase; \item $\gamma_M > \gamma_{dt}$: the audible (``macroscopic'') sound begins at $\gamma=\gamma_{dt}$ (see sections~\ref{sec:large-gamma_m} and \ref{sec:EffectNoise}).
\end{enumerate} In practice the latter situation is the more common one: because of the limited precision or noise, $\gamma_{dt}$ is effectively reduced to values that are much closer to the static threshold. \section{Conclusion} \label{sec:concl} This work shows that the amplitude envelope produced with a regular increase of blowing pressure in a simplified clarinet system can be described reasonably well by the use of a single function $I(\gamma)$ that is a characteristic of the system. This function can be used in exact and ``noisy'' cases to describe the envelope beyond the static threshold $\gamma_{st}$. When the pressure increase is interrupted, the exponential envelope corresponding to the transient of a static-parameter case can be matched with the one corresponding to growing pressures. In many practical cases, when the interruption occurs at sufficiently low values of the blowing pressure, this corresponds to a fixed starting amplitude, so that the transient time measured from the interruption is roughly independent of the previous history of the system. These conclusions show some dramatic effects of the stabilisation of the mouth pressure that are due to the discontinuity in derivative. In summary, a sudden cessation in the increase in mouth pressure can have a large impact on the initial transient of the clarinet if it appears at a low enough value of mouth pressure. A preliminary comparison with smoother stabilisation profiles \cite{Bergeot2014} suggests that smoother profiles give rise to slower transients. However, because of the simple mathematical expressions used for the profiles, they are not easy to compare to the piecewise linear profiles shown in this article. \subsection*{Acknowledgement} This work is part of the research project SDNS-AIMV ``Syst\`{e}mes Dynamiques Non-Stationnaires - Application aux Instruments \`{a} Vent'' (ANR-09-RPDOC-022-01) financed by the \emph{Agence Nationale de la Recherche}. The authors thank Prof. Joe Wolfe for useful suggestions and proofreading.
\section{Introduction} In this paper we resume an old idea of two of us~\cite{soft}: in a multi-Higgs-doublet model furnished with three right-handed neutrino singlets and the seesaw mechanism~\cite{seesaw}, lepton flavour may be conserved in the Yukawa couplings of all the Higgs doublets and violated solely in the Majorana mass terms of the right-handed neutrinos $\nu_{\ell R}$ ($\ell=e,\mu,\tau$), \textit{viz.}\ in \begin{equation} \label{MR} \mathcal{L}_{\nu_R\, \mathrm{mass}} = - \frac{1}{2}\, \sum_{\ell_1, \ell_2} \bar \nu_{\ell_1 R} \left( M_R \right)_{\ell_1 \ell_2} C\, \bar \nu_{\ell_2 R}^T + {\rm H.c.}, \end{equation} where $C$ is the charge-conjugation matrix in Dirac space and $M_R$ is a non-singular symmetric matrix in flavour space. Since $\mathcal{L}_{\nu_R\, \mathrm{mass}}$ has dimension three, the violation of the individual lepton flavour numbers $L_\ell$ and of the total lepton number $L = L_e + L_\mu + L_\tau$ is \emph{soft}. Thus, in our framework $\mathcal{L}_{\nu_R\, \mathrm{mass}}$ is responsible for \begin{enumerate} \item the smallness of the light-neutrino masses, \item lepton mixing, \item violation of $L$, and \item violation of $L_e$, $L_\mu$, and $L_\tau$. \end{enumerate} In this context, lepton flavour-violating processes were explicitly investigated at one-loop order in ref.~\cite{GL2002} and the following property of our framework was discovered. Let $m_R$ denote the seesaw scale---the scale of the square roots of the eigenvalues of $M_R M_R^\ast$---and $n$ denote the number of Higgs doublets; it was found in ref.~\cite{GL2002} that \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi}.} \item the amplitudes of the lepton flavour-violating processes involving gauge bosons, like $\mu^- \to e^- \gamma$ and $Z^0 \to e^- \mu^+$, scale down as $1 \! \left/ m_R^2 \right.$ when $m_R \to \infty$; this holds even when in those processes the gauge bosons $\gamma$ and $Z^0$ are virtual, \textit{i.e.}\ they are off-mass shell; \item the amplitudes of the box diagrams for lepton flavour-violating processes like $\tau^- \to \mu^- \mu^- e^+$ and $\tau^- \to e^- e^- \mu^+$ also scale down as $1 \! \left/ m_R^2 \right.$ for a large seesaw scale; \item however, if $n \ge 2$, the amplitudes for lepton flavour-violating processes $\ell_1^- \to \ell_2^- \left( S^0_b \right)^*$, where $\left( S^0_b \right)^\ast$ is a virtual (off-mass shell) neutral scalar, approach a nonzero limit when $m_R \to \infty$. The non-decoupling of the seesaw scale in $\ell_1^- \to \ell_2^- \left( S^0_b \right)^*$ is an effect of the one-loop diagrams with neutrinos and charged scalars in the loop. \end{enumerate} As a consequence, in our framework the amplitude of the process $\mu^- \to e^- e^+ e^-$, which derives from $\mu^- \to e^- \left( S^0_b \right)^*$ followed by $\left( S^0_b \right)^* \to e^+ e^-$, is \emph{unsuppressed}\/ in the limit $m_R \to \infty$. The same happens to the amplitudes of the four $\tau^-$ decays of the same type. It is important to stress that in our model the amplitude for $\mu^- \to e^- e^+ e^-$ is unsuppressed because of the penguin diagrams for neutral-scalar emission in the $\mu^- \to e^-$ conversion; indeed, the penguin diagrams for either $\gamma$ or $Z^0$ emission vanish in the limit $m_R \to \infty$. 
Thus, our model for lepton-flavour violation differs from, for instance, the scotogenic model discussed in ref.~\cite{3840}, wherein it is precisely the $\gamma$ and $Z^0$ penguins that are instrumental in $\mu^- \to e^- e^+ e^-$ and in muon--electron conversion in nuclei.\footnote{In this paper we do not address muon--electron conversion in nuclei because in order to do it we would need to specify, through \emph{additional assumptions}, the Yukawa couplings of the extra Higgs doublets to the quarks. This is so because in our model muon--electron conversion in nuclei occurs---in the limit $m_R \to \infty$---through $\mu^- \to e^- \left( S^0_b \right)^\ast$ followed by the $\left( S^0_b \right)^\ast$ coupling to quarks.} Let us estimate a lower bound on $m_R$ by using the experimental bounds, given in table~\ref{bounds},\footnote{Two new experiments in search of lepton flavour violation are planned at the Paul-Scherrer Institute. The MEG~II experiment~\cite{baldini} plans a sensitivity improvement of one order of magnitude for $\mu^+ \to e^+ \gamma$. The $Mu3e$ experiment~\cite{blondel}, which is under construction, aims at a sensitivity for $\mbox{BR} \left( \mu^+ \to e^+ e^- e^+ \right)$ of order $10^{-16}$.} on the radiative decays $\ell_1 \to \ell_2 \gamma$. \begin{table} \begin{center} \begin{tabular}{rcl} $\mbox{BR} \left( \mu^+ \to e^+ \gamma \right)$ &$<$& $4.2 \times 10^{-13}$ \\ $\mbox{BR} \left( \tau^- \to e^- \gamma \right)$ &$<$& $3.3 \times 10^{-8}$ \\ $\mbox{BR} \left( \tau^- \to \mu^- \gamma \right)$ &$<$& $4.4 \times 10^{-8}$ \\ $\mbox{BR} \left( \mu^- \to e^- e^+ e^- \right)$ &$<$& $1.0 \times 10^{-12}$ \\ $\mbox{BR} \left( \tau^- \to e^- e^+ e^- \right)$ &$<$& $2.7 \times 10^{-8}$ \\ $\mbox{BR} \left( \tau^- \to e^- \mu^+ \mu^- \right)$ &$<$& $2.7 \times 10^{-8}$ \\ $\mbox{BR} \left( \tau^- \to \mu^- \mu^+ \mu^- \right)$ &$<$& $2.1 \times 10^{-8}$ \\ $\mbox{BR} \left( \tau^- \to \mu^- e^+ e^- \right)$ &$<$& $1.8 \times 10^{-8}$ \\ \end{tabular} \end{center} \caption{The experimental bounds on the branching ratios of some lepton flavour-changing decays. All the bounds are at the 90\%~CL. The first bound is from ref.~\cite{meg}; all the other bounds are from ref.~\cite{rpp}. \label{bounds}} \end{table} The amplitude for any such decay has the form \begin{equation}\label{Agamma} \mathcal{A} \left( \ell_1^\pm \to \ell_2^\pm \gamma \right) = e\, \varepsilon_\rho^\ast\, \bar u_2 \left( i \sigma^{\rho \lambda} q_\lambda \right) \left( A_L \gamma_L + A_R \gamma_R \right) u_1, \end{equation} where $\varepsilon_\rho$ is the polarization vector of the photon, $u_1$ and $u_2$ are the spinors of $\ell_1^\pm$ and $\ell_2^\pm$, respectively, and $\gamma_L$ and $\gamma_R$ are the projectors of chirality. The decay rate is given, in the limit $m_{\ell_2} = 0$, by \begin{equation} \Gamma \left( \ell_1^\pm \to \ell_2^\pm \gamma \right) = \frac{\alpha m_{\ell_1}^3}{4} \left( \left| A_L \right|^2 + \left| A_R \right|^2 \right). \end{equation} Knowing that $A_L$ and $A_R$ are suppressed by $m_R^{-2}$, one may estimate, just on dimensional grounds, that \begin{equation} A_{L,R} \sim \frac{1}{16 \pi^2}\, \frac{m_{\ell_1}}{m_R^2}. \end{equation} Using the first two bounds of table~\ref{bounds} together with the experimental values for the masses and widths of the $\mu$ and $\tau$, one may then derive the lower bounds $m_R \gtrsim 50$\,TeV from $\mu^+ \to e^+ \gamma$ and $m_R \gtrsim 2$\,TeV from $\tau^- \to e^- \gamma$.
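This estimate is easy to reproduce numerically. The short Python snippet below is our own back-of-the-envelope check (it is not taken from ref.~\cite{GL2002}): it combines the dimensional estimate of $A_{L,R}$ with the measured lifetimes of the muon and the tau and recovers the quoted lower bounds up to factors of order one.
\begin{verbatim}
import math

alpha = 1 / 137.036                      # fine-structure constant
hbar = 6.582e-25                         # GeV s
m_mu, m_tau = 0.10566, 1.77686           # lepton masses in GeV
Gamma_mu = hbar / 2.197e-6               # total muon width (GeV)
Gamma_tau = hbar / 2.903e-13             # total tau width (GeV)

def mR_lower_bound(m_l1, Gamma_total, BR_limit):
    # Gamma(l1 -> l2 gamma) ~ (alpha m_l1^3 / 4) * 2 * [m_l1/(16 pi^2 mR^2)]^2
    coeff = alpha * m_l1**5 / (512 * math.pi**4)    # = Gamma * mR^4
    return (coeff / (BR_limit * Gamma_total)) ** 0.25

print(mR_lower_bound(m_mu, Gamma_mu, 4.2e-13) / 1e3, "TeV")    # roughly 60 TeV
print(mR_lower_bound(m_tau, Gamma_tau, 3.3e-8) / 1e3, "TeV")   # roughly 2 TeV
\end{verbatim}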
Thus, in the framework of ref.~\cite{GL2002}, if we take $m_R \gtrsim 500$\,TeV then the radiative decays $\ell_1 \to \ell_2 \gamma$ are invisible in the foreseeable future. On the other hand, because of the nonzero limit of the amplitudes for $\ell_1 \to \ell_2 \left( S^0_b \right)^*$, the charged-lepton decays $\ell_1 \to \ell_2 \ell_3^+ \ell_3^-$ are unsuppressed when $m_R \to \infty$. It is the purpose of this paper to investigate those decays numerically in the framework of ref.~\cite{GL2002}, \emph{assuming $m_R$ to be so large that the radiative charged-lepton decays are invisible}. Then, $m_R$ is also much larger than the masses of the scalars in the model, which we assume to be in between one and a few TeV. As a sideline, in this paper we also consider the contributions of both the neutral and charged scalars to the anomalous magnetic moment $a_\ell$ of the charged lepton $\ell$, with particular emphasis on $a_\mu$. In order to keep the number of parameters of the model at a minimum, we restrict ourselves to just two Higgs doublets. Anticipating our results, we find that \emph{all}\/ five decays $\ell_1 \to \ell_2 \ell_3^+ \ell_3^-$ may well be just around the corner, while at the same time the contributions of the non-Standard Model (SM) scalars of the model can make up for the discrepancy $a_\mu^\mathrm{exp} - a_\mu^\mathrm{SM}$ of the anomalous magnetic moment of the muon. This paper is organized as follows. In section~\ref{LFV} we recall some results of ref.~\cite{GL2002}. We then specialize to the case of just two Higgs doublets in section~\ref{2HD}. We present the formulas for the contribution of the non-SM scalars to $a_\ell$ in section~\ref{MM}. Section~\ref{numerics} is devoted to a numerical simulation. In section~\ref{concl} we summarize and conclude. \section{The lepton flavour-violating decays $\ell_1^- \to \ell_2^- \ell_3^+ \ell_3^-$} \label{LFV} \subsection{The effective lepton flavour-violating interaction} \label{2.1} The framework of ref.~\cite{GL2002} assumes an $n$-Higgs-doublet setup wherein the violation of the family lepton numbers $L_\ell$ is soft. The corresponding Yukawa Lagrangian has the form \begin{equation} {\cal L}_{\rm Yukawa} = - \sum_{k=1}^{n}\, \sum_{\ell = e, \mu, \tau} \left[ \phi_k^\dagger\, \bar \ell_R \left( \Gamma_k \right)_{\ell \ell} + {\tilde \phi_k}^\dagger\, \bar \nu_{\ell R} \left( \Delta_k \right)_{\ell \ell} \right] D_{\ell L} + {\rm H.c.} \label{Yukawa} \end{equation} The basic assumption is \begin{equation} \mbox{the\ matrices}\ \Gamma_k \; \mbox{and}\ \Delta_k\; \mbox{are\ diagonal},\ \forall \; k=1,\ldots,n, \end{equation} as is already implicit in equation~\eqref{Yukawa}. In that equation, the Higgs doublets and the left-handed-lepton gauge doublets are given by \begin{equation} \phi_k = \left( \begin{array}{c} \varphi_k^+ \\*[0.5mm] \varphi_k^0 \end{array} \right), \quad \tilde \phi_k = \left( \begin{array}{c} {\varphi_k^0}^\ast \\*[0.5mm] - \varphi_k^- \end{array} \right), \quad \mbox{and} \quad D_{\ell L} = \left( \begin{array}{c} \nu_{\ell L} \\*[0.5mm] \ell_L \end{array} \right), \end{equation} respectively. The scalar mass eigenfields $S_a^+$ and $S_b^0$ are related to the $\varphi_k^+$ and $\varphi^0_k$ by \begin{equation} \varphi_k^+ = \sum_{a=1}^n U_{ka} S^+_a \quad \mbox{and} \quad \varphi_k^0 = \frac{1}{\sqrt{2}} \left( v_k + \sum_{b=1}^{2n} V_{kb} S_b^0 \right), \label{guywo} \end{equation} respectively~\cite{osland}. The vacuum expectation values (VEVs) are $v_k \left/ \sqrt{2} \right.$. 
The unitary $n \times n$ matrix $U$ diagonalizes the Hermitian mass matrix of the charged scalars. The $2n \times 2n$ real orthogonal matrix $\tilde V$, which diagonalizes the mass matrix of neutral scalar fields, is written as~\cite{osland} \begin{equation} \label{bjghw} \tilde V = \left( \begin{array}{c} \mathrm{Re}\,V \\ \mathrm{Im}\, V \end{array} \right) \quad \mbox{with} \quad V \equiv \mathrm{Re}\,V + i\, \mathrm{Im}\, V. \end{equation} The matrix $V$ is $n \times 2n$. We number the scalar mass eigenfields in such a way that $S_1^\pm = G^\pm$ and $S_1^0 = G^0$ are the Goldstone bosons. If there is only one Higgs doublet, \textit{i.e.}\ when $n=1$, the matrix $V$ is simply $V = \left( i,\ 1 \right)$ in the phase convention where $v_1 > 0$, and $S^0_2$ is the Higgs field of the SM. We define the diagonal matrices \begin{equation} \label{gfytu} M_D = \sum_{k=1}^n \frac{v_k}{\sqrt{2}}\, \Delta_k, \quad M_\ell = \sum_{k=1}^n \frac{v_k^\ast}{\sqrt{2}}\, \Gamma_k = \mathrm{diag} \left( m_e, m_\mu, m_\tau \right). \end{equation} According to ref.~\cite{GL2002}, in the limit $m_R \to \infty$, where $m_R$ is the seesaw scale, the flavour-changing interactions of the physical neutral scalars $S^0_b$, induced by loops with charged scalars and neutrinos, are given by \begin{equation} \label{Leff} \mathcal{L}_\mathrm{eff} \left( S^0 \right) = \sum_{b \ge 2} S^0_b\, \sum_{\ell_1 \neq \ell_2} \bar \ell_1 \left[ \left( A_L^b \right)_{\ell_1 \ell_2} \gamma_L + \left( A_R^b \right)_{\ell_1 \ell_2} \gamma_R \right] \ell_2. \end{equation} Note that the summation over $b$ begins with $b=2$, \textit{i.e.}\ it excludes the Goldstone boson $S_1^0$. The coefficients $\left( A_{L,R}^b \right)_{\ell_1 \ell_2}$ were computed in ref.~\cite{GL2002}. Let us define the $3 \times 3$ unitary matrix $U_R$ that diagonalizes $M_R$ as \begin{equation}\label{RRR} U_R^\dagger M_R U_R^* = \mbox{diag} \left( m_4, m_5, m_6 \right), \end{equation} where $m_{4,5,6}$ are, in the limit $m_R \to \infty$, the masses of the heavy neutrinos. We next define \begin{equation}\label{X} X_{\ell_1 \ell_2} \equiv \frac{1}{16 \sqrt{2} \pi^2}\, \sum_{i=4}^6 \left[ \left( U_R \right)_{\ell_1 i} \left( U_R^\ast \right)_{\ell_2 i} \ln{\frac{m_i^2}{\mu^2}} \right] = X_{\ell_2 \ell_1}^\ast, \end{equation} where $\mu$ is a mass scale which is arbitrary because of the unitarity of $U_R$. Finally, we define the flavour space matrices $A_{1,2,3}$ as \begin{subequations} \label{A123} \begin{eqnarray} \left( A_1 \right)_{\ell_1 \ell_2} &\equiv& \sum_{k=1}^n \left( \Gamma_k \right)_{\ell_1 \ell_1} \left( \Delta_k \right)_{\ell_2 \ell_2}, \\ \left( A_2 \right)_{\ell_1 \ell_2} &\equiv& \sum_{k=1}^n \left( \Delta_k^\ast \right)_{\ell_1 \ell_1} \left( \Delta_k \right)_{\ell_2 \ell_2}, \\ \left( A_3 \right)_{\ell_1 \ell_2} &\equiv& \sum_{k=1}^n \left( \Delta_k^\ast \right)_{\ell_1 \ell_1} \left( \Gamma_k^\ast \right)_{\ell_2 \ell_2}. \end{eqnarray} \end{subequations} Notice that $A_3 = A_1^\dagger$ and $A_2 = A_2^\dagger$. 
Then, \begin{equation} \left( A_L^b \right)_{\ell_1 \ell_2} = \frac{X_{\ell_1 \ell_2} A^b_{\ell_1 \ell_2}} {m_{\ell_1}^2 - m_{\ell_2}^2} \quad \mbox{and} \quad \left( A_R^b \right)_{\ell_1 \ell_2} = \frac{X_{\ell_2 \ell_1}^\ast \left( A^b_{\ell_2 \ell_1} \right)^\ast} {m_{\ell_2}^2 - m_{\ell_1}^2}, \end{equation} where $m_{\ell_i}$ is the mass of the charged lepton $\ell_i$ and \begin{eqnarray} A^b_{\ell_1 \ell_2} &=& \sum_{k=1}^n V_{kb}^\ast \left\{ \left( \Delta_k^\ast \right)_{\ell_1 \ell_1} \left( m_{\ell_1}^2 - m_{\ell_2}^2 \right) \left( A_1 \right)_{\ell_1 \ell_2} \right. \nonumber \\ & & + \left( \Gamma_k \right)_{\ell_1 \ell_1} \left[ - m_{\ell_1} \left( M_D^\ast \right)_{\ell_1 \ell_1} \left( A_1 \right)_{\ell_1 \ell_2} + \frac{m_{\ell_2}^2}{2}\, \left( A_2 \right)_{\ell_1 \ell_2} - m_{\ell_2} \left( M_D \right)_{\ell_2 \ell_2} \left( A_3 \right)_{\ell_1 \ell_2} \right] \nonumber \\ & & \left. + \left( \Gamma_k \right)_{\ell_2 \ell_2} \left[ m_{\ell_2} \left( M_D^\ast \right)_{\ell_1 \ell_1} \left( A_1 \right)_{\ell_1 \ell_2} - \frac{m_{\ell_1} m_{\ell_2}}{2}\, \left( A_2 \right)_{\ell_1 \ell_2} + m_{\ell_1} \left( M_D \right)_{\ell_2 \ell_2} \left( A_3 \right)_{\ell_1 \ell_2} \right] \right\}. \nonumber \\ & & \label{AAAA} \end{eqnarray} We note that, in every multi-Higgs-doublet model, it is possible to choose a basis for the scalar doublets such that only one of them, say $\phi_1$, has nonzero VEV: \begin{equation} \left\langle \varphi_1^0 \right\rangle_0 = \frac{v}{\sqrt{2}}, \quad \left\langle \varphi_k^0 \right\rangle_0 = 0 \quad \forall \; k > 1. \end{equation} This basis is called the `Higgs basis'. In it, from equation~\eqref{gfytu}, \begin{equation} \label{dsifp} \left( \Delta_1^\ast \right)_{\ell_1 \ell_1} = \frac{\sqrt{2}}{v^\ast} \left( M_D^\ast \right)_{\ell_1 \ell_1}, \quad \left( \Gamma_1 \right)_{\ell_1 \ell_1} = \frac{\sqrt{2}}{v^\ast}\, m_{\ell_1}. \end{equation} With equations~\eqref{dsifp} one finds that, in the sum over $k$ in equation~\eqref{AAAA}, the term with $k=1$ gives a null contribution. Thus, \emph{in the Higgs basis, the contribution to $A^b_{\ell_1 \ell_2}$ proportional to $V_{1b}^\ast$ is identically zero}. In particular, if there is only one Higgs doublet, \textit{i.e.}\ in the SM, $A^b_{\ell_1 \ell_2} = 0$, \textit{viz.}\ when $n=1$ there are no effective lepton flavour-violating interactions of the neutral scalar in the limit $m_R \to \infty$. \subsection{The decay rate} If $\ell_2 \neq \ell_3$, then $\ell_1^- \to \ell_2^- \ell_3^+ \ell_3^-$ may be either $\tau^- \to \mu^- e^+ e^-$ or $\tau^- \to e^- \mu^+ \mu^-$. Equation~(\ref{Leff}) supplies the amplitude of the subprocess $\ell_1^- \to \ell_2^- \left( S^0_b \right)^*$. For the subsequent $\left( S^0_b \right)^* \to \ell_3^+ \ell_3^-$ we have \begin{equation} \label{Leff2} \mathcal{L}^{(\ell^\pm)}_\mathrm{Yukawa} \left( S^0 \right) = -\frac{1}{\sqrt{2}}\, \sum_{b=2}^{2n} S^0_b \sum_{\ell=e,\mu,\tau} \bar \ell \left[ \left( \hat\Gamma_b \right)_{\ell \ell} \gamma_L + \left( {\hat\Gamma}_b^\dagger \right)_{\ell \ell} \gamma_R \right] \ell, \end{equation} where \begin{equation} \label{vugop} \hat\Gamma_b \equiv \sum_{k=1}^n V_{kb}^* \Gamma_k.
\end{equation} We write the decay amplitude for $\ell_1^- \to \ell_2^- \ell_3^+ \ell_3^-$ as \begin{equation} \label{ampl} \mathcal{A} = \sum_{b=2}^{2n} \bar u_2 \left[ \left( \lambda_b \right)_{\ell_2 \ell_1} \gamma_L + \left( \rho_b \right)_{\ell_2 \ell_1} \gamma_R \right] u_1\, \bar u_3 \left[ \left( \hat \Gamma_b \right)_{\ell_3 \ell_3} \gamma_L + \left( \hat \Gamma_b^\dagger \right)_{\ell_3 \ell_3} \gamma_R \right] v_3, \end{equation} where, from equations~\eqref{Leff} and~\eqref{Leff2}, \begin{equation} \label{vytop} \left( \lambda_b \right)_{\ell_2 \ell_1} = - \frac{\left( A_L^b \right)_{\ell_2 \ell_1}}{\sqrt{2}\, M_b^2} \quad \mbox{and} \quad \left( \rho_b \right)_{\ell_2 \ell_1} = - \frac{\left( A_R^b \right)_{\ell_2 \ell_1}}{\sqrt{2}\, M_b^2}. \end{equation} In equations~\eqref{vytop}, $M_b$ is the mass of $S^0_b$. In the scalar propagators, we have neglected the four-momentum of the $\ell_3^+ \ell_3^-$ subsystem. With the amplitude in equation~(\ref{ampl}), the decay rate is given by \begin{eqnarray} \Gamma \left( \ell_1^- \to \ell_2^- \ell_3^+ \ell_3^- \right) &=& \frac{m_{\ell_1}^5}{6144 \pi^3} \left[ \left| \sum_{b=2}^{2n} \left( \lambda_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b \right)_{\ell_3 \ell_3} \right|^2 + \left| \sum_{b=2}^{2n} \left( \lambda_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b^\ast \right)_{\ell_3 \ell_3} \right|^2 \right. \nonumber \\ & & \left. + \left| \sum_{b=2}^{2n} \left( \rho_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b \right)_{\ell_3 \ell_3} \right|^2 + \left| \sum_{b=2}^{2n} \left( \rho_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b^\ast \right)_{\ell_3 \ell_3} \right|^2 \right]. \label{rate1} \end{eqnarray} We have neglected the masses of the final charged leptons in the kinematics. If $\ell_2 = \ell_3$, then $\ell_1^- \to \ell_2^- \ell_2^+ \ell_2^-$ may be either $\mu^- \to e^- e^+ e^-$ or $\tau^- \to e^- e^+ e^-$ or $\tau^- \to \mu^- \mu^+ \mu^-$. In equation~(\ref{ampl}) one must antisymmetrize the amplitude with respect to $\ell_2^-$ and in the kinematics one must insert an extra factor $1/2$. The final result is \begin{eqnarray} \Gamma \left( \ell_1^- \to \ell_2^- \ell_2^+ \ell_2^- \right) &=& \frac{m_{\ell_1}^5}{6144 \pi^3} \left[\, \frac{1}{2}\, \left| \sum_{b=2}^{2n} \left( \lambda_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b \right)_{\ell_2 \ell_2} \right|^2 + \frac{1}{2}\, \left| \sum_{b=2}^{2n} \left( \rho_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b^\ast \right)_{\ell_2 \ell_2} \right|^2 \right. \nonumber \\ & & \left. + \left| \sum_{b=2}^{2n} \left( \lambda_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b^\ast \right)_{\ell_2 \ell_2} \right|^2 + \left| \sum_{b=2}^{2n} \left( \rho_b \right)_{\ell_2 \ell_1} \left( {\hat \Gamma}_b \right)_{\ell_2 \ell_2} \right|^2 \right]. \end{eqnarray} \section{Two Higgs doublets} \label{2HD} From now on we assume $n=2$, \textit{i.e.}\ a two-Higgs-doublet model. In the Higgs basis, the VEVs are given by \begin{equation} \left\langle \varphi_1^0 \right\rangle_0 = \frac{v}{\sqrt{2}}, \quad \left\langle \varphi_2^0 \right\rangle_0 = 0, \end{equation} where $v \approx 246\,$GeV is real and positive. Thus, according to equation~\eqref{guywo}, \begin{equation} \label{fptmjk} \varphi_k^0 = \frac{1}{\sqrt{2}} \left( \delta_{k1} v + \sum_{b=1}^4 V_{kb} S_b^0 \right). \end{equation} Moreover, the matrix $U$ is the $2 \times 2$ unit matrix, \textit{i.e.}\ $\varphi_1^+ = S_1^+ = G^+$ is the charged Goldstone boson and $\varphi_2^+ = S_2^+$ is the physical charged scalar. 
According to the notation of ref.~\cite{osland}, the $4 \times 4$ orthogonal matrix $\tilde V$ of equation~\eqref{bjghw}, which diagonalizes the mass matrix of neutral scalar fields, is given by \begin{equation} \tilde V = \left( \begin{array}{cccc} 0 & R_{11} & R_{12} & R_{13} \\ 0 & R_{21} & R_{22} & R_{23} \\ 1 & 0 & 0 & 0 \\ 0 & R_{31} & R_{32} & R_{33} \end{array} \right), \quad \textit{i.e.}\ V = \left( \begin{array}{cccc} i & R_{11} & R_{12} & R_{13} \\ 0 & R_{21} + i R_{31} & R_{22} + i R_{32} & R_{23} + i R_{33} \end{array} \right), \end{equation} with a $3 \times 3$ orthogonal matrix $R$. The third row of $\tilde V$ corresponds to the neutral Goldstone boson $S_1^0 = G^0$. The definition~\eqref{vugop} reads \begin{equation} \label{fogpy} \hat \Gamma_b = \Gamma_1 R_{1\,b-1} + \Gamma_2 \left( R_{2\,b-1} - i R_{3\,b-1} \right) \end{equation} for $b = 2, 3, 4$. We parameterize the flavour-diagonal Yukawa coupling matrices as \begin{subequations}\label{2hdys} \begin{eqnarray} \Gamma_1 &=& \frac{\sqrt{2}}{v}\, \mbox{diag} \left( m_e, m_\mu, m_\tau \right) = \frac{\sqrt{2}}{v}\, M_\ell, \\ \Gamma_2 &=& \mbox{diag} \left( \gamma_e, \gamma_\mu, \gamma_\tau \right), \\ \Delta_1 &=& \mbox{diag} \left( d_e, d_\mu, d_\tau \right) = \frac{\sqrt{2}}{v}\,M_D, \\ \Delta_2 &=& \mbox{diag} \left( \delta_e, \delta_\mu, \delta_\tau \right). \end{eqnarray} \end{subequations} Therefore, from equations~\eqref{A123} and~\eqref{AAAA}, \begin{subequations} \begin{eqnarray} A^b_{\ell_1 \ell_2} &=& V_{2b}^\ast\, A_{\ell_1 \ell_2}, \label{cuigp} \\ A_{\ell_1 \ell_2} &=& \frac{\sqrt{2} \left( m_{\ell_1}^2 - m_{\ell_2}^2 \right) m_{\ell_1} \delta_{\ell_1}^* d_{\ell_2}} {v} \nonumber \\ & & - \left( m_{\ell_1}^2 + \frac{m_{\ell_2}^2}{2} \right) \gamma_{\ell_1} d_{\ell_1}^* d_{\ell_2} + \frac{3 m_{\ell_1} m_{\ell_2}}{2}\, d_{\ell_1}^* \gamma_{\ell_2} d_{\ell_2} \nonumber \\ & & + \left( m_{\ell_1}^2 - \frac{m_{\ell_2}^2}{2} \right) \gamma_{\ell_1} \delta_{\ell_1}^* \delta_{\ell_2} - \frac{m_{\ell_1} m_{\ell_2}}{2}\, \delta_{\ell_1}^* \gamma_{\ell_2} \delta_{\ell_2} \nonumber \\ & & + \frac{v m_{\ell_2}}{\sqrt{2}} \left( \gamma_{\ell_1} d_{\ell_1}^* \gamma_{\ell_2} \delta_{\ell_2} - \gamma_{\ell_1} \delta_{\ell_1}^* \gamma_{\ell_2}^* d_{\ell_2} \right) + \frac{v m_{\ell_1}}{\sqrt{2}} \left( \delta_{\ell_1}^* \left| \gamma_{\ell_2} \right|^2 d_{\ell_2} - \gamma_{\ell_1}^2 d_{\ell_1}^* \delta_{\ell_2} \right). \hspace*{5mm} \label{hlrpsa} \end{eqnarray} \end{subequations} As demonstrated at the end of section~\ref{2.1}, in $A^b_{\ell_1 \ell_2}$ the term proportional to $V_{1b}^\ast$ vanishes. We now make the further assumption that $\phi_1$ is just identical with the Higgs doublet of the SM; this means that $S^0_2$ is exactly like the SM Higgs boson. This choice relieves us from having to take into account the experimental restrictions on the couplings of the SM Higgs boson, which become automatically fulfilled. We now have \begin{equation} \phi_1 = \left( \begin{array}{c} S_1^+ \\ \left( v + S_2^0 + i S_1^0 \right) \left/ \sqrt{2} \right. \end{array} \right), \end{equation} where $S_1^+ = G^+$ and $S_1^0 = G^0$ are the Goldstone bosons. This means that we choose $R_{11} = 1$, whence it follows that $R$ can be written as\footnote{We assume without loss of generality that the orthogonal matrix $R$ has determinant $+1$.} \begin{equation}\label{R} R = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos{\alpha} & \sin{\alpha} \\ 0 & -\sin{\alpha} & \cos{\alpha} \end{array} \right). 
\end{equation} The matrix $V$ is \begin{equation} V = \left( \begin{array}{cccc} i & 1 & 0 & 0 \\ 0 & 0 & e^{- i \alpha} & i e^{- i \alpha} \end{array} \right). \end{equation} Thus, from equation~\eqref{fptmjk}, \begin{equation} \label{vigop} \phi_2 = \left( \begin{array}{c} S_2^+ \\ e^{- i \alpha} \left( S_3^0 + i S_4^0 \right) \left/ \sqrt{2} \right. \end{array} \right). \end{equation} From equation~\eqref{fogpy}, \begin{equation} \hat \Gamma_2 = \Gamma_1, \quad \hat \Gamma_3 = e^{i \alpha} \Gamma_2, \quad \hat \Gamma_4 = - i e^{i \alpha} \Gamma_2, \end{equation} and, from equation~\eqref{cuigp}, \begin{equation} A^2_{\ell_1 \ell_2} = 0, \quad A^3_{\ell_1 \ell_2} = e^{i \alpha} A_{\ell_1 \ell_2}, \quad A^4_{\ell_1 \ell_2} = - i e^{i \alpha} A_{\ell_1 \ell_2}. \end{equation} The decay rates are then \begin{subequations} \label{bihpi} \begin{eqnarray} \Gamma \left( \ell_1^- \to \ell_2^- \ell_3^+ \ell_3^- \right) &=& \frac{m_{\ell_1}^5}{6144 \pi^3} \left| X_{\ell_2\ell_1} \right|^2 \left| \gamma_{\ell_3} \right|^2 \frac{\left| A_{\ell_2 \ell_1} \right|^2 + \left| A_{\ell_1 \ell_2} \right|^2} {\left( m_{\ell_2}^2 - m_{\ell_1}^2 \right)^2} \left( \frac{1}{M_3^4} + \frac{1}{M_4^4} \right), \label{r1} \\ \Gamma \left( \ell_1^- \to \ell_2^- \ell_2^+ \ell_2^- \right) &=& \frac{m_{\ell_1}^5}{6144 \pi^3} \left| X_{\ell_2\ell_1} \right|^2 \left| \gamma_{\ell_2} \right|^2 \frac{\left| A_{\ell_2 \ell_1} \right|^2 + \left| A_{\ell_1 \ell_2} \right|^2} {\left( m_{\ell_2}^2 - m_{\ell_1}^2 \right)^2} \nonumber \\ && \hspace{39mm} \times \left[ \frac{3}{4} \left( \frac{1}{M_3^4} + \frac{1}{M_4^4} \right) + \frac{1}{2 M_3^2 M_4^2} \right]. \hspace*{8mm} \label{r2} \end{eqnarray} \end{subequations} The decay rates depend on the masses $M_3$ and $M_4$ of the non-SM neutral scalar fields $S^0_3$ and $S^0_4$, respectively. There is no dependence on the phase $\alpha$. In equation~\eqref{r1}, $\ell_2 \neq \ell_3$ is understood. \section{The anomalous magnetic moment of the muon} \label{MM} Let $a_\ell^{(S)}$ denote the contributions of the non-SM scalars $S^0_3$, $S^0_4$, and $S^\pm_2$ to the anomalous magnetic moment (AMM) of the charged lepton $\ell$. To a good approximation, \begin{subequations} \label{aell} \begin{eqnarray} a_\ell^{(S)} &\simeq& \frac{m_\ell^2}{96 \pi^2} \left\{ 2 \left| \gamma_\ell \right|^2 \left( \frac{1}{M_3^2} + \frac{1}{M_4^2} \right) \right. \label{a} \\ & & - 3\, \mbox{Re} \left( e^{2i\alpha} \gamma_\ell^2 \right) \left[ \frac{1}{M_3^2} \left( 3 + 2 \ln \frac{m_\ell^2}{M_3^2} \right) - \frac{1}{M_4^2} \left( 3 + 2 \ln \frac{m_\ell^2}{M_4^2} \right) \right] \label{b} \\ & & \left. - \frac{\left| \gamma_\ell \right|^2}{\mu_2^2} \right\}. \label{c} \end{eqnarray} \end{subequations} Lines~(\ref{a}) and~(\ref{b}) derive from a loop with $\ell$ and either $S_3^0$ or $S_4^0$; the photon line attaches to $\ell$. Line~(\ref{c}) comes from a loop with $S_2^\pm$ and light neutrinos, wherein the external photon attaches to $S_2^\pm$; in that line, $\mu_2$ denotes the mass of $S^\pm_2$. We have dropped all the terms proportional to $m_R^{-2}$, including in particular the contributions from the loop with $S_2^\pm$ and heavy neutrinos. For the coupling of the charged scalars to the charged leptons we refer the reader to ref.~\cite{GL2002}. 
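Equation~\eqref{aell} is straightforward to evaluate numerically. The following short Python function is our own illustrative sketch of such an evaluation, written for real $\gamma_\ell$ and $e^{2i\alpha}=1$ and with all masses in GeV; the example values in the last lines (1 and 2\,TeV scalars, $\gamma_\ell = 1.7$) are only meant to show how the function is called.
\begin{verbatim}
import math

def a_S(m_l, gamma_l, M3, M4, mu2):
    """Eq. (aell) for real gamma_l and exp(2 i alpha) = 1; masses in GeV."""
    neutral = 2 * gamma_l**2 * (1 / M3**2 + 1 / M4**2)
    logs = -3 * gamma_l**2 * ((3 + 2 * math.log(m_l**2 / M3**2)) / M3**2
                              - (3 + 2 * math.log(m_l**2 / M4**2)) / M4**2)
    charged = -gamma_l**2 / mu2**2
    return m_l**2 / (96 * math.pi**2) * (neutral + logs + charged)

print(a_S(0.10566, 1.7, 1000.0, 2000.0, 2000.0))    # a_mu^(S), about 2.6e-9
print(a_S(0.000511, 1.7, 1000.0, 2000.0, 2000.0))   # a_e^(S), about 1.0e-13
\end{verbatim}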
There is a long-standing discrepancy between the experimental value of the AMM of the muon, $a_\mu^\mathrm{exp}$, and the SM theoretical value of that AMM, $a_\mu^\mathrm{SM}$~\cite{blum}:\footnote{See also ref.~\cite{lindner} for a recent review.} \begin{equation}\label{discrepancy} a_\mu^\mathrm{exp} - a_\mu^\mathrm{SM} = \left\{ \begin{array}{l} (287 \pm 80) \times 10^{-11}\ (\mbox{at}\ 3.6\,\sigma)~\cite{davier}, \\ (261 \pm 78) \times 10^{-11}\ (\mbox{at}\ 3.3\,\sigma)~\cite{hagiwara}. \end{array} \right. \end{equation} If this discrepancy signals new physics, then the contributions of the scalars in our model to the AMM of the muon may be relevant. Taking for instance $\gamma_\mu^2$ real and $e^{2i\alpha} = 1$, one has \begin{equation} a_\mu^{(S)} \simeq - \frac{\gamma_\mu^2}{96 \pi^2} \left[ \frac{m_\mu^2}{M_3^2} \left( 7 + 6 \ln \frac{m_\mu^2}{M_3^2} \right) - \frac{m_\mu^2}{M_4^2} \left( 11 + 6 \ln \frac{m_\mu^2}{M_4^2} \right) + \frac{m_\mu^2}{\mu_2^2} \right]. \label{vugig} \end{equation} The right-hand side of equation~\eqref{vugig} is dominated by the two terms with logarithms. One readily sees that the terms with $M_4$ and $\mu_2$ give negative contributions to $a_\mu^{(S)}$ (assuming $\gamma_\mu^2$ to be positive), while the term with $M_3$ gives a positive contribution; since $a_\mu^\mathrm{exp} - a_\mu^\mathrm{SM}$ is positive, we would like the term with $M_3$ to dominate over the other two; this is achieved with $M_3 < M_4$. Taking for instance $M_3 = 1\,$TeV, $M_4 = \mu_2 = 2$\,TeV,\footnote{Our choice $M_4 = \mu_2$ has the advantage that it automatically leads to a zero oblique parameter $T$. Indeed, in our two-Higgs-doublet model with $R_{11} = 1$, % \begin{equation}\nonumber T = \frac{1}{16 \pi s_w^2 m_W^2} \left[ f \left( M_3^2, \mu_2^2 \right) + f \left( M_4^2, \mu_2^2 \right) - f \left( M_3^2, M_4^2 \right) \right], \end{equation} % where $f \left( x, y \right)$ is a function~\cite{osland,review} that is zero when $x = y$. Thus, $T = 0$ when $M_4 = \mu_2$.} and $\gamma_\mu = 1.7$, we find $a_\mu^{(S)} = 258 \times 10^{-11}$, which is of the right sign and absolute value to explain the discrepancy~\eqref{discrepancy}. We conclude that our model can, using reasonable parameters, fill the gap between $a_\mu^\mathrm{exp}$ and $a_\mu^\mathrm{SM}$. The experimental AMM of the electron is in good agreement with the SM prediction for $a_e$. We must therefore check that the non-SM scalars of our model give an $a_e^{(S)}$ smaller than the experimental error $2.6 \times 10^{-13}$~\cite{rpp} of $a_e$. We might of course simply take $\gamma_e = 0$, but this would eliminate \textit{e.g.}\ the decay $\mu^- \to e^- e^+ e^-$, which we would like to have close to its experimental upper limit. So we use instead the same scalar masses as before and choose $\gamma_e = 1.7$, obtaining $a_e^{(S)} = 1.0 \times 10^{-13}$. Thus, even for a relatively large $\gamma_e$, $a_e^{(S)}$ can be below the experimental error. This is of course because of the tiny electron mass. \section{Numerics} \label{numerics} In this section, we want to show that in the two-Higgs-doublet version of the framework of ref.~\cite{GL2002}, and assuming moreover $R_{11} = 1$, \emph{there is a region in parameter space where the branching ratios of all five decays $\ell_1^- \to \ell_2^- \ell_3^+ \ell_3^-$ are close to their present experimental upper bounds}\/ displayed in table~\ref{bounds}. 
Notice that we only strive in this section to prove that something is \emph{possible}; we do \emph{not}\/ attempt a full scan of the parameter space of our model, which is quite vast. On the contrary, we shall make many simplifying assumptions, for instance \emph{we assume that all the parameters of the model are real}. In the decay rates of equations~\eqref{bihpi} there are various unknowns: \begin{enumerate} % \item the neutral-scalar masses $M_3$ and $M_4$; \label{p3} % \item the factors $\left| X_{\ell_2 \ell_1} \right|^2$; \label{p1} % \item the Yukawa couplings $\gamma_\ell$ together with those in $A_{\ell \ell'}$. \label{p2} % \end{enumerate} In this section we also want to fit $a_\mu^\mathrm{exp} - a_\mu^\mathrm{SM}$ of equation~(\ref{discrepancy}) by using $a_\mu^{(S)}$ of equation~\eqref{aell}; in that equation there are the neutral-scalar masses $M_3$ and $M_4$, the charged-scalar mass $\mu_2$, the Yukawa coupling $\gamma_\mu$, and the phase $\alpha$. In order to simplify our task, \emph{we fix all those parameters at the values used in section~\ref{MM}}, \textit{viz.} \begin{subequations} \label{input1} \begin{eqnarray} & & M_3 = 1\, \mathrm{TeV}, \quad M_4 = 2\, \mathrm{TeV}, \label{mass} \\ & & \gamma_\mu = 1.7. \label{gamma} \end{eqnarray} \end{subequations} Thus, the neutral-scalar masses mentioned in point~\ref{p3} above are fixed through equation~\eqref{mass}. Notice in equation~\eqref{gamma} that $\gamma_\mu$ is assumed to be real. In order to compute the factors $\left| X_{\ell_2 \ell_1} \right|^2$ we proceed in the following way. The mass matrix of the light neutrinos is obtained by the seesaw formula. In our notation, it reads \begin{equation}\label{mnu} \mathcal{M}_\nu = - M_D^T M_R^{-1} M_D = -\frac{v^2}{2}\, \Delta_1 M_R^{-1} \Delta_1, \end{equation} where $\Delta_1 = \mathrm{diag} \left( d_e, d_\mu, d_\tau \right)$ is diagonal. We shall fix \begin{equation} \label{input2} d_e = 0.6, \quad d_\mu = d_\tau = 0.1. \end{equation} Inverting equation~(\ref{mnu}), we obtain \begin{equation} M_R = -\frac{v^2}{2}\, \Delta_1 \mathcal{M}_\nu^{-1} \Delta_1. \label{mrr} \end{equation} The matrix $\mathcal{M}_\nu$ is diagonalized as \begin{equation} V_L^T \mathcal{M}_\nu V_L = \mbox{diag} \left( m_1, m_2, m_3 \right) \equiv \hat m, \label{vl} \end{equation} where $m_{1,2,3}$ are the light-neutrino masses and $V_L = e^{i \hat\alpha}\, U_\mathrm{PMNS}\, e^{i\hat\beta}$ is identical to the lepton mixing matrix $U_\mathrm{PMNS}$, apart from a diagonal matrix of unphysical phases $e^{i \hat\alpha}$ on the left and apart from the Majorana phase factors of the diagonal matrix $e^{i\hat\beta}$ on the right. Using equations~\eqref{mrr} and~\eqref{vl} together with the fact that the matrices $\Delta_1$, $\hat m$, $e^{i \hat \alpha}$, and $e^{i \hat \beta}$ are diagonal, we obtain \begin{equation} M_R = -\frac{v^2}{2}\, e^{i \hat\alpha} \Delta_1\, U_\mathrm{PMNS} \left( e^{2 i \hat\beta} \hat m^{-1} \right) U_\mathrm{PMNS}^T\, \Delta_1 e^{i\hat\alpha}. \label{mr} \end{equation} Using our simplifying assumption that all the parameters in the model are real, we set in equation~\eqref{mr} $e^{i \hat \alpha} = e^{i \hat \beta} = \mathbbm{1}$ and we also assume that $U_\mathrm{PMNS}$ is real. 
Using the standard parameterization for $U_\mathrm{PMNS}$ in ref.~\cite{rpp}, we fix $e^{i\delta} = -1$;\footnote{We might alternatively have chosen $e^{i \delta} = +1$; we have checked that there is no qualitative difference between the two cases.} we also fix the mixing angles at their best-fit values of ref.~\cite{schwetz}, \textit{viz.}\ $s_{12}^2 = 0.304$, $s_{23}^2 = 0.452$, and $s_{13}^2 = 0.0218$. We also have to choose the type of light-neutrino mass spectrum, either normal or inverted---for definiteness, we settle on a normal mass spectrum. Let the lightest neutrino mass $m_1$, which is unknown to date, be a free parameter; with a choice for $m_1$ and the best-fit values $\Delta m^2_{21} = 7.50 \times 10^{-5}\, \mathrm{eV}^2$ and $\Delta m^2_{31} = 2.457 \times 10^{-3}\, \mathrm{eV}^2$ of ref.~\cite{schwetz}, we obtain for the other two light-neutrino masses \begin{equation} m_2 = \sqrt{m_1^2 + \Delta m^2_{21}} \quad \mbox{and} \quad m_3 = \sqrt{m_1^2 + \Delta m^2_{31}}. \end{equation} We are now able to compute the matrix $M_R$ as a function of $m_1$ through equation~\eqref{mr}; therefrom we compute the quantities $\left| X_{\ell_2 \ell_1} \right|^2$ by using equations~\eqref{RRR} and~\eqref{X}. We obtain the result depicted in figure~\ref{XXXX}. \begin{figure}[t] \begin{center} \epsfig{file=Xn.eps,width=0.9\textwidth} \end{center} \caption{The factors $\left| X_{\ell_2 \ell_1} \right|^2$ as functions of $m_1$. The full line gives $\left| X_{e \mu} \right|^2$, the dashed line gives $\left| X_{e \tau} \right|^2$, and the dashed-dotted line is $\left| X_{\mu \tau} \right|^2$. \label{XXXX}} \end{figure} Notice that $X_{e \mu}$ has a zero for $m_1 \approx 0.0086$\,eV; otherwise, the $\left| X_{\ell_2 \ell_1} \right|^2$ are decreasing functions of $m_1$, and vary by a few orders of magnitude from $m_1 = 0$ to $m_1 = 0.1\,$eV. From now on we fix \begin{equation} \label{input3} m_1 = 0.05\,\mbox{eV}. \end{equation} We then have \begin{equation} \label{Xfactors} \left| X_{e \mu} \right|^2 = 1.99 \times 10^{-8}, \quad \left| X_{e \tau} \right|^2 = 4.43 \times 10^{-8}, \quad \left| X_{\mu \tau} \right|^2 = 2.11 \times 10^{-6}. \end{equation} In this way we have fixed the factors mentioned in point~\ref{p1} above. Besides equations~\eqref{Xfactors}, we also obtain, from equation~\eqref{input3}, heavy-neutrino masses $m_4 = 4.3 \times 10^{12}$\,GeV, $m_5 = 6.0 \times 10^{12}$\,GeV, and $m_6 = 2.2 \times 10^{14}$\,GeV. These masses represent the seesaw scale,\footnote{Actually, $m_6$ is two orders of magnitude larger than $m_4$ and $m_5$ and therefore there is no well-defined seesaw scale, but that is not relevant for our purposes.} which is so large that all the radiative charged-lepton decays are completely invisible. Actually, $m_R$ is this large partly because we chose the Yukawa couplings $d_\ell$ close to one, \textit{cf.}\ equation~\eqref{input2}, in order to achieve large $\tau$-lepton branching ratios.\footnote{Note that if one uses $d_e = d_\mu = d_\tau = 0$ in the $A_{\ell_1 \ell_2}$ of equation~\eqref{hlrpsa}, then only two subdominant terms, \textit{i.e.}\ terms without $v$ in the numerator, survive.} Thus, the effect that we want to produce in our model can only occur for a large seesaw scale---it disappears, at least in the case of the $\tau$-lepton, for small $m_R$. Some of the Yukawa couplings mentioned in point~\ref{p2} are given in equations~\eqref{input1} and~\eqref{input2}.
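Before fixing the remaining couplings, we note that the computation just described is simple to reproduce. The Python sketch below (purely illustrative; all variable names are ours) evaluates equation~\eqref{mr} with the inputs listed above---the best-fit angles and mass-squared differences of ref.~\cite{schwetz}, $e^{i\delta} = -1$, a real $U_\mathrm{PMNS}$, trivial phase matrices, $m_1 = 0.05$\,eV, and the $d_\ell$ of equation~\eqref{input2}; the singular values of the resulting $M_R$ should reproduce, up to rounding, the heavy-neutrino masses quoted above.
\begin{verbatim}
import numpy as np

# Best-fit mixing angles and cos(delta) = -1 (so that U_PMNS is real)
s12, s23, s13 = np.sqrt([0.304, 0.452, 0.0218])
c12, c23, c13 = np.sqrt(1.0 - np.array([s12, s23, s13])**2)
cd = -1.0

# Standard parameterization of U_PMNS, real for e^{i delta} = -1
U = np.array([
    [ c12*c13,                   s12*c13,                  s13*cd ],
    [-s12*c23 - c12*s23*s13*cd,  c12*c23 - s12*s23*s13*cd, s23*c13],
    [ s12*s23 - c12*c23*s13*cd, -c12*s23 - s12*c23*s13*cd, c23*c13]])

m1 = 0.05e-9                                     # lightest neutrino mass in GeV
dm21, dm31 = 7.50e-5 * 1e-18, 2.457e-3 * 1e-18   # eV^2 converted to GeV^2
mhat = np.diag([m1, np.sqrt(m1**2 + dm21), np.sqrt(m1**2 + dm31)])

Delta1 = np.diag([0.6, 0.1, 0.1])                # d_e, d_mu, d_tau
v = 246.0                                        # GeV

# Equation (mr) with the phase matrices set to the identity
MR = -0.5 * v**2 * Delta1 @ U @ np.linalg.inv(mhat) @ U.T @ Delta1

# Physical heavy-neutrino masses (GeV), largest first: m6, m5, m4
print(np.linalg.svd(MR, compute_uv=False))
\end{verbatim}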
We now fix the remaining Yukawa couplings as \begin{equation} \label{input4} \gamma_e = \gamma_\tau = 1.7, \quad \delta_e = 0, \quad \delta_\mu = 0.00007, \quad \delta_\tau = 0.2. \end{equation} With all these input values, we obtain the branching ratios \begin{subequations} \label{theBRs} \begin{eqnarray} \mathrm{BR} \left( \mu^- \to e^- e^+ e^- \right) &=& 3.872 \times 10^{-13}, \\ \mathrm{BR} \left( \tau^- \to e^- e^+ e^- \right) &=& 1.111 \times 10^{-8}, \\ \mathrm{BR} \left( \tau^- \to e^- \mu^+ \mu^- \right) &=& 1.280 \times 10^{-8}, \\ \mathrm{BR} \left( \tau^- \to \mu^- \mu^+ \mu^- \right) &=& 1.307 \times 10^{-8}, \\ \mathrm{BR} \left( \tau^- \to \mu^- e^+ e^- \right) &=& 1.506 \times 10^{-8}. \end{eqnarray} \end{subequations} One sees that all these branching ratios are less than a factor of three away from the upper bounds of table~\ref{bounds}. \emph{We have thus demonstrated that in our model it is possible to suppress the radiative decays of the muon and tau lepton, while keeping the branching ratios of their decays into charged leptons very close to the experimental upper bounds.} Some remarks concerning the input values that we have utilized are in order: \begin{itemize} % \item All the experimental upper bounds on the branching ratios of the decays of the $\tau$-lepton in table~\ref{bounds} are quite similar. Therefore, if we want to have both $\tau^- \to \ell^- e^+ e^-$ and $\tau^- \to \ell^- \mu^+ \mu^-$ close to their experimental upper bounds, then $\gamma_e$ and $\gamma_\mu$ will have to be similar---see the explicit factors $\gamma_{\ell_3}$ and $\gamma_{\ell_2}$ in the decay rates of equations~\eqref{r1} and~\eqref{r2}, respectively. For definiteness we have chosen all three $\gamma_\ell$ to be the same. In figure~\ref{figure1} we depict the way the five branching ratios vary as functions of some $\gamma_\ell$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.9]{4figures.eps} \end{center} \caption{$\mathrm{BR} \left( \mu^- \to e^- e^+ e^- \right)$ (orange line), $\mathrm{BR} \left( \tau^- \to e^- e^+ e^- \right)$ (green line), $\mathrm{BR} \left( \tau^- \to e^- \mu^+ \mu^- \right)$ (blue line), $\mathrm{BR} \left( \tau^- \to \mu^- \mu^+ \mu^- \right)$ (lilac line), and $\mathrm{BR} \left( \tau^- \to \mu^- e^+ e^- \right)$ (red line) as functions of various Yukawa couplings. In the top-left figure, $\gamma_e$ varies in between 0 and 1.7. In the top-right figure, $\gamma_\mu$ varies. Bottom left, $\gamma_e$ and $\gamma_\tau$ change but with $\gamma_\tau$ remaining equal to $\gamma_e$. In the bottom right, $\gamma_\mu = \gamma_\tau$ varies. In all the figures, all the Yukawa couplings that do not vary take the values in equations~\eqref{gamma}, \eqref{input2}, and~\eqref{input4}. \label{figure1}} \end{figure} \item In $A_{\ell_1 \ell_2}$ in equation~\eqref{hlrpsa} the dominant terms have $v \simeq 246$\,GeV in the numerator. For large $\gamma_e = \gamma_\mu = 1.7$ and large $d_e = 0.6$ and $d_\mu = 0.1$, these terms will give a much too large contribution to $\mbox{BR} \left(\mu^- \to e^- e^+ e^-\right)$ unless there is a delicate cancellation between the terms proportional to $\delta_e$ and the terms proportional to $\delta_\mu$. This cancellation is illustrated in figure~\ref{figure2} for $\delta_\mu$ of equation~\eqref{input4}. For larger values of $\delta_\mu$ the curve is basically identical but shifted to the right. 
\begin{figure}[t] \begin{center} \includegraphics[scale=0.6]{gammadelta.eps} \end{center} \caption{$\mathrm{BR} \left( \mu^- \to e^- e^+ e^- \right)$ as a function of the Yukawa coupling $\delta_e$ while all the other Yukawa couplings remain fixed at their values in equations~\eqref{gamma}, \eqref{input2}, and~\eqref{input4}. The experimental bound $\mathrm{BR} \left( \mu^- \to e^- e^+ e^- \right) \times 10^{13} < 10$ is not depicted in this figure but must be taken into account. \label{figure2}} \end{figure} \item On the other hand, in the decays of the $\tau$-lepton the terms with $v$ in the numerator are just the relevant ones and we have needed, since we have chosen tiny $\delta_e$ and $\delta_\mu$, large parameters $\delta_\tau$, $d_e$, $d_\mu$, and $\gamma_\ell$ ($\ell=e,\mu,\tau$). \end{itemize} We may thus say that the branching ratios in equations~\eqref{theBRs} involve some finetuning. \section{Conclusions} \label{concl} It is now known, since the experimental observation of neutrino oscillations~\cite{nuosc-exp},\footnote{For reviews on the phenomenology of neutrino oscillations, see for instance ref.~\cite{nuosc-reviews}.} that there is lepton flavour-violation. However, that violation has not yet been observed in the charged-lepton sector and it is not quite certain where it is most likely to be observed first. In this context, the radiative decays $\ell_1^\pm \to \ell_2^\pm \gamma$ seem the best guess, and decays of the form $\ell_1^\pm \to \ell_2^\pm \ell_3^+ \ell_3^-$ may be an option as well. In this paper we have demonstrated, through an explicit numerical example, that there is a class of models where the radiative decays in the paragraph above may be so suppressed as to be utterly invisible, yet any of the five decays of the form $\ell_1^\pm \to \ell_2^\pm \ell_3^+ \ell_3^-$, or indeed---if one assumes some finetuning---all such five decays simultaneously, may be just around the corner. Our class of models, first considered in ref.~\cite{soft}, has three right-handed neutrino singlets and has more than one Higgs doublet. The crucial assumption is that the lepton flavours are conserved in the Yukawa couplings and broken only in the Majorana mass terms of the right-handed neutrinos; this assumption is field-theoretically consistent because those mass terms have dimension three while the Yukawa couplings have dimension four. As demonstrated in ref.~\cite{GL2002}, the effect mentioned in the previous paragraph occurs if the seesaw scale is much larger than all other scales in this class of models. In the present paper we have shown that there is a relevant simplification of the effective flavour-violating couplings of the neutral scalars, emerging at the one-loop level, when one uses the Higgs basis, \textit{i.e.}\ the basis for the Higgs doublets wherein only one of them has nonzero VEV. We have explicitly computed the branching ratios of the five decays $\ell_1^\pm \to \ell_2^\pm \ell_3^+ \ell_3^-$ in the case of a two-Higgs-doublet model assuming that the first doublet $\phi_1$ coincides with the Higgs doublet of the SM, \textit{viz.}\ it does not mix with the second doublet. Moreover, we have employed several simplifying assumptions in order to reduce the parameter space of the model. We have noted that some finetuning is needed in order that $\mathrm{BR} \left( \mu^- \to e^- e^+ e^- \right)$ does not become too large when all other four branching ratios are simultaneously close to their experimental limits. 
Flavour-diagonal Yukawa coupling matrices have no straightforward implementation in the quark sector,\footnote{For an attempt in this direction see, however, ref.~\cite{GL2003}.} so one has to admit non-diagonal Yukawa couplings there and avoid excessive flavour-changing neutral interactions by finetuning. Thus there is an asymmetry between the quark and the lepton sector. This may seem ugly, but, as pointed out in this paper, the intriguing consequences for charged-lepton decays make a consideration of such a framework worthwhile. \vspace*{5mm} \paragraph{Acknowledgements:} E.H.A.\ is supported by the FWF Austrian Science Fund under the Doctoral Program W1252-N27 ``Particles and Interactions.'' L.L.\ is supported by the FCT Portuguese Science Foundation through the projects CERN/FIS-NUC/0010/2015 and UID/FIS/00777/2013, which are partially funded by POCTI (FEDER), COMPETE, QREN, and the European Union. \newpage
\section{Introduction} The Internet of Things (IoT) describes a vision where objects become part of the Internet, and expand it in such a way that our digital and physical worlds are fused together \cite{Coetzee2011-jp}. At a technical level this vision is actualised using networked microcontrollers, sensors and actuators \cite{Cunningham}. These devices are consistently subject to downward pressure on cost and energy consumption, similarly to embedded electronics and distinct from general purpose computers (e.g. desktops, smartphones). Smart homes are understood to consist of a network of interconnected devices and sensors that seamlessly communicate with each other and can be controlled by the user in a convenient way \cite{Gram-Hanssen2018-ik}. The primary user benefits of smart homes include saving energy, enabling more comfortable, healthier living environments and ensuring home safety and security \cite{Nicholls2020-eq}. The IoT is now viewed as an important technology towards improving living environments and quality of life \cite{Wang2021-zn}. The number of publications on IoT-based smart homes was seen to grow significantly between 2015 and 2019 \cite{Choi2021-ju}, reflecting the increase in research interest in this field. Advances in Automatic Speech Recognition (ASR) technologies have given rise to voice-controlled smart homes \cite{Poongothai2018-pf}, and the market is now populated with devices to build these, e.g. Samsung's SmartThings, Amazon’s Alexa, Apple’s HomeKit and Google’s Home Assistant. The uptake of voice-controlled smart home devices has been slowed due to privacy and trust concerns that stem from these products’ reliance on cloud-based data analysis \cite{Lau2018-vl, Brush2011-rb}. Secrecy surrounds corporate practices, meaning companies can capture user conversation and preferences without them being aware of what, how or why data is recorded \cite{Nicholls2020-eq}. Consumers have expressed specific concerns regarding lack of control over their data, audio and video access, household profiling, government access, and data breaches \cite{Haney2020-fc} \cite{Marikyan2019-pz}. Recent events draw attention to smart home privacy and security concerns. It was reported that Amazon’s Ring doorbell can give data to the police without the owner’s knowledge or consent \cite{Morrison2022-in}, and fraudulent legal requests have caused some technology companies to provide sensitive information about their customers \cite{Turton2022-oo}. Additionally, a security analysis of the Samsung SmartThings framework found it was possible to steal lock pin-codes and cause fake fire alarms through exploiting design flaws and vulnerabilities of over-privileged third-party apps \cite{Fernandes2016-uc}. 75\% of people agree there is reason for concern about their data being used by other organisations without their permission, and security concerns deter almost a third of people who do not own smart devices from buying one.\footnote{https://www.internetsociety.org/wp-content/uploads/2019/05/\\CI\_IS\_Joint\_Report\-EN.pdf} The use of local data processing and authentication methods has been suggested to protect users’ rights and abide by GDPR principles \cite{Hernandez_Acosta2022-ux}. Edge computing has emerged as a paradigm in which computing and storage resources are placed in close proximity to end users on devices or sensors \cite{Satyanarayanan2017-xv}.
In contrast with cloud-based systems that suffer from power-hungry components, high latency and privacy and security concerns \cite{Pinto2020-sc}, edge computing brings advantages in terms of energy savings, bandwidth savings, privacy protection, reliability, low-cost components and low latency \cite{Wang2022-tl} \cite{Wang2020-ge}. Edge computing can ensure the security and privacy of a network \cite{Ding2022-lf}, and it is predicted that in the next decade most speech recognition will happen on the device or at the edge \cite{Hannun2021-dn}. The paper arises from an investigation into the feasibility of implementing voice control at the edge, as we work towards a smart home that improves on those commercially provided. As a first step in answering this question, we examine commercial smart home systems and present a taxonomic classification of the current technologies enabling voice-control in the smart home. The paper begins by detailing background literature in the field; we then present our taxonomy and turn to examine academic efforts in implementing and evaluating voice-controlled smart homes. In line with our aims, we consider the existing landscape of offline speech recognition tools and implementations, and also discuss further methods that could help towards a \textit{privacy-preserving} voice-controlled smart home. \begin{itemize} \item \textbf{Section 2} gives a high-level overview of typical smart home hardware architectures, and outlines the technologies that comprise voice assistant software. \item \textbf{Section 3} considers previous studies that categorise the devices and technologies involved in voice-controlled smart home setups in order to inform the design of our taxonomy. \item \textbf{Section 4} presents our taxonomy of commercially available voice-controlled smart home technologies and devices, and offers some discussion and analysis of the categories defined. \item \textbf{Section 5} considers academic-based smart home implementations with consideration for the devices and voice assistant technologies employed, as well as evaluation methods that assess the performance of these. \item In \textbf{Section 6} we identify and compare currently available speech recognition and voice assistant systems that have been or could be used for voice-control in a smart home set-up. The tools and techniques used are discussed, as well as the evaluation methods employed. \item \textbf{Section 7} envisions the design of a better voice-controlled smart home. We consider the use of cheaper, programmable devices, authentication methods, and model personalisation. \end{itemize} \section{Background} \subsection{Smart Home IoT Architectures} At a technical level, we simplify the IoT to center on the use of networked microcontroller devices that combine in architectures that typically include microprocessor-based hubs or gateways, and a cloud-based server side \cite{Cunningham}. We define smart homes to be systems that combine networked devices that seek to replace or augment the control mechanisms that have matured gradually over previous decades: TV remotes, central heating thermostats or washing machine programmers. Smart home systems can be either centralised or distributed. A centralised gateway architecture is generally seen to be optimal for supporting multiple resource-constrained devices \cite{Lin2016-th}, where the gateway serves to coordinate devices, and connect the local infrastructure to the internet \cite{Samuel2016-lr}.
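To make the gateway pattern concrete, the following Python sketch (a purely illustrative example of ours, not drawn from any of the surveyed systems) shows a gateway process that uses the paho-mqtt client library (1.x API) to coordinate devices: it subscribes to hypothetical sensor topics and publishes a command to an actuator topic when a simple rule fires. Topic names, payload formats and the rule itself are all assumptions made for illustration.
\begin{verbatim}
# Minimal smart home gateway sketch; topic layout and rule are hypothetical.
import json
import paho.mqtt.client as mqtt

BROKER = "localhost"    # MQTT broker running on the gateway itself

def on_connect(client, userdata, flags, rc):
    # Coordinate local devices: listen to every sensor topic in the home.
    client.subscribe("home/+/sensor/#")

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # Illustrative rule: motion in a room switches that room's light on.
    if msg.topic.endswith("sensor/motion") and reading.get("motion"):
        room = msg.topic.split("/")[1]
        client.publish("home/" + room + "/actuator/light",
                       json.dumps({"state": "on"}))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()   # gateway event loop
\end{verbatim}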
Upper and lower parts of a smart home system have been defined in \cite{Wang2013-cn}, with the upper part consisting of a wireless router, computers, tablets, and the lower part consisting of switch modules, data collectors, plus a smart central controller to connect the parts together. A typical gateway architecture for IoT devices is described in \cite{Kruger2014-yw}. Market leaders usually provide a central hub (or gateway) for smart homes, where compatible smart devices can be purchased to connect with the hub and establish a smart home network, e.g. in product descriptions you can often see “works with X” \cite{Serrenho2019-bk}. Figure \ref{sys_diagram} presents an architecture diagram of a gateway-based IoT smart home system that makes use of cloud processing. In the case of non-cloud systems, the processing and storage handled by the IoT backend (cloud) are usually placed at the gateway or on the IoT devices. Sensors and actuators are commonly attached to edge devices, where sensors provide continuous data streams about an environment \cite{Sudharsan2022-yf}, and actuators act on the environment in some way (e.g. a switch). Wireless technologies allow the flexibility to add and remove components of the smart home network, enabling scalability and expansion \cite{Viani2013-uj}. Communication types commonly used in smart homes include Wi-Fi, Infrared, Radio Frequency (RF), and Bluetooth \cite{Katuk2018-zj}, plus Global System for Mobile Communications (GSM), Z-Wave, ZigBee, and wired connections (e.g. Ethernet) can also be used \cite{Arriany2016-cv}. \begin{figure} \centering \includegraphics[width=1.8in]{Figure1.png} \caption{Smart Home IoT architecture} \label{sys_diagram} \end{figure} \subsection{Voice Control Technologies} Voice control entails computational transcription of the spoken word, and the interpretation of user intentions for device control or information seeking. Voice-controlled systems are described in terms of three modules in \cite{Mishakova2019-qd}: Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and Decision Making. Alternatively, \cite{Vacher2015-dv} define voice-controlled dialogue systems to be composed of five stages: Voice Activity Detection (VAD), ASR, NLU, a decision stage, and a communication stage. We define voice control in the home to entail keyword spotting (KWS) for system entry, ASR for transcribing user utterances, and NLU for interpreting the action specified in the utterance. Following \cite{Huang2015-db}, we define voice assistants (VAs) to build on these core functionalities, and make use of a dialogue manager, natural language generator (NLG) and speech synthesis to enable the two-way interaction seen in dialogue systems. We present the relationship between our defined modules comprising voice assistants in Figure \ref{vasystem}. Spoken Language Understanding (SLU) refers specifically to the task of inferring the meaning or intent of a spoken utterance \cite{Lugosch2019-bg}. ASR and NLU modules comprise a conventional SLU system. More recently, end-to-end SLU systems have gained popularity, where a single model is used to map speech input directly to user intent, without the intermediary step of producing a transcript. By jointly optimising the ASR and NLU components, cascading errors are reduced, which helps training and gives the technique an advantage \cite{Desot2022-ub}. There exists a wide range of additional speech processing tasks that voice assistants have been seen to use.
For example, smart homes that have multiple voice-enabled devices use device arbitration, speech enhancement, and speech localization models to improve the performance and user experience \cite{Ciccarelli2022-uh}. We primarily focus on ASR and NLU components in examining voice assistants in this paper, since these are integral to voice-control in the smart home. All voice assistants use ASR for recognising what the user has said, and NLU is necessary for finding meaning in natural language commands. \begin{figure} \centering \includegraphics[width=1.8in]{Figure2.png} \caption{Voice Assistant system components} \label{vasystem} \end{figure} \subsection{Smart Homes, Voice-Control, and the IoT} Figure \ref{homearch} shows how the IoT, smart home and voice control technologies operate in conjunction. Our diagram is informed by commercial smart home and IoT architectures. We introduce each component and the voice-control capabilities of each: \begin{itemize} \item Cloud-based computation is the current predominant method of processing speech data, and is capable of running complex algorithms to allow for voice assistant (VA) interaction. \item Voice User Interface (VUI) devices are equipped with microphones, and both collect and transmit user commands to the cloud for processing, as well as co-ordinate the corresponding action execution via communication with edge devices. VUI devices tend to be based on micro{\em processors} which can handle some VA functionality such as ASR. \item The edge devices are the distinctive category in the IoT, distinguishing the field from general purpose computing. They are usually low power, MCU based devices, that sense or act on the home environment. The tightly constrained nature of edge devices limits their speech processing capabilities to simple KWS algorithms at present. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=2.8in]{Figure3.png} \caption{Voice-control in the smart home} \label{homearch} \end{figure} \section{Existing Categorisations} \subsection{System and Device Categorisations} The importance of having clear terminology relating to IoT devices has been stressed \cite{Haller2010-yx}, but there exist a variety of diverse smart home devices that are somewhat difficult to categorise. We analyse previous efforts to classify and categorise aspects of smart home systems and devices, in order to inform the design of our taxonomy that is intended to cover a wide range of currently available smart home devices. \cite{Koreshoff2013-eg} consider human interaction with the IoT, with the aim of comparing commercial products with academic research efforts in the field to observe trends, gaps, differences and common areas of effort. These aims closely mirror our own, making the paper highly relevant to our work. In the study, IoT-based products are classed as either person-centric if they gather data about the human body, or home-centric if they gather data about the environment. Devices in each category are detailed in terms of their input and output sensor types and means of user interaction. Insights about current trends can be made using the device-level detail, e.g. person-centric devices most commonly contain accelerometers, therefore we will follow aspects of the work in our taxonomy design. \cite{Alam2012-zl} categorise smart home devices, with consideration for smart home services offered (comfort, healthcare, security), as well as the devices that enable these services. 
Devices are categorised as either a sensor (acquire data from the environment), a physiological device (monitor health conditions), or a multimedia device (provide an interface between the system and the user). Example devices for each category are given, and we look to a similar approach where we characterise devices by the hardware-level abilities of the device, in combination with the type of service it provides the user. \cite{Katuk2018-zj} classify devices based on the room each resides in, with attention placed on product manufacturers and smart features of devices, e.g. LED lights are manufactured by Philips and you can select preferred lighting modes. Identifying the functionality of devices and leading manufacturers of smart home devices is useful. Categorising devices based on room appears restrictive and not that informative, since homes can have a range of layouts. \cite{Kumar2019-mq} analyse the use of IoT devices in home networks, and define fourteen categories of smart home device: Computer, Network node (e.g. home router), Mobile device (e.g. iPhone or Android), Wearable (e.g. Apple Watch), Game console (e.g. XBox), Home automation (e.g. Nest Thermostat), Storage (e.g. home NAS), Surveillance (e.g., IP camera), Work appliance (e.g. printer), Home appliance (e.g. smart fridge), Generic IoT (e.g. toothbrush), Vehicle (e.g. Tesla), Media/TV (e.g. Roku), Home Voice Assistant (e.g. Alexa). The work focuses on the popularity of the smart home device categories, with discussion on device vendors. A wide range of devices are seen to be used, yet there is limited discussion on the characteristics of these. User benefits are a common way to categorise smart home systems, which does not provide much insight into the hardware and specific capabilities of devices. For example, \cite{Holroyd2010-gr} define three classes of smart home user benefits: energy saving, support for elderly or disabled, security and safety. \cite{De_Silva2012-tm} similarly identify four applications of smart home devices: healthcare, better life, security, energy efficiency. \cite{Arriany2016-cv} categorise smart home applications as convenience and entertainment, safety and security, energy savings, and healthcare. While useful to consider, such categorisation methods offer limited scope for insights. Categorising device components based on their role in the smart home network is also commonly seen. \cite{Suh2008-zt} define sensors to gather home environment data, actuators to control home devices, control components to manage the actuators, decision components to select services based on sensor data, and service components to be the software that provides the user benefits. Similarly, \cite{P_R_Filho2018-cx} categorise devices in a smart home network as sensor nodes, decider nodes, actuator nodes and sink nodes. Likewise, \cite{Sun2013-gu} define smart homes to have sensing agents (e.g. temperature sensor), action agents (e.g. door lock), administration and decision agents (e.g. smart speaker), and database agents (e.g. knowledge bases). Furthermore, \cite{Gunge2016-rc} define a home automation system to have a User Interface (UI), mode of transmission (wired or wireless), a central controller (a hardware interface) and electronic devices (compatible with transmission mode and connected to the central controller). Studies like these, which define classes of hardware devices based on their functional role and capabilities in the home network, are useful towards the design of our taxonomy.
There can, however, be crossover between categories, which is not considered in these works. Recognising that smart home device classifications can fail to accommodate devices with multiple functionalities, \cite{Lopez2011-fz} propose the ISADN specification, where devices are categorised as having Identity, Sensors, Actuators, Network connectivity, and Decision making abilities. The specification is intended to help describe the characteristic functionality of smart objects, rather than pose a constrained view of smart devices. The terminology helps towards differentiating and describing the nature of hardware devices based on their abilities. Distinct from other works, \cite{Sturgess2018-pv} reduce smart home devices to their data-collecting capabilities, so that they can assess the privacy risk of the system based on the information the user exposes. We will also examine the type of sensors found on each device in our taxonomy in order to draw attention to the data types that can be captured by smart home devices. \subsection{Voice Assistant Categorisations} Voice assistants can be distinguished as being manually activated, speech activated, or always on \cite{Hernandez_Acosta2022-ux}. For example, Alexa (used in Amazon Echo) is a cloud-based voice service that is always on, and therefore records all voice activity in the home even when it is not activated \cite{Venkatraman2021-ly}. Voice assistant systems can also be characterised by the types of speech act they can understand and respond to, e.g. speech acts can inquire about information, control devices and request services \cite{Huang2015-db}. Response styles of popular commercial virtual assistants have been categorised as minimal, keyword, or full sentence, by analysing responses to frequently used queries and commands \cite{Haas2022-ej}. ASR is a core component of voice assistants. Criteria for classifying ASR systems more specifically consider isolated vs. continuous speech, speaker dependent vs. independent models, dictation vs. spontaneous speech styles and vocabulary size \cite{Peinl2020-fv}. \cite{Arriany2016-cv} classify ASR systems as speaker-based or word-based. Word-based systems are categorised by how the speaker says the words in the sentence, e.g. discrete words or connected words (continuous speech). Speaker-based systems can either be speaker dependent or speaker independent. Speaker dependent systems use template matching and are trained on certain voices or words before use, and speaker independent systems perform feature analysis to analyse the input voice. \section{Taxonomy} \subsection{Taxonomy Overview} We present a taxonomy of the components of voice-controlled smart home systems, from both a software and a hardware perspective, by studying commercially available systems. An overview of the taxonomy design is first introduced, and then we draw attention to specific parts of the taxonomy. The high-level view of the taxonomy is informed by identifying core components in a commercial smart home system architecture. We define three device categories that make up a voice-enabled smart home system from a user interaction perspective: Voice Assistant, Voice User Interface, Edge Device. On the software side, we consider the voice assistants used in voice-controlled smart homes to process and respond to spoken commands. Specifically, we consider Amazon Alexa, Google Assistant and Apple Siri since they are widely used and popular commercial solutions to smart home voice-control.
On the hardware side, we consider VUIs that receive voice input and are able to communicate with and control edge devices. Edge devices are low power devices, typical of the IoT, and designed specifically to sense and/or act on aspects of smart home environments. The taxonomy overview is depicted in Figure \ref{fig:taxonomy}, where we show the relationship between categories of smart home technology. Hubs and wearables are defined as types of VUI, and sensors and actuators are types of edge device. Crossover between hardware device categories can occur, so we distinguish the categories as follows. Hubs are generally home-centric gateway devices, requiring more power than wearable devices that reside on the person. VUI devices usually contain CPUs and are more power-hungry than typically low-power MCU-containing edge devices\footnote{The core distinction is between microcontroller (MCU) and microprocessor (which we're abbreviating as CPU here). Strictly speaking, the latter includes more than just a CPU, but it is relatively common to refer to microprocessors in this way \cite{Schlett}.}. In some cases, a VUI could also be considered an edge device (e.g. a smart speaker). The differentiation between a VUI and an edge device arises from the general-purpose capabilities a voice assistant provides a VUI device (e.g. setting timers, making calls, controlling other devices). Edge devices are usually special purpose, but can have both sensing and actuation abilities; we therefore generally characterise these devices by classifying each based on its predominant function. Outside of the scope of the taxonomy lie general purpose computing and networking devices that are not exclusive to smart homes, such as routers, mobile devices, games consoles and computers. Third-party services that provide additional functionality to commercial smart homes are also not considered. We move on to take a specific look at the software and hardware components we have outlined in our taxonomy. Each of the three top-level categories is discussed individually across the following three subsections. \begin{figure}[!t] \centering \includegraphics[width=2.3in]{Figure4.png} \caption{Taxonomy} \label{fig:taxonomy} \end{figure} \subsection{Voice Assistants} Three classes of voice assistant software are defined, as shown in the taxonomy overview (Figure \ref{fig:taxonomy}): Google Assistant, Amazon Alexa and Apple Siri. The voice assistants from these three major corporations are selected for inclusion given their significance, popularity and widespread use in the realm of commercial voice-enabled smart home systems. We seek to identify the similarities and differences between each by examining their capabilities, the available keywords for waking the device ({\em wakewords}), the accepted command types and the response behaviours. Our findings are detailed in Table \ref{tab:VA}, and reflect the similarities between commercially available voice assistants. While extensive, the functionality is consistent across all three, with minimal variation in the accepted commands and response styles. Third-party apps are available for each to extend capabilities, and as expected, none are open-source and all use cloud-based processing. Combined, these characteristics highlight the lack of transparency, flexibility and diversity in commercially available voice assistants. From this perspective, we can conclude that there is space for an alternative voice assistant that can meet users' needs without compromising their privacy.
Our analysis of voice assistant functionalities supports research into how these could be replicated using on-device processing methods. In Section 6, we discuss currently available methods to achieve this. \begin{table*}[t]\footnotesize \centering \begin{tabular}{|p{1.3cm}|p{9cm}|p{1.5cm}|p{1.5cm}|p{2cm}|} \hline {\bf Name} & {\bf Capabilities} & {\bf Wakeword} & {\bf Command input type} & {\bf Response behaviour} \\ [0.5ex] \hline \hline Google Assistant & Control smart home devices; play and control media; search and retrieve information; manage alarms, timers, lists, calendars, tasks; play games; purchase items; make calls and announcements & Hey Google, Ok Google & Full sentence & Full sentence ('brief' mode available) \\ \hline Amazon Alexa & Control smart home devices; play and control media; search and retrieve information; manage alarms, timers, lists, calendars, notes; play games; purchase items; make calls and announcements & Alexa, Amazon, Echo, Computer & Full sentence & Full sentence ('brief' mode available) \\ \hline Apple Siri & Control smart home devices; play and control media; search and retrieve information; make calls, texts, announcements, payments; manage alarms, timers, lists, calendars, notes, reminders & Hey Siri & Full sentence & Full sentence \\ [1ex] \hline \end{tabular} \caption{Voice Assistant characteristics.} \label{tab:VA} \end{table*} \subsection{Voice User Interface Devices} Voice control interfaces are divided into two categories: hubs and wearable devices. Depicted in Figure \ref{fig:vuis}, hubs are further divided into smart speaker devices and smart display devices, and wearables are further divided into smartwatches and smart glasses. We identify products that fall into each category, and that are either manufactured by, or compatible with the voice assistants of Google, Amazon and Apple to allow for comparison between these commercially-offered smart home systems. Products identified are detailed in terms of their compatibility, connectivity, processor type, input types and output mechanisms in Table \ref{tab:VUI}. From here, we can examine and compare commercial VUI products. As with the voice assistants, findings reflect the lack of variation between market-leading VUI devices. In terms of connectivity, all use Wi-Fi and/or Bluetooth, with some products using Thread, and smart watches using NFC for contactless payments. With regard to interaction mechanisms, all products are equipped with a microphone and speaker, enabling voice interaction, and several products additionally have touch-screen displays. A variety of sensors are seen to be used to collect various types of information and expand the device capabilities, particularly on smart watches that often monitor factors of the users health. Most products are CPU-based, with exception to some small wearable devices like smart earbuds. In examining market-leading smart home UI devices on a functional and hardware level, we have somewhat characterised the current smart home VUI market. The taxonomy supports thinking towards an alternative set of programmable, cost effective devices that could replicate the functionality of commercial VUIs in the design of a private smart home. Additionally, the findings can be used to compare the VUI devices commercially available for smart homes, with those used in academic implementations in Section 5. 
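As a purely schematic illustration of how the functionality in Table \ref{tab:VA} decomposes into the modules of Section 2 on a programmable device, the sketch below outlines a KWS $\rightarrow$ ASR $\rightarrow$ NLU $\rightarrow$ action loop. The function names are ours and the bodies are placeholders rather than real models; the sketch is intended only to show where on-device components would slot in.
\begin{verbatim}
# Schematic voice-control loop (KWS -> ASR -> NLU -> action); placeholders only.

def keyword_spotter(frame):
    """Return True when the wakeword is detected in an audio frame."""
    raise NotImplementedError   # e.g. a small on-device KWS model

def asr(utterance_audio):
    """Transcribe a user utterance to text."""
    raise NotImplementedError   # e.g. an offline ASR engine

def nlu(transcript):
    """Map a transcript to an intent, e.g. {'intent': 'light_on', 'room': 'kitchen'}."""
    raise NotImplementedError

def dispatch(intent):
    """Forward the resolved intent to the relevant edge device, e.g. via the gateway."""
    raise NotImplementedError

def assistant_loop(frames, record_utterance):
    for frame in frames:
        if keyword_spotter(frame):              # system entry
            transcript = asr(record_utterance())
            dispatch(nlu(transcript))           # act on the home
\end{verbatim}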
\begin{figure}[t] \centering \includegraphics[width=3.3in]{Figure5.png} \caption{VUI devices} \label{fig:vuis} \end{figure} \begin{table*}[t]\footnotesize \centering \begin{tabular}{|p{2cm}|p{2cm}|p{2cm}|p{1.2cm}|p{6.8cm}|p{1.6cm}|} \hline {\bf Product name} & {\bf Compatibility} & {\bf Connectivity} & {\bf CPU/MCU} & {\bf Input types (incl. sensors)} & {\bf Output types} \\ \hline \hline \multicolumn{6}{|c|}{\bf Smart Speaker: Hubs: VUI: hardware} \\ \hline Google Nest Mini & Google Assistant & Wi-Fi, Bluetooth & CPU & Microphone, capacitive touch, ultrasound sensing & Speaker \\ \hline Apple HomePod Mini & Apple Siri & Wi-Fi, Bluetooth, Thread, Ultra Wideband chip & CPU & Microphone, touch control & Speaker, LED lights \\ \hline Amazon Echo Dot (5th Gen) & Amazon Alexa & Wi-Fi, Bluetooth & CPU & Microphone, buttons & Speaker, LED lights \\ \hline \multicolumn{6}{|c|}{\bf Smart Display: Hubs: VUI: hardware} \\ \hline Google Nest Hub (2nd Gen) & Google Assistant & Wi-Fi, Bluetooth & CPU & Microphone, touch screen, ultrasound sensing, ambient light, motion, temperature sensors & Speaker, screen display \\ \hline Amazon Echo Show 8 & Amazon Alexa & Wi-Fi, Bluetooth & CPU & Microphone, camera, touch screen, buttons & Speaker, screen display \\ \hline \multicolumn{6}{|c|}{\bf Smart Watch: Wearables: VUI: hardware} \\ \hline Samsung Galaxy Watch 4 & Google Assistant & Wi-Fi, Bluetooth, NFC, GPS & CPU & Microphone, barometer, accelerometer, gyroscope, optical heart rate sensor, electrical heart sensor, bioelectrical impedance analysis sensor, light sensor, geomagnetic sensor, hall sensor & Speaker, screen display \\ \hline Apple Watch S3 & Apple Siri & Wi-Fi, Bluetooth, NFC, GPS & CPU & Microphone, force touch, barometric altimeter, optical heart rate, accelerometer, gyroscope, ambient light sensors & Speaker, screen display \\ \hline \multicolumn{6}{|c|}{\bf Smart Buds: Wearables: VUI: hardware} \\ \hline Google Pixel Buds A Series & Google Assistant & Bluetooth & MCU & Microphone, capacitive touch, motion-detecting accelerometer, IR proximity sensor & Speaker \\ \hline Apple AirPods (3rd Gen) & Apple Siri & Bluetooth & MCU & Microphone, motion and speech detecting accelerometers, skin-detecting and force sensors & Speaker \\ \hline Amazon Echo Buds (2nd Gen) & Amazon Alexa & Bluetooth & MCU & Microphone, accelerometer, capacitive touch, proximity sensor & Speaker \\ \hline \end{tabular} \caption{Voice User Interface device characteristics.} \label{tab:VUI} \end{table*} \subsection{Edge Devices} The two types of edge devices we have identified (sensors and actuators) are discussed in this section. We further classify sensing and actuator agents according to the type of service they enable in the smart home, which can also be seen as their perceived role in the smart home from a user perspective. For each high level category, we give examples of product types that fit within each category. The type hierarchy of sensor devices can be seen in Figure \ref{fig:sensors}, with the type hierarchy of actuator devices given in Figure \ref{fig:actuators}. 
Sensor devices available gather information for one of the following purposes: \begin{itemize} \item security and safety, e.g., smoke alarm \item surveillance, e.g., indoor camera \item environment monitoring, e.g., indoor air quality monitor \end{itemize} Actuator devices available serve to control one of the following: \begin{itemize} \item entertainment, e.g., TV streaming \item access, e.g., door lock \item lighting and plugs, e.g., smart bulb \item climate, e.g., air conditioning \item household appliances, e.g., coffee machine \end{itemize} For each product type, we have identified commercially available devices that are compatible with the VUIs and voice assistants described. Each device is detailed in terms of its manufacturer, connectivity, compatibility, processor type (MCU/CPU), input mechanisms and output mechanisms/actions controlled. See Table \ref{tab:sensors} for a detailed list of products for each category of sensor devices. See Table \ref{tab:actuators} for a detailed list of products for each category of actuator devices. The taxonomy and characterisation of edge devices help us to understand the complexity and nature of the hardware used in smart home systems. For example, most edge devices contain MCUs, most use either Wi-Fi or Bluetooth Low Energy (BLE) for connectivity, and many interact with other home devices and phones for sending alerts, monitoring and/or control. Ultimately, there now exists a wide range of devices that are commercially available for smart home implementations, serving a range of purposes and enabling various types of user benefit. We move on to examine research-based systems, in order to consider the types of device used in academic smart home implementations, where we can compare the device types used in commercial and academic smart home realms. \begin{figure*}[!t] \centering \includegraphics[width=6.5in]{Figure6.png} \caption{Sensors} \label{fig:sensors} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=6.5in]{Figure7.png} \caption{Actuators} \label{fig:actuators} \end{figure*} \begin{table*}[!t]\footnotesize \centering \begin{tabular}{|p{2.5cm}|p{2cm}|p{1.4cm}|p{1.2cm}|p{5.5cm}|p{2.8cm}|} \hline {\bf Product name} & {\bf Compatibility} & {\bf Connectivity} & {\bf CPU/MCU} & {\bf Input types (incl.
sensors)} & {\bf Output types} \\ \hline \hline \multicolumn{6}{|c|}{\bf Door \& Window Monitoring: Security \& Safety: Sensing Device: Edge Device: hardware} \\ \hline Eve Door \& Window & Apple HomeKit & BLE, Thread & MCU & Contact sensor & Notifies phone \\ \hline Ring Alarm Contact Sensor & Amazon Alexa & Z-Wave & MCU & Contact sensor & Siren, notifies devices, phone \\ \hline \multicolumn{6}{|c|}{\bf Smoke Alarm: Security \& Safety: Sensing Device: Edge Device: hardware} \\ \hline Google Nest Protect (2nd Gen) & Google Assistant & Wi-Fi, BLE & MCU & Microphone, buttons, ambient light, humidity, temperature, split spectrum smoke, occupancy sensors, microphone, accelerometer & Siren, light ring, notifies phone \\ \hline Netatmo Smart Smoke Alarm & Apple HomeKit & Wi-Fi, BLE & MCU & Photoelectric (optical) smoke sensor & Siren, notifies phone \\ \hline \multicolumn{6}{|c|}{\bf Leak Detector: Security \& Safety: Sensing Device: Edge Device: hardware} \\ \hline Eve Water Guard & Apple HomeKit & Bluetooth, Thread & MCU & Water sensor & Siren, light, notifies phone, devices \\ \hline D-Link Water Leak Sensor & Google Assistant & Wi-Fi & MCU & Water sensor & Siren, notifies phone, devices \\ \hline \multicolumn{6}{|c|}{\bf Smart Doorbell: Surveillance: Sensing Device: Edge Device: hardware} \\ \hline Google Nest Doorbell (battery) & Google Assistant, Amazon Alexa & Wi-Fi, BLE & CPU & Camera, microphone, PIR proximity and motion sensor, magnetometer & Speaker, light ring, notifies devices \\ \hline Ring Video Doorbell & Amazon Alexa & Wi-Fi & CPU & Camera, microphone, motion sensor & Speaker, notifies devices \\ \hline Netatmo Smart Video Doorbell & Google Assistant, Amazon Alexa, Apple Siri & Wi-Fi & \textit{CPU} & Camera, microphone & Speaker, notifies phone \\ \hline \multicolumn{6}{|c|}{\bf Indoor Camera: Surveillance: Sensing Device: Edge Device: hardware} \\ \hline Google Nest Cam (battery) & Google Assistant, Amazon Alexa & Wi-Fi, BLE & CPU & Camera, microphone, motion sensor & Speaker, LED light, notifies phone \\ \hline Ring Indoor Cam & Amazon Alexa & Wi-Fi & MCU & Camera, microphone, motion sensor & Speaker, notifies phone \\ \hline Netatmo Smart Indoor Cam & Google Home, Amazon Alexa, Apple HomeKit & Wi-Fi & \textit{CPU} & Camera & Notifies devices \\ \hline \multicolumn{6}{|c|}{\bf Indoor Air Quality Monitor: Environment Monitoring: Sensing Device: Edge Device: hardware} \\ \hline Amazon Air Quality Monitor & Amazon Alexa & Wi-Fi, BLE & MCU & Temperature, CO, humidity, volatile organic compounds (VOCs), particulate matter sensors & Notifies hub, devices \\ \hline Eve Room Indoor Air Quality Monitor & Apple Siri & BLE, Thread & MCU & Temperature, humidity, volatile organic compounds sensors & E-ink display, monitor via phone, hub device \\ \hline \multicolumn{6}{|c|}{\bf Outdoor Weather Monitor: Environment Monitoring: Sensing Device: Edge Device: hardware} \\ \hline Eve Weather & Apple Siri, Eve app & BLE, Thread & MCU & Temperature, humidity, barometric pressure sensors & E-ink display, monitor via phone, hub device \\ \hline Netatmo Weather Station & Apple HomeKit, Amazon Alexa & Wi-Fi & MCU & Temperature, humidity, barometric pressure, sound, CO2 sensors, rain gauge, anemometer & Monitor via connected devices \\ \hline \end{tabular} \caption{Sensor-based product characteristics. 
(Note: entries in \textit{italics} are assumed)} \label{tab:sensors} \end{table*} \begin{table*}[!t]\footnotesize \centering \begin{tabular}{|p{3cm}|p{2cm}|p{2.3cm}|p{1.2cm}|p{4.4cm}|p{2.4cm}|} \hline {\bf Product name} & {\bf Compatibility} & {\bf Connectivity} & {\bf CPU/MCU} & {\bf Input types (incl. sensors)} & {\bf Output types} \\ \hline \hline \multicolumn{6}{|c|}{\bf Entertainment: TV Streaming: Actuator Device: Edge Device: hardware} \\ \hline Google Chromecast HD & Google Assistant & Wi-Fi, HDMI & CPU & via connected devices, Google voice remote & Streams media to connected TV \\ \hline Amazon Fire TV Cube & Amazon Alexa & Wi-Fi, Bluetooth, HDMI & CPU & Microphone, buttons, via connected devices, Alexa voice remote & Speaker, LED strip, streams media to connected TV \\ \hline Apple TV 4K & Apple Siri & Wi-Fi, Bluetooth, HDMI & CPU & via connected devices, Siri remote & Streams media to connected TV \\ \hline \multicolumn{6}{|c|}{\bf Smart Lock: Access: Actuator Device: Edge Device: hardware} \\ \hline Yale Smart Door Lock & Google Assistant, Amazon Alexa, Apple HomeKit & Bluetooth (Yale Wi-Fi bridge required for remote control) & \textit{MCU} & via assistants, Yale access app & Unlocks door \\ \hline \multicolumn{6}{|c|}{\bf Smart Bulb: Lighting \& Plugs: Actuator Device: Edge Device: hardware} \\ \hline Philips Hue Smart Bulb & Google Assistant, Amazon Alexa, Apple HomeKit & Bluetooth, Zigbee & MCU & via app, assistant & LED light \\ \hline Nanoleaf Essentials Bulb & Google Assistant, Apple HomeKit & BLE, Thread & MCU & via app, assistant devices & LED light \\ \hline Hey! Smart Bulb & Google Assistant, Amazon Alexa & Wi-Fi & MCU & via app, assistant devices & LED light \\ \hline \multicolumn{6}{|c|}{\bf Smart Plug: Lighting \& Plugs: Actuator Device: Edge Device: hardware} \\ \hline Philips Hue Smart Plug & Google Assistant, Amazon Alexa, Apple HomeKit & Bluetooth & MCU & via assistant devices, mobile app & Turn plug on/off \\ \hline Amazon Smart Plug & Amazon Alexa & Wi-Fi & MCU & via assistant devices, mobile app & Turn plug on/off \\ \hline Eve Energy Smart Plug & Apple Siri & Bluetooth, Thread & MCU & via assistant devices, mobile app & Turn plug on/off \\ \hline Wemo Mini Smart Plug & Amazon Alexa, Google Assistant, Apple Siri & Wi-Fi & MCU & via assistant, mobile app & Turn plug on/off \\ \hline \multicolumn{6}{|c|}{\bf Smart Blinds \& Curtains: Lighting \& Plugs: Actuator Device: Edge Device: hardware} \\ \hline WEONSEE Smart Blinds Motor & Google Assistant, Amazon Alexa & Wi-Fi & \textit{MCU} & via assistant, remote, mobile app & Raise/lower blinds \\ \hline \multicolumn{6}{|c|}{\bf Smart Humidifiers \& Purifiers: Climate Control: Actuator Device: Edge Device: hardware} \\ \hline VOCOlinc MistFlow Smart Humidifier & Google Assistant, Amazon Alexa, Apple HomeKit & Wi-Fi & \textit{MCU} & touch control, via app, assistant, water level sensor & LED light, mist \\ \hline VOCOlinc PureFlow Smart Air Purifier & Google Assistant, Amazon Alexa, Apple HomeKit & Wi-Fi & \textit{MCU} & touch control, via app, assistant, temperature, humidity, particulate matter sensors & LED screen, LED light, filters air \\ \hline \multicolumn{6}{|c|}{\bf Thermostat: Climate Control: Actuator Device: Edge Device: hardware} \\ \hline Google Nest Thermostat E & Google Assistant & Wi-Fi, BLE & MCU & via mobile app, assistant, temperature, humidity, proximity, occupancy, ambient light sensors & LCD screen, adjust temperature \\ \hline Netatmo Smart Thermostat & Google Assistant, Apple HomeKit, Amazon 
Alexa & Wi-Fi, Radio long-range & MCU & via mobile app, assistant, temperature sensor & E-paper display, temperature adjustment \\ \hline Tado Smart Thermostat & Google Assistant, Amazon Alexa, Apple HomeKit & Wi-Fi, 6LoWPAN & MCU & via mobile app, assistant, capacitive touch buttons, temperature, humidity sensors & LED screen, adjust temperature \\ \hline \multicolumn{6}{|c|}{\bf Air Conditioning: Climate Control: Actuator Device: Edge Device: hardware} \\ \hline Tado Smart AC Control V3+ & Google Assistant, Amazon Alexa, Apple HomeKit & Wi-Fi, Infrared & MCU & via mobile app, assistant, LED touch surface, temperature, humidity sensors & Controls AC unit \\ \hline \multicolumn{6}{|c|}{\bf Robot Vacuum: Household Appliances: Actuator Device: Edge Device: hardware} \\ \hline Samsung JetBot robot vacuum & Google Assistant, Amazon Alexa & Wi-Fi & CPU & via app, assistant, LiDAR sensor, anti-cliff sensor & Clean floors with brush \\ \hline iRobot Roomba i7 & Google Assistant, Amazon Alexa & Wi-Fi & CPU & via mobile app, assistant, dirt detect sensor, cliff sensor, camera & Clean floors with brush \\ \hline \multicolumn{6}{|c|}{\bf Smart Coffee Machine: Household Appliances: Actuator Device: Edge Device: hardware} \\ \hline Smarter Smart Coffee Maker (2nd Gen) & Google Assistant, Amazon Alexa, Apple Siri & Wi-Fi & \textit{MCU} & via mobile app, assistant & Makes coffee \\ \hline \end{tabular} \caption{Actuator-based product characteristics. (Note: entries in \textit{italics} are assumed)} \label{tab:actuators} \end{table*} \section{Smart Homes: Implementations \& their Evaluation} This section enumerates gateway-based voice-controlled smart home implementations seen in academic literature. Focus is placed on both online and offline architectures, where we focus on the device types, control mechanisms, and evaluation methods employed. The work contributes towards providing a review of the current academic efforts in implementing and evaluating voice-control in smart home systems, so that we can assess current progress and bridge the gap between academic and commercial efforts in this realm. \cite{Kamdar2017-uq} discuss methods used to compare and evaluate home automation systems with voice control, considering factors of flexibility, robustness, security, cost and response time. The work highlights challenges such as speech recognition in noisy environments, training voice recognition modules, the number of commands voice recognition modules can store, and the response time of the devices. We look to some more recent voice-controlled home automation systems, and consider the evaluation methods employed, as well as the challenges that arise and limitations of the systems. \subsection {Online Implementations} Voice-controlled smart home systems discussed in the literature often rely on cloud processing, and we highlight such works here. \cite{Kodali2017-uy} use an ESP8266 MCU in their smart home implementation, with the objective of implementing a cost effective, robust and scalable system. Domestic appliances, including a light, fan, bulb, and charger, are connected via relay channels to the MCU. An Android app allows for the remote control of the appliances via touch screen buttons or voice control. The system accepts simple voice commands such as “turn on” and “turn off” followed by the appliance name, and commands are recognised using Google speech to text. Experimentation results simply show the system to work as expected, using both voice and button command inputs. 
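The control logic in such systems is typically very simple. The sketch below is our own illustrative reconstruction (not the authors' code) of how a transcript such as ``turn on fan'' could be mapped to one of the relay channels; the appliance names follow the list above, while the channel numbers and function names are hypothetical.
\begin{verbatim}
# Illustrative mapping from a transcribed command to a relay channel.
# Appliance names follow the setup described above; channel numbers are made up.
RELAY_CHANNELS = {"light": 1, "fan": 2, "bulb": 3, "charger": 4}

def parse_command(transcript):
    """Parse commands of the form 'turn on <appliance>' / 'turn off <appliance>'."""
    words = transcript.lower().split()
    if len(words) >= 3 and words[0] == "turn" and words[1] in ("on", "off"):
        appliance = " ".join(words[2:])
        if appliance in RELAY_CHANNELS:
            return RELAY_CHANNELS[appliance], words[1] == "on"
    return None

def handle(transcript):
    parsed = parse_command(transcript)
    if parsed is None:
        print("unrecognised command:", transcript)
        return
    channel, state = parsed
    # On the MCU this would drive the corresponding relay pin;
    # here we only print the intended action.
    print("relay", channel, "->", "ON" if state else "OFF")

handle("turn on fan")      # relay 2 -> ON
handle("turn off light")   # relay 1 -> OFF
\end{verbatim}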
\cite{Rani2017-gs} detail the implementation of an IoT-based voice-controlled home automation system similar to the previous one. A mobile device is the central controller and cloud-based NLP performs command interpretation. Arduino boards with a low-power MCU and Wi-Fi connectivity are interfaced with appliances and programmed to respond to commands interpreted by the mobile device. The proof-of-concept implementation includes a fan, light, coffee machine and door alarms; however, details of the evaluation and the NLP software used are omitted. \cite{Poongothai2018-pf} employ an open-source Google Assistant to provide remote voice-control of devices in an IoT lab, where an Android smartphone application serves as the VUI and allows for both voice and text input. Appliances include lights, fans, a projector and an air conditioner, which are connected to Wi-Fi via NodeMCU devices, allowing them to be remotely controlled and monitored. Testing takes the form of checking if the sensor readings are correctly displayed, checking voice commands work (e.g. lights come on when requested), and measuring the energy consumption of devices using a current sensor. While easily extendable, the setup is very simple and the evaluation methods are limited. \cite{Putthapipat2018-qb} implement a voice-controlled smart home system, where a CPU-based Raspberry Pi is used as the central controller, with a speaker and microphone attached to allow for VUI capabilities. Voice recognition is performed using the Google Cloud API, Wit.ai identifies the user's intent, and the response is output via Google Translate for text-to-speech conversion. The system requires a stable internet connection to work, the energy consumption is high, the speech recognition performance depends on the quality of the microphone and there is no built-in screen. The advantages of the design lie in its flexibility and scalability, and the potential for more hardware devices to be added. \cite{Kumar2021-ow} use a mobile device interfaced with Google's DialogFlow API to allow voice-control of appliances in a smart home system. A Raspberry Pi serves as the central server, and ESP8266 and Arduino Mega 2560 MCUs are used to allow remote control of home appliances. Additional IR sensors are used for object detection to turn lights on/off and PIR sensors are used to turn fans on/off. The experimental setup is proposed, but no evaluation methods are employed. \cite{Sudharsan2022-yf} propose a prototype Alexa smart speaker composed of a Raspberry Pi, ReSpeaker v2, Raspberry Pi camera v2 and a regular speaker. OpenCV provides a face recognition algorithm which is used to enable user authentication, and the Alexa Voice Service SDK is used for voice interaction. The Snowboy wake-word engine is employed to prevent accidental activation of the smart speaker and improve privacy, where the accuracy of this function is measured by the false alarms per hour versus the miss detection rate. The evaluation methods are not specifically described; however, observations are made, e.g. there is a 0.5 second delay after wake word detection, after which audio is sent to the Alexa cloud service. \subsection {Offline Implementations} We turn now to look at literature on offline implementations of voice-controlled smart home systems. Offline is used in the sense of not using third-party, typically cloud-based, services, rather than meaning the system is disconnected from any network. \cite{Arriany2016-cv} use Windows 7 speech recognition software for voice control of a smart home fan and light.
A laptop with Windows 7 OS serves as the UI, a PC acts as the home server, and an Arduino UNO MCU is used to process commands sent from the home server and route each to the intended device. For evaluation, the word recognition accuracy is measured in noisy and quiet environments, and when using microphones of varying quality. Execution time and general system functionality are additionally tested. The authors suggest the use of more specific means of testing in future work, where data rate and error rate could be measured. \cite{Ali2017-am} implement voice-control of electrical devices in the home and office, using the EasyVR 2.0 \footnote{https://www.sparkfun.com/products/retired/12656} voice recognition module to handle command inputs. An MCU connects to appliances via relays, and RF communication is used to transmit commands from the voice recognition module to the MCU. Experimentation is performed by repeating each command 30 times. The success rate of command recognition for different age groups and genders is measured, as well as when different types of noise or physical obstacles are present. Noise and obstacles were seen to most negatively impact recognition performance. Again, the EasyVR Shield 2.0 is used for voice-control of appliances connected via relays to an Arduino MCU in \cite{Elsokah2020-ur}. The work implements a voice-controlled smart room, where the lighting, radio, television, music player and air-conditioning can be controlled using voice commands. Testing took the form of repeating spoken commands in different noise and weather conditions, with participants of different genders and ages. The average success rate of command recognition was seen to be 96\%. The authors also considered the price and quality of system components, the response speed, and the overall system quality. Future work includes using a Raspberry Pi to provide the system with greater AI capabilities, and using mobile phone input to increase the user-friendliness. \cite{Ehikhamenle2017-er} implement a wireless voice recognition system to detect a finite set of commands, using Arduino Uno microcontrollers, a relay circuit, an Arduino v3 voice recognition module, plus an RF transmitter and receiver. When a command is recognised, the MCU coordinates the execution of the command via the relay. The voice input device consists of a microphone, ultrasonic sensor, and the voice recognition module. The v3 voice recognition module was trained on a single user, and speech recognition performance was tested by placing the user and device in a quiet room and a noisier room, finding music to interfere with recognition accuracy. Due to the v3 module being trained on a single person, the command recognition system is speaker-dependent, and therefore more errors occurred when tested on a different user. \cite{Munir2019-md} detail the development of a speaker-dependent offline smart home system, where speech recognition is also performed on an Arduino v3 module. ESP8266 devices are used to enable wireless communication, and the OpenCV library is used on a Raspberry Pi for face recognition to allow for automatic access control (door opening). The v3 module performs speech-to-text and text-to-speech functions. The accuracies of the speech and face recognition models are reported to be 90\% and 96\% respectively with low latency, but the specific evaluation methods are not described. The v3 module limits the system since it can only store 80 commands and is trained on a single person.
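Several of the offline systems above are evaluated by repeating each spoken command a fixed number of times under varying conditions (speaker, noise, obstacles) and reporting success rates. As a minimal, purely illustrative Python sketch (the commands, conditions and counts below are hypothetical placeholders, not taken from the cited studies), such repeated-trial results can be tabulated together with a simple Wilson confidence interval:
\begin{verbatim}
import math
from collections import defaultdict

# Hypothetical trial log: (command, condition, recognised correctly?)
trials = [
    ("turn on light", "quiet", True), ("turn on light", "quiet", True),
    ("turn on light", "noisy", False), ("turn off fan", "noisy", True),
    # ... one entry per repetition (e.g. 30 repetitions per command)
]

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a success proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return (centre - half, centre + half)

counts = defaultdict(lambda: [0, 0])  # (command, condition) -> [successes, total]
for cmd, cond, ok in trials:
    counts[(cmd, cond)][0] += int(ok)
    counts[(cmd, cond)][1] += 1

for (cmd, cond), (succ, total) in sorted(counts.items()):
    lo, hi = wilson_interval(succ, total)
    print(f"{cmd} [{cond}]: {succ}/{total} recognised "
          f"(95% CI {lo:.0%}-{hi:.0%})")
\end{verbatim}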
Similarly, \cite{Shehab2020-hx} implement a voice- and gesture-controlled home automation system that uses an Arduino v3 voice recognition module to take speech as input. An Arduino Mega 2560 MCU acts as the central controller, connecting to a light, alarm, fan, and air conditioning via a relay. A display shows the status of devices, and an ultrasonic sensor allows for the detection of four hand gesture patterns. Testing saw the system used by different types of patients, and the success rates of voice and gesture interaction were measured. Alternatively, \cite{Bhagath2021-gk} use the open-source PocketSphinx library to develop an offline embedded speech recognition system for Android-based mobile devices that can be used in a home automation system. A Raspberry Pi 3 serves as the smart home controller in the prototype implementation, with LED lights connected. When tested with live speech the system saw 80\% accuracy, with a delay time of 1 second for command recognition and execution. \cite{Bai2022-hl} detail the design of a speech recognition algorithm for a voice-controlled smart home. The algorithm uses feature extraction to extract speech features, and a template library for pattern matching, where the template with the highest similarity to the speech features is returned as the recognition result. For evaluation, voice files containing recordings of commands were played to the system, e.g. “Help” was repeated 20 times and was recognised with a 95\% accuracy rate, while “Fire extinguishing” was repeated 20 times and was recognised with an 85\% accuracy rate. The average speech recognition accuracy reported is 83.75\%, where the authors identify microphone quality and the clarity of pronunciation to negatively impact performance. \section{Voice Control: Methods, Tools, \& Evaluations} There is much interest within the research community in the development of an offline speech recognition system that could help towards the creation of a cloudless voice assistant \cite{Murshed2021-ng}. In this section, we consider available libraries and tools for implementing voice assistant, ASR and KWS functionalities on resource-constrained devices. The methods and metrics used to compare and evaluate these components are also of interest, as we look towards potential tools for implementing an on-device voice assistant, as well as an evaluation framework for measuring its performance. \subsection{Voice Assistants} \cite{Eric2017-il} compare voice assistants that could be integrated with an embedded home automation system, including Jasper, Google Cloud Speech API, Alexa Voice Service, and Bing Speech API. The characteristics of each are compared, considering programming languages, supported human languages, supported architectures and whether each is open source and works offline or online, e.g. Jasper is open source, modular in design, programmed in Python, and supports an offline voice-enabled gateway architecture (when using offline STT and TTS engines). The study provides some insight into available offline voice assistant systems. Rhasspy \footnote{https://rhasspy.readthedocs.io/en/latest/} provides an open source, offline set of voice assistant services, and has seen implementation in a private Raspberry Pi-based smart home voice assistant \cite{Dallmer-Zerbe2021-lp}. The implemented system saw low response times; however, transcription accuracy was sub-optimal and would need to be improved, perhaps through employing a more sophisticated language model and a microphone that filters noise.
In a comparison of voice assistants, Rhasspy was seen to perform best in terms of trustworthiness, security of transmission, and on-device intelligence, where Alexa, Google Assistant and Mycroft were also considered \cite{Jesse2021-ty}. Voice assistants can generally be assessed by checking if they perform the desired functionality. For example, scenarios are given to evaluate the functionality of the smart home dialog system in \cite{Huang2015-db}, and \cite{Chen2018-ei} evaluate their voice assistant by measuring the intent recognition accuracy and the entity recognition accuracy for the functions available (e.g. dialing a contact, checking the weather in a location). \subsection{Automatic Speech Recognition} Open-source speech recognition models for edge devices are compared in \cite{Peinl2020-fv}, providing useful methods for comparing such systems. Each ASR model is run on Raspberry Pi 3 and Nvidia Jetson devices, and metrics of real-time factor (RTF) and accuracy (WER) are measured using the LibriSpeech dataset \cite{Panayotov2015-uu}. Amongst the models compared, which included Mozilla DeepSpeech \footnote{https://github.com/mozilla/DeepSpeech} and Facebook wav2letter \footnote{https://github.com/facebookresearch/wav2letter}, PyTorch Kaldi \footnote{https://github.com/mravanelli/pytorch-kaldi} was found to be the most effective model. Kaldi is widely used, and \cite{Pinto2020-sc} implement a Kaldi ASR system on a mobile device containing an ARM CPU and a low-power GPU. It is also notable that a custom version of a Kaldi training recipe was used in the design of the (now unavailable) Snips Voice Platform for SLU, where LibriSpeech was again used for model evaluation, and performance metrics included WER, speed and memory usage \cite{Coucke2018-ay}. Using similar methods to those above, the transformer-based speech recognition systems Wav2Vec 2.0 and Speech2Text are compared by running each on Raspberry Pi and Nvidia Jetson Nano devices in \cite{Gondi2021-jo}. The LibriSpeech dataset \cite{Panayotov2015-uu} is used for testing, and each model's performance is measured in terms of latency, accuracy and computational efficiency (CPU and memory footprint). A dataset of one speaker recorded at varying distances away from the microphone has been used to evaluate the WER of the lightweight, open-source voice recognition systems Julius and PocketSphinx, running on a Raspberry Pi 3 \cite{Vojtas2018-jk}. Julius was found to perform better in terms of word recognition probability, and the evaluation method is a useful one to note, since the maximum effective distance for speech recognition can vary significantly between platforms \cite{Heartfield2018-ne}. PocketSphinx is also employed for speech recognition functionality in the implementation of a cloud-free local voice assistant in \cite{Polyakov2018-ff}. We consider some additional libraries for ASR, and their evaluation. The TensorFlow Keras library is used to implement a deep learning model for command classification on a Raspberry Pi 3 in \cite{Zonios2021-ry}. The classification model achieved 87.8\% accuracy and 1.136 second latency on an 8-command recognition task, where k-fold cross-validation was used for evaluation. The test dataset contained spectrograms for 905 voice samples, each 4 seconds long and recorded by a male and a female speaker on both headset and mobile phone microphones. Additionally, an open-source end-to-end ASR toolkit named ESPnet has been developed and seen to achieve reasonable performance on WSJ, CSJ and HKUST ASR tasks \cite{Watanabe2018-ta}.
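Word error rate (WER) and real-time factor (RTF), the two metrics recurring in the comparisons above, are straightforward to compute once reference transcripts, hypothesis transcripts and decoding times are available. The following Python sketch is purely illustrative (it is not taken from any of the cited toolkits, and the example strings and timings are placeholders): WER is the word-level edit distance divided by the number of reference words, and RTF is the decoding time divided by the audio duration.
\begin{verbatim}
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def rtf(decode_seconds: float, audio_seconds: float) -> float:
    """Real-time factor: values below 1 mean faster than real time."""
    return decode_seconds / audio_seconds

print(wer("turn on the kitchen light", "turn on kitchen lights"))  # 0.4
print(rtf(decode_seconds=2.3, audio_seconds=5.0))                  # 0.46
\end{verbatim}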
In recent times, end-to-end approaches are generally proving to achieve better results in a range of speech processing tasks, compared with conventional pipelines. \subsection{Keyword Spotting} Keyword spotting (KWS) is an always-on feature that often serves as the entry point for speech-based smart home devices. The problem involves detecting a predefined command from a continuous stream of audio, and the always-on nature means KWS models are normally implemented on very small MCUs \cite{Fernandez-Marques2018-al}. For example, Snowboy hotword detection is an offline model that can detect one particular word to wake a system \cite{Amberkar2018-nf}. Accuracy for commercial keyword spotting algorithms is high. EdgeSpeechNets \cite{Lin2018-xx} and TinySpeech \cite{Wong2020-vw} are two speech recognition systems that were designed to run on edge devices, and evaluated using the Google Speech Commands dataset \cite{Warden2018-ja}. The dataset was designed to train and evaluate keyword spotting systems, and contains 105,829 utterances of 35 words, with each utterance stored as a one-second WAVE format file. EdgeSpeechNets' best run achieved $\sim$97\% accuracy on the test set, and TinySpeech's best model achieved $\sim$95\% accuracy. More recently, a transformer-based architecture set a benchmark for the Google Speech Commands dataset, achieving 98.6\% and 97.7\% accuracy on the 12- and 35-command tasks respectively \cite{Berg2021-al}. Metrics relating to accuracy, model size in bits, and power usage are used to evaluate the keyword spotting algorithm in \cite{Blouw2020-sb}. The network is trained using a train/dev/test split dataset containing one-second speech samples belonging to twelve command classes, and the accuracies of various-sized models are compared to gauge the trade-off between recognition accuracy and model size. In a similar way, \cite{Zhang2017-vn} compare neural network-based KWS models in terms of accuracy and memory requirements. A depthwise-separable convolutional neural network (DS-CNN) was found to be best, achieving $\sim$94\% accuracy and requiring 38.6KB of memory. Along with accuracy, the metrics false accept (FA) rate and false reject (FR) rate are commonly used to evaluate keyword spotting systems, where lower rates result in better user experience \cite{Michaely2017-ss}. A positive dataset containing utterances that begin with the trigger phrase can be used for measuring WER and FR rate, and a negative dataset containing utterances that were accepted by a trigger-phrase detector but do not contain the trigger phrase can be used to measure the FA rate. \cite{Myer2018-xo} use the Google Speech Commands dataset \cite{Warden2018-ja} to evaluate a KWS model, where accuracy, FA rates and FR rates are measured in both clean and noisy environments. In addition, an extra dataset is used to test the system in a wider variety of acoustic conditions. \section{Towards a more ideal smart home} We look towards the design of a prototype smart home system that can mirror the benefits provided by commercial smart homes, while ensuring privacy and security. \cite{Kamdar2017-uq} suggest that an ideal smart home balances cost, robustness, reaction time, processing power, flexibility, security and convenience; we therefore use these factors to imagine a better voice-controlled smart home system that is respectful of users' needs. The section details technologies, devices and methods that can help towards building a better voice-controlled smart home.
\subsection{Voice-based Authentication} Most security problems are seen to be related to the lack of authentication schemes for users and devices \cite{Alam2012-zl}. Lack of user authentication can allow for hidden voice attacks, where ultrasonic voice commands and voice commands unintelligible to humans have been found to be recognised by the majority of commercial voice-controlled systems \cite{Heartfield2018-ne}. In one case, a Google system was able to recognise the phrase “Ok Google” with 95\% accuracy, compared to human transcribers' 22\%. The absence of user authentication allows anyone in the home to interact with and extract information from your smart home, including TVs that can wake and interact with other devices. An overview of cybersecurity risks that arise from the absence of authentication schemes in smart homes is given in \cite{Sudharsan2022-yf}. We consider voice-based methods for implementing secure user authentication in smart homes. Voice-based biometric authentication is advantageous in that it is part of the user, and the authentication process can be done hands-free. Despite the benefits, voice authentication is not as accurate or secure as other biometric methods, e.g. voice clips of a speaker can be used to impersonate the speaker (replay attacks) \cite{Mukhopadhyay2015-uf}. In response to the weakness of purely voice-based user authentication, a continuous speaker authentication system has been implemented \cite{Feng2017-vk}. A microphone is used in combination with an accelerometer on a wearable device to check that the speech received by the VUI originated from the speaker's throat. The system achieved an average overall detection accuracy rate of 97\%, with a 0.09\% false positive rate, and appears to be an effective solution. \subsection{Smart Home Usage} User-centric design should be a guiding principle in the design of an ideal smart home. One way to achieve this is to gauge user preferences, and find how existing smart home users interact with their systems and which benefits are most popular. For example, one study found media devices (smart TVs and streaming devices) to be the most common type of smart home device in the majority of world regions, with surveillance devices being the most common in South and Southeast Asia \cite{Kumar2019-mq}. In terms of user queries, an analysis of smart speaker voice history logs has shown music to be the most common query, followed by information \cite{Bentley2018-hm}. Users generally made use of three domains on a weekly basis, the majority of commands were single-sentence commands, and “stop” was the most common command made to Google Home. Another study on interactions with smart speaker devices found that requests for audio were most common, followed by requests to control media and requests to control smart home devices \cite{Malkin2019-qg}. Human evaluation can be used to gauge the strengths and weaknesses of a smart home system. The Sweet-Home voice-based system has been used for evaluating a voice-controlled smart home for seniors and people with visual impairment, to see if the voice assistant capabilities are effective enough for use in real-world scenarios \cite{Vacher2015-dv}. The paper explores qualitative and quantitative methods of evaluation, and findings showed that some people did not like the rigid grammar, instead preferring more natural commands.
The absence of user feedback in the system (to respond to user commands) was noted to negatively impact user experience, as it was unknown whether the system had interpreted the given command. The experiments were conducted using scenarios that have the user perform different tasks with the system. There are always trade-offs to be made in ASR systems; for example, it is easier to transcribe utterances if a strict command syntax is imposed, yet users often deviate from strict grammars \cite{Mishakova2019-qd}. \subsection{Model Personalisation} Following on from our discussion on smart home usage, we consider the possibility of personalising speech recognition models to users, and in turn improving their robustness and performance. It is predicted that speech recognition models of the future will be greatly personalised to individual devices, through on-device methods of training lightweight models \cite{Hannun2021-dn}. In \cite{Gashi2022-it}, personalised models are shown to perform significantly better than population models for sleep quality recognition and sleep stage detection using wearable devices. Additionally, narrower age-dependent modelling produced higher depression detection accuracy, when compared with age-agnostic modelling. Gender-dependent systems are also commonly used for improving accuracy \cite{Stasak2022-wu}. Personalised speech recognition has been used to help handle individuals' distinct perception of physical objects in their house in \cite{Mehrabani2015-as}. For example, a window in the living room can be called “living room window”, “first floor window” or “the big window” by different users. The system allows users to select customised names for their devices, which are integrated into the language model. It is thought that such customisable communication with devices allows for a more natural interaction between humans and the IoT. Similarly, \cite{Rubio-Drosdov2017-qo} recognise that constrained sets of commands can be troublesome for voice interface systems; therefore, a system that associates actions with multiple descriptive tags has been developed. Google found integrating personal information into the language model of a large-vocabulary speech recognition system for mobile devices to be advantageous for reducing the WER without increasing computational overhead, e.g. incorporating the user's list of contact names to reduce the number of out-of-vocabulary words \cite{McGraw2016-ow}. Personal Voice Activity Detection (VAD) has also been published by Google to help achieve state-of-the-art performance in an on-device speech recognition system \cite{Ding2022-cf}. Personal VAD is an always-running component that detects the voice activity of given target speakers, and can help in improving speech recognition and reducing computational resources. Relating to the field of model personalisation, on-device training paradigms for training and fine-tuning models are increasing in popularity, due to users becoming more privacy-aware, and legislation protecting users against the storage of their data \cite{Almeida2021-mf}. A framework for personalising convolutional neural networks (CNNs) using on-device resources, which makes use of early exits to improve efficiency, and which could be applied to speech recognition scenarios, has been published \cite{Leontiadis2021-ve}. \subsection{TinyML \& Edge Computing} TinyML aims to bring machine learning (ML) inference to low-power IoT devices, and the field has seen some growth in recent years.
The field is highly relevant to the paper, in that we are examining the feasibility of implementing TinyML algorithms for speech recognition on edge devices in the home. Several software stacks have been released to implement and train TinyML algorithms, and TensorFlow Micro has been used for benchmarking and analysing the onboard performance of 30 neural network (NN) models running on 7 available MCU boards in \cite{Sudharsan2021-pz}. Metrics of price-performance ratio, onboard accuracy, and memory consumption are used to compare the results. Similar works exist. \cite{Zhang2018-kq} compare the performance of ML packages for edge devices, where metrics of latency, memory footprint and energy usage are considered. Hardware platforms for edge computing are discussed in \cite{Hadidi2019-ws}, and popular frameworks for edge inference are also compared in terms of execution time, energy consumption and temperature. An Open Framework for Edge Intelligence (OpenEI) has been presented in \cite{Zhang2019-sx}. OpenEI has been designed to address challenges in edge computing, relating to computing power mismatches between existing AI algorithms and edge platforms, and the difficulty of data sharing between edge devices. The framework can support multiple applications, including smart homes. \subsection{DIY Technologies \& Devices} We can use the taxonomy of smart home interface devices to imagine a similar collection of devices that can proxy the commercial products we have detailed. In a previous study of commercial products, \cite{Koreshoff2013-eg} highlight the potential for researchers to rethink the approaches used, and use DIY technologies to quickly and cheaply construct prototype systems to explore the vision of the IoT. Arduino and Raspberry Pi devices commonly appear in literature surrounding such smart home implementations, and a comparison of systems built using Raspberry Pi and Arduino controllers can be seen in \cite{Gunge2016-rc}. Low-power, low-cost sensors for various smart home applications that work with Arduino and Raspberry Pi devices are compared in \cite{Gazis2021-gk}. Single-purpose units designed for indoor air quality monitoring have also been compared \cite{Omidvarborna2021-cn}, and the technical specifications of low-cost sensors for measuring air quality are collated in \cite{Demanega2021-xb}. The works are useful to refer to when designing a research-based smart home IoT system, where the advantages and limitations of these available DIY technologies could be explored. \subsection{Open-Source Datasets} Datasets are vital for the evaluation of a system, and can also be used for training purposes. Available datasets that can be used for both tasks include: \begin{itemize} \item Mozilla’s Common Voice dataset \footnote{https://commonvoice.mozilla.org/en} contains over 2k hours of voice recordings from over 80k English speakers between the ages of 19 and 79. Each entry consists of an MP3 recording and the corresponding text file transcription. \item LibriSpeech \cite{Panayotov2015-uu} is a corpus containing 1000 hours of read English speech. The labels are assigned at a sentence level; therefore, there is limited word-level alignment, making LibriSpeech more suitable for automatic speech recognition than for keyword spotting. \item Amazon has released an open-source speech dataset with the aim of encouraging developers to build more third-party apps and services for its smart speaker Alexa. The dataset contains one million spoken samples across 51 languages.
Open-source code has been released to help developers train multilingual AI models \cite{Quach2022-jp}. \end{itemize} \section{Conclusion} The paper arises from an investigation into the feasibility of implementing a voice-controlled smart home system that processes audio data locally on IoT devices, and overcomes some consumer issues of control, privacy, security, power usage and cost. We have examined hardware and software components of a smart home in our taxonomy, and reviewed academic voice-controlled smart home implementations and their evaluation. Additionally, we have considered available libraries for implementing voice control functionalities on resource-constrained devices, and identified potential avenues to explore in continuing research towards privacy-preserving voice control for smart homes. The taxonomy establishes clear terminology surrounding IoT-based smart homes, and provides insight into the nature of various commercial smart home devices. The approach taken tries to give a holistic reflection of current technologies used in voice-controlled smart homes, taking into account commercial and academic efforts and identifying key areas of improvement. Overall, the work hopes to support research towards methods for developing and evaluating a voice-controlled smart home system that processes speech at the edge, and is private and secure by design. \bibliographystyle{apalike}
\section{Introduction} Combinatorial Multi-Armed Bandits (CMABs) are a well-known extension of Multi-Armed Bandits (MABs) \citep{robbins1952some}, where instead of choosing a single arm at each round, the agent selects a set of arms. It then observes noisy feedback for each arm in this set (`semi-bandit feedback') and aims to maximize a known reward function of the selected arms and their parameters. More specifically, it aims to minimize its regret, which is the expected cumulative difference between the reward of the best action and the reward of the agent's actions. The applications of this framework are numerous and vary between reward functions; the most common one is the linear reward function \citep{kveton2015tight}, which can be applied to problems such as spectrum allocation, shortest paths, routing problems and more \citep{gai2012combinatorial}. Another common application is the Probabilistic Maximum Coverage (PMC) problem \citep{merlisM19}, which is closely related to problems such as influence maximization and ranked recommendations. Due to its usefulness, many previous works analyze regret upper bounds for different variants of this setting. While some works focus on specific reward functions, others derive bounds that hold for general reward functions. In these cases, the bounds usually depend on some measure of smoothness of the reward, for example, its global Lipschitz constant, or its Gini-weighted smoothness. The latter is a more refined smoothness criterion, recently suggested in \citep{merlisM19}, that takes into account the interaction between the local gradients of the reward and concentration properties of the arms. On the other hand, there are almost no works on matching lower bounds; to the best of our knowledge, all existing lower bounds for CMABs were derived for specific reward functions -- either the linear one or the PMC problem. Notably, there is no characterization of lower bounds for general reward functions, and it is unclear whether existing upper bounds are tight. The gain from general lower bounds is threefold: (i) When the bounds are loose, understanding which quantities affect the lower bounds allows devising tighter algorithms; (ii) When the bounds are tight, the instances on which the bounds were derived can help to determine under which additional assumptions the lower bounds do not hold. Such assumptions might allow us to derive improved upper bounds; (iii) When we can control some parameters of the problem, e.g., the number of arms in an action, their effect on the lower bound can help us tune them for each application. In this work, we derive problem-dependent (\textbf{Theorem \ref{theorem:dependent_lower_bound}}) and problem-independent (\textbf{Theorem \ref{theorem:independent_lower_bound}}) lower bounds that hold for general reward functions under mild assumptions. The problem-dependent bound shows that for any `good' bandit strategy, there exists a CMAB instance such that the asymptotic regret must be larger than a certain logarithmic rate. The problem-independent bound shows that for any strategy and any large enough horizon $T$, there exists a horizon-dependent instance with a $\sqrt{T}$ regret. To derive these bounds, we define a family of action sets for CMAB problems, which we call \emph{$\commset$-disjoint}. There, a subset of arms $\commset$ appears in all actions and is independent of the remaining arms, while each of the remaining arms appears in a single action.
We then prove that for $\commset$-disjoint problems, both bounds depend on a new modified Gini-smoothness measure; specifically, they reproduce existing lower bounds for both the linear reward function and the PMC problem. If the reward function is also monotone, as in most practical applications, we derive an additional bound that depends on the Gini-smoothness of the reward and matches the upper bound of \citep{merlisM19} up to logarithmic factors (\textbf{Proposition \ref{prop:smoothness-relation-euclid}}). Thus, our results demonstrate that without any additional assumptions, the bounds are tight for almost any reward function. \section{Related Work} The general framework of combinatorial bandits with semi-bandit feedback was first presented in \citep{chen2013combinatorial}. Since then, it has had many extensions, e.g., for the case of probabilistically-triggered arms, where the set of arms in an action might be random \citep{chen2016combinatorialA,wang2017improving}, and for reward functions that depend on the arm distribution \citep{chen2016combinatorialB}. Moreover, many previous works focus on specific instances of this problem, e.g., linear reward functions \citep{kveton2015tight,combes2015combinatorial,degenne2016combinatorial}, cascading bandits \citep{kveton2015cascading,kveton2015combinatorial} and more. Recently, \citet{merlisM19} presented BC-UCB, a Bernstein-based UCB algorithm with regret bounds that depend on a new smoothness measure, which they call the \emph{Gini-weighted smoothness}. Specifically, they show that by combining the reward nonlinearity with the local behavior of the confidence intervals, the dependency of previous regret bounds on the maximal action size can be removed. In this work, we show that for monotone reward functions, the Gini-smoothness also characterizes the lower bounds for CMAB problems, and therefore prove that this upper bound is tight. In addition, while all previously stated papers assume that the reward function is monotone, a few papers also support non-monotone reward functions \citep{wang2018thompson,huyuk2019thompson}. We also present lower bounds for this scenario. Although there has been extensive work on regret upper bounds for CMABs, there are almost no results on lower bounds for this setting. \citet{kveton2015tight} derived lower bounds for the linear reward function with general arm distributions, and when arms are also independent, lower bounds can be found in \citep{degenne2016combinatorial,combes2015combinatorial}. Also, \citet{kveton2015cascading} derived lower bounds for cascading bandits and \citet{merlisM19} derived bounds for the PMC problem. Nevertheless, and to the best of our knowledge, there are no lower bounds for general reward functions. A comparison of our bounds to previous related bounds can be found in Table \ref{table:comparison}. In contrast to the CMAB problem, the lower bounds for MABs are well characterized. In their seminal work, \citet{lai1985asymptotically} presented the first general problem-dependent lower bound for MABs, which was later extended by \citet{burnetas1996optimal}. In terms of problem-independent bounds, \citet{auer2002nonstochastic} derived an $\Omega(\sqrt{KT})$ lower bound for $K$-armed bandit problems with time horizon $T$, whose constants were later improved by \citet{cesa2006prediction}. Also, \citet{mannor2004sample} proved problem-independent lower bounds with both linear and logarithmic regimes.
Recently, \citet{garivier2018explore} presented a general tool that allows deriving various lower bounds for MABs. We adapt this tool for the CMAB problem to derive our new regret bounds. \setlength {\tabcolsep}{0.75pt} \begin{table*} \centering \begin{threeparttable} \caption{Upper (UB) and lower (LB) bounds of different CMAB problems for arbitrary action sets. Dep./Ind. are problem-dependent and problem-independent bounds, and the notations follow Section \ref{section:prelim}. $\gamma_\infty$ is the global Lipschitz constant of a reward function, and for the Gini-smoothness $\gamma_g$, it holds that $\gamma_g\!\le\!\sqrt{K}\gamma_\infty$ \citep{merlisM19}. $\dr{\min}$ is the minimal gap.} \label{table:comparison} {\footnotesize \begin{tabular}{|c|c|c|c|>{\columncolor[gray]{0.9}}c| >{\columncolor[gray]{0.9}}c|}\hline \textbf{CMAB problem} & \textbf{Type} & \textbf{Previous UB} & \textbf{Previous LB} & \textbf{Theorem \ref{theorem:dependent_lower_bound} or \ref{theorem:independent_lower_bound}} & \textbf{Proposition \ref{prop:smoothness-relation-euclid}} \\ \hline \multirow{2}{3.85cm}[-0.225cm]{\centering\textit{General reward functions}} & Dep. & $\Ocal\br*{\frac{\gamma_\infty^2\NarmsK\ln T}{\dr{\min}}}^\dag$& None& $\Omega\br*{\max_{\meanvec, \commset}\frac{\tilde{\gamma}_g^2(\meanvec;\commset)m\ln T}{\dr{\min}{\Nbatch_\commset}}}$& NA\\ \hhline{~-----} & Ind. & None & None & $\Omega\br*{\max_{\meanvec, \commset}\sqrt{\frac{\tilde{\gamma}_g^2(\meanvec;\commset)m T}{{\Nbatch_\commset}}}}$& NA\\ \hline \multirow{2}{3.85cm}[-0.25cm]{\centering\textit{Monotone reward functions}} & Dep. & $\Ocal\br*{\frac{\gamma_g^2m\ln^2K\ln T}{\dr{\min}}}^\ddag$& None & $\Omega\br*{\max_{\meanvec, \commset}\frac{\tilde{\gamma}_g^2(\meanvec;\commset)m\ln T}{\dr{\min}{\Nbatch_\commset}}}$& $\tilde\Omega\br*{\frac{\gamma_g^2m\ln T}{\dr{\min}}}$ \\ \hhline{~-----} & Ind. & $\Ocal\br*{\gamma_g\lnK\sqrt{m T}}^\ddag$& None & $\Omega\br*{\max_{\meanvec, \commset}\sqrt{\frac{\tilde{\gamma}_g^2(\meanvec;\commset)m T}{{\Nbatch_\commset}}}}$& $\tilde\Omega\br*{\gamma_g\sqrt{m T}}$\\ \hline \multirow{2}{3.85cm}[0.1cm]{\centering\textit{Linear reward function} \vspace{-0.1cm}{\footnotesize $$\rdef=\sum_{i\in\action}\meanval_i$$} } & Dep. & $\Ocal\br*{\frac{\NarmsK\ln T}{\dr{\min}}}^\S$ & $\Omega\br*{\frac{\NarmsK\ln T}{\dr{\min}}}^\S$ & $\Omega\br*{\frac{\NarmsK\ln T}{\dr{\min}}}$ & $\Omega\br*{\frac{\NarmsK\ln T}{\br*{\lnK}\dr{\min}}}$ \\ \hhline{~-----} & Ind. & $\Ocal\br*{\sqrt{\NarmsK T}}^\S$ & $\Omega\br*{\sqrt{\NarmsK T}}^\S$ & $\Omega\br*{\sqrt{\NarmsK T}}$ & $\Omega\br*{\sqrt{\frac{\NarmsK T}{\lnK}}}$\\ \hline \multirow{2}{3.85cm}[0.1cm]{\centering\textit{PMC problem} \vspace{-0.18cm}{\mathTable $$\rdef\!=\!\sum_{i=1}^M\!\br*{\!\!1\!-\!\prod_{j\in\action}\!(1-\meanval_{ij})\!\!}$$} } & Dep. & $\Ocal\br*{\frac{\NarmsM^2\ln^2K\ln T}{\dr{\min}}}^\ddag$ & $\Omega\br*{\frac{\NarmsM^2\ln T}{\dr{\min}}}^\ddag$ & $\Omega\br*{\frac{\NarmsM^2\ln T}{\dr{\min}}}$ & $\Omega\br*{\frac{\NarmsM^2\ln T}{\br*{\lnK}^2\dr{\min}}}$ \\ [0.2cm] \hhline{~-----} & Ind. & $\Ocal\br*{M\lnK\sqrt{m T}}^\ddag$ & $\Omega\br*{M\sqrt{m T}}^\ddag$ & $\Omega\br*{M\sqrt{m T}}$ & $\Omega\br*{\frac{M\sqrt{m T}}{\lnK}}$\\ [0.2cm] \hline \end{tabular} }\vspace{-0.075cm} \begin{tablenotes} \small \item \hspace{-0.65cm} $^\dag$\citep{wang2018thompson}, requires independent arms. 
\,\, $^\ddag$\citep{merlisM19} \,\, $^\S$\citep{kveton2015tight} \end{tablenotes} \vspace{-0.3cm} \end{threeparttable} \end{table*} \setlength{\tabcolsep}{6pt} \section{Preliminaries and Notations} \label{section:prelim} We start with some notations. Let $\brs{n}=\brc*{1,\dots,n}$, and for any vector $\xb\!\in\!\mathbb{R}^n$ and set $I\!\subset\! \brs*{n}$, denote by $\xb_I$, a sub-vector of $\xb$ that contains only elements from $I$. We denote the Kullback-Leibler (KL) divergence between two distributions $\unu,\unu'$ by $D_\mathrm{KL}(\unu,\unu')$, and the KL divergence between two Bernoulli random variables with expectations $p,q$ by $\mathrm{kl}(p,q)$. For any vector $\xb\!\in\!\mathbb{R}^n$, let $\xb^s$ be a permutation such that $x_1^s\!\le\! \dots\!\le\! x_n^s$, and define the increasing permutation of vector $\xb\!\in\!\mathbb{R}^n$ w.r.t. a set $I$ as $p^{\xb,\commset}\!=\!\brs*{\xb_{I^c}^s,\xb_I}\!\in\!\mathbb{R}^n$; namely, the beginning of the vector $p^{\xb,\commset}$ contains a sorted permutation of the elements of $\xb$ in $I^c=\brs*{n}/I$, and its end contains the elements of $\xb$ in $I$. Finally, for any set $I$ of bounded size $\abs*{I}\leK$, we denote by ${\Nbatch_\commset}\!=\!K-\abs{I}$ the size of the complementary set w.r.t. $K$. We work under the combinatorial multi-armed bandit setting with semi-bandit feedback. Denote the number of arms (`base arms') by $m$, and let $\actionset\subset 2^{\brs*{m}}$ be the set of possible actions (`super arms'), that is, the set that contains all valid combinations of base arms that the agent can choose. The number of base arms in each action $\action\in\actionset$ is bounded by $\abs{\action}\leK$, and w.l.o.g., assume that $\abs{\action}=K$. At the beginning of each round $t$, the arms generate an observation vector $\obs{t}\!=\!\br*{\obs[1]{t},\dots,\obs[m]{t}}\in\brs*{0,1}^m$, sampled from a fixed distribution independently of other rounds. Then, the agent chooses an action $\action_t\in\actionset$ and observes feedback $\obs[\action]{t}\triangleq\brc*{\br*{i,\obs[i]{t}}, \forall i\in\action_t}$. Denote the means of base arms by $\mathbb{E}\brs*{\obs{t}}\!=\!\meanvec\!=\!\br*{\meanval_1,\dots,\meanval_{m}}$. The goal of the agent is to maximize a known reward function $\rdef$, without knowing $\meanvec$. Specifically, the agent aims to minimize its regret $R(T) = \sum_{t=1}^T\br*{\reward{\action^*}{\meanvec}-\reward{\action_t}{\meanvec}}\triangleq\sum_{t=1}^T\dr{\action_t}$, where $\action^*\in\argmax_{\action\in\actionset}\rdef$ is an optimal action\footnote{Previous work on regret upper bounds also allows approximate maximization of $r$. We focus on the best achievable performance, so we assume we can efficiently maximize $r$.} and $\dr{\action_t}=\reward{\action^*}{\meanvec}-\reward{\action_t}{\meanvec}$ is the suboptimality gap of $\action_t$. To prove the lower bounds, we require a mild assumption on the reward function, which we call \emph{index invariance}: \begin{definition} \label{assum: differentiable reward} A reward function $\rdef:\actionset\times\brs*{0,1}^m\to\mathbb{R}$ is called differentiable if for any $\action\in\actionset$, it is differentiable in $\meanvec\in\brs*{0,1}^m$. \end{definition} \begin{definition} \label{assum: reward function} A differentiable reward function $\rdef:\actionset\times\brs*{0,1}^m\to\mathbb{R}$ is called smooth index invariant if for any $\action\in\actionset$, it only depends on the arms in $\action$, i.e., $\rdef=\rewardvec{\meanvec_\action}$. 
\end{definition} When the function is index invariant, and with a slight abuse of notation, we also write $\rewardvec{\meanvec}$, with $\meanvec\in\mathbb{R}^{K}$, to represent the mean of arms $\meanvec_\action$ for $\abs{\action}=K$. This assumption helps to avoid cases in which specific arms behave inherently differently from other arms, such that the problem becomes much easier. For example, for the biased linear function $\rdef = \sum_{i\in \action} \br*{\meanval_i +m i}$ and for any $\meanvec\in\brs*{0,1}^m$, the optimal action is $\action^*=\argmax_{\action\in\actionset}\sum_{i\in\action} i$, regardless of the arm means; therefore, both the upper and lower bounds for this reward function trivially equal zero. In contrast, the lower bounds for the linear function are nonzero (see Table \ref{table:comparison}); thus, without the index-invariance, the lower bounds cannot be characterized solely by the gradient of the reward function w.r.t. $\meanvec$, in contrast to the existing upper bounds. To the best of our knowledge, all practical applications for CMABs are index-invariant or can be written as a sum over an index-invariant function that is applied to different arms (e.g., as in Corollary \ref{corollary: sum index invariant dependent}). We also believe that our analysis will hold for reward functions that depend on the \emph{order} of arms inside an action. However, we leave this extension for future work. Besides this assumption, we later move our focus to monotone reward functions, which are defined as follows: \begin{definition} \label{def:monotone} A differentiable reward function $\rdef:\actionset\times\brs*{0,1}^m\to\mathbb{R}$ is called monotone if for any $\action\in\actionset$, any $\meanvec\in\brs*{0,1}^m$ and any $i\in\brs*{m}$, it holds that $\nabla_i \rdef\ge0$. \end{definition} We remark that in most previous work, the upper bounds only hold for monotone functions, which include most of the practical applications, e.g., the linear and PMC problems. We end this part of the preliminaries with an important inequality that was derived for MABs and will enable us to derive our new bounds for CMABs. Let $\brs*{m}$ be a set of arms, where each arm $a\in\brs*{m}$ is characterized by a distribution $\nu_a$ over $\mathbb{R}^K$, and denote $\unu=\brc*{\nu_a}_{a\in\brs*{m}}$.\footnote{\citet{garivier2018explore} assume that $\nu_a$ are distributions over $\mathbb{R}$, but the exact same proof holds for distributions over $\mathbb{R}^K$.} Assume that at each round, when playing $a_t$, a sample $Y_t$ is drawn independently at random from $\nu_{a_t}$. Let $\psi$ be a strategy that chooses an arm according to the history and internal i.i.d. randomization $U_t\in\brs*{0,1}$. Namely, if $H_t = \br*{U_0,Y_1,U_1,\dots,U_t,Y_t}$, then $a_{t+1}=\psi_t(H_t)$. Also, let $N_{\psi,a}(T)$ be the number of times an arm $a$ was played under strategy $\psi$ up to time $T$. Under these notations, the following holds: \begin{lemma}[\citealt{garivier2018explore}] \label{lemma:kl count bound} For all bandit problems $\unu,\unu'$, for all $\sigma(H_T)$-measurable random variables $Z$ with values in $[0,1]$, \begin{align} \label{eq:kl count bound} \sum_{a=1}^m \mathbb{E}_{\unu}\brs*{N_{\psi,a}(T)}D_\mathrm{KL}(\nu_a,\nu_a') \ge \mathrm{kl}\br*{\mathbb{E}_\unu\brs*{Z},\mathbb{E}_{\unu'}\brs*{Z}}\, , \end{align} where $\mathrm{kl}(p,q)=p\ln\frac{p}{q}+(1-p)\ln\frac{1-p}{1-q}$. \end{lemma} In the combinatorial case, we use similar notations and denote the action counts by $N_{\psi,\action}(T)$.
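To illustrate how Lemma \ref{lemma:kl count bound} is typically instantiated, the following Python sketch (an illustration only, specialized to Bernoulli arms, with placeholder numerical values) computes $\mathrm{kl}(p,q)$ and the asymptotic lower bound $\ln T/D_\mathrm{KL}(\nu_a,\nu_a')$ on the expected number of plays of an arm that is suboptimal in $\unu$ but optimal in $\unu'$, as obtained for consistent strategies via the usual change-of-measure argument:
\begin{verbatim}
import math

def kl_bernoulli(p: float, q: float) -> float:
    """kl(p, q) = p ln(p/q) + (1 - p) ln((1 - p)/(1 - q)), clipped for stability."""
    eps = 1e-12
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def asymptotic_pull_bound(kl_arm: float, horizon: int) -> float:
    """ln(T) / D_KL(nu_a, nu_a'): leading term of the lower bound on E[N_a(T)]
    for consistent strategies (change-of-measure argument)."""
    return math.log(horizon) / kl_arm

# Arm with mean 0.5 under nu; the alternative instance nu' raises it to 0.6,
# making the arm optimal there.
d = kl_bernoulli(0.5, 0.6)
print(d)                                        # ~0.0204
print(asymptotic_pull_bound(d, horizon=10**6))  # ~677 pulls
\end{verbatim}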
\subsection{Smoothness Measures} \begin{figure*} \centering \subfigure[\footnotesize Hoeffding confidence bounds]{ \includegraphics[width=0.31\linewidth]{hoeff.png} \label{subfig:hoeff}} \subfigure[\footnotesize Bernstein confidence bounds]{ \includegraphics[width=0.31\linewidth]{bern_loose.png} \label{subfig:bern-loose}} \subfigure[\footnotesize Bernstein confidence bounds]{ \includegraphics[width=0.31\linewidth]{bern_tight_text.png} \label{subfig:bern-tight}} \caption{Red arrows: confidence intervals on the function parameter (x-axis confidence), due to either Hoeffding or Bernstein inequalities; the latter is tighter near the edges. Blue zone: the resulting confidence intervals on the reward function in the bold curve (y-axis confidence). In Figures \ref{subfig:hoeff},\ref{subfig:bern-loose}, these intervals are derived using the global Lipschitz constant of the reward $\gamma_\infty$, i.e., $CI\br*{\rewardvec{\hat\meanvec}}\!\lesssim\! \gamma_\infty \sum_i CI(\hat\meanval_i)$. In Figure \ref{subfig:bern-tight}, we present the real confidence interval on the reward, which is much tighter than the bound due to $\gamma_\infty$. This is because the bound is tight where the gradient is large, which is around the edge of the domain, but loose in other areas, where the gradient is small. } \vspace{-0.2cm} \label{figure:gini-smoothness} \end{figure*} We now present the smoothness measures for smooth index-invariant reward functions that govern our lower bounds. The measures are defined for arm parameters $\meanvec\!\in\!\mathbb{R}^K$ and a set $\commset\!\subset\!\brs*{m}$ as follows: \begin{enumerate} \item L2 Gini-weighted smoothness \vspace{-0.25cm} {\small \begin{align} \label{eq:smoothDef L2} \gamma_{g,2}^2(\meanvec;\commset) &\triangleq\sum_{i=1}^{\Nbatch_\commset} p^{\meanvec,\commset}_i(1-p^{\meanvec,\commset}_i)\nabla_i\rewardvec{p^{\meanvec,\commset}}^2 = \sum_{i\notin \commset} \meanval_i(1-\meanval_i)\nabla_i\rewardvec{\meanvec}^2 \end{align} }\vspace{-0.5cm} \item L1 Gini-weighted smoothness\vspace{-0.25cm} {\small \begin{align} \label{eq:smoothDef L1} \gamma_{g,1}^2(\meanvec;\commset) &\triangleq\br*{\sum_{i=1}^{{\Nbatch_\commset}}\sqrt{p^{\meanvec,\commset}_i(1-p^{\meanvec,\commset}_i)}\nabla_i\rewardvec{p^{\meanvec,\commset}}}^2 = \br*{\sum_{i\notin \commset} \sqrt{\meanval_i(1-\meanval_i)}\nabla_i\rewardvec{\meanvec}}^2 \end{align} }\vspace{-0.5cm} \item \emph{Modified} Gini-weighted smoothness\vspace{-0.25cm} {\small \begin{align} \label{eq:newSmooth} &\tilde{\gamma}_g^2(\meanvec;\commset) = \sum_{i=1}^{\Nbatch_\commset} p^{\meanvec,\commset}_i(1-p^{\meanvec,\commset}_i)\nabla_i\rewardvec{p^{\meanvec,\commset}}^2 +2\sum_{i=1}^{\Nbatch_\commset} \sum_{j=i+1}^{\Nbatch_\commset} p^{\meanvec,\commset}_i(1-p^{\meanvec,\commset}_j)\nabla_i\rewardvec{p^{\meanvec,\commset}}\nabla_j\rewardvec{p^{\meanvec,\commset}} \end{align} }\vspace{-0.5cm} \end{enumerate} Notice that the last equality in Equations \eqref{eq:smoothDef L2},\eqref{eq:smoothDef L1} is due to the index-invariance assumption. Specifically, the assumption implies that $\nabla_i\rewardvec{\meanvec}$ only depends on $\meanval_i$ and the set of values $\brc*{\meanval_i}_{i=1}^m$. For smooth index-invariant reward functions, $\gamma_{g,2}^2(\meanvec;\commset)$ is an extension of the Gini-smoothness, as defined in \citep{merlisM19}. In particular, their smoothness parameter is defined as $\gamma_g=\max_{\commset,\meanvec}\gamma_{g,2}(\meanvec;\commset)=\max_{\meanvec}\gamma_{g,2}(\meanvec;\emptyset)$.
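To make these definitions concrete, the following Python sketch (illustrative only; the reward, gradient and parameter values are placeholders) evaluates the three measures for a generic gradient. For the linear reward, whose gradient is the all-ones vector, it recovers $\gamma_{g,2}^2(\meanvec;\emptyset)=\sum_i\meanval_i(1-\meanval_i)$ and, when all means equal $p_0$, $\tilde{\gamma}_g^2(\meanvec;\emptyset)=K^2p_0(1-p_0)$.
\begin{verbatim}
import numpy as np

def sorted_permutation(mu, I):
    """p^{mu,I}: the means outside I in increasing order, then the means in I."""
    outside = np.sort([mu[i] for i in range(len(mu)) if i not in I])
    return np.concatenate([outside, [mu[i] for i in I]])

def smoothness(mu, I, grad):
    """Return (gamma_{g,2}^2, gamma_{g,1}^2, modified gamma_g^2) at (mu, I);
    grad(p) should return the gradient of the index-invariant reward at p."""
    p = sorted_permutation(mu, I)
    n_I = len(mu) - len(I)                  # effective action size K - |I|
    g = np.asarray(grad(p))[:n_I]
    v = p[:n_I] * (1 - p[:n_I])             # Bernoulli variances
    l2 = float(np.sum(v * g ** 2))
    l1 = float(np.sum(np.sqrt(v) * g) ** 2)
    cross = sum(2 * p[i] * (1 - p[j]) * g[i] * g[j]
                for i in range(n_I) for j in range(i + 1, n_I))
    return l2, l1, l2 + cross

# Linear reward r(mu) = sum(mu): the gradient is the all-ones vector.
K = 4
mu = np.full(K, 0.5)
print(smoothness(mu, I=set(), grad=lambda p: np.ones(len(p))))
# -> (1.0, 4.0, 4.0), i.e. (K/4, K^2/4, K^2/4)
\end{verbatim}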
A motivation for this smoothness criterion is presented in Figure \ref{figure:gini-smoothness}. Notably, the performance of any algorithm strongly depends on the uncertainty in the reward of actions. However, algorithms only have access to uncertainty in arm parameters, and arm uncertainty must be translated into reward uncertainty. The figure illustrates that doing so using the global Lipschitz constant might lead to loose bounds. Intuitively, if the gradients are small, even wide confidence intervals do not cause high uncertainty in the reward. Similarly, narrow confidence intervals do not lead to high reward uncertainty even where the gradients are large. Thus, \citet{merlisM19} suggested weighting the gradients according to the confidence intervals of the arms. Specifically, their algorithm (BC-UCB) relies on Empirical-Bernstein concentration bounds \citep{audibert2009exploration}, which depend on the variance of the arms and are proportional to $\sqrt{\meanval_i(1-\meanval_i)}$ for Bernoulli arms. Similarly weighting the gradients leads to the Gini-smoothness measures. In the following sections, we prove lower bounds for the CMAB problem that depend on $\tilde{\gamma}_g^2(\meanvec;\commset)$. While complex at first glance, $\tilde{\gamma}_g^2(\meanvec;\commset)$ is actually closely related to the other smoothness measures. Notably, observe that the only difference between $\tilde{\gamma}_g^2(\meanvec;\commset)$ and $\gamma_{g,1}^2(\meanvec;\commset)$ is a small modification to the second (cross) term of \eqref{eq:newSmooth}. In Proposition \ref{prop:smoothness-relation-modified-L1}, we indeed prove that $\tilde{\gamma}_g^2(\meanvec;\commset)\ge\tilde\Omega(\gamma_{g,1}^2(\meanvec;\commset))$. When the function is monotone, we later prove that $\gamma_{g,1}(\meanvec;\commset)$ can also be related to $\gamma_{g,2}(\meanvec;\emptyset)$, which leads to tight lower bounds, up to logarithmic factors. \section{Problem-Dependent Lower Bounds} In this section, we prove a problem-dependent lower bound. Specifically, we show that there exists a CMAB instance such that the asymptotic regret of any consistent strategy on this instance is lower bounded by a logarithmic term that depends on $\tilde{\gamma}_g(\meanvec;\commset)$ and the minimal gap $\dr{}=\min_{\action\in\actionset,\dr{\action}>0}\dr{\action}$. A consistent strategy is defined as follows: \begin{definition} A bandit strategy $\psi$ is called consistent if for any CMAB problem, any $\action \in\actionset$ such that $\dr{\action}>0$ and any $0<\alpha\le1$, it holds that $\mathbb{E}\brs*{N_{\psi,\action}(T)}=o\br*{T^\alpha}$. \end{definition} To prove the lower bounds, we focus on a subset of CMAB problems which we call \emph{$I$-disjoint}. \begin{definition} \label{def:I-disjoint} For a given subset $\commset\!\subset\!\brs*{m}$, a CMAB problem is called $\commset$-disjoint if all arms $i\!\in\!\commset$ are mutually independent of all arms $i\!\notin\!\commset$ and also $\action_1\!\cap\!\action_2\!=\!\commset$ for any $\action_1\!\ne\!\action_2\!\in\!\actionset$. \end{definition} Since $\abs{\action}\leK$, we implicitly assume that $\abs{\commset}\leK$ and denote the effective maximal action size by ${\Nbatch_\commset}=K-\abs{I}$. In $\commset$-disjoint CMAB problems, the base arms $i\in\commset$ appear in all actions and are mutually independent of the other arms. The rest of the arms can only appear in one action.
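As an illustration of Definition \ref{def:I-disjoint}, the following Python sketch (illustrative only; the arm indices are arbitrary) constructs the maximal $\commset$-disjoint action set that is also used in the proofs below: the arms outside $\commset$ are split into disjoint groups of size $K-\abs{\commset}$, and every action is the union of $\commset$ with one group.
\begin{verbatim}
def disjoint_action_set(m, K, I):
    """Maximal I-disjoint action set: every action contains the common arms I,
    and the remaining arms are split into disjoint groups of size K - |I|."""
    I = sorted(I)
    rest = [a for a in range(m) if a not in I]
    group = K - len(I)                      # effective action size
    n_actions = len(rest) // group          # floor((m - |I|) / (K - |I|))
    return [set(I) | set(rest[k * group:(k + 1) * group])
            for k in range(n_actions)]

actions = disjoint_action_set(m=10, K=4, I=[0, 1])
print(actions)
# [{0, 1, 2, 3}, {0, 1, 4, 5}, {0, 1, 6, 7}, {0, 1, 8, 9}]
# Any two distinct actions intersect exactly in I = {0, 1}.
\end{verbatim}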
This notion of CMAB problems actually extends the action sets from previous work -- for the linear reward function, \citet{kveton2015tight} divided the arms into $\frac{m}{K}$ disjoint groups, which is equivalent to $\commset\!=\!\emptyset$. In contrast, in the CMAB instance on which the PMC lower bounds were derived, only a single arm per item varied between actions \citep{merlisM19}. If there are $M$ items, this is equivalent to $\commset\!=\!\brs*{K-M}$. We show that choosing the `worst-case' set $\commset$ naturally results in tighter lower bounds. We start by deriving a general problem-dependent lower bound, using Lemma \ref{lemma:kl count bound}: \begin{restatable}{lemma-rst}{generalDependentBound} \label{lemma:generalDependentBound} Let $\rewardNoArgs$ be a smooth index invariant reward function with $\abs*{\action}=K$ for all $\action\in\actionset$. Also, let $\unu$ be the action distribution of an $I$-disjoint CMAB problem such that there exists an arm $i\notin\commset$ in $\action^*$ with $\Pr\brc*{\obs[i]{t}=\meanval_i}<1$ and $\nabla_i\reward{\action^*}{\meanvec}\ne0$. Then, for all consistent strategies $\psi$ and all suboptimal actions $\action$, {\small \begin{align*} \liminf_{T\to\infty}\frac{\mathbb{E}_\unu\brs*{N_{\psi,\action}(T)}}{\ln T} \ge \frac{1}{D_\mathrm{KL}(\nu_\action,\nu_{\action^*})} \enspace . \end{align*} } Specifically, it holds that {\small \begin{align*} \liminf_{T\to\infty}\frac{\mathbb{E}_\unu\brs*{R(T)}}{\ln T} \ge \sum_{\action:\dr{\action}>0}\frac{\dr{\action}}{D_\mathrm{KL}(\nu_\action,\nu_{\action^*})}\enspace . \end{align*} } \end{restatable} The proof is in Appendix \ref{appendix:general-bounds}, and partially follows Theorem 1 of \citep{garivier2018explore}, with some adjustments due to the nonlinearity of the reward function. Although this lemma gives a general lower bound for $\commset$-disjoint CMAB problems, it has no clear dependence on any smoothness measure of the reward. To derive lower bounds that directly depend on such measures, we carefully design the arm distributions $\nu_\action, \nu_{\action^*}$ and analyze both $\dr{\action}$ and $D_\mathrm{KL}(\nu_\action,\nu_{\action^*})$ for these distributions. We do so in the following theorem, which results in the desired lower bound: \begin{myTheorem} \label{theorem:dependent_lower_bound} Let $\rewardNoArgs$ be a smooth index invariant reward function. For any $\meanvec\in\brs*{0,1}^K$ and any small enough $\dr{}>0$, there exists an instance of an $\commset$-disjoint bandit problem $\unu$ with minimal gap $\dr{}$ and $\mathbb{E}\brs*{\nu_{\action^*}}=\meanvec$, such that the expected regret of any consistent algorithm is bounded by {\small \begin{align*} \liminf_{T\to\infty} \frac{\mathbb{E}_\unu\brs*{R(T)}}{\ln T} \ge \max_\commset \frac{(m-2K)\tilde{\gamma}_g^2(\meanvec;\commset)}{8{\Nbatch_\commset}\dr{}} \triangleq DB^*_r(\dr{};\meanvec)\enspace. \end{align*} } \end{myTheorem} The result can also be easily extended as follows: \begin{corollary} \label{corollary: sum index invariant dependent} Let $\rewardNoArgs$ be a smooth index invariant reward function and let $\tilde\meanvec\!=\!\brs*{\meanvec_1,\dots,\meanvec_M}\!\in\!\brs*{0,1}^{MK}$, with $\meanvec_i\in\brs*{0,1}^K$ for all $i\in\brs*{M}$. Also, define $\tilde{\rewardNoArgs}\br*{\tilde\meanvec} = \sum_{i=1}^M \rewardvec{\meanvec_i}$.
Then, for any $\meanvec\in\brs*{0,1}^K$, there exists a CMAB instance that aims to maximize $\tilde{\rewardNoArgs}$ and has a minimal gap $\dr{}$, such that the optimal action has means $\meanvec_i=\meanvec$ for all $i\in\brs*{M}$ and the expected regret of any consistent algorithm is bounded by
{\small
\begin{align*}
\liminf_{T\to\infty} \frac{\mathbb{E}_\unu\brs*{R(T)}}{\ln T} \ge M^2\cdot DB^*_r(\dr{};\meanvec)\enspace.
\end{align*}
}
\end{corollary}
Notice that for each summand of $\tilde{\rewardNoArgs}$, $K$ arms are chosen from a total of $m$ arms, independently of all other summands. Therefore, the problem can also be formulated as an $\tilde{m}=Mm$-armed problem, with $\tilde{K}=MK$ selected arms per round. The two formulations are equivalent, and we decided to follow this notation for consistency with \citep{merlisM19}. The corollary is a direct result of Theorem \ref{theorem:dependent_lower_bound}; specifically, by fixing the arm distribution to be identical for all of the summands of $\tilde{\rewardNoArgs}$, we get $\tilde{\rewardNoArgs}=M\cdot \rewardNoArgs$, and since the Gini-smoothness parameters scale linearly with the reward function, the bound naturally follows.
Before proving the theorem, we start with a short discussion on the tightness of the results and the relation to existing upper bounds. One interesting case is when $p^{\meanvec,\commset}_i=p_0$ for all $i\in\brs*{{\Nbatch_\commset}}$. Due to the index invariance, the gradient components are also equal, which results in $\tilde{\gamma}_g^2(\meanvec;\commset)={\Nbatch_\commset}^2p_0(1-p_0)\nabla_i\rewardvec{p^{\meanvec,\commset}}^2$. This choice allows us to easily reproduce the existing lower bounds and leads to the bounds stated in Table \ref{table:comparison}. Specifically, for the linear reward function, we achieve the bound by choosing $\commset\!=\!\emptyset$ and $p_0\!=\!\frac{1}{2}$; for the PMC problem, we start by writing $\tilde{\rewardNoArgs}=\sum_{i=1}^M \reward{\action}{\meanvec_i}$ for $\rdef=1-\prod_{j\in\action}\meanval_{j}$. Then, we apply Theorem $\ref{theorem:dependent_lower_bound}$ on $\rewardNoArgs$ and choose the subset $\commset$ such that it contains all arms except for a single (first) element from each vector $\meanvec_i=\br*{\frac{1}{2},0,\dots,0}$, i.e., ${\Nbatch_\commset}=1$ and $p_0=\frac{1}{2}$. The lower bound is then a direct result of Corollary \ref{corollary: sum index invariant dependent}. We remark that although choosing ${\Nbatch_\commset}=K$ is seemingly optimal, this is not always the case. A notable example of this issue is the reward function $\rdef = 1-e^{-\sum_{i\in\action} \meanval_i^2}$. For this instance, optimizing over $p_0$ leads to a bound of $\tilde{\gamma}_g^2(\meanvec;\commset)/{\Nbatch_\commset}=\Omega\br*{1/\sqrt{{\Nbatch_\commset}}}$, and the optimal choice is ${\Nbatch_\commset}=\Ocal(1)$; see also the numerical sketch below. In this example it also holds that $\gamma_g=\Ocal(1)$, and, therefore, this bound is tight. We note that this bound only requires the smooth index invariance assumption and holds for non-monotone reward functions. When the reward is also monotone, observe that $\tilde{\gamma}_g^2(\meanvec;\commset)\ge\gamma_{g,2}^2(\meanvec;\commset)$. Choosing $\commset=\emptyset$ and maximizing over $\meanvec$ leads to a lower bound of $\Omega\br*{\frac{m\gamma_g^2\ln T}{\dr{}K}}$, which differs from the upper bound of \citep{merlisM19} by a factor of $K$.
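To illustrate the last example numerically, the following short Python sketch (ours; not part of any referenced implementation) evaluates the uniform-arm expression $\tilde{\gamma}_g^2(\meanvec;\commset)={\Nbatch_\commset}^2p_0(1-p_0)\nabla_i\rewardvec{p^{\meanvec,\commset}}^2$ for the reward $\rdef = 1-e^{-\sum_{i\in\action} \meanval_i^2}$, assuming for simplicity that the arms in $\commset$ have mean zero so that they do not affect the gradient. The optimized value of $\tilde{\gamma}_g^2/{\Nbatch_\commset}$ decays like $1/\sqrt{{\Nbatch_\commset}}$, so a small ${\Nbatch_\commset}$ is indeed preferable.
\begin{verbatim}
import numpy as np

# Uniform-arm case: N = K - |I| free arms with equal mean p0 (arms in I set to 0).
# For r(mu) = 1 - exp(-sum_i mu_i^2), each gradient component is 2*p0*exp(-N*p0^2),
# and the text gives gamma_tilde^2 = N^2 * p0 * (1 - p0) * (gradient)^2.
def gamma_tilde_sq_over_N(N, p0):
    grad = 2.0 * p0 * np.exp(-N * p0 ** 2)
    return N * p0 * (1.0 - p0) * grad ** 2

p0_grid = np.linspace(1e-3, 1.0 - 1e-3, 20000)
for N in [1, 4, 16, 64, 256]:
    best = gamma_tilde_sq_over_N(N, p0_grid).max()
    # 'best' decreases roughly like 1/sqrt(N): best*sqrt(N) tends to a constant,
    # so taking N = O(1) maximizes the optimized bound, as claimed above.
    print(N, best, best * np.sqrt(N))
\end{verbatim}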
We later prove that when the reward is monotone, a stronger lower bound can be derived, such that it matches the upper bound up to logarithmic factors.
Next, we present the proof of Theorem \ref{theorem:dependent_lower_bound}, which is composed of three parts. We first present carefully designed parametric arm distributions that allow controlling the suboptimality gap while retaining a low KL-divergence. We then bound both the gap and the KL-divergence in terms of the parameters of the distributions and apply Lemma \ref{lemma:generalDependentBound} using these bounds. We conclude the proof by optimizing the resulting lower bound over the distribution parameters.
\begin{proof}
\par \noindent{\textbf{\scshape{Step 1:}} \textit{Fixing a parametric family of CMAB instances.}}
\noindent Denote by $\commset^*$ the maximizing set $\commset$ in $DB^*_r(\dr{};\meanvec)$, and for brevity, let $p=p^{\meanvec,\commset^*}$ and ${\Nbatch_\commset} = {\Nbatch_\commset}^*=K-\abs*{\commset^*}$. Note that this also implies that $0\le p_1\le\dots\le p_{\Nbatch_\commset} \le 1$. In addition, due to the form of the lower bound, we can deduce that $p_1>0$ and $p_{\Nbatch_\commset}<1$, since otherwise, we could move arms with means 0 and 1 into $\commset^*$ and strictly increase $DB^*_r(\dr{};\meanvec)$. For now, we also assume that no two arms have the same mean, i.e., $0<p_1<\dots<p_{\Nbatch_\commset}<1$, and will return to this assumption later in the proof. Without loss of generality, we also assume that ${\Nbatch_\commset}>0$ and that there exists $i\in\brs*{{\Nbatch_\commset}}$ such that $\nabla_i\rewardvec{p}\!\ne\!0$, since otherwise $DB^*_r(\dr{};\meanvec)\!=\!0$ and the bound trivially holds. Similarly, we assume that $m\!>\!2K$.
\setlength{\tabcolsep}{1pt}
\begin{wraptable}{r}{0.55\textwidth}
\captionsetup{width=.9\linewidth}
\caption{The probability distributions $\nu$ and $\nu^*$ of arms outside $\commset^*$, with parameters $0\!<\!p_1\!<\!p_2\!<\!\dots\!<\!p_{\Nbatch_\commset}\!<\!1$ and $\epsilon\in\mathbb{R}^{\Nbatch_\commset}$. Arms in $\commset^*$ have mean $\meanvec_{\commset^*}$ and are independent of the rest of the arms.}
\vspace{-0.25cm}
\label{table:lower_bound_distribution}
\begin{center}
{\footnotesize
\begin{tabular}{|c|c|c|}\hline
Observation vector & Probability of & Probability of \\
$\obs{t}\!\!=\!\br*{\obs[1]{t},\cdots,\obs[{\Nbatch_\commset}]{t}}$& $\obs{t}$ in $\nu^*$& $\obs{t}$ in $\nu$\\ \hline
$(1,1,\cdots,1,1)$ & $p_1$ & $p_1-\epsilon_1$\\ \hline
$(0,1,\cdots,1,1)$ & $p_2-p_1$ & $p_2-p_1-\epsilon_2$\\ \hline
$\cdots$ & $\cdots$ & $\cdots$\\ \hline
$(0,0,\cdots,0,1)$ & $p_{\Nbatch_\commset} - p_{{\Nbatch_\commset}-1}$ & $p_{\Nbatch_\commset} - p_{{\Nbatch_\commset}-1}-\epsilon_{\Nbatch_\commset}$\\ \hline
$(0,0,\cdots,0,0)$ & $1-p_{\Nbatch_\commset}$ & $1-p_{\Nbatch_\commset} + \sum_{j=1}^{\Nbatch_\commset}\epsilon_j$\\ \hline
\end{tabular}
}
\end{center}
\vspace{-0.5cm}
\end{wraptable}
\setlength{\tabcolsep}{6pt}
We fix the CMAB problem to be $\commset^*$-disjoint, and choose the action set $\actionset$ to be the maximal action set with actions of size $K$, i.e., $\abs{\actionset} = \floor*{\frac{m-\abs{\commset^*}}{K-\abs{\commset^*}}} \ge \frac{m-K}{{\Nbatch_\commset}}$. Denote the arm distribution in this problem by $\unu$, and fix the distribution of the common arms $i\in\commset^*$ to any distribution with expectation $\mathbb{E}\brs*{\nu_{\commset^*}}=\meanvec_{\commset^*}$, as long as they are mutually independent of the rest of the arms.
For a single action, we set the distribution to be $\nu^*$ with mean $\meanvec^{\nu^*}\triangleq\meanvec$, and for the rest of the actions, we fix it to distribution $\nu$ with mean $\meanvec^\nu$. Both distributions are stated in Table \ref{table:lower_bound_distribution}. The distribution $\nu$ depends on $\epsilon\in\mathbb{R}^{\Nbatch_\commset}$, that will be determined later such that $\nu$ is strictly suboptimal, and we denote its suboptimality gap by $\dr{\epsilon}=\rewardvec{\meanvec^{\nu^*}} - \rewardvec{\meanvec^\nu}$. We remark that for $0\!<\!p_1\!<\!\dots\!<\!p_{\Nbatch_\commset}\!<\!1$, there exists $b_{\epsilon,0}$ such that $\nu$ is a valid probability distribution for all $\norm*{\epsilon}_\infty\le b_{\epsilon,0}$. We enforce this condition later in the proof. Note that for all $i\notin\commset^*$, the arms are Bernoulli random variables with mean $\mu_i\notin\brc*{0,1}$, and therefore $\Pr\brc*{\obs[i]{t}=p_i}=0$. Also, since the gradient is not zero for some $i\notin\commset^*$, the conditions of Lemma \ref{lemma:generalDependentBound} hold, and we can bound the regret by {\small \begin{align} \label{eq:lower_bound_form} \liminf_{T\to\infty} \frac{\mathbb{E}_\unu\brs*{R(T)}}{\ln T} &\ge \sum_{i=1}^{\abs{\actionset}-1}\frac{\dr{\epsilon}}{D_\mathrm{KL}(\nu,\nu^*)} \ge \frac{m-2K}{{\Nbatch_\commset}}\frac{\dr{\epsilon}}{D_\mathrm{KL}(\nu,\nu^*)} \enspace . \end{align} } \noindent{\textbf{\scshape{Step 2:}} \textit{Deriving lower bounds that depend on the distribution parameters $\epsilon$. }} \noindent Next, we bound both $D_\mathrm{KL}(\nu,\nu^*)$ and $\dr{\epsilon}$ in terms of $\epsilon$. \begin{restatable}{lemma-rst}{klBound}\label{lemma:kl-bound} Let $p\triangleq p^{\meanvec,\commset}\in\mathbb{R}^K$ such that $0<p_1<\dots<p_{\Nbatch_\commset}<1$ and define $p_0=0$. Also, let $\nu,\nu^*$ be the distributions stated in Table \ref{table:lower_bound_distribution}. Then, there exists a constant $b_{\epsilon,1}>0$ such that for any $\epsilon\in\mathbb{R}^{\Nbatch_\commset}$ with $\norm*{\epsilon}_\infty\le b_{\epsilon,1}$, it holds that {\small \begin{align} \label{eq:kl bound dependent} D_\mathrm{KL}(\nu,\nu^*) \le 2\epsilon^T B_{KL}(p)\epsilon \enspace, \end{align} } where $B_{KL}(p) = D(p)+\frac{1}{1-p_{\Nbatch_\commset}}\mathbf{1}\onevec^T$, $D(p)\in\mathbb{R}^{{\Nbatch_\commset}\times{\Nbatch_\commset}}$ is a diagonal matrix whose elements are $D_{ii}(p)=\frac{1}{p_i-p_{i-1}}$ and $\mathbf{1}\in\mathbb{R}^{\Nbatch_\commset}$ is a vector of ones. \end{restatable} \begin{restatable}{lemma-rst}{gapBound}\label{lemma:gap-bound} Let $\rewardNoArgs$ be a smooth index invariant reward function and let $p\in\mathbb{R}^K$ such that $0<p_1\le\dots\le p_{\Nbatch_\commset}<1$ and there exists $i\in\brs*{{\Nbatch_\commset}}$ with $\nabla_i\rewardvec{p}\ne0$. Also, define $c_j = \sum_{i=j}^{\Nbatch_\commset} \nabla_i \rewardvec{p}$ for all $j\in\brs*{{\Nbatch_\commset}}$ and let $\ub\in\mathbb{R}^{\Nbatch_\commset}$ be a vector such that $c^T\ub>0$. Then, if $\epsilon=\epsilon_0 \ub$, there exists a constant $b_{\epsilon,2}>0$ such that for all $0<\epsilon_0\le b_{\epsilon,2}$, {\small \begin{align} \label{eq:gap bound dependent} \dr{\epsilon} \ge \frac{1}{2}c^T\epsilon>0\enspace. \end{align} } \end{restatable} The proofs are presented in Appendix \ref{appendix:technical}. Specifically, note that $D_{ii}(p)>0$, and thus $B_{KL}(p)$ is positive definite, and the KL bound equals zero only for $\epsilon=0$. 
Also, since there exists $i\in\brs*{{\Nbatch_\commset}}$ such that $\nabla_i\rewardvec{p}\ne0$, it holds that $c\ne0$. When both lemmas hold, substitution into \eqref{eq:lower_bound_form} yields
{\small
\begin{align}
\label{eq:lower_bound_dependent_form}
\liminf_{T\to\infty} \frac{\mathbb{E}_\unu\brs*{R(T)}}{\ln T} & \ge \frac{m-2K}{{\Nbatch_\commset}}\frac{\dr{\epsilon}}{D_\mathrm{KL}(\nu,\nu^*)} = \frac{m-2K}{{\Nbatch_\commset}\dr{\epsilon}}\frac{\dr{\epsilon}^2}{D_\mathrm{KL}(\nu,\nu^*)} \ge \frac{m-2K}{8{\Nbatch_\commset}\dr{\epsilon}}\frac{\br*{c^T\epsilon}^2}{\epsilon^TB_{KL}(p)\epsilon} \enspace.
\end{align}
}
\noindent{\textbf{\scshape{Step 3:}} \textit{Finding the worst-case CMAB instance.}}
\noindent We now focus on the function $f_\commset(\epsilon;p) = \frac{\epsilon^Tcc^T\epsilon}{\epsilon^TB_{KL}(p)\epsilon}$, which is defined for any $\epsilon\ne0$, as $B_{KL}(p)$ is positive definite. $B_{KL}(p)$ is also invertible, and we can therefore apply the invertible transformation $\epsilon=B_{KL}^{-1/2}(p)x$, which results in the following function:
{\small
\begin{align*}
\tilde{f}_\commset(x;p) &= \frac{x^TB_{KL}^{-1/2}(p)cc^TB_{KL}^{-1/2}(p)x}{\norm{x}_2^2} = \frac{\br*{c^TB_{KL}^{-1/2}(p)x}^2}{\norm{x}_2^2} \stackrel{(*)}{\le} \frac{\norm{B_{KL}^{-1/2}(p)c}_2^2\norm{x}_2^2}{\norm{x}_2^2} = c^TB_{KL}^{-1}(p)c \enspace,
\end{align*}
}
where $(*)$ is due to the Cauchy-Schwarz inequality, and equality holds for any $\epsilon_0\ne0$ and $x=\epsilon_0 B^{-1/2}_{KL}(p)c$. Therefore, the maximal value of $f_\commset(\epsilon;p)$ is $f_\commset(\epsilon^*;p)=c^TB^{-1}_{KL}(p)c$ and can be attained with $\epsilon^*=\epsilon_0 B^{-1}_{KL}(p)c$, for any $\epsilon_0\ne0$. Motivated by the maximization property of $\epsilon^*$, we now fix $\epsilon\leftarrow\epsilon^*= \epsilon_0 B_{KL}^{-1}(p)c$, for $0<\epsilon_0 \le b_{\epsilon,2}$ such that $\norm*{\epsilon^*}_\infty\le \min\brc*{b_{\epsilon,0}, b_{\epsilon,1}}$. For this choice, $\epsilon^*\ne0$, since $B_{KL}(p)$ is invertible and $c\ne0$, and $c^TB_{KL}^{-1}(p)c>0$ as required for Lemma \ref{lemma:gap-bound}. We explicitly calculate the lower bound in the following lemma (see proof in Appendix \ref{appendix:technical}):
\begin{restatable}{lemma-rst}{boundExplicitCalc}\label{lemma:boundExplicitCalc}
Under the notations of Lemmas \ref{lemma:kl-bound} and \ref{lemma:gap-bound}, for any $0\!<\!p_1\!<\!\dots\!<\!p_{\Nbatch_\commset}\!<\!1$ and $c\ne0$, if $\epsilon^* = \epsilon_0B_{KL}^{-1}(p)c$, then
{\small
\begin{align*}
\epsilon^*_i = \epsilon_0\br*{p_i-p_{i-1}}\br*{c_i - \sum_{j=1}^{\Nbatch_\commset}\br*{p_j-p_{j-1}}c_j} \enspace.
\end{align*}
}
Also, if $f_\commset(\epsilon;p)\! =\! \frac{\epsilon^Tcc^T\epsilon}{\epsilon^TB_{KL}(p)\epsilon}$, then $f_\commset(\epsilon^*;p) \!=\! \tilde{\gamma}_g^2(\meanvec;\commset)$.
\end{restatable}
An important conclusion is that under the assumptions of the lemma, $\tilde{\gamma}_g^2(\meanvec;\commset)>0$, since $f_\commset(\epsilon^*;p)=c^TB_{KL}^{-1}(p)c>0$. A more general result naturally arises from the proof of Lemma \ref{lemma:boundExplicitCalc}: for any $\meanvec\in\brs*{0,1}^K$ and any $\commset\subset\brs*{K}$, it holds that $\tilde{\gamma}_g^2(\meanvec;\commset)\ge0$, as expected from a smoothness parameter. We refer the reader to the proof of the lemma for additional details.
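The closed-form expression above can also be verified numerically. The following short sketch (ours; the random test values are arbitrary) builds $B_{KL}(p)$ as defined in Lemma \ref{lemma:kl-bound}, solves for $\epsilon^*=\epsilon_0 B_{KL}^{-1}(p)c$, and checks both the entrywise formula and the identity $f_\commset(\epsilon^*;p)=c^TB_{KL}^{-1}(p)c$.
\begin{verbatim}
import numpy as np

# Sanity check of Lemma boundExplicitCalc for random test values of p and c.
rng = np.random.default_rng(0)
N = 6
p = np.sort(rng.uniform(0.05, 0.95, size=N))      # 0 < p_1 < ... < p_N < 1
dp = np.diff(np.concatenate(([0.0], p)))          # p_i - p_{i-1}, with p_0 = 0
c = rng.normal(size=N)                            # any c != 0
B = np.diag(1.0 / dp) + np.outer(np.ones(N), np.ones(N)) / (1.0 - p[-1])

eps0 = 1e-3
eps_star = eps0 * np.linalg.solve(B, c)           # eps* = eps0 * B_KL(p)^{-1} c
closed_form = eps0 * dp * (c - np.dot(dp, c))     # entrywise formula of the lemma
print(np.allclose(eps_star, closed_form))         # True

f_val = (c @ eps_star) ** 2 / (eps_star @ B @ eps_star)
print(np.isclose(f_val, c @ np.linalg.solve(B, c)))   # True: f(eps*; p) = c^T B^{-1} c
\end{verbatim}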
Substituting back into \eqref{eq:lower_bound_dependent_form} and recalling that $\commset^*$ was chosen as the maximizer in $DB^*_r(\dr{};\meanvec)$, we get
{\small
\begin{align*}
\liminf_{T\to\infty} \frac{\mathbb{E}_\unu\brs*{R(T)}}{\ln T} & \ge \frac{(m-2K)\tilde{\gamma}_g^2(\meanvec;\commset^*)}{8{\Nbatch_\commset}^*\dr{\epsilon}} = DB^*_r(\dr{\epsilon};\meanvec) .
\end{align*}
}
We finally return to our assumption that $p_1<\dots<p_{\Nbatch_\commset}$. If there are equal values in $p$, i.e., $p_i=p_{i+1}=\dots=p_{i+n-1}$, we modify both distributions in Table \ref{table:lower_bound_distribution} such that the observations of these arms are identical, namely $\obs[i]{t}=\obs[i+1]{t}=\dots=\obs[i+n-1]{t}$. Then, we set $\epsilon^*_{i+1}=\dots=\epsilon^*_{i+n-1}=0$. Notice that this modification does not change the KL divergence, nor the analysis of the gap, and therefore retains the same results. Similarly, Lemma \ref{lemma:boundExplicitCalc} still holds by defining the function $f_\commset$ over the sub-vector of $\epsilon$ with coordinates such that $p_i<p_{i+1}$. Alternatively, note that the existing analysis naturally sets $\epsilon_i=0$ if $p_i=p_{i-1}$ (Lemma \ref{lemma:boundExplicitCalc}), so it is not surprising that this modification does not change the results. We avoided writing the full analysis since it requires indexing the sub-vector of $p$ with strictly increasing values, which would make the notation much more involved. To conclude the proof, we remark that if $\dr{0}>0$ is the gap for some $\epsilon_0>0$ such that Lemmas \ref{lemma:kl-bound} and \ref{lemma:gap-bound} hold, by tuning $\epsilon_0$ we can achieve any gap $\dr{}\le\dr{0}$, and thus the previous result holds for any small enough gap $\dr{}$.
\end{proof}

\section{Problem-Independent Lower Bounds}
In this section, we prove a problem-independent regret lower bound. Specifically, we prove that for any fixed strategy and any large enough horizon $T$, there exists a CMAB instance such that the regret is lower bounded by a gap-independent $\sqrt{T}$ term. We remark that in contrast to problem-dependent bounds, in which the instance is fixed for \emph{all} strategies and time horizons, the instance for problem-independent bounds is designed as the `worst-case' problem for a \emph{specific} strategy and time horizon. Similarly to the previous section, we start by proving a general lower bound for $\commset$-disjoint CMAB problems and then apply it with a specific distribution to derive the desired bound.
\begin{restatable}{lemma-rst}{generalIndependentBound} \label{lemma:generalIndependentBound}
Let $\rewardNoArgs$ be a smooth index invariant reward function, and let $\commset\subset\brs*{m}$ such that $\abs*{\commset}\le K$ and $m>K$. Also, let $\meanvec,\meanvec^*\in\mathbb{R}^K$ such that $\meanvec_\commset=\meanvec^*_\commset$ and $\dr{}=\rewardvec{\meanvec^*}-\rewardvec{\meanvec}>0$. Finally, let $\nu,\nu^*$ be two distributions with expectations $\meanvec,\meanvec^*\in\mathbb{R}^K$, such that arms in $\commset$ are mutually independent of arms outside $\commset$ and both distributions are identical for arms in $\commset$. Then, for any horizon $T$ and any strategy $\psi$, there exists an $\commset$-disjoint CMAB problem with arm distribution $\unu'$ such that $\nu'_{\action^*}=\nu^*$ and its regret under strategy $\psi$ is bounded by
{\small
\begin{align*}
\mathbb{E}_{\unu'}\brs*{R(T)} \!\ge\!
T\dr{}\br*{\!1-\frac{{\Nbatch_\commset}}{m-K} -\sqrt{\frac{1}{2}\frac{T{\Nbatch_\commset}}{m-K}D_\mathrm{KL}(\nu,\nu^*)}}\!.
\end{align*}
}
\end{restatable}
The proof is a variant of Theorem 6 of \citet{garivier2018explore} and can be found in Appendix \ref{appendix:general-bounds}. With this lemma at hand, and similarly to the problem-dependent bound of Theorem \ref{theorem:dependent_lower_bound}, we can also derive a problem-independent lower bound:
\begin{restatable}{theorem-rst}{independentLowerBound} \label{theorem:independent_lower_bound}
Let $\rewardNoArgs$ be a smooth index invariant reward function and assume that $m\ge3K$. Then, for any $\meanvec\!\in\!\brs*{0,1}^K$, any $T\!\ge\! T_0$ and for any strategy $\psi$, there exists an $\commset$-disjoint CMAB problem $\unu'$ with $\mathbb{E}\brs*{\nu'_{\action^*}}=\meanvec$ such that its regret under strategy $\psi$ is bounded by
{\small
\begin{align*}
\mathbb{E}_{\unu'}\brs*{R(T)} \ge \max_{\commset}\frac{\tilde{\gamma}_g(\meanvec;\commset)}{32}\sqrt{\frac{T(m-K)}{{\Nbatch_\commset}}} \triangleq IB^*_r(\meanvec) \enspace .
\end{align*}
}
\end{restatable}
Similarly to Corollary \ref{corollary: sum index invariant dependent}, the result can also be easily extended to sums as follows:
\begin{corollary} \label{corollary: sum index invariant independent}
Under the notations of Corollary \ref{corollary: sum index invariant dependent}, if $m\ge3K$, then for any $\meanvec\!\in\!\brs*{0,1}^K$, any $T\!\ge\! T_0$ and for any strategy $\psi$, there exists a CMAB problem $\unu'$, such that the optimal action has means $\meanvec_i=\meanvec$ for all $i\in\brs*{M}$, whose regret under strategy $\psi$ is bounded by
{\small
\begin{align*}
\mathbb{E}_{\unu'}\brs*{R(T)} \ge M\cdot IB^*_r(\meanvec) \enspace .
\end{align*}
}
\end{corollary}
We defer the proof of the theorem to Appendix \ref{appendix:problem independent proof}, and the corollary can be proven similarly to Corollary \ref{corollary: sum index invariant dependent}. The same discussion from the previous section about the tightness of the bound still holds. We start by noting that $IB^*_r(\meanvec)$ reproduces the existing lower bounds both for the linear reward function \citep{kveton2015tight} and the probabilistic maximum coverage problem \citep{merlisM19}. Also, for monotone functions, we can bound $\tilde{\gamma}_g^2(\meanvec;\emptyset)\ge\gamma_g^2$ and match the problem-independent upper bound of \citep{merlisM19} up to a $\sqrt{K}$ factor. This factor will be improved to a logarithmic factor in the following section.

\section{Relations Between Smoothness Measures}
\label{section:relations}
To this point, we derived lower bounds that depend on the modified Gini-smoothness $\tilde{\gamma}_g(\meanvec;\commset)$. In this section, we show that at the cost of logarithmic factors, we can relate these bounds to the L1 Gini-smoothness. Moreover, for monotone reward functions, we also prove lower bounds that depend on the L2 Gini-smoothness, and thus match the upper bounds of \citep{merlisM19} up to log-factors. We formally state the results in the following propositions:
\begin{restatable}{proposition-rst}{smoothnessRelationModified}\label{prop:smoothness-relation-modified-L1}
Let $\rewardNoArgs$ be a smooth index invariant reward function and denote $p=p^{\meanvec,\commset}$ for some $\meanvec\in\mathbb{R}^K$ and $\commset\subset\brs*{K}$.
Then,
{\small
\begin{align}
\label{eq:smoothness-relation-modified-L1}
\tilde{\gamma}_g^2(\meanvec;\commset)\ge\frac{\gamma_{g,1}^2(\meanvec;\commset)}{3 + \ln\frac{1}{p_1} + \ln\frac{1}{1-p_{\Nbatch_\commset}}}\enspace ,
\end{align}
}
where the r.h.s. is defined as zero if $p_1=0$ or $p_{\Nbatch_\commset}=1$.
\end{restatable}
\begin{restatable}{proposition-rst}{smoothnessRelationEuclid}\label{prop:smoothness-relation-euclid}
Let $\rewardNoArgs$ be a monotone smooth index invariant reward function. Then, for any $\meanvec\in\mathbb{R}^K$, it holds that
{\small
\begin{align}
\label{eq:smoothness-relation-L1-L2}
\max_{\commset}&\frac{\gamma_{g,1}^2(\meanvec;\commset)}{{\Nbatch_\commset}} \ge \frac{\gamma_{g,2}^2(\meanvec;\emptyset)}{1+\ln K}\enspace.
\end{align}
}
Furthermore, if $\meanval_{\min}\!=\!\min_{i:\meanval_i>0} \meanval_i$ and $\meanval_{\max}\!=\!\max_{i:\meanval_i<1} \meanval_i$, then
{\small
\begin{align*}
\max_{\commset}&\frac{\tilde{\gamma}_g^2(\meanvec;\commset)}{{\Nbatch_\commset}} \!\ge\! \frac{\gamma_{g,2}^2(\meanvec;\emptyset)}{\br*{1+\ln K}\br*{3 + \ln\frac{1}{\meanval_{\min}} + \ln\frac{1}{1-\meanval_{\max}}}}.
\end{align*}
}
\end{restatable}
We emphasize that Proposition \ref{prop:smoothness-relation-modified-L1} \emph{does not} require the monotonicity assumption. However, when the components of the gradient can be either positive or negative, the bound might equal zero even when the gradient is large. When the function is monotone, Proposition \ref{prop:smoothness-relation-euclid} greatly improves the na\"ive choice of $\commset=\emptyset$ in the bounds of Theorems \ref{theorem:dependent_lower_bound} and \ref{theorem:independent_lower_bound}, by a factor of $K$, with only a logarithmic cost. Specifically, by maximizing over $\meanvec$ and applying Proposition \ref{prop:smoothness-relation-euclid}, we get a problem-dependent lower bound of $\tilde\Omega\br*{m\gamma_g^2\ln T/\dr{}}$ and a problem-independent bound of $\tilde\Omega\br*{\gamma_g\sqrt{m T}}$, both of which match the upper bounds of \citep{merlisM19} up to log-factors. Thus, for monotone smooth functions, we conclude that the L2 Gini-smoothness parameter characterizes both the upper and lower bounds. Of the log-factors in the propositions, the more interesting one is that of $\br*{\ln\frac{1}{\meanval_{\min}} + \ln\frac{1}{1-\meanval_{\max}}}$. In cases where the mean values $\meanvec$ are exponentially close to zero or one, this factor can be of order $1/K$. We suspect that this is due to a proof artefact, but leave the investigation of this factor to future work. Nonetheless, for any practical example, this term is at most of order $\ln K$, which leaves the bound tight up to log-factors in the problem size. Another question that arises is whether this factor is the result of a loose analysis in Proposition \ref{prop:smoothness-relation-modified-L1}, and whether a tighter analysis might yield a better factor (e.g., $\ln K$). Sadly, the answer is negative. In Appendix \ref{appendix:tightness modified L1}, we build an example where the $\meanval_i$ are exponentially small and Inequality \eqref{eq:smoothness-relation-modified-L1} \emph{does not hold} without a factor of order $1/K$. Therefore, to remove this factor, $DB^*_r(\dr{};\meanvec)$ and $IB^*_r(\meanvec)$ also need to be improved.
Due to space limitations, we defer the full proofs to Appendix \ref{appendix:relations} and only provide a proof sketch for Proposition \ref{prop:smoothness-relation-euclid}:
\begin{proofsketch}
Notice that $\gamma_{g,1}(\meanvec;\!\commset)$ and $\gamma_{g,2}(\meanvec;\!\commset)$ are closely related to the L1 and L2 norms of a vector whose components are $\sqrt{\meanval_i(1-\meanval_i)}\nabla_i\rewardvec{\meanvec}$. Specifically, both are norms of a vector with a subset of these components. We utilize this connection and derive, to the best of our knowledge, a new relation between the norms:
\begin{restatable}{lemma-rst}{normsRelation} \label{lemma:normsRelation}
For any vector $x\in\mathbb{R}^n$ and any nonempty subset of indices $A\subset\brs*{n}$, let $x_A$ denote the restriction of $x$ to the coordinates in $A$. Then, it holds that
{\small
\begin{align*}
\max_{A\ne\emptyset} \frac{1}{\abs*{A}} \norm*{x_A}_1^2 \ge \frac{1}{1+\ln n} \norm{x}_2^2\enspace.
\end{align*}
}
\end{restatable}
This lemma gives a much stronger relation than the standard L1-L2 inequality and is of independent interest. Specifically, the relation between the norms is logarithmic in the vector dimension, instead of the standard relation $\norm*{x}_1\ge \norm*{x}_2$, which would have resulted in a linear term. This is due to the ability to choose the best subset of the vector on the left-hand side. We remark that this inequality is tight, as demonstrated in Appendix \ref{appendix:tightness L1 L2}. By applying Lemma \ref{lemma:normsRelation}, we get $\max_\commset \frac{\gamma_{g,1}^2(\meanvec;\commset)}{{\Nbatch_\commset}} \ge \frac{\gamma_{g,2}^2(\meanvec;\emptyset)}{1+\ln K}$, and substituting in Proposition \ref{prop:smoothness-relation-modified-L1} concludes the proof.
\end{proofsketch}

\section{Summary and Future Work}
In this work, we presented the first lower bounds for the CMAB problem that hold for general reward functions, under very mild assumptions. Specifically, we proved both problem-dependent and problem-independent lower bounds, which depend on the modified Gini-smoothness $\tilde{\gamma}_g(\meanvec;\commset)$ and reproduce the existing bounds for specific instances. When the reward function is also monotone, we showed that the upper bounds of \citep{merlisM19}, which depend on the L2 Gini-smoothness of the reward function, are tight up to logarithmic factors. There are a few directions for extending our results that we leave for future work:

\textbf{Gini-smoothness and non-monotone reward functions}: One question that naturally arises is whether the L2 Gini-smoothness $\gamma_g$ also characterizes the lower bound for non-monotone reward functions. If such a dependence truly holds, a possible way to derive these bounds is by modifying $\tilde{\gamma}_g(\meanvec;\commset)$ such that it depends on the absolute values of the gradient components. However, such a modification of the analysis is highly nontrivial, and we leave it for future work.

\textbf{Lower bounds for arbitrary action sets}: To derive the lower bounds, we carefully designed the action set of the problem, such that no information is gained on one action by sampling a different one. In practice, different actions might have overlapping arms, which can sometimes be used to achieve better performance. For example, for the linear reward function, when the action set contains all possible subsets of a fixed size, superior regret bounds can be attained \citep{komiyama2015optimal}.
To the best of our knowledge, the only similar result is by \citet{combes2015combinatorial} for the specific case of linear rewards and independent arms. They show that the best achievable performance strongly depends on the structure of the action set, and it would be interesting to derive such lower bounds for general reward functions and arm distributions.

\textbf{Distribution-dependent lower bounds}: To derive lower bounds that depend on the Gini-smoothness, we designed a family of arm distributions, all with binary support. Similarly, to derive the upper bounds, \citet{merlisM19} bounded the variance of the arms by the variance of Bernoulli arms. We can, therefore, conclude that the Bernoulli distribution is the `worst-case' distribution, under which both the upper and the lower bounds depend on the Gini-smoothness. A possible future direction is analyzing both bounds under general distributions and deriving new smoothness criteria that depend on the specific arm distribution, rather than on the worst-case distribution.

\textbf{Other variants and performance measures}: In this work, we focused on regret lower bounds for the CMAB problem with semi-bandit feedback. Other interesting problems include the case of full-bandit feedback \citep{gopalan2015thompson,rejwan2020top}, where there is no feedback on specific arms, but rather on the reward of the action, or using sample complexity as the performance measure \citep{chen2017nearly,mannor2004sample,kaufmann2016complexity}. Both variants have mainly been studied for linear reward functions, and extending the existing upper and lower bounds to general reward functions can be beneficial for many practical settings.

\acks{This work was partially funded by the Israel Science Foundation under ISF grant number 1380/16.}
\section{Introduction} \label{intro}
Phosphorus is of great relevance for biotic chemistry, since it is a fundamental component of many important biological molecules, such as nucleic acids and phospholipids. P is therefore essential to life on Earth and can consequently play an important role in exoplanets \citep{schaefer}. Despite its importance, the chemistry of P-bearing molecules is in its infancy and remains largely unknown. The aim of this work is to add an important, missing piece to the puzzle: unveiling the first steps of P-chemistry via observations and chemical simulations of simple P-bearing molecules in diffuse clouds.\\
The ion $\mathrm{P^{+}}$ was detected in several diffuse clouds by \cite{jura}, where an elemental abundance of $\sim2 \times 10^{-7}$ with a low P depletion factor of $\sim2-3$ was derived. However, a more recent study by \cite{lebou} showed that phosphorus remains mostly undepleted towards diffuse clouds. In addition, P has been identified towards dwarf and giant stars \citep{maas, caffau}, while detections of simple P-bearing molecules (PN, PO, HCP, CP, CCP, NCCP, $\mathrm{PH_3}$) have been made towards the circumstellar material of carbon- and oxygen-rich stars \citep{agundez07, agundez14a, agundez14b, tenenbaum, DeBeck, ziurys}. The species PN and, only very recently, PO are the only P-bearing molecules that have been discovered towards dense star-forming regions \citep{turner, fontani, rivilla16, lefloch, mininni, fontani2019} and molecular clouds in the Galactic Center \citep{rivilla18}. The limited number of available observations leaves the chemical pathways of P-bearing chemistry strongly debated. The main uncertainty is the unknown depletion factor of P in molecular clouds. In general, chemical models of dark clouds start with the so-called ``low-metal abundances'', in which the elemental abundances of heavy elements (such as P, S, Fe, Mg) are reduced by orders of magnitude to reproduce molecular observations \citep[e.g.][]{agundez13}, but with a poor understanding of the chemical processes underlying such depletions. In the case of P, the level of depletion is still very uncertain. While \cite{turner1990} and \cite{wakelam} used high depletion factors of 600-$10^4$ with respect to the cosmic abundance of P, recent works have shown that it could be as low as $\sim100$ \citep{rivilla16, lefloch}. As only a very limited number of P-bearing molecules have been detected in star-forming regions, it is very hard to put constraints on the elemental abundance of P in the gas phase and on the major chemical pathways. In order to elucidate interstellar P-chemistry, we focus on diffuse clouds, which represent the first steps of molecular-cloud evolution. Diffuse clouds can give us important constraints on P-chemistry, since in these objects P is not strongly affected by depletion, so that the initial P abundance to be used in chemical simulations is well constrained \citep{lebou}. With this approach we are able to remove an important uncertainty in our model and use a reliable starting point for our chemical simulations. So far, several chemical and physical models have focused solely on diffuse clouds \citep[e.g.][]{dalgarno88, lepetit04, cecchi12, godard14}. For example, in \cite{godard14}, a model including the dissipation of turbulence was applied to reproduce the observed molecular abundances in the diffuse interstellar medium (ISM).
Their main result showed that chemical complexity is strongly linked to turbulent dissipation, which allowed the model to reproduce the high abundances of CO and other species (such as $\mathrm{C^+}$ and $\mathrm{HCO^+}$) observed towards Galactic diffuse clouds. \cite{lepetit04} describe the development of a chemical model of the diffuse cloud towards $\zeta$ Persei, which was able to reproduce the abundance of $\mathrm{H_3^+}$ and other species, like CN and CO. This was achieved by modelling two phases, namely a small dense phase ($\sim 100 \, \mathrm{au}$) with a density of $n\mathrm{(H)} = 2\times10^4 \, \mathrm{cm^{-3}}$ and a larger diffuse region (4 pc) with $n\mathrm{(H)} = 100 \, \mathrm{cm^{-3}}$. In addition, the reproduction of the $\mathrm{CH^+}$ abundance and that of the rotationally excited $\mathrm{H_2}$ required the inclusion of shocks in the model. Similar results were achieved by \cite{cecchi12} when including the injection of hot $\mathrm{H_2}$ into the model.
Previous observations \citep{corby, liszt18, thiel} demonstrate the chemical complexity and the wide range of densities, temperatures and visual extinctions of diffuse and translucent clouds, making them promising targets for observations of P-bearing molecules. Diffuse clouds are characterised by low densities with $n\mathrm{(H)} = 100-500 \, \mathrm{cm^{-3}}$ and are thus more exposed to the interstellar radiation, which can destroy molecules. Translucent clouds, on the other hand, are an intermediate state between diffuse and dense molecular clouds, being more protected from UV radiation ($1 \, \mathrm{mag} < A_V < 5 \, \mathrm{mag}$). They are denser, with typical densities of $n\mathrm{(H)} = 500-5000 \, \mathrm{cm^{-3}}$, and subsequently cooler ($T_{\mathrm{gas}}= 15-50 \, \mathrm{K}$), showing a higher chemical complexity \citep{snow, thiel}.
One prominent candidate that has been widely studied in previous works \citep[e.g.][and references therein]{liszt18} is the gas that lies along the line of sight to the compact extragalactic continuum source B0355+508. This strong blazar is located at a very low latitude in the outer Galaxy ($b= -1.6037^\circ$), meaning that the path through the Galactic disk is long and therefore intercepts a significant amount of distributed Galactic diffuse gas \citep{pety}. In fact, the line of sight towards B0355+508 shows a complex kinematic structure which incorporates several diffuse/translucent clouds. The detections of numerous molecules, like sulphur- and CN-bearing species as well as small hydrocarbons, towards B0355+508 also indicate the rich chemistry present in this diffuse and translucent gas \citep[e.g.][and references therein]{liszt18}. The substantial velocity structure, coupled with the high chemical complexity of this line of sight, enables us to adjust our chemical/physical model to every cloud component and to find which physical conditions favour the abundances of P-bearing molecules the most. Other previously studied background sources lack either the chemical (like B0224+671) or the velocity (such as B0415+479) features that are essential for the present work.
In this paper, we present single-pointing observations of the (2-1) transitions of HCP, CP, PN and PO, as well as chemical simulations of their molecular abundances, towards the line of sight to B0355+508 in order to investigate P-bearing chemistry within diffuse/translucent clouds, the precursors of molecular clouds. In Section \ref{observations} we describe the observational details.
Section \ref{results} summarizes the results of the observations. In Section \ref{model} we describe our updated phosphorus chemical network, as well as the grid of models that we apply in order to reproduce the observations of HNC, CN, CS and CO towards every cloud component along the line of sight. Furthermore, in Section \ref{P-stuff} we focus on the P-bearing chemistry based on our best-fit model (determined in Section \ref{model}). In particular, we report the predicted molecular abundances of HCP, CP, PN, PO and $\mathrm{PH_3}$, and we study their dependence on the visual extinction, the cosmic-ray ionisation rate, as well as the diffusion/desorption energy ratio on dust grains. The outlook and conclusions are summarized in Sections \ref{future} and \ref{outlook}.

\section{Observations} \label{observations}
The observations of the HCP (2-1), CP (2-1), PN (2-1) and PO (2-1) transitions in the 3 mm range were carried out with the IRAM 30m telescope located at Pico Veleta (Spain) towards the line of sight to the compact extragalactic quasar B0355+508. Table \ref{tab:observed species} lists the observed transitions, the spectroscopic constants, as well as the telescope settings at the targeted frequencies: the upper state energy is described by $E_{\mathrm{up}}$, the upper state degeneracy is given by $g_u$, while $A_{ul}$ stands for the Einstein coefficient of the transition $u \rightarrow l$. The main beam efficiency and the main beam size of the telescope at a given frequency are denoted by the parameters $B_{\mathrm{eff}}$ and $\theta_{\mathrm{MB}}$, respectively. For our observations we used the EMIR receiver with the E090 configuration (3 mm atmospheric window). We applied three observational setups, each of which provided a total spectral coverage of 7.2 GHz (each sub-band covered 1.8 GHz). As a backend we used the Fast Fourier Transform Spectrometer with a frequency resolution of 50 kHz ($0.15 \, \mathrm{km \, s^{-1}}$ at 100 GHz). In addition, we applied the wobbler switching mode with an amplitude offset of $\pm90''$. Pointing and focus of the telescope were checked every 2 hr on the background source B0355+508 itself, and the pointing was found to be accurate within 2$''$.
\begin{table*} [!
h] \center \caption{Spectroscopic parameters of the observed species and telescope settings.} \label{tab:observed species} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c c c c c c c} \hline \hline \\ Species & Transitions & $E_{\mathrm{up}}$ & Frequency & $A_\mathrm{ul}$ & $g_u$ & $B_{eff}$ & $\theta_{\mathrm{MB}}$ & References \\ & & (K) & (GHz) & ($\mathrm{10^{-5} \, s^{-1}}$) & & ($\%$) & ($\arcsec$) & \\ \hline \\ [-1ex ] HCP & J=2-1 & 5.8 & 79.90329 & 0.04 & 5 & 83 & 31 & 1 \\ PN & J=2-1 & 6.8 & 93.97977 & 2.92 & 5 & 80 & 26 & 2 \\ CP & N =2-1, J=3/2-1/2, F = 2-1 & 6.8 & 95.16416 & 0.33 & 5 & 80 & 26 & 3 \\ PO & J=5/2-3/2, $\Omega=1/2$, F=3-2, e & 8.4 & 108.99845 & 2.13 & 7 & 78 & 23 & 4 \\ PO & J=5/2-3/2, $\Omega=1/2$, F=2-1, e & 8.4 & 109.04540 & 1.92 & 5 & 78 & 23 & 4 \\ PO & J=5/2-3/2, $\Omega=1/2$, F=3-2, f & 8.4 & 109.20620 & 2.14 & 7 & 78 & 23 & 4 \\ PO & J=5/2-3/2, $\Omega=1/2$, F=2-1, f & 8.4 & 109.28119 & 1.93 & 5 & 78 & 23 & 4 \\ \hline \end{tabular} \tablebib{(1) \cite{bizzocchi}; (2) \cite{cazzoli}; (3) \cite{saito}; (4) \cite{bailleux}.} \end{table*} The intensity of the obtained spectra was converted from antenna ($T^{*}_{\mathrm{A}}$) to main beam temperature ($T_{\mathrm{mb}}$) units, using the following relation: $T_{\mathrm{mb}} = \frac{F_{\mathrm{eff}}}{B_{\mathrm{eff}}} \times {T^{*}_{\mathrm{A}}}$, where $F_{\mathrm{eff}}$ is the forward efficiency. $F_{\mathrm{eff}}$ is equal to 95$\%$ in the targeted frequency range. \section{Results} \label{results} The compact extragalactic source B0355+508 is located at $\mathrm{\alpha = 3h \, 59m \, 29.73s}$, $\mathrm{\delta = 50^\circ 57' 50.2''}$ with a low galactic latitude of $b= -1.6037^\circ$, incorporating a large amount of Galactic gas along the line of sight that harbors up to five diffuse/translucent clouds at the velocities of $ -4,\, -8, \, -10, \, -14 \, \mathrm{and} \, -17 \, \mathrm{km \, s^{-1}}$ \citep[e.g.][and references therein]{liszt18}. The flux of the blazar B0355+508 is variable over time and has been measured at $\sim3$ mm to be on average equal to $(4.62 \pm 1.02)$ Jy, after averaging the flux of 76 different observations \citep{agudo}. This corresponds to a temperature $T_{\mathrm{c}}$ of $(0.96 \pm 0.21)$ K at a beam size of 27$''$ by taking into account the Rayleigh-Jeans-Approximation. The obtained spectra were reduced and analyzed by the GILDAS software \citep{pety_gildas}. Every detected line was fitted via the standard CLASS Gaussian fitting method. 
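As a cross-check of the quoted continuum temperature, the following short Python snippet (ours; the observing frequency of 90 GHz is an assumption, since the flux is only quoted at $\sim3$ mm) converts the average flux density into a Rayleigh-Jeans brightness temperature in a 27$''$ Gaussian beam.
\begin{verbatim}
import numpy as np

# Convert the average flux density of B0355+508 into a Rayleigh-Jeans
# brightness temperature for a 27'' Gaussian beam (assumed frequency: 90 GHz).
k_B = 1.380649e-23                                  # J / K
c_light = 2.99792458e8                              # m / s
S_nu = 4.62e-26                                     # 4.62 Jy in W m^-2 Hz^-1
nu = 90e9                                           # Hz (assumption)
theta = 27.0 * np.pi / (180.0 * 3600.0)             # beam FWHM in rad
omega = np.pi * theta ** 2 / (4.0 * np.log(2.0))    # Gaussian-beam solid angle in sr
T_c = S_nu * (c_light / nu) ** 2 / (2.0 * k_B * omega)
print(T_c)  # ~0.96 K under these assumptions, consistent with the value quoted above
\end{verbatim}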
For the derivation of the peak opacity we use the radiative transfer equation \begin{eqnarray} \label{Eq:opacity} T_{\mathrm{mb}} &=& (J_{\nu}(T_{\mathrm{ex}})-J_{\nu}(T_{\mathrm{bg}}) - J_{\nu}(T_{\mathrm{c}}))\times(1-\exp{(-\tau)}) \Rightarrow \nonumber \\ \tau &=& -\ln \Bigl(1 - \frac{T_{\mathrm{mb}}}{J_{\nu}(T_{\mathrm{ex}})-J_{\nu}(T_{\mathrm{bg}})-J_{\nu}(T_{\mathrm{c}})}\Bigr), \end{eqnarray} \noindent{where $T_{\mathrm{ex}}$ is the excitation temperature, $T_{\mathrm{bg}}$ is the cosmic background temperature and $J(T) = (\frac{h \nu}{k_B})(e^{\frac{\mathrm{h \nu}}{k_{B} T }}-1)^{-1}$ describes the Rayleigh-Jeans temperature in K\footnote{In case of an emission line, $J_{\nu}(T_{\mathrm{c}})$ is neglected because $J_{\nu}(T_{\mathrm{ex}}) \gg J_{\nu}(T_{\mathrm{c}})$.}.} After obtaining the peak opacity $\tau$, the column density $N$ is then estimated by following the relation: \begin{eqnarray} \label{Eq:column_density} N = \tau \sqrt{\frac{16\pi^3}{\ln2}} \frac{\nu^3 Q_{\mathrm{rot}}(T_{\mathrm{ex}}) \Delta \varv \, e^{{E_u / k_B T_{\mathrm{ex}}}}}{c^3 A_{ul} \, g_u \, (e^{h \nu / k_B T_{\mathrm{ex}}}-1)}, \end{eqnarray} \noindent{with $k_B$ being the Boltzmann constant, $\Delta \varv$ is the linewidth (FWHM), $\nu$ is the transition frequency, $c$ is the speed of light, and $h$ stands for the Planck constant. $Q_{\mathrm{rot}}(T_{\mathrm{ex}})$ gives the partition function of a molecule at a given excitation temperature $T_{\mathrm{ex}}$.} The (2-1) transitions of HCP, CP, PN and PO were not detected within our observations (see Figure \ref{Fig:PN_PO_HCP_CP}). We derive 3$\sigma$ upper limits for the opacities and column densities of the P-bearing species by using Eqs. \eqref{Eq:opacity} and \eqref{Eq:column_density}. Due to the low densities in diffuse clouds, molecules are expected to show no collisional excitation. Thus, the column densities were calculated assuming $T_{\mathrm{ex}} = T_{\mathrm{bg}} = 2.7 \, \mathrm{K}$, which simplifies Eq. \eqref{Eq:opacity} to: \begin{eqnarray} \label{Eq:opacity_simplified} \tau &=& -\ln \Bigl(1 + \frac{T_{\mathrm{mb}}}{J_{\nu}(T_{\mathrm{c}})}\Bigr). \end{eqnarray} \noindent{The results are summarized in Table \ref{tab:upper_limits_PN_PO}.} \begin{figure*}[h!] \centering \includegraphics[width = 0.7\textwidth]{PN_PO_HCP_CP_spectrum.pdf} \caption{Spectra of the non-detected (2-1) transitions of PO, PN, HCP and CP. The upper x-axis shows the rest frequency (in MHz) and the lower one is a velocity axis (in $\mathrm{km \, s^{-1}}$). The red dashed line indicates the $3\sigma$ level and the blue dashed line shows the transition frequency of the corresponding molecule. 
In the case of PO, we show as an example one of the observed transitions, at 108.998 GHz.} \label{Fig:PN_PO_HCP_CP} \end{figure*}
\begin{table*} [h]
\centering
\caption{Derived upper limits for the opacity and the column density of HCP, CP, PN and PO.}
\label{tab:upper_limits_PN_PO}
\setlength{\tabcolsep}{10pt}
\begin{tabular} {c c c c c c }
\hline \hline \\
Species & Frequency & $\tau$ & N & $\mathrm{T_{MB}}$ & rms \\
 & (GHz)& & $(10^{11} \, \mathrm{cm^{-2}})$ & (K) & (mK) \\
\hline \\ [-1ex]
HCP & 79.90329 & $<0.02$ & $<22.7$ & $<0.02$ & 6 \\
CP & 95.16416 & $<0.02$ & $<12.6$ & $<0.02$ & 5 \\
PN & 93.97977 & $<0.02$ & $<0.42$ & $<0.02$ & 6 \\
PO & 108.99845 & $<0.02$ & $<4.29$ & $<0.02$ & 6 \\
 & 109.04540 & $<0.02$ &$<6.70$ & $<0.02$ & 6 \\
 & 109.20620 & $<0.02$ & $<4.34$ & $<0.02$ & 6 \\
 & 109.28119 & $<0.02$ & $<6.69$ & $<0.02$ & 6 \\
\hline \\
\end{tabular} \\
\tablefoot{The upper limits are 3$\mathrm{\sigma}$.}
\end{table*}
We have detected the HNC (1-0), CN (1-0) and $\mathrm{C^{34}S}$ (2-1) transitions in absorption, as well as the $\mathrm{^{13}CO}$ (1-0) transition in emission, in the 3 mm range with a high signal-to-noise ratio (S/N), ranging from 6 to 80\footnote{The rms levels lie between 4 and 13 mK.}. Figure \ref{Fig:all_spectra} shows all the detected spectra towards the line of sight to B0355+508. In the case of CN, we were able to detect and resolve four hyperfine components from 113.123 GHz to 113.191 GHz (see Figure \ref{Fig:spectrum}). Every hyperfine component was detected in the three velocity components at $-8, \, -10, \, -17 \,\mathrm{km \, s^{-1}}$, except for the one weak transition at 113.123 GHz, which was identified only in two clouds (at $-10, \, -17 \, \mathrm{km \, s^{-1}}$). The molecule HNC was identified in all five cloud components, while $\mathrm{C^{34}S}$ (in absorption) and $\mathrm{^{13}CO}$ (in emission) were detected solely towards the densest features, at $-10 \, \mathrm{km \, s^{-1}}$ and $-17 \, \mathrm{km \, s^{-1}}$. Table \ref{tab:detected species} lists the identified species and the corresponding spectroscopic parameters.
\begin{figure}[h]
\centering
\includegraphics[width = 0.4\textwidth]{all_species.pdf}
\caption{Spectra of the detected species HNC, CN, $\mathrm{C^{34}S}$ and $\mathrm{^{13}CO}$ in the 3 mm range towards the line of sight to the extragalactic source B0355+508. The red line represents the CLASS Gaussian fit.}
\label{Fig:all_spectra}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width = 1.0\textwidth]{CN_hyperfine_new.pdf}
\caption{Detected hyperfine components of the CN (1-0) transition between 113.12 and 113.20 GHz. The three strongest hyperfine components were detected in the three clouds with $\varv_{\mathrm{LSR}}=-8, \, -10, \, -17 \,\mathrm{km \, s^{-1}}$, except for the one weak transition $\mathrm{(N, F) = (1, 1/2) - (0, 1/2)}$, which was identified only in the two densest clouds (at $-10, \, -17 \, \mathrm{km \, s^{-1}}$).}
\label{Fig:spectrum}
\end{figure*}
\begin{table*} [!
h] \center \caption{Spectroscopic parameters of the detected species and telescope settings.} \label{tab:detected species} \setlength{\tabcolsep}{10pt} \begin{tabular}{c c c c c c c c c} \hline \hline \\ Species & Transitions & $E_{\mathrm{up}}$ & Frequency & $A_\mathrm{ul}$ & $g_u$ & $B_{eff}$ & $\theta_{\mathrm{MB}}$ & References \\ & & (K) & (GHz) & $\mathrm{10^{-5} \, s^{-1}}$ & & ($\%$) & ($\arcsec$) &\\ \hline \\ [-1ex ] HNC & J=1-0 & 4.4 & 90.66357 & 2.69 & 3 & 81 & 27 & 1 \\ CN & N= 1-0, F = 3/2-1/2 & 5.5 & 113.17049 & 0.51 & 4 & 78 & 22 & 2 \\ $\mathrm{C^{34}S}$ & J= 2-1 & 6.9 & 96.41295 & 1.60 & 5 & 80 & 26 & 3 \\ $\mathrm{^{13}CO}$ & J=1-0 & 5.3 & 110.20135 & 0.006 & 6 & 78 & 22 & 4\\ \hline \end{tabular} \tablebib{(1) \cite{saykally}; (2) \cite{dixon}; (3) \cite{gottlieb}; (4) \cite{klapper, cazzoli04}.} \end{table*} For estimating the CN column density we use the hyperfine component at 113.170 GHz. Our derived opacities and column densities of CN agree within a factor of 2-3 with previous results \citep{liszt01}, while the HNC results are well reproduced within a factor of 1.5. Table \ref{tab:CN} summarizes the derived opacities and column densities of the detected species, as well as the obtained line intensities and rms levels. \begin{table*} [h] \centering \caption{Gaussian fitting results of CN, HNC, $\mathrm{C^{34}S}$, $\mathrm{^{13}CO}$.} \label{tab:CN} \tabcolsep 2.8pt \begin{tabular} {c c c c c c c c c} \hline \hline \\ Species & Velocity & $\Delta \varv$& $\tau$ & $\mathrm{T_{MB}}$ & rms & Spectral & N & N \\ & $(\mathrm{km \, s^{-1}})$ & $(\mathrm{km \, s^{-1}})$ & & (K) & (mK) & resolution & $(\mathrm{cm^{-2}})$ & $(\mathrm{cm^{-2}})$ \\ & & & & & & $(\mathrm{km \, s^{-1}})$ & (this work) & (previous work)$^{(1)}$ \\ \hline \\ [-1ex] CN & & & & & &\\ & $-17.23 \pm 0.01$& $0.54 \pm 0.02$ & $0.23\pm0.07$ & $0.20\pm0.02$ & 9 & 0.13 &$(0.87\pm0.28) \times 10^{13}$ & $(2.14\pm0.24) \times 10^{13}$ \\ & $-10.41 \pm 0.01$ & $0.49 \pm 0.01$ & $0.36 \pm 0.09$ & $0.29\pm0.02$ & 9 & 0.13 & $(1.23\pm0.34) \times 10^{13}$ & ($3.34\pm0.37)\times 10^{13}$ \\ & $-8.47\pm0.04$ & $0.97\pm0.09$ & $0.06\pm0.03$ & $0.06\pm0.01$ & 9 & 0.13 & $(0.43\pm0.20) \times 10^{13}$ & $(0.76\pm0.21)\times 10^{13}$ \\ \hline \\ HNC & & & & & &\\ & $-17.197\pm0.004$ & $0.73\pm0.01$ & $0.50\pm0.10$ & $0.37\pm0.01$ & 6 & 0.17 & $(0.69\pm0.16)\times 10^{12}$ & $(0.74\pm0.02)\times 10^{12}$ \\ & $-10.392\pm0.003$ & $0.73\pm0.01$ & $0.87\pm0.16$ & $0.56\pm0.01$ & 7 & 0.17 & $(1.20\pm0.23)\times 10^{12}$ & $(1.14\pm0.04)\times 10^{12}$ \\ & $-8.37\pm0.01$ & $1.14\pm0.04$ & $0.15\pm0.04$ & $0.13\pm0.01$ & 6 & 0.17 & $(0.32\pm0.15)\times 10^{12}$ & $(0.39\pm0.02)\times 10^{12}$ \\ & $-13.68\pm0.07$ & $2.30\pm0.23$ & $0.04\pm0.01$ & $0.04\pm0.01$ & 6 & 0.17 & $(0.16\pm0.08)\times 10^{12}$ & $(0.10\pm0.02)\times 10^{12}$ \\ & $-4.32\pm0.06$ & $1.71\pm0.15$ & $0.04\pm0.02$ & $0.04\pm0.01$ & 6 & 0.17 & $(0.12\pm0.06)\times 10^{12}$ & $(0.14\pm0.01)\times 10^{12}$ \\ \hline \\ $\mathrm{C^{34}S}$ & & & & & & \\ &$-17.20\pm0.02$ & $0.43\pm0.04$ & $0.04\pm0.02$ & $0.04\pm0.01$ & 4 & 0.16 & $(1.64\pm0.82)\times 10^{11}$ & $<3.2\times 10^{11}$ $^{(2)}$ \\ & $-10.39\pm0.01$ & $0.48\pm0.02$ & $0.08\pm0.03$ & $0.07\pm0.01$ & 4 & 0.16 & $(3.33\pm1.23)\times 10^{11}$ & $<4.4\times 10^{11}$ $^{(2)}$\\ \hline \\ $\mathrm{^{13}CO}$ & & & & & & \\ & $-16.935\pm0.004$ & $0.82\pm0.01$ & $0.154\pm0.004$ & $0.41\pm0.01$ & 8 & 0.14 & $(3.98\pm0.16)\times 10^{14}$ & $(4.34\pm0.51)\times 10^{14}$ $^{(3)}$ \\ & $-10.03\pm0.01$ & 
$1.82\pm0.03$ &$0.135\pm0.004$ & $0.36\pm0.01$ & 13 & 0.14 & $(7.73\pm0.38)\times 10^{14}$ & $(1.79\pm0.26)\times 10^{14}$ $^{(3)}$\\ \hline \\
\end{tabular} \\
\tablebib{(1) \cite{liszt01}, (2) \cite{liszt_isotope}, (3) \cite{liszt98}.}
\end{table*}
\cite{liszt02} reported the detection of the main isotopologue $\mathrm{C^{32}S}$, obtained with the IRAM Plateau de Bure Interferometer (PdBI), and estimated a column density of $(4.27 \pm 0.16) \times 10^{12} \, \mathrm{cm^{-2}}$ for the $-10 \,\mathrm{km \, s^{-1}}$ component and $(3.06 \pm 0.32) \times 10^{12} \, \mathrm{cm^{-2}}$ at $-17 \, \mathrm{km \, s^{-1}}$. With the above values and the column densities of $\mathrm{C^{34}S}$ calculated in this work, we derive a sulphur isotopic ratio $\mathrm{^{32}S/^{34}S}$ of $12.8\pm4.8$ and $18.7\pm9.5$ for the components at $-10 \, \mathrm{km \, s^{-1}}$ and $-17 \, \mathrm{km \, s^{-1}}$, respectively. The latter value is in good agreement with the $\mathrm{^{32}S/^{34}S}$ ratio for the local ISM of $24 \pm 5$ \citep{chin1996}. However, the isotopic ratio determined for $\varv = -10 \, \mathrm{km \, s^{-1}}$ is considerably lower than the local interstellar value, which could be the result of opacity effects in the $\mathrm{C^{32}S}$ line. In addition, the determination of the sulphur isotopic ratio was based on just one spectral line each of the main species $\mathrm{C^{32}S}$ and its isotopologue, which also yields a high uncertainty. To our knowledge this is the first detection of $\mathrm{C^{34}S}$ towards this line of sight, owing to the high spectral resolution of $\sim50$ kHz and the high sensitivity (rms of $\sim4$ mK) achieved with our observations. In \cite{liszt98}, detections of the main species $\mathrm{^{12}CO}$ and its isotopologue $\mathrm{^{13}CO}$ are reported, which were obtained with the PdBI as well as the NRAO 12m telescope. The single-dish observations covered a large beam of 60\arcsec, thus seeing CO and its isotopologue in emission, while the interferometric observations were sensitive only to the very narrow column of gas towards the strong background blazar, giving rise to absorption lines. For deriving the excitation temperatures and the column densities, both emission (single-dish data) and absorption lines (interferometric data) were considered. In Figure \ref{Fig:all_spectra} it is clearly visible that the strong $\mathrm{^{13}CO}$ emission line at $-10 \,\mathrm{km \, s^{-1}}$ overlaps with an absorption feature at around $-8 \,\mathrm{km \, s^{-1}}$. This is probably due to the fact that absorption is present close to the background source, so that emission and absorption lines are merged together in our observations with the IRAM 30m telescope. This contamination effect influences the line profile at $-10 \,\mathrm{km \, s^{-1}}$, which subsequently results in an unreliable fit. This could possibly explain why the $\mathrm{^{13}CO}$ column density derived at $-10 \,\mathrm{km \, s^{-1}}$ deviates by a factor of $\sim4$ from previous results \citep{liszt98}, while towards $-17 \,\mathrm{km \, s^{-1}}$, $N(\mathrm{^{13}CO})$ is well reproduced within 10\% (see Table \ref{tab:CN})\footnote{We used a $T_{\mathrm{ex}}$ of 6 K for deriving $N(\mathrm{^{13}CO})$, as inferred in \cite{liszt98}.}. Our derived isotopic ratio $\mathrm{^{12}CO/^{13}CO}$ at $-17 \, \mathrm{km \, s^{-1}}$ is equal to $16.7\pm1.4$. For this estimate we used the column density of $\mathrm{^{12}CO}$ derived in \cite{liszt98}, $N(\mathrm{^{12}CO}) = (6.64 \pm 0.47) \times 10^{15} \, \mathrm{cm^{-2}}$.
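The quoted ratios and their uncertainties follow from standard first-order error propagation of the column densities listed above, as the following short sketch (ours, for illustration only) shows.
\begin{verbatim}
import numpy as np

# First-order error propagation for the isotopic ratios quoted above.
def ratio(a, da, b, db):
    r = a / b
    dr = r * np.sqrt((da / a) ** 2 + (db / b) ** 2)
    return r, dr

# 32S/34S: N(C32S) from the PdBI measurements cited above, N(C34S) from this work
print(ratio(4.27e12, 0.16e12, 3.33e11, 1.23e11))   # ~12.8 +/- 4.8  (-10 km/s)
print(ratio(3.06e12, 0.32e12, 1.64e11, 0.82e11))   # ~18.7 +/- 9.5  (-17 km/s)

# 12CO/13CO at -17 km/s: N(12CO) from the literature value quoted above,
# N(13CO) from this work
print(ratio(6.64e15, 0.47e15, 3.98e14, 0.16e14))   # ~16.7 +/- 1.4
\end{verbatim}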
The resulting CO isotopic ratio is almost a factor of $\sim4$ lower than the local interstellar ratio $\mathrm{^{12}C/^{13}C}= 60$ \citep{liszt_isotope}. This behaviour was already found in previous studies \citep{liszt07, liszt17}, which show an increased insertion of $\mathrm{^{13}C}$ into CO towards clouds in the translucent regime with elevated densities and/or weaker radiation fields, leading to an enhanced abundance of $\mathrm{^{13}CO}$ by a factor of 2-4. Under these conditions, isotope exchange fractionation ($\mathrm{^{13}C^+} + \mathrm{^{12}CO} \rightarrow \mathrm{^{12}C^{+}} + \mathrm{^{13}CO + 35 \, K}$) dominates over selective photodissociation.

\section{Chemical Modeling} \label{model}
The goal of the present study is to constrain and improve our model of diffuse and translucent clouds in order to make reliable predictions for the abundances of phosphorus-bearing species (and also of others). For this reason, we have used the observations of HNC, CN, CS and CO to constrain the physical parameters in our model. The chemical code that we have applied was developed by \cite{vasyunin2013}, with an updated grain-surface chemistry (Vasyunin et al. 2019, in prep.). The model includes a gas-grain chemical network with 6000 gas-phase reactions, 200 surface reactions and 660 species. Accretion and desorption processes regulate and connect the gas-phase and grain-surface chemistry. The code numerically solves the coupled differential equations (chemical rate equations) and computes a set of time-dependent molecular abundances. Since the observations were carried out towards diffuse/translucent clouds, we have considered as initial elemental abundances the standard solar elemental composition as reported in \cite{asplund} (see Table \ref{tab:initial}). We note that our initial elemental abundances are significantly different from the low-metal abundances used in \cite{wakelam} for dark clouds (S is 200 times more abundant, and Fe, Cl, P, and F are up to $10^4$ times more abundant). In particular, the initial abundance of P is $2.6\times10^{-7}$ and thus well constrained, unlike in dense molecular clouds. This approach will help us to elucidate the chemistry of P much better, since a key parameter of the chemical model is well determined. In addition, we begin our chemical simulations with hydrogen completely in its atomic form, in order to have purely atomic diffuse-cloud conditions as a starting point.
\begin{table} [! h]
\center
\caption{Assumed solar initial elemental abundances \citep{asplund}.}
\label{tab:initial}
\setlength{\tabcolsep}{10pt}
\begin{tabular} {c c}
\hline \hline \\
Species & Abundances \\
\hline \\ [-1ex ]
H & 1.0 \\
He & $8.5\times10^{-2}$ \\
N & $6.8\times10^{-5}$ \\
O & $4.9\times10^{-4}$ \\
$\mathrm{C^+}$ & $2.7\times10^{-4}$ \\
$\mathrm{S^+}$ & $1.3\times10^{-5}$ \\
$\mathrm{Si^+}$ & $3.2\times10^{-5}$ \\
$\mathrm{Fe^+}$ & $3.2\times10^{-5}$ \\
$\mathrm{Na^+}$ & $1.7\times10^{-6}$ \\
$\mathrm{Mg^+}$ & $3.9\times10^{-5}$ \\
$\mathrm{Cl^+}$ & $3.2\times10^{-7}$ \\
$\mathrm{P^+}$ & $2.6\times10^{-7}$ \\
$\mathrm{F^+}$ & $3.6\times10^{-8}$ \\
\hline
\end{tabular}
\end{table}
The phosphorus chemical network, which has already been used in previous studies \citep{fontani, rivilla16}, has been extended with new information available in the literature (new reactions, updated reaction rates, desorption energies, etc.). In particular, chemical reactions of several P-bearing species, such as PN, PO, HCP, CP and $\mathrm{PH_3}$, were included and/or updated in our chemical network.
The reaction rates were taken from the online chemical databases KInetic Database for Astrochemistry \citep[KIDA]{wakelam}\footnote{http://kida.obs.u-bordeaux1.fr} and the UMIST Database for Astrochemistry \citep[UDfA]{McElroy}\footnote{http://udfa.ajmarkwick.net/index.php?mode=species}, as well as from numerous previous papers \citep{thorne84, adams, millar91, anicich93, charnley94, jimenez18}. In particular, we have included several reactions involving the formation and destruction of $\mathrm{PH_n}$ $(n=1,2,3)$ and their cationic species from \cite{charnley94} and \cite{anicich93}, along with the chemical network proposed by \cite{thorne84} that contains production and loss routes for P, PO, $\mathrm{P^+}$, $\mathrm{PO^+}$, $\mathrm{PH^+}$, $\mathrm{HPO^+}$ and $\mathrm{H_2PO^+}$. In addition, we extended the PN chemical network based on the work by \cite{millar87}, and we took into account the gas-phase reaction $\mathrm{P + OH \rightarrow PO + H}$ proposed by \cite{jimenez18}, as well as two formation routes of PN in the gas phase, $\mathrm{N + CP \rightarrow PN +C}$ and $\mathrm{P+ CN \rightarrow PN +C}$, by \cite{agundez07}. Finally, we included the photodissociation reactions of PN, PO, HCP, and $\mathrm{PH_n}$ $(n=1,2,3)$ based on the reaction rates given in KIDA and UDfA. The reaction rates of the photodissociation of $\mathrm{PH_n}$ were assumed to be equal to those of the analogous reactions for $\mathrm{NH_n}$. Concerning the chemistry taking place on grain surfaces, we have taken into account the hydrogenation reactions of P-bearing species (where the prefix ``g'' denotes a grain-surface species) as well as their corresponding desorption reactions: \begin{itemize} \item $\mathrm{gH + gP \rightarrow gPH}$, \\ \item $\mathrm{gH + gPH \rightarrow gPH_2}$, \\ \item $\mathrm{gH + gPH_2 \rightarrow gPH_3}$. \\ \end{itemize} The desorption energy of $\mathrm{PH_3}$ was calculated based on that of $\mathrm{NH_3}$ and amounts to $\sim5800 \, \mathrm{K}$. This corresponds to an evaporation temperature of $\sim100 \, \mathrm{K}$, which is in good agreement with the value proposed by \cite{turner1990} ($\sim90 \, \mathrm{K}$)\footnote{The evaporation temperature describes the temperature at which a given species starts to desorb thermally.}. The reactive desorption efficiency in our chemical model is set equal to 1\%. An increased reactive desorption efficiency of 10\% changes the predicted abundances of the aforementioned P-bearing molecules by less than a factor of 2. Another nonthermal desorption mechanism that is included in our model is cosmic-ray desorption, which is fully described in \cite{hasegawa93}. Based on this study, dust grains are heated upon impact with cosmic rays, reaching a peak temperature $T_{\mathrm{dust}}$ of 70 K, which subsequently leads to preferential desorption of molecules from grain surfaces. This type of desorption is however negligible in diffuse clouds, where photodesorption dominates. In our model we adopt for all species a photodesorption rate of $3 \times 10^{-3}$ molecules per incident UV photon, as it was determined in \cite{oeberg2007} based on laboratory measurements of pure CO ice. This stands in good agreement with the photodesorption yield of $\sim10^{-3}$ molecules/UV photon found for other species, such as $\mathrm{H_2O}$, $\mathrm{O_2}$ and $\mathrm{CH_4}$ \citep{oeberg2009, fayolle2013, dupuy2017}.
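To illustrate how a desorption energy translates into an evaporation temperature, the sketch below evaluates the first-order thermal desorption rate $k_{\mathrm{des}} = \nu_0 \exp(-E_D/T_{\mathrm{dust}})$ (with $E_D$ expressed in K) and reports the dust temperature at which desorption becomes faster than an assumed reference timescale. The attempt frequency $\nu_0 = 10^{12}\,\mathrm{s^{-1}}$ and the $10^6$ yr reference timescale are assumptions made only for this illustration; only the $\mathrm{PH_3}$ desorption energy is taken from the text.
\begin{verbatim}
import numpy as np

E_D   = 5800.0           # desorption energy of PH3 in K (value adopted above)
nu0   = 1.0e12           # assumed characteristic attempt frequency (s^-1)
t_ref = 1.0e6 * 3.15e7   # assumed reference timescale: 10^6 yr in seconds

def desorption_rate(T_dust):
    """First-order thermal desorption rate (s^-1) at dust temperature T_dust (K)."""
    return nu0 * np.exp(-E_D / T_dust)

# Scan dust temperatures and report where desorption outpaces the reference time
for T in np.arange(60.0, 150.0, 5.0):
    if desorption_rate(T) * t_ref >= 1.0:
        print("PH3 starts to desorb thermally near T_dust ~", T, "K")
        break
\end{verbatim}
With these assumptions the threshold falls near the $\sim100$ K evaporation temperature quoted above.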
\subsection{Comparison to observations} In order to reproduce the observed abundances of HNC, CN, CS and CO in every cloud towards the line of sight to B0355+508, we produce a grid of models applying typical physical conditions for diffuse/translucent gas \citep{snow, thiel}. We note here that, since our chemical model does not treat isotopic species, we use as a reference for our comparison the main species $\mathrm{^{12}CO}$ and $\mathrm{C^{32}S}$, instead of $\mathrm{^{13}CO}$ and $\mathrm{C^{34}S}$. For the fractional abundances of $\mathrm{^{12}CO}$ and $\mathrm{C^{32}S}$, we adopt the column densities determined in \cite{liszt98} and \cite{liszt02}. In addition, for the clouds at $-14 \,\mathrm{km \, s^{-1}}$ and $-4 \,\mathrm{km \, s^{-1}}$ we use for CN and CS the upper limits derived in this work ($N(\mathrm{CN}) < 10^{12} \, \mathrm{cm^{-2}}$) and in \cite{liszt02}. The parameter space that we investigate is listed below: \begin{itemize} \item $n\mathrm{(H)} = 100 - 1000\, \mathrm{cm^{-3}}$, spacing of $100\, \mathrm{cm^{-3}}$, \\ \item $A_V = 1-5 \, \mathrm{mag}$, spacing of $1 \, \mathrm{mag}$, \\ \item $T_{\mathrm{gas}}=20-100 \, \mathrm{K}$, spacing of $10 \, \mathrm{K}$. \\ \end{itemize} The chemical evolution in each model is simulated over $10^7$ yrs (100 timesteps) by assuming static physical conditions. For the cosmic-ray ionisation rate $\zeta(\mathrm{CR})$ we use a value of $1.7 \times 10^{-16} \, \mathrm{s^{-1}}$, as it was derived in \cite{indriolo} (see Section \ref{P-stuff} for further explanation). This also corresponds to the values applied in \cite{godard14} and \cite{lepetit04}, where the best-fit models provided a cosmic-ray ionisation rate of $10^{-16} \, \mathrm{s^{-1}}$ and $2.5\times 10^{-16} \, \mathrm{s^{-1}}$, respectively. Given the above parameter space, we calculate the level of disagreement $D(t,r)$ between modeled and observed abundances (for the species HNC, CN, CS and CO), which, following \cite{wakelam_herbst} and \cite{vasyunin17}, we define as \begin{eqnarray} \label{agreement} D(t,r) = \sum_{j=1}^{N_{\mathrm{species}}} \lvert\log(x^{j}_{\mathrm{mod}}(t,r)) - \log(x^{j}_{\mathrm{obs}})\rvert, \end{eqnarray} \noindent{with $r=(n\mathrm{(H)}, A_V, T_{\mathrm{gas}})$ and $x^{j}_{\mathrm{obs, \, mod}}$ being the observed and modeled abundance of species $j$, respectively.} We then determine the minimal value of $D(t,r)$, denoted $D_{\mathrm{min}}(t,r)$, which corresponds to the best-fit model; the parameters $(t,r)$ provided by the best-fit model are the ones giving the smallest deviation between observations and predictions. The smaller the $D_{\mathrm{min}}(t,r)$, the better the agreement. According to \cite{pety} and references therein, the clouds with $\varv_{\mathrm{LSR}} = -10 \, \mathrm{km \, s^{-1}}$ and $-17 \, \mathrm{km \, s^{-1}}$ towards B0355+508 show strong $\mathrm{^{12}CO}$ emission lines that originate from dense regions (with $n\mathrm{(H)}=300-500 \, \mathrm{cm^{-3}}$ and $N(\mathrm{H_2})>10^{21} \, \mathrm{cm^{-2}}$) which lie right outside the synthesized beam (combination of the 30m and PdBI telescopes), but still within the IRAM 30m beam \citep{pety}. Based on the CO (2-1) maps shown in \cite{pety} with a 22\arcsec and 5.8\arcsec resolution, the cloud at $\varv_{\mathrm{LSR}} = -10 \,\mathrm{km \, s^{-1}}$ shows the most pronounced, dense sub-structure.
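The grid evaluation and the minimization of $D(t,r)$ from Eq. (\ref{agreement}) can be sketched as follows in Python; the model abundances are represented by a placeholder function, since the actual values come from the full chemical code, and the observed abundances are those of the $-17 \,\mathrm{km \, s^{-1}}$ component listed in Table \ref{tab:comparison_model_obs}.
\begin{verbatim}
import itertools
import numpy as np

# Observed fractional abundances for the -17 km/s component
x_obs = {"HNC": 8.0e-10, "CN": 1.0e-8, "CS": 3.6e-9, "CO": 7.7e-6}

def model_abundances(n_H, A_V, T_gas, t):
    """Placeholder for the chemical code: abundances for one parameter set and time."""
    return {"HNC": 2.1e-10, "CN": 1.4e-8, "CS": 3.4e-9, "CO": 8.0e-6}

def disagreement(x_mod):
    """D(t, r): sum over species of |log10 x_mod - log10 x_obs|."""
    return sum(abs(np.log10(x_mod[s]) - np.log10(x_obs[s])) for s in x_obs)

grid_nH   = np.arange(100.0, 1001.0, 100.0)   # cm^-3
grid_AV   = np.arange(1.0, 6.0, 1.0)          # mag
grid_Tgas = np.arange(20.0, 101.0, 10.0)      # K
times     = np.logspace(3, 7, 100)            # yr, 100 output timesteps

best = min(
    ((disagreement(model_abundances(n, a, T, t)), (n, a, T, t))
     for n, a, T, t in itertools.product(grid_nH, grid_AV, grid_Tgas, times)),
    key=lambda item: item[0])
print("D_min =", best[0], "at (n_H, A_V, T_gas, t) =", best[1])
\end{verbatim}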
This is supported by the fact that this particular cloud component shows the largest amounts of the observed species, both in the data presented in this work and in previous works by \cite{liszt01} and \cite{liszt00}, and is thus chemically the most complex one. The component at $\varv_{\mathrm{LSR}} = -8 \,\mathrm{km \, s^{-1}}$ shows a similar structure as the one at $\varv_{\mathrm{LSR}} = -10 \,\mathrm{km \, s^{-1}}$, incorporating a dense region as well. \cite{pety} suggests that the two components are part of the same cloud, even though they are distinguishable in absorption and show different levels of chemical complexity. With a higher spatial resolution of 5.8\arcsec, the CO emission at $\varv_{\mathrm{LSR}} = -8 \,\mathrm{km \, s^{-1}}$ is separated from the one at $\varv_{\mathrm{LSR}} = -10 \,\mathrm{km \, s^{-1}}$ and is clearly visible. The diffuse gas seen at velocities $ -14 \,\mathrm{km \, s^{-1}}$ and $-4 \,\mathrm{km \, s^{-1}}$ shows barely any $\mathrm{^{13}CO}$ \cite[and this work]{liszt98} and $\mathrm{^{12}CO}$ \citep{pety, liszt98} in emission, which suggests that the density of these clouds is too low to sufficiently excite CO. \cite{pety} estimated a low to moderate density for these clouds of $\sim64-256 \, \mathrm{cm^{-3}}$ with $A_V < 2$ mag. Since we are performing single-dish observations with a beam of $\sim22\arcsec$, we also cover the high density regions that produce significant $\mathrm{^{12}CO}$ emission \citep{pety}. For this reason we constrain our grid of models to high densities of $\geq 300 \, \mathrm{cm^{-3}}$ for the denser clouds ($-8,\, -10, \, -17 \, \mathrm{km \, s^{-1}}$), while for the low-density objects at $-14 \, \mathrm{km \, s^{-1}}$ and $-4 \, \mathrm{km \, s^{-1}}$ we restrict our input parameters to $n\mathrm{(H)} \leq 200 \, \mathrm{cm^{-3}}$ and $A_V < 2$ mag. The calculation of the molecular abundances was done with respect to the $\mathrm{H_2}$ column densities ($N(\mathrm{H_2}) \sim4-5 \times 10^{20} \, \mathrm{cm^{-2}}$) that were derived towards every cloud by \cite{liszt18}. However, our model provides the fractional abundance of a species $X$ with respect to the total number of hydrogen nuclei, i.e. $n(\mathrm{X})/n(\mathrm{H})$, with the total volume density of hydrogen nuclei defined as $n\mathrm{(H)} = n(\mathrm{H\,I}) + 2\, n(\mathrm{H_2})$. The surface mobility parameters that we set as default values in our model (see Section \ref{surface_chemistry}) enable a fast and effective formation of $\mathrm{H_2}$ on the surfaces of grains. At the end of our simulations (at $t=10^7$ yrs), the $\mathrm{H_2}$ abundance reaches a value of $40\%-50\%$ (depending on the set of parameters). This means that almost all hydrogen is predicted to be in its molecular form at the late phases of the chemical evolution. Following this consideration, we divide all the observed abundances and their upper limits mentioned in this paper by a factor of 2, since $n\mathrm{(X)}/[n(\mathrm{H\,I}) + 2\, n(\mathrm{H_2})] \simeq n\mathrm{(X)}/[2\, n(\mathrm{H_2})] = 0.5 \, n\mathrm{(X)}/n(\mathrm{H_2})$. We note that this expression applies to the high density parts of the clouds and does not account for the low density (and H I rich) gas along the line of sight.
\begin{figure*}[h] \centering \includegraphics[width =1.0\textwidth]{grid_of_models_reactive_desorption_1percent_up_to_5mag.pdf} \caption{Results of the grid of models applying typical conditions for diffuse/translucent clouds in order to reproduce the observations towards the cloud at $\varv_{\mathrm{LSR}} = -17 \,\mathrm{km \, s^{-1}}$. The deviation between observations and model at the time of best agreement $t_{\mathrm{best}}$ is given by $D(t_{\mathrm{best}},r)$, which is plotted versus the density, temperature and visual extinction. The best-fit model is given at a time $t_{\mathrm{best}} = 6.2\times 10^6 \, \mathrm{yrs}$ and has the following parameters: $(n\mathrm{(H)}, A_V, T_{\mathrm{gas}}) = (300 \, \mathrm{cm^{-3}}, \, 3 \, \mathrm{mag}, \, 40 \, \mathrm{K})$. Between an $A_V$ of 1 and 3 mag the minimal level of disagreement $D(t_{\mathrm{best}},r)$ drops by 13\%.} \label{Fig:best model} \end{figure*} Figure \ref{Fig:best model} shows the results of the grid of models that was applied to reproduce the observations towards $\varv_{\mathrm{LSR}} = -17 \,\mathrm{km \, s^{-1}}$. In particular, we plot $D(t_{\mathrm{best}},r)$, which describes the deviation between observed and modeled abundances at the time of best agreement $t_{\mathrm{best}}$, versus the density, temperature and visual extinction. Between an $A_V$ of 1 and 3 mag the smallest level of disagreement $D(t_{\mathrm{best}},r)$ decreases by 13\%. The main discrepancy between observed and modeled abundances arises at low $A_V$, since a higher visual extinction results in higher molecular abundances and is therefore better able to reproduce the chemical complexity seen towards the translucent clouds. For models with $A_V > 3 \, \mathrm{mag}$ the minimal $D(t_{\mathrm{best}},r)$ barely changes (an increase of less than 1\%). The smallest $D(t_{\mathrm{best}},r)$ increases with density and temperature by at most 3\% and 2\%, respectively. This is a clear indication that the most influential physical parameter in our analysis is the visual extinction. For the cloud component at $\varv_{\mathrm{LSR}} = -17 \,\mathrm{km \, s^{-1}}$ the best-fit model with $D_{\mathrm{min}}(t,r)$ is reached at a time $t_{\mathrm{best}} = 6.2\times 10^6 \, \mathrm{yrs}$ and has the parameters $r_{\mathrm{best}} = (n\mathrm{(H)}, A_V, T_{\mathrm{gas}}) = (300 \, \mathrm{cm^{-3}}, \, 3 \, \mathrm{mag}, \, 40 \, \mathrm{K})$. At $t_{\mathrm{best}} = 6.2\times 10^6 \, \mathrm{yrs}$ we also fulfill the assumption of having most of the hydrogen in molecular form, as the $\mathrm{H_2}$ abundance reaches a value of 0.45. Based on this model, we show in Figure \ref{Fig:best model2} the time-dependent abundances of CO, CN, CS and HNC over $10^7$ yrs as well as the corresponding observed abundances towards $\varv_{\mathrm{LSR}} = -17 \,\mathrm{km \, s^{-1}}$. Our chemical model reproduces the observed species CO, CN and CS very well, within a factor of $\sim 1-1.4$, at $t_{\mathrm{best}}=6.2 \times 10^6$ yrs. As can be seen in Figure \ref{Fig:best model2}, the predicted abundances follow the order of the observed quantities. The most significant discrepancy is found in the case of HNC, where the chemical model underestimates the observed abundance by a factor of 4 at the time of best agreement (see Table \ref{tab:comparison_model_obs}). According to the model, one of the main destruction mechanisms of HNC is $\mathrm{C^{+} + HNC \rightarrow C_2N^{+} + H}$. Based on the online chemical databases, its reaction rate remains uncertain.
In UDfA the given reaction rate was determined theoretically \citep{leung}, and therefore carries a high uncertainty. Experimental studies of the above chemical route are still needed to make reliable predictions of the HNC abundance. \begin{figure*}[h] \centering \includegraphics[width = 0.7\textwidth]{best_model_observed_species.pdf} \caption{Chemical evolution of the abundances of CO, CN, CS and HNC over $10^7$ yrs predicted by our best-fit model with the parameters $(n\mathrm{(H)}, A_V, T_{\mathrm{gas}}) = (300 \, \mathrm{cm^{-3}}, \, 3 \, \mathrm{mag}, \, 40 \, \mathrm{K})$. The coloured horizontal bands correspond to the observed abundances towards the cloud with $\varv_{\mathrm{LSR}} = -17 \,\mathrm{km \, s^{-1}}$, including the inferred uncertainties. The vertical dashed line indicates the time of best agreement ($t=6.2 \times 10^6$ yrs) between observations and model results.} \label{Fig:best model2} \end{figure*} \begin{table*} [! h] \center \caption{Observed abundances for the cloud component with $\varv_{\mathrm{LSR}} = -17 \, \mathrm{km \, s^{-1}}$ and predictions of the species HNC, CO, CS and CN based on our best-fit model at the time of best agreement $t= 6.2\times 10^6 \, \mathrm{yrs}$. The last column lists the ratio of observed to predicted abundances.} \label{tab:comparison_model_obs} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c c} \hline \hline \\ Species & Observed & Predicted & Ratio\\ & Abundance & Abundance & (Observed/Predicted)\\ \hline \\ [-1ex ] HNC & $8.0(1.9)\times10^{-10}$ & $2.1\times10^{-10}$ & 3.8 \\ CN & $1.0(0.3)\times10^{-8}$ & $1.4\times10^{-8}$ & 0.7 \\ CS & $3.6(0.4)\times10^{-9}$ & $3.4\times10^{-9}$ & 1.1 \\ CO & $7.7(0.6)\times10^{-6}$ & $8.0\times10^{-6}$ & 1.0 \\ \hline \end{tabular} \tablefoot{For the calculation of the observed abundances we used an $N(\mathrm{H_2})$ value of $4.30 \times 10^{20} \, \mathrm{cm^{-2}}$, as determined in \cite{liszt18}.} \end{table*} The same set of physical parameters also provided the smallest deviation from the observations for the cloud with $\varv_{\mathrm{LSR}} = -10 \,\mathrm{km \, s^{-1}}$. However, in this case the best-fit model gives a $D_{\mathrm{min}}(t,r)$ that is larger by a factor of $\sim1.4$. The molecular abundances observed towards $\varv_{\mathrm{LSR}} = -8 \,\mathrm{km \, s^{-1}}$ are best reproduced with an $A_V$ of 5 mag, a density of $400 \, \mathrm{cm^{-3}}$ and a gas temperature of 40 K. The smallest level of disagreement between an $A_V$ of 3 and 5 mag differs by less than $1\%$. For the two remaining clouds with $\varv_{\mathrm{LSR}} = -4 \,\mathrm{km \, s^{-1}}$ and $\varv_{\mathrm{LSR}} = -14 \,\mathrm{km \, s^{-1}}$ the best-fit model in both cases is given by the parameters $(n\mathrm{(H)}, A_V, T_{\mathrm{gas}}) = (200 \, \mathrm{cm^{-3}}, \, 1 \, \mathrm{mag}, \, 30 \, \mathrm{K})$ at a time $t_{\mathrm{best}} = 10^7 \, \mathrm{yrs}$. Here, the discrepancy in both clouds arises mostly from the fact that the model underestimates the CS abundance by a factor of $\sim 6-9$. In our model, CS is being effectively destroyed via photodissociation due to the low $A_V$. Table \ref{tab:best-fit models} lists the best-fit parameters that were determined towards every cloud component. We note that towards the same line of sight there have been detections of several other molecules, as reported in \cite{liszt2008}.
The best-fit model determined towards $\varv_{\mathrm{LSR}} = -17 \,\mathrm{km \, s^{-1}}$ and $\varv_{\mathrm{LSR}} = -10 \,\mathrm{km \, s^{-1}}$ is able to reproduce within one order of magnitude the species OH, $\mathrm{C_2H}$, $\mathrm{H_2CO}$, $\mathrm{NH_3}$ and CH, while other species such as HCN, SO, $\mathrm{H_2S}$ and $\mathrm{C_3H_2}$ are strongly underestimated, by up to two orders of magnitude. This is a clear indication that the chemical network of certain molecules (other than P-bearing ones) still needs to be extended and updated. This, however, will be addressed in future work, as the present paper focuses mainly on the P-bearing chemistry. \begin{table*} [h!] \centering \caption{Set of physical parameters that give the best agreement between model results and observations towards every cloud component.} \label{tab:best-fit models} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c c c} \hline \hline \\ Velocity & $n(\mathrm{H})$ & $A_V$ & $T_{\mathrm{gas}}$ & $t_{\mathrm{best}}$ \\ ($\mathrm{km \, s^{-1}}$) & ($\mathrm{cm^{-3}}$) & (mag) & (K) & ($10^6$ yrs) \\ \hline \\ [-1ex] -17 & 300 & 3 & 40 & 6.2 \\ -14 & 200 & 1 & 30 & 10 \\ -10 & 300 & 3 & 40 & 6.2 \\ -8 & 400 & 5 & 40 & 2.3 \\ -4 & 200 & 1 & 30 & 10 \\ \hline \\ \end{tabular} \\ \end{table*} \section{Discussion: The chemistry of Phosphorus} \label{P-stuff} Based on the above results we can conclude that the molecular abundances observed at $\varv_{\mathrm{LSR}} = -8, \, -10 \, \mathrm{and} \, -17 \,\mathrm{km \, s^{-1}}$ can be best reproduced by a more ``shielded'' ($A_V>1 \, \mathrm{mag}$) interstellar medium that allows the build-up of molecules to occur more efficiently. The resulting visual extinction $A_V$ of 3 mag should be viewed as an averaged value over the region covered by our beam ($\sim22\arcsec$). Within this region the denser clumps are most likely translucent in nature. Hence, the observed cloud components are probably heterogeneous clouds, incorporating diffuse and translucent material and containing relatively high molecular abundances. This result stands in good agreement with a study done by \cite{liszt17}, which involved modeling the CO formation and fractionation towards diffuse clouds. One of the main results was that strong $\mathrm{^{13}CO}$ absorption lines observed in the mm- and UV-range can be explained by higher densities ($\geq 256 \, \mathrm{cm^{-3}}$) and weaker radiation (and thus higher visual extinction), as already mentioned in Section \ref{results}. Our conclusions also agree well with the work done by \cite{thiel}, in which the physical and chemical structure of the gas along the line of sight to SgrB2(N) was studied; here, complex organic molecules, such as $\mathrm{NH_2CHO}$ and $\mathrm{CH_3CHO}$, were detected in the majority of the clouds, which at the same time proved to have relatively high visual extinctions ($A_V = 2.5 -5 \, \mathrm{mag}$ with $N(\mathrm{H_2})>10^{21} \, \mathrm{cm^{-2}}$), thus consisting mainly of translucent gas. According to \cite{thiel} the column density of $\mathrm{H_2}$ that corresponds to an $A_V$ of 3 mag is $\sim 3\times 10^{21} \, \mathrm{cm^{-2}}$. This is also consistent with the study by \cite{pety}, which states that the bright $\mathrm{^{12}CO}$ emission originates from dense regions with $N(\mathrm{H_2})>10^{21} \, \mathrm{cm^{-2}}$.
The gas observed at velocities of $-14 \,\mathrm{km \, s^{-1}}$ and $-4 \,\mathrm{km \, s^{-1}}$, on the other hand, corresponds mainly to a ``classical'' diffuse cloud with a visual extinction of $\sim1$ mag according to the above analysis. These clouds also show the lowest abundances of the detected molecules. Since chemical complexity seems to be favoured towards translucent rather than diffuse gas, we use as a reference model for the following discussion the one that provided the best fit towards the dense clouds with $\varv_{\mathrm{LSR}} =-17 \,\mathrm{km \, s^{-1}}$ and $\varv_{\mathrm{LSR}} =-10 \,\mathrm{km \, s^{-1}}$. According to our best-fit model, $\mathrm{P^+}$ has a gas-phase abundance of $1.8\times10^{-7}$ at the end of our simulations, a factor of $\sim1.4$ lower than its cosmic value, which indicates that little depletion takes place. The main reservoir of phosphorus other than $\mathrm{P^+}$ is atomic P, having an abundance of $7.4\times10^{-8}$ at $10^7$ yrs. Atomic P is formed mainly through the electronic recombination of $\mathrm{P^+}$. For our models with elevated densities ($10^3 \, \mathrm{cm^{-3}}$) we reach high elemental depletion (such as for $\mathrm{C^+}$, $\mathrm{S^+}$ and $\mathrm{P^+}$) after running the code for $10^7$ yrs. This is consistent with the results presented by \cite{fuente19}, who show significant depletion of C, O and S happening already towards translucent material at the edge of molecular clouds (3-10 mag) with $1-5\times10^3 \, \mathrm{cm^{-3}}$. In Appendix \ref{depletion} we investigate further the expected depletion of phosphorus when transitioning from diffuse- to dense-cloud conditions. We find that there is a significant depletion of atomic P on dust grains after the final density of $10^5 \, \mathrm{cm^{-3}}$ is reached. This in turn leads to a strong increase of $\mathrm{gPH_3}$, which becomes the main carrier of phosphorus in the dense phase. We also find a considerable decrease of the molecules HCP, CP, PN, PO and $\mathrm{PH_3}$ due to freeze-out on grains and their destruction route with $\mathrm{H_3^+}$ after the final density is attained at $t \sim 10^6-10^7$ yrs. The most abundant P-bearing molecules in the gas phase are HCP and CP, with maximal abundances of $3.4\times 10^{-10}$ and $2.1\times 10^{-10}$, respectively\footnote{The maximal abundances of the P-bearing species are reached at the end of our chemical simulations, at $t=10^7$ yrs. These abundances barely differ from the abundances at $t=6.2 \times 10^6$ yrs, which is the time of the best agreement with the observations.}. The formation and destruction pathways of both HCP and CP are strongly related to the electron fraction, as they are mainly produced (throughout the entire chemical evolution) by dissociative recombination of the protonated species $\mathrm{PCH_2^+}$ and destroyed by reacting with $\mathrm{C^{+}}$, the main carrier of positive charge in diffuse clouds. Two additional P-bearing species that are predicted by our model to have ``observable'' abundances in the gas phase are PN and PO, with respective maximal abundances of $4.8\times 10^{-11}$ and $1.4\times10^{-11}$. The most productive formation pathways for PN are $\mathrm{P + CN \rightarrow PN + C}$ and $\mathrm{N+PH \rightarrow PN + H}$ at early times, and $\mathrm{N + CP \rightarrow PN +C}$ at later times. In the late stage of evolution ($\sim0.5 \times10^6-10^7$ yrs) PN is primarily destroyed by $\mathrm{He^{+} + PN \rightarrow P^{+} + N + He}$.
The species PO is mainly produced over the entire chemical evolution of $10^7$ yrs by the dissociative recombination of $\mathrm{HPO^+}$: $\mathrm{HPO^{+} + e^{-} \rightarrow PO + H}$, and is mostly destroyed by reactions with $\mathrm{C^+}$ and $\mathrm{H^+}$. On the other hand, $\mathrm{HPO^+}$ is efficiently formed via $\mathrm{P^{+} + H_{2}O \rightarrow HPO^{+} + H}$\footnote{The species $\mathrm{H_2O}$ is formed efficiently on dust grains ($\mathrm{gH + gOH \rightarrow H_2O}$) in the first $10^3$ yrs, while it is effectively produced via desorption $\mathrm{gH_2O \rightarrow H_2O}$ at late times. Our best-fit model produces a maximal $\mathrm{H_2O}$ abundance in the gas phase of $2.3\times10^{-8}$.}. An additional reaction that becomes relevant at later times ($\sim10^6-10^7$ yrs) is $\mathrm{O + PH \rightarrow PO + H}$ with a $\sim10\%$ reaction significance. \begin{figure*}[h] \centering \includegraphics[width = 0.7\textwidth]{diffuse_clouds_P_molecules_best_model.pdf} \caption{Variation of the predicted abundances of PN, PO, HCP, CP and $\mathrm{PH_3}$ over $10^7$ yrs in our best-fit model. The dashed lines represent the 3$\sigma$ upper limits derived from the observations at $\varv_{\mathrm{LSR}} = -17 \, \mathrm{km \, s^{-1}}$. In the case of PO we use the upper limit of $5 \times 10^{-10}$ (see Table \ref{tab:comparison_prediction_upper_limits}).} \label{Fig:best model_Pstuff} \end{figure*} Another quite abundant P-bearing species in the gas phase, based on our best-fit model, is phosphine, $\mathrm{PH_3}$, with a maximal abundance of $\sim1.6 \times 10^{-11}$ at a late time of $10^7$ yrs. We note here that the species PH is also predicted to be detectable, with a maximal abundance of $\sim3.6 \times 10^{-11}$. Unlike PH, however, $\mathrm{PH_3}$ has already been detected in circumstellar envelopes of evolved stars \citep{agundez14a}, indicating that it could be an important P-bearing species in interstellar environments such as diffuse and translucent clouds. Thus, we will focus in the following sections on the $\mathrm{PH_3}$ rather than the PH chemistry. Based on our chemical model, $\mathrm{PH_3}$ is formed most efficiently on dust grains in the early phase, being released to the gas phase via reactive desorption: $\mathrm{gH + gPH_2 \rightarrow PH_3}$. After $1.4 \times 10^3$ yrs its formation proceeds mainly via the photodesorption process $\mathrm{gPH_3 \rightarrow PH_3}$, which becomes the most effective reaction. Since the evaporation temperature of $\mathrm{PH_3}$ lies at $\sim 100$ K, the main mechanism driving the desorption of $\mathrm{PH_3}$ at low temperatures is photodesorption (instead of thermal desorption). Switching off the photodesorption in our model leads to a decrease of the $\mathrm{PH_3}$ gas-phase abundance by two orders of magnitude. Once in the gas phase, $\mathrm{PH_3}$ is mostly destroyed by reactions with $\mathrm{C^+}$ and $\mathrm{H^+}$ as well as through the photodissociation reaction $\mathrm{PH_3 + h\nu \rightarrow PH_2 + H}$. The most abundant P-bearing species on grains is $\mathrm{gPH_3}$, with a maximal abundance of $7.2\times 10^{-10}$. Almost all the atomic P that depletes onto the dust grains reacts with gH and forms gPH ($\mathrm{gP + gH \rightarrow gPH}$), which subsequently forms $\mathrm{gPH_3}$ through further hydrogenation. Table \ref{tab:P_gasphase_chemistry} summarizes all the main formation and destruction pathways for the molecules PN, PO, HCP, CP and $\mathrm{PH_3}$ at three different times ($t = 10^3, 10^5, 10^7$ yrs).
The last column shows the significance of the given reaction in the total formation/destruction rate of the species of interest. \begin{table*} [h] \centering \caption{Main formation and destruction mechanisms for the species PN, PO, HCP, CP and $\mathrm{PH_3}$ based on the best-fit chemical model at times: $t=10^3, \, 10^5, \, 10^7$ yrs. The last column represents the share of the given reaction in the total formation/destruction rate of the corresponding species.} \label{tab:P_gasphase_chemistry} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c c c} \hline \hline \\ Species & Time & Reaction type & Reaction & Reaction importance \\ & (yrs) & & & (\%) \\ \hline \\ [-1ex] PN & $10^3$ & Formation & $\mathrm{P + CN \rightarrow PN +C}$ & 47 \\ & & Formation & $\mathrm{N + PH \rightarrow PN + H}$ & 25 \\ & & Formation & $\mathrm{N + PO \rightarrow PN + O}$ & 17 \\ [1ex] & $10^5$ & Formation & $\mathrm{N + CP \rightarrow PN + C}$ & 27 \\ & & Destruction & $\mathrm{H^+ + PN \rightarrow PN^+ + H}$ & -27 \\ & & Destruction & $\mathrm{He^+ + PN \rightarrow P^+ + N + He}$ & -23 \\ & & Formation & $\mathrm{N + PH \rightarrow PN + H}$ & 10 \\ [1ex] & $10^7$ & Destruction & $\mathrm{He^+ + PN \rightarrow P^+ + N + He}$ & -41 \\ & & Formation & $\mathrm{N + CP \rightarrow PN +C}$ & 38 \\ & & Destruction & $\mathrm{H^+ + PN \rightarrow PN^+ + H}$ & -8 \\ \hline \\ PO & $10^3$ & Formation & $\mathrm{HPO^+ + e^- \rightarrow PO + H}$ & 48 \\ & & Destruction & $\mathrm{C^+ + PO \rightarrow PO^+ + C}$ & -44 \\ [1ex] & $10^5$ & Formation & $\mathrm{HPO^+ + e^- \rightarrow PO + H}$ & 47 \\ & & Destruction & $\mathrm{H^+ + PO \rightarrow PO ^+ + H}$ & -37 \\ & & Destruction & $\mathrm{C^+ + PO \rightarrow PO ^+ + C}$ & -9 \\ [1ex] & $10^7$ & Formation & $\mathrm{HPO^+ + e^- \rightarrow PO + H}$ & 36 \\ & & Destruction & $\mathrm{C^+ + PO \rightarrow PO ^+ + C}$ & -26 \\ & & Formation & $\mathrm{O + PH \rightarrow PO + H}$ & 13 \\ \hline \\ HCP & $10^3$ & Formation & $\mathrm{PCH_2^+ + e^- \rightarrow HCP + H}$ & 55 \\ & & Destruction & $\mathrm{C^+ + HCP \rightarrow CCP^+ + H}$ & -22 \\ & & Destruction & $\mathrm{C^+ + HCP \rightarrow HCP^+ + C}$ & -22 \\ [1ex] & $10^5$ & Formation & $\mathrm{PCH_2^+ + e^- \rightarrow HCP + H}$ & 50 \\ & & Destruction & $\mathrm{H^+ + HCP \rightarrow HCP^+ + H}$ & -39 \\ [1ex] & $10^7$ & Formation & $\mathrm{PCH_2^+ + e^- \rightarrow HCP + H}$ & 50 \\ & & Destruction & $\mathrm{C^+ + HCP \rightarrow CCP^+ + H}$ & -17 \\ & & Destruction & $\mathrm{C^+ + HCP \rightarrow HCP^+ + C}$ & -17 \\ \hline \\ CP & $10^3$ & Destruction & $\mathrm{C^+ + CP \rightarrow CP^+ + C}$ & -45 \\ & & Formation & $\mathrm{PCH_2^+ + e^- \rightarrow CP + H_2}$ & 34 \\ [1ex] & $10^5$ & Destruction & $\mathrm{H^+ + CP \rightarrow CP^+ + H}$ & -38 \\ & & Formation & $\mathrm{PCH_2^+ + e^- \rightarrow CP + H_2}$ & 29 \\ & & Formation & $\mathrm{HCP^+ + e^- \rightarrow CP + H}$ & 18 \\ [1ex] & $10^7$ & Formation & $\mathrm{PCH_2^+ + e^- \rightarrow CP + H_2}$ & 35 \\ & & Destruction & $\mathrm{C^+ + CP \rightarrow CP^+ + C}$ & -32 \\ & & Destruction & $\mathrm{H^+ + CP \rightarrow CP^+ + H}$ & -9 \\ \hline \\ $\mathrm{PH_3}$ & $10^3$ & Destruction & $\mathrm{C^+ + PH_3 \rightarrow PH_3^+ + C}$ & -45 \\ & & Formation & $\mathrm{gH + gPH_2 \rightarrow PH_3}$ & 31 \\ & & Formation & $\mathrm{gPH_3 \rightarrow PH_3}$ & 23 \\ [1ex] & $10^5$ & Formation & $\mathrm{gPH_3 \rightarrow PH_3}$ & 49 \\ & & Destruction & $\mathrm{H^+ + PH_3 \rightarrow PH_3^+ + H}$ & -26 \\ & & Destruction & $\mathrm{C^+ + PH_3 
\rightarrow PH_3^+ + C}$ & -20 \\ [1ex] & $10^7$ & Formation & $\mathrm{gPH_3 \rightarrow PH_3}$ & 49 \\ & & Destruction & $\mathrm{C^+ + PH_3 \rightarrow PH_3^+ + C}$ & -36 \\ & & Destruction & $\mathrm{PH_3 + h\nu \rightarrow PH_2 + H}$ & -6 \\ \hline \\ \end{tabular} \\ \end{table*} \begin{table*} [h] \centering \caption{Observed and predicted abundances at time $t= 10^7 \, \mathrm{yrs}$ for the species PO, PN, HCP, CP and $\mathrm{PH_3}$ given by our best-fit model.} \label{tab:comparison_prediction_upper_limits} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c} \hline \hline \\ Species & Observed & Predicted \\ & Abundance & Abundance \\ \hline \\ [-1ex] PN & $<4.9 \times 10^{-11}$ & $4.8 \times 10^{-11}$ \\ PO & $<5.0 \times 10^{-10}$ & $1.4 \times 10^{-11}$ \\ HCP & $<2.6 \times 10^{-9}$ & $3.4 \times 10^{-10}$ \\ CP & $<1.5 \times 10^{-9}$ & $2.1 \times 10^{-10}$ \\ $\mathrm{PH_3}$ & - & $1.6 \times 10^{-11}$ \\ \hline \\ \end{tabular} \\ \tablefoot{The upper limits are 3$\mathrm{\sigma}$. For the calculation of the upper-limit abundances we used an $N(\mathrm{H_2})$ value of $4.30 \times 10^{20} \, \mathrm{cm^{-2}}$ \citep{liszt18}. For PO we show the most stringent upper limit. There are no observed data available for $\mathrm{PH_3}$.}\\ \end{table*} Figure \ref{Fig:best model_Pstuff} depicts the time-dependent abundances of PN, PO, HCP, CP and $\mathrm{PH_3}$ over $10^7$ yrs predicted by the best-fit model along with the computed 3$\sigma$ upper limits. The predicted abundance for PO lies a factor of $\sim40$ below the observational upper limit at $t= 10^7 \, \mathrm{yrs}$, while the current upper limits of HCP and CP are $\sim1$ order of magnitude higher than the model predictions. Finally, for PN the modeled abundance almost reaches the observational upper limit at the end of our simulations. This means that in all cases the predicted abundances of P-bearing species are lower than the derived upper limits. Future observations of the ground energy transitions (1-0) will help us constrain these upper limits even further (see Section \ref{future} for further justification). Table \ref{tab:comparison_prediction_upper_limits} lists the predicted abundances of the above species given by our chemical model at $t= 10^7 \, \mathrm{yrs}$ along with the corresponding upper limits. In the following discussion we focus on how deviations from our best-fit model can affect the P-bearing chemistry. In particular, we examine the dependence of the abundances of HCP, CP, PN, PO and $\mathrm{PH_3}$ on an increasing visual extinction $A_V$, an increasing cosmic-ray ionisation rate $\zeta(\mathrm{CR})$, as well as on varying the surface mobility constants (the diffusion/desorption ratio $E_b/E_D$ and the possibility of quantum tunneling for light species). \subsection{Effects of visual extinction on the P-bearing chemistry} In this Section we analyze how an increase of $A_V$ affects the predicted abundances of P-bearing species. For this purpose we consider the parameters of the best-fit model with $n(H) = 300 \, \mathrm{cm^{-3}}$ and $T_{\mathrm{gas}} = 40 \, \mathrm{K}$, while varying the $A_V$ from 1 to 10 mag. By keeping the density constant, we avoid high levels of elemental depletion. The increase in visual extinction can then be thought of as a figurative increase of the source's size. Figure \ref{Fig:visual extinction} (left panel) shows the predicted abundances of P-bearing species at the end of our simulations ($t=10^7$ yrs) under the effect of varying the visual extinction.
All species reach a maximal abundance at an $A_V$ of 4 mag. The abundances of HCP, CP and PN barely change for $A_V>4$ mag, while for the rest of the molecules the abundances drop; especially in the case of $\mathrm{PH_3}$ we note a substantial decrease of almost two orders of magnitude. As already mentioned in Section \ref{P-stuff}, the most effective formation process of $\mathrm{PH_3}$ is the photodesorption $\mathrm{gPH_3 \rightarrow PH_3}$. Thus, a high visual extinction attenuates the incoming UV-field and therefore suppresses the desorption of $\mathrm{gPH_3}$. \begin{figure*}[h] \centering \includegraphics[width = 1\textwidth]{Av_P_bearing_molecules_density_300.pdf} \caption{Predicted abundances of P-bearing molecules as a function of visual extinction $A_V$. The molecular abundances shown here are computed at $t=10^7$ yrs. The right panel illustrates the predicted abundances of $\mathrm{PCH_2^+}$, $\mathrm{C^+}$, $\mathrm{P^+}$, $\mathrm{He^+}$, $\mathrm{H^+}$, $\mathrm{HPO^+}$ as they are contributing the most to the formation and destruction of HCP, CP, PN, PO and $\mathrm{PH_3}$ (left panel).} \label{Fig:visual extinction} \end{figure*} In order to better understand the $A_V$ dependence of the remaining molecular abundances, we have plotted in Figure \ref{Fig:visual extinction} the predicted abundances of the species that mainly form and destroy HCP, CP, PN and PO (see Table \ref{tab:P_gasphase_chemistry}) as a function of the visual extinction. In particular, we have simulated the abundances of $\mathrm{PCH_2^+}$, $\mathrm{C^+}$, $\mathrm{P^+}$, $\mathrm{He^+}$, $\mathrm{H^+}$ as well as $\mathrm{HPO^+}$. In the case of HCP (and also CP) the abundance increases up to an $A_V$ of 4 mag and subsequently stays constant above that value. This behaviour is correlated with the increase of the $\mathrm{PCH_2^+}$ abundance up to an $A_V$ of 3 mag as well as the decrease of $\mathrm{C^+}$ up to a visual extinction of 4 mag. The species PO seems to be more strongly affected by the increasing $A_V$. Its abundance also increases for $A_V \leq 4 \, \mathrm{mag}$, which again correlates with the decrease of the $\mathrm{C^+}$ abundance (the main ``destroyer'' of PO), followed by a drop in abundance up to 7 mag. This in turn results from the decrease of the $\mathrm{HPO^+}$ abundance (the main precursor of PO) in the same $A_V$ range. An increase in $A_V$ decreases the $\mathrm{P^+}$ abundance (due to the decrease of the total ionisation rate), as can be seen in Figure \ref{Fig:visual extinction}. In addition, an enhanced $A_V$ slightly decreases the $\mathrm{H_2O}$ abundance (by a factor of 2), since the most effective formation route for $\mathrm{H_2O}$ at late times is the photodesorption $\mathrm{gH_2O \rightarrow H_2O}$ (see footnote 7). Therefore, for higher $A_V$, both $\mathrm{P^+}$ and $\mathrm{H_2O}$ decrease, so that $\mathrm{HPO^{+}}$ and subsequently PO reduce in abundance as well. \subsection{Effects of the cosmic-ray ionisation rate on the P-bearing chemistry} As already mentioned in Section \ref{model}, for all the applied models we use for the cosmic-ray ionisation rate $\zeta(\mathrm{CR})$ a value of $1.7 \times 10^{-16} \, \mathrm{s^{-1}}$, as it was derived by \cite{indriolo}. This is also consistent with previous work in which diffuse and translucent clouds were studied as well \citep{fuente19, godard14, lepetit04}.
However, we should note here that in \cite{indriolo} several cosmic-ray ionisation rates were derived towards 50 diffuse lines of sight, ranging from $1.7 \times 10^{-16} \, \mathrm{s^{-1}}$ to $10.6 \times 10^{-16} \, \mathrm{s^{-1}}$ with a mean value of $3.5 \times 10^{-16} \, \mathrm{s^{-1}}$. Due to the complex and not yet fully known nature of our observed clouds, we test our chemical model by also applying the elevated values of $\zeta(\mathrm{CR}) = 3.5 \times 10^{-16} \, \mathrm{s^{-1}}$ and $10.6 \times 10^{-16} \, \mathrm{s^{-1}}$ in order to examine the influence of the cosmic-ray ionisation rate on the P-bearing chemistry. As for the remaining parameters of the code (such as $A_V$ and $T_{\mathrm{gas}}$) we use the values given by our best-fit model (see Section \ref{model}). Figure \ref{Fig:zeta} shows the chemical evolution of the species PN, PO, HCP, CP and $\mathrm{PH_3}$ over $10^7$ yrs for a cosmic-ray ionisation rate of $\zeta(\mathrm{CR}) = 1.7 \times 10^{-16} \, \mathrm{s^{-1}}$ and $10.6 \times 10^{-16} \, \mathrm{s^{-1}}$ in the left and right panel, respectively. \begin{figure*}[h] \centering \includegraphics[width = 1\textwidth]{P_bearing_species_zetas_extreme_cases.pdf} \caption{Chemical evolution of P-bearing molecules as a function of time under the effects of a cosmic-ray ionisation rate of $\zeta(\mathrm{CR}) = 1.7 \times 10^{-16} \, \mathrm{s^{-1}}$ (left panel) and $10.6 \times 10^{-16} \, \mathrm{s^{-1}}$ (right panel).} \label{Fig:zeta} \end{figure*} \begin{table*} [! h] \center \caption{Predicted abundances of the species PN, PO, HCP, CP and $\mathrm{PH_3}$ at $t= 10^7$ yrs for three different cosmic-ray ionisation rates (see text for explanation).} \label{tab:cosmic_rays} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c c} \hline \hline \\ Species & Predicted Abundances & Predicted Abundances & Predicted Abundances \\ & ($\zeta(\mathrm{CR}) = 1.7 \times 10^{-16} \, \mathrm{s^{-1}} $) & ($\zeta(\mathrm{CR}) = 3.5 \times 10^{-16} \, \mathrm{s^{-1}} $) & ($\zeta(\mathrm{CR}) = 10.6 \times 10^{-16} \, \mathrm{s^{-1}} $) \\ \hline \\ [-1ex ] PN & $4.8\times10^{-11}$ & $2.9\times10^{-12}$ & $6.6\times10^{-14}$ \\ PO & $1.4\times10^{-11}$ & $4.4\times10^{-12}$ & $7.5\times10^{-13}$ \\ HCP & $3.4\times10^{-10}$ & $6.7\times10^{-11}$ & $3.8\times10^{-12}$\\ CP & $2.1\times10^{-10}$ & $4.2\times10^{-11}$ &$2.5\times10^{-12}$\\ $\mathrm{PH_3}$ & $1.6\times10^{-11}$ & $2.7\times10^{-12}$ & $3.2\times10^{-13}$\\ \hline \end{tabular} \end{table*} Table \ref{tab:cosmic_rays} summarizes the predicted abundances of P-bearing species for the three different cosmic-ray ionisation rates given in \cite{indriolo}. As one can see, PN shows the most substantial decrease in abundance with increasing $\zeta(\mathrm{CR})$. From the lowest ($\zeta(\mathrm{CR}) = 1.7 \times 10^{-16} \, \mathrm{s^{-1}} $) to the highest ($\zeta(\mathrm{CR}) = 10.6 \times 10^{-16} \, \mathrm{s^{-1}} $) cosmic-ray ionisation rate, the PN abundance decreases by a factor of $\sim730$, while for HCP and CP we have a drop by a factor of $\sim85$ and for $\mathrm{PH_3}$ by a factor of $\sim 50$. As already mentioned, PN is heavily destroyed by $\mathrm{He^+}$ with a $\sim40 \%$ reaction significance.
An increase of $\zeta(\mathrm{CR})$ up to a value of $10.6 \times 10^{-16} \, \mathrm{s^{-1}}$ significantly enhances the ionisation of He and H, by a factor of $\sim 20$ and $\sim 30$, respectively, via cosmic-ray ionisation: $\mathrm{He + CRP \rightarrow He^+ + e^-}$ and $\mathrm{H + CRP \rightarrow H^+ + e^-}$ (the abundance of $\mathrm{C^+}$ increases by a factor of $\sim6$). Therefore, the destruction path with $\mathrm{H^+}$ becomes relevant for all P-bearing species, showing a 10-40\% loss efficiency. The effect is strongest in the case of PN, because PN is mainly formed through CP, which is drastically decreasing, and is also efficiently destroyed by both $\mathrm{He^+}$ and $\mathrm{H^+}$. The PO abundance is only reduced by a factor of $\sim20$ after increasing $\zeta(\mathrm{CR})$ up to $10.6 \times 10^{-16} \, \mathrm{s^{-1}} $, despite being heavily destroyed by $\mathrm{H^+}$. On the other hand, the significance of the dissociative recombination of $\mathrm{HPO^+}$ increases up to 50\%, which in turn counterbalances the loss through $\mathrm{H^+}$. An increased $\zeta(\mathrm{CR})$ of $10.6 \times 10^{-16} \, \mathrm{s^{-1}}$ enhances the abundance of $\mathrm{P^+}$ up to $\sim2.5\times10^{-7}$, nearly reaching its cosmic value of $\sim2.6\times10^{-7}$ \citep{asplund}, while the abundance of atomic P decreases down to $\sim9.5\times10^{-9}$ via the enhanced reaction with $\mathrm{C^+}$ and $\mathrm{H^+}$. \subsection{Effects of the diffusion/desorption ratio on the P-bearing chemistry} \label{surface_chemistry} The chemistry in the ISM is heavily influenced by the presence of dust grains \citep{caselli_cecca}. The mobility of the depleted species on the surface of dust grains depends on two mechanisms: thermal hopping and, for the lightest species H and $\mathrm{H_2}$, quantum tunneling through potential barriers between surface sites \citep{hasegawa}. Without the possibility of tunneling, the species are not able to scan the grain surface quickly at low temperatures and the total mobility decreases. The parameters that strongly determine the surface chemistry are the diffusion/desorption energy ratio $E_b/E_D$ as well as the thickness of the potential barrier between adjacent sites. Based on previous studies \citep{hasegawa, ruffle, garrod}, \cite{vasyunin2013} proposed three different values for the $E_b/E_D$ ratio: 0.3, 0.5 and 0.77. In the case of the low ratio ($E_b/E_D=0.3$) we activate in our model the possibility of quantum tunneling for light species, while for the other two cases, surface mobility is only controlled by thermal hopping (and quantum tunneling is deactivated). The potential barriers are assumed to have a rectangular shape and a thickness of 1 $\mathrm{\mathring{A}}$ \citep{vasyunin2013}. In our model we utilize the first set of parameters ($E_b/E_D=0.3$, with tunneling); nevertheless, since P-bearing chemistry is still highly uncertain, we examine how the remaining two sets of parameters ($E_b/E_D=0.5, \, 0.77$, no tunneling) influence the predicted abundances. Table \ref{tab:tunneling} lists the predictions of PN, PO, HCP, CP and $\mathrm{PH_3}$ as well as $\mathrm{H_2}$ at $t= 10^7$ yrs for the three different sets of surface mobility parameters proposed in \cite{vasyunin2013}.
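To give a feeling for how these parameters control surface mobility, the sketch below compares the thermal hopping rate, $\nu_0\exp(-E_b/T_{\mathrm{dust}})$, with the tunneling rate through a rectangular barrier of thickness $a$, $\nu_0\exp[-(2a/\hbar)\sqrt{2 m E_b}]$, for atomic hydrogen. The attempt frequency, the H-atom desorption energy and the dust temperature used here are illustrative assumptions; only the barrier thickness of 1 \AA{} and the $E_b/E_D$ ratios are taken from the text.
\begin{verbatim}
import numpy as np

k_B  = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J s
m_H  = 1.6735575e-27     # kg
nu0  = 1.0e12            # assumed attempt frequency (s^-1)
E_D  = 450.0             # assumed desorption energy of atomic H (K)
T_d  = 15.0              # assumed dust temperature (K)
a    = 1.0e-10           # barrier thickness of 1 Angstrom (as in the text)

for ratio in (0.3, 0.5, 0.77):
    E_b = ratio * E_D                                  # diffusion barrier (K)
    hop = nu0 * np.exp(-E_b / T_d)                     # thermal hopping rate
    tun = nu0 * np.exp(-2 * a / hbar * np.sqrt(2 * m_H * E_b * k_B))  # tunneling
    print(f"E_b/E_D={ratio}: hopping {hop:.2e} s^-1, tunneling {tun:.2e} s^-1")
\end{verbatim}
For a light atom such as H, tunneling keeps the mobility high even when the barrier is raised, which is consistent with the strong suppression of $\mathrm{H_2}$ formation discussed below once tunneling is deactivated.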
As Table \ref{tab:tunneling} shows, the $\mathrm{H_2}$ abundance decreases by a factor of 4 when switching from setup 1 ($E_b/E_D=0.3$ with tunneling) to setup 2 ($E_b/E_D=0.5$, no tunneling), and finally experiences a dramatic drop by a factor of 50 when increasing the $E_b/E_D$ up to 0.77 (an overall change of a factor of 200 between setups 1 and 3). \begin{table*} [! h] \center \caption{Predicted abundances of the species PN, PO, HCP, CP and $\mathrm{PH_3}$ as well as $\mathrm{H_2}$ at $t= 10^7$ yrs for three different sets of surface mobility parameters (see text for explanation).} \label{tab:tunneling} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c c} \hline \hline \\ Species & Predicted Abundances & Predicted Abundances & Predicted Abundances \\ & ($E_b/E_D=0.3$ with tunneling) & ($E_b/E_D=0.5$ no tunneling) & ($E_b/E_D=0.77$ no tunneling) \\ \hline \\ [-1ex ] PN & $4.8\times10^{-11}$ & $5.0\times10^{-13}$ & $1.6\times10^{-13}$ \\ PO & $1.4\times10^{-11}$ & $1.2\times10^{-12}$ & $6.7\times10^{-13}$ \\ HCP & $3.4\times10^{-10}$ & $1.3\times10^{-11}$ & $4.6\times10^{-12}$\\ CP & $2.1\times10^{-10}$ & $9.5\times10^{-12}$ & $3.2\times10^{-12}$\\ $\mathrm{PH_3}$ & $1.6\times10^{-11}$ & $1.8\times10^{-12}$ & $1.2\times10^{-12}$\\ $\mathrm{H_2}$ & $4.8\times10^{-1}$ & $1.3\times10^{-1}$ & $2.4\times10^{-3}$ \\ \hline \end{tabular} \end{table*} The reduction of the $\mathrm{H_2}$ abundance has a significant impact on the formation of $\mathrm{PCH_2^+}$ and PH, which affects the PN, PO, HCP and CP abundances through the following reactions:\\ \textbf{PN} \begin{itemize} \item $\mathrm{P^+ + H_2 \rightarrow PH_2^+}$ \\ \item $\mathrm{PH_2^+ + e^- \rightarrow PH + H}$ \\ \item $\mathrm{N + PH \rightarrow PN + H}$ \\ \end{itemize} \textbf{PO} \begin{itemize} \item $\mathrm{P^+ + H_2 \rightarrow PH_2^+}$ \\ \item $\mathrm{PH_2^+ + e^- \rightarrow PH + H}$ \\ \item $\mathrm{O + PH \rightarrow PO + H}$ \\ \end{itemize} \textbf{HCP} \begin{itemize} \item $\mathrm{HCP^+ + H_2 \rightarrow PCH_2^+}$ \\ \item $\mathrm{PCH_2^+ + e^- \rightarrow HCP + H}$ \\ \end{itemize} \textbf{CP} \begin{itemize} \item $\mathrm{HCP^+ + H_2 \rightarrow PCH_2^+}$ \\ \item $\mathrm{PCH_2^+ + e^- \rightarrow CP + H_2}$ \\ \end{itemize} Both PH and $\mathrm{PCH_2^+}$ decrease by a factor of $\sim 20$ when increasing the $E_b/E_D$ up to 0.77. In addition, the abundance of $\mathrm{H^+}$ is increased by a factor of $\sim25$, since the reduction of $\mathrm{H_2}$ formation leads to more atomic hydrogen and subsequently also to more $\mathrm{H^+}$. The enhanced $\mathrm{H^+}$ abundance results in a stronger destruction of all P-bearing species through their reaction with $\mathrm{H^+}$. The species HCP and CP are also strongly affected by changing the surface mobility parameters, with an overall decrease in abundance by a factor of $\sim70$ and $\sim65$, respectively. In both cases the dissociative recombination of $\mathrm{PCH_2^+}$ is essential during the whole chemical evolution for the formation of HCP and CP, showing a reaction significance of 30 to 99\%. A decrease of $\mathrm{PCH_2^+}$ due to the lower $\mathrm{H_2}$ abundance therefore results in a reduced HCP and CP formation. The largest effect is seen for PN, where a diffusion/desorption ratio of 0.77 and no quantum tunneling of light species reduces the PN abundance by a factor of 300 (see Figure \ref{Fig:diff_des}).
\begin{figure*}[h] \centering \includegraphics[width = 1\textwidth]{P_bearing_tunneling_extreme_cases.pdf} \caption{Chemical evolution of P-bearing molecules as a function of time for a diffusion/desorption ratio $E_b/E_D$ of 0.3 (with quantum tunneling) shown in the left panel and for a $E_b/E_D$ of 0.77 (without quantum tunneling) in the right panel.} \label{Fig:diff_des} \end{figure*} Besides the effective loss through $\mathrm{H^+}$, the substantial decrease in PN is also related to the reduction of CP, which is the main precursor of PN at late times. In addition, the reaction $\mathrm{N + PH \rightarrow PN +H}$ is significant for the PN formation over the entire chemical evolution of $10^7$ yrs, with a 10-50\% formation efficiency (for $E_b/E_D=0.77$ and no tunneling). This means that the reduction of the $\mathrm{H_2}$ abundance decreases PH, which in turn produces less PN. In the case of PO, however, the change in abundance between the two extreme cases is just a factor of $\sim 20$. Here, the route $\mathrm{O +PH \rightarrow PO + H}$ increases in significance only up to 3\% at late times ($4\times 10^6- 10^7$ yrs), indicating that the decrease of PH will not considerably affect the PO production. Furthermore, the reduction of PO due to $\mathrm{H^+}$ is compensated through its effective formation via the dissociative recombination of $\mathrm{HPO^+}$. Finally, the abundance of $\mathrm{PH_3}$ decreases only by a factor of 13 in total when changing the surface chemistry constants. Despite being heavily destroyed by $\mathrm{H^+}$, $\mathrm{PH_3}$ is still sufficiently formed through the photodesorption of $\mathrm{gPH_3}$. \section{Future observations} \label{future} Thanks to the sensitive observations (rms of $\sim 6$ mK) of the (2-1) transitions of HCP, CP, PN and PO we were able to obtain good upper limits for the column densities and abundances of the above species (see Tables \ref{tab:upper_limits_PN_PO} and \ref{tab:comparison_prediction_upper_limits}) and thus constrain the P-chemistry. The observations of HNC, CN, CS and CO helped us put important constraints on the main physical parameters of the targeted diffuse/translucent clouds, i.e. the visual extinction, the density and the gas temperature. For the prospect of future observations we want to estimate the expected line intensities of the (1-0) transitions of HCP, CP, PN and PO (at $\sim40-65$ GHz) based on our new and improved diffuse-cloud model. Since the densities present in diffuse/translucent clouds are too low to show any collisional excitation ($T_{\mathrm{ex}} = T_{\mathrm{bg}} = 2.7 \, \mathrm{K}$), the (1-0) transitions are expected to be more strongly populated than the higher energy transition levels. For these calculations, we take into account that the emission of the blazar is non-thermal, meaning that the flux increases with decreasing frequency. In particular, we apply a power law for the blazar's emission with $\frac{F}{F_0} = (\frac{\nu}{\nu_0})^{-\alpha}$, where $F$ is the flux, $\nu$ is the corresponding frequency and $\alpha$ is the spectral index. By using the fluxes determined in \cite{agudo} at 3 and 1.3 mm we infer a spectral index of $\alpha \sim1.06$. Following this, we determine the flux at 7 mm to be $\sim11$ Jy, which in turn corresponds to a temperature of $\sim26$ K with a beam size of $17\arcsec$ (at 7 mm with the Green Bank Telescope).
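The steps of this estimate can be reproduced with the short sketch below: a power-law extrapolation of the blazar flux to 7 mm and a Rayleigh-Jeans conversion of the flux density into a beam-averaged brightness temperature. The 3 mm and 1.3 mm flux densities used here are illustrative placeholders consistent with the spectral index quoted above, not the actual values of \cite{agudo}.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23   # J/K

# Illustrative blazar flux densities (Jy), consistent with alpha ~ 1.06
lam1, F1 = 3.0e-3, 4.5    # wavelength (m), flux density (Jy)
lam2, F2 = 1.3e-3, 1.85

# Spectral index from F ~ nu^-alpha, i.e. F ~ lambda^alpha
alpha = np.log(F1 / F2) / np.log(lam1 / lam2)

# Extrapolate to 7 mm
lam0 = 7.0e-3
F0 = F1 * (lam0 / lam1)**alpha            # Jy

# Rayleigh-Jeans brightness temperature for a 17 arcsec Gaussian beam
theta = 17.0 * np.pi / (180.0 * 3600.0)   # beam FWHM (rad)
omega = np.pi / (4.0 * np.log(2.0)) * theta**2
T_c = F0 * 1.0e-26 * lam0**2 / (2.0 * k_B * omega)

print(f"alpha ~ {alpha:.2f}, F(7mm) ~ {F0:.1f} Jy, T_c ~ {T_c:.0f} K")
# -> alpha ~ 1.06, F(7mm) ~ 11.1 Jy, T_c ~ 25 K (cf. ~26 K quoted above)
\end{verbatim}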
As Table \ref{tab:expected_intens} shows, the derived peak intensities of the species PN, PO, HCP and CP vary from 10 to 200 mK, making these lines ``detectable'' with radio telescopes such as the Green Bank Telescope (GBT) and the Effelsberg Telescope. The capabilities of these instruments will allow us to reach rms levels down to 4 mK and enable possible detections up to a 50$\sigma$ level. The only exception is $\mathrm{PH_3}$, with a (1-0) transition at 266.944 GHz. The flux of the background source at that frequency based on the above power law is equal to 1.91 Jy. This corresponds to a background temperature $T_{\mathrm{c}}$ of 0.4 K with a beam size of $9\arcsec$ (with the IRAM telescope), which in the end results in a very weak, non-detectable absorption line. \begin{table*} [! h] \center \caption{Estimated absorption line intensities for the (1-0) transitions of HCP, CP, PN and PO towards B0355+508 for $T_{\mathrm{ex}}=2.73 \,\mathrm{K}$, a FWHM linewidth of $\Delta \varv = 0.5 \, \mathrm{km \, s^{-1}}$ and based on the predicted abundances given by our best-fit model at $t=10^7$ yrs.} \label{tab:expected_intens} \setlength{\tabcolsep}{10pt} \begin{tabular} {c c c c c c c c} \hline \hline \\ [-2ex] Species & Transitions & $E_{\mathrm{up}}$ & Frequency & $A_\mathrm{ul}$ & $g_u$ & Estimated Intensities & References \\ & & (K) & (GHz) & ($\mathrm{10^{-6} \, s^{-1}}$) & & (mK) \\ \hline \\ [-1ex ] HCP & J=1-0 & 1.9 & 39.95190 & 0.04 & 3 & 23 & 1 \\ PN & J=1-0 & 2.3 & 46.99028 & 3.04 & 3 & 214 & 2 \\ CP & N= 1-0, J=3/2-1/2, F=2-1 &2.3 & 47.98288 & 0.43 & 5 & 51 & 3\\ PO & J=3/2-1/2, $\mathrm{\Omega}$=1/2, F= 2-1, e & 3.2 & 65.31224 & 3.83 & 5 & 12 & 4\\ \hline \end{tabular} \tablebib{(1) \cite{bizzocchi}; (2) \cite{cazzoli}; (3) \cite{saito}; (4) \cite{bailleux}.} \end{table*} \section{Conclusions} \label{outlook} The aim of this work is to understand, through observations and chemical simulations, which physical conditions favour the production of P-bearing molecules in the diffuse interstellar medium and to what degree. Observing diffuse clouds offers us the opportunity to constrain an important parameter in our chemical simulations, which is the depletion level of phosphorus (and in general the initial elemental abundances). We performed single-pointing observations (IRAM 30m telescope) of the (2-1) transitions of the species PN, PO, HCP and CP at 3 mm towards the line of sight to the bright continuum source B0355+508. None of the above transitions was detected. Nevertheless, the sensitive observations, yielding an rms level of $\sim6$ mK, have allowed us to obtain reliable upper limits (see Tables \ref{tab:upper_limits_PN_PO} and \ref{tab:comparison_prediction_upper_limits}). We have obtained high-SNR detections of the (1-0) lines of HNC, CN and $\mathrm{^{13}CO}$ between 80 and 110 GHz. We also show a first detection of $\mathrm{C^{34}S}$ (2-1) at 96 GHz towards the two densest cloud components at $-10 \, \mathrm{km \, s^{-1}}$ and $-17 \, \mathrm{km \, s^{-1}}$. Following this, we were able to derive a sulfur isotopic ratio $\mathrm{^{32}S/^{34}S}$ of $12.8\pm4.8$ and $18.7\pm9.5$ towards the $-10 \,\mathrm{km \, s^{-1}}$ and $-17 \,\mathrm{km \, s^{-1}}$ features, with the latter being close to the local interstellar value of $24\pm5$ \citep{chin1996}.
The detected molecular species show the highest abundances towards the two components at $-10 \,\mathrm{km \, s^{-1}}$ and $-17 \,\mathrm{km \, s^{-1}}$, as already shown in previous work \citep[e.g.][and references therein]{liszt18}. Based on the detected molecular abundances, we updated our chemical model in order to provide reliable predictions of abundances and line intensities of P-containing molecules that will serve as a guide for future observations. For this purpose we ran a grid of chemical models with typical physical conditions of diffuse/translucent clouds, trying to reproduce the observed abundances and upper limits of HNC, CN, CO and CS in every cloud component along the line of sight (at $ -4,\, -8, \, -10, \, -14 \, \mathrm{and} \, -17 \, \mathrm{km \, s^{-1}}$). For the clouds with $\varv_{\mathrm{LSR}} = -10 \, \mathrm{km \, s^{-1}}$ and $-17 \, \mathrm{km \, s^{-1}}$, the best agreement between observed and modeled abundances is reached at a time $t_{\mathrm{best}}=6.2 \times 10^6$ yrs and at $r_{\mathrm{best}}= (n\mathrm{(H)}, A_V, T_{\mathrm{gas}}) = (300 \, \mathrm{cm^{-3}}, \, 3 \, \mathrm{mag}, \, 40 \, \mathrm{K})$. We chose this set of parameters as a reference for modeling the phosphorus chemistry. According to our best-fit model mentioned above, the most abundant P-bearing species are HCP and CP ($\sim 10^{-10}$) at a time of $t=10^7$ yrs. The species PN, PO and $\mathrm{PH_3}$ also show relatively high predicted abundances of $1.4\times 10^{-11}$ to $4.8\times 10^{-11}$ at the end of our simulations. All species are effectively destroyed through reactions with $\mathrm{C^+}$, $\mathrm{H^+}$ and $\mathrm{He^+}$. The molecules HCP and CP are efficiently formed throughout the entire chemical evolution via the dissociative electron recombination of the protonated species $\mathrm{PCH_2^+}$, while PO is formed via that of $\mathrm{HPO^+}$. In addition, the species $\mathrm{PH_3}$ is mainly formed on dust grains through successive hydrogenation reactions of P, PH and $\mathrm{PH_2}$ and then released to the gas phase via photodesorption. Finally, PN is formed at late times ($10^5-10^7$ yrs) mainly through the reaction $\mathrm{N + CP \rightarrow PN +C}$. We have also examined how the visual extinction $A_V$, the cosmic-ray ionisation rate $\zeta(\mathrm{CR})$ and the surface mobility on dust grains affect the P-bearing chemistry. We found that all P-bearing species are strongly sensitive to the visual extinction: low $A_V$ values of 1 and 2 mag lead to very low P-bearing molecular abundances of $\sim 10^{-14}-10^{-12}$, indicating that a translucent region rather than a diffuse one is needed to produce observable amounts of P-containing species. All examined species in our study are influenced by the cosmic-ray ionisation rate as well. An increasing $\zeta(\mathrm{CR})$ enhances the abundance of $\mathrm{He^+}$, $\mathrm{H^+}$ and $\mathrm{C^+}$, which in turn effectively destroy all P-bearing species. A similar conclusion was found when changing the diffusion/desorption ratio to $E_b/E_D= 0.77$ and deactivating the possibility of quantum tunneling of light species on grain surfaces. This setup increases the $\mathrm{H^+}$ abundance, which in turn efficiently reacts with and destroys PN, PO, HCP, CP and $\mathrm{PH_3}$.
Finally, we performed a study of the P-depletion level by tracing the phosphorus chemistry from a diffuse to a dense cloud with the application of a dynamical model that varies the density, the gas and dust temperature, the cosmic-ray ionisation rate and the visual extinction with time (see Appendix \ref{depletion}). We came to the main conclusion that at high densities of $\sim 10^5 \, \mathrm{cm^{-3}}$ atomic P is strongly depleted through freeze-out on dust grains, resulting in a significant increase of the $\mathrm{gPH_3}$ abundance. The molecules PN, PO, HCP, CP and $\mathrm{PH_3}$ are also affected by freeze-out on grains and are strongly destroyed by their reaction with $\mathrm{H_3^+}$ when reaching the dense phase at timescales of $\sim 10^6-10^7$ yrs. Based on the predictions of our improved diffuse-cloud model, the (1-0) transitions of HCP, CP, PN and PO are expected to be detectable with estimated intensities ranging from 10 to 200 mK. A possible detection of the above species will help us constrain even further the physical and chemical properties of our model and help us understand better the yet unknown interstellar phosphorus chemistry. \acknowledgements{We thank the anonymous referee for his/her comments that significantly improved the present manuscript. The authors also wish to thank the IRAM Granada staff for their help during the observations. V.M.R. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 664931. Work by A.V. is supported by the Latvian Science Council via the project lzp-2018/1-0170. J.C. acknowledges Dr. J. C. Laas for his support with the Python programming.}
\section{Dissipation dilution of a torsion ribbon} In this section we derive the central result of the main text: tensioning a ribbon increases its torsional stiffness and, proportionately, the quality factor of its torsion modes: \begin{equation}\label{eq:dissipationdilution} \frac{Q}{Q_0}=1+\frac{k_\sigma}{k_E}\approx \frac{\sigma}{2E}\left(\frac{w}{h}\right)^2 \end{equation} We derive Eq. S1 in two ways. First we provide a heuristic derivation based on the bifilar theory of Buckley \cite{Buckley1914}, which reveals that the twisting of a tensioned ribbon does not change its length---and therefore the magnitude of the tension---to first order. This geometric nonlinearity is what accounts for the losslessness of the torsion constant due to tension. Moving beyond the lumped mass model, we provide a full continuum mechanics model, following the generalized dissipation dilution theory of Federov \emph{et al.} \cite{Federov2019_Generalized}, that confirms the heuristic result while accounting for the mode-shape dependence of Eq. S1. Compared to transverse flexural modes of the ribbon, we find that torsional modes are naturally ``soft-clamped.'' \subsection{Lumped mass model: The bifilar effect} Quinn \emph{et al.} \cite{Quinn1997} used a ribbon as the torsion fiber in a balance apparatus designed for measurement of the universal gravitation constant. The $Q$ of this torsion pendulum was enhanced via the bifilar effect described by Buckley \cite{Buckley1914}, and this macroscopic example of dissipation dilution was the inspiration for our investigation of nanomechanical torsion. In short, we reasoned that a bifilar effect might also offer a path to dissipation dilution in the torsion of nanoribbons. To retrace our reasoning, consider that the restoring torque $\kappa$ for the ribbon of Fig. \ref{fig2}a is known from torsion balance experiments (e.g. \cite{Quinn1997}) to consist of two components \begin{equation}\label{eq:torsionstiffness} \kappa =\kappa_E + \kappa_{\sigma}. \end{equation} The first component, attributable to Saint-Venant \cite{SaintVentant1856memoire,timoshenko1951theory}, \begin{equation}\label{eq:torsionstiffnessE} \kappa_E = \frac{E h^3 w}{6 l}\theta, \end{equation} is due to shear deformation of the ribbon and is a source of loss. The second component, attributable to Buckley \cite{Buckley1914}, \begin{equation}\label{eq:torsionstiffnessT} \kappa_\sigma = \frac{\sigma h w^3}{12 l}\theta, \end{equation} is due to the tensile load $T = \sigma h w$, and is lossless. The lossless nature of the tensile component of the restoring torque is perhaps most easily appreciated in terms of a bifilar suspension, such as a child's swing. When we twist a swing about its longitudinal axis, we tend to lift the mass of its passenger (a child, perhaps) in a screw-like fashion, doing work against gravity. When we release the swing, it unwinds, gathering speed, spinning the passenger until completely unwound, continuing until it lifts the passenger to the original height, reverses, and starts the oscillation again. The behavior is that of a simple pendulum in the limit of small angles, and in the absence of friction would continue forever. A key assumption in our description of the child's swing is that the suspensions are inelastic. As such, the potential energy of the child's motion is stored entirely in the conservative gravitational field.
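As a quick numerical illustration of Eqs. \ref{eq:torsionstiffness}--\ref{eq:torsionstiffnessT}, the short sketch below evaluates the two stiffness contributions (with $\theta$ factored out) and the resulting dilution factor $1+\kappa_\sigma/\kappa_E$; the material and geometry values are assumptions for illustration, chosen to match the Si$_3$N$_4$ ribbons considered later.
\begin{verbatim}
E, sigma = 250e9, 0.85e9            # Young's modulus and pre-stress, Pa (assumed)
h, w, l = 75e-9, 100e-6, 7e-3       # thickness, width, length, m (assumed)

kappa_E     = E * h**3 * w / (6 * l)        # lossy Saint-Venant (shear) stiffness
kappa_sigma = sigma * h * w**3 / (12 * l)   # lossless tension (bifilar) stiffness

print(1 + kappa_sigma / kappa_E)            # ~3.0e3
print(sigma / (2 * E) * (w / h)**2)         # approximate form of Eq. S1, ~3.0e3
\end{verbatim}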
Buckley's insight was that this assumption could be extended to an elastic ribbon suspension, by partitioning the ribbon into a set of infinitesimally thin pairs of strings, each behaving like a bifilar suspension. Constraining each suspension to lift the mass by the same amount entails a shear deformation of the ribbon, and thereby an extra elastic restoring torque $\kappa_E$. For a thin, wide ribbon, however, the dominant restoring torque $\kappa_\sigma$ is due to gravity, or equivalently, the tensile pre-stress $\sigma$ produced by loading the elastic ribbon. \begin{figure}[b!] \vspace{-4mm} \includegraphics[width=1\columnwidth]{fig_supplement/fig2b.pdf} \caption{Twisting of a ribbon. The drawing on the left shows a ribbon in equilibrium. The blue dashed lines indicate the excursion of the ribbon's edge if the middle of the ribbon twists. The triangle on the lower right shows the geometry of the edge. } \label{fig2} \vspace{-1mm} \end{figure} As an extension of Buckley's treatment, we now consider a torsion balance that uses fully constrained ribbons (clamped on each end) whose tensile load is not due to the weight of the torsion paddle, but rather due to thin film residual stress. Our aim is to show that the restoring torque due to this stress is lossless. As such, we must consider if the added constraint leads to stretching of the ribbon, a deformation with attendant loss beyond the aforementioned shear term. Heuristically, we follow Buckley and consider each ribbon to be composed of parallel infinitesimal strings of length $l$ and infinitesimal width $dr$, located a distance $r$ from the center line. Each string is fastened at one end to the fixed frame and at the other end to the torsion paddle, which is free to rotate through angle $\theta$. The imagined strings taken as a whole represent a very high aspect ratio ($l/h > 10^4$) membrane of thickness $h$ and width $w$ loaded in tension from residual stress $\sigma$. The stress is equivalent to a tensile force $T=\sigma A$, where $A=hw$. The tensile force per breadth of ribbon $T_0=T/w$ imposes a tensile load $T_0 dr$ along the length of each infinitesimal string. To analyze the torsional stiffness of the ribbon, we consider rotating the paddle through an angle $\theta$. As illustrated in Fig. \ref{fig2}, each infinitesimal string displaces vertically from equilibrium a distance $\Delta z \approx \theta r$ at the paddle end. It also elongates by an amount $\Delta l \approx \Delta z^2/(2l)$, yielding a (longitudinal) strain \begin{equation}\label{eq:geometricnonlinearity} \epsilon = \frac{\Delta l}{l} \approx \frac{1}{2}\frac{r^2\theta^2}{l^2} \end{equation} The quadratic dependence of strain on rotation angle is responsible for the apparent losslessness of the bifilar effect, since it implies that the restoring torque due to tension contains negligible contribution from deformation. To see this, note that the longitudinal stiffness per breadth of the ribbon is \begin{equation} k=\frac{EA}{wl} \end{equation} where $E$ is the modulus of elasticity. The change in tension of each string due to elongation is then \begin{equation} \Delta T_0=k dr \Delta l =\frac{EA dr}{w}\epsilon.
\end{equation} and the vertical restoring force applied by each string to the paddle (containing contributions from both tension $T_0$ and elongation $\Delta T_0$) is \begin{subequations}\begin{align} F_z &= (T_0 + \Delta T_0) dr \sin\alpha\\ & \approx (T_0 + \frac{EA }{w}\epsilon) \frac{r\theta dr}{l(1+\epsilon)}\\ & \approx T_0\frac{r\theta}{l}\mathrm{d}r + \frac{1}{2}\bigg(\frac{EA}{w}-T_0\bigg)\frac{r^3\theta^3}{l^3}\mathrm{d}r, \end{align}\end{subequations} where $\alpha\approx r\theta/l $ is the angle of each string from horizontal. To compute the total restoring torque on the paddle, we sum the contribution $\kappa_0 \approx F_z r$ from each string, remembering that they occur in pairs, one on each side of the neutral line: \begin{equation} \kappa_\sigma \approx \int_0^{w/2} 2 F_z r dr = \frac{T_0}{12} \frac{w^3\theta}{l} + \frac{1}{160}\bigg(\frac{EA}{w}-T_0\bigg)\frac{w^5\theta^3}{l^3} \end{equation} Substituting $\sigma h$ for $T_0$, the torsional stiffness is \begin{equation} k_\sigma = \frac{d\kappa_\sigma}{d\theta}\approx \frac{\sigma}{12} \frac{hw^3}{l} + \frac{3}{160}\bigg(E-\sigma\bigg)\frac{hw^5\theta^2}{l^3} \end{equation} and we see that we have recovered the result due to Buckley, with an additional term, arising from the longitudinal strain, that is nonlinear in the displacement $\theta$. The term that is nonlinear in $\theta$ is negligible in the usual small angle limit, but likely important for large deflections, which we hope to investigate. \subsection{Continuum mechanics model} We now derive Eq. \ref{eq:dissipationdilution} using a continuum mechanics model recently developed for nanomechanical resonators \cite{Federov2019_Generalized}, and compare the dilution factor for flexural and torsional modes of a rectangular beam. We find that the $Q$ factor of torsional modes should scale as the beam's aspect ratio squared, mirroring the behavior of ``soft-clamped'' flexural modes. Following \cite{Federov2019_Generalized}, consider an elastic solid with modulus $E = E_0(1+iQ_0^{-1})$ subject to a static stress field $\overline{\sigma}_{ij}$. Vibrations of the solid are described by the displacement field \begin{equation} U_i(\vec{r},t)=\overline{U}_i(\vec{r})+u_i(\vec{r},t) \end{equation} where $\overline{U}_i$ is the static deformation produced by $\overline{\sigma}_{ij}$ and \begin{equation} u_i(\vec{r},t)=\sum_n \phi_i^{(n)}(\vec{r})A_ne^{i\omega_n t} \end{equation} is the time-dependent vibration decomposed into normal modes with frequency $\omega_n$ and mode shape $\phi_i^{(n)}$.
The relationship between the stress and displacement fields is given by the strain tensor (here in Cartesian coordinates) \begin{equation} \epsilon_{ij}=\frac{1}{2}\left(\partial_i u_j + \partial_j u_i + \partial_i u_k \partial_j u_k \right) \end{equation} and the constitutive relation (Hooke's law) \begin{equation} \sigma_{ij} = E\epsilon_{ij} \end{equation} We wish to determine the quality factor $Q_n$ of mode $n$, defined as the ratio of the time-averaged energy stored in the mode $\langle W^{(n)} \rangle$ to the energy dissipated per cycle $\langle W_\t{diss}^{(n)} \rangle$ \begin{equation} Q_n \equiv \frac{\langle W^{(n)} \rangle}{\langle W_\t{diss}^{(n)} \rangle} \end{equation} Towards this end, identify the strain energy of the solid as \begin{equation}\label{eq:strainenergy} W(t)=\frac{1}{2}\int \sigma_{ij}(t)\epsilon_{ij}(t)dV \end{equation} and the power dissipated as \begin{subequations}\begin{align} P_\t{diss}(t)&=\int \sigma_{ij}(t)\dot{\epsilon}_{ij}(t)dV\\ &=\int \sigma_{ij}(t)\dot{\Delta\epsilon}_{ij}(t)dV \end{align}\label{eq:dissipation}\end{subequations} where \begin{equation} \Delta\epsilon_{ij}(t) = \epsilon_{ij}(t) - \overline{\epsilon}_{ij} \end{equation} is the time-dependent part of the strain tensor. Some manipulation of Eqs. \ref{eq:strainenergy}-\ref{eq:dissipation} gives \begin{equation} \langle W^{(n)} \rangle = E\int\left(\langle\overline{\epsilon}_{ij}\Delta\epsilon_{ij}^{(n)}\rangle+\frac{1}{2}\langle\Delta\epsilon_{ij}^{(n)}\Delta\epsilon_{ij}^{(n)}\rangle\right) dV \end{equation} and \begin{equation} \langle W_\t{diss}^{(n)} \rangle=Q_0^{-1}E\int \langle\Delta\epsilon_{ij}^{(n)} \Delta\epsilon_{ij}^{(n)}\rangle dV, \end{equation} yielding \begin{equation}\label{eq:DQ} \frac{Q^{(n)}}{Q_0} = 1+\frac{\int \overline{\epsilon}_{ij}\langle\Delta\epsilon^{(n)}_{ij}\rangle dV}{\tfrac{1}{2}\int \langle\Delta\epsilon^{(n)}_{ij}\Delta\epsilon^{(n)}_{ij}\rangle dV}\equiv D^{(n)}_Q \end{equation} Eq. \ref{eq:DQ} reveals that static strain $\overline{\epsilon}_{ij}$ gives rise to dissipation dilution ($D_Q>1$) in the presence of a geometric strain nonlinearity, $\langle\Delta\epsilon_{ij}\rangle>0$. In more physical terms, we can identify \begin{equation} \langle W_\sigma^{(n)}\rangle \equiv \int\overline{\sigma}_{ij}\langle \Delta\epsilon_{ij}(t)\rangle dV \equiv \frac{1}{2}k_\sigma^{(n)}A_n^2 \end{equation} as an effective lossless potential due to static stress and \begin{equation} \langle W_E^{(n)}\rangle \equiv \frac{1}{2}E\int \langle\Delta\epsilon^{(n)}_{ij}(t)\Delta\epsilon^{(n)}_{ij}(t)\rangle dV \equiv \frac{1}{2}k_E^{(n)}A_n^2 \end{equation} as an effective lossy potential due to elastic deformation, with associated spring constants $k_\sigma$ and $k_E$, respectively, and \begin{equation}\label{eq:generalizedDD} D_Q^{(n)}=1+\frac{\langle W_\sigma^{(n)}\rangle}{\langle W_E^{(n)}\rangle}=1+\frac{k_\sigma^{(n)}}{k_E^{(n)}} \end{equation} We now apply the above formalism to modes of a tensile-strained rectangular beam. \subsubsection{Flexural modes of a rectangular beam} As a well-studied base case, consider flexural modes of a doubly-clamped beam of thickness $h$, width $w$, and length $L\gg{h,w}$ oriented along the $y$, $x$, and $z$ axes, respectively.
The beam is subject to a static tensile stress $\overline{\sigma}_{zz} = \sigma$, yielding string-like flexural vibrations along the principal axis $y$ of the form \begin{equation}\label{eq:modeshape_string} \phi_y^{(n)}(z) = \sin(k_nz)+\phi_{y,\t{clamp}}^{(n)}(z) \end{equation} where $k_n = \pi n/L$ and $\phi_{y,\t{clamp}}(z)$ is a correction to the ideal string modeshape which ensures satisfaction of the boundary conditions $\phi_y(0)=\phi_y(L)=\partial_z\phi_y(0)=\partial_z\phi_y(L) = 0$. To compute $D_Q^{(n)}$, first recognize that only the axial component of the strain tensor is relevant \begin{equation} \Delta\epsilon_{zz} \approx \frac{\partial u_z}{\partial z}+\frac{1}{2}\left(\frac{\partial u_y}{\partial z}\right)^2. \end{equation} It is tempting to ignore the leading term on the grounds that the vibration is transverse to $\hat{z}$; however, the finite thickness of the beam introduces an axial strain due to curvature: \begin{equation} \frac{\partial u_z}{\partial z}\approx-\frac{\partial^2 u_y}{\partial z^2}y \end{equation} This term vanishes from $\langle W_\sigma \rangle$ (because $\langle u \rangle = 0$) and dominates $\langle W_E \rangle$, yielding \begin{subequations}\begin{align} \langle W_\sigma \rangle &= \frac{\sigma A}{2}\int \langle\left(\frac{\partial u_y}{\partial z}\right)^2\rangle dz\\ \langle W_E \rangle &= \frac{EI}{2} \int \langle\left(\frac{\partial^2 u_y}{\partial z^2}\right)^2\rangle dz \end{align}\end{subequations} where $A=wh$ and $I = wh^3/12$ are the area and area moment of inertia of the beam, respectively. Evaluating for modeshape $\phi^{(n)}_y(z)$ gives \begin{equation} k_\sigma^{(n)}= \sigma A \frac{ k_n^2L}{2} = \frac{2\sigma hw}{L}\left(\frac{\pi n}{2}\right)^2 \end{equation} and \begin{equation} k_E^{(n)} = k_{E,\t{free}}^{(n)} + k_{E,\t{clamp}}^{(n)} \end{equation} where \begin{equation} k_{E,\t{free}}^{(n)} = EI\frac{k_n^4L}{2} = \frac{2Ewh^3}{3L^3}\left(\frac{\pi n}{2}\right)^4 \end{equation} is the stiffness due to curvature at antinodes (the stiffness of a free-free beam) and \begin{equation} k_{E,\t{clamp}}^{(n)} = EI\int\left(\tfrac{\partial^2\phi_\t{clamp}^{(n)}}{\partial z^2}\right)^2dz \end{equation} is the stiffness due to curvature at the clamps. An approximate $\phi_\t{clamp}^{(n)}$ is given by smoothly transitioning to the mode shape of a cantilever with length $L_c = \sqrt{2EI/\sigma A}$ \cite{schmid2011damping}: \begin{equation} \phi^{(n)}_\t{clamp}(z) \approx -k_n L_c\left(\frac{z}{L_c}-\frac{z^2}{L_c^2}+\frac{z^3}{3L_c^3}\right), \end{equation} yielding \begin{subequations}\begin{align} k_{E,\t{clamp}}^{(n)} &\approx 2EI\int_0^{L_c}\left(\frac{\partial^2\phi_\t{clamp}^{(n)}}{\partial z^2}\right)^2dz\\ &=\frac{8EIk_n^2}{3L_c} = \sqrt{\frac{EI\sigma A}{9/32}}k_n^2\approx\frac{2wh^2}{L^2}\sqrt{E\sigma}\left(\tfrac{\pi n}{2}\right)^2 \end{align}\end{subequations} Collecting terms yields a dissipation dilution factor of \cite{Federov2019_Generalized,villanueva2014evidence,schmid2011damping} \begin{subequations}\begin{align} D_Q^{(n)}&=1+\frac{k_\sigma^{(n)}}{k_{E,\t{free}}^{(n)}+k_{E,\t{clamp}}^{(n)}}\\ &\approx 1+\left(\frac{E}{\sigma}\left(\frac{h}{L}\right)^2\frac{\pi^2 n^2}{12}+1.09\sqrt{\frac{E}{\sigma}}\frac{h}{L}\right)^{-1}\label{eq:DQtorsion} \end{align}\end{subequations} which is bounded by the ``soft-clamping'' limit \begin{equation} D_{Q,\t{SC}}^{(n)} =1+\frac{k_\sigma^{(n)}}{k_{E,\t{free}}^{(n)}}<\frac{\sigma}{E }\left(\frac{L}{h}\right)^2\frac{12}{\pi^2 n^2}.
\end{equation} \subsubsection{Torsional modes of a rectangular beam} We now consider torsion modes of a tensile-stressed rectangular beam, \mbox{which we assume take the soft-clamped form \cite{timoshenko1951theory}} \begin{equation}\label{eq:modeshape_torsion_cylindrical} \phi_\theta^{(n)}(z) = \sin(k_nz) \end{equation} where $\theta$ denotes the angle of rotation about the beam axis $z$. To compute $D_Q^{(n)}$, we use the same procedure as before; however, the contribution of shear stresses must be considered. Following the approach of Saint-Venant \cite{SaintVentant1856memoire} with $\phi_\theta\ll 1$ and ``warping function'' $W(x,y)$, assume that the mode shape can be expressed in Cartesian coordinates as \cite{chopin2019extreme,sapountzakis2013bars, love2013treatise} \begin{subequations}\label{eq:modeshape_torsion_cartesian}\begin{align} \phi_x^{(n)}(x,y,z) & = -y\phi_\theta^{(n)}(z)\\ \phi_y^{(n)}(x,y,z) & = x\phi_\theta^{(n)}(z)\\ \phi_z^{(n)}(x,y,z) & = W(x,y)(\phi_\theta^{(n)}(z))'_z \end{align}\end{subequations} with the associated strains \begin{subequations}\label{eq:axialstrain_torsion}\begin{align} \Delta\epsilon_{zz} &= W(x,y)\frac{\partial^2u_\theta}{\partial z^2}+\frac{1}{2}(x^2+y^2)\left(\frac{\partial u_\theta}{\partial z}\right)^2\\ \Delta \epsilon_{xz}&=\frac{1}{2}\left(\frac{\partial W}{\partial x}-y\right)\frac{\partial u_\theta}{\partial z}=\Delta \epsilon_{zx}\\ \Delta \epsilon_{yz}&=\frac{1}{2}\left(\frac{\partial W}{\partial y}+x\right)\frac{\partial u_\theta}{\partial z}=\Delta \epsilon_{zy}. \end{align}\end{subequations} The warping function of a thin beam ($h\ll w$) with a constant twist rate ($(u_\theta)''_z = 0$) is known to be $W(x,y)\approx -xy$, and we will use it here as an approximation to obtain \begin{subequations}\label{eq:torsionstrain_thinbeam}\begin{align} \Delta\epsilon_{zz} &\approx -xy\frac{\partial^2u_\theta}{\partial z^2}+\frac{1}{2}x^2\left(\frac{\partial u_\theta}{\partial z}\right)^2\\ \Delta \epsilon_{xz}&\approx -y\frac{\partial u_\theta}{\partial z}\\ \Delta \epsilon_{yz}&\approx 0. \end{align}\end{subequations} (Note we have dropped the term $\tfrac{1}{2}y^2 ((u_\theta)'_z)^2$ from $\Delta\epsilon_{zz}$, as it contributes negligibly to the total strain energy for $h\ll w$.) By inspection, $\Delta\epsilon_{zz}$ is identical to that for a flexural mode; however, rotation gives rise to an additional shear strain $\Delta\epsilon_{xz}=-y(u_\theta)'_z$.
Only the nonlinear term in $\Delta\epsilon_{zz}$ contributes to $\langle W_\sigma\rangle$, whereas the shear term dominates $\langle W_E\rangle$, yielding \begin{subequations}\begin{align} \langle W_\sigma \rangle &= \frac{\sigma}{2}\frac{hw^3}{12}\int \langle\left(\frac{\partial u_\theta}{\partial z}\right)^2\rangle dz\\ \langle W_E \rangle &= \frac{E}{2}\frac{h^3w}{12}\int\langle 2\left(\frac{\partial u_\theta}{\partial z}\right)^2 + \frac{w^2}{12}\left(\frac{\partial^2 u_\theta}{\partial z^2}\right)^2\rangle dz \end{align}\end{subequations} Evaluating for modeshape $\phi_\theta^{(n)}(z)$ gives \begin{equation} k_\sigma^{(n)} = \sigma \frac{hw^3}{12} \frac{k_n^2L}{2} = \frac{\sigma hw^3}{6L}\left(\frac{\pi n }{2}\right)^2 \end{equation} and \begin{equation} k_E^{(n)} = k_{E,\t{free-shear}}^{(n)} + k_{E,\t{free-bend}}^{(n)} \end{equation} where \begin{equation} k_{E,\t{free-shear}}^{(n)} = E \frac{h^3w}{6} \frac{k_n^2L}{2} = \frac{E}{3}\frac{h^3w}{L} \left(\frac{\pi n }{2}\right)^2 \end{equation} is the stiffness due to shear deformation and \begin{equation} k_{E,\t{free-bend}}^{(n)} = E \frac{h^3w^3}{(12)^2} \frac{k_n^4L}{2} = \frac{3E}{2} \left(\frac{hw}{3L}\right)^3 \left(\frac{\pi n}{2}\right)^4 \end{equation} is the stiffness due to distributed curvature. Collecting terms yields a dissipation dilution factor of \begin{subequations}\begin{align} D_Q^{(n)}&=1+\frac{k_\sigma^{(n)}}{k_{E,\t{free-shear}}^{(n)}+k_{E,\t{free-bend}}^{(n)}}\\ &=1+\frac{\sigma}{2E}\left(\frac{w}{h}\right)^2\left(1+\frac{1}{6}\left(\frac{\pi n w}{2L}\right)^2\right)^{-1} \end{align}\end{subequations} which exhibits ``soft-clamped'' scaling for $w\lesssim(L/n)$. \vspace{-3mm} \section{Miscellaneous properties of torsion modes} \vspace{-3mm} In this section we highlight various scalar properties of the torsion modes relevant to the main text (e.g. effective mass), building on the preceding continuum mechanics model. \subsection{Resonance frequency and moment of inertia} Torsion mode frequencies $\omega_n$ can be predicted from the mode stiffness $k^{(n)}$ by the formula \begin{equation} k^{(n)} = k_\sigma^{(n)} + k_E^{(n)} = I_n\omega_n^2 \end{equation} where \begin{equation} \label{eq:momentOfInteria} I_n = \frac{\int{(\phi_\theta^{(n)})^2 r_\perp^2 dm}}{(\phi_{\theta,\t{max}}^{(n)})^2} = \frac{\rho (hw^3+h^3w)}{12}\frac{L}{2}\approx \frac{\rho Lhw^3}{24} \end{equation} is the effective moment of inertia of mode $n$. For high aspect ratio beams, $\{w,h\}\ll L$, the resonance frequency (in Hz) of a torsion mode is \begin{subequations}\begin{align} \frac{\omega_n}{2\pi}&\approx \frac{1}{2\pi}\sqrt{\frac{k_\sigma^{(n)} + k_{E,\t{free-shear}}^{(n)}}{I_n}}\\ &=\frac{ n}{2L}\sqrt{\frac{\sigma}{\rho}\left(1+\frac{ 4E }{\sigma }\frac{h^2}{w^2}\right)} \end{align}\end{subequations} which is identical to that of a flexural mode in the limit $h\rightarrow 0$. \subsection{Effective mass} The effective mass of a torsion mode (defined relative to the point of maximum displacement $\phi_y^\t{max}$) is given by \begin{subequations}\begin{align} m_n &= \frac{\int (\phi_y^{(n)}(x,y,z))^2 dm}{(\phi_{y,\t{max}}^{(n)})^2}\\ &=\rho\frac{\int (x\phi_\theta^{(n)}(z))^2 dxdydz}{(x_\t{max}\phi_{\theta,\t{max}}^{(n)})^2}\\ &=\rho \frac{hw}{3}\int_0^L \sin^2(k_n z) dz\\ \label{eq:effectiveMass} &=\rho \frac{hwL}{6} = \frac{1}{6}m_\t{phys} \end{align}\end{subequations} which is notably 3 times smaller than that of a flexural mode.
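For later reference, a direct numerical evaluation of Eqs. \ref{eq:momentOfInteria} and \ref{eq:effectiveMass} for an example geometry (the same assumed Si$_3$N$_4$ dimensions used in the next subsection) is sketched below.
\begin{verbatim}
rho = 2700.0                        # Si3N4 density, kg/m^3 (assumed)
L, w, h = 7e-3, 100e-6, 75e-9       # length, width, thickness, m (assumed)

I_1 = rho * L * h * w**3 / 24       # effective moment of inertia, fundamental mode
m_1 = rho * h * w * L / 6           # effective mass, one sixth of the physical mass

print(I_1)                          # ~5.9e-20 kg m^2
print(m_1 * 1e12)                   # ~24 ng
\end{verbatim}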
The effective moment of inertia and effective mass are related by \begin{equation} \label{eq:momentOfInteria2} I_n = m_n r_{\perp,\t{max}}^2 = m_n (w/2)^2. \end{equation} \subsection{Zero-point and thermal displacement} In the limit of high strain ($k_\sigma^{(n)}\gg k_E^{(n)}$), the zero point angular displacement of a torsion mode is given by \begin{equation} \theta^{(n)}_\t{zp} = \sqrt{\frac{\hbar}{2 I_n \omega_n}} =\sqrt{\frac{12\hbar }{ h w^3 \pi \sqrt{\rho\sigma}}} \end{equation} The resonant zero-point displacement spectral density---Eq. 4 in the main text---is given by \begin{subequations}\begin{align} S_{\theta}^\t{zp,(n)} &= \frac{ 4(\theta^{(n)}_\t{zp})^2}{\gamma_n} = \frac{2\hbar Q_n}{I_n \omega_n^2}\\ &\le\frac{24L\hbar Q_0[h]}{ h^3 w \pi^2 E}\propto \frac{L}{h^2 w}\label{eq:szp} \end{align}\end{subequations} where $\gamma_n = \omega_n/Q_n$ is the mechanical damping rate and to obtain Eq. \ref{eq:szp} we've assumed the dissipation dilution factor in Eq. \ref{eq:DQtorsion} and the surface loss scaling $Q_0[h]\propto h$. From these expressions we obtain the thermal displacement \begin{equation} \theta^{(n)}_\t{th} = \sqrt{2n_\t{th}^{(n)}}\theta^{(n)}_\t{zp} = \sqrt{\frac{k_B T}{I_n \omega_n^2}} \end{equation} and resonant thermal displacement spectral density \begin{subequations}\begin{align} S_{\theta}^\t{th,(n)} &= \frac{ 4(\theta^{(n)}_\t{th})^2}{\gamma_n} = \frac{k_B T Q_n}{I_n \omega_n^3}\\ &\le\frac{24L^2 k_B T Q_0[h]}{ h^3 w \pi^3 E \sqrt{\sigma/\rho}}\propto \frac{L^2}{h^2 w}. \end{align}\end{subequations} where $n_\t{th}^{(n)} = k_B T/\hbar\omega_n$ is the thermal mode occupation. \subsection{Thermal torque sensitivity} In the limit of high strain, the thermal torque sensitivity of a torsion mode is given by \begin{subequations}\begin{align} S_{\tau}^{\t{th},(n)}&=4k_B T I_n\gamma_n\\ &\le\frac{ \pi E k_B T h^3 w }{ 3 \sqrt{\sigma/\rho}Q_0[h]}\propto h^2w \end{align}\end{subequations} using the surface loss scaling $Q_0[h]\propto h$. \subsection{Example: A Si$_3$N$_4$ nanoribbon} To give an example relevant to the main text, consider a $\{L,w,h\}=\{7\,\t{mm},\,100\,\mu\t{m},\,75\,\t{nm}\}$ Si$_3$N$_4$ beam. Predicted values for the fundamental torsional mode are \begin{equation} \frac{\omega_1}{2\pi} = 40\;\t{kHz}\cdot\frac{7\,\t{mm}}{L}\sqrt{\frac{2700\,\tfrac{\t{kg}}{\t{m}^3}}{\rho}\frac{\sigma}{0.85\,\t{GPa}}} \end{equation} and \begin{equation} Q_1 =1.4\cdot 10^7\cdot\frac{75\,\t{nm}}{h}\left(\frac{w}{100\,\mu\t{m}}\right)^2\frac{\sigma}{0.85\,\t{GPa}}\frac{250\,\t{GPa}}{E}\frac{Q_0[h]/h}{60/\t{nm}}. \end{equation} We measure $\omega_1/2\pi = 40\;\t{kHz}$ and $Q_1 = 1.6\times10^7$, in good agreement with these predictions.
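The two predicted values quoted above follow directly from the frequency and dilution expressions of the preceding sections; a minimal numerical check, using the stated parameters, is sketched below.
\begin{verbatim}
import math

L, w, h = 7e-3, 100e-6, 75e-9            # m
rho, sigma, E = 2700.0, 0.85e9, 250e9    # kg/m^3, Pa, Pa
Q0 = 60 * h / 1e-9                       # intrinsic Q, assuming Q0[h] = 60 h/nm

f1 = 1 / (2 * L) * math.sqrt(sigma / rho * (1 + 4 * E / sigma * (h / w)**2))
Q1 = Q0 * (1 + sigma / (2 * E) * (w / h)**2 / (1 + (math.pi * w / (2 * L))**2 / 6))

print(f1)    # ~4.0e4 Hz
print(Q1)    # ~1.4e7
\end{verbatim}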
Other predicted values are (using $\rho$, $E$, $\sigma$, $Q_0$ as above): \begin{subequations}\begin{align} I_1 & = 5.9\cdot 10^{-20}\,\t{kg\cdot m^2}\cdot\frac{L}{7\,\t{mm}}\frac{h}{75\,\t{nm}}\left(\frac{w}{100\,\mu\t{m}}\right)^3\\ m_1 & = 24\,\t{ng}\cdot\frac{h}{75\,\t{nm}}\frac{w}{100\,\mu\t{m}}\frac{L}{7\,\t{mm}}\\ S_{\theta}^{\t{zp},(1)}&=\left(6.7\cdot10^{-10}\frac{\t{rad}}{\sqrt{\t{Hz}}}\right)^2\cdot\frac{L}{7\,\t{mm}}\left(\frac{75\,\t{nm}}{h}\right)^2\frac{100\,\mu\t{m}}{w}\\ S_{\theta}^{\t{th},(1)}&=\left(1.2\cdot10^{-5}\frac{\t{rad}}{\sqrt{\t{Hz}}}\right)^2\cdot\left(\frac{L}{7\,\t{mm}}\frac{75\,\t{nm}}{h}\right)^2\frac{100\,\mu\t{m}}{w}\\ S_{\tau}^{\t{th},(1)}&=\left(4.2\,\frac{\t{zN}\cdot\t{m}}{\sqrt{\t{Hz}}}\right)^2\left(\frac{h}{75\,\t{nm}}\right)^2 \frac{w}{100\,\mu\t{m}} \end{align}\end{subequations} \newpage \section{Fabrication and numerical modeling} In this section, we provide details on the fabrication and modeling of Si$_3$N$_4$ nanobeams described in the main text. The basic fabrication process flow is shown in Fig. \ref{fig:processflow}. Also shown is the mask pattern defining the shape of the beam with and without a central paddle for mass-loading. \subsection{Fabrication} \subsubsection{Unloaded Si$_3$N$_4$ nanobeams} Fabrication begins by coating a 1.5-$\mu$m-thick S1813 positive-tone photoresist on a double-sided, 100-nm-thick Si$_3$N$_4$-on-silicon wafer. The resist on one side of the wafer (the front side) is patterned in the shape of a diagonal beam using a photolithography system (MLA 150), while the resist on the other side protects the wafer from handling scratches. The pattern is transferred to the Si$_3$N$_4$ thin film using fluorine-based (Ar + SF$_6$) reactive ion dry etching. The remaining resist is then removed and a fresh resist layer is applied to protect the front side of the wafer. The back side is then patterned with square windows while making sure it is aligned with the front side~\cite{norte2016mechanical}. After dry etching the back side pattern, the remaining resist is removed and the wafer is cleaned using oxygen plasma to remove any lingering resist residues. A thick layer of resist is then coated on both sides and the wafer is diced into $12\times12$ mm$^2$ chips. The chips are then cleaned using a 10-second dip in hydrofluoric acid (HF) followed by a DI water and isopropanol (IPA) rinse~\cite{reinhardt2016ultralow}. Chips are then mounted onto a custom Teflon holder to secure them in a vertical orientation. The assembly is then etched in a potassium hydroxide (KOH) bath at 85 $^\circ$C for 21 hours in order to remove the silicon in the patterned region and subsequently release the beam. The released structure is dried using a gradual dilution process which includes iteratively replacing KOH with DI water followed by a 10 min HF dip and then an IPA and methanol rinse~\cite{norte2016mechanical}. Finally, the chips with wider beams are dried in air and the ones with narrower beams are dried using a critical point dryer. \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{fig_supplement/Fab_processflowV3.pdf} \caption{\textbf{Fabrication process flow and mask patterns.} (a) Process flow: (1) double-sided LPCVD Si$_3$N$_4$-on-Si wafer; (2) front side photolithography using S1813 photoresist, followed by Si$_3$N$_4$ dry etch; (3) backside pattern, Si$_3$N$_4$ dry etch, and sample cleaning using O$_2$ plasma and HF dip; (4) KOH wet etch and dry release, resulting in beam with or without mass-loading depending on mask pattern.
(b) Mask patterns for unloaded (left) and mass-loaded (right) beams.} \label{fig:processflow} \end{figure} \subsubsection{Mass-loaded Si$_3$N$_4$ nanobeams (torsion microbalances)} Mass-loaded nanobeams were fabricated using the same procedure as above, except that the beam pattern includes a $600\times600\;\mu\t{m}^2$ pad in the center. Due to the large pad size and anisotropic etching of Si along the $\langle100\rangle$ crystal plane, in this case there remains an approximately $100$-$\mu\t{m}$-thick Si mass suspended beneath the pad region at the end of the wet etch. \subsection{Numerical Modeling} To predict the mode shapes and eigenfrequencies of non-trivial beam geometries---in particular, the torsion modes of the filleted beams shown in Fig. 2 of the main text---we performed finite element method (FEM) based simulations using the COMSOL 5.4 structure mechanics module. We used the plate physics interface, which allowed us to simulate resonator geometries with large aspect ratios. Unlike the analytical model (Eq. 1 in the main text), which assumes a uniform stress field, we computed the non-uniform stress distribution using a pre-stressed eigenfrequency study. The study is carried out in two steps: the first step is to calculate the von Mises stress distribution due to in-plane stress in the Si$_3$N$_4$ thin film; the second step is to compute eigenfrequencies and their corresponding mode shapes using the stress distribution from step 1~\cite{sadeghi2019influence}. All simulations were performed using triangular mesh settings, with dense meshing around the curves and edges. For dissipation dilution models (Sec. \ref{sec:dd}), mesh size was reduced until results changed by less than $1\%$ with successive reduction. \begin{figure}[h] \vspace{-4mm} \includegraphics[width=0.95\columnwidth]{fig_supplement/COMSOLPlot_AppendixV2.pdf} \caption{Simulation of quality factors for 75 nm thick nanobeams with (green points) and without (blue points) filleting, assuming surface loss $Q_0[h]=60h/\t{nm}$. The red curve is the lumped mass model given by Eq. 1 in the main text. Inset: ``wrinkling'' of the rectangular beam at large widths, coinciding with reduced quality factor.} \vspace{-2mm} \label{fig:DD_COMSOL} \end{figure} \subsubsection{Dissipation dilution of filleted vs rectangular nanobeams: Buckling instabilities}\label{sec:dd} The numerical $Q$ model (dashed line) in Fig. 2d of the main text was generated using COMSOL-simulated modeshapes and computing the dissipation dilution factor ($Q/Q_0$) as the ratio of the total kinetic energy stored in the mode (COMSOL function plate.Wk\_tot) to the total elastic strain energy (COMSOL function plate.Ws\_tot). From these simulations, we predict that the $Q$ factor of torsional modes is highly sensitive to the beam aspect ratio and fillet geometry in the clamps, due to buckling instabilities \cite{kudrolli2018tension,green1937elastic}. To see this, in Fig. \ref{fig:DD_COMSOL}, we compare the lumped mass model in Fig. 2 (red line) to simulations for filleted beams (green points) and rectangular beams (blue points) of different widths. For the filleted beams, we assume the diagonal geometry of our actual devices (Fig. \ref{fig:processflow}b), which have a fillet radius of $r = 100\;\mu\t{m}$ for beam widths $w<100\;\mu\t{m}$ and $r = w$ for widths $w>100\;\mu\t{m}$. For rectangular beams, we assume a straight-edged clamp and $r=0$ for all widths.
Both simulations match the lumped mass model well for small widths; however, beyond a critical width, the dissipation dilution factor drops. Inspection of the mode shape reveals that this coincides with ``wrinkling'' of the beam due to buckling instabilities \cite{kudrolli2018tension}. The use of fillets (ours are inspired by the designs in \cite{sadeghi2019influence}) appears to have the effect of ``pulling out'' the wrinkles, allowing for higher aspect ratios and concomitantly higher $Q$ factors. \newpage \section{Ringdown characterization of nanobeams} In this section we provide details on the ringdown measurements shown in Fig. 2 of the main text. \vspace{-2mm} \subsection{\label{sec:level1} Experimental Setup} \begin{figure}[h] \vspace{-4mm} \centering \includegraphics[width=0.9\columnwidth]{fig_supplement/ringdownsetup.pdf} \caption{Diagram of the optical setup for ringdown measurements. BPD = balanced photodetector, LD = laser diode, PZT = piezo-electric transducer, PBS = polarizing beam splitter, WDM = wavelength division multiplexer, SPD = split photodiode used in the optical lever configuration.} \label{fig:ringdownexperimentalsetup} \vspace{-2mm} \end{figure} For the ringdown measurements shown in Fig. 2 of the main text, Si$_3$N$_4$ nanobeams were housed inside a Kimball Physics 2.75" spherical cube ultra-high-vacuum (UHV) chamber with a typical base pressure of $4\times 10^{-8}$ mbar. The optical setup is shown in Fig.~\ref{fig:ringdownexperimentalsetup} and consists of two readout schemes operating at a nominal wavelength of 850 nm: homodyne interferometry and optical lever. We alternated between these schemes at different stages of the experiment and obtained the same results. The balanced photodetector (BPD) used for homodyne readout was a Newport 1807. The split photodetector (SPD) was a Thorlabs PDQ80. In addition to the 850 nm probe field (provided by a Titanium-Sapphire laser, M-Squared SolsTiS), light from a 650 nm diode laser was coupled into the setup via a dichroic fiber beamsplitter (WDM). This laser was used for alignment and radiation pressure actuation. \subsection{Mode Identification} Before performing ringdowns, flexural and torsional modes were identified by comparing resonance peaks in broadband thermal noise spectra to COMSOL simulations. An example of a thermal noise spectrum is shown in Fig. \ref{fig:higherordermodes}. For the beams studied in the main text, the fundamental torsional mode frequency was predicted to be $\sim1-10\%$ higher than the fundamental flexural mode frequency. Measured values were in good agreement, as shown in Fig. \ref{fig:modefrequencies} by plotting measured and simulated mode frequency versus beam width, for both the first and second order flexural and torsional modes. \begin{figure}[h!] \centering \includegraphics[width=0.9\columnwidth]{fig_supplement/modefrequencies.pdf} \caption{Nanobeam vibrational mode frequency versus beam width. Solid and open markers are measurements for the first-order (circles) and second-order (diamonds) torsional and flexural modes, respectively.
Solid and dashed lines are COMSOL models for the first-order torsional and flexural modes, respectively.} \vspace{-2mm} \label{fig:modefrequencies} \end{figure} \subsection{Ringdown Measurements} Ringdown measurements were performed using the 650 nm diode laser as a radiation pressure actuator\footnote{We note that for torsion modes, the efficiency of the radiation pressure drive was found to be highly sensitive to the alignment of the optical beam.} and either the homodyne or optical lever measurement for readout. The diode laser was intensity modulated via its current driver using an arbitrary waveform generator (National Instruments PXI 5106) synchronized with a digitizer (National Instruments PXI 5122) recording the photocurrent. For each measurement, the drive frequency was swept in small increments across the mechanical resonance $\omega_n$ while monitoring the photocurrent power spectrum in a window $\Delta\omega\gg \omega_n/Q_n$ centered around $\omega_n$. When the power rose above a nominal threshold (due to mechanical excitation), the drive beam was shuttered off and the free decay of the mode energy was recorded by tracking the power spectrum as a function of time. \subsubsection{Investigation of photothermal heating} For ringdowns of unloaded beams in Fig. 2 of the main text, a typical probe power of 10-50 $\mu$W was used. We observed that at these powers, the ringdown times were not affected by photothermal damping. For example, in Fig. \ref{fig:photothermalheatingringdown}, we show ringdowns of the fundamental torsional mode of a 400 $\mu$m wide beam for probe powers varying from 100 to 500 $\mu$W, and find that the inferred $Q$ factor ($Q\approx 77\times 10^6$) remains unchanged to within a few percent. \begin{figure}[h] \label{fig:photothermalheatingringdown} \centering \includegraphics[width=0.9\columnwidth]{fig_supplement/ringdownVpower.pdf} \caption{Ringdowns of the fundamental torsion mode of a 400 $\mu$m nanobeam ($\omega_1 = 2\pi\cdot 52.5$ kHz) with different probe powers.} \vspace{-3mm} \end{figure} \subsubsection{Comparison of torsional and flexural modes} \begin{figure}[b] \centering \includegraphics[width=0.9\columnwidth]{fig_supplement/flexural_modes.pdf} \caption{Compilation of quality factors for flexural and torsional modes with different widths and thicknesses.} \label{fig:flexuretorsioncomparison} \end{figure} As mentioned in the main text, we recorded $Q$ factors for both flexural and torsional modes, and observed qualitatively different scaling versus beam width and thickness, consistent with ``hard-clamping'' versus ``soft-clamping.'' A full compilation of measurements is shown in Fig. \ref{fig:flexuretorsioncomparison}. Unlike torsional modes (solid markers), flexural modes (open markers) were observed to have roughly constant $Q$ versus width and thickness, as expected for hard-clamping (dashed lines, corresponding to Eq. S36). The measured $Q$ factors were roughly an order of magnitude lower than predicted, and had significantly larger statistical spread than for torsional modes. We conjecture that this may be due to larger coupling of flexural modes to low $Q$ modes of the underlying Si frame. \subsubsection{Investigation of viscous damping} We investigated the possibility that our measurements may be limited by gas damping in an attempt to explain the apparent rolloff in nanobeam quality factor evident in Fig. 2d of the main text for widths $\geq 400\;\mu$m.
Toward this end, we compared the measured quality factors to a model for a nanobeam in the free molecular flow damping regime, a regime for which the mean free path is longer than the largest dimension of the resonator, given by \cite{verbridge2008gasdamping}: \begin{equation} \label{eq:gasdamping} Q_{\mathrm{gas}} = \frac{\rho t \Omega_m}{4}\sqrt{\frac{\pi}{2}}\sqrt{\frac{RT}{M}}\frac{1}{P} \end{equation} In this model we assume the density of Si$_3$N$_4$ to be $\rho = 2700$ kg/m$^3$, a nanobeam thickness $t=75$ nm, $T=298$ K, $M\approx 0.03$ kg/mol (molar mass of air), and a total pressure $P\approx4\times10^{-8}$ mbar. As shown in Fig. \ref{fig:gasdamping}, the limit on quality factor imposed by the viscous damping model is at least an order of magnitude greater than the measured values of the fundamental modes. We also show here that second-order modes are more robust to gas damping than their first-order counterparts. \begin{figure}[ht] \centering \includegraphics[width=1\columnwidth]{fig_supplement/Gas_Damping.pdf} \caption{Quality factors of nanobeam modes versus resonance frequency, overlaid with a model of the quality factor of a nanobeam limited by gas damping in the free molecular flow regime for $P\approx4\times10^{-8}$ mbar. Markers denote the same modes as in Figs. S5 and S7.} \label{fig:gasdamping} \end{figure} To further this investigation, an ExTorr XT100 quadrupole residual gas analyzer (RGA) outfitted with a hot cathode Bayard/Alpert (B/A) ion gauge was used to determine the relative gas composition in the science chamber (for species with masses < 100 amu) and to confirm the pressure inferred by monitoring the current of the test chamber's Varian Star-Cell 55 ion pump with the direct reading from the ExTorr's ion gauge. We found that the ion pump's inferred pressure ($P \approx4\times 10^{-8}$ mbar) compares favorably with the total pressure reading given by the B/A ($P = 4.1\times 10^{-8}$ mbar) and that the dominant gas species in the science chamber are atomic hydrogen (H), molecular hydrogen (H$_2$), and nitrogen (N$_2$), all with partial pressures $p \approx 6.7\times10^{-9}$ mbar. Other significant partial pressures present include water vapor, carbon dioxide, carbon-12, and small traces of 2-propanol and other organic solvents (used to clean the conflat flanges of the UHV chamber). Gas species in the mass spectrum were identified using the molecular weight search function of the NIST Chemistry WebBook, SRD 69. However, gas damping alone does not appear to be responsible for the reduction in quality factor that we observe (as compared with the lumped mass model for $w\geq 400\;\mu$m). When we include the finite element model, which accounts for the buckling instability discussed in the numerical modeling section, the culprit becomes apparent. We argue that both gas damping and buckling instabilities conspire to limit the ultimate quality factor of a nanobeam to $Q<1\times10^{9}$, shown in Fig. \ref{fig:FEMgasdamping}. \begin{figure}[ht] \centering \includegraphics[width=1.05\columnwidth]{fig_supplement/FEM_Gas_Damping.pdf} \caption{Quality factor versus width for the lumped mass model, finite element model, and finite element model including viscous damping effects. Red dots correspond to the fundamental torsion modes of a nanobeam.} \label{fig:FEMgasdamping} \end{figure} \newpage \section{Quantum-limited optical lever: Theory} In this section, we derive the shot-noise-limited angular resolution of an optical lever measurement, given by Eq.
2 in the main text: \begin{equation}\label{eq:SNLSI} S_{\theta}^{\textrm{imp}} \ge \frac{1}{w_0^2} \frac{\hbar c \lambda}{8 P}. \end{equation} As shown in Fig. \ref{fig:OLtheory}, an optical lever is formed by reflecting a laser beam from a test surface onto a split photodiode located a distance $z$ away. A small angular displacement of the surface $\theta$ results in a lateral displacement $x = 2\theta z$ of the laser beam on the photodiode. The resulting photocurrent is proportional to the split power difference \begin{equation} \label{eq:sensitivityIntegral} \Delta P(x) = \int_{-\infty}^{x} I(x',y') dx'dy' - \int_{x}^{\infty} I(x',y') dx'dy' \end{equation} where $I(x',y')$ is the intensity of the laser field in the $x'$--$y'$ plane of the photodiode and $x'=0$ is the photodiode mid-line. \begin{figure}[b] \vspace{-3mm} \centering \includegraphics[width=0.6\columnwidth]{fig_supplement/Fig3_supp.pdf} \caption{Schematic of optical lever measurement. Angular displacement $\theta$ of a test surface produces a deflection $x = 2\theta z$ of a laser field on a split photodiode located a distance $z$ away.} \label{fig:OLtheory} \vspace{-5mm} \end{figure} To obtain Eq. \ref{eq:SNLSI}, we let the laser be in a TEM$_{00}$ Gaussian mode propagating normal to the photodiode with total power $P$, wavelength $\lambda$, waist size $w_0$, and waist location coinciding with the test surface\footnote{A careful analysis reveals this to be the optimal choice \cite{putman1992}.}, such that \begin{equation} I(x, y,z) = \frac{2 P}{\pi w(z)^2} \exp \left[ \frac{-2 (x^2 + y^2)}{w(z)^2} \right], \end{equation} where \begin{equation} w(z)=w_0\sqrt{1+z^2/z_0^2} \end{equation} is the radius of the beam on the photodiode and $z_0 = \pi w_0^2/\lambda$ is the beam's Rayleigh length. Evaluating Eq. \ref{eq:sensitivityIntegral} yields the knife-edge signal \begin{equation} \Delta P (x,z) = P\;\t{Erf}\left[\frac{\sqrt{2}x}{w(z)}\right] \end{equation} which has a lateral displacement sensitivity of \cite{Treps_Quantum_2003} \begin{equation} \frac{\partial \Delta P}{\partial x}(x,z) = \frac{P}{w(z)}\sqrt{\frac{8}{\pi}} e^{-\frac{2x^2}{w(z)^2}}\approx \frac{ P}{w(z)}\sqrt{\frac{8}{\pi}} \end{equation} and therefore an angular displacement sensitivity (referring to the displacement of the test surface, $\theta = x/(2z)$) of \begin{equation} \label{eq:OLSensitivity} \frac{\partial \Delta P}{\partial \theta}(z) \approx \frac{2 P z}{w(z)}\sqrt{\frac{8}{\pi}} \end{equation} in the limit of small displacements $x\ll w(z)\ll z$. Eq. \ref{eq:OLSensitivity} appears to suggest that arbitrarily high sensitivity can be achieved by increasing the ``lever arm'' $z$; however, diffraction counterbalances the lever arm in the far field \begin{equation} \frac{w(z)}{z}\xrightarrow[z\gg z_0]{} \frac{\lambda}{\pi w_0}\equiv\theta_0 \end{equation} so that the sensitivity of the optical lever is bounded above by \begin{equation}\label{eq:OLsensitivity} \frac{\partial \Delta P}{\partial \theta}\xrightarrow[z\gg z_0]{} \frac{2P }{\theta_0}\sqrt{\frac{8}{\pi}} \end{equation} where $\theta_0$ is the beam diffraction angle. \begin{figure}[b] \includegraphics[width=0.95\columnwidth]{fig_supplement/deflectionSNL_2.pdf} \caption{Shot noise limited resolution versus lever arm length for different waist sizes. Power is fixed at 1 mW.} \label{fig:SNL} \end{figure} To obtain Eq. 2 in the main text, we compare Eq.
\ref{eq:OLsensitivity} to the fluctuations in optical power due to shot noise, here expressed as a single-sided power spectral density (and assuming the laser is in a coherent state): \begin{equation} S_{P}^{\textrm{shot}} = \frac{4\pi\hbar c}{\lambda}P. \end{equation} The angular displacement resolution (imprecision) is defined as the shot-noise-equivalent angular displacement \begin{equation}\label{eq:OLshotresolution} S^{\t{imp}}_{\theta} = \frac{1}{\eta}\left(\frac{\partial \Delta P}{\partial \theta}\right)^{-2} S_{P}^{\textrm{shot}} \xrightarrow[z\gg z_0]{} \frac{1}{w_0^2} \frac{\hbar c \lambda}{8 \eta P} \end{equation} where $\eta\le 1$ is a parameter characterizing the measurement efficiency (including e.g. the photodetector efficiency). Equation \ref{eq:OLshotresolution} implies that optical levers with larger waist sizes achieve better shot-noise-limited resolution, at the expense of having to place the detector farther from the sample. We visualize this in Fig. \ref{fig:SNL} by plotting $S^{\t{imp}}_{\theta}$ for optical levers with different waist sizes $w_0$ and fixed power $P=1$ mW, as a function of surface-detector distance (``lever arm'') $z$. \newpage \section{Quantum-limited optical lever: Methods} In this section we give details on the high resolution optical lever measurement shown in Fig. 2 of the main text, including the experimental setup, calibration method, and evidence for the absence of photothermal heating, shot noise as the primary noise source, and off-resonant thermal noise (of higher order vibrational modes) as the technical noise limit. \subsection{Experimental setup} The experimental setup for Fig. 2 was the same as described for ringdown measurements, except the optical lever field was aligned normal to the sample in an autocollimating configuration (using a PBS and a quarter waveplate as shown in Fig. \ref{fig:OLMeasurementSetup}) and the commercial split photodetector (Thorlabs PDQ80) was replaced by a custom split detector utilizing a straight-edged ``splitting'' mirror (SM) and a low noise balanced photodetector (Newport 1807). This setup allowed us to achieve shot noise limited performance with up to several milliwatts of optical power reflected from the sample. \begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{fig_supplement/OLMeasurementSetup_2.pdf} \caption{Optical lever setup for measurement in Fig. 2 of main text. PMF = polarization maintaining fiber, L1 = lens focusing light out of fiber, HWP = half-wave plate, PBS = polarizing beam splitter, QWP = quarter-wave plate, TB = torsion beam, SM = splitting mirror, BPD = balanced photodetector, SA = spectrum analyzer.} \label{fig:OLMeasurementSetup} \vspace{-4mm} \end{figure} \subsection{Calibration}\label{sec:Calbration} The measurement shown in Fig. 2 was calibrated by fitting the photocurrent spectrum, with detector noise subtracted, to a noise model including the thermal motion of the torsion beam and imprecision noise: \begin{equation} \label{eq:totalNoiseModel} S_{\theta}[\omega] = S_{\theta}^{\textrm{th}}[\omega] + S_{\theta}^{\textrm{imp}}[\omega]. \end{equation} The imprecision noise is well approximated by white noise around resonance.
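For orientation, the shot-noise-limited floor implied by Eq. \ref{eq:OLshotresolution} can be estimated for representative probe parameters of the measurements described below ($\lambda = 850$ nm, $P = 4$ mW, $w_0 = 200\;\mu$m); unit measurement efficiency is an assumption in this sketch.
\begin{verbatim}
import math

hbar, c = 1.0546e-34, 2.998e8
lam, P, w0, eta = 850e-9, 4e-3, 200e-6, 1.0   # eta = 1 is an assumption

S_imp = hbar * c * lam / (8 * eta * P * w0**2)   # shot-noise-limited angular
                                                 # imprecision, far-field limit
print(S_imp)                 # ~2.1e-23 rad^2/Hz
print(math.sqrt(S_imp))      # ~4.6e-12 rad/sqrt(Hz)
\end{verbatim}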
For thermal noise we assume the angular displacement spectrum of a single mode, structurally damped oscillator \cite{gonzalez_brownian_1995}: \begin{equation} \label{eq:TNM} S_{\theta}^{\textrm{th}}[\omega] = \frac{4 k_B T \omega_1 / (I_1 Q_1)} {(\omega_1^2 - \omega^2)^2 + \omega_1^2 \omega^2/Q_1^2}, \end{equation} where $k_B$ is Boltzmann's constant, $T$ is the modal temperature (taken to be room temperature, $T = 292$ K), and $\omega_1$, $I_1$, and $Q_1$ are the angular resonance frequency, effective moment of inertia, and quality factor of the fundamental torsion mode, respectively. The calibration factor is determined by knowledge of $\omega_1$ and $Q_1$, both of which we measure, as well as $I_1$, which we determine using Eq. \ref{eq:momentOfInteria}. We only fit around resonance, so for simplicity we neglect the effects of structural damping and approximate the noise peak as a Lorentzian. When fitting, the only free parameter is the magnitude of the noise floor, $S_\theta^\t{imp}$. Because the resolution of the measured spectrum (0.1 Hz) is much larger than the linewidth of the mechanical oscillator (0.0005 Hz), \mbox{we mask the center of the noise peak when fitting.} \begin{figure}[t!] \label{fig:calibrationComparison} \centering \includegraphics[width=0.95\columnwidth]{fig_supplement/calibrationComparison_3.pdf} \caption{Comparison of calibration methods. Green curve: Direct calibration. Red curve: thermal noise calibration. Blue curve: thermal noise model without imprecision noise.} \end{figure} As a cross-check, in Fig.~\ref{fig:calibrationComparison} we present an independent calibration of the angular displacement spectrum by directly measuring the lateral position sensitivity of the split photodiode. The transduction factor from angular displacement to voltage $V$ at the output of the split photodetector is given by \begin{equation} \frac{\partial V}{\partial \theta} = \frac{\partial V}{\partial x}\frac{\partial x }{\partial \theta}=2z\frac{\partial V}{\partial x}, \end{equation} where $z$ is the distance from the beam to the photodetector (Fig. \ref{fig:OLtheory}) and $\partial V/\partial x$ is the sensitivity to lateral beam displacements. We measured this quantity by translating the splitting mirror with a micrometer and recording the resulting voltage change. As shown in Fig.~\ref{fig:calibrationComparison}, the two calibration methods agree to within a factor of 2 in amplitude spectral density units. Finally, we re-emphasize, as stated in the main text, that the imprecision relative to the zero point motion, $S_\t{imp}/S_\t{zp}$, is independent of calibration method, and depends only on the magnitude of the signal-to-noise ratio on resonance, viz. \begin{equation} \frac{S_\theta^\t{th}}{S_\theta^\t{imp}}=2n_\t{th}\frac{S_\theta^\t{zp}}{S_\theta^\t{imp}} \end{equation} where $n_\t{th} = k_B T/\hbar\omega_n$ is the thermal mode occupation. In our case $\omega_1 = 2\pi\cdot52.5$ kHz, $n_\t{th} = 1.2\times 10^8$, and $S_\t{th}/S_\t{imp} = 3.7\times 10^{10}$, from which we infer that $S_\t{zp}/S_\t{imp} = 140$. \subsection{Beam waist characterization} An important parameter for attaining the maximum angular displacement sensitivity is the beam waist size, which determines the diffraction-limited divergence angle of the beam. We characterize the beam waist by performing a knife-edge measurement at the focus. As a secondary check, we analyze the diffraction of the beam by performing knife edge measurements at different positions along the axis of propagation.
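A minimal sketch of such a knife-edge waist extraction (fitting the transmitted power to the error-function profile implied by the Gaussian intensity distribution above) is given below; the data here are synthetic and the true waist and power are assumed example values, not measured ones.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

rng = np.random.default_rng(0)
x = np.linspace(-600e-6, 600e-6, 61)          # knife-edge positions, m
P_true, w0_true = 1e-3, 200e-6                # example power and waist
data = (0.5 * P_true * (1 + erf(np.sqrt(2) * x / w0_true))
        + 5e-6 * rng.standard_normal(x.size))     # synthetic measurement

def knife_edge(x, P, w0, x0):
    """Power transmitted past a knife edge at position x for a Gaussian beam."""
    return 0.5 * P * (1 + erf(np.sqrt(2) * (x - x0) / w0))

(P_fit, w0_fit, x0_fit), _ = curve_fit(knife_edge, x, data, p0=(1e-3, 150e-6, 0.0))
print(w0_fit)    # ~2.0e-4 m, recovering the assumed waist
\end{verbatim}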
Because the beam waist sets the length scale for the resolution of our measurement versus distance, we can utilize a scan of the sensitivity versus length to infer the beam waist as well. Figure \ref{fig:lengthSweep} shows the measured sensitivity as a function of optical lever arm length, with the power and beam waist size held fixed. The data was collected with a different split detection setup than used in the main text (Thorlabs PDQ80A), which contributed larger extraneous noise but made alignment easier, and utilized a larger beam size ($\sim 350\;\mu$m). \begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{fig_supplement/lengthSweep_3.pdf} \caption{Imprecision versus detector-sample distance (``lever arm''). Black curve is a fit to Eq. \ref{eq:OLshotresolution} with efficiency $\eta$ and beam waist $w_0$ as free parameters, \textcolor{black}{yielding $w_0 = 413 \pm 216 \mu$m.}} \label{fig:lengthSweep} \end{figure} \begin{comment} \subsection{Photothermal heating} \textcolor{black}{Heating due to optical absorption can lead to a misestimate of the thermal noise calibration technique described in Sec. \ref{sec:Calbration}. To investigate this source of error, a radiation pressure drive was applied using a 650 nm laser field, and the ratio of the driven motion $S_{\theta}^{\textrm{mod}}$ and the thermal noise peak $S_{\theta}^{\textrm{th}}$ was monitored as a function of probe power. From this measurement we inferred a photothermal heating of less than $10\%$.} \end{comment} \subsection{Shot noise model validation} The noise floor in Fig. 2 of the main text is dominated by shot noise. To confirm this, as shown in Fig. \ref{fig:shotNoiseScaling}, \textcolor{black}{we recorded imprecision versus probe power $P$.} We then fit the data to a power law: \begin{equation}\label{eq:shotnoisefit} S_\theta^\t{imp} = \frac{\hbar c\lambda}{8 w_0^2 \eta} P^{b}. \end{equation} The fits yield $b = -1.12 \pm 0.09$ for a spot size of 200 $\mu$m, in good agreement with the expected scaling for shot noise ($b = -1$). \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{fig_supplement/shotNoiseScaling_2.pdf} \caption{Imprecision versus power for two waist sizes. Points are measurements. Solid lines are fits to Eq. \ref{eq:shotnoisefit}. Dashed lines are the ideal shot noise limit given by Eq. \ref{eq:OLshotresolution}. Black line is an estimate of the extraneous noise due to off-resonant thermal \mbox{motion (Sec. \ref{sec:otherModes}).}} \label{fig:shotNoiseScaling} \end{figure} \subsection{Common mode rejection of classical intensity noise} Our measurements highlight that the optical lever technique naturally suppresses classical laser intensity noise; viz., the shot-noise (1/$P$) scaling in Fig. \ref{fig:shotNoiseScaling} was observed at powers for which the classical intensity noise of our Ti-Sapphire laser, shown in Fig. \ref{fig:laserIntensityNoise}, overwhelmed shot noise by several orders of magnitude. Noise cancellation in excess of 30 dB was achieved by carefully translating the splitting mirror in Fig.~\ref{fig:OLMeasurementSetup}, in order to balance the intensities on the two photodiodes. \begin{figure}[h] \centering \includegraphics[width=0.95\columnwidth]{fig_supplement/laserIntensityNoise.pdf} \caption{Relative intensity noise of the Ti-Sapphire laser used for Fig.
2 of the main text, compared to shot noise for $P=0.01-4$ mW.} \label{fig:laserIntensityNoise} \end{figure} \subsection{Multi-mode thermal noise spectra} \label{sec:otherModes} We have found that our optical lever measurements are sensitive to a broad variety of flexural and torsional modes, limited by the size and location of the laser spot on the beam. Fig. \ref{fig:higherordermodes} shows a measurement with a slightly different alignment than in the main text. Owing to a slight misalignment, the lever is sensitive to both flexural and ``potato chip'' modes. Notably, the off-resonant thermal noise from the potato chip modes at 57.5 kHz and 69 kHz is roughly 10\% (in power units) of the total noise in the vicinity of the fundamental torsional mode, at 52.5 kHz, corresponding to an extraneous noise (cf. Eq. \ref{eq:shotnoisefit}) of $S_\theta^\t{(ext)}= 2.7\times10^{-22} \textrm{ rad}^2/\textrm{Hz}$. \begin{figure}[htbp] \centering \includegraphics[width=0.85\columnwidth]{fig_supplement/higherOrderModes_3.pdf} \caption{Broadband version of the optical lever measurement ($w = 400\;\mu$m, $w_0 = 200\;\mu\t{m}$, $P = 4$ mW), with a slightly different alignment. Thermal noise peaks of flexural (f), torsional (t), and potato-chip (p) modes are highlighted with corresponding modeshapes simulated in COMSOL.} \label{fig:higherordermodes} \end{figure} \newpage \section{Acceleration sensitivity of a microtorsion pendulum} \begin{figure}[b!] \includegraphics[width=0.9\columnwidth]{fig_supplement/microtorsionbalance.pdf} \caption{Schematic of torsion pendulum, formed by suspending a rectangular balance beam of mass $m$ from a ribbon-like torsion fiber under tension $T$. Angular displacement of the balance beam $\theta$ is measured using an optical lever, described elsewhere. The center of gravity (c.g.) of the balance beam is offset a distance $r$ from the rotation axis, resulting in a gravitational restoring torque, $\tau_g\approx mg r\theta $. } \label{fig:torsionbalance} \end{figure} In this section, we derive Eqs. 4-5 of the main text, describing the fundamental resonance frequency of a torsion balance with a tensile-stressed torsion fiber and the sensitivity of this frequency to local changes in the gravitational field strength. Figure \ref{fig:torsionbalance} shows the basic geometry of the torsion balance with its torsion fiber oriented perpendicular to the earth's local gravitational field and a rectangular balance beam of mass $m$ free to oscillate, with moment of inertia $I$, about the fiber axis $xx$. The local acceleration of gravity $g$ acts on the balance beam with force $m g$ through its center of gravity (c.g.), offset from the pivot by a distance $r$. Tilting the beam by a small angle $\theta$ produces a gravitational torque $\tau_g = mgr\sin(\theta)\approx m g r\theta$. We assume the balance beam is symmetric about the torsion axis, so that its equilibrium tilt angle is zero. The beam behaves in this case like a pendulum bob, executing simple harmonic motion (in the absence of damping or external torques) of the form \begin{equation}\label{eq:torsionbalance1} I\ddot{\theta}+\kappa\theta = 0, \end{equation} where \begin{equation}\label{eq:torsionstiffnessG} \kappa = \kappa_E + \kappa_\sigma \pm \kappa_g \end{equation} is the combined torsional stiffness due to elastic deformation $\kappa_E$, tensile stress $\kappa_\sigma$, and gravity \begin{equation} \kappa_g = \frac{d\tau_g}{d\theta}= m g r, \end{equation} respectively. The sign ($\pm$) of the gravitational term in Eq.
\ref{eq:torsionstiffnessG} depends on the orientation of the pendulum, and is negative when the pendulum is inverted (c.g. above pivot). Dividing Eq. \ref{eq:torsionbalance1} through by the moment of inertia, we find that the squared frequency of the oscillator is given by \begin{equation}\label{eq:torsionbalance2} \omega_\pm^2 = \frac{\kappa_E + \kappa_\sigma \pm \kappa_g}{I} \end{equation} which depends on gravity through the gravitational stiffness $\kappa_g$. It follows that the sensitivity of the oscillator frequency to gravity in the non-inverted (Fig. \ref{fig:torsionbalance}) orientation is \begin{equation}\label{eq:torsionbalancesensitivity} \frac{d\omega_+}{d g}\approx\frac{\omega_+^2-\omega_-^2}{4 \omega_+ g_0}, \end{equation} where $g_0 \approx 9.8\,\t{m}/\t{s}^2$ is the standard acceleration due to gravity on earth's surface. We now seek an expression for the smallest detectable change in gravity, $\Delta g_\t{min}$. In the main text, we define $\Delta g_\t{min}$ as the shift in gravity needed to shift the oscillator frequency by its full-width-at-half-maximum linewidth: \begin{equation} \Delta\omega_+ = \frac{\omega_+}{Q_+}, \end{equation} where $Q_+$ ($Q_-$) is the $Q$ factor of the oscillator in its non-inverted (inverted) configuration, ideally given by \begin{equation}\label{eq:torsionbalanceQ} Q_\pm = Q_0\left(1+\frac{\kappa_\sigma\pm \kappa_g}{\kappa_E}\right) = Q_\mp\left(\frac{\omega_\mp}{\omega_\pm}\right)^2 \end{equation} assuming that the gravitational stiffness is lossless (Eq. S1). Thus we obtain Eq. 4 in the main text: \begin{equation}\label{eq:gsensitivity} \Delta g_\t{min} \equiv \left(\frac{d\omega_+}{d g}\right)^{-1}\Delta\omega_+ = \frac{4 g_0}{Q_+}\frac{\omega_+^2}{\omega_+^2-\omega_-^2} = \frac{2 g_0}{Q_0}\frac{\kappa_E}{\kappa_g} \end{equation} where the latter equality assumes ideal dissipation dilution given by Eq. \ref{eq:torsionbalanceQ}. Notably, $\Delta g_\t{min}$ is independent of the tensile stiffness $\kappa_\sigma$. This can be understood by observing that tension decreases the sensitivity of the oscillator (Eq. \ref{eq:torsionbalancesensitivity}) in the same proportion that it increases its $Q$ factor (Eq. \ref{eq:torsionbalanceQ}). \subsection{Allan deviation measurement} As mentioned in the main text, we carried out a long-term measurement to assess the frequency stability of the 35 Hz micro-torsion pendula in Fig. 4. The measurement is shown in Fig. \ref{fig:allandeviation}. Here we tracked the free-running resonance frequency of the non-inverted pendulum $f_+(t)=\omega_+(t)/(2\pi)$ by Fourier transforming a weak ($P\approx 5\mu$W) optical lever measurement over the course of a night. The fractional Allan deviation of the time trace reaches a minimum value of $\sigma_{\delta f_+/f_+}\approx 2\times 10^{-6}$ at 600 seconds, corresponding to a fractional gravitational acceleration uncertainty of $\sigma_{\delta g/g_0}\approx 8\times 10^{-6}$ according to Eq. \ref{eq:gsensitivity}. We note that for this measurement, the temperature of the room was not stabilized, which may account for the observed $\sim \t{mHz}/\t{hr}$ frequency drift. \begin{figure}[t!] \includegraphics[width=0.8\columnwidth]{fig_supplement/AllanDeviation.pdf} \caption{Frequency stability of microtorsion pendulum in Fig. 4 of main text. Above: frequency of pendulum (relative to nominal start value of $f_0 \approx 35$ Hz) as a function of time. Below: Allan deviation of the frequency-versus-time measurement, normalized to $f_0$.} \label{fig:allandeviation} \end{figure} \newpage \bibliographystyle{apsrev4-1}
1,314,259,992,708
arxiv
\section{Introduction}\label{sec:introduction} While random network coding \cite{HO_IT06} has proved to be a powerful tool for disseminating information in networks, it is highly susceptible to errors. Thus, error control for random network coding is critical and has received growing attention recently. Error control schemes proposed for random network coding assume two types of transmission models: some (see, e.g., \cite{yeung_cis06, cai_cis06}) depend on the underlying network topology or the particular linear network coding operations performed at various network nodes; others \cite{koetter_arxiv07, silva_arxiv07} assume that the transmitter and receiver have no knowledge of such channel transfer characteristics. The contrast is similar to that between coherent and noncoherent communication systems. Error control for noncoherent random network coding was first considered in \cite{koetter_arxiv07}. Motivated by the property that random network coding is vector-space preserving, \cite{koetter_arxiv07} defines an operator channel that captures the essence of the noncoherent transmission model. Hence, codes defined in finite field Grassmannians \cite{chihara_siam87}, referred to as constant-dimension codes, play a significant role in error control for noncoherent random network coding. In \cite{koetter_arxiv07}, a Singleton bound for constant-dimension codes and a family of codes that are nearly Singleton-bound achieving are proposed. Despite the asymptotic optimality of the Singleton bound and the codes designed in \cite{koetter_arxiv07}, the maximal cardinality of a constant-dimension code with finite dimension and minimum distance remains unknown, and it is not clear how an optimal code that achieves the maximal cardinality can be constructed. It is difficult to answer the above questions based on constant-dimension codes directly since the set of all subspaces of the ambient space lacks a natural group structure \cite{silva_arxiv07}. The class of nearly Singleton-bound achieving constant-dimension codes in \cite{koetter_arxiv07} is related to rank metric codes. The relevance of rank metric codes to noncoherent random network coding is further established in \cite{silva_arxiv07}. In addition to network coding, rank metric codes \cite{delsarte_jct78, gabidulin_pit0185, roth_it91} have been receiving steady attention in the literature due to their applications in storage systems \cite{roth_it91}, public-key cryptosystems \cite{gabidulin_lncs91}, and space-time coding \cite{lusina_it03}. The pioneering works in \cite{delsarte_jct78, gabidulin_pit0185, roth_it91} have established many important properties of rank metric codes. Independently in \cite{delsarte_jct78, gabidulin_pit0185, roth_it91}, a Singleton bound (up to some variations) on the minimum rank distance of codes was established, and a class of codes that achieve the bound with equality was constructed. We refer to codes that attain the Singleton bound as maximum rank distance (MRD) codes, and the class of MRD codes proposed in \cite{gabidulin_pit0185} as Gabidulin codes henceforth. In this paper, we investigate the properties of constant-rank codes, which are the counterparts in rank metric codes of constant (Hamming) weight codes \cite{agrell_it00}. We first introduce a relation between vectors in $\mathrm{GF}(q^m)^n$ and subspaces of $\mathrm{GF}(q)^m$ or $\mathrm{GF}(q)^n$, and use it to establish a relation between constant-rank codes and constant-dimension codes.
We also derive a lower bound on the maximum cardinality of constant-rank codes, which depends on the maximum cardinality of constant-dimension codes. We then derive bounds on the maximum cardinality of constant-rank codes with given rank and minimum rank distance. Finally, we characterize the asymptotic behavior of the maximal cardinality of constant-rank codes with given rank and minimum rank distance, and compare it with the asymptotic behavior of the maximal cardinality of constant-dimension codes. The rest of the paper is organized as follows. Section~\ref{sec:preliminaries} briefly reviews some important concepts in order to keep this paper self-contained. In Section~\ref{sec:prel_results}, we establish a relation between constant-dimension and constant-rank codes. In Section~\ref{sec:bounds}, we derive bounds on the maximum cardinality of constant-rank codes with a given minimum rank distance. Finally, Section~\ref{sec:asymptotics} investigates the asymptotic behavior of the maximum cardinality of constant-rank codes. \section{Preliminaries}\label{sec:preliminaries} \subsection{Rank metric codes and elementary linear subspaces}\label{sec:rank_metric} Consider a vector ${\bf x}$ of length $n$ over $\mathrm{GF}(q^m)$. The field $\mathrm{GF}(q^m)$ may be viewed as an $m$-dimensional vector space over $\mathrm{GF}(q)$. The rank weight of ${\bf x}$, denoted as $\mathrm{rk}({\bf x})$, is defined to be the \emph{maximum} number of coordinates of ${\bf x}$ that are linearly independent over $\mathrm{GF}(q)$ \cite{gabidulin_pit0185}. For any basis $B_m$ of $\mathrm{GF}(q^m)$ over $\mathrm{GF}(q)$, each coordinate of ${\bf x}$ can be expanded to an $m$-dimensional column vector over $\mathrm{GF}(q)$ with respect to $B_m$. The rank weight of ${\bf x}$ is hence the rank of the $m\times n$ matrix over $\mathrm{GF}(q)$ obtained by expanding all the coordinates of ${\bf x}$. For all ${\bf x}, {\bf y}\in \mathrm{GF}(q^m)^n$, it is easily verified that $d_{\mbox{\tiny{R}}}({\bf x},{\bf y})\stackrel{\mbox{\scriptsize def}}{=} \mathrm{rk}({\bf x} - {\bf y})$ is a metric over $\mathrm{GF}(q^m)^n$, referred to as the \emph{rank metric} henceforth \cite{gabidulin_pit0185}. The {\em minimum rank distance} of a code $C$, denoted as $d_{\mbox{\tiny{R}}}$, is simply the minimum rank distance over all possible pairs of distinct codewords. It is shown in \cite{delsarte_jct78, gabidulin_pit0185, roth_it91} that the minimum rank distance of a block code of length $n$ and cardinality $M$ over $\mathrm{GF}(q^m)$ satisfies $d_{\mbox{\tiny{R}}} \leq n-\log_{q^m}M+1.$ In this paper, we refer to this bound as the Singleton bound for rank metric codes and codes that attain the equality as maximum rank distance (MRD) codes. We refer to the subclass of linear MRD codes introduced independently in \cite{delsarte_jct78, gabidulin_pit0185, roth_it91} as Gabidulin codes. We denote the number of vectors of rank $r$ ($0 \leq r \leq \min\{m,n\}$) in $\mathrm{GF}(q^m)^n$ as $N_r(q^m,n) = {n \brack r} \alpha(m,r)$ \cite{gabidulin_pit0185}, where $\alpha(m,0) \stackrel{\mbox{\scriptsize def}}{=} 1$ and $\alpha(m,r) \stackrel{\mbox{\scriptsize def}}{=} \prod_{i=0}^{r-1}(q^m-q^i)$ for $r \geq 1$. The ${n \brack r}$ term is often referred to as a Gaussian polynomial~\cite{andrews_book76}, defined as ${n \brack r} \stackrel{\mbox{\scriptsize def}}{=} \alpha(n,r)/\alpha(r,r)$. The volume of a ball with rank radius $r$ in $\mathrm{GF}(q^m)^n$ is denoted as $V_r(q^m,n) = \sum_{i=0}^r N_i(q^m,n)$.
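As a small illustration of these counting formulas, for $n=2$ and $r=1$ we have ${2 \brack 1} = q+1$ and hence $N_1(q^m,2) = (q+1)(q^m-1)$; taking $q=m=2$, this counts the $9$ vectors of rank $1$ in $\mathrm{GF}(4)^2$, namely the nonzero vectors both of whose coordinates lie in a single one-dimensional $\mathrm{GF}(2)$-subspace of $\mathrm{GF}(4)$.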
For all $q$, $1 \leq d \leq r \leq n \leq m$, the number of codewords of rank $r$ in an $(n, n-d+1, d)$ linear MRD code over $\mathrm{GF}(q^m)$ is given by \cite{gabidulin_pit0185} \begin{equation} \label{eq:Mdr_def} M_{d,r} \stackrel{\mbox{\scriptsize def}}{=} {n \brack r} \sum_{j=d}^r (-1)^{r-j} {r \brack j} q^{{r-j \choose 2}} \left( q^{m(j-d+1)} - 1\right). \end{equation} An {\em elementary linear subspace} (ELS) \cite{gadouleau_it06} is defined to be a linear subspace $\mathcal{V} \subseteq \mathrm{GF}(q^m)^n$ for which there exists a basis of vectors in $\mathrm{GF}(q)^n$. We denote the set of all ELS{}'s of $\mathrm{GF}(q^m)^n$ with dimension $v$ as $E_v(q^m,n)$. It can be easily shown that $|E_v(q^m,n)| = {n \brack v}$ for all $m$. An ELS has properties similar to those for a set of coordinates \cite{gadouleau_it06}. In particular, any vector belonging to an ELS{} with dimension $r$ has rank no more than $r$; conversely, any vector ${\bf x} \in \mathrm{GF}(q^m)^n$ with rank $r$ belongs to a unique ELS{} in $E_r(q^m,n)$. \subsection{Constant-dimension codes} A {\em constant-dimension code} \cite{koetter_arxiv07} of length $n$ and constant-dimension $r$ over $\mathrm{GF}(q)$ is defined to be a nonempty subset of $E_r(q,n)$. For all $\mathcal{U}, \mathcal{V} \in E_r(q,n)$, it is easily verified that \begin{equation}\label{eq:ds} d_{\mbox{\tiny{S}}}(\mathcal{U}, \mathcal{V}) \stackrel{\mbox{\scriptsize def}}{=} \dim(\mathcal{U} + \mathcal{V}) - \dim(\mathcal{U} \cap \mathcal{V}) = 2\dim(\mathcal{U} + \mathcal{V}) - 2r \end{equation} is a metric over $E_r(q,n)$, referred to as the {\em subspace metric} henceforth \cite{koetter_arxiv07}. The subspace distance between $\mathcal{U}$ and $\mathcal{V}$ thus satisfies $d_{\mbox{\tiny{S}}}(\mathcal{U}, \mathcal{V}) = 2\mathrm{rk}({\bf X}^T \,|\, {\bf Y}^T) - 2r$, where ${\bf X}$ and ${\bf Y}$ are generator matrices of $\mathcal{U}$ and $\mathcal{V}$, respectively. The {\em minimum subspace distance} of a constant-dimension code $\Omega \subseteq E_r(q,n)$, denoted as $d_{\mbox{\tiny{S}}}$, is the minimum subspace distance over all possible pairs of distinct subspaces. We say $\Omega$ is an $(n,d_{\mbox{\tiny{S}}},r)$ constant-dimension code over $\mathrm{GF}(q)$ and we denote the maximum cardinality of an $(n,2d,r)$ constant-dimension code over $\mathrm{GF}(q)$ as $A_{\mbox{\tiny{S}}}(q,n,2d,r)$. Since $A_{\mbox{\tiny{S}}}(q,n,2d,r) = A_{\mbox{\tiny{S}}}(q,n,2d,n-r)$ \cite{xia_arxiv07}, only the case where $2r \leq n$ needs to be considered. Also, since $A_{\mbox{\tiny{S}}}(q,n,2,r) = {n \brack r}$ and $A_{\mbox{\tiny{S}}}(q,n,2d,r) = 1$ for $d>r$, we shall assume $2 \leq d \leq r$ henceforth. Upper and lower bounds on $A_{\mbox{\tiny{S}}}(q,n,2d,r)$ were derived in \cite{wang_it03, koetter_arxiv07, xia_arxiv07}. In particular, for all $q$, $2r \leq n$, and $2 \leq d \leq r$, \begin{equation}\label{eq:bounds_As} q^{(n-r)(r-d+1)} \leq A_{\mbox{\tiny{S}}}(q,n,2d,r) \leq \frac{\alpha(n,r-d+1)}{\alpha(r,r-d+1)}. \end{equation} \subsection{Preliminary graph-theoretic results}\label{sec:graph_theory} We review some results in graph theory given in \cite{el_rouayheb_isit07}. Two adjacent vertices $u,v$ in a graph are denoted as $u \sim v$. \begin{definition}\label{def:homo} Let $G$ and $H$ be two graphs. A mapping $f$ from $V(G)$ to $V(H)$ is a homomorphism if for all $u,v \in V(G)$, $u \sim v \Rightarrow f(u) \sim f(v)$. \end{definition} \begin{definition}\label{def:auto} Let $G$ be a graph and $\phi$ a bijection from $V(G)$ to itself. 
$\phi$ is called an automorphism of $G$ if for all $u,v \in V(G)$, $u \sim v \Leftrightarrow \phi(u) \sim \phi(v)$. \end{definition} \begin{definition}\label{def:vertex_transitive} We say that the graph $G$ is vertex transitive if for all $u,v \in V(G)$, there exists an automorphism $\phi$ of $G$ such that $\phi(u) = v$. \end{definition} An {\em independent set} of a graph $G$ is a subset of $V(G)$ with no adjacent vertices. The independence number $\alpha(G)$ of $G$ is the maximum cardinality of an independent set of $G$. If $H$ is a vertex transitive graph and if there is a homomorphism from $G$ to $H$, then \cite{el_rouayheb_isit07} \begin{equation}\label{eq:alpha_G_H} \alpha(G) \geq \alpha(H) \frac{|G|}{|H|}. \end{equation} \section{Constant-Rank and Constant-Dimension Codes}\label{sec:prel_results} \subsection{Definitions and technical results} \begin{definition}\label{def:constant-rank} A constant-rank code of length $n$ and constant-rank $r$ over $\mathrm{GF}(q^m)$ is a nonempty subset of $\mathrm{GF}(q^m)^n$ such that all elements have rank weight $r$. \end{definition} We denote a constant-rank code with length $n$, minimum rank distance $d$, and constant-rank $r$ as an $(n,d,r)$ constant-rank code over $\mathrm{GF}(q^m)$. We define the term $A_{\mbox{\tiny{R}}}(q^m,n,d,r)$ to be the maximum cardinality of an $(n,d,r)$ constant-rank code over $\mathrm{GF}(q^m)$. If $C$ is an $(n,d,r)$ constant-rank code over $\mathrm{GF}(q^m)$, then the code obtained by transposing all the expansion matrices of codewords in $C$ forms an $(m,d,r)$ constant-rank code over $\mathrm{GF}(q^n)$ with the same cardinality. Therefore $A_{\mbox{\tiny{R}}}(q^m,n,d,r) = A_{\mbox{\tiny{R}}}(q^n,m,d,r)$, and henceforth we assume $n \leq m$ without loss of generality. We now define two families of graphs which are instrumental in our analysis of constant-rank codes. \begin{definition}\label{def:R_q} The {\em bilinear forms graph} $R_q(m,n,d)$ has as vertices all the vectors in $\mathrm{GF}(q^m)^n$ and two vertices ${\bf x}$ and ${\bf y}$ are adjacent if and only if $d_{\mbox{\tiny{R}}}({\bf x}, {\bf y}) < d$. The {\em constant-rank graph} $K_q(m,n,d,r)$ is the subgraph of $R_q(m,n,d)$ induced by the vectors in $\mathrm{GF}(q^m)^n$ with rank $r$. \end{definition} The orders of the bilinear forms and constant-rank graphs are thus given by $|R_q(m,n,d)| = q^{mn}$ and $|K_q(m,n,d,r)| = N_r(q^m,n)$. An independent set of $R_q(m,n,d)$ corresponds to a code with minimum rank distance $\geq d$. Due to the existence of MRD codes for all parameter values, we have $\alpha(R_q(m,n,d)) = q^{m(n-d+1)}$. Similarly, an independent set of $K_q(m,n,d,r)$ corresponds to a constant-rank code with minimum rank distance $\geq d$, and hence $\alpha(K_q(m,n,d,r)) = A_{\mbox{\tiny{R}}}(q^m,n,d,r)$. \begin{lemma}\label{lemma:vertex_transitive} The bilinear forms graph $R_q(m,n,d)$ is vertex transitive for all $q$, $m$, $n$, and $d$. The constant-rank graph $K_q(m,m,d,m)$ is vertex transitive for all $q$, $m$, and $d$. \end{lemma} \begin{proof} Let ${\bf u}, {\bf v} \in \mathrm{GF}(q^m)^n$. For all ${\bf x} \in \mathrm{GF}(q^m)^n$, define $\phi({\bf x}) = {\bf x} + {\bf v} - {\bf u}$. It is easily shown that $\phi$ is a graph automorphism of $R_q(m,n,d)$ satisfying $\phi({\bf u}) = {\bf v}$. By Definition~\ref{def:vertex_transitive}, $R_q(m,n,d)$ is hence vertex transitive.
Let ${\bf u}, {\bf v} \in \mathrm{GF}(q^m)^m$ have rank $m$, and denote their expansions with respect to a basis $B_m$ of $\mathrm{GF}(q^m)$ over $\mathrm{GF}(q)$ as ${\bf U}$ and ${\bf V}$, respectively. For all ${\bf x} \in \mathrm{GF}(q^m)^m$ with rank $m$, define $\phi({\bf x}) = {\bf y}$ such that ${\bf Y} = {\bf X}{\bf U}^{-1} {\bf V}$, where ${\bf X}, {\bf Y}$ are the expansions of ${\bf x}$ and ${\bf y}$ with respect to $B_m$, respectively. We have $\phi({\bf u}) = {\bf v}$, $\mathrm{rk}(\phi({\bf x})) = m$, and for all ${\bf x}, {\bf z} \in \mathrm{GF}(q^m)^m$, $d_{\mbox{\tiny{R}}}(\phi({\bf x}), \phi({\bf z})) = \mathrm{rk}({\bf X}{\bf U}^{-1} {\bf V} - {\bf Z}{\bf U}^{-1} {\bf V}) = \mathrm{rk}({\bf X} - {\bf Z}) = d_{\mbox{\tiny{R}}}({\bf x}, {\bf z})$. By Definition~\ref{def:auto}, $\phi$ is an automorphism which takes ${\bf u}$ to ${\bf v}$ and hence $K_q(m,m,d,m)$ is vertex transitive. \end{proof} It is worth noting that $K_q(m,n,d,r)$ is not vertex transitive in general. \subsection{Constant-dimension and constant-rank codes}\label{sec:dimension_v_rank} In \cite{koetter_arxiv07}, constant-dimension codes were constructed from rank distance codes as follows. Let $C$ be a code with length $n$ over $\mathrm{GF}(q^m)$. For any ${\bf c} \in C$, consider its expansion ${\bf C}$ with respect to the basis $B_m$ of $\mathrm{GF}(q^m)$ over $\mathrm{GF}(q)$, and construct $I({\bf C}) = ({\bf I}_m \,|\, {\bf C}) \in \mathrm{GF}(q)^{m \times (m+n)}$. Then $I(C) \stackrel{\mbox{\scriptsize def}}{=} \{ I({\bf C}) | {\bf c} \in C \}$ is a constant-dimension code in $E_m(q,m+n)$. This relation between rank codes and constant-dimension codes was also noted in graph-theoretic terms in \cite{brouwer_book89}. We introduce a relation between vectors in $\mathrm{GF}(q^m)^n$ and subspaces of $\mathrm{GF}(q)^m$ or $\mathrm{GF}(q)^n$. For any ${\bf x} \in \mathrm{GF}(q^m)^n$ with rank $r$, consider the matrix ${\bf X} \in \mathrm{GF}(q)^{m \times n}$ obtained by expanding all the coordinates of ${\bf x}$ with respect to a basis $B_m$ of $\mathrm{GF}(q^m)$ over $\mathrm{GF}(q)$. The column span of ${\bf X}$, denoted as $\mathfrak{S}({\bf x})$, is an $r$-dimensional subspace of $\mathrm{GF}(q)^m$, which corresponds to the subspace of $\mathrm{GF}(q^m)$ spanned by the coordinates of ${\bf x}$. The row span of ${\bf X}$, denoted as $\mathfrak{T}({\bf x})$, is an $r$-dimensional subspace of $\mathrm{GF}(q)^n$, which corresponds to the unique ELS $\mathcal{V} \in E_r(q^m,n)$ such that ${\bf x} \in \mathcal{V}$. \begin{lemma}\label{lemma:S_T} For all $\mathcal{S} \in E_r(q,m)$ and $\mathcal{T} \in E_r(q,n)$, there exists ${\bf x} \in \mathrm{GF}(q^m)^n$ with rank $r$ such that $\mathfrak{S}({\bf x}) = \mathcal{S}$ and $\mathfrak{T}({\bf x}) = \mathcal{T}$. \end{lemma} \begin{proof} Consider the generator matrices ${\bf G} \in \mathrm{GF}(q)^{r \times m}$ and ${\bf H} \in \mathrm{GF}(q)^{r \times n}$ of $\mathcal{S}$ and $\mathcal{T}$, respectively. Let ${\bf X} = {\bf G}^T {\bf H}$ and ${\bf x}$ be the vector whose expansion with respect to $B_m$ is given by ${\bf X}$. Then $\mathfrak{S}({\bf x}) = \mathcal{S}$ and $\mathfrak{T}({\bf x}) = \mathcal{T}$. \end{proof} By Lemma~\ref{lemma:S_T}, the functions $\mathfrak{S}$ and $\mathfrak{T}$ are surjective. They are not injective, however.
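For instance, for any $\lambda \in \mathrm{GF}(q)^*$ the expansion matrix of $\lambda {\bf x}$ is $\lambda {\bf X}$, whose row and column spans coincide with those of ${\bf X}$; thus $\mathfrak{S}(\lambda {\bf x}) = \mathfrak{S}({\bf x})$ and $\mathfrak{T}(\lambda {\bf x}) = \mathfrak{T}({\bf x})$, although $\lambda {\bf x} \neq {\bf x}$ whenever $\lambda \neq 1$ and ${\bf x} \neq {\bf 0}$.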
For all $\mathcal{V} \in E_r(q^m,n)$, there exist exactly $\alpha(m,r)$ vectors ${\bf x} \in \mathcal{V}$ with rank $r$ \cite{gadouleau_it06}, hence for all $\mathcal{T} \in E_r(q,n)$ there exist exactly $\alpha(m,r)$ vectors ${\bf x}$ such that $\mathfrak{T}({\bf x}) = \mathcal{T}$. By transposition, it follows that there exist exactly $\alpha(n,r)$ vectors ${\bf x}$ such that $\mathfrak{S}({\bf x}) = \mathcal{S}$ for all $\mathcal{S} \in E_r(q,m)$. For any $C \subseteq \mathrm{GF}(q^m)^n$, define $\mathfrak{S}(C) \stackrel{\mbox{\scriptsize def}}{=} \{ \mathfrak{S}({\bf c})| {\bf c} \in C \}$ and $\mathfrak{T}(C) \stackrel{\mbox{\scriptsize def}}{=} \{ \mathfrak{T}({\bf c})| {\bf c} \in C \}$. We obtain the following lemma. \begin{lemma}\label{lemma:|S(C)|} For all $C \subseteq \mathrm{GF}(q^m)^n$, we have $|\mathfrak{S}(C)| \leq |C| \leq \alpha(n,r)|\mathfrak{S}(C)|$ and $|\mathfrak{T}(C)| \leq |C| \leq \alpha(m,r)|\mathfrak{T}(C)|.$ \end{lemma} \begin{proposition}\label{prop:subspace_v_rank} For any constant-dimension code $\Gamma \subseteq E_r(q,m)$, there exists a constant-rank code $C$ with length $n$ and constant-rank $r$ over $\mathrm{GF}(q^m)$ such that $r \leq n \leq m$ and $\mathfrak{S}(C) = \Gamma$. The cardinality of $C$ satisfies $|\Gamma| \leq |C| \leq \alpha(n,r)|\Gamma|$. On the other hand, for any constant-dimension code $\Delta \subseteq E_r(q,n)$, there exists a constant-rank code $D$ with length $n$ and constant-rank $r$ over $\mathrm{GF}(q^m)$ such that $r \leq n \leq m$ and $\mathfrak{T}(D) = \Delta$. The cardinality of $D$ satisfies $|\Delta| \leq |D| \leq \alpha(m,r)|\Delta|$. \end{proposition} \begin{proof} By Lemma~\ref{lemma:S_T}, for any $\mathcal{U} \in \Gamma$ there exists ${\bf c}_\mathcal{U} \in \mathrm{GF}(q^m)^n$ with rank $r$ such that $\mathfrak{S}({\bf c}_\mathcal{U}) = \mathcal{U}$. Therefore, the code $C = \{{\bf c}_\mathcal{U}| \mathcal{U} \in \Gamma\}$ satisfies $\mathfrak{S}(C) = \Gamma$. $C$ is a constant-rank code with length $n$ and constant-rank $r$ over $\mathrm{GF}(q^m)$, and by Lemma~\ref{lemma:|S(C)|}, $|C|$ satisfies $|\Gamma| \leq |C| \leq \alpha(n,r)|\Gamma|$. The proof for $\Delta \subseteq E_r(q,n)$ is similar and hence omitted. \end{proof} Proposition~\ref{prop:subspace_v_rank} shows that constant-dimension codes can be viewed as a special class of constant-rank codes. Although the rank metric is not directly related to the subspace metric in general, the maximal cardinalities of constant-dimension codes and constant-rank codes are related. \begin{proposition}\label{prop:A>As} For all $q$ and $1 \leq r <d \leq n \leq m$, \begin{equation}\label{eq:A>As} A_{\mbox{\tiny{R}}}(q^m,n,d,r) \geq \min \{ A_{\mbox{\tiny{S}}}(q,n,2(d-r),r), A_{\mbox{\tiny{S}}}(q,m,2r,r)\}. \end{equation} \end{proposition} \begin{proof} Let $\Gamma$ be an optimal $(m,2r,r)$ constant-dimension code over $\mathrm{GF}(q)$ and $\Delta$ be an optimal $(n,2d,r)$ constant-dimension code over $\mathrm{GF}(q)$. Denote their cardinalities as $\mu = A_{\mbox{\tiny{S}}}(q,m,2r,r)$ and $\nu = A_{\mbox{\tiny{S}}}(q,n,2d,r)$ and the generator matrices of their component subspaces as $\{{\bf X}_i\}_{i=0}^{\mu-1}$ and $\{ {\bf Y}_j \}_{j=0}^{\nu-1}$, respectively. By~(\ref{eq:ds}), for all $0 \leq i < j \leq \nu-1$, $2\mathrm{rk}({\bf Y}_i^T \,|\, {\bf Y}_j^T) - 2r \geq 2d$, and hence $\mathrm{rk}({\bf Y}_i^T \,|\, {\bf Y}_j^T) \geq d+r$. 
For all $0 \leq i \leq \mu-1$, define ${\bf b}_i = (\beta_{i,0}, \beta_{i,1}, \ldots, \beta_{i,r-1}) \in \mathrm{GF}(q^m)^r$ such that the expansion of $\beta_{i,l}$ with respect to a basis $B_m$ of $\mathrm{GF}(q^m)$ is given by the $l$-th row of ${\bf X}_i$. For all $0 \leq i < j \leq \nu-1$, the matrix $({\bf X}_i^T \,|\, {\bf X}_j^T)$ has full rank by~(\ref{eq:ds}) and hence the elements $\{\beta_{i,0}, \ldots, \beta_{i,r-1}, \beta_{j,0}, \ldots, \beta_{j,r-1} \}$ are linearly independent. We thus define the basis $\gamma_{i,j} = \{\beta_{i,0}, \ldots, \beta_{i,r-1}, \beta_{j,0}, \ldots, \beta_{j,r-1}, \gamma_{2r}, \ldots, \gamma_{m-1}\}$ of $\mathrm{GF}(q^m)$ over $\mathrm{GF}(q)$. We define the code $C \subseteq \mathrm{GF}(q^m)^n$ such that ${\bf c}_i = {\bf b}_i {\bf Y}_i^T$ for $0 \leq i \leq \min\{\mu,\nu\}-1$. Expanding ${\bf c}_i$ and ${\bf c}_j$ with respect to the basis $\gamma_{i,j}$, we obtain $\mathrm{rk}({\bf c}_i) = \mathrm{rk} \left( {\bf Y}_i^T \,|\, {\bf 0} \right) = r$ and $d_{\mbox{\tiny{R}}}({\bf c}_i, {\bf c}_j) =\mathrm{rk} \left({\bf Y}_i^T\,|\, -{\bf Y}_j^T \,|\, {\bf 0} \right) = \mathrm{rk}({\bf Y}_i^T \,|\, {\bf Y}_j^T) \geq d+r.$ Therefore, $C$ is an $(n,d+r,r)$ constant-rank code over $\mathrm{GF}(q^m)$ with cardinality $\min\{\mu,\nu\}$. \end{proof} \begin{corollary}\label{cor:A>As} For all $q$ and $m$, \begin{eqnarray} \label{eq:As_2r} A_{\mbox{\tiny{R}}}(q^m,n,2r,r) &\geq& A_{\mbox{\tiny{S}}}(q,n,2r,r) \quad \mbox{for }\, n \leq m\\ \label{eq:As_2(d-r)} A_{\mbox{\tiny{R}}}(q^m,m,d,r) &\geq& A_{\mbox{\tiny{S}}}(q,m,2r,r) \quad \mbox{for }\, r <d. \end{eqnarray} \end{corollary} Therefore, a lower bound on $A_{\mbox{\tiny{S}}}$ is also a lower bound on $A_{\mbox{\tiny{R}}}$ for $r < d$. We may use the lower bound on $A_{\mbox{\tiny{S}}}$ in~(\ref{eq:bounds_As}). \section{Bounds on constant-rank codes}\label{sec:bounds} We derive bounds on the maximum cardinality of constant-rank codes. We first observe that $A_{\mbox{\tiny{R}}}(q^m,n,d,r)$ is a non-decreasing function of $m$ and $n$, and a non-increasing function of $d$. We also remark that the bounds on $A_{\mbox{\tiny{R}}}(q^m,n,d,r)$ derived in Section~\ref{sec:dimension_v_rank} for $2r \leq n$ can be easily adapted for $2r > n$ by applying them to $n-r$ instead. Finally, since $A_{\mbox{\tiny{R}}}(q^m,n,1,r) = N_r(q^m,n)$ and $A_{\mbox{\tiny{R}}}(q^m,n,d,r) = 1$ for $d > 2r$, we shall assume $2 \leq d \leq 2r$ henceforth. By considering the Singleton bound for rank metric codes or MRD codes, we obtain a lower bound and some upper bounds on $A_{\mbox{\tiny{R}}}(q^m,n,d,r)$. \begin{proposition}\label{prop:A_MRD} For all $q$ and $1 \leq r,d \leq n \leq m$, \begin{eqnarray} \label{eq:A_lower_MRD} A_{\mbox{\tiny{R}}}(q^m,n,d,r) &\geq& M_{d,r} \quad \mbox{for }\, r \geq d\\ \label{eq:A_upper_MRD2} A_{\mbox{\tiny{R}}}(q^m,n,d,r) &\leq& q^{m(n-d+1)} - \sum_{j \in J_a} A_{\mbox{\tiny{R}}}(q^m,n,d,j)\\ \label{eq:A_upper_MRD1} A_{\mbox{\tiny{R}}}(q^m,n,d,r) &\leq& q^{m(n-d+1)} - \sum_{i \in I_r} M_{d,i}\\ \label{eq:A_upper_MRD} A_{\mbox{\tiny{R}}}(q^m,n,d,r) &\leq& q^{m(n-d+1)} - 1 \quad \mbox{for }\, r \geq d, \end{eqnarray} where $I_r \stackrel{\mbox{\scriptsize def}}{=} \{ i \,:\, 0 \leq i \leq n, |i-r| \geq d \}$ and $J_a \stackrel{\mbox{\scriptsize def}}{=} I_r \cap \{a + kd \,:\, k \in \mathbb{Z}\}$ for $0 \leq a < d$. \end{proposition} \begin{proof} The codewords of rank $r$ in an $(n, n-d+1,d)$ linear MRD code over $\mathrm{GF}(q^m)$ form an $(n,d,r)$ constant-rank code. 
Thus, $A_{\mbox{\tiny{R}}}(q^m,n,d,r) \geq M_{d,r}$ for $r \geq d$. Let $C$ be an $(n,n-d+1,d)$ linear MRD code over $\mathrm{GF}(q^m)$, and denote its codewords with ranks belonging to $I_r$ as $C'$. For $0 \leq j \leq n$, let $C_j$ be optimal $(n,d,j)$ constant-rank codes and define $C'' \stackrel{\mbox{\scriptsize def}}{=} \bigcup_{j \in J_a} C_j$. The Singleton bound on the codes $C_r \cup C'$ and $C_r \cup C''$ yields~(\ref{eq:A_upper_MRD1}) and~(\ref{eq:A_upper_MRD2}), respectively. Finally, the Singleton bound on $C \cup \{0\}$, where $C$ is an $(n,d,r)$ ($r \geq d$) constant-rank code over $\mathrm{GF}(q^m)$, yields~(\ref{eq:A_upper_MRD}). \end{proof} \begin{proposition}\label{prop:A_bounds} For all $q$ and $1 \leq r,d \leq n \leq m$, \begin{eqnarray} \label{eq:bassalygo} A_{\mbox{\tiny{R}}}(q^m,n,d,r) &\geq& N_r(q^m,n) q^{m(-d+1)}\\ \nonumber A_{\mbox{\tiny{R}}}(q^m,m,d,m) &\leq& A_{\mbox{\tiny{R}}}(q^{m-1},m-1,d,m-1)\\ \label{eq:johnson_m-1} &\cdot& q^{m-1}(q^m-1) \quad \mbox{for }\, d<m\\ \nonumber A_{\mbox{\tiny{R}}}(q^m,n,d,r) &\leq& A_{\mbox{\tiny{R}}}(q^m,n-1,d,r)\\ \label{eq:johnson_r} &\cdot& \frac{q^n-1}{q^{n-r}-1} \quad \mbox{for }\, r<n. \end{eqnarray} \end{proposition} \begin{proof} Since $K_q(m,n,d,r)$ is a subgraph of $R_q(m,n,d)$, the inclusion map is a trivial homomorphism from $K_q(m,n,d,r)$ to $R_q(m,n,d)$. By Lemma~\ref{lemma:vertex_transitive}, $R_q(m,n,d)$ is vertex transitive. We hence apply~(\ref{eq:alpha_G_H}) to these graphs, which yields~(\ref{eq:bassalygo}). Let $B_{m-1}$ and $B_m$ be bases over $\mathrm{GF}(q)$ of $\mathrm{GF}(q^{m-1})$ and $\mathrm{GF}(q^m)$, respectively. For all ${\bf x} \in \mathrm{GF}(q^{m-1})^{m-1}$ with rank $m-1$, define $g({\bf x}) = {\bf y} \in \mathrm{GF}(q^m)^m$ such that \begin{equation} \label{eq:g} {\bf Y} = \left(\begin{array}{c|c} {\bf X} & {\bf 0}\\ \hline {\bf 0} & 1 \end{array}\right) \in \mathrm{GF}(q)^{m \times m}, \end{equation} where ${\bf X}$ and ${\bf Y}$ are the expansions of ${\bf x}$ and ${\bf y}$ with respect to $B_{m-1}$ and $B_m$, respectively. By~(\ref{eq:g}), for all ${\bf x}, {\bf z} \in \mathrm{GF}(q^{m-1})^{m-1}$ with rank $m-1$, we have $\mathrm{rk}(g({\bf x})) = \mathrm{rk}({\bf x}) + 1 = m$ and $\mathrm{rk}(g({\bf x}) - g({\bf z})) = \mathrm{rk}({\bf x} - {\bf z})$. Therefore $g$ is a homomorphism from $K_q(m-1,m-1,d,m-1)$ to $K_q(m,m,d,m)$. Applying~(\ref{eq:alpha_G_H}) to these graphs, and noticing that $\alpha(m,m) = q^{m-1} (q^m-1) \alpha(m-1,m-1)$, we obtain~(\ref{eq:johnson_m-1}). We now prove~(\ref{eq:johnson_r}). Note that any vector ${\bf x} \in \mathrm{GF}(q^m)^n$ with rank $r$ belongs to ${n-r \brack 1}$ ELS's of dimension $n-1$. Indeed, such ELS's are of the form $\mathcal{E}({\bf x}) \oplus \mathcal{N}$, where $\mathcal{N} \in E_{n-r-1}(q^m,n-r)$. Let $C$ be an optimal $(n,d,r)$ constant-rank code over $\mathrm{GF}(q^m)$. For all ${\bf c} \in C$ and all $\mathcal{V} \in E_{n-1}(q^m,n)$, we define $f(\mathcal{V},{\bf c}) = 1$ if ${\bf c} \in \mathcal{V}$ and $f(\mathcal{V}, {\bf c}) = 0$ otherwise. For all ${\bf c}$, $\sum_{\mathcal{V} \in E_{n-1}(q^m,n)} f(\mathcal{V}, {\bf c}) = {n-r \brack 1}$, and for all $\mathcal{V}$, $\sum_{{\bf c} \in C} f(\mathcal{V},{\bf c}) = | C \cap \mathcal{V}|$.
Summing over all possible pairs, we obtain \begin{eqnarray*} \sum_{\mathcal{V} \in E_{n-1}(q^m,n)} \sum_{{\bf c} \in C} f(\mathcal{V},{\bf c}) &=& \sum_{{\bf c} \in C} \sum_{\mathcal{V} \in E_{n-1}(q^m,n)} f(\mathcal{V},{\bf c})\\ = \sum_{{\bf c} \in C} {n-r \brack 1} &=& {n-r \brack 1} A_{\mbox{\tiny{R}}}(q^m,n,d,r). \end{eqnarray*} Hence there exists $\mathcal{U} \in E_{n-1}(q^m,n)$ such that $|C \cap \mathcal{U}| = \sum_{{\bf c} \in C} f(\mathcal{U},{\bf c}) \geq \frac{{n-r \brack 1}}{{n \brack 1}} A_{\mbox{\tiny{R}}}(q^m,n,d,r)$. The restriction of $C \cap \mathcal{U}$ to the ELS $\mathcal{U}$ \cite{gadouleau_it06} is an $(n-1,d,r)$ constant-rank code over $\mathrm{GF}(q^m)$, and hence its cardinality satisfies $\frac{q^{n-r} - 1}{q^n-1} A_{\mbox{\tiny{R}}}(q^m,n,d,r) \leq |C \cap \mathcal{U}| \leq A_{\mbox{\tiny{R}}}(q^m,n-1,d,r)$. \end{proof} Eq.~(\ref{eq:bassalygo}) is the counterpart in rank metric codes of the Bassalygo-Elias bound \cite{bassalygo_pit68}, while~(\ref{eq:johnson_r}) is analogous to a well-known result by Johnson \cite{johnson_it62}. Note that~(\ref{eq:bassalygo}) can be trivial for $d$ approaching $2r$. \begin{proposition}\label{prop:A_r_r} For all $q$ and $1 \leq r \leq n \leq m$, \begin{equation}\label{eq:A_r_r} A_{\mbox{\tiny{R}}}(q^m,n,r,r) = {n \brack r}(q^m-1). \end{equation} \end{proposition} \begin{proof} First, by~(\ref{eq:A_lower_MRD}), we obtain $A_{\mbox{\tiny{R}}}(q^m,n,r,r) \geq {n \brack r}(q^m-1).$ Second, applying~(\ref{eq:johnson_r}) successively $n-r$ times leads to $A_{\mbox{\tiny{R}}}(q^m,n,r,r) \leq {n \brack r} A_{\mbox{\tiny{R}}}(q^m,r,r,r).$ By~(\ref{eq:A_upper_MRD}), we obtain $A_{\mbox{\tiny{R}}}(q^m,n,r,r) \leq {n \brack r} (q^m-1)$. \end{proof} Equality in~(\ref{eq:A_r_r}) is thus achieved by the codewords of rank $r$ in an $(n,n-r+1,r)$ linear MRD code. \begin{proposition}\label{prop:A_upper_n_brack_r} For all $q$ and $0 \leq r < d \leq n \leq m$, \begin{equation}\label{eq:A_upper_n_brack_r} A_{\mbox{\tiny{R}}}(q^m,n,d,r) \leq {n \brack r}. \end{equation} \end{proposition} \begin{proof} Consider a code $C$ with minimum rank distance $d$ and constant-rank $r<d$. If $|C| > {n \brack r} = |E_r(q^m,n)|$, then there exist two codewords in $C$ belonging to the same ELS $\mathcal{V} \in E_r(q^m,n)$. Their distance is hence at most equal to $r$, which contradicts the minimum distance of $C$. Therefore, $|C| \leq {n \brack r}$. \end{proof} \begin{corollary} For all $q$, $m$, and $n$, $A_{\mbox{\tiny{R}}}(q^m,n,2,1) = {n \brack 1}$. \end{corollary} \begin{proof} First, by Proposition~\ref{prop:A_upper_n_brack_r}, we obtain $A_{\mbox{\tiny{R}}}(q^m,n,2,1) \leq {n \brack 1}$. Second, by Corollary~\ref{cor:A>As}, we obtain $A_{\mbox{\tiny{R}}}(q^m,n,2,1) \geq A_{\mbox{\tiny{S}}}(q,n,2,1)$. We now prove that $A_{\mbox{\tiny{S}}}(q,n,2,1) = {n \brack 1}$. For any $\mathcal{U}, \mathcal{V} \in E_1(q,n)$, $\mathcal{U} \neq \mathcal{V}$, we have $\dim(\mathcal{U} \cap \mathcal{V}) = 0$ and hence $d_{\mbox{\tiny{S}}}(\mathcal{U}, \mathcal{V}) = 2$. Therefore, $E_1(q,n)$ is a constant-dimension code with minimum subspace distance $2$ and $A_{\mbox{\tiny{S}}}(q,n,2,1) = {n \brack 1}$. \end{proof} \section{Asymptotic results}\label{sec:asymptotics} In this section, we study the asymptotic behavior of $A_{\mbox{\tiny{R}}}(q^m,n,d_{\mbox{\tiny{R}}},r)$. 
In order to compare it to the asymptotic behavior of $A_{\mbox{\tiny{S}}}(q,m,d_{\mbox{\tiny{S}}},r)$, we use a set of normalized parameters different from those introduced in \cite{koetter_arxiv07}: $\nu = \frac{n}{m}$, $\rho = \frac{r}{m}$, $\delta_{\mbox{\tiny{R}}} = \frac{d_{\mbox{\tiny{R}}}}{m}$, and $\delta_{\mbox{\tiny{S}}} = \frac{d_{\mbox{\tiny{S}}}}{2m}$. By definition, $0 \leq \rho, \delta_{\mbox{\tiny{R}}} \leq \nu$, and since we assume $n \leq m$, $\nu \leq 1$. We consider the asymptotic rates defined as $a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho) \stackrel{\mbox{\scriptsize def}}{=} \lim_{m \rightarrow \infty} \sup \left[\log_{q^{m^2}} A_{\mbox{\tiny{R}}}(q^m,n,d_{\mbox{\tiny{R}}},r) \right]$ and $a_{\mbox{\tiny{S}}}(\delta_{\mbox{\tiny{S}}},\rho) \stackrel{\mbox{\scriptsize def}}{=} \lim_{m \rightarrow \infty} \sup \left[ \log_{q^{m^2}} A_{\mbox{\tiny{S}}}(q,m,d_{\mbox{\tiny{S}}},r) \right].$ Adapting the results in \cite{silva_arxiv07} using the parameters defined above, we obtain $a_{\mbox{\tiny{S}}}(\delta_{\mbox{\tiny{S}}}, \rho) = \min\{(1-\rho)(\rho-\delta_{\mbox{\tiny{S}}}),\rho(1-\rho-\delta_{\mbox{\tiny{S}}})\}$ for $0 \leq \delta_{\mbox{\tiny{S}}} \leq \min\{\rho,1-\rho\}$ and $a_{\mbox{\tiny{S}}}(\delta_{\mbox{\tiny{S}}},\rho) = 0$ otherwise. We now investigate how the $A_{\mbox{\tiny{R}}}(q^m,n,d,r)$ term behaves as the parameters tend to infinity. Without loss of generality, we only consider the case where $0 \leq \delta_{\mbox{\tiny{R}}} \leq 2\rho$, since $a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho) = 0$ for $\delta_{\mbox{\tiny{R}}} > 2\rho$. \begin{proposition}\label{prop:a} Suppose $\nu \leq 1$. For $0 \leq \delta_{\mbox{\tiny{R}}} \leq \rho$, \begin{equation}\label{eq:a1} a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho) = \rho(1+\nu-\rho) - \delta_{\mbox{\tiny{R}}}. \end{equation} For $\rho \leq \delta_{\mbox{\tiny{R}}} \leq \min\{ 2\rho, \nu\}$, \begin{equation}\label{eq:a2} \max\{0, \rho(1+\nu-\rho) - \delta_{\mbox{\tiny{R}}}\} \leq a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho) \leq \rho(\nu-\delta_{\mbox{\tiny{R}}}). \end{equation} Suppose $\nu > 1$. For $0 \leq \delta_{\mbox{\tiny{R}}} \leq \rho$, $a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho) = \rho(1+\nu-\rho) - \nu \delta_{\mbox{\tiny{R}}}.$ For $\rho \leq \delta_{\mbox{\tiny{R}}} \leq \min\{ 2\rho, 1\}$, $\max\{0, \rho(1+\nu-\rho) - \nu \delta_{\mbox{\tiny{R}}}\}\leq a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho) \leq\rho(1-\delta_{\mbox{\tiny{R}}})$. \end{proposition} \begin{proof} We give the proof for $\nu \leq 1$, and the proof for $\nu > 1$ is similar and hence omitted. We first derive a lower bound on $a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho)$ for all $\rho$. Using the combinatorial bounds in \cite{gadouleau_it06},~(\ref{eq:bassalygo}) yields $A_{\mbox{\tiny{R}}}(q^m,n,d_{\mbox{\tiny{R}}},r) > q^{r(m+n-r) - \sigma(q) + m(-d_{\mbox{\tiny{R}}}+1)}$, where $\sigma(q) < 2$ for $q \geq 2$. This asymptotically becomes $a_{\mbox{\tiny{R}}}(\nu, \delta_{\mbox{\tiny{R}}}, \rho) \geq \rho(1+\nu-\rho) - \delta_{\mbox{\tiny{R}}}$ for $0 \leq \delta_{\mbox{\tiny{R}}} \leq \min\{ 2\rho, \nu \}$. We now derive an upper bound on $a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho)$. First, suppose $r \geq d_{\mbox{\tiny{R}}}$. 
Applying~(\ref{eq:johnson_r}), we easily obtain $A_{\mbox{\tiny{R}}}(q^m,n,d_{\mbox{\tiny{R}}},r) \leq {n \brack r} A_{\mbox{\tiny{R}}}(q^m,r,d_{\mbox{\tiny{R}}},r).$ Combining with~(\ref{eq:A_upper_MRD}), we obtain $A_{\mbox{\tiny{R}}}(q^m,n,d_{\mbox{\tiny{R}}},r) \leq {n \brack r} q^{m(r-d_{\mbox{\tiny{R}}}+1)} < q^{r(n-r) + \sigma(q) + m(r-d_{\mbox{\tiny{R}}}+1)}.$ Asymptotically, this becomes $a_{\mbox{\tiny{R}}}(\nu,\delta_{\mbox{\tiny{R}}},\rho) \leq \rho(\nu-\rho) - \delta_{\mbox{\tiny{R}}} + \rho$ for $\rho \geq \delta_{\mbox{\tiny{R}}}$. Second, suppose $r < d_{\mbox{\tiny{R}}}$. By the same token, we obtain $A_{\mbox{\tiny{R}}}(q^m,n,d_{\mbox{\tiny{R}}},r) \leq \frac{{n \brack r}}{{d_{\mbox{\tiny{R}}} \brack r}} A_{\mbox{\tiny{R}}}(q^m,d_{\mbox{\tiny{R}}},d_{\mbox{\tiny{R}}},r) \leq q^{r(n-d_{\mbox{\tiny{R}}}) +\sigma(q) + m}$, and hence $a_{\mbox{\tiny{R}}}(\nu, \delta_{\mbox{\tiny{R}}}, \rho) \leq \rho(\nu-\delta_{\mbox{\tiny{R}}})$ for $\rho \leq \delta_{\mbox{\tiny{R}}}$. \end{proof} We observe that the asymptotic behavior of the maximal cardinality of constant-dimension codes depends on whether $\rho = \frac{r}{m}\leq \frac{1}{2}$, while the asymptotic behavior of the maximal cardinality of constant-rank codes depends on whether $\nu = \frac{n}{m}\leq 1$. This is due to the different behaviors of rank metric codes of length $n$ over $\mathrm{GF}(q^m)$ for $m \geq n$ and $m<n$ respectively. The construction of an asymptotically optimal constant-dimension code in $E_r(q,m)$ given in \cite{koetter_arxiv07} and reviewed in Section~\ref{sec:dimension_v_rank} is based on a rank metric code of length $m-r$ over $\mathrm{GF}(q^r)$. Hence $r \geq m-r$ for the rank metric code is equivalent to $r \geq m/2$ (or $\rho \geq 1/2$) for the constant-dimension code. By the Singleton bound on rank metric codes, the asymptotic behavior of the cardinality of an $(n,n-d_{\mbox{\tiny{R}}}+1,d_{\mbox{\tiny{R}}})$ linear MRD code over $\mathrm{GF}(q^m)$ with $\nu \leq 1$ is given by $\nu-\delta_{\mbox{\tiny{R}}}$. On the other hand, by~(\ref{eq:a1}), $a_{\mbox{\tiny{R}}}(\nu, \delta_{\mbox{\tiny{R}}}, \nu) = \nu-\delta_{\mbox{\tiny{R}}}$ for $\nu \leq 1$ and hence the maximum cardinality of a constant-rank code with rank $n$ is asymptotically equivalent to the cardinality of an MRD code with the same minimum rank distance. We hence conjecture that the code formed by the codewords of rank $n$ in an $(n,n-d_{\mbox{\tiny{R}}}+1,d_{\mbox{\tiny{R}}})$ linear MRD code achieves the maximal cardinality asymptotically. \bibliographystyle{IEEETran}
1,314,259,992,709
arxiv
\section{Introduction} It has been known that the internal space for the N=2 supersymmetric one-dimensional sigma model is a K\"ahler manifold \cite{Zumino}, and the internal space for the N=4 supersymmetric one-dimensional sigma model is a hyper-K\"ahler manifold \cite{Curtright} \cite{Hitchin}. This means that there exists a torsion-free connection with holonomy in $\U(n)$ or $\SP(n)$ respectively on the internal space. It has also been known for a fairly long time that when the Wess-Zumino term is present in the sigma model, the internal space has linear connections with holonomy in $\U(n)$ or $\SP(n)$ depending on the number of supersymmetries. However, the connection has torsion and the torsion tensor is totally skew-symmetric \cite{Zumino} \cite{HP1} \cite{GPS}. The geometry of a connection with totally skew-symmetric torsion and holonomy in $\U(n)$ is referred to as KT-geometry by physicists. When the holonomy is in $\SP(n)$, the geometry is referred to as HKT-geometry. If one ignores the metric and the connection of a HKT-geometry, the remaining object on the manifold is a hypercomplex structure. The subject of hypercomplex manifolds has been studied by many people since the publication of \cite{Salamon2} and \cite{Boyer}. A considerable amount of information is known. It has a twistor correspondence \cite{Salamon2} \cite{PP1}. There are homogeneous examples \cite{Joyce2}. There are inhomogeneous examples \cite{BGM} \cite{PP2}. There is a reduction construction modeled on symplectic reduction and hyper-K\"ahler reduction \cite{Joyce1}. However, all these works focus on the hypercomplex structure and the associated Obata connection, which is a torsion-free connection preserving the hypercomplex structure. What is not discussed in these works is hyper-Hermitian geometry. On the other hand, Hermitian connections on almost Hermitian manifolds have been studied rather thoroughly by Gauduchon \cite{Gauduchon}. He considered a subset of Hermitian connections determined by the form of their torsion tensor, called canonical connections. Guided by physicists' work and based on the results on hypercomplex manifolds, we review and further develop the theory of HKT-geometry. While some of our observations are re-interpretations of physicists' results, especially those in \cite{HP1} \cite{HP2} and \cite{OP} \cite{Spindel}, some of the results in this paper are new. In Section \ref{HKT Geometry}, we review the basic definitions of HKT-geometry along the lines of classical Hermitian geometry developed by Gauduchon \cite{Gauduchon}. Based on Joyce's construction of homogeneous hypercomplex manifolds \cite{Joyce2}, we review the construction of homogeneous HKT-geometry with respect to compact semi-simple Lie groups \cite{OP}. In Section \ref{Associated}, we find that a hyper-Hermitian manifold admits a HKT-connection if and only if, for each complex structure, there is a holomorphic (0,2)-form. This characterization easily implies that some hyper-Hermitian structures are not HKT-structures. Furthermore, when this characterization is given a twistorial interpretation, the associated object on the twistor space of the hypercomplex structure is holomorphic with respect to a non-standard almost complex structure ${\cal J}_2$. This almost complex structure ${\cal J}_2$ was first discussed by Eells and Salamon in a different context \cite{ES}. Since this almost complex structure is never integrable, we focus on the holomorphic (0,2)-forms.
From this perspective, we verify that there are HKT-structures on nilmanifolds, and that the twist of a HKT-manifold is again a HKT-manifold. Based on results in Section \ref{Associated}, we study potential theory for HKT-geometry in Section \ref{Potential}. We shall see that local HKT-geometry is very flabby in the sense that the existence of one HKT-structure generates many others through perturbations of potential functions. In particular, we show that hyper-K\"ahler potentials generate many HKT-potentials. The results in this section and Section \ref{Associated} allow us to construct a large family of inhomogeneous HKT-structures on compact manifolds including $S^1\times S^{4n+3}$. Finally, a reduction theory based on hyper-K\"ahler reduction for HKT-geometry is developed in Section \ref{Reduction}. \section{Hyper-K\"ahler Geometry with Torsion}\label{HKT Geometry} \subsection{K\"ahler Geometry with Torsion} Let $M$ be a smooth manifold with Riemannian metric $g$ and an integrable complex structure $J$. It is a Hermitian manifold if $g(JX, JY)=g(X, Y)$. The K\"ahler form $F$ is a type (1,1)-form defined by $F(X, Y)=g(JX, Y)$. A linear connection $\nabla$ on $M$ is Hermitian if it preserves the metric $g$ and the complex structure $J$, i.e., \[ \nabla g=0 \mbox{ and } \nabla J=0. \] Since the connection preserves the metric, it is uniquely determined by its torsion tensor $T$. We shall also consider the following (3,0)-tensor \begin{equation} c(X, Y, Z)=g(X, T(Y,Z)). \end{equation} Gauduchon found that on any Hermitian manifold, the collection of canonical Hermitian connections is an affine subspace of the space of linear connections \cite{Gauduchon}. This affine subspace is at most one-dimensional. It is a single point if and only if the Hermitian manifold is K\"ahler; i.e., when the K\"ahler form is closed, the family of canonical Hermitian connections collapses to the Levi-Civita connection of the given metric. It is one-dimensional if and only if the Hermitian manifold is non-K\"ahler. In the latter case, there are several distinguished Hermitian connections. For example, the Chern connection and Lichnerowicz's \it first canonical connection \rm are in this family. We are interested in another connection in this family. Physicists find that the presence of the Wess-Zumino term in N=2 supersymmetry yields a Hermitian connection whose torsion $c$ is totally skew-symmetric. In other words, $c$ is a 3-form. Such a connection turns out to be another distinguished Hermitian connection \cite{Bismut} \cite{Gauduchon}. The geometry of such a connection is called by physicists KT-geometry. Among some mathematicians, this connection is called the Bismut connection. According to Gauduchon \cite{Gauduchon}, on any Hermitian manifold, there exists a unique Hermitian connection whose torsion tensor $c$ is a 3-form. Moreover, the torsion form can be expressed in terms of the complex structure and the K\"ahler form. Recall the following definitions and convention \cite[Equations 2.8 and 2.15-2.17]{Besse}. For any $n$-form $\omega$, when \begin{equation} (J\omega)(X_1, \dots, X_n):=(-1)^n\omega(JX_1, \dots, JX_n) \quad \mbox{ then } \quad d^c\omega=(-1)^nJdJ\omega. \end{equation} And \begin{equation} \partial=\frac12(d+id^c)=\frac12(d+(-1)^niJdJ), \quad {\overline\partial} =\frac12(d-id^c)=\frac12(d-(-1)^niJdJ). \end{equation} By \cite{Gauduchon}, the torsion 3-form of the Bismut connection is \begin{equation} c(X,Y,Z)=-\frac12d^cF(X,Y,Z).
\end{equation} \subsection{Hyper-K\"ahler Connection and HKT-Geometry} Three complex structures $I_1, I_2$ and $I_3$ on $M$ form a hypercomplex structure if \begin{equation}\label{quaternion} I_1^2=I_2^2=I_3^2=-1, \quad \mbox{ and } \quad I_1I_2=I_3=-I_2I_1. \end{equation} A triple of such complex structures is equivalent to the existence of a 2-sphere of integrable complex structures: \begin{equation} \mathcal{I}=\{a_1I_1+a_2I_2+a_3I_3: a_1^2+a_2^2+a_3^2=1\}. \end{equation} When $g$ is a Riemannian metric on the manifold $M$ such that it is Hermitian with respect to every complex structure in the hypercomplex structure, $(M, \mathcal{I}, g)$ is called a hyper-Hermitian manifold. Note that $g$ is hyper-Hermitian if and only if \begin{equation} g(X, Y)=g(I_1X, I_1Y)=g(I_2X, I_2Y)=g(I_3X, I_3Y). \end{equation} On a hyper-Hermitian manifold, there are two natural torsion-free connections, namely the Levi-Civita connection and the Obata connection. However, in general the Levi-Civita connection does not preserve the hypercomplex structure and the Obata connection does not preserve the metric. We are interested in the following type of connections. \begin{definition} A linear connection $\nabla $ on a hyper-Hermitian manifold $ (M,\mathcal{I},g)$ is hyper-Hermitian if \begin{equation} \nabla g=0,\quad \mbox{ and }\quad \nabla I_{1}=\nabla I_{2}=\nabla I_{3}=0. \end{equation} \end{definition} \begin{definition} A linear connection $\nabla $ on a hyper-Hermitian manifold $ (M,\mathcal{I},g)$ is hyper-K\"{a}hler if it is hyper-Hermitian and its torsion tensor is totally skew-symmetric. \end{definition} A hyper-K\"ahler connection is referred to as a HKT-connection in the physics literature. The geometry of this connection, or the connection itself, is also referred to as HKT-geometry. Note that a HKT-connection is also the Bismut connection for each complex structure in the given hypercomplex structure. For the complex structures $\{I_1, I_2, I_3\}$, we consider their corresponding K\"ahler forms $\{F_1, F_2, F_3\}$ and the operators $\{d_1, d_2, d_3\}$, where $d_a$ denotes the operator $d^c$ taken with respect to the complex structure $I_a$. Due to Gauduchon's characterization of the Bismut connection, we have \begin{proposition} A hyper-Hermitian manifold $(M,\mathcal{I},g)$ admits a hyper-K\"{a}hler connection if and only if $d_{1}F_{1}=d_{2}F_{2}=d_{3}F_{3}$. If it exists, it is unique. \end{proposition} In view of the uniqueness, we say that $(M, \mathcal{I}, g)$ is a HKT-structure if it admits a hyper-K\"ahler connection. If the hyper-K\"ahler connection is also torsion-free, then the HKT-structure is a hyper-K\"ahler structure. \subsection{Homogeneous Examples} Due to Joyce \cite{Joyce2}, there is a family of homogeneous hypercomplex structures associated to any compact semi-simple Lie group. In this section, we briefly review his construction and demonstrate, as Opfermann and Papadopoulos did \cite{OP}, the existence of homogeneous HKT-connections. Let $G$ be a compact semi-simple Lie group. Let $U$ be a maximal torus. Let $\mathfrak{g}$ and $\mathfrak{u}$ be their Lie algebras. Choose a system of ordered roots with respect to $\mathfrak{u}_{\bf C}$. Let $\alpha_1$ be a maximal positive root, and $\mathfrak{h}_1$ the dual space of $\alpha_1$. Let $\partial_1$ be the $\mathfrak{sp}(1)$-subalgebra of $\mathfrak{g}$ such that its complexification is isomorphic to $\mathfrak{h}_1\oplus\mathfrak{g}_{\alpha_1}\oplus\mathfrak{g}_{-\alpha_1}$, where $\mathfrak{g}_{\alpha_1}$ and $\mathfrak{g}_{-\alpha_1}$ are the root spaces for $\alpha_1$ and $-\alpha_1$ respectively.
Let $\mathfrak{b}_1$ be the centralizer of $\partial_1$. Then there is a vector subspace $\mathfrak{f}_1$ composed of root spaces such that $\mathfrak{g}=\mathfrak{b} _1\oplus\partial_1\oplus \mathfrak{f}_1$. If $\mathfrak{b}_1$ is not Abelian, Joyce applies this decomposition to it. By inductively searching for $\mathfrak{sp}(1)$ subalgebras, he finds the following \cite[Lemma 4.1] {Joyce2}. \begin{lemma} The Lie algebra $\mathfrak{g}$ of a compact Lie group $G$ decomposes as \begin{equation} \mathfrak{g}=\mathfrak{b}\oplus _{j=1}^{n}\mathfrak{\partial}_{j}\oplus _{j=1}^{n}\mathfrak{f}_{j}, \label{decomposition} \end{equation} with the following properties. {\rm (1)} $\mathfrak{b}$ is Abelian and $ \mathfrak{\partial}_{j}$ is isomorphic to $\mathfrak{sp}(1)$. {\rm (2)} $\mathfrak{b}\oplus _{j=1}^{n}\mathfrak{\partial}_{j}$ contains $\mathfrak{u}$. {\rm (3)} Set $\mathfrak{b}_{0}=\mathfrak{g}$, $\mathfrak{b}_{n}=\mathfrak{b}$ and $\mathfrak{b}_{k}=\mathfrak{b}\oplus _{j=k+1}^{n}\partial _{j}\oplus _{j=k+1}^{n}\mathfrak{f}_{j}$. Then $[\mathfrak{b}_{k},\partial _{j}]=0$ for $k\geq j$. {\rm (4)} $[\mathfrak{\partial}_{l},\mathfrak{f}_{l}]\subset \mathfrak{f}_{l}$. {\rm (5)} The adjoint representation of $\mathfrak{\partial}_{l} $ on $\mathfrak{f}_{l}$ is reducible to a direct sum of the irreducible 2-dimensional representations of $\mathfrak{sp}(1)$. \label{joyce decomposition} \end{lemma} When the group $G$ is semi-simple, the Killing-Cartan form is a negative definite inner product on the vector space $\mathfrak{g}$. \begin{lemma} The Joyce Decomposition of a compact semi-simple Lie algebra is an orthogonal decomposition with respect to the Killing-Cartan form. \end{lemma} \noindent{\it Proof: } Since Joyce Decomposition given as in (\ref{decomposition}) is inductively defined, it suffices to prove that the decomposition \begin{equation} \mathfrak{g}_{\bf C}=\mathfrak{b}_1\oplus\partial_1\oplus\mathfrak{f}_1 \end{equation} is orthogonal. Recall that \begin{eqnarray*} \partial_1 &=& \langle \mathfrak{h}_1, X_{\alpha_1}, X_{-\alpha_1}\rangle, \mbox{ } \mathfrak{f}_1=\oplus_{\alpha_1\neq \alpha>0, \langle \alpha, \alpha_1\rangle\neq 0} \mathfrak{g}_{\alpha}\oplus\mathfrak{g}_{-\alpha}, \\ \mathfrak{b}_1&=&\{h\in\mathfrak{u}_{\bf C}: \alpha_1(h)=0\} \oplus_{\alpha_1\neq\alpha>0, \langle \alpha, \alpha_1\rangle=0} \mathfrak{g} _{\alpha}\oplus\mathfrak{g}_{-\alpha}. \end{eqnarray*} Since the Cartan subalgebra $\mathfrak{u}_{\bf C}$ is orthogonal to any root space, and it is an elementary fact that two root spaces $\mathfrak{g} _{\alpha}$, $\mathfrak{g}_{\beta}$ are orthogonal whenever $\alpha\neq \pm \beta$, $\mathfrak{f}_1$ is orthogonal to both $\mathfrak{b}_1$ and $\partial_1$. For the same reasons, $\partial_1$ is orthogonal to the summand $\oplus_{\alpha>0, \langle \alpha, \alpha_1\rangle=0} \mathfrak{g} _{\alpha}\oplus\mathfrak{g}_{-\alpha}$ in $\mathfrak{b}_1$, and $\mathfrak{b} _1$ is orthogonal to the summand $\langle X_{\alpha_1}, X_{-\alpha_1}\rangle$ in $\partial_1$. Then $\partial_1$ is orthogonal to $\mathfrak{b}_1$ because for any element $h$ in the Cartan subalgebra in $\mathfrak{b}_1$, $\langle h, h_1\rangle=\alpha_1(h)=0$. \ q.~e.~d. \vspace{0.2in} Let $G$ be a compact semi-simple Lie group with rank $r$. Then \begin{equation} (2n-r)\mathfrak{u}(1)\oplus\mathfrak{g}\cong {\mathbf{R}}^n\oplus_{j=1}^n \mathfrak{\partial}_j\oplus_{j=1}^n\mathfrak{f}_j. 
\label{vvv} \end{equation} At the tangent space of the identity element of $T^{2n-r}\times G$, i.e.\ the Lie algebra $(2n-r)\mathfrak{u}(1)\oplus \mathfrak{g}$, a hypercomplex structure $\{I_1, I_2, I_3\}$ is defined as follows. Let $\{ E_1, \dots, E_n\}$ be a basis for ${\mathbf{R}}^n$. Choose isomorphisms $\phi_j$ from $\mathfrak{sp}(1)$, the real vector space of imaginary quaternions, to $\mathfrak{\partial}_j$. It gives a real linear identification from the quaternions $\mathbf{H}$ to $\langle E_j\rangle\oplus\mathfrak{\partial}_j$. If $H_j$, $X_j$ and $Y_j$ form a basis for $\mathfrak{\partial}_j$ such that $[H_j, X_j]=2Y_j$ and $[H_j, Y_j]=-2X_j$, then \begin{equation} I_1E_j=H_j, I_2E_j=X_j, I_3E_j=Y_j. \end{equation} Define the action of $I_a$ on $\mathfrak{f}_j$ by $I_a(v)=[v, \phi_j(\iota_a)]$ where $\iota_1=i, \iota_2=j, \iota_3=k$. The complex structures $\{I_1, I_2, I_3\}$ at the other points of the group $T^{2n-r}\times G$ are obtained by left translations. These complex structures are integrable and form a hypercomplex structure \cite{Joyce2}. \begin{lemma} When $G$ is a compact semi-simple Lie group with rank $r$, there exists a negative definite bilinear form $\hat{B}$ on the decomposition $(2n-r)\mathfrak{u}(1)\oplus \mathfrak{g}\cong {\mathbf{R}}^{n}\oplus _{j=1}^{n}\mathfrak{\partial}_{j}\oplus _{j=1}^{n}\mathfrak{f}_{j}$ such that {\rm (1)} its restriction to $\mathfrak{g}$ is the Killing-Cartan form, {\rm (2)} it is hyper-Hermitian with respect to the hypercomplex structure, and {\rm (3)} the above decomposition is orthogonal. \end{lemma} \noindent{\it Proof: } In $\partial_j$, we choose an orthogonal basis $\{H_j, X_j, Y_j\}$ such that $H_j$ is in the Cartan subalgebra and \begin{equation} B(H_j, H_j)=B(X_j, X_j)=B(Y_j, Y_j)=-\lambda_j^2. \end{equation} On ${\mathbf{R}}^n=(2n-r)\mathfrak{u}(1)\oplus\mathfrak{b}$, choose $E_1, \dots, E_n$ and extend the Killing-Cartan form so that \begin{equation} {\hat B}(E_i, E_j)=-\delta_{ij}\lambda_j^2. \end{equation} It is now apparent that the extended Killing-Cartan form is hyper-Hermitian with respect to $I_1, I_2$ and $I_3$. To show that the Killing-Cartan form is hyper-Hermitian on $\oplus_{j=1}^n\mathfrak{f}_j$, it suffices to verify that the Killing-Cartan form is hyper-Hermitian on $\mathfrak{f}_1$. It follows from the fact that $B(X, [Y, Z])$ is totally skew-symmetric in $X, Y, Z$ and the Jacobi identity. \ q.~e.~d. \vspace{0.2in} Let $g$ be the left-translation of the extended Killing-Cartan form $-\hat B$. It is a bi-invariant metric on the manifold $T^{2n-r}\times G$. The Levi-Civita connection $D$ is the bi-invariant connection. Let $\nabla$ be the left-invariant connection. When $X$ and $Y$ are left-invariant vector fields \[ D_XY=\frac{1}{2}[X,Y], \mbox{ and } \nabla_XY=0. \] Since the hypercomplex structure and the hyper-Hermitian metric are left-invariant, the left-invariant connection is hyper-Hermitian. The torsion tensor for the left-invariant connection is $T(X,Y)=-[X, Y]$. The (3,0)-torsion tensor is \[ c(X,Y,Z)=-{\hat B}([X,Y],Z). \] It is well-known that $c$ is a totally skew-symmetric 3-form. Therefore, the left-invariant connection defines a HKT-structure on the group manifold $T^{2n-r}\times G$. It is apparent that if one extends the Killing-Cartan form in an arbitrary way, then the resulting bi-invariant metric and left-invariant hypercomplex structure need not form a hyper-Hermitian structure. The above construction can be generalized to homogeneous spaces \cite{OP}.
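For instance, for $G=SU(2)$ the Joyce decomposition is trivial: $r=n=1$, $\mathfrak{b}=\mathfrak{f}_1=0$ and $\partial_1=\mathfrak{g}$, so the construction above produces a left-invariant HKT-structure on $T^{1}\times SU(2)\cong S^1\times S^3$. Since $c(X,Y,Z)=-{\hat B}([X,Y],Z)$ does not vanish on $\mathfrak{su}(2)$, this HKT-structure is not hyper-K\"ahler.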
\section{Characterization of HKT-Structures}\label{Associated} In this section, we characterize HKT-structures in terms of the existence of a holomorphic object with respect to any complex structure in the hypercomplex structure. Through this characterization, we shall find other examples of HKT-manifolds. Toward the end of this section, we shall also reinterpret the twistor theory for HKT-geometry developed by Howe and Papadopoulos \cite{HP2}. The results seem to indicate that the holomorphic characterization developed in the next paragraph will serve all the purpose that one wants the twistor theory of HKT-geometry to serve. \subsection{Holomorphic Characterization} \begin{proposition}\label{character} Let $(M,\mathcal{I},g)$ be a hyper-Hermitian manifold and $ F_{a}$ be the K\"{a}hler form for $(I_{a},g)$. Then $(M,\mathcal{I},g)$ is a HKT-structure if and only if $\partial _{1}(F_{2}+iF_{3})=0$; or equivalently ${\overline{\partial }}_{1}(F_{2}-iF_{3})=0$. \end{proposition} \noindent{\it Proof: } Since $\partial_1 (F_2 + iF_3) = \frac{1}{2}(dF_2 - d_1F_3) + \frac{i}{2}(d_1F_2 + dF_3)$, it is identically zero if and only if $d_1F_2=-dF_3$, and $dF_2=d_1F_3$. Note that $F_2(I_1X, I_1Y) = g(I_2 I_1 X, I_1Y) = -g(I_2X, Y) = -F_2(X, Y)$. It follows that $d_1F_2 = (-1)^2I_1 d I_1 (F_2) = -I_1 dF_2$. As $dF_2$ is a 3-form, for any $X, Y, Z$ tangent vectors, \begin{eqnarray*} -I_1dF_2(X,Y,Z) &=& dF_2(I_1X, I_1Y, I_1Z)=dF_2(I_2I_3X, I_2I_3Y, I_2I_3Z) \\ &=& -I_2dF_2(I_3X, I_3Y, I_3Z)=I_3I_2dF_2(X, Y, Z). \end{eqnarray*} Since $F_2$ is type (1,1) with respect to $I_2$, $I_2F_2=F_2$. Then $d_1F_2 =-I_1dF_2=I_3I_2dF_2=I_3I_2dI_2F_2=I_3d_2F_2$. On the other hand, $-dF_3=I_3I_3dF_3=I_3I_3dI_3F_3=I_3d_3F_3$. Therefore, $d_2F_2=d_3F_3$ if and only if $d_1F_2=-dF_3$. Similarly, one can prove that $d_2F_2=d_3F_3$ if and only if $d_1F_3 = dF_2$. It follows that $\partial_1(F_2+iF_3)=0$ if and only if $d_2F_2=d_3F_3$. It is equivalent to $\nabla^2=\nabla^3$ where $\nabla^a$ is the Bismut connection of the Hermitian structure $(M, I_a, g)$. Since $I_1=I_2I_3$, and $\nabla^2=\nabla^3$, $I_1$ is parallel with respect to $\nabla^2=\nabla^3$. By the uniqueness of Bismut connection, $\nabla^1=\nabla^2=\nabla^3$. \ q.~e.~d. \vspace{0.2in} On any hypercomplex manifold $(M, {\cal I})$, if $F_2-iF_3$ is a 2-form such that $-F_{2}(I_{2}X,Y)=g(X,Y)$ is positive definite and it is a non-holomorphic (0,2)-form with respect to $I_1$, then $(M, g, {\cal I})$ is a hyper-Hermitian manifold but it is not a HKT-structure. For example, a conformal change of a HKT-structure by a generic function gives a hyper-Hermitian structure which is not a HKT-structure so long as the dimension of the underlying manifold is at least eight. On the other hand Proposition \ref{character} implies that every four-dimensional hyper-Hermitian manifold is a HKT-structure, a fact also proven in \cite[Section 2.2]{GT}. In the proof of Proposition \ref{character}, we also derive the following \cite{HP2}. \begin{corollary} Suppose $F_{1},F_{2}$ and $F_{3}$ are the K\"{a}hler forms of a hyper-Hermitian structure. Then the hyper-Hermitian structure is a HKT-structure if and only if \begin{equation}\label{difj} d_{i}F_{j}=-{2}\delta _{ij}c-\epsilon _{ijk}dF_{k}. 
\end{equation} \end{corollary} \begin{theorem}\label{holomorphic} Let $(M,\mathcal{I})$ be a hypercomplex manifold and $F_{2}-iF_{3}$ be a {\rm (0,2)}-form with respect to $I_{1}$ such that ${\overline{\partial }}_{1}(F_{2}-iF_{3})=0$ or equivalently ${\partial }_{1}(F_{2}+iF_{3})=0$ and $-F_{2}(I_{2}X,Y)=g(X,Y)$ is a positive definite symmetric bilinear form. Then $(M,\mathcal{I},g)$ is a HKT-structure. \end{theorem} \noindent{\it Proof: } In view of the last proposition, it suffices to prove that the metric $g$ along with the given hypercomplex structure $ \mathcal{I}$ is hyper-Hermitian. Note that $F_2-iF_3$ is type (0,2) with respect to $I_1$. Since $X-iI_1X$ is a type (1,0)-vector with respect to $I_1$, $(F_2-iF_3)(X-iI_1X, Y)=0$ for any vectors $X$ and $Y$. It is equivalent to the identity $F_2(I_1X, Y) = -F_3(X, Y).$ Then \[ F_3(I_3X, Y) = -F_2(I_1I_3X, Y) = F_2(I_2X, Y)=-g(X, Y). \] So $F_3(I_3X, I_3Y) = F_3(X, Y)$, and $g$ is Hermitian with respect to $I_3$. Since the metric $g$ is Hermitian with respect to $I_2$ and $I_1=I_2I_3$, $g$ is also Hermitian with respect $I_1$. \ q.~e.~d. \vspace{0.2in} \subsection{HKT-Structures on Compact Nilmanifolds} In this section, we apply the last theorem to construct a homogeneous HKT-structure on some compact nilmanifolds. Let $\{ X_1, ... , X_{2n},Y_1, ... , Y_{2n}, Z \}$ be a basis for ${\mathbf R}^{4n + 1}$. Define commutators by: $[X_i, Y_i] = Z$, and all others are zero. These commutators define on ${\mathbf R}^{4n + 1}$ the structure of the \it Heisenberg Lie algebra \rm ${\it h}_{2n}$. Let ${\mathbf R}^3$ be the 3-dimensional Abelian algebra. The direct sum ${\bf n} = {\it h}_{2n} \oplus {\mathbf R}^3$ is a 2-step nilpotent algebra whose center is four dimensional. Fix a basis $\{ E_1, E_2, E_3\}$ for ${\mathbf R}^3$ and consider the following endomorphisms of ${\bf n}$ \cite{Dotti} : \begin{eqnarray*} I_1 &:& X_i \rightarrow Y_i, Z \rightarrow E_1, E_2 \rightarrow E_3;\\ I_2 &:& X_{2i+1} \rightarrow X_{2i}, Y_{2i-1} \rightarrow Y_{2i}, Z \rightarrow E_2, E_1 \rightarrow E_3; \\ I_1^2 &=& I_2^2=-{\rm identity}, \hspace{.5in} I_3 = I_1I_2. \end{eqnarray*} Clearly $I_1I_2 = -I_2I_1$. Moreover, for $a = 1, 2, 3$ and $X, Y \in {\bf n} $, $[I_aX, I_aY] = [X, Y]$ so $I_a$ are Abelian complex structures on ${\bf n}$ in the sense of \cite{Dotti} and in particular are integrable. It implies that $ \{ I_a : a =1, 2, 3 \} $ is a left invariant hypercomplex structure on the simply connected Lie group $N$ whose algebra is $\bf n$. It is known that the complex structures $I_a$ on $\bf n$ satisfy: \[ d( \Lambda^{1,0}_{I_a} {\bf n}^* ) \in \Lambda^{1,1}_{I_a} {\bf n}^* \] where ${\bf n}^* $ is the space of left invariant 1-forms on $N$ and $ \Lambda^{i,j}_{I_a} {\bf n}^* $ is the $(i, j)$-component of ${\bf n}^* \otimes {\bf C}$ with respect to $I_a$ \cite{Dotti}. But then we have $d( \Lambda^{2,0}_{I_a} {\bf n}^* ) \in \Lambda^{2,1}_{I_a} {\bf n}^*$ and any left invariant (2,0)-form is $\partial_1$-closed. Now consider the invariant metric on $N$ for which the basis $ \{ X_i , Y_i , Z, E_a \}$ is orthonormal. Since it is compatible with the structures $I_a$ in view of Theorem \ref{holomorphic} we obtain a left-invariant HKT-structure on $N$. Noting that $N$ is isomorphic to the product $H_{2n} \times {\mathbf R}^3$ of the Heisenberg Lie group $H_{2n}$ and the Abelian group ${\mathbf R}^3$ we have: \begin{corollary} Let ${\Gamma}$ be a cocompact lattice in the Heisenberg group $H_{2n}$ and ${\bf Z}^3$ a lattice in ${\mathbf R}^3$. 
The compact nilmanifold $(\Gamma \times {\bf Z}^3 ) \backslash N$ admits a HKT-structure. \end{corollary} \subsection{Twist of Hyper-K\"ahler Manifolds with Torsions} Suppose that $(M, {\cal I})$ is a hypercomplex manifold. A $\U(1)$-instanton $P$ is a principal $\U(1)$-bundle with a $\U(1)$-connection 1-form $\theta$ such that its curvature 2-form is of type (1,1) with respect to every complex structure in ${\cal I}$ \cite{CS} \cite{GP}. Let $\Psi_M:\U(1)\to \Aut (M)$ be a group of hypercomplex automorphisms, and let $\Psi_P:\U(1)\to \Aut (P)$ be a lifting of $\Psi_M$. Let $\Phi:\U(1)\to \Aut P$ be the principal $\U(1)$-action on the bundle $P$, and $\triangle (g)$ be the diagonal product $\Phi(g)\Psi_P(g)$ action on $P$. A theorem of Joyce \cite[Theorem 2.2]{Joyce2} states that the quotient space $W=P/\triangle (\U(1))$ of the total space of $P$ with respect to the diagonal action $\triangle$ is a hypercomplex manifold whenever the vector fields generated by $\triangle (\U(1))$ are transversal to the horizontal distribution of the connection $\theta$. The quotient space $W$ is called a twist of the hypercomplex manifold $M$. Now suppose that $(M, {\cal I}, g)$ is a HKT-structure, and $P$ is a $\U(1)$-instanton with connection form $\theta$. Suppose that $\Psi_M: \U(1)\to \Aut(M)$ is a group of hypercomplex isometries. Due to the uniqueness of the HKT-structure, $\Psi_M$ is a group of automorphisms of the HKT-structure. \begin{corollary} The twist manifold $W$ admits a HKT-structure. \end{corollary} \noindent{\it Proof: } Let $\phi : P \rightarrow M$ and $\Delta : P \rightarrow W$ be the projections from the instanton bundle $P$ to $M$ and the twist $W$ respectively. The connection $\theta$ defines a splitting of the tangent bundle of $P$ into horizontal and vertical components: $TP = \cal{H} \oplus \cal{V}$ where $\cal{H} = \it{Ker} \theta$. We define endomorphisms $\tilde{I_a}$ on $TP$ as follows: $\tilde{I_a} = 0$ on vertical directions, and when $\tilde v$ is a horizontal lift of a tangent vector $v$ to $M$, define $ \tilde{I_a} \tilde{v} = \widetilde {I_a v}$. Since the fibers of the projection $\Delta$ are transversal to the horizontal distribution, for any tangent vector $\hat v$ to $W$, there exists a horizontal vector $\tilde v$ such that $d\Delta {\tilde v}={\hat v}$. Define ${\hat I}_a$ and $\hat g$ on $W$ by ${\hat I}_a {\hat v} = d\Delta(\tilde{I_a} \tilde{v})$ and ${\hat g}({\hat v},{\hat w}) = \tilde{g}( \tilde{v}, \tilde{w})$. As the diagonal action is a group of hyper-holomorphic isometries, the almost complex structures ${\hat I}_a$ and metric $\hat g$ are well-defined. To verify that ${\hat I}_a$ are integrable complex structures on $W$, we first observe that, for horizontal vector fields $X$ and $Y$, $d\Delta [X, Y] = [d\Delta X, d\Delta Y]$, $d\phi [X, Y] = [d\phi X, d\phi Y]$ and $d\Delta {\tilde I}_a = {\hat I}_a d\Delta$, $d\phi{\tilde I}_a = I_a d\phi$. Through these relations, we establish the following relations between the Nijenhuis tensors of $I_a$, ${\hat I}_a$ and $\tilde{I_a}$: \[ d\Delta \tilde{N_a} (X, Y) = {\hat N}_a (d\Delta X, d\Delta Y) \hspace{.2in} \mbox{ and } \hspace{.2in} d\phi\tilde{N_a} (X, Y) = N_a (d\phi X, d\phi Y). \] The second identity implies that the horizontal part of $\tilde{N_a} (X, Y)$ vanishes because the complex structures $I_a$ are integrable. With the first identity, it follows that the Nijenhuis tensor for ${\hat I}_a$ vanishes if the vertical part of $\tilde{N_a} (X, Y)$ also vanishes.
To calculate the vertical part, we have \begin{eqnarray*} \theta (\tilde{N_a} (X, Y)) &=& \frac14 \theta ([X, Y]+{\tilde I}_a[{\tilde I}_aX, Y] +{\tilde I}_a[X, {\tilde I}_aY]-[{\tilde I}_aX, {\tilde I}_aY])\\ &=& \frac14 \theta ([X, Y]-[{\tilde I}_aX, {\tilde I}_aY]) = \frac14(d\theta(X, Y) - d\theta(I_aX, I_aY)). \end{eqnarray*} Since $\theta$ is an instanton, $d\theta(X, Y) - d\theta(I_aX, I_aY)=0$. It follows that ${\hat I}_a$ are integrable. To check that $\hat{g}$ is a HKT-metric, we first observe that $d\Delta$ and $d\phi$ give rise to isomorphisms of $\Lambda^{(p,q)}M$, $\Lambda^{(p,q)} \cal{H}$ and $\Lambda^{(p,q)} W$ when we fix the structures $I_1$, ${\hat I}_1$ and ${\tilde I}_1$. Let the K\"ahler forms of the structures $I_a$ and ${\hat I}_a$ be denoted by $F_a$ and ${\hat F}_a$ respectively. Now if $X, Y$ and $Z$ are sections of ${\cal H}^{(1,0)}$ then \[ X(\Delta^* ({\hat F}_2 + i{\hat F}_3))(Y, Z) = X(\phi^* (F_2 + iF_3))(Y, Z). \] Since $d\theta$ is type (1,1), $\theta([X, Y]) = d\theta(X, Y) = 0$. It means that $[X, Y]$ is a section of ${\cal H}^{(1,0)}$. Therefore, $\Delta^* ({\hat F}_2 + i{\hat F}_3)([X, Y], Z) = \phi^* (F_2 +iF_3)([X, Y], Z)$. It follows that \[ (\Delta^* d({\hat F}_2 + i{\hat F}_3))|_{\Lambda^{(3,0)} \cal{H}} = (d\Delta^* ({\hat F}_2 + i{\hat F}_3))|_{\Lambda^{(3,0)} \cal{H}} = d\phi^* (F_2 +iF_3))|_{\Lambda^{(3,0)} \cal{H}} = 0. \] Hence $d({\hat F}_2 + i{\hat F}_3)|_{\Lambda^{(3,0)}W}= 0$ and the corollary follows from Proposition \ref{character}. \ q.~e.~d. \vspace{0.2in} \subsection{Twistor Theory of HKT-Geometry} When $(M, \mathcal{I})$ is a 4n-dimensional hypercomplex manifold, the smooth manifold $Z=M\times S^2$ admits an integrable complex structure. It is defined as follows. For a unit vector ${{\vec{a}}}=(a_1, a_2, a_3)\in\mathbf{R}^3$, let $I_{{\vec{a}}}$ be the complex structure $ a_1I_1+a_2I_2+a_3I_3$ in the hypercomplex structure $\mathcal{I}$. Let $J_{{ \vec{a}}}$ be the complex structure on $S^2$ defined by cross product in $ \mathbf{R}^3$: $J_{{\vec{a}}}{{\vec{w}}}={{\vec{a}}}\times{{\vec{w}}}$. Then the complex structure on $Z=M\times S^2$ at the point $(x, {{\vec{a}}})$ is $\mathcal{J}_{(x, {{\vec{a}}})}=I_{{\vec{a}}}\oplus J_{{\vec{a}}}.$ It is well-known from twistor theory that this complex structure is integrable \cite{Salamon2}. We shall have to consider a non-integrable almost complex structure $\mathcal{J}_2=I\oplus (-J).$ Unless specified the otherwise, we discuss holomorphicity on $Z$ in terms of the integrable complex structure $\mathcal{J}$. With respect to $\mathcal{J}$, the fibers of the projection $ \pi$ from $Z=M\times S^2$ onto its first factor are holomorphic curves with genus zero. It can be proved that the holomorphic normal bundles are $ \oplus^{2n}\mathcal{O}(1)$. The antipodal map $\tau$ on the second factor is an anti-holomorphic map on the twistor space $Z$ leaving the fibers of the projection $\pi$ invariant. The projection $p$ onto the second smooth factor of $ Z=M\times S^2$ is a holomorphic map such that the inverse image of a point $ (a_1, a_2, a_3)$ is the manifold $M$ equipped with the complex structure $ a_1I_1+a_2I_2+a_3I_3$. If $\mathcal{D}$ is the sheaf of kernel of the differential $dp$, then we have the exact sequence \begin{equation} 0\to \mathcal{D}\to \Theta_Z \stackrel{dp}{\longrightarrow} p^*\Theta_{ \mathbf{C}\mathbf{P}^1} \to 0. \end{equation} Real sections, i.e. $\tau$-invariant sections, of the holomorphic projection $p$ are fibers of the projection from $Z$ onto $M$. 
Twistor theory shows that there is a one-to-one correspondence between hypercomplex manifold $(M, \mathcal{I})$ and its twistor space $Z$ with the complex structure $\mathcal{J}$, the anti-holomorphic map $\tau$, the holomorphic projection $p$ and the sections of the projection $p$ with prescribed normal bundle \cite{PP1}. It is not surprising that when a hypercomplex manifold has a HKT-structure, there is an additional geometric structure on the twistor space. The following theorem is essentially developed in \cite{HP2}. \begin{theorem} Let $(M,\mathcal{I},g)$ be a 4n-dimensional HKT-structure. Then the twistor space $Z$ is a complex manifold such that \begin{enumerate} \item the fibers of the projection $\pi :Z\rightarrow M$ are rational curves with holomorphic normal bundle $\oplus ^{2n}\mathcal{O}(1)$, \item there is a holomorphic projection $p:Z\rightarrow \mathbf{C}\mathbf{P}^{1}$ such that the fibers are the manifold $M$ equipped with complex structures of the hypercomplex structure $\mathcal{I}$, \item there is a ${\cal J}_{2}$-holomorphic section of $\wedge ^{(0,2)}\mathcal{D}\otimes p^{\ast }{\overline{\Theta }}_{\mathbf{C}\mathbf{P }^{1}}$ defining a positive definite (0,2)-form on each fiber, \item there is an anti-holomorphic map $\tau $ compatible with 1, 2 and 3 and inducing the antipodal map on $\mathbf{C}\mathbf{P} ^{1}$. \end{enumerate} Conversely, if $Z$ is a complex manifold with a non-integrable almost complex structure $J_{2}$ with the above four properties, then the parameter space of real sections of the projection $p$ is a 4n-dimensional manifold $M$ with a natural HKT-structure for which $Z$ is the twistor space. \end{theorem} \noindent{\it Proof: } Given a HKT-structure, then only part {\it 3} in the first half of this theorem is a new observation. It is a generalization of Theorem \ref{holomorphic}. Through the stereographic projection, \begin{equation} \zeta \mapsto {{\vec{a}}} =\frac{1}{1+|\zeta |^{2}} (1-|\zeta |^{2}, -i(\zeta -{\overline{\zeta }}), -(\zeta +{\overline{\zeta }})) \end{equation} $\zeta $ is a complex coordinate of the Riemann sphere. Note that \[ \frac{1}{1+|\zeta |^{2}}\left( \begin{array}{ccc} 1-|\zeta |^{2} & i(\zeta -{\overline{\zeta }}) & \zeta +{\overline{\zeta }} \\ -i(\zeta -{\overline{\zeta }}) & 1+\frac{1}{2}(\zeta ^{2}+{\overline{\zeta }}^{2}) & -\frac{i}{2}(\zeta ^{2}-{\overline{\zeta }}^{2}) \\ -(\zeta +{\overline{\zeta }}) & -\frac{i}{2}(\zeta ^{2}-{\overline{\zeta }}^{2}) & 1-\frac{1}{2}(\zeta ^{2}+{\overline{\zeta }}^{2}) \end{array} \right) \] is a special orthogonal matrix. Let $\vec{b}$ and $\vec{c}$ be the second and third column vectors respectively. Consider the complex structure \[ I_{\vec{a}}=\frac{1}{1+|\zeta |^{2}}\left( (1-|\zeta |^{2})I_{1}-i(\zeta -{\overline{\zeta }})I_{2} -(\zeta +{\overline{\zeta }}) I_{3}\right). \] According to Theorem \ref{holomorphic}, the 2-form \begin{equation} F_{\vec{b}}-iF_{\vec{c}} =\frac{1}{1+|\zeta |^{2}} \left( (F_{2}-iF_{3})-2i{\overline{\zeta }}F_{1} +{\overline{\zeta }}^{2}(F_{2}+iF_{3}) \right) \end{equation} is holomorphic with respect to $I_{\vec{a}}$. Due to the integrability of the complex structure $I_{\vec{a}}$, $d_{\vec{a}}$ is linear in $\vec{a}$. Therefore, \begin{equation}\label{da} d_{\vec{a}}=\frac{1}{1+|\zeta |^{2}} \left( (1-|\zeta |^{2})d_{1} -i(\zeta -{\overline{\zeta }})d_{2} -(\zeta +{\overline{\zeta }}) d_{3} \right). \end{equation} Note that ${\overline{\zeta }}$ is holomorphic with respect to the almost complex structure ${\cal J}_{2}$. 
More precisely, consider the $\overline{ \partial }$-operator with respect to the almost complex structure ${\cal J}_{2}$: on n-forms, it is \begin{equation} {\overline{\delta }}=\frac{1}{2}(d-i(-1)^{n}{\cal J}_{2}d{\cal J}_{2}), \end{equation} then ${\cal J}_{2}d{\overline{\zeta }}=id{\overline{\zeta }}$, and ${\overline{\delta }} {\overline{\zeta }}=0$. It follows that at $(x,{\vec{a}})$ on $Z=M\times S^{2}$, \begin{eqnarray*} &&{\overline{\delta }} \left( -2i{\overline{\zeta }}F_{1} +(1+{\overline{\zeta }}^{2})F_{2} -i(1-{\overline{\zeta }}^{2})F_{3} \right) \\ &=& -2i{\overline{\zeta }}{\overline{\delta }}F_{1} +(1+{\overline{\zeta }}^{2}){\overline{\delta }}F_{2} -i(1-{\overline{\zeta }}^{2}){\overline{\delta }}F_{3} = -2i{\overline{\zeta }}{\overline{\partial }}_{\vec{a}}F_{1} +(1+{\overline{\zeta }}^{2}){\overline{\partial }}_{\vec{a}}F_{2} -i(1-{\overline{\zeta }}^{2}){\overline{\partial }}_{\vec{a}}F_{3} \\ &=& \frac12\left( -2i{\overline{\zeta }}{d}F_{1} +(1+{\overline{\zeta }}^{2}){d}F_{2} -i(1-{\overline{\zeta }}^{2}){d}F_{3} \right)\\ && -\frac{i}{2}\left( -2i{\overline{\zeta }}{d}_{\vec{a}}F_{1} +(1+{\overline{\zeta }}^{2}){d}_{\vec{a}}F_{2} -i(1-{\overline{\zeta }}^{2}){d}_{\vec{a}}F_{3} \right) \end{eqnarray*} Now (\ref{da}) and (\ref{difj}) together imply that the twisted 2-form $(F_{2}-iF_{3})-2i{\overline{\zeta }}F_{1} +{\overline{\zeta }}^{2}(F_{2}+iF_{3})$ is closed with respect to $\overline\delta$. Therefore, it is a ${\cal J}_2$-holomorphic section. Since $\zeta$ is a holomorphic coordinate on $S^2$, the homogeneity shows that this section is twisted by $\overline{{\cal O}(2)}$. The inverse construction is a consequence of the inverse construction of hypercomplex manifolds \cite{PP1} and Theorem \ref{holomorphic}. \ q.~e.~d. \vspace{0.2in} As the almost complex structure ${\cal J}_2$ is never integrable \cite{ES}, twistor theory loses much of the power of holomorphic geometry when we study HKT-structures. Therefore, we focus on applications of Theorem \ref{holomorphic}. \section{Potential Theory}\label{Potential} Proposition \ref{character} shows that the form $F_2+iF_3$ is a $\partial_1$-closed (2,0)-form on a HKT-manifold. It is natural to consider a differential form $\beta_1$ as a potential 1-form for $F_2+iF_3$ if $\partial_1\beta_1=F_2+iF_3$. A priori, the 1-form $\beta_1$ depends on the choice of the complex structure $I_1$. The potential 1-form for $F_3+iF_1$, if it exists, depends on $I_2$, and so on. In this section, we seek a function that generates all K\"ahler forms. \subsection{Potential Functions} A function $\mu$ is a potential function for a hyper-K\"ahler manifold $(M, \mathcal{I}, g)$ if the K\"ahler forms $\Omega_a$ are equal to $dd_a\mu$. Since $d_a=(-1)^nI_adI_a$ on n-forms, $d_a\mu=I_ad\mu.$ Therefore, \[ d_1d_2\mu=d_1I_2d\mu=-I_1dI_1I_2d\mu=-I_1dI_3d\mu=-I_1dd_3\mu=-I_1\Omega_3 =\Omega_3=dd_3\mu. \] Now we generalize this concept to HKT-manifolds. \begin{definition} Let $(M,\mathcal{I},g)$ be a HKT-structure with K\"{a}hler forms $F_{1},F_{2}$ and $F_{3}$. A possibly locally defined function $\mu$ is a potential function for the HKT-structure if \begin{equation}\label{f123} F_{1}=\frac{1}{2}(dd_{1}+d_{2}d_{3})\mu ,\quad F_{2}=\frac{1}{2} (dd_{2}+d_{3}d_{1})\mu ,\quad F_{3}=\frac{1}{2}(dd_{3}+d_{1}d_{2})\mu .
\end{equation} \end{definition} Due to the identities $dd_a+d_ad=0$ and $d_ad_b+d_bd_a=0$, $\mu$ is a potential function if and only if \[ F_{\vec{a}}=\frac{1}{2}(dd_{\vec{a}}+d_{\vec{b}}d_{\vec{c}})\mu, \] when ${\vec{a}}={\vec{b}}\times{\vec{c}}$ and $F_{\vec{a}}$ is the K\"ahler form for the complex structure $I_{\vec{a}}=a_1I_1+a_2I_2+a_3I_3$. Moreover, the torsion 3-form is given by $d_1F_1=d_2F_2=d_3F_3=\frac12d_1d_2d_3\mu$. Furthermore, since $\partial_a=\frac{1}{2}(d+id_a)$ and ${\overline\partial}_a=\frac{1}{2}(d-id_a)$, \begin{equation} F_2+iF_3=\frac12 (dd_2+idd_3+id_1d_2-d_1d_3)\mu= 2\partial_1I_2{\overline\partial}_1\mu. \end{equation} Conversely, if a function $\mu$ satisfies the above identity, it satisfies the last two identities in (\ref{f123}). Since the metric is hyper-Hermitian, for any vectors $X$ and $Y$, $F_1(X, Y)=F_2(I_3X, Y)$. Through the integrability of the complex structures $I_1, I_2, I_3$, the quaternion identities (\ref{quaternion}) and the last two identities in (\ref{f123}), one derives the first identity in (\ref{f123}). Therefore, we have the following theorem which justifies our definition for potential functions. \begin{theorem} Let $(M,\mathcal{I},g)$ be a HKT-structure with K\"{a}hler forms $F_{1},F_{2}$ and $F_{3}$. A possibly locally defined function $\mu $ is a potential function for the HKT-structure if \begin{equation} F_2+iF_3=2\partial_1I_2{\overline\partial}_1\mu. \end{equation} \end{theorem} In this context, a HKT-structure is hyper-K\"ahler if and only if the potential function satisfies the following identities. \begin{equation} dd_1\mu=d_2d_3\mu, \quad dd_2\mu=d_3d_1\mu, \quad dd_3\mu=d_1d_2\mu. \end{equation} \noindent{\bf Remark:} As in the K\"ahler case, compact manifolds do not admit a globally defined HKT potential. To verify this, let $f$ be a potential function and $g$ be the corresponding induced metric. Define the complex Laplacian of $f$ with respect to $g$: $$ \overline{\partial}^* \overline{\partial} f = {\triangle^c} f = g(dd_1f, F_1) $$ Then $0 < 2g(F_1, F_1) = g(dd_1 f + d_2d_3 f, F_1) = 2 {\triangle^c} f,$ because $$ g(d_2d_3 f, F_1) = g(-I_2dd_1 f, F_1) = -g(dd_1 f, I_2F_1) = g(dd_1 f, F_1) = {\triangle^c}f. $$ Now the remark follows from the standard arguments involving the maximum principle for second order elliptic differential equations, just as in the K{\"a}hler case, since $\triangle^c f$ does not have zero-order terms. \ \noindent{\bf Remark:} If we introduce the following quaternionic operators acting on quaternionic valued forms on the left: $\partial^H=d+id_1+jd_2+kd_3$, and ${\overline\partial}^H=d-id_1-jd_2-kd_3$, then a real-valued function $\mu$ is a HKT-potential if $\partial^H{\overline\partial}^H\mu=-2iF_1-2jF_2-2kF_3$. If we identify ${\bf H}^n$ with ${\bf C}^{2n}$, we deduce from Theorem \ref{holomorphic} that any pluri-subharmonic function on a domain in ${\bf C}^{2n}$ is an HKT-potential. The converse, however, is false. As we shall see in Example \ref{hopf}, the function $\log(|z|^2+ |w|^2)$ is a HKT potential in ${\bf C}^{2n}\backslash\{ 0\}$ but is not pluri-subharmonic. \ \noindent{\bf Remark:} Given a HKT-metric $g$ with K\"ahler forms $F_1$, $F_2$ and $F_3$, for any real-valued function $\mu$ we consider \[ {\hat F}_2+i{\hat F}_3=F_2+iF_3+\partial_1I_2{\overline\partial}_1\mu. \] According to Theorem \ref{holomorphic} and other results in this section, whenever the form ${\hat g}(X, Y):=-{\hat F}_2(I_2X,Y)$ is positive definite, we obtain a new HKT-metric with respect to the old hypercomplex structure.
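Many of the computations in this section rely only on pointwise quaternionic linear algebra, for instance the identity $I_2F_1=-F_1$ used in the first remark above, or equivalently $F_a(I_bX,I_bY)=-F_a(X,Y)$ for $a\neq b$ together with $F_a(I_aX,I_aY)=F_a(X,Y)$. The following Python fragment checks these identities on the flat model ${\bf H}\cong{\bf R}^4$, with $I_1,I_2,I_3$ given by left multiplication by $i,j,k$; it is an added illustration only, and the conventions in it (the Euclidean metric and $F_a(X,Y)=g(I_aX,Y)$) are ours.

\begin{verbatim}
import numpy as np

# Left multiplication by i, j, k on H = R^4 with basis (1, i, j, k).
I1 = np.array([[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]], dtype=float)
I2 = np.array([[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]], dtype=float)
I3 = np.array([[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]], dtype=float)
g = np.eye(4)                                  # flat hyper-Hermitian metric

# quaternion relations: I_a^2 = -1, I_1 = I_2 I_3, I_2 I_3 = -I_3 I_2
for I in (I1, I2, I3):
    assert np.allclose(I @ I, -np.eye(4))
assert np.allclose(I2 @ I3, I1) and np.allclose(I2 @ I3, -(I3 @ I2))

def F(I, X, Y):
    """Kaehler form of (I, g): F(X, Y) = g(IX, Y)."""
    return (I @ X) @ g @ Y

rng = np.random.default_rng(0)
for _ in range(100):
    X, Y = rng.standard_normal(4), rng.standard_normal(4)
    for Ia in (I1, I2, I3):
        assert np.isclose((Ia @ X) @ g @ (Ia @ Y), X @ g @ Y)   # g Hermitian
        assert np.isclose(F(Ia, Ia @ X, Ia @ Y), F(Ia, X, Y))   # F_a is (1,1)
    # I_b with b != a reverses the sign of F_a, e.g. I_2 F_1 = -F_1
    assert np.isclose(F(I1, I2 @ X, I2 @ Y), -F(I1, X, Y))
    assert np.isclose(F(I2, I3 @ X, I3 @ Y), -F(I2, X, Y))
    assert np.isclose(F(I3, I1 @ X, I1 @ Y), -F(I3, X, Y))
print("pointwise hyper-Hermitian identities hold on flat H")
\end{verbatim}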
\subsection{HKT-Potentials Generated by Hyper-K\"ahler Potentials} Let $(M, \mathcal{I}, g)$ be a hyper-K\"ahler manifold with hyper-K\"ahler potential $\mu$. The K\"ahler forms are given by $\Omega_a=dd_a\mu$. We consider HKT-structures generated by potential functions through $\mu$. \begin{theorem}\label{modification} Suppose $(M, \mathcal{I}, g)$ is a hyper-K\"ahler manifold with hyper-K\"ahler potential $\mu$. For any smooth function $f$ of one variable, let $U$ be the open subset of $M$ on which $\mu$ is defined and \begin{equation}\label{Inequality} f^{\prime}(\mu )+\frac{1}{4}f^{\prime\prime}(\mu )|\nabla\mu |^2>0. \end{equation} Define a symmetric bilinear form $\hat g$ by \begin{equation}\label{ghat} {\hat g}=f^{\prime}(\mu )g+\frac{1}{4}f^{\prime\prime}(\mu ) (d\mu\otimes d\mu+I_1d\mu\otimes I_1d\mu+ I_2d\mu\otimes I_2d\mu + I_3d\mu\otimes I_3d\mu). \end{equation} Then $(U, \mathcal{I}, {\hat g})$ is a HKT-structure with $f(\mu)$ as its potential. \end{theorem} \noindent{\it Proof: } Since $\mu$ is a hyper-K\"ahler potential for the metric $g$, $\Omega_2+i\Omega_3=2\partial_1I_2{\overline\partial}_1\mu.$ It follows that \begin{eqnarray*} 2\partial_1I_2{\overline\partial}_1f &=& 2\partial_1f^{\prime}(\mu)I_2{ \overline\partial}_1\mu = 2f^{\prime}(\mu)\partial_1I_2{\overline\partial}_1\mu +2f^{\prime\prime}(\mu )\partial_1\mu\wedge I_2{\overline\partial}_1\mu \\ &=& f^{\prime}(\mu )(\Omega_2+i\Omega_3)+\frac12 f^{\prime\prime}(\mu) (d\mu+id_1\mu )\wedge (I_2d\mu-iI_2d_1\mu ). \end{eqnarray*} When $F_2$ and $F_3$ are the real and imaginary part of $2\partial_1I_2{ \overline\partial}_1f $ respectively, then \begin{equation} F_2=f^{\prime}(\mu )\Omega_2+\frac12f^{\prime\prime}(\mu )(d\mu\wedge I_2d\mu+d_1\mu\wedge I_2d_1\mu ). \end{equation} It is now straight forward to verify that $-{\hat F}_2(I_2X, Y)= {\hat g}(X, Y)$. Therefore, $\hat g$ together with given hypercomplex structure defines a HKT-structure with the function $f$ as its potential so long as $\hat g$ is positive definite. Since $g$ is hyper-Hermitian, the vector fields $ Y_0=\nabla\mu$ and $Y_a=I_a\nabla\mu$ are mutually orthogonal with equal length. At any point where $Y_0$ is not the zero vector, we extend $\{Y_0, Y_1, Y_2, Y_3\}$ to an orthonormal frame with respect to the hyper-K\"ahler metric $g$. Any vector $X$ can be written as $X=a_0Y_0+a_1Y_1+a_2Y_2+a_3Y_3+X^\perp$ where $X^\perp$ is in the orthogonal complement of $\{Y_0, Y_1, Y_2, Y_3\}$. Note that \[ d\mu (X^\perp) = g(\nabla\mu, X^\perp )=0, \mbox{ and } I_ad\mu (X^\perp)=-g(\nabla\mu, I_aX^\perp )=g(I_a\nabla\mu, X^\perp )=0. \] Also, for $1\leq a\neq b\leq 3$, \begin{eqnarray*} d\mu (Y_a) &=& g(\nabla\mu, I_a\nabla\mu)=0, \quad d\mu (Y_0)=|\nabla\mu|^2, \\ \ I_bd\mu (Y_a) &=& -g(\nabla\mu, I_bI_a\nabla\mu )=0, \quad I_ad\mu (Y_a)=-g(\nabla\mu, I_a^2\nabla\mu )=|\nabla\mu|^2. \end{eqnarray*} Then \[ {\hat g}(X, X) =f^{\prime}(\mu )(\sum_{\ell=0}^3a_\ell^2)|\nabla\mu|^2 + \frac{f^{\prime\prime}(\mu )}{4}(\sum_{\ell=0}^3a_\ell^2)|\nabla\mu|^4 =(f^{\prime}(\mu )+\frac{f^{\prime\prime}(\mu )}{4}|\nabla\mu |^2)(\sum_{\ell=0}^3a_\ell^2)|\nabla\mu|^2. \] Therefore, $\hat g$ is positive definite on the open set defined by the inequality (\ref{Inequality}). \ q.~e.~d. \vspace{0.2in} Note that for any positive integer $m$, $f(\mu)=\mu^m$ satisfies (\ref{Inequality}) whenever $\mu$ is positive. So does $f(\mu )=e^{\mu}$. Therefore, if $g$ is a hyper-K\"ahler metric with a positive potential function $\mu$, the following metrics are HKT-metrics. 
\begin{eqnarray*} g_m &=&m\mu^{m-2}(\mu g+\frac{m-1}{4} (d\mu\otimes d\mu+I_1d\mu\otimes I_1d\mu+ I_2d\mu\otimes I_2d\mu + I_3d\mu\otimes I_3d\mu)),\\ g_\infty &=& e^\mu(g+\frac{1}{4} (d\mu\otimes d\mu+I_1d\mu\otimes I_1d\mu+ I_2d\mu\otimes I_2d\mu + I_3d\mu\otimes I_3d\mu)). \end{eqnarray*} \subsection{Inhomogeneous HKT-Structures on $S^1\times S^{4n-1}$}\label{hopf} On the complex vector space $(\mathbf{C}^n\oplus \mathbf{C}^n)\backslash\{0\}$, let $(z_\alpha, w_\alpha)$, $1\leq\alpha\leq n $, be its coordinates. We define a hypercomplex structure containing this complex structure as follows. \[ \begin{array}{cccc} I_1dz_\alpha =-idz_\alpha, & I_1dw_\alpha=-idw_\alpha, & I_1d{\overline z} _\alpha = id{\overline z}_\alpha, & I_1d{\overline w}_\alpha=id{\overline w} _\alpha. \\ I_2dz_\alpha=d{\overline w}_\alpha, & I_2dw_\alpha=-d{\overline z}_\alpha, & I_2d{\overline z}_\alpha = d{w}_\alpha, & I_2d{\overline w}_\alpha=-d{z} _\alpha. \\ I_3dz_\alpha=id{\overline w}_\alpha, & I_3dw_\alpha=-id{\overline z}_\alpha, & I_3d{\overline z}_\alpha = -id{w}_\alpha, & I_3d{\overline w}_\alpha=id{z} _\alpha. \end{array} \] The function $\mu=\frac12(|z|^2+|w|^2)$ is the hyper-K\"ahler potential for the standard Euclidean metric: \begin{equation} g = \frac{1}{2}(dz_{\alpha}\otimes d{\overline z}_\alpha +d{\overline z} _\alpha \otimes dz_{\alpha} +dw_{\alpha}\otimes d{\overline w}_\alpha +d{ \overline w}_\alpha \otimes dw_{\alpha}). \end{equation} Since $|\nabla\mu|^2=2\mu$, the function $f(\mu)=\ln\mu$ satisfies the inequality (\ref{Inequality}) on ${\bf C}^{2n}\backslash\{0\}$. By Theorem \ref{modification}, $\ln\mu$ is the HKT-potential for a HKT-metric $\hat g$ on ${\bf C}^{2n}\backslash\{0\}$. Next, for any real number $r$ with $0<r<1$, and $\theta_1, \dots, \theta_n$ modulo $2\pi$, we consider the integer group $\langle r\rangle$ generated by the following action on $(\mathbf{C}^n\oplus \mathbf{C }^n)\backslash\{0\}$. \begin{equation} (z_\alpha, w_\alpha)\mapsto (re^{i\theta_\alpha}z_{\alpha}, re^{-i\theta_\alpha} w_\alpha). \end{equation} One can check that the group $\langle r\rangle$ is a group of hypercomplex transformations. As observed in \cite{PP2}, the quotient space of $(\mathbf{C} ^n\oplus \mathbf{C}^n)\backslash\{0\}$ with respect to $\langle r\rangle$ is the manifold $S^1\times S^{4n-1}=S^1\times \SP(n)/\SP(n-1)$. Since the group $\langle r\rangle$ is also a group of isometries with respect to the HKT-metric $\hat g$ determined by $f(\mu)=\ln \mu$, the HKT-structure descends from $(\mathbf{C}^n\oplus \mathbf{C}^n)\backslash\{0\}$ to a HKT-structure on $S^1\times S^{4n-1}$. Since the hypercomplex structures on $S^1\times S^{4n-1}$ are parametrized by $(r, \theta_1, \dots, \theta_n)$ and a generic hypercomplex structure in this family is inhomogeneous \cite{PP2}, we obtain a family of inhomogeneous HKT-structures on the manifold $S^1\times S^{4n-1}$. \begin{theorem} Every hypercomplex deformation of the homogeneous hypercomplex structure on $S^1\times S^{4n-1}$ admits a HKT-metric. \end{theorem} Furthermore, ${\hat F}_2+i{\hat F}_3=2\partial_1I_2{ \overline\partial}_1\ln\mu$ descends to $S^1\times S^{4n-1}$. However, the function $\ln\mu$ does not descend to $S^1\times S^{4n-1}$. Therefore, this (2,0)-form has a potential form $2I_2{\overline\partial}_1\ln\mu$ but not a globally defined potential function. \subsection{Associated Bundles of Quaternionic K\"ahler Manifolds} When $M$ is a quaternionic K\"ahler manifold, i.e.
the holonomy of the Riemannian metric is contained in the group $\SP(n)\cdot\SP (1)$, the representation of $\SP(1)$ on quaternions $\mathbf{H}$ defines an associated fiber bundle $\mathcal{U}(M)$ over the smooth manifold $M$ with $(\mathbf{H}\backslash\{0\})/\mathbf{Z}_2$ as fiber. Swann finds that there is a hyper-K\"ahler metric $g$ on $\mathcal{U}(M)$ whose potential function $\mu $ is the length of the radius coordinate vector field along each fiber \cite{Swann}. As in the last example, $\ln\mu$ is the potential function of a HKT-structure with metric $\hat g$. Again, the metric $\hat g$ and the hypercomplex structure are both invariant under fiberwise real scalar multiplication. Therefore, the HKT-structure with metric $\hat g$ descends to the compact quotients defined by integer groups generated by fiberwise real scalar multiplications. \section{Reduction}\label{Reduction} First of all, we recall the construction of hypercomplex reduction developed by Joyce \cite{Joyce1}. Let $G$ be a compact group of hypercomplex automorphisms on $M$. Denote the algebra of hyper-holomorphic vector fields by $\mathfrak{g}$. Suppose that $\nu =(\nu _{1},\nu _{2},\nu _{3}):M\longrightarrow {\bf R}^{3}\otimes \mathfrak{g}$ is a $G$-equivariant map satisfying the following two conditions. The Cauchy-Riemann condition: $I_{1}d\nu _{1}=I_{2}d\nu _{2}=I_{3}d\nu _{3}$, and the transversality condition: $I_a d\nu_a(X)\neq 0$ for all nonzero $X\in \mathfrak{g}$. Any map satisfying these conditions is called a $G$-moment map. Given a point $\zeta =(\zeta _{1},\zeta _{2},\zeta _{3})$ in ${\bf R}^{3}\otimes \mathfrak{g}$, denote the level set $\nu ^{-1}(\zeta )$ by $P$. Since the map $\nu $ is $G$-equivariant, level sets are invariant if the group $G$ is Abelian or if the point $\zeta $ is invariant. Assume that the level set $P$ is invariant and that the action of $G$ on $P$ is free; then the quotient space $N=P/G$ is a smooth manifold. Joyce proved that the quotient space $N=P/G$ inherits a natural hypercomplex structure \cite{Joyce1}. His construction runs as follows. For each point $m$ in the space $P$, its tangent space is \[ T_{m}P=\{t\in T_{m}M:d\nu _{1}(t)=d\nu _{2}(t)=d\nu _{3}(t)=0\}. \] Consider the vector subspace \[ U_{m}=\{t\in T_{m}P:I_{1}d\nu _{1}(t)=I_{2}d\nu _{2}(t)=I_{3}d\nu _{3}(t)=0\}. \] Due to the transversality condition, this space is transversal to the vectors generated by elements in $\mathfrak{g}.$ Due to the Cauchy-Riemann condition, this space is a vector subspace of $T_{m}P$ with co-dimension $ \dim \mathfrak{g}$, and hence it is a vector subspace of $T_{m}M$ with co-dimension $4\dim \mathfrak{g}$. The same condition implies that, as a subbundle of $TM_{|P}$, $U$ is closed under $I_a$. We call the distribution $U$ the hypercomplex distribution of the map $\nu $. Let $\pi :P\to N$ be the quotient map. For any tangent vector $v$ at $\pi (m)$, there exists a unique element $\widetilde{v}$ in $U_{m}$ such that $d\pi (\widetilde{v})=v$. The hypercomplex structure on $N$ is defined by \begin{equation} I_av=d\pi (I_a\widetilde{v}),\quad \mbox{ i.e. }\quad \widetilde{I_av} =I_a\widetilde{v}. \end{equation} \begin{theorem}\label{reduction} Let $(M, \mathcal{I}, g)$ be a HKT-manifold. Suppose that $G$ is a compact group of hypercomplex isometries.
Suppose that $\nu $ is a $G$-moment map such that, along the invariant level set $P=\nu ^{-1}(\zeta )$, the hypercomplex distribution $U$ is orthogonal to the Killing vector fields generated by the group $G$. Then the quotient space $N=P/G$ inherits a natural HKT-structure. \end{theorem} \noindent{\it Proof: } Under the conditions of this theorem, the hypercomplex distribution along the level set $P$ is identical to the orthogonal distribution \[ H_{m}=\{t\in T_{m}P:g(t,X)=0, X\in {\mathfrak{g}}\}. \] Now, we define a metric structure $h$ on $T_{\pi (m)}N$ as follows. For $ v,w\in T_{\pi (m)}N,$ \begin{equation} h_{\pi (m)}(v,w)=g_{m}(\widetilde{v},\widetilde{w}). \end{equation} It is obvious that this metric on $N$ is hyper-Hermitian. To find the HKT-connection $D$ on the quotient space $N$, let $v$ and $w$ be locally defined vector fields on the manifold $N$. They lift uniquely to $G$-invariant sections $\widetilde{v}$ and $\widetilde{w}$ of the bundle $U$. As $U$ is a subbundle of the tangent bundle of $P$, and $P$ is a submanifold of $M$, we consider $\widetilde{v}$ as a section of $TP$ and $\widetilde{ w}$ as a section of $TM_{|P}$. Restricting the HKT-connection $ \nabla $ to $P$, we consider $\nabla _{\widetilde{v}}\widetilde{w}$ as a section of $TM_{|P}$. Recall that there is a direct sum decomposition \begin{equation} TM_{|P}=U\oplus \mathfrak{g}\oplus I_{1}\mathfrak{g}\oplus I_{2}\mathfrak{g}\oplus I_{3}\mathfrak{g}. \end{equation} Let $\theta $ be the projection from $TM_{|P}$ onto its direct summand $U$. Since $\mathfrak{g}$ is orthogonal to the distribution $U$, and $U$ is hypercomplex invariant, $\theta $ is an orthogonal projection. Define \begin{equation} D_{v}w:=d\pi (\theta (\nabla _{\widetilde{v}}\widetilde{w})).\quad \mbox{i.e. \quad }\widetilde{D_{v}w}=\theta (\nabla _{\widetilde{v}} \widetilde{w}). \end{equation} Now we have to prove that it is a HKT-connection. We claim that the connection $D$ preserves the hypercomplex structure. This claim is equivalent to $D_{v}(I_aw)=I_aD_{v}w$. Lifting to $U$, it is equivalent to $\theta (\nabla _{\widetilde{v}}I_a\widetilde{w})=I_a\theta (\nabla _{ \widetilde{v}}\widetilde{w})$. Since the direct sum decomposition is invariant under the hypercomplex structure, the projection map $\theta $ is hypercomplex. Therefore, it commutes with the complex structures. Then the above identity is equivalent to $\theta (\nabla _{\widetilde{v}}I_a\widetilde{w})=\theta (I_a\nabla _{ \widetilde{v}}\widetilde{w})$. This identity holds because $\nabla $ is hypercomplex. To verify that the connection $D$ preserves the Riemannian metric $h$, let $u,v,$ and $w$ be vector fields on $N$. The identity $uh(v,w)-h(D_{u}v,w)-h(v,D_{u}w)=0$ is equivalent to the following identity on $P$: $ \widetilde{u}g(\widetilde{v},\widetilde{w})-g(\theta (\nabla _{\widetilde{u}} \widetilde{v}),\widetilde{w})-g(\widetilde{v},\theta (\nabla _{\widetilde{u}} \widetilde{w}))=0. $ Since $\theta $ is the orthogonal projection along $\mathfrak{g}$, the above identity is equivalent to $ \widetilde{u}g(\widetilde{v},\widetilde{w})-g(\nabla _{\widetilde{u}} \widetilde{v},\widetilde{w})-g(\widetilde{v},\nabla _{\widetilde{u}} \widetilde{w})=0. $ This identity on $P$ is satisfied because $\nabla $ is a HKT-connection. Finally, we have to verify that the torsion of the connection $D$ is totally skew-symmetric.
By definition and the fact that $\theta$ is an orthogonal projection, the torsion of $D$ is $T^{D}(u,v,w)= g(\nabla _{\widetilde{u}}\widetilde{v},\widetilde{w})-g(\nabla _{ \widetilde{v}}\widetilde{u},\widetilde{w})-g(\widetilde{[u,v]},\widetilde{w})$. Note that $[\widetilde{u},\widetilde{v}]$ is a vector tangent to $P$ such that $d\pi \circ \theta ([\widetilde{u},\widetilde{v}])=[d\pi (\widetilde{u} ),d\pi (\widetilde{v})]=[u,v].$ Therefore, $[\widetilde{u},\widetilde{v}]$ and $\widetilde{[u,v]}$ differ by a vector in $\mathfrak{g}$. Since the Killing vector fields are orthogonal to the hypercomplex distribution, $g(\widetilde{[u,v]},\widetilde{w})=g([\widetilde{u},\widetilde{v}], \widetilde{w})$. Then we have $T^{D}(u,v,w)=T^{\nabla }(\widetilde{u},\widetilde{v},\widetilde{w})$. This is totally skew-symmetric because the connection $\nabla $ is the Bismut connection on $M$. \ q.~e.~d. \vspace{0.2in} Suppose that the group $G$ is one-dimensional. Let $X$ be the Killing vector field generated by $G$. The hypercomplex distribution $U$ and the horizontal distribution $H$ are identical if and only if the 1-forms $I_{1}d\nu _{1}=I_{2}d\nu _{2}=I_{3}d\nu _{3}$ are pointwise proportional to the 1-form $\iota _{X}g$ along the level set $P$, i.e.\ for any tangent vector $Y$ to $P$, $I_ad\nu_a(Y) =fg(X,Y)$. Equivalently, $d\nu_a =f\iota _{X}F_a$. In the next example, we shall make use of this observation. \subsection{Example: HKT-Structure on $\mathcal{V}\left( {\bf C}{\bf P}^2\right) =S^{1}\times (SU(3)/U(1))$} We construct a HKT-structure on $\mathcal{V}\left( {\bf C}{\bf P}^2\right)$ by a $U(1)$-reduction from a HKT-structure on ${\bf H}^{3}\backslash\{0\}.$ Choose a hypercomplex structure on ${\bf R}^{12}\cong {\bf C}^{3}\oplus {\bf C}^{3}$ by \begin{equation} I_{1}(\chi ,\varrho )=(i\chi ,-i\varrho ),\quad I_{2}(\chi ,\varrho )=(i\varrho ,i\chi ),\quad I_{3}(\chi ,\varrho )=(-\varrho ,\chi ). \end{equation} It is apparent that the holomorphic coordinates with these complex structures are $(\chi ,\overline{\varrho })$, $(\chi +\varrho ,\overline{ \chi }-\overline{\varrho }),$ and $(\varrho -i\chi ,\overline{\varrho }-i \overline{\chi })$ respectively. As in Example \ref{hopf}, the hyper-K\"ahler potential for the Euclidean metric $g$ on $({\bf C}^{3}\oplus {\bf C}^{3})\backslash\{0\}$ is $\mu=\frac12(|\chi|^2+|\varrho|^2)$. We apply Theorem \ref{modification} to $f(\mu)=\ln\mu$ to obtain a new HKT-metric \begin{equation} {\hat g}=\frac{1}{\mu}g-\frac{1}{4\mu^2}(d\mu\otimes d\mu+I_1d\mu\otimes I_1d\mu+ I_2d\mu\otimes I_2d\mu + I_3d\mu\otimes I_3d\mu). \end{equation} Define a hypercomplex moment map $\nu=(\nu_{1},\nu _{2},\nu _{3})$ by \begin{equation} \nu _{1}(\chi ,\varrho )=|\chi |^{2}-|\varrho |^{2},\quad (\nu _{2}+i\nu _{3})(\chi ,\varrho )=2\left\langle \chi ,\varrho \right\rangle , \end{equation} where $\left\langle ,\right\rangle $ is a Hermitian inner product on ${\bf C} ^{3}$. Let $\Gamma \cong \U(1)$ be the one-parameter group acting on $({\bf C} ^{3}\oplus {\bf C}^{3})\backslash\{0\}$ defined by \begin{equation} (t;(\chi ,\varrho ))\mapsto (e^{it}\chi ,e^{it}\varrho ). \end{equation} Let $\left\langle r\right\rangle $ be the integer group generated by a real number $r$ between $0$ and $1$. It acts on $({\bf C}^{3}\oplus {\bf C}^{3})\backslash\{0\}$ by \begin{equation} (n;(\chi ,\varrho ))\mapsto (r^{n}\chi ,r^{n}\varrho ). \end{equation} Both $\Gamma $ and $\left\langle r\right\rangle $ are groups of hypercomplex automorphisms leaving the zero level set of $\nu $ invariant.
Then the quotient space $\nu ^{-1}(0)/\Gamma $ is a hypercomplex reduction. The discrete quotient space $\mathcal{V}=\nu ^{-1}(0)/(\Gamma\times\left\langle r\right\rangle)$ is a compact hypercomplex manifold. From the homogeneity of the metric $\hat g$, we see that both $\Gamma$ and the discrete group $\left\langle r\right\rangle$ are groups of isometries for the metric $\hat g$. Therefore, the quotient space $\mathcal{V}$ inherits a hyper-Hermitian metric. On $({\bf C}^{3}\oplus {\bf C}^{3})\backslash\{0\}$, the real vector field generated by the group $\Gamma $ is \[ X=i\chi \frac{\partial }{\partial \chi }-i\overline{\chi }\frac{\partial }{ \partial \overline{\chi }}-i\overline{\varrho }\frac{\partial }{\partial \overline{\varrho }}+i\varrho \frac{\partial }{\partial \varrho }. \] Let ${\hat F}_a$ be the K\"ahler forms for the HKT-metric $\hat g$. We check that $d\nu_a=-2\mu \iota_X{\hat F}_a$. Therefore, Theorem \ref{reduction} implies that the quotient space $\cal V$ inherits a HKT-structure. Note that if $(\chi ,\varrho )$ is a point in the zero level set, then it represents a pair of orthogonal vectors. Therefore, the triple $(\frac{\chi }{|\chi |}, \frac{\varrho }{|\varrho |}, \frac{\overline{\chi }}{|\chi |}\times \frac{\overline{\varrho }}{|\varrho |})$ forms an element in the matrix group $\SU(3).$ The action of $\Gamma $ induces an action on $\SU(3)$ by right multiplication by $\diag(e^{it},e^{it},e^{-2it})$. Denote the $\Gamma $-coset of $(\frac{\chi }{|\chi |}, \frac{\varrho }{|\varrho |}, \frac{\overline{\chi }}{|\chi |}\times \frac{\overline{\varrho }}{|\varrho |})$ by $[\frac{\chi }{|\chi |}, \frac{\varrho }{|\varrho |}, \frac{\overline{\chi }}{|\chi |}\times \frac{\overline{\varrho }}{|\varrho |}]$. The quotient space $\mathcal{V}$ is isomorphic to the product space $S^{1}\times \SU(3)/\U(1)$. The quotient map is \[ (\chi ,\varrho )\mapsto \left( \exp \left( 2\pi i\frac{\ln |\chi |}{\ln r} \right) ,\quad \lbrack \frac{\chi }{|\chi |},\frac{\varrho }{|\varrho |}, \frac{\overline{\chi }}{|\chi |}\times \frac{\overline{\varrho }}{|\varrho |}] \right) . \] \noindent{\bf Remark:} A fundamental question on HKT-structures remains open. Does every hypercomplex manifold admit a metric such that it is a HKT-structure? \vspace{.1in} \noindent{\bf Acknowledgment} We thank G.W. Gibbons for introducing the topic in this paper to us. The second author thanks J.-P. Bourguignon for providing an excellent research environment at the I.H.E.S.
1,314,259,992,710
arxiv
\subsection*{Abstract} In this note, a new concept called {\em $SDR$-matrix} is proposed, which is an infinite lower triangular matrix obeying a generalized Star of David rule. Some basic properties of $SDR$-matrices are discussed and two conjectures on $SDR$-matrices are presented, one of which states that if a matrix is an $SDR$-matrix, then so is its matrix inverse (if it exists). \medskip {\bf Keywords}: Narayana triangle, Pascal triangle, Lah triangle, $SDR$-matrix. \noindent {\sc 2000 Mathematics Subject Classification}: Primary 05A10; Secondary 15A09 \section{Introduction} The \emph{Star of David rule}~\cite{web}, originally stated by Gould in 1972, is given by \begin{equation*} \binom{n}{k}\binom{n+1}{k-1}\binom{n+2}{k+1}=\binom{n}{k-1}\binom{n+1}{k+1} \binom{n+2}{k}, \end{equation*} for any $k$ and $n$, which implies that \begin{equation*} \binom{n}{k+1}\binom{n+1}{k}\binom{n+2}{k+2}=\binom{n}{k}\binom{n+1}{k+2} \binom{n+2}{k+1}. \end{equation*} In 2003, the author observed in his Master's dissertation \cite{Sun} that, multiplying the above two identities together and dividing by $n(n+1)(n+2)$, one arrives at \begin{equation*} N_{n,k+1}N_{n+1,k}N_{n+2,k+2}=N_{n,k}N_{n+1,k+2}N_{n+2,k+1}, \end{equation*} where $N_{n,k}=\frac{1}{n}\binom{n}{k}\binom{n}{k-1}$ is the Narayana number \cite[A001263]{sloane}. In the summer of 2006, the author asked Mansour \cite{Mansour} for a combinatorial proof of the above Narayana identity. Later, using Chen's bijective algorithm for trees \cite{Chen}, Li and Mansour \cite{LiMan} provided a combinatorial proof of the more general identity \begin{eqnarray*} &&N_{n,k+m-1}N_{n+1,k+m-2}N_{n+2,k+m-3}\cdots N_{n+m-2,k+1}N_{n+m-1,k}N_{n+m,k+m}\qquad \\ &&\qquad \qquad \qquad =N_{n,k}N_{n+1,k+m}N_{n+2,k+m-1}\cdots N_{n+m-2,k+3}N_{n+m-1,k+2}N_{n+m,k+1}. \end{eqnarray*} This motivates the author to reconsider the Star of David rule and to propose a new concept, the {\em $SDR$-matrix}, which obeys a generalized Star of David rule. \begin{definition}\label{defi 1.1} Let $\mathscr{A}=\Big(A_{n,k}\Big)_{n\geq k\geq 0}$ be an infinite lower triangular matrix and let $m\geq 3$ be a given integer. If \begin{eqnarray*} \prod_{i=0}^rA_{n+i, k+r-i}\prod_{i=0}^{p-r-1}A_{n+p-i, k+r+i+1}= \prod_{i=0}^{r}A_{n+p-i, k+p-r+i}\prod_{i=0}^{p-r-1}A_{n+i, k+p-r-i-1} \end{eqnarray*} holds for all $2\leq p\leq m-1$ and $0\leq r\leq p-1$, then $\mathscr{A}$ is called an {\em $SDR$-matrix of order $m$}.
\end{definition} \begin{figure}[h] \setlength{\unitlength}{0.4mm} \begin{center} \begin{pspicture}(13,5.3) \psset{xunit=25pt,yunit=25pt}\psgrid[subgriddiv=1,griddots=5, gridlabels=4pt](0,0)(15,6) \psline(1,4)(2,2)(3,3)(1,4)\psline(1,3)(3,2)(2,4)(1,3) \pscircle*(1,4){0.06}\pscircle*(1,3){0.06}\pscircle*(1,2){0.06} \pscircle*(2,4){0.06}\pscircle*(2,3){0.06}\pscircle*(2,2){0.06} \pscircle*(3,4){0.06}\pscircle*(3,3){0.06}\pscircle*(3,2){0.06} \psline[linewidth=1.2pt](5,4.5)(6,1.5)(8,3.5)(5,4.5)\psline[linewidth=1.2pt](5,2.5)(8,1.5)(7,4.5)(5,2.5) \psline[linewidth=.5pt](5,3.5)(6,4.5)(7,1.5)(8,2.5)(5,3.5) \pscircle*(5,1.5){0.06}\pscircle*(5,4.5){0.06}\pscircle*(5,3.5){0.06}\pscircle*(5,2.5){0.06} \pscircle*(6,1.5){0.06}\pscircle*(6,4.5){0.06}\pscircle*(6,3.5){0.06}\pscircle*(6,2.5){0.06} \pscircle*(7,1.5){0.06}\pscircle*(7,4.5){0.06}\pscircle*(7,3.5){0.06}\pscircle*(7,2.5){0.06} \pscircle*(8,1.5){0.06}\pscircle*(8,4.5){0.06}\pscircle*(8,3.5){0.06}\pscircle*(8,2.5){0.06} \psline[linewidth=1.2pt](10,5)(11,1)(14,4)(10,5)\psline[linewidth=1.2pt](14,1)(10,2)(13,5)(14,1) \psline[linewidth=.5pt](10,4)(11,5)(14,3)(12,1)(10,4)\psline[linewidth=.5pt](10,3)(12,5)(14,2)(13,1)(10,3) \pscircle*(10,5){0.06}\pscircle*(10,5){0.06}\pscircle*(10,4){0.06}\pscircle*(10,3){0.06}\pscircle*(10,2){0.06}\pscircle*(10,1){0.06} \pscircle*(11,5){0.06}\pscircle*(11,5){0.06}\pscircle*(11,4){0.06}\pscircle*(11,3){0.06}\pscircle*(11,2){0.06}\pscircle*(11,1){0.06} \pscircle*(12,5){0.06}\pscircle*(12,5){0.06}\pscircle*(12,4){0.06}\pscircle*(12,3){0.06}\pscircle*(12,2){0.06}\pscircle*(12,1){0.06} \pscircle*(13,5){0.06}\pscircle*(13,5){0.06}\pscircle*(13,4){0.06}\pscircle*(13,3){0.06}\pscircle*(13,2){0.06}\pscircle*(13,1){0.06} \pscircle*(14,5){0.06}\pscircle*(14,5){0.06}\pscircle*(14,4){0.06}\pscircle*(14,3){0.06}\pscircle*(14,2){0.06}\pscircle*(14,1){0.06} \put(1.3,.3){$p=2$}\put(5.25,.3){$p=3$}\put(10.1,.3){$p=4$} \end{pspicture} \caption{The case $m=5$. }\label{fDD1} \end{center} \end{figure} In order to give a more intuitive view on the definition, we present a pictorial description of the generalized rule for the case $m=5$. See Figure \ref{fDD1}. Let $SDR_{m}$ denote the set of $SDR$-matrices of order $m$ and $SDR_{\infty}$ be the set of $SDR$-matrices $\mathscr{A}$ of order $\infty$, that is $\mathscr{A}\in SDR_{m}$ for any $m\geq 3$. By our notation, it is obvious that the Pascal triangle $\mathscr{P}=\Big(\binom{n}{k}\Big)_{n\geq k\geq 0}$ and the Narayana triangle $\mathscr{N}=\Big(N_{n+1,k+1}\Big)_{n\geq k\geq 0}$ are $SDR$-matrices of order $3$. In fact, both of them will be proved to be $SDR$-matrices of order $\infty$. \begin{eqnarray*} \begin{array}{cc} \mathscr{P}=\left( \begin{array}{rrrrrr} 1& & & & & \\ 1& 1& & & & \\ 1& 2& 1& & & \\ 1& 3& 3& 1& & \\ 1& 4& 6& 4& 1& \\ 1& 5& 10& 10& 5& 1 \\ & &\cdots & & & \\ \end{array}\right), & \hskip0.2cm \mathscr{N}=\left( \begin{array}{rrrrrr} 1& & & & & \\ 1& 1& & & & \\ 1& 3& 1& & & \\ 1& 6& 6& 1& & \\ 1& 10& 20& 10& 1& \\ 1& 15& 50& 50& 15& 1 \\ & &\cdots & & & \\ \end{array}\right). \end{array} \end{eqnarray*} In this paper, we will discuss some basic properties of the sets $SDR_m$ and propose two conjectures on $SDR_m$ for $3\leq m\leq \infty$ in the next section. We also give some comments on relations between $SDR$-matrices and Riordan arrays in Section $3$. 
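Before turning to the general properties, we note that the conditions of Definition \ref{defi 1.1} are easy to test by computer. The following short Python script, included only as an illustration, checks them for the Pascal and Narayana triangles for all orders up to $m=6$ on an initial segment of the triangles; the truncation range and the convention that entries above the main diagonal are treated as $0$ (the same convention appears in the proof of Theorem \ref{theo 2} below) are ours.

\begin{verbatim}
from fractions import Fraction
from math import comb
from itertools import product

def pascal(n, k):
    return Fraction(comb(n, k))

def narayana(n, k):
    # (n,k)-entry of the Narayana triangle: N_{n+1,k+1}
    return Fraction(comb(n + 1, k + 1) * comb(n + 1, k), n + 1)

def entry(A, n, k):
    # positions outside the lower triangle are treated as 0
    return A(n, k) if 0 <= k <= n else Fraction(0)

def is_sdr(A, m, N=12):
    """Test the conditions of Definition 1.1 up to order m for n, k <= N."""
    for n, k in product(range(N + 1), repeat=2):
        if k > n:
            continue
        for p in range(2, m):
            for r in range(p):
                lhs = rhs = Fraction(1)
                for i in range(r + 1):
                    lhs *= entry(A, n + i, k + r - i)
                    rhs *= entry(A, n + p - i, k + p - r + i)
                for i in range(p - r):
                    lhs *= entry(A, n + p - i, k + r + i + 1)
                    rhs *= entry(A, n + i, k + p - r - i - 1)
                if lhs != rhs:
                    return False
    return True

for name, A in (("Pascal", pascal), ("Narayana", narayana)):
    print(name, "triangle is an SDR-matrix of order 6 on the tested range:",
          is_sdr(A, 6))
\end{verbatim}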
\section{The basic properties of $SDR$-matrices} For any infinite lower triangular matrices $\mathscr{A}=\Big(A_{n,k}\Big)_{n\geq k\geq 0}$ and $\mathscr{B}=\Big(B_{n,k}\Big)_{n\geq k\geq 0}$, define $\mathscr{A}\circ\mathscr{B}=\Big(A_{n,k}B_{n,k}\Big)_{n\geq k\geq 0}$ to be the Hadamard product of $\mathscr{A}$ and $\mathscr{B}$, and denote by $\mathscr{A}^{\circ j}$ the $j$-th Hadamard power of $\mathscr{A}$; if $A_{n,k}\neq 0$ for $n\geq k\geq 0$, then define $\mathscr{A}^{\circ(-1)}=\Big(A_{n,k}^{-1}\Big)_{n\geq k\geq 0}$ to be the Hadamard inverse of $\mathscr{A}$. From Definition \ref{defi 1.1}, one can easily derive the following three lemmas. \begin{lemma}\label{lemma 2.1} For any $\mathscr{A}\in SDR_{m}$ and $\mathscr{B}\in SDR_{m+i}$ with $i\geq 0$, we have $\mathscr{A}\circ\mathscr{B}\in SDR_{m}$, and $\mathscr{A}^{\circ(-1)}\in SDR_{m}$ if it exists. \end{lemma} \begin{lemma}\label{lemma 2.2} For any $\mathscr{A}=\Big(A_{n,k}\Big)_{n\geq k\geq 0}\in SDR_{m}$, we have $\Big(A_{n+i,k+j}\Big)_{n\geq k\geq 0}\in SDR_{m}$ for fixed $i, j\geq 0$. \end{lemma} \begin{lemma}\label{lemma 2.3} Given any sequence $(a_n)_{n\geq 0}$, let $A_{n,k}=a_n$, $B_{n,k}=a_k$ and $C_{n,k}=a_{n-k}$ for $n\geq k\geq 0$. Then $\Big(A_{n,k}\Big)_{n\geq k\geq 0}, \Big(B_{n,k}\Big)_{n\geq k\geq 0}, \Big(C_{n,k}\Big)_{n\geq k\geq 0}\in SDR_{\infty}$. \end{lemma} \begin{example}{\rm Let $a_n=n!$ for $n\geq 0$; then we have \begin{eqnarray*} \mathscr{P} &=& \Big(n!\Big)_{n\geq k\geq 0}\circ \Big(k!\Big)_{n\geq k\geq 0}^{\circ(-1)}\circ \Big((n-k)!\Big)_{n\geq k\geq 0}^{\circ(-1)}, \\ \mathscr{N} &=& \Big(\frac{1}{k+1}\Big)_{n\geq k\geq 0}\circ \mathscr{P}\circ \Big(\binom{n+1}{k}\Big)_{n\geq k\geq 0}, \\ \mathscr{L} &=& \Big((n+1)!\Big)_{n\geq k\geq 0}\circ\mathscr{P}\circ \Big((k+1)!\Big)_{n\geq k\geq 0}^{\circ(-1)}, \end{eqnarray*} which, by Lemmas \ref{lemma 2.1}-\ref{lemma 2.3}, imply that the Pascal triangle $\mathscr{P}$, the Narayana triangle $\mathscr{N}$ and the Lah triangle $\mathscr{L}$ belong to $SDR_{\infty}$, where $(\mathscr{L})_{n,k}=\binom{n}{k}\frac{(n+1)!}{(k+1)!}$ is the Lah number \cite{comtet}. } \end{example} \begin{theorem}\label{theo 0} For any sequences $(a_n)_{n\geq 0}$, $(b_n)_{n\geq 0}$ and $(c_n)_{n\geq 0}$ such that $b_0=1$, $a_n\neq 0$ and $c_n\neq 0$ for $n\geq 0$, let $\mathscr{A}=\Big(a_kb_{n-k}c_n\Big)_{n\geq k\geq 0}$, then $\mathscr{A}^{-1}\in SDR_{\infty}$. \end{theorem} \noindent {\it Proof.} By Lemmas \ref{lemma 2.1} and \ref{lemma 2.3}, we have $\mathscr{A}\in SDR_{\infty}$. It is not difficult to derive the matrix inverse $\mathscr{A}^{-1}$ of $\mathscr{A}$ with the generic entries \begin{eqnarray*} \Big(\mathscr{A}^{-1}\Big)_{n,k}&=&a_n^{-1}B_{n-k}c_{k}^{-1}, \end{eqnarray*} where $B_n$ with $B_0=1$ are given by \begin{eqnarray}\label{eqn 2.0} B_n&=&\displaystyle\sum_{j=1}^n(-1)^j\sum_{i_1+i_2+\cdots+i_j=n, i_1, \dots, i_j\geq 1}b_{i_1}b_{i_2}\cdots b_{i_j}, \ (n\geq 1). \end{eqnarray} Hence, by Lemmas \ref{lemma 2.1} and \ref{lemma 2.3}, one can deduce that \begin{eqnarray*} \mathscr{A}^{-1} &=& \Big(a_n^{-1}\Big)_{n\geq k\geq 0}\circ\Big(B_{n-k}\Big)_{n\geq k\geq 0}\circ \Big(c_k^{-1}\Big)_{n\geq k\geq 0}\in SDR_{\infty}, \end{eqnarray*} as desired. \hfill $\Box$\vskip0.2cm In particular, when $c_n:=1$, or when $a_n:=\frac{a_n}{n!}$, $b_n:=\frac{b_n}{n!}$ and $c_n:=n!$, both $\mathscr{B}=\Big(a_kb_{n-k}\Big)_{n\geq k\geq 0}$ and $\mathscr{C}=\Big(\binom{n}{k}a_kb_{n-k}\Big)_{n\geq k\geq 0}$ are in $SDR_{\infty}$, and so are $\mathscr{B}^{-1}$ and $\mathscr{C}^{-1}$.
More precisely, let $a_n^{-1}=b_n^{-1}=c_n=n!(n+1)!$ for $n\geq 0$ and note that the Narayana triangle $\mathscr{N}\in SDR_{\infty}$ satisfies \begin{eqnarray*} N_{n+1,k+1}=\frac{1}{n+1}\binom{n+1}{k+1}\binom{n+1}{k}=\frac{n!(n+1)!}{k!(k+1)!(n-k)!(n-k+1)!}. \end{eqnarray*} Then one has $\mathscr{N}^{-1}\in SDR_{\infty}$ by Theorem \ref{theo 0}. Theorem \ref{theo 0} suggests the following conjecture. \begin{conjecture} For any $\mathscr{A}\in SDR_{m}$, if the inverse $\mathscr{A}^{-1}$ of $\mathscr{A}$ exists, then $\mathscr{A}^{-1}\in SDR_{m}$. \end{conjecture} \begin{theorem}\label{theo 1} For any sequences $(a_n)_{n\geq 0}$, $(b_n)_{n\geq 0}$ with $b_0=1$ and $a_n\neq 0$ for $n\geq 0$, let $\mathscr{A}=\Big(a_nb_{n-k}a_k^{-1}\Big)_{n\geq k\geq 0}$, then the matrix power $\mathscr{A}^{j}\in SDR_{\infty}$ for any integer $j$. \end{theorem} \noindent {\it Proof.} By Lemmas \ref{lemma 2.1} and \ref{lemma 2.3}, we have $\mathscr{A}\in SDR_{\infty}$. Note that it is trivially true for $j=1$ and $j=0$ (where $\mathscr{A}^0$ is the identity matrix by convention). It is easy to obtain the $(n,k)$-entries of $\mathscr{A}^{j}$ for $j\geq 2$, \begin{eqnarray*} \Big(\mathscr{A}^{j}\Big)_{n,k}&=&\sum_{k\leq k_{j-1}\leq \cdots\leq k_1\leq n}\mathscr{A}_{n,k_1}\mathscr{A}_{k_1,k_2}\cdots\mathscr{A}_{k_{j-2},k_{j-1}}\mathscr{A}_{k_{j-1},k} \\ &=&a_{n}C_{n-k}a_{k}^{-1}, \end{eqnarray*} where $C_n$ with $C_0=1$ is given by $C_n=\sum_{i_1+i_2+\cdots+i_j=n, i_1,\dots,i_j\geq 0}b_{i_1} b_{i_2}\cdots b_{i_{j}}$ for $n\geq 1$. By Lemmas \ref{lemma 2.1} and \ref{lemma 2.3}, one can deduce that \begin{eqnarray*} \mathscr{A}^{j} &=& \Big(a_n\Big)_{n\geq k\geq 0}\circ\Big(C_{n-k}\Big)_{n\geq k\geq 0}\circ \Big(a_k^{-1}\Big)_{n\geq k\geq 0}\in SDR_{\infty}. \end{eqnarray*} By Theorem \ref{theo 0} and its proof, we have $\mathscr{A}^{-1}\in SDR_{\infty}$ and $\big(\mathscr{A}^{-1}\big)_{n,k}=a_nB_{n-k}a_{k}^{-1}$, where $B_n$ is given by (\ref{eqn 2.0}). Note that $\mathscr{A}^{-1}$ has the form required in Theorem \ref{theo 1}, so by the former part of this proof, we have $\mathscr{A}^{-j}\in SDR_{\infty}$ for $j\geq 1$. Hence we are done. \hfill $\Box$ \vskip0.2cm Taking $a_n=b_n^{-1}=n!$, $a_n=b_n^{-1}=n!(n+1)!$, or $a_n=n!(n+1)!$ and $b_n^{-1}=n!$ for $n\geq 0$ in Theorem \ref{theo 1}, one has \begin{corollary} For $\mathscr{P}$, $\mathscr{N}$ and $\mathscr{L}$, we have $\mathscr{P}^j, \mathscr{N}^j, \mathscr{L}^j\in SDR_{\infty}$ for any integer $j$. \end{corollary} \begin{remark} {\rm In general, for $\mathscr{A}, \mathscr{B}\in SDR_{m}$, their matrix product $\mathscr{A}\mathscr{B}$ need not be in $SDR_{m}$. For example, $\mathscr{P},\mathscr{N}\in SDR_{3}$, but \begin{eqnarray*} \mathscr{P}\mathscr{N}&=&\left( \begin{array}{rrrrrr} 1& & & & & \\ 2& 1& & & & \\ 4& 5& 1& & & \\ 8& 18& 9& 1& & \\ 16& 56& 50& 14& 1& \\ 32& 160& 220& 110& 20& 1 \\ & &\cdots & & & \\ \end{array}\right) \notin SDR_{3}. \end{eqnarray*} } \end{remark} \begin{theorem}\label{theo 2} For any $\mathscr{A}=\Big(A_{n,k}\Big)_{n\geq k\geq 0}$ with $A_{n,k}\neq 0$ for $n\geq k\geq 0$, we have $\mathscr{A}\in SDR_{m+1}$ if and only if $\mathscr{A}\in SDR_{m}$. \end{theorem} \noindent {\it Proof.} Note that $SDR_{m+1}\subset SDR_{m}$, so the necessity is clear. It remains to prove the sufficiency. By symmetry, it suffices to verify \begin{eqnarray*} \prod_{i=0}^rA_{n+i, k+r-i}\prod_{i=0}^{m-r}A_{n+m-i+1, k+r+i+1}= \prod_{i=0}^{r}A_{n+m-i+1, k+m-r+i+1}\prod_{i=0}^{m-r}A_{n+i, k+m-r-i}, \end{eqnarray*} for $0\leq r\leq [m/2]-1$.
We just take the case $r=0$ for example, others can be done similarly. It is trivial when $A_{n, k+m}=A_{n+1, k+m+1}=0$. So we assume that $A_{n, k+m}\neq 0, A_{n+1, k+m+1}\neq 0$, then all $A_{n+i, k+j}$ to be considered, except for $A_{n, k+m+1}$, must not be zero. By Definition \ref{defi 1.1}, we have \begin{eqnarray}\label{eqn 2.1} \lefteqn{A_{n+m-i, k+i}A_{n+m-i-1, k+i+1}A_{n+m-i+1, k+i+2} } \nonumber\\ &=&A_{n+m-i+1, k+i+1}A_{n+m-i, k+i+2}A_{n+m-i-1, k+i}, \hskip0.2cm (0\leq i\leq m-1). \end{eqnarray} \begin{eqnarray}\label{eqn 2.2} A_{n+m+1, k+m+1}\prod_{i=0}^{m-1}A_{n+i, k+m-i}=A_{n+1, k+1}\prod_{i=0}^{m-1}A_{n+m-i+1, k+i+2}. \end{eqnarray} \begin{eqnarray}\label{eqn 2.3} A_{n+1, k+1}\prod_{i=0}^{m-1}A_{n+m-i, k+i+1}=A_{n+m, k+m}\prod_{i=0}^{m-1}A_{n+i+1, k+m-i-1}. \end{eqnarray} \begin{eqnarray}\label{eqn 2.4} A_{n+m, k+m}\prod_{i=0}^{m-1}A_{n+i, k+m-i-1}=A_{n, k}\prod_{i=0}^{m-1}A_{n+m-i, k+i+1}. \end{eqnarray} Multiplying (\ref{eqn 2.1})$-$(\ref{eqn 2.4}) together, after cancellation, one can get \begin{eqnarray*} A_{n, k}\prod_{i=0}^{m}A_{n+m-i+1, k+i+1}=A_{n+m+1, k+m+1}\prod_{i=0}^{m}A_{n+i, k+m-i}, \end{eqnarray*} which confirms the case $r=0$. \hfill $\Box$ \vskip0.2cm \begin{remark} {\rm The condition $A_{n,k}\neq 0$ for $n\geq k\geq 0$ in Theorem \ref{theo 2} is necessary. The following example verifies this claim. \begin{eqnarray*} \Big(\binom{\frac{n+k}{2}}{\frac{n-k}{2}}\Big)_{n\geq k\geq 0}=\left( \begin{array}{cccccc} 1& & & & & \\ 0& 1& & & & \\ 1& 0& 1& & & \\ 0& 2& 0& 1& & \\ 1& 0& 3& 0& 1& \\ 0& 3& 0& 4& 0& 1 \\ & &\cdots & & & \\ \end{array}\right) \in SDR_{3}, \mbox{but\ not\ in}\ SDR_{4}. \end{eqnarray*} } \end{remark} Recall that the Narayana number $\mathscr{N}_{n+1,k+1}$ can be represented as \begin{eqnarray*} \mathscr{N}_{n+1,k+1}=\frac{1}{n+1}\binom{n+1}{k+1}\binom{n+1}{k}= \det\left( \begin{array}{cc} \binom{n}{k} & \binom{n}{k+1} \\[5pt] \binom{n+1}{k} & \binom{n+1}{k+1} \end{array} \right), \end{eqnarray*} so we can come up with the following definition. \begin{definition}\label{defi 2.1} Let $\mathscr{A}=\Big(A_{n,k}\Big)_{n\geq k\geq 0}$ be an infinite lower triangular matrix, for any integer $j\geq 1$, define $\mathscr{A}_{[j]}=\Big(A_{n,k}^{[j]}\Big)_{n\geq k\geq 0}$, where \begin{eqnarray*} A_{n,k}^{[j]}=\det\left( \begin{array}{ccc} A_{n,k} & \cdots & A_{n,k+j-1} \\[5pt] \vdots & \cdots & \vdots \\[5pt] A_{n+j-1,k} & \cdots & A_{n+j-1,k+j-1} \end{array} \right). \end{eqnarray*} \end{definition} \begin{theorem}\label{theo 3} For any sequences $(a_n)_{n\geq 0}$, $(b_n)_{n\geq 0}$ and $(c_n)_{n\geq 0}$ such that $b_0=1$, $a_n\neq 0$ and $c_n\neq 0$ for $n\geq 0$, let $\mathscr{A}=\Big(a_kb_{n-k}c_n\Big)_{n\geq k\geq 0}$, then $\mathscr{A}_{[j]}\in SDR_{\infty}$ for any integer $j\geq 1$. \end{theorem} \noindent {\it Proof.} By Lemmas \ref{lemma 2.1} and \ref{lemma 2.3}, we have $\mathscr{A}\in SDR_{\infty}$. It is easy to derive the determinant \begin{eqnarray*} \det\left( \begin{array}{ccc} a_kb_{n-k}c_n & \cdots & a_{k+j-1}b_{n-k-j+1}c_n \\[5pt] \vdots & \cdots & \vdots \\[5pt] a_kb_{n-k+j-1}c_{n+j-1} & \cdots & a_{k+j-1}b_{n-k}c_{n+j-1} \end{array}\right) =B_{n-k}\prod_{i=0}^{j-1}a_{k+i}c_{n+i}, \end{eqnarray*} where $B_n$ with $B_0=1$ are given by \begin{eqnarray*} B_n=\det\left( \begin{array}{ccc} b_{n} & \cdots & b_{n-j+1} \\[5pt] \vdots & \cdots & \vdots \\[5pt] b_{n+j-1} & \cdots & b_{n} \end{array}\right). 
\end{eqnarray*} Hence, by Lemmas \ref{lemma 2.1} and \ref{lemma 2.3}, one can deduce that \begin{eqnarray*} \mathscr{A}_{[j]} &=& \Big(\prod_{i=0}^{j-1}a_{k+i}\Big)_{n\geq k\geq 0}\circ\Big(B_{n-k}\Big)_{n\geq k\geq 0}\circ \Big(\prod_{i=0}^{j-1}c_{n+i}\Big)_{n\geq k\geq 0}\in SDR_{\infty}, \end{eqnarray*} as desired. \hfill $\Box$\vskip0.2cm Let $a_n^{-1}=b_n^{-1}=c_n=n!$, $a_n^{-1}=b_n^{-1}=c_n=n!(n+1)!$ or $a_n^{-1}=c_n=n!(n+1)!$ and $b_n^{-1}=n!$ for $n\geq 0$ in Theorem \ref{theo 3}, one has \begin{corollary} For $\mathscr{P}$, $\mathscr{N}$ and $\mathscr{L}$, then $\mathscr{P}_{[j]}, \mathscr{N}_{[j]}, \mathscr{L}_{[j]}\in SDR_{\infty}$ for any integer $j\geq 1$. \end{corollary} Theorem \ref{theo 3} suggests the following conjecture. \begin{conjecture} If $\mathscr{A}\in SDR_{\infty}$, then $\mathscr{A}_{[j]}\in SDR_{\infty}$ for any integer $j\geq 1$. \end{conjecture} \begin{remark} {\rm The conjecture on $SDR_{m}$ is generally not true for $3\leq m< \infty$. For example, let $\mathscr{A}=\Big(A_{n,k}\Big)_{n\geq k\geq 0}$ with $A_{n,k}=\binom{\frac{n+k}{2}}{\frac{n-k}{2}}$, then we have $\mathscr{A}\in SDR_{3}$, but \begin{eqnarray*} \begin{array}{rr} \mathscr{A}_{[2]}=\left( \begin{array}{rrrrr} 1& & & & \\ -1& 1& & & \\ \bf 2& -\textrm{2}& 1& & \\ -\textrm{2}& 6& \bf-3& 1& \\ 3& \bf-9& \textrm{12}& -4& 1 \\ & & \cdots & & \end{array}\right) \notin SDR_{3}, & \mathscr{A}_{[3]}=\left( \begin{array}{rrrrr} 1& & & & \\ 0& 1& & & \\ 2& 0& 1& & \\ 0& 15& 0& 1& \\ 9& 0& 36& 0& 1 \\ & & \cdots & & \end{array}\right) \in SDR_{3}. \end{array} \end{eqnarray*} } \end{remark} \vskip0.5cm \section{Further Comments} We will present some further comments on the connections between $SDR$-matrices and Riordan arrays. The concept of Riordan array introduced by Shapiro et al \cite{SGWW}, plays a particularly important role in studying combinatorial identities or sums and also is a powerful tool in study of many counting problems \cite{MRSV,MSV,MV}. For examples, Sprugnoli \cite{MSV,Sp1,Sp2} investigated Riordan arrays related to binomial coefficients, colored walks, Stirling numbers and Abel-Gould identities. To define a Riordan array we need two analytic functions, $d(t)=d_0+d_1t+d_2t^2+\cdots$ and $h(t)=h_1t+h_2t^2+\cdots$. A {\em Riordan array} is an infinite lower triangular array $\{d_{n,k}\}_{n,k\in \mathbb{N}}$, defined by a pair of formal power series $(d(t),h(t))$, with the generic element $d_{n,k}$ satisfying \begin{eqnarray*} d_{n,k}&=&[t^n]d(t)(h(t))^k, \ \ \ (n,\ k\geq 0). \end{eqnarray*} Assume that $d_0\neq 0\neq h_1$, then $(d(t), h(t))$ is an element of the {\em Riordan group} \cite{SGWW}, under the group multiplication rule: \begin{eqnarray*} (d(t),h(t))(g(t),f(t))=(d(t)g(h(t)),f(h(t))). \end{eqnarray*} This indicates that the identity is $I=(1,t)$, the usual matrix identity, and that \begin{eqnarray*} (d(t),h(t))^{-1}=(\frac{1}{d(\overline{h}(t))},\overline{h}(t)), \end{eqnarray*} where $\overline{h}(t)$ is the compositional inverse of $h(t)$, i.e., $\overline{h}(h(t))=h(\overline{h}(t))=t$. By our notation, we have \begin{eqnarray*} \mathscr{P}&=&(\frac{1}{1-t}, \frac{t}{1-t})\in SDR_{\infty},\\ \mathscr{P}^j&=&(\frac{1}{1-jt}, \frac{t}{1-jt})\in SDR_{\infty},\\ \Big(\binom{\frac{n+k}{2}}{\frac{n-k}{2}}\Big)_{n\geq k\geq 0} &=& (\frac{1}{1-t^2}, \frac{t}{1-t^2})\in SDR_{3},\\ (\frac{1}{1-t^2}, \frac{t}{1-t^2})^{-1}&=& (\frac{1-\sqrt{1-4t^2}}{2t^2}, \frac{1-\sqrt{1-4t^2}}{2t})\in SDR_{3},\\ \Big(d_{n-k}\Big)_{n\geq k\geq 0} &=& (d(t), t)\in SDR_{\infty}. 
\end{eqnarray*} Hence, it is natural to ask the following question. \begin{question} Given a formal power series $d(t)$, what conditions should $h(t)$ satisfy so that $(d(t), h(t))$ forms an $SDR$-matrix? \end{question} \vskip0.5cm \section*{Acknowledgements} The author is grateful to the anonymous referees for the helpful suggestions and comments. This work was supported by the National Science Foundation of China.
\section{Introduction} The Galactic Center (GC) is home to the greatest concentration of molecular gas in the Galaxy. The environment in the inner 300 pc (known as the Central Molecular Zone or CMZ) contains molecular clouds that have higher average densities and pressures than those in other regions ($n \sim$ 10$^4$ - 10$^5$ cm$^{-3}$, P/$k \sim$ 10$^5$ K cm$^{-3}$ - e.g., Blitz et al. 1993; Martin et al. 2004). Most of the information on the dense molecular clouds in the CMZ has come from studies of the CO(1-0) transition and higher-density molecular mass tracers (e.g., Jackson et al. 1996; Morris 1997; Tsuboi, Handa, \& Ukita 1999). However, a survey of the C$^{18}$O(1-0) transition in the GC region by Dahmen et al. (1998), when compared to a similar CO(1-0) survey by Bitran et al. (1997), indicates that in addition to the dense, hot molecular clouds, there is a molecular gas component with relatively low density ($\sim$ 10$^{2.5}$ cm$^{-3}$) and high kinetic temperature ($\sim$ 150 K) whose total mass is a significant fraction of the better-studied denser component. A similar conclusion was reached by Oka et al. (1998) on the basis of CO(2-1) observations of the GC. These authors speculate that there are two components of molecular emission associated with the molecular clouds at the GC: the well-established, high-density gas arises in molecular \lq\lq clumps'' within the clouds that have relatively low filling factors, and a more pervasive \lq\lq diffuse'' component has a filling factor of $\sim$ 1. Oka et al. (1998) associate the diffuse molecular component with individual molecular clouds at the GC; for example, Sgr B2 is known to have a hot, low-density molecular envelope (H\"uttenmeister et al. 1995). However, the low-density molecular component may also be produced by the strong tidal forces at the GC that can shear clouds with densities less than a critical density (e.g., Stark \& Blitz 1978; Stark \& Bania 1986). In this case, the low-density gas would likely fill the volume not occupied by the dense molecular clouds and also attain a surface filling factor of $\sim$ 1. In order to survey this pervasive, diffuse, low-density molecular component, we observed 0.25 square degrees in the direction of the GC in the 3335 MHz transition of methylidyne (CH). The 3335 MHz CH line ($^2\Pi_{1/2}$, J = 1/2 ground state, F = 1-1 main-line transition) is a good linear tracer of low-density molecular gas ($n <$ 10$^4$ cm$^{-3}$ - Magnani et al. 2003 and references therein). Because the transition is optically thin in the interstellar medium, many of the difficulties inherent in interpreting CO(1-0) data are avoided. In addition to the molecular gas at the GC, an optically thin tracer like the 3335 MHz CH line reveals more clearly the presence of foreground and background gas along the entire line of sight. In particular, all our CH spectra show a distinct, sharp emission feature at v$_{LSR} \sim$ 0 km s$^{-1}$. We argue in \S 4 that this feature arises from molecular clouds in the near and far halves of the Galaxy whose systematic motion is perpendicular to the line of sight. The width of the feature, in turn, reflects the radial random motions of this ensemble of clouds. As a measure of the one-dimensional random motions of the cloud ensemble, the line width can be used to determine the scale height of the molecular clouds in the beam; something that cannot be done with optically thick transitions such as the CO(1-0) line.
In this paper, we present the most sensitive CH observations of the GC to date. Unlike previous observations, the bandwidth of our observations is sufficient to cover the full velocity extent of the Galactic Center CH emission. In \S 2 we describe our observations and review all previous CH observations of the GC in the literature. The column densities of CH and H$_2$ and the molecular mass traced by the CH 3335 MHz line are derived in \S 3. We discuss how the CH emission compares to the CO(1-0) emission over similar velocity intervals and what the differences between the two tracers reveal about the molecular gas distribution at the GC. The value of N(H$_2$) derived from the CH data allows us to determine the CO-H$_2$ conversion factor in the diffuse molecular gas, and we compare the conversion factor for the diffuse and dense components. In \S 4 the scale height of molecular gas along the line of sight is determined and compared to previous work. A short summary closes the paper. \section{Observations} The CH 3335 line was observed in the direction of the GC during 1999 March using the now-defunct NRAO\footnote{The National Radio Astronomy Observatory (NRAO) is operated by Associated Universities, Inc., under contract with the National Science Foundation.} 140 ft telescope in Green Bank, West Virginia. At 3.3 GHz the beam size of the 140 ft was 9$\arcmin$ which, at the distance of the GC (taken to be 8 kpc for the remainder of the paper), is 21 pc. The observing configuration consisted of a front end with a corrugated dual-hybrid mode feed in which two linear polarizations were fed into a dual-channel FET amplifier receiver. The system temperature on the sky was in the range 50 - 80 K, depending on the atmospheric conditions, the antenna elevation, and the continuum flux at 3 GHz. The autocorrelator was configured into two sections of 512 channels, with each section covering a bandwidth of 10 MHz at a velocity resolution of 1.8 km s$^{-1}$ per channel. The total velocity coverage of each spectrum was $\sim$ 900 km s$^{-1}$ centered on v$_{LSR} =$ 0 km s$^{-1}$. A 3 $\times$ 3 map of the GC region was made with the central spectrum at $\ell =$ 0$\arcdeg$, $b =$ 0$\arcdeg$ and all other spectra offset by 0.125$\arcdeg$ in latitude and/or longitude. Each position was observed in ON-OFF mode with one hour total on-source integrations. The OFF positions were determined from the catalog of Verter et al. (1983). For each line of sight, the two polarizations were added together and the resulting spectrum was baselined with a polynomial of order 6 (discussed in the following section) and Hann smoothed to yield an {\it rms} noise level of $\sim$ 5-8 mK per channel. A raw spectrum for one of the lines of sight before baselining or Hann smoothing is shown in Figure 1. Baseline fitting windows were determined by looking at the raw data as in Figure 1 and by using CO spectra from Bitran et al. (1997) to indicate the maximum extent of the molecular emission. Individual reduced spectra for each position are shown in Figures 2$a$ - 2$i$ along with the corresponding CO(1-0) spectra from the Bitran et al. (1997) survey of the GC. The CO data are at comparable angular and velocity resolution (8.8$\arcmin$ and 1.3 km s$^{-1}$, respectively). Given the historical dearth of radio telescopes with 3 GHz receivers, there are very few observations of the GC in the CH ground state, hyperfine transitions at 3 GHz. 
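As a quick consistency check on the observational parameters just quoted, the beam size, total velocity coverage, and channel spacing follow from simple conversions (a back-of-the-envelope sketch in Python; constants are rounded).
\begin{verbatim}
import numpy as np

c_kms   = 2.998e5                 # speed of light [km/s]
f0_mhz  = 3335.0                  # CH line rest frequency [MHz]
d_gc_pc = 8000.0                  # adopted distance to the GC [pc]

beam_pc = d_gc_pc * np.deg2rad(9.0 / 60.0)   # 9 arcmin beam  -> ~21 pc
v_span  = c_kms * 10.0 / f0_mhz              # 10 MHz section -> ~900 km/s
v_chan  = v_span / 512.0                     # 512 channels   -> ~1.8 km/s per channel
print(beam_pc, v_span, v_chan)
\end{verbatim}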
Moreover, previous observations of CH at the GC did not have sufficient sensitivity and, often, bandwidth, to detect the broad component seen in our spectra, and tended to focus on individual, narrow emission features at the GC (Gardner \& Robinson 1974; Gardner, Robinson, \& Sinclair 1976; Whiteoak, Gardner, \& Sinclair 1978; Genzel et al. 1979; Whiteoak et al. 1985). Thus, the extended, broad CH component described in this paper has not been noted before. \section{Results} The CH spectra presented in Figures 2$a$ - 2$i$ all show a velocity extent that is nearly that of the CO(1-0) emission. However, the CH line profiles look markedly different from the corresponding CO profiles. This is in contrast to the results of Magnani, Lugo, \& Dame (2005) who compared CH and CO for 15 lines of sight along the Galactic plane. In those instances, the CH and CO line profiles are strikingly similar, indicating that most of the gas in these clouds is at low density. Blitz (1991) quotes an average density for Galactic plane GMCs of 50 cm$^{-3}$, three orders of magnitude lower than for the clouds at the GC. Given the very different nature of the molecular clouds in the plane vs. the GC, it is not surprising that in the former case the CO and CH profiles are very similar, while in the latter they are different. However, some of the GC molecular gas {\it is} at low density: The widespread molecular component reported by Dahmen et al. (1998) has physical parameters ($n \sim$ 10$^{2.5}$ cm$^{-3}$; T $\sim$ 150 K) ideal for CH 3 GHz observations (however, see \S 3.4). Following Dahmen et al. (1998), we will refer to this molecular gas as the ``thin'' component. The CH emission evident in Figures 2$a$ - 2$i$ is likely tracing the thin gas and, at some level, the denser gas from GMCs in the region. Unfortunately, without extensive CH mapping of GMCs, both at the GC and elsewhere in the plane, it is not possible to determine the fraction of CH emission that arises from each component. Even though the CO and CH line profiles are different, it is not surprising that the velocity extent is similar; if tidal stripping of molecular clouds produces the diffuse thin gas, then it likely fills the CMZ, and its velocity extent should be considerable given the GC gravitational potential. In the next section we will examine the relationship between the CO and CH emission in detail. \subsection{Comparison of velocity-integrated CO and CH} Because of the large velocity extent of the CH emission, we compare in this section the CO and CH data over similar velocity intervals. Given the complicated noncircular motions at the GC (e.g., Morris \& Serabyn 1996), gas at velocities separated by only a few km s$^{-1}$ within our 9$\arcmin$ beam can arise from very different regions. Thus, in order to compare emission from similar regions, we broke up each CH and CO spectrum into a series of equivalent velocity intervals. Each interval extends 9 - 10 km s$^{-1}$ in velocity and comprises 5 or 6 channels of the CH spectra and 7 or 8 channels of the CO spectra. Despite the differing velocity resolutions, the velocity intervals were matched as closely as possible and differ by no more than 1 km s$^{-1}$ at either extreme. In this manner, dozens of data points from each spectrum comparing the velocity-integrated antenna temperatures for both species [defined as W$_{CH}$ and W$_{CO}$, respectively] can be analyzed. The datasets at a given $b$ for the 3 longitudes in our map are divided into positive, negative, and near 0 LSR velocities.
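The interval-by-interval comparison just described can be sketched for a single line of sight as follows (a hedged illustration in Python with synthetic profiles standing in for the observed CH spectrum and the Bitran et al. 1997 CO spectrum; the array names and mock profiles are placeholders).
\begin{verbatim}
import numpy as np

# Synthetic stand-ins for one line of sight; the real analysis uses the observed CH
# spectrum and the matching CO spectrum of Bitran et al. (1997).
v_ch = np.arange(-300.0, 300.0, 1.8); t_ch = 0.05 * np.exp(-(v_ch / 120.0) ** 2)
v_co = np.arange(-300.0, 300.0, 1.3); t_co = 8.0 * np.exp(-(v_co / 100.0) ** 2)

edges = np.arange(-297.0, 300.0, 9.0)        # ~9 km/s wide intervals

def integrate(v, t):
    # velocity-integrated antenna temperature in each interval
    return np.array([np.trapz(t[(v >= lo) & (v < hi)], v[(v >= lo) & (v < hi)])
                     for lo, hi in zip(edges[:-1], edges[1:])])

w_ch, w_co = integrate(v_ch, t_ch), integrate(v_co, t_co)
centers = 0.5 * (edges[:-1] + edges[1:])
pos = centers > 9.0                          # positive velocities, excluding [0, 9] km/s
slope, intercept = np.polyfit(w_co[pos], w_ch[pos], 1)
\end{verbatim}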
The parameters of the best-fit line to the positive and negative data are calculated excluding the velocity interval closest to 0 km s$^{-1}$ (i.e., the interval [$-$9, 0 km s$^{-1}$] for the negative velocities and [0, 9 km s$^{-1}$] for the positive velocities). The results are shown in Table 1 and reproduced in graphical form in Figure 3. The CH emission centered on v$_{LSR} =$ 0 km s$^{-1}$ is composed of emission from the GC and also from foreground or background molecular gas with respect to the GC (see \S 4). The least squares fit to the 18 data points in this set did not show any W$_{CH}$-W$_{CO}$ correlation. A glance at Figure 3 indicates that the data from this component clearly differs from the positive and negative datasets. Because the molecular gas in this dataset does not arise entirely at the GC, we will not discuss it further. Breaking up the CH emission into $\sim$ 9 km s$^{-1}$ intervals allows for a meaningful comparison with translucent and dark cloud data. Magnani \& Onello (1995) and Magnani et al. (1998) observed CO and CH from 48 lines of sight in translucent clouds and 12 in dark clouds. The slopes of the W$_{CH}$-W$_{CO}$ relation for those two data sets (8.2 and 10.1, respectively) are virtually identical to the slopes of the relation for positive and negative v$_{LSR}$ total points in Table 1. The results shown in Table 1 indicate that despite clear differences between the CO and CH profiles shown in Figure 2, there is a general correlation between the CO and CH emission over similar velocity intervals. Moreover, the slope of the W$_{CH}$ - W$_{CO}$ relation is similar to that determined previously for a sample of local dark and translucent clouds. Similar slopes for local and GC clouds imply that the physical conditions responsible for CH 3335 MHz emission from the molecular gas at the GC are likely similar to those in local gas. This re-enforces our contention that the CH 3335 MHz emission from the GC arises primarily in low-density molecular gas - just as is the case for CH emission in local molecular clouds. \subsection{N(H$_2$) from W$_{CH}$ and W$_{CO}$} All the CH spectral profiles consist of a broad, velocity-extended component (more than 350 km s$^{-1}$ FWZP) and a distinct spike component at $\sim$ 0 km s$^{-1}$. In \S 4 we argue that the spike feature arises from molecular gas outside the GC region. By determining the zeroth moment of the CH emission after subtracting the contribution from the 0 km s$^{-1}$ component (determined by fitting a Gaussian to the spike), we can derive the column density of CH at the GC using the standard relationship between W$_{CH}$ and N(CH) (Rydbeck et al. 1976). The values of W$_{CH}$ and N(CH) for the broad component for all the observed positions are given in Table 2. The largest source of uncertainty in the analysis is produced by the baseline fit to the raw spectra. Varying the order of the polynomial used for the baseline from 4 to 8 produced variations in the integrated antenna temperature of the broad component of up to 50\%, but did not change the velocity extent of the emission. With the demise of the 140 ft telescope, there is no possibility, at the moment, of re-observing the GC at 3 GHz from the Western Hemisphere in order to confirm the values of N(CH) in Table 2. Thus, the numbers we derive below are uncertain at the 50\% level because of baseline uncertainties. 
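As an illustration of this procedure, the sketch below (Python; the mock spectrum, the fitting window, and the helper names are ours) removes a narrow Gaussian at $\sim$ 0 km s$^{-1}$ and integrates the remaining broad emission. The final step scales W$_{CH}$ to N(CH) with the effective factor of about $3.6 \times 10^{14}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ implied by the W$_{CH}$ (broad) and N(CH) columns of Table 2, rather than with the full Rydbeck et al. (1976) relation.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

v = np.arange(-300.0, 300.0, 1.8)                      # velocity axis [km/s]
t = 0.05 * np.exp(-(v / 120.0) ** 2) \
    + 0.2 * np.exp(-0.5 * (v / 4.0) ** 2)              # mock broad + narrow CH profile

def spike(x, a, v0, sig, c):                           # narrow Gaussian on a local pedestal
    return a * np.exp(-0.5 * ((x - v0) / sig) ** 2) + c

m = np.abs(v) < 25.0
p, _ = curve_fit(spike, v[m], t[m], p0=[0.2, 0.0, 4.0, 0.05])
w_broad = np.trapz(t - p[0] * np.exp(-0.5 * ((v - p[1]) / p[2]) ** 2), v)

# Effective conversion implied by the W_CH (broad) and N(CH) columns of Table 2
N_CH = 3.6e14 * w_broad                                # [cm^-2]
\end{verbatim}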
The relationship between N(CH) and N(H$_2$) is linear for values of N(CH) less than 2 $\times$ 10$^{14}$ cm$^{-2}$, corresponding to N(H$_2$) $\le$ 5 $\times$ 10$^{21}$ cm$^{-2}$ (Mattila 1986; Rachford et al. 2002; Magnani et al. 2003; Weselak et al. 2004). Using the relationship between E(B-V) and total hydrogen column density [Bohlin, Savage, \& Drake (1978)], and a value for R$_V$ of 3.1 (e.g., Sneden et al. 1978), a column density of 5 $\times$ 10$^{21}$ cm$^{-2}$ corresponds to a visual extinction of nearly 3 magnitudes, squarely in the translucent molecular gas regime (van Dishoeck \& Black 1988). However, for N(H$_2$) greater than 5 $\times$ 10$^{21}$ cm$^{-2}$, the linearity of N(CH) and N(H$_2$) begins to break down. This was evident even in the first large-scale surveys of CH (Rydbeck et al. 1976; Hjalmarson et al. 1977). Recently, using a compendium of CH data including observations from the FUSE satellite, Liszt and Lucas (2002) also note a marked decline in the CH/H$_2$ ratio as N(H$_2$) increases from diffuse to dark cloud values. Moreover, theoretical chemical models of molecular clouds invariably show that the CH abundance decreases rapidly at high extinctions or H$_2$ volume densities (e.g., Viala 1986; Lee, Bettens, \& Herbst 1996). The most comprehensive empirical study of the N(CH)-N(H$_2$) relation was conducted by Mattila (1986). Figure 10 of his paper shows a marked deviation from linearity at values of N(H$_2$) $>$ 10$^{22}$ cm$^{-2}$. However, this deviation is based on only 6 data points from CH observations of GMCs including two lines of sight to Sgr A and Sgr B2 in the GC. The CH data for Sgr A and Sgr B2 are taken from Genzel et al. (1979) and, as mentioned in \S 2, do not have sufficient velocity coverage or sensitivity to reveal the CH emission in its entirety. Thus, the values of N(CH) quoted by Mattila (1986) for at least those two points are underestimated and should be considered lower limits to N(CH). The question of how significantly the CH 3335 MHz line underestimates N(H$_2$) in GMCs has been addressed by Magnani, Lugo, \& Dame (2005) who examined the relationship between CH and H$_2$ for a small sample of lines of sight through GMCs along the Galactic plane. They demonstrate that, for 10 lines of sight clustered at ($\ell, b$) = (50$\arcdeg$, 0$\arcdeg$) and (110$\arcdeg$, 0$\arcdeg$), the 3335 MHz line underestimates N(H$_2$) by only a factor of 2-3. This result is likely a consequence that a large fraction of a GMC's volume is composed of low-density gas (e.g., Blitz 1991; Lada, Bally, \& Stark 1991). As mentioned above, Dahmen et al. (1998) and Oka et al. (1998) have shown that not all the molecular gas at the GC is at high-density. Thus, the CH 3335 MHz line can still effectively trace some of the molecular gas. Dahmen et al. (1998) estimate that $\sim$ 1.0 $\times$ 10$^7$ M$_\odot$ is in the thin gas regime, while the total molecular gas mass ranges from 3.1 - 7.0 $\times$ 10$^7$ M$_\odot$ (Sodroski et al. 1994; Blitz et al. 1985). However, these estimates are for the central 600 pc of the GC; a much larger volume than that covered by our observations. It cannot be assumed that the ratio of thin to dense gas remains constant as one nears the GC. Without knowing the fraction of thin molecular gas in the area covered by our observations, we cannot estimate the total amount of N(H$_2$) in the region solely on the basis of the CH data; but we can determine N(H$_2$) in the thin gas [defined as N(H$_2$)$_{\rm thin}$]. 
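A rough sanity check of this scaling can be made directly from the numbers quoted above and in Table 2 (a two-line sketch only; the paper itself applies the Mattila 1986 relation rather than a single constant).
\begin{verbatim}
slope_linear = 5.0e21 / 2.0e14   # N(H2)/N(CH) from the linear-regime endpoints, ~2.5e7
slope_table  = 5.7e22 / 2.7e15   # ratio realised by the first entry of Table 2, ~2.1e7
print(slope_linear, slope_table)
\end{verbatim}
The two values agree to well within the $\sim$50\% uncertainty quoted above.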
Table 2 shows that N(H$_2$)$_{\rm thin}$ derived from the CH observations ranges from 5.3 $\times$ 10$^{22}$ to 1.5 $\times$ 10$^{23}$ cm$^{-2}$ with an average of 9.6 $\times$ 10$^{22}$ cm$^{-2}$. Using a distance of 8 kpc for the GC, 0.25 square degrees as the size of the observed region, and the average N(H$_2$) derived above, we obtain a lower limit for the thin molecular gas in the mapped region of 9 $\times$ 10$^6$ M$_\odot$. \subsection{X$_{CO}$ at the Galactic Center} In order to obtain N(H$_2$) from CO(1-0) data, an empirically-derived CO-H$_2$ conversion factor - defined as X$_{CO} =$ N(H$_2$)/W$_{CO}$, where W$_{CO}$ is the velocity-integrated CO(1-0) antenna temperature - is used. Typical values of X$_{CO}$ in the Galactic plane range from 1.6 - 4 $\times$ 10$^{20}$ cm$^{-2}$ [K km s$^{-1}$]$^{-1}$ (e.g., Combes 1991; Strong \& Mattox 1996; Hunter et al. 1997; Dame, Hartmann, \& Thaddeus 2001 - we drop the units of X$_{CO}$ for the remainder of the paper for brevity). If we use a value of 1.8 $\times$ 10$^{20}$ as derived from far-infrared calibration mainly from the solar neighborhood by Dame, Hartmann, \& Thaddeus (2001), N(H$_2$) for the 9 CO spectra shown in Figures 2$a$ - 2$i$ ranges from 0.89 to 2.7 $\times$ 10$^{23}$ cm$^{-2}$. However, there is strong evidence that X$_{CO}$ at the GC is probably lower than the Galactic value. Blitz et al. (1985) proposed a lower X$_{CO}$ at the GC based on a deficit of gamma-rays in the region. Later, Sodroski et al. (1994) suggested, based on dust-to-gas ratio arguments, that the value of X$_{CO}$ in the GC region is lower by a factor of 4-9 than the disk value. Similar results were found by Oka et al. (1998) and Sakano et al. (1999). Thus, instead of 1.8 $\times$ 10$^{20}$, X$_{CO}$ at the GC is more likely in the 0.2 - 0.5 $\times$ 10$^{20}$ range. Magnani \& Onello (1995) describe in detail how the CH 3335 MHz transition can be used to determine X$_{CO}$ in translucent molecular clouds. If we apply this technique to the CH data presented here to determine X$_{CO_{\rm thin}}$, a value of 0.8 $\times$ 10$^{20}$ is obtained. This is a lower limit because even if all the CH emission comes from N(H$_2$)$_{\rm thin}$, the CO(1-0) emission still arises from both the thin and the dense molecular components. If W$_{CO_{\rm dense}}$/W$_{CO_{\rm thin}}$ is proportional to the ratio of the mass in the dense gas to that in the thin gas, then W$_{CO_{\rm dense}}$/W$_{CO_{\rm thin}}$ ranges from 3-7 (see \S 3.2). In turn, X$_{CO_{\rm thin}}$/X$_{CO_{\rm dense}}$ would range over the same values. The CH data coupled with the above argument indicate that X$_{CO_{\rm thin}}$ may be in the 2 - 6 $\times$ 10$^{20}$ range. This result does not necessarily contradict that of Dahmen et al. (1998), who find that the lower values of X$_{CO}$ proposed by Sodroski et al. (1994) also provide good agreement between the CO(1-0) GC mass and their estimate based on C$^{18}$O(1-0) mapping of the region. The molecular component they are referring to is the dense component. Although they derive a mass for the thin component, they do not determine what value of X$_{CO}$ might be appropriate for it. However, they do point out that the molecular gas seen in the CO(1-0) line but not in the C$^{18}$O transition is probably not virialized and the CO emission for this component may not be optically thick, in contrast with the dense gas in the GC GMCs.
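Both the thin-gas mass quoted above and the apparent X$_{CO_{\rm thin}}$ follow from the entries of Table 2 by straightforward arithmetic; the following sketch (Python/NumPy; constants rounded, pure-H$_2$ masses with no helium correction) reproduces the order of magnitude of both numbers.
\begin{verbatim}
import numpy as np

pc_cm, m_h2_g, msun_g = 3.086e18, 2.0 * 1.6726e-24, 1.989e33

# Table 2 columns: N(H2)_thin [cm^-2] and W_CO [K km/s] for the nine lines of sight
n_h2 = np.array([5.7, 13.7, 11.4, 6.3, 15.4, 12.0, 5.3, 6.1, 10.5]) * 1e22
w_co = np.array([888.8, 1515.7, 1278.0, 853.1, 1472.9, 1032.6, 494.0, 1171.3, 1237.2])

# Thin-gas mass lower limit: <N(H2)> times the mapped area (0.5 deg x 0.5 deg at 8 kpc)
side_cm  = 8000.0 * np.deg2rad(0.5) * pc_cm
mass_sun = n_h2.mean() * side_cm ** 2 * m_h2_g / msun_g  # ~7.5e6 (H2 only), cf. ~9e6 quoted

# Apparent conversion factor of the thin component
x_co_thin = (n_h2 / w_co).mean()                         # ~0.9e20, cf. 0.8e20 in the text
print(mass_sun, x_co_thin)
\end{verbatim}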
Given such disparate physical conditions for the various components of the molecular gas, the use of a single conversion factor for all the GC molecular gas is likely not valid. It may be that the best way to determine X$_{CO}$ empirically for the thin gas is by using the CH method. The first step to address this issue would be to determine the complete extent of the CH emission from the GC region, and then to make a detailed comparison with the Bitran et al. (1997) and Dahmen et al. (1997) data. In the section below, we discuss this type of comparison for the limited region we mapped. \subsection{Comparison of the CH 3335 MHz and C$^{18}$O transitions} The C$^{18}$O data used by Dahmen et al. (1998) to argue for a diffuse, warm molecular component at the GC is presented by Dahmen et al. (1997). The spectra were taken with the Southern Millimeter-Wave Telescope - just like the CO(1-0) data described above - and thus have similar spatial and velocity resolution to the CH data. The sampling of the C$^{18}$O observations is on a slightly coarser grid than our data (0.15$\arcdeg$ vs. 0.125$\arcdeg$), but the difference is small enough that we can make a direct comparison between our 9 CH spectra and the 9 C$^{18}$O spectra taken by Dahmen et al. and centered on and around ($\ell, b$) = (0$\arcdeg$, 0$\arcdeg$). It is immediately clear that the velocity extent of the CH emission is substantially greater than that of C$^{18}$O. For instance, the C$^{18}$O spectrum at ($\ell, b$) = (0$\arcdeg$, $-$0.125$\arcdeg$) shows emission only from $-$70 to 100 km s$^{-1}$,\footnote{The emission near $-$200 km s$^{-1}$ is produced by the HNCO(5$_{05}$ - 4$_{04}$) transition.} while the CH 3335 MHz emission clearly extends from $-$175 to 200 km s$^{-1}$. This behavior is similar for all nine CH lines of sight. Because the C$^{18}$O directly traces the dense molecular clumps in the GMCs at the GC, it is the {\it absence} of C$^{18}$O(1-0) emission compared to CO(1-0) emission that led Dahmen et al. (1998) to conclude, on the basis of LVG models, that the C$^{18}$O-deficient regions likely contained warmer, lower density molecular gas. The CH 3335 MHz emission tracks the CO(1-0) emission in velocity very well, and indicates that the thin gas component arises predominantly in those regions that produce the most extreme velocities of molecular emission. This behavior is consistent with what would be expected from a molecular component produced by tidal stripping of gas from GMCs at the GC; this component would fill the region and rapidly assume a velocity distribution commensurate with the GC potential. In order to confirm this idea, more extensive CH mapping of the GC region should be done. At the moment, only the Parkes radio telescope in Australia is equipped for this endeavor. \section{The Molecular Scale Height of the Galaxy} All the CH profiles show a narrow emission feature at v$_{LSR}$ $\sim$ 0 km s$^{-1}$. The CO data show this feature clearly only in the spectra at $b = +$0.125$\arcdeg$. The other spectra do not show a spike at this velocity and may even have evidence of self-absorption (cf. the spectra at $b = -$0.125$\arcdeg$). This very different behavior of the CH 3335 MHz and CO 115 GHz lines is most likely due to the very different opacities of the two transitions.
The optically thin CH line is picking up emission from all the clouds along the line of sight in both the near and far halves of the Galaxy, while the CO emission at $\sim$ 0 km s$^{-1}$ from the GC is opaque and dominates the spectral profiles at that velocity. The CO spectra at $b = +$0.125$\arcdeg$ show less overall CO emission than the others, so the emission at $\sim$ 0 km s$^{-1}$ is not overwhelmed and is easier to discern. Figure 4 shows the composite CH spectrum of the mapped region, and Table 3 shows the parameters of the Gaussian fits to both the composite and individual spectral profiles. The optically thin CH spike at 0 km s$^{-1}$ is likely sampling the emission from molecular clouds in the foreground and background with respect to the GC, whose motion is primarily transverse to the line of sight. Thus, the width of the narrow feature samples the radial one-dimensional velocity dispersion of the cloud ensemble in the beam. A simple numerical simulation shows that about 2/3 of this emission arises from clouds in the near half of the Galaxy, while the remainder of the emission comes from clouds beyond the GC. The simulation populates a solid angle the size of the CH beam with equivalent CH-emitting units, representing GMCs, at varying distances from the Sun. The number of units in the beam is an input parameter. Beyond a distance of 6 kpc, beam dilution decreases the contribution of each CH-emitting unit (this is equivalent to assuming that the CH-emitting units represent GMCs about 15 pc in diameter) by a factor of (6/d)$^2$, where d is the distance of the unit from the Sun in kpc. Using the relation given by Magnani et al. (2000), we can calculate the scale height of the clouds in the beam given the one-dimensional velocity dispersion of the clouds, the stellar scale height, and the mass surface density in the Inner Galaxy. The velocity dispersion of the clouds is readily obtained from the FWHM of the Gaussian fits to the CH narrow feature, the mass surface density is taken to be 50 $\pm$ 10 M$_\odot$ pc$^{-2}$ (Kuijken \& Gilmore 1991; Flynn \& Fuchs 1994), and the stellar scale height is 300 $\pm$ 20 pc (Gilmore \& Reid 1983; Binney \& Merrifield 1998). With the preceding values, the molecular scale heights for the individual lines of sight vary between 27 and 73 pc (see Table 3), and for the composite profile the scale height is 109 pc. These values are similar to the values obtained from CO observations of GMCs (88 pc - Fich \& Blitz 1984; 65-80 pc - Scoville \& Sanders 1987; 51 pc - Bronfman et al. 1988; 74 pc - Dame et al. 1987; 35 pc - Stark \& Lee 2005). The general agreement of the molecular scale height derived here with that derived from CO implies that the bulk of the molecular gas in the Galactic disk is moving on very nearly circular orbits. We do note that the composite profile may indicate a slightly larger scale height for the molecular gas, more reminiscent of that of the local, small molecular clouds (e.g., Magnani, Blitz, \& Mundy 1985). It would be useful to probe a larger region to study the variation in linewidth of this feature as a function of position. \section{Summary} We have presented the most sensitive and velocity-extended CH 3335 MHz observations of the GC to date. The CH emission profiles cover nearly the same velocity extent as CO spectra of the corresponding regions, though the shapes of the profiles are markedly different.
The values of N(H$_2$) at the GC obtained from the CH data range from 5.3 $\times$ 10$^{22}$ - 1.5 $\times$ 10$^{23}$ cm$^{-2}$. The CH emission is likely produced by a low-density, intercloud, molecular component which pervades the GC, and a component associated with the outer envelopes of GMCs at the GC. The relative contribution from each source to the CH profile is yet to be determined. The CO-H$_2$ conversion factor, X$_{CO}$, can be determined from the CH data for the lower density molecular component described above. The resulting value, 0.8 $\times$ 10$^{20}$, is lower than the values obtained for disk GMCs but is likely underestimated by a factor 3 - 7. This implies that X$_{CO}$ is greater for the lower density gas than for dense GC GMCs (whose X$_{CO}$ is thought to be in the 0.2 - 0.5 $\times$ 10$^{20}$ range). The mass of molecular gas within $\sim$ 30 pc of the GC as determined from the CH data is $\sim$ 9 $\times$ 10$^6$ M$_\odot$. Although the mapped region was fairly small (30$\arcmin \times$ 30$\arcmin$), the CH 3335 MHz emission was readily detected for all the observed lines of sight. A more complete survey of the GC in the CH $^2\Pi_{1/2}$, J=1/2, F=0-1, 1-1, and 1-0 transitions may better trace the lower-density molecular gas than conventional CO surveys and elucidate the relation between this diffuse molecular component and atomic hydrogen at and around the GC. An unexpected consequence of the CH survey of the GC was the detection of prominent emission at v$_{LSR} \sim$ 0 km s$^{-1}$. This feature most likely arises from foreground and background clouds with respect to the GC and can be used to determine the scale height of this ensemble. The values we obtain (27 - 109 pc) are similar to the scale height of GMCs in the Inner Galaxy as determined from CO surveys. \acknowledgments Part of the work here was undertaken while S.Z. was a summer intern at the University of Georgia under a Research Experience for Undergraduates program sponsored by the National Science Foundation (PHY 00-97457). We thank G\"oran Sandell for a critical reading of an early version of the manuscript. We also thank an anonymous referee for comments that greatly improved the presentation of the results and the organization of the paper; in particular, with regard to the section on X$_{CO}$. \clearpage \begin{deluxetable}{rrrrr} \tabletypesize{\scriptsize} \tablecaption{CH - CO Intensity Relations - Linear Least Squares Fitting\tablenotemark{a}} \tablewidth{0pt} \tablehead{ \colhead{Dataset} & \colhead{Number of} & \colhead{Intercept} & \colhead{Slope} & \colhead{Correlation} \\ \colhead{} & \colhead{points} & \colhead{} & \colhead{$\times$ 10$^{-3}$} & \colhead{Coefficient} } \startdata positive v$_{LSR}$\tablenotemark{b} & & & & \\ $\ell$ = 0.125$\arcdeg$ & 60 & 0.163 & 6.53 & 0.69 \\ $\ell$ = 0.000$\arcdeg$ & 58 & $-$0.026 & 9.30 & 0.88 \\ $\ell$ = $-$0.125$\arcdeg$ & 41 & $-$0.002 & 5.83 & 0.88 \\ total & 159 & $-$0.011 & 8.19 & 0.83 \\ & & & & \\ negative v$_{LSR}$\tablenotemark{c} & & & & \\ $\ell$ = 0.125$\arcdeg$ & 51 & 0.028 & 22.21 & 0.89 \\ $\ell$ = 0.000$\arcdeg$ & 43 & 0.327 & 8.36 & 0.53 \\ $\ell$ = $-$0.125$\arcdeg$ & 48 & 0.105 & 3.95 & 0.34 \\ total & 142 & 0.087 & 9.95 & 0.56 \\ & & & & \\ non-GC gas$\tablenotemark{d}$ & & & & \\ total & 18 & 1.29 & -1.85 & -0.06 \\ \enddata \tablenotetext{a}{In the form W$_{CH}$ = A + B W$_{CO}$, where A is the y-intercept and B is the slope.} \tablenotetext{b}{Excluding data points from velocity interval 0 $<$ v$_{LSR} <$ 9 km s$^{-1}$. 
See \S 3.1 for details.} \tablenotetext{c}{Excluding data points from velocity interval $-$9 $<$ v$_{LSR} <$ 0 km s$^{-1}$. See \S 3.1 for details.} \tablenotetext{d}{Includes all data points from velocity interval $-$9 $<$ v$_{LSR} < +$9 km s$^{-1}$. See \S 3.1 for details.} \end{deluxetable} \begin{deluxetable}{rrrrrrrrr} \tabletypesize{\scriptsize} \tablecaption{CH and CO Observations and Derived Quantities for the GC} \tablewidth{0pt} \tablehead{ \colhead{$\ell$ } & \colhead{$b$} & \colhead{W$_{CH}$\tablenotemark{a}} & \colhead{W$_{CH}$\tablenotemark{b}} & \colhead{N(CH)\tablenotemark{c}} & \colhead{N(H$_2$)$_{\rm thin}$\tablenotemark{d}} & \colhead{W$_{CO}$\tablenotemark{e}} & \colhead{N(H$_2$)\tablenotemark{f}} & \colhead{N(H$_2$)$_{CO}$/N(H$_2$)$_{CH}$ \tablenotemark{g}} \\ \colhead{degrees} & \colhead{degrees} & \colhead{ K km s$^{-1}$} & \colhead{ K km s$^{-1}$} & \colhead{ cm$^{-2}$} & \colhead{ cm$^{-2}$} & \colhead{ K km s$^{-1}$} & \colhead{ cm$^{-2}$} & } \startdata 0.125 & 0.125 & 8.5 & 7.5 & 2.7 $\times$ 10$^{15}$ & 5.7 $\times$ 10$^{22}$ & 888.8 & 1.60 $\times$ 10$^{23}$ & 2.8 \\ 0.125 & 0.000 & 19.2 & 18.3 & 6.5 $\times$ 10$^{15}$ & 13.7 $\times$ 10$^{22}$ & 1515.7 & 2.73 $\times$ 10$^{23}$ & 2.0 \\ 0.125 & -0.125 & 16.5 & 15.3 & 5.4 $\times$ 10$^{15}$ & 11.4 $\times$ 10$^{22}$ & 1278.0 & 2.30 $\times$ 10$^{23}$ & 2.0 \\ 0.000 & 0.125 & 9.8 & 8.5 & 3.0 $\times$ 10$^{15}$ & 6.3 $\times$ 10$^{22}$ & 853.1 & 1.54 $\times$ 10$^{23}$ & 2.4 \\ 0.000 & 0.000 & 22.7 & 20.4 & 7.3 $\times$ 10$^{15}$ & 15.4 $\times$ 10$^{22}$ & 1472.9 & 2.65 $\times$ 10$^{23}$ & 1.7 \\ 0.000 & -0.125 & 18.0 & 16.0 & 5.7 $\times$ 10$^{15}$ & 12.0 $\times$ 10$^{22}$ & 1032.6 & 1.86 $\times$ 10$^{23}$ & 1.6 \\ -0.125 & 0.125 & 8.3 & 6.9 & 2.5 $\times$ 10$^{15}$ & 5.3 $\times$ 10$^{22}$ & 494.0 & 8.89 $\times$ 10$^{22}$ & 1.7 \\ -0.125 & 0.000 & 9.6 & 8.0 & 2.9 $\times$ 10$^{15}$ & 6.1 $\times$ 10$^{22}$ & 1171.3 & 2.11 $\times$ 10$^{23}$ & 3.5 \\ -0.125 & -0.125 & 15.3 & 14.0 & 5.0 $\times$ 10$^{15}$ & 10.5 $\times$ 10$^{22}$ & 1237.2 & 2.23 $\times$ 10$^{23}$ & 2.1 \\ \enddata \tablenotetext{a}{Velocity-integrated CH 3335 MHz antenna temperature. The uncertainty in this quantity is driven overwhelmingly by the baseline fit (see \S 3.2} \tablenotetext{b}{Velocity-integrated CH 3335 MHz antenna temperature for the broad, extended component only (see \S 3). } \tablenotetext{c}{N(CH) is derived from the integrated antenna temperature in column 3 after correcting for the beam efficiency, the beam filling fraction, and assuming $\vert$ T$_{ex} \vert \gg$ T$_{bg}$. See Magnani \& Onello (1995) for details.} \tablenotetext{d}{N(H$_2$) derived from N(CH) via the relation established by Mattila (1986). We refer to this gas as ``thin" for reasons elaborated in \S 3.2} \tablenotetext{e}{Integrated CO(1-0) line emission from the data of Bitran et al. 
(1997).} \tablenotetext{f}{N(H$_2$) derived from W$_{CO}$ using a conversion factor of 1.8 $\times$ 10$^{20}$.} \tablenotetext{g}{Ratio of N(H$_2$) derived from CO(1-0) data divided by N(H$_2$) derived from CH data.} \end{deluxetable} \clearpage \begin{deluxetable}{rrrrrr} \tabletypesize{\scriptsize} \tablecaption{The CH Narrow Component at v$_{LSR} \sim$ 0 km s$^{-1}$} \tablewidth{0pt} \tablehead{ \colhead{$\ell$ } & \colhead{$b$} & \colhead{T$_A$} & \colhead{$\Delta$v} & \colhead{v$_{LSR}$} & \colhead{Scale Height \tablenotemark{a}} \\ \colhead{degrees} & \colhead{degrees} & \colhead{mK } & \colhead{km s$^{-1}$} & \colhead{km s$^{-1}$} & \colhead{pc} } \startdata 0.125 & 0.125 & 115 & 8.41 & $-$0.31 & 54 \\ 0.125 & 0.000 & 213 & 4.26 & $+$1.66 & 27 \\ 0.125 & -0.125 & 104 & 11.38 & $-$3.08 & 73 \\ 0.000 & 0.125 & 230 & 5.40 & $-$0.13 & 34 \\ 0.000 & 0.000 & 270 & 7.76 & $-$0.25 & 49 \\ 0.000 & -0.125 & 167 & 11.34 & $-$4.46 & 72 \\ -0.125 & 0.125 & 186 & 7.05 & $-$1.76 & 45 \\ -0.125 & 0.000 & 137 & 10.54 & $-$0.49 & 67 \\ -0.125 & -0.125 & 111 & 10.62 & $-$3.40 & 68 \\ composite \tablenotemark{b} & & 182 & 17.61 & $-$1.20 & 109 \\ \enddata \tablenotetext{a}{Scale height is derived using formulation given by Magnani et al. (2000 - see \S 4 for details).} \tablenotetext{b}{Average of the 9 individual spectra. See Figure 4.} \end{deluxetable} \clearpage
\section{Introduction} \label{sec:intro} Most recent success of deep supervised learning, in the context of medical image analysis, critically depends on the availability of large sets of annotated images. The performance of supervised learning methods, on tasks such as anomaly detection, is then limited when the studied pathology is rare or when a fine expert annotation is required. A typical example is that of \textit{de novo} (just diagnosed) Parkinson Disease (PD) patients, for which brain structural abnormalities are subtle and hardly visible in standard T1w or diffusion MR images. A natural alternative to supervised methods is \textit{outlier detection} or \textit{Unsupervised Anomaly Detection} (UAD). This formalism requires only the manual identification of "normal" data to construct a tractable model of normality, while \textit{outliers} are then automatically detected as samples deviating from this normal model. Different categories of UAD methods have been applied to medical image segmentation or detection tasks. They mainly differ in the features used to learn the normal model and the score computed to assess the distance to this model, which in brain anomaly detection is typically assessed at the voxel level. Illustrations include several auto-encoders (AE) architectures that have been compared in \cite{baur_autoencoders_2020}. These AE models are trained to perform a ``pretext'' task on normal images consisting in the reconstruction of these images. For an arbitrary image, voxel-wise anomaly scores are then computed as the reconstruction errors, \textit{i.e.} the differences between the image voxels and the reconstructed ones. Such errors are expected to be much larger for unseen voxels from patient images, provided the chosen architecture has initially well captured the normal subjects main features. To further investigate the importance of the normal model construction, building on this standard deep UAD formalism, we recently compared different auto-encoders architectures for the detection of subtle anomalies in the diffusion parametric maps of \textit{de novo} PD patients \cite{mlcn2021}. This comparison included an auto-encoder (AE), taking 2D transverse slices as input, and the adaptation of a patch-based siamese auto-encoder (SAE) proposed in \cite{alaverdyan_regularized_2020}. Our results demonstrated encouraging performance with the SAE model slightly outperforming the AE, thus indicating that patches may indeed be advantageous, in particular for their ability to capture local spatial neighborhood information around each voxel. However, as regards the detection score, the study also confirmed recent observations outlining the limitations of the reconstruction error scores for the detection of very subtle abnormalities \cite{meissen2022_pitfalls}. In this work, we propose to investigate other detection procedures combining 1) enhanced normal models and 2) scoring rules derived from multivariate statistics. Following the approach reported in \cite{alaverdyan_regularized_2020}, we consider a patch-based approach but propose to perform the detection step in the latent space of the auto-encoder. More specifically, latent space representations of the normal images are extracted from the patch-based SAE of \cite{mlcn2021}, and then used as features to build a normal model. 
Two types of models are considered: a nonparametric discriminative one-class support vector machine (OC-SVM) \cite{ElAzami_PlosOne2016} and a parametric generative mixture model \cite{arnaud_fully_2018} (see Figure \ref{SAE_diagram} and the next section for details). In doing so, the hope is to combine the representation power of patch-based AE networks, which extract relevant and subtle features, with the efficiency of multivariate statistical models. These two combinations are then compared to a baseline UAD model based on the reconstruction error and to two standard \textit{supervised} CNNs, namely 3D ResNet and DenseNet. \\ \vspace{-15px} \section{UAD pipeline} \label{sec:methods} The proposed framework for unsupervised brain anomaly detection is depicted in Figure \ref{SAE_diagram}. The central AE is first trained to learn the representation space of normal samples and reconstruct pseudo-normal images. The standard setting consists of computing \textit{reconstruction error} maps (as the difference between the input and output images) on which anomalous unseen regions are expected to exhibit poor reconstructions or equivalently high anomaly scores. In this work, we also investigate two other outlier detection rules based respectively on a \textit{generative} and a \textit{discriminative} model designed to capture information from the AE latent space. \vspace{-10px} \subsection{Latent space feature extraction} To construct an efficient normal model, we consider a patch-based network to enrich the latent space with local information at the voxel level. Leveraging the architecture proposed in \cite{alaverdyan_regularized_2020}, we use a SAE \cite{lecun_siamese} composed of two replicas of an auto-encoder sharing the same weights, associated with the following loss term: $L_{SAE}(\mathbf{x_1}, \mathbf{x_2}) = \sum_{t=1}^{2} {||\mathbf{x_{t}} -\mathbf{\hat{x}_{t}}||_2^2} - \alpha \cdot \cos(\mathbf{z_{1}}, \mathbf{z_{2}})$ \noindent which balances two objectives: 1) decoding the representation $\mathbf{z}$ learned from the encoder fed with patches $\mathbf{x}$ into a reconstruction $\mathbf{\hat{x}}$ that is close to the original patch $\mathbf{x}$, and 2) having close (in the sense of the cosine similarity) $\mathbf{z}$ for similar patches\footnote{In the case of learning on brain MR images, ``similar'' patches means that the patches are located at the same place in the brain, which is possible because all MRIs are registered to a common atlas beforehand.}. \begin{figure*} \centering \includegraphics[scale=0.092]{AE_patch_ocsvm_mmst_recons_diagram_2.png} \caption{\small The trained encoder extracts latent representations $\mathbf{z}$ of patches, used by 1) a decoder to compute the reconstruction error in the image space, 2) an OC-SVM and 3) an $\mathcal{MMST}$ to perform outlier detection in the latent space. Anomaly maps representing the percentage of abnormal voxels per brain structure are shown on the right, warm colors corresponding to the highest percentages.} \label{SAE_diagram} \end{figure*} \vspace{-10px} \subsection{Outlier detection in the latent space} As an alternative to the reconstruction error $|| \mathbf{x} - \mathbf{\hat{x}}||_2^2$ between a patch ${\mathbf{x}}$ and its reconstruction $\mathbf{\hat{x}}$, we present below two outlier detection procedures built from a collection of normal patch representations $(\mathbf{z}_i)_{1 \leq i \leq n}$ to account for normality in the latent space.
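Before describing these two procedures, we note that the siamese training objective above can be written compactly as follows (a PyTorch-style sketch; the tensors, the latent dimension and the weighting $\alpha$ are placeholders, not the exact implementation used in this work).
\begin{verbatim}
import torch
import torch.nn.functional as F

def sae_loss(x1, x2, xhat1, xhat2, z1, z2, alpha=1.0):
    # sum of the two reconstruction errors ...
    rec = ((x1 - xhat1) ** 2).sum() + ((x2 - xhat2) ** 2).sum()
    # ... minus alpha times the cosine similarity of the two latent codes
    cos = F.cosine_similarity(z1.flatten(1), z2.flatten(1), dim=1).mean()
    return rec - alpha * cos

# toy usage with random stand-ins for two similar 15x15 patches (3 modality channels)
x1, x2 = torch.randn(8, 3, 15, 15), torch.randn(8, 3, 15, 15)
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)    # latent dimension is arbitrary here
loss = sae_loss(x1, x2, x1 + 0.1 * torch.randn_like(x1), x2, z1, z2)
\end{verbatim}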
\noindent {\bf A discriminative approach: One-Class SVM.} \hfill \break \noindent The goal of the OC-SVM \cite{scholkopf_support_1999} is to construct a decision function $f$, positive on the estimated support of the distribution of normal samples $\mathbf{z}_i$, negative elsewhere and zero on the boundary. The training samples from the normal class are first mapped to a higher dimensional space via a feature map $\boldsymbol{\phi}(\cdot)$ associated with a kernel $k$ such that $k(\mathbf{z}_i,\mathbf{z}_j)$ = $\boldsymbol{\phi}(\mathbf{z}_i) \cdot \boldsymbol{\phi}(\mathbf{z}_j)$. As the problem is linear in this feature space, the parameters $\boldsymbol{w}$ and $\rho$ of the hyperplane $\boldsymbol{w} \cdot \boldsymbol{\phi}(\mathbf{z}) - \rho = 0$ are obtained by solving a convex optimization problem aiming at maximizing the distance of the hyperplane from the origin. \noindent The decision function can then be expressed as $f(\mathbf{z}) = \boldsymbol{w} \cdot \boldsymbol{\phi}(\mathbf{z}) -\rho$. In a typical scenario, samples with negative scores of $f$ would be considered outliers. During inference, the $\mathbf{z}$ extracted from patches can be evaluated by the decision function to get an anomaly score corresponding to their distance to the hyperplane. This anomaly score, attributed to the central voxel of each patch, then provides an anomaly score map for the whole image. An ensemble of OC-SVMs, trained on different $\mathbf{z}_i$, is used to provide a more robust anomaly map. \hfill \break \vspace{-10px} \noindent {\bf A generative approach: multivariate mixtures.} \hfill \break While the OC-SVM estimates only the support of the normal model, the goal here is to estimate the full normal distribution. To this end, we use a mixture model distribution $p$, denoted by $\mathcal{MMST}$, whose individual components are multiple scale t-distributions ($\mathcal{MST}$): $p(\mathbf{z}; \boldsymbol{\Theta}) = \sum _{k=1}^{K} \pi_k \mathcal{MST}(\mathbf{z}; \boldsymbol{\theta}_k)$ \noindent with $\boldsymbol{\Theta} = (\pi_k, \boldsymbol{\theta}_k)_{1 \leq k \leq K}$, $\pi_k \in [0,1]$ and $\sum\limits_{k=1:K} \pi_k = 1$. \noindent $\mathcal{MST}$ distributions are generalizations of the multivariate t-distribution that extend its Gaussian scale mixture representation. The standard univariate scale variable is replaced by an $M$-dimensional scale variable $(W_m)_{1 \leq m \leq M} \in \mathbb{R}^M$ where $M$ denotes the latent space dimension. This allows a richer variety of shapes beyond elliptical distributions. The scale variable $W_m$ for dimension $m$ can be interpreted as accounting for the reliability of this dimension and is typically small when $\mathbf{z}$ is far from the mean parameter. The specific definition can be found in \cite{MST}. Given a learning set of $(\mathbf{z}_i)_{1 \leq i \leq n}$, the estimation of the model parameter denoted by $\hat{\boldsymbol{\Theta}}_n$ is theoretically feasible using a standard expectation-maximization (EM) algorithm but is too costly in time and memory in practice when the amount of data is large. In this work, we therefore resort to an \textit{online} version of EM \cite{OEM} that we derived for our $\mathcal{MMST}$ model as detailed in \cite{rapportG}.
Finally, given a latent representation $\mathbf{z}$ of a patch, we can use the scale variables to derive a measure of proximity $f$ to the learned normal model: $f(\mathbf{z}) = \max_{1 \leq m \leq M} \Bar{w}_m^{\mathbf{z}}$, with $\Bar{w}_m^{\mathbf{z}} = \mathbb{E}[W_m | \mathbf{z}; \hat{\boldsymbol{\Theta}}_n]$, where the expectation is computed for the learned $\mathcal{MMST}$ model and is typically larger when at least one dimension of $\mathbf{z}$ is well explained by the model. This measure of proximity, available for each voxel, provides in turn an anomaly score map for the whole image. \vspace{-10px} \subsection{Post-processing of the anomaly maps} \label{sec:postpro} A threshold value (the \textit{abnormality threshold}), set to an extreme quantile (e.g. in the range [90\%, 100\%)) of the anomaly score distribution of the normal train samples, was derived for each method (reconstruction error, encoder + OC-SVM and encoder + $\mathcal{MMST}$) and applied to the test patient and test control datasets. The resulting binary anomaly maps can serve to identify suspect regions. To help evaluate the localization of these anomalies, two atlases were considered and fused: the Neuromorphometrics atlas \cite{neuromorphometrics}, which segments the brain into 8 macro-regions, and the MNI PD25 atlas \cite{Xiao2015}, which is specifically designed for the exploration of PD patients and delineates 8 relevant subcortical structures (see Fig. \ref{boxplot}). The percentage of anomalous voxels was computed for each of these regions of interest, leading to region-wise anomaly maps as depicted on the right of Figure \ref{SAE_diagram}. \begin{figure*}[!ht] \centering \includegraphics[scale=0.057]{boxplot_article_tresh_0.02.png} \caption{\small \textit{g-mean score} of the 3 UAD and 2 CNN models. For UAD models, we consider anomaly \% on the whole brain and per region, including the 8 subcortical structures from the MNI PD25 atlas: substantia nigra (SN), red nucleus (RN), subthalamic nucleus (STN), globus pallidus interna and externa (GPi, GPe), thalamus, putamen and caudate nucleus.} \label{boxplot} \end{figure*} \vspace{-10px} \section{Experiments} \subsection{Data description and splitting} \label{sec:data} T1-weighted and DTI MR scans from 54 healthy controls and 124 \textit{de novo} PD patients were extracted from the PPMI database \cite{ppmi}. All retrieved images were acquired with the same MR scanner model (3T Siemens Trio Tim). Mean diffusivity (MD) and fractional anisotropy (FA) maps were computed from DTI using MRtrix3.0. All maps $X$ (T1w, FA, MD) were normalized in intensity with \hfill \break $X_{\text{norm}} = \frac{X - 1\%quantile(\chi)}{99\%quantile(\chi) - 1\%quantile(\chi)}$ with $\chi$ being the intensity distribution of the train control images of one modality. \hfill \break All maps were non-linearly registered onto the MNI atlas, resulting in images of dimension $121 \times 145 \times 121$ with a voxel size of $1.5 \times 1.5 \times 1.5\ \text{mm}^3$. As for the cross-validation, the healthy controls dataset was divided into 10 folds following a bootstrap procedure \cite{Poldrack2019}, each fold containing between 39 and 41 train controls and between 13 and 15 test controls. The same procedure was performed with the PD patients, each fold containing between 36 and 40 train patients and between 82 and 86 test patients. Special care was put into balancing the age and sex distribution of each fold.
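For concreteness, the intensity normalization above amounts to the following operation (a minimal NumPy sketch; the random arrays simply stand in for the registered maps and the training-control intensity distribution).
\begin{verbatim}
import numpy as np

def normalize(vol, q1, q99):
    # (X - 1% quantile) / (99% quantile - 1% quantile),
    # quantiles taken from the training-control distribution of the same modality
    return (vol - q1) / (q99 - q1)

train_control_voxels = np.random.rand(100000)        # stand-in for one modality
q1, q99 = np.percentile(train_control_voxels, [1, 99])
vol_norm = normalize(np.random.rand(121, 145, 121), q1, q99)
\end{verbatim}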
\vspace{-10px} \subsection{Hyperparameters of the UAD pipeline} The encoder was composed of 4 convolutional blocks with kernel size $(5, 5)$, $(3, 3)$, $(3, 3)$ and $(3, 3)$, with strides respectively $(1, 1)$, $(1, 1)$, $(3, 3)$ and $(1, 1)$, number of filters respectively $3$, $4$, $12$ and $16$, no padding, and GeLU activation. Each block was followed by a batch normalization block. The decoder was the symmetric counterpart of the encoder. The input of the encoder consisted of the patches of each of the 3 modalities combined as channels. The SAE model was trained with between 975\,000 and 1\,025\,000 patches of size 15$\times$15$\times$3 (25\,000 patches per subject). We used the Adam optimizer \cite{kingma_adam_2017} for 20 epochs, with default hyperparameters, best-model selection based on the validation loss, and a training batch size of 1000. An ensemble of five OC-SVMs was trained, each with 500 $\mathbf{z}_i$ samples extracted from 500 random brain locations in the train set, and the mean of the 5 decision functions was used as the final anomaly score (note that this differs from \cite{alaverdyan_regularized_2020}, where one OC-SVM is trained per voxel). We used $\nu = 0.03$ and a Gaussian kernel whose hyperparameter $\frac{1}{\gamma}$ was set to the product of the variance and the dimension of the $\mathbf{z}_i$. For $\mathcal{MMST}$, we used $K = 9$. We set the \textit{abnormality threshold} defined in section \ref{sec:postpro} to 98\% (experiments have shown that the choice of this threshold has little influence on the final performance). \vspace{-10px} \subsection{Performance evaluation of the UAD models} The performance of the three methods was evaluated as in \cite{arnaud_fully_2018, mlcn2021}. The percentage of abnormal voxels in the whole brain or per region of interest, derived from the post-processing of the anomaly score maps (see section \ref{sec:postpro}), was employed to classify the test controls and test patients as healthy or pathological (PD). By varying a threshold on this metric, we can draw a ROC curve from the test population, and derive the best-achievable \textit{g-mean score} defined as $\sqrt{\text{Sensitivity} \times \text{Specificity}}$. The \textit{g-mean score} is used as a performance metric to compare the different classification models. In the absence of reference annotations of the brain structures affected by the pathology, this pretext classification task allows us to indirectly evaluate whether the anomalies detected by the UAD models are characteristic of the pathology. It was computed considering either the percentage of anomalies in the whole brain or that in each of the regions of interest of the Neuromorphometrics and MNI PD25 atlases. \vspace{-10px} \subsection{Comparison with supervised approaches} \label{supervised} We compared the classification performance of the three UAD models (reconstruction error, encoder + OC-SVM and encoder + $\mathcal{MMST}$) to that of two standard supervised 3D convolutional networks: 3D ResNet with 18 layers \cite{resnet3d} and DenseNet-264 \cite{densenet}. Each of these 2 CNNs took as input the whole 3D T1w, MD and FA brain images combined as channels. A dense layer was added at the end of each network in order to have a one-dimensional output for classification. For each fold, the models were trained on 75\% of the train controls and train patients, the remaining 25\% being kept for validation. Training was performed with the Adam optimizer \cite{kingma_adam_2017} for 300 epochs with default hyperparameters and a batch size of 8.
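Returning to the unsupervised pipeline, the OC-SVM ensemble described above can be sketched as follows (Python/scikit-learn; the latent vectors are random stand-ins and the latent dimension is arbitrary, but the $\nu$ value and the rule $1/\gamma = \textrm{dimension} \times \textrm{variance}$ mirror the settings given in this section).
\begin{verbatim}
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
Z_train = rng.normal(size=(25000, 16))        # latent codes of normal patches (stand-in)

models = []
for _ in range(5):
    Z = Z_train[rng.choice(len(Z_train), size=500, replace=False)]
    gamma = 1.0 / (Z.shape[1] * Z.var())      # 1/gamma = dimension * variance
    models.append(OneClassSVM(kernel="rbf", nu=0.03, gamma=gamma).fit(Z))

def anomaly_score(Z_test):
    # mean signed distance to the 5 hyperplanes; lower values = more anomalous
    return np.mean([m.decision_function(Z_test) for m in models], axis=0)

scores = anomaly_score(rng.normal(size=(10, 16)))
\end{verbatim}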
Note that the train patients described in section \ref{sec:data} were used only for the training of the two supervised networks (3D ResNet and DenseNet). These two models were evaluated on the same test patient dataset as used for the UAD models, thus enabling a fair comparison. \vspace{-10px} \section{Results} \label{sec:results} The \textit{g-mean score} of each method is reported in Figure \ref{boxplot}. \break We notice that the 3 UAD models achieve a median \textit{g-mean score} around 0.65 on the whole brain, and in the range [0.6, 0.7] when considering only certain macro-regions (e.g. temporal or occipital lobe). For subcortical structures (e.g. RN or SN), performance drops to the range [0.5, 0.6], and even lower for some methods (especially Encoder + OC-SVM). At this stage of PD progression, these subcortical structures seem only slightly impacted. Note that the supervised methods, ResNet3D and DenseNet, provide a median \textit{g-mean score} on the whole brain in the range [0.55, 0.6], lower than that of the UAD models considered in this study. \vspace{-10px} \section{Discussion and conclusion} \label{ssec:discussion} Auto-encoders have been shown to be a reference method for unsupervised anomaly detection \cite{baur_autoencoders_2020} but have also shown limits when used for very subtle anomalies \cite{meissen2022_pitfalls}. We have investigated whether an analysis of the latent space could improve this performance compared to a classical reconstruction error approach. We used two methods based on different paradigms: One-Class SVM (\textit{discriminative}) and Mixture of Multiple scaled t-distributions (\textit{generative}). It is clear from the supervised networks' results that the proposed task, discriminating \textit{de novo} PD from controls, is very hard: the supervised methods' performances fall below those of the unsupervised methods, validating our approach. We found that the latent space UAD methods are strong competitors to the reconstruction error approach but do not surpass it. In comparison with \cite{mlcn2021}, where only diffusion was used, we report that the addition of T1w images does not significantly improve the performance. \hfill \break We also demonstrated that using a patch-based encoder as a feature extractor to feed an $\mathcal{MMST}$ model gave promising results, as it allows capturing some spatial context, which was lacking in \cite{arnaud_fully_2018}. Finally, the discrimination of PD based only on subcortical structures does not seem feasible at an early stage of the pathology, as reported in \cite{prasuhn_machine_2020} for the substantia nigra. \hfill \break Future work includes investigating whether the combination of reconstruction error and latent space anomaly maps can increase the classification performance. We aim to extract 3D features with the auto-encoders and complete the multi-modal approach by adding T2w and T2$^*$w images as in \cite{sivaranjini_deep_2020}. \vspace{-10px} \section{Acknowledgments} \label{sec:acknowledgments} G. Oudoumanessah was financially supported by the AURA region. This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011012813R1 made by GENCI. It was partially funded by the French program ``Investissement d'Avenir'' run by the Agence Nationale pour la Recherche (ANR-11-INBS-0006). \vspace{-10px} \section{Compliance with Ethical Standards} This research study was conducted retrospectively using human subject data made available by the Parkinson Progression Markers Initiative (PPMI).
Ethical approval was not required, as confirmed by the license attached to the open-access data. \vspace{-10px} \bibliographystyle{IEEEbib}
\section{INTRODUCTION} \input{Sections/introduction} \section{METHODOLOGY} \label{sec:method} \input{Sections/methodology} \section{EXPERIMENTAL SETUP} \label{sec:setup} \input{Sections/setup} \section{EXPERIMENTAL RESULTS} \input{Sections/results} \addtolength{\textheight}{-0.9cm} \section{DISCUSSION} \input{Sections/discussion} \section{CONCLUSIONS} \input{Sections/conclusions} \bibliographystyle{IEEEtran} \input{Literature/literature.bbl} \end{document} \subsection{Structure-From-Motion Point Cloud Creation} \label{sec:method:sfm} The first step in this process is creating a point cloud. For this step, an open-source structure-from-motion (SFM) software~\cite{wu-vsfm11, wu-cvpr11, wu-sgpu07} is used for sparse and dense 3D reconstruction. The software takes a set of images, detects and describes scale-invariant feature transform (SIFT) features in them, matches these features between images, conducts bundle adjustment to create a sparse and then a dense 3D reconstruction of the scene, stores these reconstructions as a point cloud, and finally transforms the point cloud using global position information for the set of images. The UAV is used to take overlapping images around the perimeter of the investigation area. For example, if the area of interest is a soccer field, the UAV flies around the perimeter of the field at 2~meters per second capturing photos aimed at the center of the field. The result of using the SFM software for the rock fragment pile in Fig.~\ref{fig:FigureAction} is illustrated in Fig.~\ref{fig:sfm}. When this method is implemented in a large environment, such as a mine, this task can be performed by another UAV prior to or in parallel with a UAV conducting analyses that require the point-cloud-based method for determining image scale. The point cloud created in this step can also be used for analyses other than rock fragmentation analysis, such as drill and blast optimization campaigns~\cite{stewart-isee17}. \begin{figure} \centering \includegraphics[width=.5\textwidth]{Figures/FigureSFM.jpg} \caption{The point cloud created for the rock fragment pile with the structure-from-motion algorithm.} \label{fig:sfm} \end{figure} \subsection{Camera Parameter Matrix} \label{sec:method:cam} The next step towards obtaining image scale for fragmentation analysis is the introduction of the camera intrinsic parameters. These parameters are required to transform a point, represented by pixel coordinates, in the image to a point on the image in the world frame. The camera parameter matrix is defined as~(cf. \cite{corke-rvc11}): \begin{equation} \label{eqn:cam_param} \textbf{K} = \left[ \begin{IEEEeqnarraybox*}[][c]{,c/c/c,} f_x & s & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1% \end{IEEEeqnarraybox*} \right] , \end{equation} where \( f_x \) and \( f_y \) are the focal lengths in the x- and y-direction, respectively, \( c_x \) and \( c_y \) are the pixel coordinates of the optical center, and \( s \) is the skew between the sensor axes. These parameters are innate characteristics of the camera and sensor, and should be estimated through camera calibration. For the setup in Sec.~\ref{sec:setup}, these parameters are estimated using an open-source camera calibration package~\cite{bowman-ros17}. \subsection{Camera Pose} \label{sec:method:pose} The pose (translation and rotation) of the camera in the world frame is required as the origin in the derivation of a ray equation needed to project image points into the world frame and onto the surface elevation profile.
The pose of the camera is defined as: \begin{equation} \label{eqn:cam_extrinsic} \textbf{T} = \bigg[ \begin{IEEEeqnarraybox*}[][c]{,c/c,} \textbf{C} & \textbf{r} \\ \textbf{0}^T & 1 % \end{IEEEeqnarraybox*} \bigg] , \end{equation} where \( \textbf{T} \) is referred to as the (\( 4\times4 \)) transformation matrix of the camera with respect to the world frame origin, \( \textbf{C} \) is a (\( 3\times3 \)) rotation matrix in the special orthogonal group, \textit{SO}(3), and \( \textbf{r} = (x_{r}, y_{r}, z_{r}) \) is a translation vector. This transformation matrix is known as the camera's extrinsic parameters, and comprises a minimum of six parameters to describe translation and rotation in the special Euclidean group, \textit{SE}(3). For our experiments in Sec.~\ref{sec:setup}, the pose of the camera is estimated from onboard measurements of the camera orientation, and the UAV pose is obtained from a motion capture system. In field experiments, the UAV pose will be estimated by fusing odometry and GPS sensor measurements. Fig.~\ref{fig:cloud} illustrates the camera pose above the rock pile, represented by a point cloud. \begin{figure} \centering \includegraphics[width=.5\textwidth]{Figures/FigurePC.eps} \caption{Camera pose above the rock pile in the world frame. The image outline is computed by projecting rays from the four corners of the image to intersect with the rock pile surface.} \label{fig:cloud} \end{figure} \subsection{Ray Equation} \label{sec:method:ray} This section derives an equation to represent a ray from pixel coordinates in the image to the world frame using the camera parameter matrix (Sec.~\ref{sec:method:cam}) and the camera pose (Sec.~\ref{sec:method:pose}). The derived equation is used to represent each ray projected from the four corners of the image so that the rays can be intersected with the surface elevation profile. Fig.~\ref{fig:cloud} illustrates the four rays projected from the image corners to intersect with the point cloud. A pixel location in the image, \(\textbf{p}\), is represented by coordinates \(u\) and \(v\), where \(u\) and \(v\) are integers. This pixel location, \(\textbf{p}\), is represented in the image using homogeneous coordinates, \( \tilde{\textbf{p}} = (u',v',w') \). The non-homogeneous pixel coordinates are computed from: \begin{equation} \label{eqn:nonhomogeneous} u = \dfrac{u'}{w'} ,\quad v = \dfrac{v'}{w'} . \end{equation} To represent the point, \( \textbf{p} \), in the image as \( \tilde{\textbf{p}} \), we set \( w'=1 \) such that the center of the image frame is the origin and points are mapped to the plane \( w'=1 \). The first step in finding an equation to represent a ray from image pixel coordinates to the world frame is to determine the direction of the ray in the image. The direction of the ray in the image, \( \tilde{\textbf{p}}_{c} \), from the homogeneous pixel location, \(\tilde{\textbf{p}}\), for a camera with a camera parameter matrix from (\ref{eqn:cam_param}), \( \textbf{K} \), is calculated using: \begin{equation} \label{eqn:ray_image} \tilde{\textbf{p}}_{c} = \textbf{K}^{-1}\tilde{\textbf{p}} . \end{equation} Using \( \tilde{\textbf{p}}_{c} \) from~(\ref{eqn:ray_image}), the ray direction in the image is transformed into the world frame. This transformation assumes that camera distortion has been removed from the image to ensure that the ray in the world frame is straight and follows the same direction as the ray in the image frame.
In the setup described in Sec.~\ref{sec:setup}, the camera distortion is removed using an open-source image processing package~\cite{mihelich-ros17} with the camera distortion model parameters estimated during the camera calibration. The homogeneous ray direction (\(4 \times 1 \)) in the world frame, \( \tilde{\textbf{p}}_{m} \), is computed using the ray direction in the image, \( \tilde{\textbf{p}}_{c} \), and the camera pose from (\ref{eqn:cam_extrinsic}), \(\textbf{T} \), according to: \begin{equation} \label{eqn:ray_world} \tilde{\textbf{p}}_{m} = \textbf{T}\left[ \begin{IEEEeqnarraybox*}[][c]{,c,} \tilde{\textbf{p}}_{c} \\ 1 % \end{IEEEeqnarraybox*} \right] , \end{equation} where \( \tilde{\textbf{p}}_{m}=(x_{m},y_{m},z_{m},1) \), with \( x_{m},y_{m},z_{m}\) representing the slope along each axis in the world frame. Once the ray direction in the world frame, \( \tilde{\textbf{p}}_{m} \), is determined using~(\ref{eqn:ray_world}), the equation for the ray in the world frame emitting from the pixel location, \(\textbf{p}\), is represented by: \begin{equation} \label{eqn:ray} \tilde{\textbf{q}} = \left[ \begin{IEEEeqnarraybox*}[][c]{,c,} \textbf{r} \\ 1 % \end{IEEEeqnarraybox*} \right] + \alpha\tilde{\textbf{p}}_{m} , \end{equation} where \( \tilde{\textbf{q}}=(x,y,z,1) \) is a homogeneous point on the ray at the location \( \textbf{q}=(x,y,z) \) in the world frame, \( x \), \( y \), and \( z \) are components in the world frame, and \( \alpha \) is a scalar since the pixel location is projected along the ray. For example, if (\ref{eqn:ray}) is used to represent the ray emitted from a point in the image in the world frame, and assuming that the ground surface lies on a plane at \( z = 0 \) then (\ref{eqn:ray}) can be easily rearranged to solve for \( \alpha \): \begin{equation} \label{eqn:ray_z0} \alpha = -\dfrac{z_{r}}{z_{m}} . \end{equation} \subsection{Plane and Line Parameterization} \label{sec:method:plane} To find the intersection of the ray with the point cloud, the point cloud is triangulated using the Delaunay triangulation and each triangle is represented as a plane. The three corners of each triangle (\( \textbf{p}_{0}, \textbf{p}_{1}, \textbf{p}_{2} \)) represent a plane such that a general point on the plane is represented by: \begin{equation} \label{eqn:general_plane} \textbf{p}_{0} + (\textbf{p}_{1} - \textbf{p}_{0})\eta + (\textbf{p}_{2} - \textbf{p}_{0})\mu , \end{equation} with \( \eta,\mu\in\mathbb{R} \). Using two points along the corner point ray, such as the translation vector of the camera pose \( \textbf{p}_{a} = \textbf{r} \) and the point intersecting the plane at \( z = 0 \), \( \textbf{p}_{b} \), a simple line equation is developed: \begin{equation} \label{eqn:general_line} \textbf{p}_{a} + (\textbf{p}_{b} - \textbf{p}_{a})t , \end{equation} with \( t\in\mathbb{R} \). The line and plane parameters at the point of intersection can then be solved according to: \newpage \begin{equation} \label{eqn:intersection} \left[ \begin{IEEEeqnarraybox*}[][c]{,c,} t \\ \eta \\ \mu % \end{IEEEeqnarraybox*} \right] = \left[ \begin{IEEEeqnarraybox*}[][c]{,c,c,c,} (x_a-x_b) & (x_1-x_0) & (x_2-x_0) \\ (y_a-y_b) & (y_1-y_0) & (y_2-y_0) \\ (z_a-z_b) & (z_1-z_0) & (z_2-z_0) % \end{IEEEeqnarraybox*} \right]^{-1} \left[ \begin{IEEEeqnarraybox*}[][c]{,c,} (x_a-x_0) \\ (y_a-y_0) \\ (z_a-z_0) % \end{IEEEeqnarraybox*} \right] . 
\end{equation} \subsection{Image Scale for Fragmentation Analysis} \label{sec:method:scale} Once all four corner points of the image are represented by~(\ref{eqn:ray}), the scale is calculated. For the specialized fragmentation analysis software used in Sec.~\ref{sec:setup}, the scale is applied at the top and bottom edge of the image. As such, each pair of corner points along each edge is used to compute the image scale: \begin{equation} \label{eqn:scale} \text{scale} = \dfrac{\text{image width}}{\sqrt{(\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2}} , \end{equation} where the image width is the width in pixels, and \( \Delta x, \Delta y,\) and \( \Delta z \) are the distances between the corner points along the \(x, y \) and \(z \) world frame axes, respectively. The distance between the corner points is in the unit of distance measurement used in the image analysis software. Assumptions made in the image software for rock fragmentation analysis are described in Sec.~\ref{sec:setup:split}. \subsection{Algorithm} \label{sec:method:algorithm} Algorithm~\ref{alg} is used to compute the scale for each image captured in the aerial fragmentation analysis. Fig.~\ref{fig:cloud} graphically illustrates the corner point intersections found for a given camera pose using this algorithm. Fig.~\ref{fig:delineationPCScale} illustrates the scales computed for the raw photo in Fig.~\ref{fig:raw} using the developed point-cloud-based algorithm. \begin{algorithm} \caption{Calculate image scale using the point-cloud-based method.} \label{alg} \begin{algorithmic} \STATE Store all corner point rays defined by (\ref{eqn:ray}) as lines for (\ref{eqn:general_line}) \FOR {each triangle created to represent the point cloud} \STATE parameterize 3 points as a plane using (\ref{eqn:general_plane}) \FOR {each corner point line} \STATE compute intersection using (\ref{eqn:intersection}) \IF {point on ray, (\( t \geq 0 \)), and inside triangle ( \( \eta,\mu \in[0,1] \) and \( \eta+\mu \leq 1 \))} \STATE store intersection point for use in (\ref{eqn:scale}) \ENDIF \ENDFOR \ENDFOR \IF {intersection point not found} \STATE assume on plane \( z = 0 \) \ENDIF \STATE compute image scale using intersections with (\ref{eqn:scale}) \end{algorithmic} \end{algorithm} \begin{figure} \centering \includegraphics[width=.5\textwidth]{Figures/FigurePCScale.jpg} \caption{Delineated photo in automated aerial fragmentation analysis. Scales computed using the point-cloud-based method are shown at the top and bottom of the image. Scale variation from left to right could also be computed, but the software Split-Desktop can only set top and bottom scales.} \label{fig:delineationPCScale} \end{figure} \subsection{Rock Size Distribution} Using the experimental setup described in Sec.~\ref{sec:setup}, ten trials of automated aerial fragmentation analysis were conducted and a rock size distribution was generated for the rock pile. The flight plan and rock pile morphology remained constant for all ten trials. Nine photos per trial were taken by the UAV at the planned locations according to the flight plan described in Sec.~\ref{sec:setup:flight} to calculate a rock size distribution. Fig.~\ref{fig:raw} illustrates one of these photos for a sample trial with a single scale object near the center of the image. For each trial, fragmentation analysis was conducted on the same set of photos using both the scale-object method and the point-cloud-based method.
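Before comparing the two methods, the core computation behind the point-cloud-based scales used in these trials can be summarized in a short sketch. The following is an illustrative numpy version of the per-triangle intersection test of Algorithm~\ref{alg} and the scale of (\ref{eqn:scale}); the inputs are hypothetical and this is not the exact implementation used in our pipeline:
\begin{verbatim}
import numpy as np

def ray_triangle_intersection(p_a, p_b, p0, p1, p2):
    # Solve eqn (9): columns are (p_a - p_b), (p1 - p0), (p2 - p0),
    # unknowns are the line parameter t and the plane parameters eta, mu.
    A = np.column_stack((p_a - p_b, p1 - p0, p2 - p0))
    t, eta, mu = np.linalg.solve(A, p_a - p0)
    on_ray = t >= 0
    in_triangle = (0 <= eta <= 1) and (0 <= mu <= 1) and (eta + mu <= 1)
    if on_ray and in_triangle:
        return p_a + (p_b - p_a) * t  # intersection point on the pile surface
    return None

def image_scale(corner_1, corner_2, image_width_px):
    # Eqn (10): pixels per unit of world distance along one image edge.
    return image_width_px / np.linalg.norm(corner_1 - corner_2)

# Hypothetical camera position, fallback point at z = 0 and one triangle.
p_a = np.array([0.0, 0.0, 1.5])        # camera translation r
p_b = np.array([0.4, 0.1, 0.0])        # corner ray point on the plane z = 0
tri = [np.array([0.0, 0.0, 0.3]), np.array([1.0, 0.0, 0.2]),
       np.array([0.0, 1.0, 0.4])]
hit = ray_triangle_intersection(p_a, p_b, *tri)
\end{verbatim}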
For each method, the same delineation net (a parameter in Split-Desktop) was used and masking was applied to the scale object and pile boundaries. Figures~\ref{fig:delineationPCScale} and \ref{fig:delineationScale} show an example of a scaled and delineated photo using the point-cloud-based and the scale-object method, respectively. The following subsections provide a comparison between the scale-object and point-cloud-based methods in terms of prediction accuracy and time effort for a sample trial. \subsubsection{Prediction Accuracy} To determine the prediction accuracy of each method, the percent error residuals for percent passing with respect to the reference sieve analysis curve were computed for the discrete sieve series. The method of computing percent error residuals is described in detail in~\cite{bamford-cami16}. The average rock size distribution for ten repeated trials with residuals for each method is plotted in Fig.~\ref{fig:distribution}. For this plot, the point-cloud-based method is shown to have comparable accuracy to the scale-object method. The 2-norm error was calculated over the full curve for both methods, and the point-cloud-based method shows a 6\% improvement over the scale-object method. The point-cloud-based method performs better in the coarse region of the rock size distribution but slightly over-predicts the amount of fines. Both methods can be seen to have residuals of less than 10\%, with rock size distributions remaining within the accepted maximum error envelope of 30\% recommended by~\cite{sanchidrian-rmre09} for industry standard 2D image analysis of measuring rock fragmentation. All of the other trials exhibited similar trends, as shown by the standard deviation envelopes in Fig.~\ref{fig:distribution}, where the point-cloud-based method has a smaller envelope than the scale-object method. The decrease in standard deviation realized by the point-cloud-based method is thought to be caused by a better estimation of scale throughout the image, rather than by assuming that the pile is planar. This is because actual photo locations varied between trials, causing the scale location in the images to change; the point-cloud-based method accounted for this change, while the scale-object method did not. These results are very promising since the point-cloud-based method has comparable accuracy and lies well within the industry-accepted bounds, which makes it a suitable replacement for the scale-object method during field experiments. \begin{figure*} \centering \includegraphics[width=0.97\textwidth]{Figures/FigureResults.eps} \caption{Automated aerial rock fragmentation analysis results for ten trials using the point-cloud-based and scale-object methods with respect to the sieve analysis reference curve (ground truth). Discrete points (average value) and standard deviation envelopes represent the combined results for all ten trials. The Swebrec rock size distribution function~\cite{sanchidrian-irfb15} has been fit to the discrete points from sieve analysis, scale-object method, and point-cloud-based method so that a 2-norm error between the two image analysis methods and the ground truth could be calculated.
The gray envelope represents the accepted maximum error envelope of 30\% recommended by~\cite{sanchidrian-rmre09} for industry standard 2D image analysis of measuring rock fragmentation.} \label{fig:distribution} \end{figure*} \subsubsection{Time Effort} Table~\ref{table:times} details the amount of time taken in seconds for the sampling flight and each extra task required for the point-cloud-based method for each trial. For the first trial, the total time taken in addition to flight time and fragmentation analysis in Split-Desktop was 5.7 minutes. Obviously, the point-cloud-based method requires more time effort than the scale-object method in the lab environment, since the point cloud must be generated while the rock pile is easily accessible and only covers a small area. However, in field experiments, the amount of time for scale-object placement is expected to be much longer, while the time required for the point-cloud-based method is expected to increase only marginally. Additionally, most of the time taken was spent constructing the point cloud, which can also be used in other analyses for the mining operation. \begin{table*}[!htb] \centering \caption{Trial times for scale-object and point-cloud-based methods in seconds.}\label{table:times} \begin{tabular}{|c|c|c|c|c|c|c|c|}\hline Trial & SFM Flight & Sampling Flight & Computing Matches & Sparse Recst. & Dense Recst. & Scale Comp. & Total \\ \hline 1 & 93 & 159 & 121 & 14 & 81 & 30 & 339 \\ 2 & 93 & 162 & 88 & 12 & 54 & 27 & 274 \\ 3 & 89 & 160 & 124 & 27 & 64 & 29 & 333 \\ 4 & 87 & 165 & 129 & 19 & 74 & 33 & 342 \\ 5 & 85 & 157 & 130 & 16 & 59 & 34 & 323 \\ 6 & 94 & 151 & 119 & 18 & 80 & 34 & 345 \\ 7 & 88 & 156 & 124 & 18 & 56 & 32 & 318 \\ 8 & 86 & 157 & 121 & 14 & 69 & 29 & 319 \\ 9 & 90 & 158 & 184 & 20 & 49 & 7 & 349 \\ 10 & 86 & 148 & 191 & 39 & 63 & 30 & 409 \\ \hline Average & 89 & 157 & 133 & 20 & 65 & 29 & 335 \\ \hline \end{tabular} \end{table*} \subsection{Analysis of Variance} The Analysis of Variance (ANOVA) is a statistical model used to analyze whether there are any statistically significant differences between the means of independent factors. One-dimensional ANOVA with replication was set up to analyze whether repeated aerial fragmentation analysis statistically produces the same rock size distribution. This analysis was set up based on other applications of ANOVA for comparing rock size distributions; more details on the assumptions made and on the background of ANOVA are available in~\cite{niedoba-ms16}. The rock size distributions for all ten trials were used to conduct one-dimensional ANOVA with replication for the point-cloud-based and scale-object methods, respectively. The trial and weight percent passing are sources of variability. However, we are interested in the effect of varying the trial, because we are testing whether the aerial fragmentation analysis is robust. The critical value \( F_{n,m;\alpha} \) of the Fisher-Snedecor distribution was set as \( F_{9,20;0.05} = 2.39 \), to have a 5\% level of significance. This critical value is used to reject the null hypothesis that results are the same with varied trials if the \(F \)-test statistic is greater than \( 2.39 \). Tables~\ref{table:anovaPC} and \ref{table:anova} show the \( F \)-test statistic computed for all trials and for the full sieve series for the point-cloud-based and scale-object methods, respectively.
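For reference, a minimal sketch of such a one-way ANOVA F-test is given below; the percent-passing values are hypothetical stand-ins, and this is only an illustration of the test, not the exact computation behind Tables~\ref{table:anovaPC} and \ref{table:anova}:
\begin{verbatim}
import numpy as np
from scipy.stats import f_oneway

# Hypothetical data: 10 trials (the factor) with 3 replicates each,
# standing in for the percent-passing values at the discrete sieve sizes.
rng = np.random.default_rng(0)
trials = 50.0 + rng.normal(scale=5.0, size=(10, 3))

# One-way ANOVA with "trial" as the factor; the null hypothesis is that
# every trial produces the same mean percent passing.
F, p = f_oneway(*trials)  # degrees of freedom: 9 (trial) and 20 (residual)
print("F = {:.3f}, p = {:.3f}".format(F, p))

# The null hypothesis is rejected at the 5% level only if F exceeds
# the critical value F_{9,20;0.05} = 2.39.
\end{verbatim}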
As can be seen, both \( F \)-test statistics are much less than \( 2.39 \), and the F-test statistic for the point-cloud-based method is less than the scale-object method. Therefore, the results for all ten trials are statistically the same with a 5\% level of significance for both the point-cloud-based method and the scale-object method. This indicates that aerial fragmentation analysis with and without scale objects is robust and statistically produces the same results when experiments are replicated. \begin{table}[!htb] \centering \caption{Results of one-way ANOVA of replicated experiments for point-cloud-based method.} \label{table:anovaPC} \begin{tabular}{!{\vrule width 2pt}p{1.3cm}!{\vrule width 2pt}p{1.3cm}!{\vrule width 2pt}p{1.0cm}!{\vrule width 2pt}p{1.0cm}!{\vrule width 2pt}p{1.0cm}!{\vrule width 2pt}}\hline Source of Variation & Degrees of Freedom & Sum of Squares & Mean Square & \textit{F}-test \\ \hline Trial & 9 & 1.9 & 0.21 & 0.003 \\ \hline Residuals & 20 & 1255.3 & 62.76 & \\ \hline \end{tabular} \end{table} \begin{table}[!htb] \centering \caption{Results of one-way ANOVA of replicated experiments for scale-object method.}\label{table:anova} \begin{tabular}{!{\vrule width 2pt}p{1.3cm}!{\vrule width 2pt}p{1.3cm}!{\vrule width 2pt}p{1.0cm}!{\vrule width 2pt}p{1.0cm}!{\vrule width 2pt}p{1.0cm}!{\vrule width 2pt}}\hline Source of Variation & Degrees of Freedom & Sum of Squares & Mean Square & \textit{F}-test \\ \hline Trial & 9 & 8.6 & 0.96 & 0.016 \\ \hline Residuals & 20 & 1170.0 & 58.50 & \\ \hline \end{tabular} \end{table} \subsection{Rock Fragment Pile} A pile of rock fragments with different sizes, ranging from coarse gravel (19~millimeters) to fine sand (\(<\)4~millimeters), was built in the lab. Prior to forming the pile, the rock fragments were put through sieve analysis to determine the `true' rock size distribution as a reference for experimental results. The results of the sieve analysis are presented for four discrete screen sizes, which is referred to as the discrete sieve series in Fig.~\ref{fig:distribution}. To use the sieve analysis as a reference, a rock size distribution curve was fit to the collected data. The parameters of this distribution are found in~\cite{bamford-cami16}. Spherical scale objects, with a diameter of 60 millimeters, were used to provide image scale when applying conventional image analysis, as seen in Fig.~\ref{fig:FigureAction}. These scale objects are ignored and masked when applying the point-cloud-based method for image scale computation. \subsection{Lab Environment} The indoor lab is equipped with a motion capture system for precise UAV localization and control. The lab has fluorescent lighting, providing optimal lighting conditions for the image analysis. The environment is also free of wind. \subsection{Unmanned Aerial Vehicle and Software Framework} A commercially available UAV with integrated camera, the Parrot Bebop 2, was used in our experiments. This UAV has the ability to capture stabilized high-resolution photos and videos, which is essential for accurate image analysis. In this experiment, the UAV broadcasts a video stream with an image resolution of \( 1280 \times 720 \) pixels which is stabilized onboard with respect to the world frame during flight. The camera orientation is changed onboard by moving a virtual window through the field of view of the integrated fish-eye-lens. The UAV receives camera commands and transmits the camera orientation in tilt and pan with respect to the world frame. 
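The reported tilt and pan, together with the UAV position from the motion capture system, are what the camera pose \( \textbf{T} \) of (\ref{eqn:cam_extrinsic}) is assembled from. A small illustrative sketch is given below; the rotation order and sign conventions are assumptions made for the example and are not necessarily those used by the actual driver:
\begin{verbatim}
import numpy as np

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_pose(pan_deg, tilt_deg, r):
    # Assemble the 4x4 transformation matrix T of eqn (2) from the pan/tilt
    # reported by the UAV and the translation r from the motion capture system.
    C = rot_z(np.radians(pan_deg)) @ rot_y(np.radians(tilt_deg))
    T = np.eye(4)
    T[:3, :3] = C
    T[:3, 3] = r
    return T

T = camera_pose(pan_deg=0.0, tilt_deg=-83.0, r=np.array([1.0, 0.5, 0.5]))
\end{verbatim}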
The open-source Robot Operating System (ROS)~\cite{ros-icra09} was chosen to act as the central software node of the experimental setup. In these experiments, ROS uses a predetermined high-level flight plan and actual position and orientation measurements from the motion capture system to send low-level velocity and camera control commands wirelessly to the UAV. Images are captured from the UAV video stream and then analyzed in the specialized fragmentation analysis software. We use a macro to run the analysis automatically. \subsection{Rock Fragmentation Analysis} \label{sec:setup:split} For these experiments, Split-Desktop~\cite{split-10}, an industry standard software for image analysis in mining, was used. The main software parameters, such as the fines factor, were calibrated using sieve analysis data as a reference. The software receives an image and delineates particles using image segmentation, see Fig.~\ref{fig:delineationScale}. A scale object is then traced graphically to set the image scale assuming that the spherical scale object lies on the rock pile surface and that the surface is planar. Optionally, an image scale can be set uniformly or at the top and bottom edge of the image without graphical input assuming that the scale changes linearly from top to bottom. Fig.~\ref{fig:raw} gives an example of a raw photo imported into Split-Desktop, and Fig.~\ref{fig:delineationScale} illustrates the same photo after image segmentation. \begin{figure} \centering \includegraphics[width=.5\textwidth]{Figures/FigureRaw.jpg} \caption{Raw photo captured in automated aerial fragmentation analysis with scale object to determine image scale.} \label{fig:raw} \end{figure} \begin{figure} \centering \includegraphics[width=.5\textwidth]{Figures/FigureScale.jpg} \caption{Delineated photo in automated aerial fragmentation analysis. Scale object is measured and masked in light blue.} \label{fig:delineationScale} \end{figure} \subsection{Flight Plan} \label{sec:setup:flight} For these experiments, a flight plan was created to capture photos for a tilt angle of 83~degrees, at a fixed altitude of 0.5~meters above the rock pile base while ensuring no image overlap. The tilt angle was chosen so that the camera was directed as far downward as the UAV specification allowed such that the images are approximately perpendicular to the rock pile surface. This is following the suggestions made in the Split-Desktop software. In future work, adjusting the camera angle according to the pile geometry will be investigated. Fig.~\ref{fig:flight} illustrates the flight plan over the rock pile with planned and actual image capture locations for a sample trial. Each planned UAV location captures a single scale object near the center of the photo. This is a fair comparison to conditions in the mine environment since measurement devices are sparsely placed on a rock pile for rock fragmentation analysis campaigns such that the largest area possible can be captured. \begin{figure} \centering \includegraphics[width=.5\textwidth]{Figures/FigureFlightPlan.eps} \caption{Planned flight over the rock pile. Camera poses (in red) are included for a sample trial. Crosses indicate planned image capture locations and the dotted line represents the flight trajectory.} \label{fig:flight} \end{figure}
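As a complement to Fig.~\ref{fig:flight}, the sketch below shows one simple way such a set of capture waypoints could be generated programmatically; it is purely illustrative (the pile center, radius and spacing are hypothetical) and is not the planner used in these experiments:
\begin{verbatim}
import numpy as np

def perimeter_waypoints(center, radius, altitude, n_points=9, tilt_deg=-83.0):
    # Evenly spaced capture locations on a circle around the pile center,
    # each with a yaw facing the center and a fixed camera tilt.
    waypoints = []
    for k in range(n_points):
        theta = 2.0 * np.pi * k / n_points
        x = center[0] + radius * np.cos(theta)
        y = center[1] + radius * np.sin(theta)
        yaw = np.arctan2(center[1] - y, center[0] - x)
        waypoints.append({"x": x, "y": y, "z": altitude,
                          "yaw": yaw, "tilt_deg": tilt_deg})
    return waypoints

plan = perimeter_waypoints(center=(0.0, 0.0), radius=1.5, altitude=0.5)
\end{verbatim}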
\section{Introduction} \label{section:introduction} Recently, Artificial Intelligence has shown great advances in many varied research areas, but there is one critical area where limited progress has been shown: commonsense knowledge representation and commonsense reasoning \cite{mccarthy1989artificial,BBK01,BlB05,minsky2007emotion,DaM15}. The work introduced in this paper proposes to advance a step forward in this research line by providing a new black-box testing methodology of first-order logic (FOL) SUMO{}-based ontologies \cite{Niles+Pease'01} that exploits {WordNet}{} \cite{Fellbaum'98} and its mapping into SUMO{} \cite{Niles+Pease'03}. Formal ontology development is a discipline whose goal is to derive explicit formal specifications of the concepts in a domain and relations among them \cite{noy2001ontology,Gru09,StS09,ALR12}. As with other software artifacts, ontologies typically have to fulfill some previously specified requirements. Usually both the creation of ontologies and the verification of its requirements are manual tasks that require a significant amount of human effort. In the literature, some methodologies exist that collect the experience in ontology development \cite{GFC04} and, more specifically, in ontology verification \cite{GCC06}. Roughly speaking, the methodologies for validating functional requirements of ontologies are based on the use of {\it competency questions} (CQs) \cite{GrF95}. That is, according to the requirements of a given ontology, its {\it competency} is described by means of a set of goals or problems that the ontology is expected to answer. Thus, testing an ontology consists in checking whether its set of CQs is effectively answered by the ontology. In this sense, these methods can be classified as {\it black-box} testing \cite{MSB12} according to the classical definition in software engineering, since the definition of questions does not depend on the particular specification of knowledge proposed by the ontology. Black-box testing strategies have some disadvantages. For example, it is difficult to determine the coverage level of a set of tests, since different black-box tests can repeatedly check the same portions of software. Further, the process of obtaining CQs is not automatic but creative \cite{FGS13}. Depending on the size and complexity of the ontology, creating a suitable set of CQs is by itself a very challenging and costly task. \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={180pt,between origins},row sep={40pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=verb]| [ \subsumptionMappingTikZ{Planning} ] : \langle \synsetTikZ{schedule}{2}{v} \rangle & |[name=noun]| \langle \synsetTikZ{schedule}{1}{n} \rangle : [ \subsumptionMappingTikZ{Plan} ] \\ }; \draw[-latex] (verb) -- node[auto] {\(\langle result \rangle\)} (noun); \end{tikzpicture} \caption{Creation of competency questions using {WordNet}{}} \label{fig:introduction} \end{figure} In this paper, we propose a new method for the (semi-)automatic creation of CQs that enables the evaluation of the {\it competency} of SUMO{}-based ontologies in the sense proposed in \cite{GrF95}. Our proposal for the construction of CQs is based on several predefined question patterns that yield a large set of conjectures by using information from {WordNet}{} and its mapping into SUMO{}. 
A preliminary version of our method for the automatic creation of CQs has already been presented in \cite{ALR15}, where we also proposed an adaptation of the methodology for the evaluation of ontologies introduced in \cite{GrF95} to be automatically applied using automated theorem provers (ATPs). As far as we know, our proposals are the first attempts to exploit {WordNet}{} for the evaluation of SUMO{} and, in general, for the evaluation of knowledge-based resources of this kind. We illustrate our proposal for the creation of CQs using {WordNet}{} by means of the next example: the synsets (sets of synonyms) \synset{schedule}{2}{v} and \synset{schedule}{1}{n} ---which refer to the second sense of the verb {\it schedule} and the first sense of the noun {\it schedule} respectively (see Subsection \ref{subsection:WordNet})--- are related by the semantic relation \textPredicate{result} in {WordNet}{}, as depicted in Figure \ref{fig:introduction}.\footnote{We denote {WordNet}{} synsets and relations between chevrons (angle brackets). In addition, we denote the mapping information of each synset into SUMO{} separated by colon (:), where SUMO{} concepts are denoted between square brackets.} In the same figure, we also provide the mapping of \synset{schedule}{2}{v} and \synset{schedule}{1}{n} into SUMO{}: \synset{schedule}{2}{v} is connected to \subsumptionMapping{Planning} and \synset{schedule}{1}{n} is connected to \subsumptionMapping{Plan}, where the symbol $+$ refers to the {\it{subsumption}}{} mapping relation (see Subsection \ref{subsection:WordNet}). Roughly speaking, the mapping states that the semantics of the synsets \synset{schedule}{2}{v} and \synset{schedule}{1}{n} is more specific than the semantics of \textConstant{Planning} and \textConstant{Plan} ---i.e., \textConstant{Planning} and \textConstant{Plan} are more general concepts than \synset{schedule}{2}{v} and \synset{schedule}{1}{n}. Using the above information, we obtain a new conjecture by stating the same fact in terms of SUMO{}: that is, {\it ``\textConstant{Plan} is \textPredicate{result} of a process of \textConstant{Planning}''}. Indeed, we can propose two different conjectures (CQs) on the basis of the knowledge in Figure \ref{fig:introduction}. In the first one, the statement is assumed to be true in the ontology:\footnote{Assuming that the knowledge in the ontology and {WordNet}{} is correct, and also that the mapping from {WordNet}{} to the ontology is correct, we consider that statement (\ref{goal:PlanPlanning}) is true according to our commonsense knowledge interpretation.} \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{exists} \; ( \variable{X} \; \variable{Y} ) & \label{goal:PlanPlanning} \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Planning} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{Y} \; \constant{Plan} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{result} \; \variable{X} \; \variable{Y} ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}In the second one, which is obtained by the negation of (\ref{goal:PlanPlanning}), we assume that the statement is false: that is, that {\it ``\textConstant{Plan} is not \textPredicate{result} of any process of \textConstant{Planning}''}. 
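To give a flavour of how such a pair can be produced mechanically, the following sketch (an illustration of the pattern, not our actual implementation) builds the positive conjecture and its negation in SUO-KIF-like syntax from a morphosemantic link and the SUMO{} concepts to which the two synsets are mapped:
\begin{verbatim}
def make_cq_pair(relation, verb_concept, noun_concept):
    # E.g. relation='result', verb_concept='Planning', noun_concept='Plan'
    # (the mapping of the two synsets in Figure 1).
    positive = ("(exists (?X ?Y)\n"
                "  (and\n"
                "    ($instance ?X {0})\n"
                "    ($instance ?Y {1})\n"
                "    ({2} ?X ?Y)))").format(verb_concept, noun_concept, relation)
    negative = "(not\n" + positive + ")"
    return positive, negative

pos, neg = make_cq_pair("result", "Planning", "Plan")
print(pos)   # corresponds to conjecture (1)
print(neg)   # its negation: no Plan is the result of any Planning process
\end{verbatim}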
By proceeding in this way, we obtain around 7,500 pairs of CQs on the basis of the information of {WordNet}{} using additional {WordNet}{} relations and question patterns. The contributions of this paper are manifold. First, we present an evolved version of our methodology for the evaluation of FOL ontologies using ATPs. As introduced in \cite{ALR15}, our proposal is an adaptation of the methodology described in \cite{GrF95} for the design and evaluation of ontologies. Second, we propose a novel method for the (semi-)automatic creation of CQs that relies on a small set of question patterns. The proposed set of CQs enables the evaluation of a) the competency of ontologies derived from SUMO{}, b) the mapping between {WordNet}{} and SUMO{}, c) the knowledge in {WordNet}{}, and d) ATPs and other tools for automated reasoning. To the best of our knowledge, our proposal is the first attempt to exploit the information in {WordNet}{} and its mapping into SUMO{} for the automatic evaluation of knowledge-based resources using FOL ATPs. Third, we summarize the results of an automatic evaluation of the competency of several translations of SUMO{} into first-order logic (FOL) and the performance of various FOL ATPs by means of the adapted evaluation method proposed in \cite{ALR15}. Fourth, we report on the evaluation of the set of resulting CQs according to different quality criteria. On one hand, we automatically check its level of coverage with respect to the evaluated ontologies by parsing the proofs provided by ATPs. On the other hand, we perform a manual evaluation of a sample of the CQs and analyze in detail their results by considering the quality of proposed conjectures, the mapping information of the involved synsets and the knowledge in the ontology. {\it Outline of the paper}. In order to make the paper self-contained, in the following section we review the state-of-the-art in automatic evaluation of SUMO{}-based ontologies using CQs. Our revision includes the existing translations of SUMO{} into FOL, the most successful FOL ATPs and the previously proposed CQs. In Section \ref{section:methodology}, we describe our methodology for the automatic evaluation of ontologies using ATPs. Next, in Section \ref{section:CQs} we introduce our proposal for the (semi-)automatic creation of CQs by exploiting the knowledge in {WordNet}{} and its mapping into SUMO{}, with the purpose of evaluating SUMO{}-based ontologies. The different question patterns proposed for the creation of CQs are described in Sections \ref{section:MultipleMappingPattern} to \ref{section:ProcessPatterns}. Then, we report on our experimental evaluation of the competency of some FOL translations of SUMO{}, the performance of FOL ATPs and the quality of the proposed CQs in Section \ref{section:experimentation}. Finally, we provide some conclusions and discuss future work in Section \ref{section:conclusions}. \section{State of the art} \label{section:art} In this section, we review the state-of-the-art in automatic evaluation of SUMO{}-based ontologies. For this purpose, we focus on the description of the resources that have been proposed and used in the literature for the evaluation of SUMO{}-based ontologies using CQs. First, we introduce SUMO{} and its transformations into FOL in the following subsection. Next, we describe the most successful state-of-the-art FOL ATPs in Subsection \ref{subsection:ATPs}. 
Finally, we review the CQs that have been previously proposed for the evaluation of SUMO{}-based ontologies in Subsection \ref{subsection:CQs}. \subsection{SUMO{} and its Transformations into FOL} \label{subsection:SUMO} SUMO{}\footnote{\url{http://www.ontologyportal.org}} \cite{Niles+Pease'01} has its origins in the nineties, when a group of engineers from the IEEE Standard Upper Ontology Working Group pushed for a formal ontology standard. Their goal was to develop a standard upper ontology to promote data interoperability, information search and retrieval, automated inference and natural language processing. SUMO{} is expressed in SUO-KIF (Standard Upper Ontology Knowledge Interchange Format \cite{Pea09}), which is a dialect of KIF (Knowledge Interchange Format \cite{Richard+'92}). Both KIF and SUO-KIF can be used to write FOL formulas, but their syntax goes beyond FOL. Consequently, SUMO{} cannot be directly used by FOL ATPs without a suitable transformation \cite{ALR12}. With respect to higher-order aspects of SUMO{}, an additional translation is required to enable the use of SUMO{} by means of pure higher-order theorem provers \cite{PeB13}. Several different proposals for converting large portions of SUMO{} into a FOL ontology exist. In \cite{PeS07}, the authors report some preliminary experimental results evaluating the query timeout for different options when translating SUMO{} into FOL. Evolved versions of the translation described in \cite{PeS07} can be found in the {\it Thousands of Problems for Theorem Provers} (TPTP) problem library\footnote{\url{http://www.tptp.org}} \cite{Sut09} (hereinafter TPTP-SUMO{}), but this translation has not been maintained since TPTP problem library version v5.4.0 (the current TPTP version is v7.0.0). Following the approach of \cite{HoV06}, in \cite{ALR12} we use ATPs for reengineering around 88\% of SUMO{}, obtaining Adimen-SUMO{} (v2.2). We are continuously evolving and improving Adimen-SUMO{} by correcting some of the defects present in SUMO{}. As a result of this process, we have corrected more than 100 defective axioms in the current version of Adimen-SUMO{} (v2.6). Both TPTP-SUMO{} and Adimen-SUMO{} inherit information from the top and the middle levels of SUMO{} (from now on, the {\it core} of SUMO{}), thus not considering the information from the domain ontologies. The knowledge in SUMO{} is organized around the notions of {\it object} and {\it class} ---the main SUMO{} concepts. These concepts are respectively defined in Adimen-SUMO{} by means of the {\it meta}-predicates \textPredicate{\$instance} and \textPredicate{\$subclass}. SUMO{} objects and classes are not disjoint, since every SUMO{} class is defined to be an instance of \textConstant{class}, and thus every SUMO{} class is also a SUMO{} object. Additionally, SUMO{} also differentiates between {\it relations} and {\it attributes}. In particular, SUMO{} distinguishes between {\it individual} relations and attributes ---that is, instances of the SUMO{} classes \textConstant{Relation} and \textConstant{Attribute} respectively--- and {\it classes} of relations and attributes ---that is, subclasses of the SUMO{} classes \textConstant{Relation} and \textConstant{Attribute} respectively. SUMO{} provides specific predicates for dealing with relations and attributes. Amongst others, we currently use the following ones in Adimen-SUMO{}: \begin{itemize} \item \textPredicate{subrelation}, which relates two individual SUMO{} relations (that is, two instances of the SUMO{} class \textConstant{Relation}).
For example, the following SUMO{} axiom states that \textConstant{member} is subrelation of \textConstant{part}: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \predicate{subrelation} \; \constant{member} \; \constant{part} ) & \label{axiom:part} \end{flalign} \end{footnotesize} \item \textPredicate{subAttribute}, which relates two individual SUMO{} attributes (that is, two instances of the SUMO{} class \textConstant{Attribute}). For example, the following SUMO{} axiom states that \textConstant{Headache} is subattribute of \textConstant{Pain}: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \predicate{subAttribute} \; \constant{Headache} \; \constant{Pain} ) & \label{axiom:Pain} \end{flalign} \end{footnotesize} \item \textPredicate{holds$^k$}, which relates an individual SUMO{} relation (that is, an instance of the SUMO{} class \textConstant{Relation}) with a $k$-tuple of SUMO{} concepts. For example, the following Adimen-SUMO{} formula is inherited from the SUMO{} axiom that characterizes transitive relations: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{forall} \; ( \variable{REL} ) & \label{axiom:TransitiveRelation} \\ & \hspace{20pt} ( \connective{<=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{REL} \; \constant{TransitiveRelation} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{forall} \; ( \variable{INST1} \; \variable{INST2} \; \variable{INST3} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$holds3} \; \variable{REL} \; \variable{INST1} \; \variable{INST2} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$holds3} \; \variable{REL} \; \variable{INST2} \; \variable{INST3} ) ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$holds3} \; \variable{REL} \; \variable{INST1} \; \variable{INST3} ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} \item \textPredicate{attribute}, which relates a SUMO{} object with an individual SUMO{} attribute (that is, an instance of the SUMO{} class \textConstant{Attribute}). 
For example, in the next SUMO{} axiom the predicate \textPredicate{attribute} is used for the characterization of \textPredicate{subAttribute}: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{forall} \; ( \variable{ATTR1} \; \variable{ATTR2} ) & \label{axiom:subAttribute} \\ & \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{subAttribute} \; \variable{ATTR1} \; \variable{ATTR2} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{forall} \; ( \variable{OBJ} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{attribute} \; \variable{OBJ} \; \variable{ATTR1} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{attribute} \; \variable{OBJ} \; \variable{ATTR2} ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} \end{itemize} For simplicity, from now on we denote the nature of SUMO{} concepts by adding as subscript the symbols $o{}$ (SUMO{} objects that are neither classes nor individual relations nor individual attributes), $c{}$ (SUMO{} classes that are neither classes of relations nor classes of attributes), $r{}$ (individual SUMO{} relations), $a{}$ (individual SUMO{} attributes), $R{}$ (classes of SUMO{} relations) and $A{}$ (classes of SUMO{} attributes). For example: \SUMOObject{YearDuration}, \SUMOClass{Artifact}, \SUMOIndividualRelation{customer}, \SUMOIndividualAttribute{HotTemperature}, \SUMOClassOfRelations{TranstiveRelation} and \SUMOClassOfAttributes{BreakabilityAttribute}. \begin{table}[t] \centering \begin{tabular} {lrrrrrr} \hline \\[-10pt] \multirow{2}{*}{} & \multirow{2}{*}{\hspace{10pt} {\bf SUMO{}}} & \multicolumn{3}{c}{{\bf TPTP-SUMO{}}} & \multicolumn{2}{r}{{\bf Adimen-SUMO{}}} \\ \multirow{2}{*}{} & \multirow{2}{*}{} & \multicolumn{3}{c}{v5.3.0} & v2.2 & v2.6 \\ \hline \\[-10pt] Objects & 20,168 & & 2,920 & & 940 & 1,007 \\ Classes & 5,595 & \hspace{14pt} & 2,086 & & 2,093 & 2,120\\ Relations & 369 & & 208 & & 207 & 207 \\ Attributes & 2,181 & & 68 & & 67 & 66 \\ \hline \end{tabular} \caption{\label{table:SUMOFigures} Some figures about SUMO{}, TPTP-SUMO{} and Adimen-SUMO{}} \end{table} In Table \ref{table:SUMOFigures} we provide some figures comparing the explicit content of SUMO{}, TPTP-SUMO{} and Adimen-SUMO{}. In particular, the number of objects, classes, relations (both individual relations and classes of relations) and attributes (both individual attributes and classes of attributes) that are explicitly defined. The most significant difference between TPTP-SUMO{} and Adimen-SUMO{} is the number of explicitly defined objects, which is due to the fact that during the FOL transformation many objects that are implicitly defined in the core of SUMO{} are explicitly introduced in TPTP-SUMO{}. On the contrary, the translation from SUMO{} into Adimen-SUMO{} is based on a small set of axioms, which provide the axiomatization of SUMO{} {\it meta}-predicates. Apart from \SUMOIndividualRelation{\$instance} and \SUMOIndividualRelation{\$subclass} for the definition of objects and classes, some of these {\it meta}-predicates are \SUMOIndividualRelation{\$disjoint} and \SUMOIndividualRelation{\$partition}. The axiomatization of these {\it meta}-predicates, which is essential for the transformation of SUMO{} knowledge into FOL formulas, cannot be directly inherited from SUMO{} (see \cite{ALR12}). 
The transformation also adds new axioms for a suitable characterization of SUMO{} types, variable-arity relations and \HoldsIndividualRelation{\$holds}{k} predicates, which simulate the use of variable-predicates in FOL formulas. Nevertheless, Adimen-SUMO{} (and also TPTP-SUMO{}) does not include most of the instances defined in SUMO{} since domain ontologies are not translated. To overcome this problem, we include the following axiom in Adimen-SUMO{} v2.4: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{forall} \; ( \variable{CLASS} ) & \label{axiom:nonEmptyClasses} \\ & \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$subclass} \; \variable{CLASS} \; \constant{Entity} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{exists} \; ( \variable{THING} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{THING} \; \variable{CLASS} ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} In this fashion, we ensure the existence in Adimen-SUMO{} (v2.4 or newer) of some instance of every SUMO{} class although domain ontologies are not translated. \subsection{{WordNet}{} and its Mapping to SUMO{}} \label{subsection:WordNet} {WordNet}{} \cite{Fellbaum'98} is a large lexical database where nouns, verbs, adjectives and adverbs are grouped into sets of synonyms ({\it synsets}), each expressing a distinct concept. Each synset refers to a word sense using the following format: \synset{word}{s}{p}, where $s$ is the sense number and $p$ is the part-of-speech ($n$ for nouns, $v$ for verbs, $a$ for adjectives and $s$ for satellites). \begin{figure}[t] \centering \begin{tikzpicture} \node at (-3, 2) {\synset{blistering}{2}{s}~~}; \node at (-3, 1) {\synset{warming}{2}{s}~~}; \node at (-3, 0) {\synset{torrid}{3}{s}~~}; \node at (-3,-1) {\synset{heated}{1}{s}~~}; \node at (-3,-2) {\synset{tropical}{4}{s}~~}; \node at ( 0, 0) {\synset{hot}{1}{a}}; \node at ( 1, 0) {/}; \node at ( 2, 0) {\synset{cold}{1}{a}}; \node at ( 5, 2) {~~\synset{gelid}{1}{s}}; \node at ( 5, 1) {~~\synset{frosty}{3}{s}}; \node at ( 5, 0) {~~\synset{heatless}{1}{s}}; \node at ( 5,-1) {~~\synset{refrigerated}{1}{s}}; \node at ( 5,-2) {~~\synset{shivery}{1}{s}}; \draw [-latex] (-2, 2) -- (-0.5, 0.5); \draw [-latex] (-2, 1) -- (-0.6, 0.25); \draw [-latex] (-2, 0) -- (-0.75, 0); \draw [-latex] (-2,-1) -- (-0.6,-0.25); \draw [-latex] (-2,-2) -- (-0.5,-0.5); \draw [latex-latex] ( 0.5, 0) -- ( 1.5, 0); \draw [-latex] (4, 2) -- (2.5, 0.5); \draw [-latex] (4, 1) -- (2.6, 0.25); \draw [-latex] (4, 0) -- (2.75, 0); \draw [-latex] (4,-1) -- (2.6,-0.25); \draw [-latex] (4,-2) -- (2.5,-0.5); \end{tikzpicture} \caption{Antonym-pairs} \label{fig:antonymPairs} \end{figure} Although superficially resembling a thesaurus, {WordNet}{} interlinks not just word forms but specific senses of words. Thus, the main relation in {WordNet}{} is synonymy, but synsets are interlinked by means of many conceptual-semantic and lexical relations such as the super- and subordinate relations hyperonymy and hyponymy. 
Amongst them, in this paper we focus on the following ones: \begin{itemize} \item {\it Morphosemantic Links} \cite{FOC09}, which are semantic relations between morphologically related verbs and nouns provided in the morphosemantic database.\footnote{Available at \url{http://wordnetcode.princeton.edu/standoff-files/morphosemantic-links.xls}.} Among the 14 proposed semantic relations, one can find {\it agent}, {\it instrument}, {\it result} and {\it event}. The first three ones relate a process (verb) with its corresponding agent/instrument/result (noun), while {\it event} relates nouns and verbs referring to the same process. For example, the synsets \synset{patent}{1}{v} and \synset{patentee}{1}{n} are related by {\it agent}, \synset{cool}{1}{v} and \synset{cooler}{1}{n} are related by {\it instrument}, \synset{schedule}{2}{v} and \synset{schedule}{1}{n} are related by {\it result} (see Figure \ref{fig:introduction}), and the synsets \synset{kill}{10}{v} and \synset{killing}{2}{n} are related by {\it event}. \item {\it antonymy} and {\it similarity} relations, which are used to organize adjectives as follows: {\it antonymy} connects pairs of adjectives with opposite semantics, and each of these adjectives in turn is linked to semantically comparable adjectives ---called {\it satellites}--- by {\it similarity}. For example, the adjectives \synset{hot}{1}{a} and \synset{cold}{1}{a} are related by {\it antonymy}, and the adjectives \synset{blistering}{2}{s}, \synset{warming}{2}{s}, \synset{torrid}{3}{s}, \synset{heated}{1}{s} and \synset{tropical}{4}{s} are satellites of \synset{hot}{1}{a} (see Figure \ref{fig:antonymPairs}). In addition, {\it antonymy} is inherited by {\it similarity}, which enables the extension of the set of pairs of adjectives related by {\it antonymy}. In the above example, each satellite of \synset{hot}{1}{a} (resp. \synset{cold}{1}{a}) is antonym of \synset{cold}{1}{a} (resp. \synset{hot}{1}{a}) and, furthermore, is also an antonym of each satellite of \synset{cold}{1}{a} (resp. \synset{hot}{1}{a}), thus obtaining a set of 36 antonym-pairs from the information in Figure \ref{fig:antonymPairs}. In addition, {\it antonymy} also relates nouns or verbs with opposite semantics. For example, \synset{natural\_object}{1}{n} and \synset{artifact}{1}{n} are related by the semantic relation {\it antonymy}. \end{itemize} {WordNet}{} is linked with SUMO{} by means of the mapping described in \cite{Niles+Pease'03}. This mapping connects {WordNet}{} synsets to terms in SUMO{} using three relations: {\it{equivalence}}{}, {\it{subsumption}}{} and {\it{instance}}{}. Additionally, the mapping also uses the complementaries of {\it{equivalence}}{} and {\it{instance}}{}. We denote mapping relations by concatenating the symbols `$=$' ({\it{equivalence}}{}), `$+$' ({\it{subsumption}}), `$@$' ({\it{instance}}), `$\widehat{\equivalenceMappingSymbol}$' (complementary of {\it{equivalence}}) and `$\widehat{\subsumptionMappingSymbol}$' (complementary of {\it{subsumption}}) to the corresponding SUMO{} concept. For example, the synsets \synset{horse}{1}{n}, \synset{education}{4}{n}, \synset{zero}{1}{a}, \synset{natural\_object}{1}{n} and \synset{dark}{1}{a} are connected to \equivalenceMapping{\SUMOClass{Horse}}, \subsumptionMapping{\SUMOClass{EducationalProcess}}, \instanceMapping{\SUMOClass{Integer}}, \negatedEquivalenceMapping{\SUMOClass{Artifact}} and \negatedSubsumptionMapping{\SUMOClass{RadiatingLight}} respectively. 
{\it{equivalence}}{} denotes that the related {WordNet}{} synset and SUMO{} concept are equivalent in meaning, whereas {\it{subsumption}}{} and {\it{instance}}{} indicate that the semantics of the {WordNet}{} synset is less general than the semantics of the SUMO{} concept. In particular, {\it{instance}}{} is used when the semantics of the {WordNet}{} synsets refers to a particular member of the class to which the semantics of the SUMO{} concept is referred.\footnote{Note that {\it{instance}}{} denotes the relation that is used in the mapping between {WordNet}{} and SUMO{} (for example, in \instanceMapping{Integer}), while \SUMOIndividualRelation{\$instance} denotes the meta-predicate that is used in the axiomatization of Adimen-SUMO{}.} From now on, we say that a {WordNet}{} synset is {\it less general} than the SUMO{} concepts to which the synset is connected using {\it{subsumption}}{} or {\it{instance}}{}. {WordNet}{} v3.0 consists of 117,659 synsets: 82,115 nouns, 13,767 verbs, 18,156 adjectives and 3,621 adverbs. From the 82,115 noun synsets, 576 synsets are connected to more than one SUMO{} concept. Furthermore, 1,560 adjective synsets and 179 adverb synsets are not connected to any SUMO{} concept. All the remaining synsets are connected to a single SUMO{} concept. \subsection{FOL Automated Theorem Provers} \label{subsection:ATPs} The automatic application of methodologies based on CQs requires the use of ATPs. State-of-the-art ATPs for FOL are highly sophisticated systems that have been demonstrated to provide advanced reasoning support to expressive ontologies. Since 1993, many researchers have used the {\it Thousands of Problems for Theorem Provers} (TPTP) problem library as an appropriate and convenient basis for ATP system evaluation \cite{Sut09}, and TPTP has become the {\it de facto} standard set of test problems for classical FOL ATP systems. The performance of ATP systems is evaluated every year in the {\it CADE ATP System Competition} (CASC) \cite{PSS02,SuS06} in the context of a set of problems chosen from the TPTP problem library and applying a specified time limit for each individual problem. Among the systems that have ever participated in CASC, we have selected the ones that are of special interest for reasoning with FOL ontologies, which are Vampire \cite{RiV02} and E \cite{Sch02}. Next, we describe those systems and justify our selection. The first one is Vampire\footnote{\url{http://www.vprover.org}} \cite{RiV02}, an ATP system for first-order classical logic which has been the winner of the FOF\footnote{First-Order Form non-propositional theorems (axioms with a provable conjecture).} and LTB\footnote{First-order form theorems from Large Theories, presented in Batches.} divisions in CASC during several years. Vampire implements the calculi of ordered binary resolution and superposition for handling equality, and it also implements the Inst-gen calculus. Vampire uses various standard redundancy criteria and implements several simplification techniques for pruning the search space, such as subsumption, tautology deletion, subsumption resolution and rewriting by ordered unit equalities. The reduction ordering is the Knuth-Bendix Ordering. In this paper, we consider four different versions of Vampire that have participated in CASC since 2012: v2.6, v3.0, v4.0 and v4.1. Vampire v2.6 is the CASC-J6 (2012), CASC-24 (2013) and CASC-J7 (2014) FOF division winner, and the CASC-J6 (2012) LTB division winner. 
Vampire v3.0 obtained 2$^{nd}$ place in the CASC-24 (2013) FOF division, but performed better than the winner (Vampire v2.6), and was used for the experimentation reported in \cite{ALR15}. Vampire v4.0 is the CASC-25 (2015), CASC-J8 (2016) and CASC-26 (2017) LTB division winner, the CASC-25 (2015) and CASC-J8 (2016) FOF division winner, and the CASC-25 (2015) FNT\footnote{First-order form non-propositional Non-Theorems.} and EPR\footnote{Effectively PRopositional clause normal form theorems and non-theorems.} divisions winner. In addition, Vampire v4.0 obtained $2^{nd}$ place in the CASC-26 (2017) FOF division. Finally, Vampire v4.1 is the CASC-J8 (2016) and CASC-26 (2017) FNT and TFT\footnote{Typed First-order Theorems.} divisions winner, and also achieved the 2$^{nd}$ place in the CASC-J8 (2016) FOF and LTB divisions. The second system that we have selected is E \cite{Sch02}, a theorem prover for full FOL with equality which consists of an (optional) clausifier for pre-processing full first-order formulae into clausal form, and a saturation algorithm implementing an instance of the superposition calculus with negative literal selection and a number of redundancy elimination techniques. Among other awards, E has been one of the top three ATP systems in the FOF division of CASC since 2012. E has also been used as a subcomponent by some other competitors in CASC. For its evaluation, we use E v2.0, which is available at \url{http://www.eprover.org}. \subsection{Available Competency Questions for SUMO{}} \label{subsection:CQs} In this subsection, we review the CQs that have been proposed in the literature for the evaluation of SUMO{}-based ontologies. We classify those CQs into 2 sets, depending on the nature of their creation method. On one hand, the first set consists of only 64 CQs that have been manually created ({\it creative} CQs). This set includes the 33 CQs belonging to the {\it Commonsense Reasoning} (CSR) domain of the TPTP problem library that is based on SUMO{}. For example, the following conjecture that belongs to the CSR domain of the TPTP problem library \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{forall} \; ( \variable{ORG1} \; \variable{ORG2} \; \variable{ORG3} ) & \label{goal:SiblingMother} \\ & \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{mother} \; \variable{ORG1} \; \variable{ORG2} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{sibling} \; \variable{ORG1} \; \variable{ORG3} ) ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{mother} \; \variable{ORG3} \; \variable{ORG2} ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}states that {\it ``Siblings have the same mother''} as follows: the mother of an organism \textVariable{ORG3} is \textVariable{ORG2} whenever \textVariable{ORG2} is mother of some other organism \textVariable{ORG1} such that \textVariable{ORG1} and \textVariable{ORG3} are siblings. In the past, the CSR domain was part of the set of eligible problems for the LTB division in CASC, but is not currently used. In addition, we have proposed 5 creative CQs in \cite{ALR12} and 26 creative CQs in \cite{ALR15}.
For example, the conjectures {\it ``Plants do not suffer from headache''} \cite{ALR12} and {\it ``Herbivores eat animals''} \cite{ALR15}: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{=>} & \label{goal:HeadachePlant} \\ & \hspace{20pt} ( \predicate{attribute} \; \variable{OBJ} \; \constant{Headache} ) & \nonumber \\ & \hspace{20pt} ( \connective{not} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{OBJ} \; \constant{Plant} ) ) ) & \nonumber \\[5pt] & ( \connective{exists} \; ( \variable{HERBIVORE} \; \variable{ANIMAL} \; \variable{EATING} ) & \label{goal:HerbivoreFalsityTest} \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{HERBIVORE} \; \constant{Herbivore} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{ANIMAL} \; \constant{Animal} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{EATING} \; \constant{Eating} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{agent} \; \variable{EATING} \; \variable{HERBIVORE} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{patient} \; \variable{EATING} \; \variable{ANIMAL} ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}Obviously, conjecture (\ref{goal:HeadachePlant}) is assumed to be true and conjecture (\ref{goal:HerbivoreFalsityTest}) is assumed to be false according to commonsense knowledge. On the other hand, the second set consists of the CQs that have been obtained by following a (semi-)automatic process ({\it automatically generated} CQs). To the best of our knowledge, the first proposal for the (semi-)automatic creation of CQs is described in \cite{ALR15}, where we introduced a preliminary version of the method described in this paper for the exploitation of {WordNet}{} and its mapping into SUMO{}. Among other restrictions, we focused on synsets connected to SUMO{} classes, and thus we discarded much of the mapping information. The resulting set of 7,112 CQs have been used for the automatic evaluation of ATP systems reported in \cite{ALR16}. We provide more details about this preliminary version of our proposal in Section \ref{section:CQs}. In addition, we have applied the same methodology for the creation of CQs on the basis of the meronymy relations of {WordNet}{}, as described in \cite{AlR18,AGR18}. The resulting benchmark consists of 4,290 CQs. \section{Automatic Evaluation of FOL Ontologies using CQs} \label{section:methodology} In this section, we summarize our adaptation of the methodology for the design and evaluation of ontologies introduced in \cite{GrF95} to be automatically applied using state-of-the-art ATPs, as initially proposed in \cite{ALR15}. In \cite{GrF95}, the authors propose to evaluate the expressiveness of an ontology by proving completeness theorems w.r.t. a set of CQs: that is, the conditions under which the solutions to the CQs are complete. The proof of completeness theorems requires checking whether a given CQ is entailed by the ontology or not: that is, given an ontology $\Phi$ and a conjecture $\phi$, we must decide if $\Phi \models \phi$. 
For this purpose, in \cite{ALR15} we propose to use ATPs such as Vampire \cite{RiV02} and E \cite{Sch02} that work by refutation\footnote{The proof that a conjecture is entailed by an ontology consists in demonstrating that the formula resulting from the conjunction of the ontology and the negation of the conjecture is unsatisfiable.} within some given execution-time and memory limits. Theoretically, if the conjecture is entailed by the ontology, then ATPs will eventually find a refutation given enough time (and space). However, theorem proving in FOL is a very hard problem, so it is not reasonable to expect ATPs to find a proof for every entailed conjecture \cite{KoV13}. Thus, if ATPs can find a proof for a conjecture $\phi$ in an ontology $\Phi$, then we can be sure that the corresponding CQ is entailed by $\Phi$: that is, $\Phi \models \phi$. On the contrary, if ATPs cannot find a proof, we do not know if (a) the conjecture is not entailed by the ontology ($\Phi \not\models^? \phi$) or (b) although the conjecture is entailed, ATPs have not been able to find the proof within the provided execution-time and memory limits ($\Phi \models^? \phi$). Due to the semi-decidability problem of FOL, increasing the execution-time and memory limits is not a solution for conjectures that are not entailed. For the same reason, using other systems that do not work by refutation (for example, by model generation) is not a general solution. Furthermore, we also propose the division of the set of CQs into two classes: {\it truth-tests} and {\it falsity-tests}, depending on whether we expect the conjecture to be entailed by the ontology or not. An example of truth-test is conjecture (\ref{goal:SiblingMother}) ---{\it ``Siblings have the same mother''}---, which belongs to the CSR domain of the TPTP problem library, because it is expected to be entailed. On the contrary, conjecture (\ref{goal:HerbivoreFalsityTest}) ---{\it ``Herbivores eat animals''}---, which belongs to the set of CQs proposed in \cite{ALR15}, is a falsity-test since it is not expected to be entailed by the ontology. In order to overcome the problem of deciding whether CQs are entailed or not by the ontology using ATPs, we propose the classification of CQs as either (i) {\it passing}, (ii) {\it non-passing} or (iii) {\it unknown} using the following criteria: \begin{itemize} \item If ATPs find a proof, then {\it truth-tests} are classified as {\it passing} since the corresponding conjectures are expected to be entailed, while {\it falsity-tests} are classified as {\it non-passing}, because the corresponding conjectures are expected not to be entailed. For example, ATPs easily prove that conjecture (\ref{goal:SiblingMother}) is entailed by Adimen-SUMO{} v2.6, thus the truth-test is classified as {\it passing}. \item Otherwise, if no proof is found, then we classify both {\it truth-} and {\it falsity-tests} as {\it unknown} because we do not know whether the corresponding conjectures are entailed or not. For example, conjecture (\ref{goal:HerbivoreFalsityTest}) is classified as {\it unknown} according to Adimen-SUMO{} v2.6. 
\end{itemize} \setlength\dashlinedash{0.2pt} \setlength\dashlinegap{5pt} \setlength\arrayrulewidth{0.3pt} \begin{table}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular} {ll;{2.5pt/2.5pt}ll;{2.5pt/2.5pt}ll;{2.5pt/2.5pt}l} \hline \multicolumn{2}{c;{2.5pt/2.5pt}}{\bf Problem} & \multicolumn{4}{c}{\bf Condition} & \multicolumn{1}{;{2.5pt/2.5pt}c}{\multirow{2}{*}{\bf Assessment}} \\ \multicolumn{2}{c;{2.5pt/2.5pt}}{\bf classification} & \multicolumn{2}{c}{\bf Truth-test} & \multicolumn{2}{c}{\bf Falsity-test} & \multicolumn{1}{;{2.5pt/2.5pt}c}{\multirow{2}{*}{}} \\ \hline \multirow{4}{*}{Solved} & \multirow{2}{*}{Entailed} & \multirow{2}{*}{Passing} & \multirow{2}{*}{($\Phi \models \TT{\phi}$)} & \multirow{2}{*}{Unknown} & ($\Phi \models^? \FT{\phi}$) & $\phi$ is redundant knowledge \\ \multirow{4}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & ($\Phi \not\models^? \FT{\phi}$) & $\Phi$ is validated against $\phi$ \\ \cdashline{3-7}[2.5pt/2.5pt] \multirow{4}{*}{} & \multirow{2}{*}{Incompatible} & \multirow{2}{*}{Unknown} & ($\Phi \models^? \TT{\phi}$) & \multirow{2}{*}{Non-passing} & \multirow{2}{*}{($\Phi \models \FT{\phi}$)} & $\Phi$ and $\phi$ are incompatible \\ \multirow{4}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & ($\Phi \not\models^? \TT{\phi}$) & \multirow{2}{*}{} & \multirow{2}{*}{} & There is a defect in $\Phi$ \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{} & Passing & ($\Phi \models \TT{\phi}$) & Non-passing & ($\Phi \models \FT{\phi}$) & $\Phi$ is inconsistent \\ \cdashline{3-7}[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{\multirow{3}{*}{Unsolved}} & \multirow{3}{*}{Unknown} & \multirow{2}{*}{($\Phi \models^? \TT{\phi}$)} & \multirow{3}{*}{Unknown} & \multirow{2}{*}{($\Phi \models^? \FT{\phi}$)} & Is $\phi$ new knowledge? \\ \multicolumn{2}{l;{2.5pt/2.5pt}}{\multirow{3}{*}{}} & \multirow{3}{*}{} & \multirow{2}{*}{($\Phi \not\models^? \TT{\phi}$)} & \multirow{3}{*}{} & \multirow{2}{*}{($\Phi \not\models^? \FT{\phi}$)} & Is $\phi$ redundant? \\ \multicolumn{2}{l;{2.5pt/2.5pt}}{\multirow{3}{*}{}} & \multirow{3}{*}{} & \multirow{3}{*}{} & \multirow{3}{*}{} & \multirow{3}{*}{} & Is there any defect in $\Phi$? \\ \hline \end{tabular} } \caption{\label{table:Methodology} Evaluating FOL Ontologies Using ATPs} \end{table} As discussed for the example in Figure \ref{fig:introduction}, truth- and falsity-tests can be interpreted as complementary conjectures. That is, given a truth-test $\phi$, one can propose its negation $\neg \phi$ as falsity-test, and {\it vice versa}. 
For example, the following truth-test ---{\it ``Herbivores do not eat animals''}--- is obtained by the negation of (\ref{goal:HerbivoreFalsityTest}): \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{forall} \; ( \variable{HERBIVORE} \; \variable{ANIMAL} \; \variable{EATING} ) & \label{goal:HerbivoreTruthTest} \\ & \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{HERBIVORE} \; \constant{Herbivore} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{ANIMAL} \; \constant{Animal} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{EATING} \; \constant{Eating} ) ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{not} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{agent} \; \variable{EATING} \; \variable{HERBIVORE} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{patient} \; \variable{EATING} \; \variable{ANIMAL} ) ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}Conjecture (\ref{goal:HerbivoreTruthTest}) is classified as {\it passing} according to Adimen-SUMO{} v2.6. In the same way, we obtain a new falsity-test by negating conjecture (\ref{goal:SiblingMother}). Hence, in general we can assume that any set of CQs that is used for the evaluation of FOL ontologies consists of complementary truth- and falsity-tests. Furthermore, from now on we consider a truth-test $\phi$ and its negative counterpart $\neg \phi$ as a single {\it problem} consisting of two conjectures. For the sake of simplicity, we denote each problem by its truth-test. Thus, the truth-test of a problem $\phi$ is $\phi$ itself, and the falsity-test of a problem $\phi$ is $\neg \phi$. In Table \ref{table:Methodology}, we describe the evaluation of a FOL ontology $\Phi$ on the basis of a set of problems that are assumed to be true using ATPs. For each problem, we distinguish four cases. In the first two cases, a problem $\phi$ is decided to be {\it solved} because ATPs find a proof for either its truth-test $\phi$ or its falsity-test $\neg \phi$. If ATPs prove only $\Phi \models \phi$ (that is, $\Phi \models^? \neg \phi$ and $\Phi \not\models^? \neg \phi$), then we know that the knowledge in $\phi$ is already included in the ontology and, consequently, we say that the problem $\phi$ is {\it entailed} by (also {\it compatible} with) the ontology $\Phi$. Otherwise, when ATPs prove only $\Phi \models \neg \phi$ (that is, $\Phi \models^? \phi$ and $\Phi \not\models^? \phi$), this reveals the existence of a defect in the ontology since we assume that $\phi$ is true. Therefore, we can say that the problem $\phi$ is {\it incompatible} with the ontology. In the last two cases, the problem $\phi$ remains {\it unsolved}. On one hand, if $\Phi$ is inconsistent then ATPs find a proof for its truth- and falsity-test, which are classified as passing and non-passing respectively. Since falsity-tests are obtained by the negation of truth-tests and a consistent formula cannot entail a formula and its negation, then we can be certain that $\Phi$ is inconsistent in this case. 
On the other hand, both the truth- and the falsity-test of a problem $\phi$ are classified as unknown because ATPs do not find any proof before running out of resources. Hence, we have no information for the evaluation of $\Phi$ according to the problem $\phi$ and, more specifically, we do not know whether: \begin{itemize} \item $\phi$ is new knowledge that could be included in $\Phi$ for improving the knowledge in the ontology. \item $\phi$ is either redundant ---that is, $\Phi$ already entails $\phi$--- or incompatible with $\Phi$ ---that is, $\Phi \models \neg \phi$---, since ATPs cannot find a proof within the given resources of time and memory. \end{itemize} \section{Automatic Creation of CQs Using {WordNet}{}} \label{section:CQs} In this section, we introduce our proposal for the creation of problems by exploiting {WordNet}{} and its mapping into SUMO{}, as introduced with the example in Figure \ref{fig:introduction}. Our proposal is a substantially evolved version of the method presented in \cite{ALR15}. Amongst other improvements, we now make use of the mapping relations between {WordNet}{} and SUMO{}, that were equally addressed in \cite{ALR15}, and we are now able to exploit additional {WordNet}{} information. In addition, we have also improved the process of obtaining a mapping between {WordNet}{} and the core of SUMO{}. Therefore, the set of CQs introduced in this work ---which is different from the one introduced in \cite{ALR15}--- enables richer exploitation of the knowledge in {WordNet}{} and its mapping into SUMO{}. In the following subsections, we first describe the method for obtaining a mapping from {WordNet}{} into Adimen-SUMO{} (Subsection \ref{subsection:AdimenSUMOMapping}). Then, we introduce the method for the translation of {WordNet}{} knowledge into Adimen-SUMO{} statements in Subsection \ref{subsection:AdimenSUMOStatements}. Finally, we focus on the description of the {WordNet}{} knowledge and the hypothesis that are the basis of our proposal in Subsection \ref{subsection:Exploiting}. \subsection{Obtaining a mapping between {WordNet}{} and the core of SUMO{}} \label{subsection:AdimenSUMOMapping} The mapping between {WordNet}{} and SUMO{} uses terms from the core ---top and middle levels--- of SUMO{}, but also from the domain ontologies. However, both TPTP-SUMO{} and Adimen-SUMO{} use only axioms from the core of SUMO{}. A full mapping between {WordNet}{} and the core of SUMO{} is obtained by means of the structural relations of SUMO{}: \SUMOIndividualRelation{\$instance}, \SUMOIndividualRelation{\$subclass}, \SUMOIndividualRelation{subrelation} and \SUMOIndividualRelation{subAttribute}. Since \SUMOIndividualRelation{\$subclass}, \SUMOIndividualRelation{subrelation} and \SUMOIndividualRelation{subAttribute} are transitive and, additionally, the relations \SUMOIndividualRelation{\$instance}, \SUMOIndividualRelation{subrelation} and \SUMOIndividualRelation{subAttribute} are inherited through \SUMOIndividualRelation{\$subclass}, it is not difficult to obtain the super-concepts of each SUMO{} concept. By proceeding in this way, for each SUMO{} concept that is not defined in the core of SUMO{} we have obtained its set of most-specific super-concepts that are defined in the core of SUMO{}. If a SUMO{} concept is already defined in the core of SUMO{}, then its set of most-specific SUMO{} concepts defined in the core of SUMO{} exclusively consists of itself. Additionally, we have manually corrected some minor and typographical errors affecting 293 SUMO{} concepts. 
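A minimal sketch of the computation of these sets of most-specific core super-concepts could look as follows (Python is used only for illustration; \texttt{direct\_supers} is assumed to be a dictionary with the direct super-concept edges induced by the structural relations above, and \texttt{core} the set of concepts defined in the core of SUMO{}):
\begin{footnotesize}
\begin{verbatim}
def all_supers(concept, direct_supers):
    """Transitive closure of the direct super-concept edges."""
    closure, pending = set(), list(direct_supers.get(concept, ()))
    while pending:
        c = pending.pop()
        if c not in closure:
            closure.add(c)
            pending.extend(direct_supers.get(c, ()))
    return closure

def most_specific_core_supers(concept, direct_supers, core):
    """Most-specific super-concepts of 'concept' defined in the core of SUMO."""
    if concept in core:
        return {concept}
    candidates = {c for c in all_supers(concept, direct_supers) if c in core}
    # Discard a candidate if some other candidate is more specific than it.
    return {c for c in candidates
            if not any(d != c and c in all_supers(d, direct_supers)
                       for d in candidates)}
\end{verbatim}
\end{footnotesize}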
To summarize, 24,906 SUMO{} concepts not defined in the core of SUMO{} are used in the {WordNet}{}-SUMO{} mapping, from which 14,472 concepts are related with several (more than one) super-concepts belonging to the core of SUMO{}, whereas 10,434 concepts are related with a single super-concept. \begin{figure} \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={60pt,between origins},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { & |[name=Cooking]| [ \subsumptionMappingTikZ{\SUMOClassTikZ{Cooking}} ] & & |[name=TopLevel]| \mbox{(Top level)} \\ |[name=synset]| \langle \synsetTikZ{frying}{1}{n} \rangle : & |[name=Frying]| [ \equivalenceMappingTikZ{\SUMOClassTikZ{Frying}} ] & & |[name=FoodOntology]| \mbox{({\it Food} ontology)} \\ }; \draw[-To,dotted](Frying) -- node[right] {\([\$subclass]\)} (Cooking); \end{tikzpicture} \caption{Obtaining a mapping between {WordNet}{} and Adimen-SUMO{}} \label{fig:AdimenSUMOMapping} \end{figure} Using the sets of most-specific super-concepts as described above, we obtain the mapping between each synset $ws$ of {WordNet}{} and the core of SUMO{} as follows: if $ws$ is already mapped into a concept in the core of SUMO{}, we simply keep the current mapping of $ws$; otherwise, if $ws$ is connected to a concept $C$ that is not defined in the core of SUMO{}, then we map $ws$ to each element of the set of most-specific super-concepts of $C$ in the core of SUMO{}. Additionally, in the latter case, the {\it{equivalence}}{} mapping relation is replaced with {\it{subsumption}}{}, since the super-concepts of $C$ are more general than $C$. For example, the synset \synset{frying}{1}{n} is connected to \equivalenceMapping{\SUMOClass{Frying}}, which belongs to the domain ontology {\it Food}. In the same domain ontology, \SUMOClass{Frying} is defined to be a subclass of \SUMOClass{Cooking}, which is defined in the top level of SUMO{}. That is, \SUMOClass{Frying} is not defined in the core of SUMO{}, but \SUMOClass{Cooking} is. Thus, we decide to connect \synset{frying}{1}{n} to \SUMOClass{Cooking} in the mapping from {WordNet}{} to the core of SUMO{}. However, instead of {\it{equivalence}}{}, we connect \synset{frying}{1}{n} to \SUMOClass{Cooking} using the {\it{subsumption}}{} mapping relation: that is, \subsumptionMapping{\SUMOClass{Cooking}} (see Figure \ref{fig:AdimenSUMOMapping}). It is worth noting that the complementaries of the relations {\it{equivalence}}{} and {\it{subsumption}}{} are only used with concepts belonging to the core of SUMO{} in the {WordNet}{}-SUMO{} mapping. As a result of this process, we obtain a mapping between all {WordNet}{} synsets and the core of SUMO{} except for 822 nouns, 24 verbs, 3,634 adjectives and 260 adverbs. In addition to the synsets that are not connected to any concept, this process also reveals the existence of synsets connected to concepts that were defined in older versions of SUMO{} but that are no longer available in the current version. For example, the synsets \synset{salmon}{1}{n} and \synset{architect}{2}{n} are connected to \equivalenceMapping{\SUMOClass{Salmon}} and \equivalenceMapping{\SUMOClass{Architect}}, which do not appear in recent versions of SUMO{}. In total, 113 concepts that are used in the {WordNet}{}-SUMO{} mapping are not currently defined in the ontology.
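Leaving those problematic cases aside, the construction of the mapping into the core of SUMO{} described above can be sketched as follows. This is a simplified Python fragment in which, for readability, each synset is assumed to be connected to a single concept, and \texttt{core\_supers} is assumed to be a precomputed dictionary from each SUMO{} concept to its set of most-specific super-concepts in the core, as in the previous sketch.
\begin{footnotesize}
\begin{verbatim}
def remap_to_core(mapping, core_supers, core):
    """Connect each synset to concepts of the core of SUMO, demoting
    'equivalence' to 'subsumption' whenever the original concept is
    replaced by its most-specific core super-concepts."""
    core_mapping = {}
    for synset, (relation, concept) in mapping.items():
        if concept in core:
            core_mapping[synset] = [(relation, concept)]  # keep the mapping
        else:
            demoted = 'subsumption' if relation == 'equivalence' else relation
            core_mapping[synset] = [(demoted, c)
                                    for c in sorted(core_supers[concept])]
    return core_mapping
\end{verbatim}
\end{footnotesize}
For instance, for \synset{frying}{1}{n}, which is connected by {\it{equivalence}}{} to \SUMOClass{Frying}, the only most-specific core super-concept of \SUMOClass{Frying} is \SUMOClass{Cooking}, so the sketch yields the pair ({\it{subsumption}}{}, \SUMOClass{Cooking}), in line with Figure \ref{fig:AdimenSUMOMapping}.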
In order to obtain a complete mapping into the core of SUMO{}, all synsets without a suitable mapping (around 4,700 synsets) are connected to the SUMO{} top-concept \SUMOClass{Entity} using {\it{subsumption}}{}: that is, \subsumptionMapping{\SUMOClass{Entity}}. In the resulting mapping, 1,104 noun synsets and 2 verb synsets are connected to multiple SUMO{} concepts ---the mapping of those synsets is used in the {\it Multiple mapping} category for the creation of CQs (see Section \ref{section:MultipleMappingPattern})---, whereas the remainder are connected to a single concept. \subsection{Translating the mapping information into the language of Adimen-SUMO{}} \label{subsection:AdimenSUMOStatements} In order to use the {WordNet}{}-SUMO{} mapping to obtain CQs, we have to characterize the mapping information using statements in the language of Adimen-SUMO{}. As described in Subsection \ref{subsection:AdimenSUMOMapping}, each {WordNet}{} synset is connected to SUMO{} concepts using {\it{equivalence}}{}, {\it{subsumption}}{} (or their complementaries) or {\it{instance}}{}. For example, the synsets \synset{horse}{1}{n}, \synset{pony}{1}{n} and \synset{Secretariat}{2}{n} are connected to \equivalenceMapping{\SUMOClass{Horse}}, \subsumptionMapping{\SUMOClass{Horse}} and \instanceMapping{\SUMOClass{Horse}}. Thus, in a literal (or strict) interpretation of the {WordNet}{}-SUMO{} mapping, \synset{horse}{1}{n} is exactly equivalent to the SUMO{} concept \SUMOClass{Horse}, while \synset{pony}{1}{n} is less general than \SUMOClass{Horse} and \synset{Secretariat}{2}{n} is an instance of \SUMOClass{Horse}. In order to translate the above interpretation of the mapping information into statements in the language of Adimen-SUMO{}, we might simply use {\it equality} in the case of the synset \synset{horse}{1}{n}. With respect to the last two synsets, we might use the meta-predicates \SUMOIndividualRelation{\$subclass} and \SUMOIndividualRelation{\$instance} respectively. Likewise, since \synset{male\_horse}{1}{n} is connected to both \subsumptionMappingOfConcept{\SUMOIndividualAttribute{Male}} and \subsumptionMappingOfConcept{\SUMOClass{Horse}}, we have that \synset{male\_horse}{1}{n} is less general than both \SUMOIndividualAttribute{Male} and \SUMOClass{Horse}. Hence, by following the same literal interpretation of the mapping information, \synset{male\_horse}{1}{n} should be translated as both subclass of \SUMOClass{Horse} ---by means of \SUMOIndividualRelation{\$subclass}--- and subattribute of \SUMOIndividualAttribute{Male} ---by means of \SUMOIndividualRelation{subAttribute}. However, this literal interpretation of the mapping information would lead to inconsistent Adimen-SUMO{} statements: on one hand, \SUMOIndividualRelation{subAttribute} relates two individual SUMO{} attributes, which are therefore restricted to be instances of \SUMOClass{Attribute}; on the other hand, \SUMOIndividualRelation{\$subclass} relates two SUMO{} classes, which are defined to be instances of \SUMOClass{class}. Since the SUMO{} classes \SUMOClass{Attribute} and \SUMOClass{class} are disjoint, it is inconsistent to state that any SUMO{} concept is both a subclass of \SUMOClass{Horse} and subattribute of \SUMOIndividualAttribute{Male}. Unlike its literal interpretation, one can propose several suitable translations of the mapping information that do not yield inconsistent Adimen-SUMO{} statements.
Amongst the existing options, in this work we use two different translations of the mapping information on the basis of the following criteria. First, our main purpose is to exploit as much information as possible, to obtain the maximum amount of problems. Second, our intention is also to propose the strongest possible candidate truth-tests. It is worth noting that these two criteria are sometimes contradictory, so we need to find a trade-off between them. Next, we introduce two different proposals for the translation of the mapping information into Adimen-SUMO{} statements, where the second proposal produces stronger statements than the first. The purpose of our first proposal is to relate {WordNet}{} synsets with sets of SUMO{} objects, while the purpose of the second one is to relate {WordNet}{} synsets with SUMO{} classes. For these purposes, we consider the nature of the SUMO{} concept to which a synset is connected in order to choose the most suitable Adimen-SUMO{} predicate: either \SUMOIndividualRelation{equal}, \SUMOIndividualRelation{\$instance}, \SUMOIndividualRelation{\$subclass} or \SUMOIndividualRelation{attribute}.\footnote{In this work, we do not translate the mapping information of synsets connected to SUMO{} relations. This information should be translated using \HoldsIndividualRelation{\$holds}{k}. However, \HoldsIndividualRelation{\$holds}{k} does not enable the definition of the set of SUMO{} concepts that is related with a synset. This is due to the fact that the arity of SUMO{} relations is greater than 1. Consequently, \HoldsIndividualRelation{\$holds}{k} relates SUMO{} relations with a set of tuples of 2 or more SUMO{} concepts, instead of a set of (single) SUMO{} concepts.} \paragraph{First proposal} In order to restrict the set of single SUMO{} objects that can be related with a given synset, we make use of a lenient interpretation of the {WordNet}{}-SUMO{} mapping. In the proposed Adimen-SUMO{} statements, we use the predicate \SUMOIndividualRelation{equal} with synsets connected to SUMO{} objects, the predicate \SUMOIndividualRelation{\$instance} with synsets connected to SUMO{} classes and the predicate \SUMOIndividualRelation{attribute} with synsets connected to SUMO{} attributes. We introduce a new variable in the Adimen-SUMO{} statement proposed for each individual synset. The quantification of the introduced variables is determined by the question patterns and the mapping relation that is used for connecting the given synset (see Sections \ref{section:MultipleMappingPattern} and \ref{section:AntonymPatterns}-\ref{section:ProcessPatterns}). Next, we formalize our proposal for the translation of the mapping information of synsets connected to a single SUMO{} concept: \begin{itemize} \item If the given synset is connected to a {\it SUMO{} object}, then we simply use {\it equality} to state that the synset is exactly related with that SUMO{} object. For example, the synset \synset{yearlong}{1}{s} is connected to the SUMO{} object \SUMOObject{YearDuration}, thus the statement \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{subCQ:yearlong} \tab\tab & ( \predicate{equal} \; \variable{X} \; \constant{YearDuration} ) & \end{flalign} \end{footnotesize} \hspace{-5pt}represents that the values of \textVariable{X} related with \synset{yearlong}{1}{s} have to be equal to \SUMOObject{YearDuration}. \item If the synset is connected to a {\it SUMO{} class}, then we use the Adimen-SUMO{} predicate \SUMOIndividualRelation{\$instance}.
For example, \synset{artifact}{1}{n} is connected to the SUMO{} class \SUMOClass{Artifact}, hence \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{subCQ:artifact} \tab\tab & ( \predicate{\$instance} \; \variable{X} \; \constant{Artifact} ) & \end{flalign} \end{footnotesize} \hspace{-5pt}states that the values of \textVariable{X} related with \synset{artifact}{1}{n} must be instances of \SUMOClass{Artifact}. \item If the given synset is connected to an {\it individual SUMO{} attribute}, we can establish the properties of the SUMO{} objects related to that synset using the Adimen-SUMO{} predicate \textPredicate{attribute}.\footnote{Due to the restrictions on arguments of predicates provided by SUMO{} {\it domain} axioms, we use the SUMO{} predicate \SUMOIndividualRelation{property} instead of \SUMOIndividualRelation{attribute} when convenient.} For example, \synset{goddess}{1}{n} is connected to \SUMOIndividualAttribute{Female} as stated before, therefore the statement \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{subCQ:femaleAttribute} \tab\tab & ( \predicate{attribute} \; \variable{X} \; \constant{Female} ) & \end{flalign} \end{footnotesize} \hspace{-5pt}states that the values of \textVariable{X} related with \synset{goddess}{1}{n} have \SUMOIndividualAttribute{Female} as a property. \item Finally, if the synset is connected to a {\it class of SUMO{} attributes}, then we have to conveniently combine the SUMO{} predicates \textPredicate{attribute} and \textPredicate{\$instance}. For example, the synset \synset{breakableness}{1}{n} is connected to \SUMOClassOfAttributes{BreakabilityAttribute}, which denotes a class of SUMO{} attributes. Hence, the statement \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{exists} ( \variable{Z} ) & \label{subCQ:breakablenessAttribute} \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{Z} \; \constant{BreakabilityAttribute} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{attribute} \; \variable{X} \; \variable{Z} ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}states that the values of \textVariable{X} related with \synset{breakableness}{1}{n} have some instance of \SUMOClassOfAttributes{BreakabilityAttribute} as property. \end{itemize} Regardless of the nature of the SUMO{} concept to which a synset is connected, we negate the statements obtained for synsets connected using the complementary of the {\it{equivalence}}{} or the {\it{subsumption}}{} mapping relations. For example, the synset \synset{natural\_object}{1}{n} is connected to \negatedEquivalenceMappingOfConcept{\SUMOClass{Artifact}}. By proceeding as described above, we would obtain statement (\ref{subCQ:artifact}). Hence, we negate statement (\ref{subCQ:artifact}) and obtain \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{subCQ:natural_object} \tab\tab & ( \connective{not} & \\ & \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Artifact} ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}which states that the values of \textVariable{X} related to \synset{natural\_object}{1}{n} cannot be instances of \SUMOClass{Artifact}. In addition, for the translation of the mapping information of synsets connected to more than one SUMO{} concept, we conveniently combine the statements obtained for each single SUMO{} concept, as previously stated, by means of conjunction.
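The translation rules of this first proposal, together with the negation of statements obtained from complementary mapping relations and the conjunction of statements for synsets connected to several SUMO{} concepts, can be summarized in the following simplified sketch. Python is used only for illustration; the concept-kind classification, the relation names and the textual output format are assumptions of this sketch, not the actual implementation.
\begin{footnotesize}
\begin{verbatim}
def atom(pred, *args):
    return '( ' + pred + ' ' + ' '.join(args) + ' )'

def translate_single(var, relation, concept, kind):
    """First proposal: one statement per connected SUMO concept."""
    if kind == 'object':           # e.g. yearlong -> YearDuration
        stmt = atom('equal', var, concept)
    elif kind == 'class':          # e.g. artifact -> Artifact
        stmt = atom('$instance', var, concept)
    elif kind == 'attribute':      # e.g. goddess -> Female
        stmt = atom('attribute', var, concept)
    else:                          # class of attributes, e.g. BreakabilityAttribute
        stmt = ('( exists ( ?Z ) ( and ' + atom('$instance', '?Z', concept)
                + ' ' + atom('attribute', var, '?Z') + ' ) )')
    if relation in ('not-equivalence', 'not-subsumption'):
        stmt = '( not ' + stmt + ' )'   # complementary mapping relations
    return stmt

def translate_synset(var, entries, kind_of):
    """Conjunction of the statements obtained for each connected concept."""
    stmts = [translate_single(var, rel, c, kind_of[c]) for rel, c in entries]
    return stmts[0] if len(stmts) == 1 else '( and ' + ' '.join(stmts) + ' )'
\end{verbatim}
\end{footnotesize}
For instance, for \synset{yearlong}{1}{s} the sketch returns, up to variable syntax, statement (\ref{subCQ:yearlong}).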
In this way, the mapping information of \synset{male\_horse}{1}{n}, which is connected to both \subsumptionMappingOfConcept{\SUMOIndividualAttribute{Male}} and \subsumptionMappingOfConcept{\SUMOClass{Horse}}, is translated as follows: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{subCQ:male_horse} \tab\tab & ( \connective{and} & \\ & \hspace{20pt} ( \predicate{attribute} \; \variable{X} \; \constant{Male} ) & \nonumber \\ & \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Horse} ) ) & \nonumber \end{flalign} \end{footnotesize} \vspace{-\baselineskip} \paragraph{Second proposal} In this proposal for the translation of the mapping information, we obtain stronger statements by restricting the SUMO{} class ---instead of the SUMO{} object--- that is related with a given synset. Thus, we consider exclusively those synsets connected to SUMO{} concepts that are classes and discard the remainder. In the proposed Adimen-SUMO{} statements, we simply use the predicates \SUMOIndividualRelation{equal} ---for synsets connected by {\it{equivalence}}{}--- and \SUMOIndividualRelation{\$subclass} ---for synsets connected by {\it{subsumption}}{} or {\it{instance}}{}. Therefore, the mapping information of synsets connected by the complementary of {\it{equivalence}}{} or {\it{subsumption}}{} is also discarded for the moment. In the following sections, we use the methods proposed above for the translation of the mapping information of synsets to obtain CQs according to different conceptual question patterns. By following our previously introduced criteria, we use the first proposal in Sections \ref{section:MultipleMappingPattern} and \ref{section:AntonymPatterns} to \ref{section:ProcessPatterns}, while the second is used in Section \ref{section:EventPatterns}. In those sections, we also discuss the differences between using each of the proposed translations of the mapping information. \subsection{Exploiting {WordNet}{} and its Mapping into SUMO{}} \label{subsection:Exploiting} This subsection explains how the semantic knowledge of {WordNet}{} and its SUMO{} mapping introduced in Subsection \ref{subsection:WordNet} are exploited for the construction of CQs. Our proposal is based on the hypothesis that both {WordNet}{} relation-pairs and the mapping information are correct. Under this assumption, we propose different question patterns with two different purposes: first, the validation of the mapping itself and, second, the validation of the knowledge in the ontology according to the knowledge in {WordNet}{}. Most of the proposed question patterns are based on checking the {\it compatibility}/{\it incompatibility} of the Adimen-SUMO{} statements obtained from the related SUMO{} concepts as described in the above subsection. More specifically, each question pattern states the way those Adimen-SUMO{} statements are combined and the resulting conjecture is checked to be compatible or not. For simplicity, from now on we say that two or more SUMO{} concepts are {\it compatible}/{\it incompatible} when the Adimen-SUMO{} statements obtained from them are entailed/incompatible with the ontology (see Table \ref{table:Methodology}). For the validation of the mapping information, we propose the following two problem categories ({\it Mapping} categories): \begin{itemize} \item {\it Multiple mapping} pattern. This category of problems focuses on synsets that are connected to multiple SUMO{} concepts. 
Assuming that the mapping is correct, the truth-tests of the proposed problems state that the SUMO{} concepts connected to the same synset are compatible. Hence, their negations (falsity-tests) state that those SUMO{} concepts are not compatible, which implies that the mapping is inherently wrong. In Section \ref{section:MultipleMappingPattern}, we describe the single question pattern from which we obtain the problems belonging to this category. \item {\it Event} patterns. Verbs and nouns referring to the same process are related by {\it event}. Since the synsets in {\it event}-pairs are referring to the same process, we consider that both synsets should be mapped into the same SUMO{} concept and, if not, our hypothesis is that the mapping information is not correct. Following this hypothesis, for each pair of verb and noun related by {\it event} and connected to different SUMO{} concepts, we propose a new problem such that its truth-test states that those SUMO{} concepts are compatible: that is, that the mapping is not necessarily wrong. Thus, the corresponding falsity-tests state that SUMO{} concepts connected to verbs and nouns related by {\it event} are not compatible and, thus, that the mapping is wrong. This category is divided into 3 subcategories, depending on the mapping relations that are used in {\it event}-pairs. In Section \ref{section:EventPatterns}, we describe in detail the different question patterns and provide examples. \end{itemize} In the case of problems proposed for the validation of the knowledge in the ontology, for each {WordNet}{} relation-pair we create a problem such that its truth-test states the same affirmation in terms of SUMO{}. Next, we describe the two main categories of problems with this purpose ({\it Competency} categories): \begin{itemize} \item {\it Antonym} patterns. In this category, problems are obtained from question patterns based on {\it antonymy} as follows: since {\it antonymy} relates adjectives with opposite semantics in {WordNet}{}, for each pair of antonym adjectives we create a new problem such that its truth-test states that the SUMO{} concepts related to those adjectives are not compatible. Consequently, the corresponding falsity-tests state that the SUMO{} concepts related to antonym adjectives are compatible. Again, we propose 3 alternative subcategories depending on the mapping relations that are used in the pairs of antonym adjectives. This category is described in Section \ref{section:AntonymPatterns}. \item {\it Process} patterns. This category consists of question patterns that focus on verbs and nouns related by {\it agent}, {\it instrument} and {\it result}, and the truth-tests of the proposed problems state the same relation in terms of SUMO{}. For example, conjecture (\ref{goal:PlanPlanning}) states that \synset{schedule}{2}{v} and \synset{schedule}{1}{n} are related by \SUMOIndividualRelation{result} in terms of SUMO{}. The corresponding falsity-tests state that the SUMO{} concepts connected to synsets in {\it agent}/{\it instrument}/{\it result}-pairs of verbs and nouns are not semantically related in the same form. We propose a subcategory of problems for each relation and, in addition, an alternative question pattern for each possible combination of mapping relations. We provide a complete description of this category of problems in Section \ref{section:ProcessPatterns}. 
\end{itemize} \section{Multiple Mapping Pattern} \label{section:MultipleMappingPattern} In this section, we describe the problems that are obtained from synsets connected to several SUMO{} concepts for the validation of the mapping information. For this purpose, we assume that both {WordNet}{} relation-pairs and the mapping information of synsets are correct. Under this assumption, from each synset connected to more than one SUMO{} concept we propose a new problem such that its truth-test states that those SUMO{} concepts are compatible. Therefore, the corresponding falsity-tests state that the SUMO{} concepts connected to the same synset are not compatible, which contradicts our assumption. In both cases, we follow the first proposal for the translation of the mapping information described in Subsection \ref{subsection:AdimenSUMOStatements}. This decision is based on the fact that many synsets are connected to SUMO{} concepts that are not classes, which makes our second proposal for the translation of the mapping information unsuitable. \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=synset]| \langle \synsetTikZ{warhead}{1}{n} \rangle & |[name=Mapping1]| : [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{ExplosiveDevice}} ] & |[name=Mapping2]| [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{Weapon}} ] \\[-10pt] & |[name=X]| [ X ]? & \\ }; \draw[-To,dotted] (X) -- (Mapping1); \draw[-To,dotted] (X) -- (Mapping2); \end{tikzpicture} \caption{Multiple mapping pattern: \synset{warhead}{1}{n}} \label{fig:MultipleMapping1} \end{figure} As described in Subsection \ref{subsection:AdimenSUMOMapping}, there are 1,106 synsets (1,104 nominal and 2 verbal) connected to more than one SUMO{} concept as a result of the process of obtaining a mapping from {WordNet}{} to the core of SUMO{}. Since {\it{equivalence}}{} is replaced with {\it{subsumption}}{} in that process, all of the synsets are connected using {\it{subsumption}}{} or {\it{instance}}{}. Hence, in this category we propose a single question pattern for the creation of problems such that their truth-tests state that the SUMO{} concepts connected to a single synset are compatible. This simply implies that we have to consider the variable in the statement proposed for the translation of the mapping information to be existentially quantified. For example, \synset{warhead}{1}{n} is connected to \subsumptionMappingOfConcept{\SUMOClass{ExplosiveDevice}} and \subsumptionMappingOfConcept{\SUMOClass{Weapon}} as described in Figure \ref{fig:MultipleMapping1}, from which we obtain the following truth-test that states that \SUMOClass{ExplosiveDevice} and \SUMOClass{Weapon} are compatible: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{CQ:ExplosiveDeviceWeapon} \tab\tab & ( \connective{exists} \; ( \variable{X} ) & \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{ExplosiveDevice} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Weapon} ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}The corresponding falsity-test, which is obtained by negating (\ref{CQ:ExplosiveDeviceWeapon}), states that \SUMOClass{ExplosiveDevice} and \SUMOClass{Weapon} are not compatible.
The mapping of \synset{warhead}{1}{n} is validated since ATPs are able to find a proof for (\ref{CQ:ExplosiveDeviceWeapon}) in Adimen-SUMO{} v2.6, but not in TPTP-SUMO{} and Adimen-SUMO{} v2.2. For example, ATPs are able to discover that \SUMOClass{Bomb} is a subclass of both \SUMOClass{ExplosiveDevice} and \SUMOClass{Weapon}, thus any instance of \SUMOClass{Bomb} is also an instance of \SUMOClass{ExplosiveDevice} and \SUMOClass{Weapon} simultaneously. Accordingly, the proposed problem is decided as {\it solved} and {\it entailed} in Adimen-SUMO{} v2.6, while it is unsolved in TPTP-SUMO{} and Adimen-SUMO{} v2.2. \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=synset]| \langle \synsetTikZ{coal}{1}{n} \rangle & |[name=Mapping1]| : [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{FossilFuel}} ] & |[name=Mapping2]| [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{Mineral}} ] & |[name=Mapping3]| [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{Rock}} ] \\[-10pt] & & |[name=X]| [ X ]? & \\ }; \draw[-To,dotted] (X) -- (Mapping1); \draw[-To,dotted] (X) -- (Mapping2); \draw[-To,dotted] (X) -- (Mapping3); \end{tikzpicture} \caption{Multiple mapping pattern: \synset{coal}{1}{n}} \label{fig:MultipleMapping2} \end{figure} Similarly, \synset{coal}{1}{n} is connected to \subsumptionMappingOfConcept{\SUMOClass{FossilFuel}}, \subsumptionMappingOfConcept{\SUMOClass{Mineral}} and \subsumptionMappingOfConcept{\SUMOClass{Rock}} (see Figure \ref{fig:MultipleMapping2}). Hence, we create a new problem such that its truth-test states that \SUMOClass{FossilFuel}, \SUMOClass{Mineral} and \SUMOClass{Rock} are compatible: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{CQ:FossilFuelMineralRock} \tab\tab & ( \connective{exists} \; ( \variable{X} ) & \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{FossilFuel} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Mineral} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Rock} ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}ATPs find a proof (as before, only in Adimen-SUMO{} v2.6) for the corresponding falsity-test, which is obtained by negating (\ref{CQ:FossilFuelMineralRock}) and states that \SUMOClass{FossilFuel}, \SUMOClass{Mineral} and \SUMOClass{Rock} are not compatible: for example, ATPs are able to discover that every instance of \SUMOClass{FossilFuel} has \SUMOIndividualAttribute{Liquid} as attribute\footnote{Every instance of \SUMOClass{Solution}, which is a super-class of \SUMOClass{FossilFuel}, has \SUMOIndividualAttribute{Liquid} as attribute.} and every instance of \SUMOClass{Rock} has \SUMOIndividualAttribute{Solid} as attribute, although \SUMOIndividualAttribute{Liquid} and \SUMOIndividualAttribute{Solid} are contrary attributes. Consequently, this falsity-test enables the detection of a defect in the mapping information of \synset{coal}{1}{n} and the problem is decided to be {\it solved} and {\it incompatible} in Adimen-SUMO{} v2.6. By proceeding in this way, we create 151 problems from the single question pattern proposed in this category. 
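The construction of the conjectures in this category can be summarized schematically as follows; this is a simplified Python sketch and the textual output format is merely illustrative of the conjectures shown above.
\begin{footnotesize}
\begin{verbatim}
def multiple_mapping_truth_test(concepts):
    """Truth-test: the SUMO classes connected to one synset are compatible,
    that is, they may have a common instance."""
    atoms = ' '.join('( $instance ?X ' + c + ' )' for c in concepts)
    return '( exists ( ?X ) ( and ' + atoms + ' ) )'

def multiple_mapping_falsity_test(concepts):
    """Falsity-test: the negation of the corresponding truth-test."""
    return '( not ' + multiple_mapping_truth_test(concepts) + ' )'
\end{verbatim}
\end{footnotesize}
Applied to the mapping of \synset{warhead}{1}{n}, the first function yields a conjecture equivalent to (\ref{CQ:ExplosiveDeviceWeapon}).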
\section{Event Patterns} \label{section:EventPatterns} In this section, we describe the problems that are obtained from the question patterns based on the semantic relation {\it event}, which is defined in the {\it Morphosemantic Links} database \cite{FOC09} of {WordNet}{}, for the validation of the mapping information of synsets. For this purpose, in addition to assuming that {WordNet}{} relation-pairs and the mapping of synsets are correct, we also assume that {WordNet}{} synsets related by {\it event} should be connected to the same SUMO{} concept, since {\it event} relates verb and noun synsets that refer to the same process. Under those assumptions, for each pair of verb and noun synsets related by {\it event} and connected to different SUMO{} concepts, we propose a new problem such that its truth-test states that the SUMO{} concepts linked to those synsets are compatible. Hence, the corresponding falsity-tests state that the SUMO{} concepts connected to verb and noun synsets related by {\it event} are not compatible, which contradicts our assumptions. \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=verbMappingClass]| [ \equivalenceMappingTikZOfConcept{\SUMOClassTikZ{Death}} ] : & |[name=verb]| \langle \synsetTikZ{kill}{10}{v} \rangle & \hspace{60pt} & |[name=noun]| \langle \synsetTikZ{killing}{2}{n} \rangle & |[name=nounMappingClass]| : [ \equivalenceMappingTikZOfConcept{\SUMOClassTikZ{Killing}} ] \\[-15pt] & & ? & & \\[-40pt] & & = & & \\[-45pt] |[name=verbMapping]| & & & & |[name=nounMapping]| \\ }; \draw[latex-latex] (verb) -- node[auto] {\(\langle event \rangle\)} (noun); \draw[To-,dotted] (verbMappingClass) -- (verbMapping.center); \draw[-To,dotted] (verbMapping.center) -- (nounMapping.center) -- (nounMappingClass); \end{tikzpicture} \caption{Event pattern \#1} \label{fig:EventPattern1} \end{figure} In the {WordNet}{} {\it Morphosemantic Links} database, there are 8,158 event-pairs of synsets; in 1,991 of those pairs, the two synsets are mapped to the same SUMO{} concept. In addition, in only 499 event-pairs the two synsets are connected to different SUMO{} concepts of which at least one is not a SUMO{} class. Thus, we decide to apply our second proposal for the translation of the mapping information described in Subsection \ref{subsection:AdimenSUMOStatements} in order to create problems on the basis of the remaining 5,668 event-pairs where the two synsets are connected to different SUMO{} classes. In this manner, we obtain stronger truth-tests than using our first proposal for the translation of the mapping information. In the following subsections, we introduce different conceptual patterns of questions depending on the mapping relations used. \subsection{Event Pattern \#1} \label{subsection:Event1} The first question pattern is focused on the 26 event-pairs where both synsets are connected to two different SUMO{} classes using {\it{equivalence}}{}. Since the mapping of those synsets denotes exactly the SUMO{} class to which the synset is related, our question pattern states that those SUMO{} classes are completely equivalent by using {\it equality}.
For example, the synsets \synset{kill}{10}{v} and \synset{killing}{2}{n} are related by {\it event} and connected respectively to the SUMO{} classes \equivalenceMappingOfConcept{\SUMOClass{Death}} and \equivalenceMappingOfConcept{\SUMOClass{Killing}}, as described in Figure \ref{fig:EventPattern1}, from which we obtain the next truth-test: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{CQ:DeathKilling} \tab\tab & ( \predicate{equal} \; \constant{Death} \; \constant{Killing} ) & \end{flalign} \end{footnotesize} \hspace{-5pt}The corresponding falsity-test, which is obtained by negating (\ref{CQ:DeathKilling}), states that \SUMOClass{Death} and \SUMOClass{Killing} are different. This falsity-test is classified as non-passing only in Adimen-SUMO{} v2.6. The proof is based on the fact that \SUMOClass{PhysiologicProcess} and \SUMOClass{PathologicProcess} are disjoint classes. On one hand, \SUMOClass{Death} is a subclass of \SUMOClass{PhysiologicProcess}. On the other hand, \SUMOClass{Killing} is a subclass of \SUMOClass{Damaging} and every instance of \SUMOClass{Damaging} with some instance of \SUMOClass{Organism} as patient is also an instance of \SUMOClass{Injuring}, which is a subclass of \SUMOClass{PathologicProcess}. Consequently, the proposed problem is decided to be {\it solved} and {\it incompatible} in Adimen-SUMO{} v2.6, which enables the detection of an error in the mapping between {WordNet}{} and SUMO{}. It is worth noting that in order to state the equivalence of the classes \SUMOClass{Death} and \SUMOClass{Killing} using our first proposal for the translation of the mapping information, we would have to state that the sets of objects belonging to those classes are equal, which is a weaker affirmation than conjecture (\ref{CQ:DeathKilling}). Using this first question pattern, we obtain 24 problems. \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=nounMappingSuperClass1]| [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{Pretending}} ] : & |[name=noun1]| \langle \synsetTikZ{fix}{1}{v} \rangle & \hspace{60pt} & |[name=noun2]| \langle \synsetTikZ{fixing}{1}{n} \rangle & |[name=nounMapping2]| : [ \equivalenceMappingTikZOfConcept{\SUMOClassTikZ{Repairing}} ] \\[-25pt] & & ? & & \\[-40pt] & & = & & \\[-20pt] |[name=nounMapping1]| [ X ] & & & & \\ }; \draw[-To,dotted] (nounMapping1) -- (nounMappingSuperClass1); \draw[latex-latex] (noun1) -- node[auto] {\(\langle event \rangle\)} (noun2); \draw[To-To,dotted] (nounMapping1) -- (nounMapping2.south); \end{tikzpicture} \caption{Event pattern \#2} \label{fig:EventPattern2} \end{figure} \subsection{Event Pattern \#2} \label{subsection:Event2} In this subsection, we describe the question pattern that focuses on the 509 event-pairs where one synset is connected using {\it{equivalence}}{}, while the other synset is connected using {\it{instance}}{} or {\it{subsumption}}{}. In this case, we know the precise SUMO{} class to which the synset connected by {\it{equivalence}}{} is related, as in the previous subsection. However, for the synset connected by {\it{subsumption}}{} or {\it{instance}}{}, we only know the superclass of the SUMO{} class to which that synset is related. That is, we know that the synset is connected to some subclass of the class provided in the mapping information.
Hence, in order to prove that those SUMO{} classes are compatible, we must demonstrate that the class related to the synset connected by {\it{equivalence}}{} is a subclass of the class related to the synset connected by {\it{subsumption}}{} or {\it{instance}}{}. For example, \synset{fix}{1}{v} and \synset{fixing}{1}{n} are related by {\it event} and connected to \subsumptionMappingOfConcept{\SUMOClass{Pretending}} and \equivalenceMappingOfConcept{\SUMOClass{Repairing}} respectively, as described in Figure \ref{fig:EventPattern2}. Therefore, we create a new problem such that its truth-test states that \SUMOClass{Repairing} is a subclass of \SUMOClass{Pretending} \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{CQ:RepairingPretending} \tab\tab & ( \predicate{\$subclass} \; \constant{Repairing} \; \constant{Pretending} ) & \end{flalign} \end{footnotesize} \hspace{-5pt}and the corresponding falsity-test states that \SUMOClass{Repairing} cannot be a subclass of \SUMOClass{Pretending}. Neither conjecture (\ref{CQ:RepairingPretending}) nor its negation is proved to be entailed by TPTP-SUMO{} or Adimen-SUMO{} in our experimentation. From this second question pattern based on {\it event}, we obtain 350 problems. \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=verbMapping1]| [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{Comparing}} ] : & |[name=verb]| \langle \synsetTikZ{appraise}{1}{v} \rangle & \hspace{60pt} & |[name=noun]| \langle \synsetTikZ{appraisal}{1}{n} \rangle & |[name=nounMapping1]| : [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{Judging}} ] \\[-15pt] & & ? & & \\[-40pt] & & = & & \\[-45pt] |[name=verbMapping2]| [ X ] & & & & |[name=nounMapping2]| [ Y ] \\ }; \draw[-To,dotted] (verbMapping2) -- (verbMapping1); \draw[-To,dotted] (nounMapping2) -- (nounMapping1); \draw[latex-latex] (verb) -- node[auto] {\(\langle event \rangle\)} (noun); \draw[To-To,dotted] (verbMapping2) -- (nounMapping2); \end{tikzpicture} \caption{Event pattern \#3} \label{fig:EventPattern3} \end{figure} \subsection{Event Pattern \#3} \label{subsection:Event3} Finally, we focus on the 5,130 event-pairs where both synsets are connected using {\it{instance}}{} or {\it{subsumption}}{}. In this case, we only know the superclass of the SUMO{} class to which each synset is related. Therefore, in order to prove that those SUMO{} classes are compatible, we have to demonstrate that those SUMO{} classes have a subclass in common. For example, \synset{appraise}{1}{v} and \synset{appraisal}{1}{n} are related by {\it event} and respectively connected to \subsumptionMappingOfConcept{\SUMOClass{Judging}} and \subsumptionMappingOfConcept{\SUMOClass{Comparing}}, as described in Figure \ref{fig:EventPattern3}.
From this event-pair, we create a new problem such that its truth-test states that \SUMOClass{Judging} and \SUMOClass{Comparing} have some common subclasses: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{CQ:JudgingComparing} \tab\tab & ( \connective{exists} \; ( \variable{X} ) & \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$subclass} \; \variable{X} \; \constant{Judging} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$subclass} \; \variable{X} \; \constant{Comparing} ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}Thus, the corresponding falsity-test states that \SUMOClass{Judging} and \SUMOClass{Comparing} do not have any common subclass. Conjecture (\ref{CQ:JudgingComparing}) and its negation are not shown to be entailed by TPTP-SUMO{} or Adimen-SUMO{} in our experimentation. Using this third question pattern based on {\it event}, we obtain 2,011 different problems. \section{Antonym Patterns} \label{section:AntonymPatterns} In this section, we describe the problems of the {\it Competency} categories that are obtained from the question patterns based on antonyms. For this purpose, we assume that both {WordNet}{} relation-pairs and their mappings into SUMO{} are correct. Under those assumptions, the question patterns focus on the {\it antonymy} ---which relates words with opposite semantics--- and {\it similarity} ---which links semantically comparable words--- relations of {WordNet}{} (see Figure \ref{fig:antonymPairs}) and propose the creation of new problems such that their truth-tests state that the SUMO{} concepts related to antonym words are not compatible. Thus, the corresponding falsity-tests state that the SUMO{} concepts related to antonym words are compatible. {WordNet}{} provides 7,604 antonym-pairs, from which 1,950 are noun-pairs, 1,016 are verb-pairs, 3,998 are adjective-pairs and 640 are adverb-pairs. In addition, given a synset $ws$ in an antonym-pair that is related with another synset $ws'$ via similarity, we can propose a new antonym-pair by simply replacing $ws$ with $ws'$ in the pair. In this fashion, we extend the given 7,604 antonym-pairs to a set of 121,496 antonym-pairs, as sketched below. Since many of the synsets in those pairs are connected to SUMO{} concepts that are not classes, we use our first proposal for the translation of the mapping information described in Subsection \ref{subsection:AdimenSUMOStatements}. Further, in 36,934 antonym-pairs some of the synsets are mapped into SUMO{} relations and, therefore, those pairs are not considered. In the remaining 84,562 antonym-pairs of synsets, there are: \begin{itemize} \item 186 antonym-pairs where both synsets are connected using {\it{equivalence}}{} (or its complement). \item 2,542 antonym-pairs where {\it{equivalence}}{} (or its complement) is mixed with {\it{subsumption}}{} (or its complement) or {\it{instance}}{}. \item 81,834 antonym-pairs where both synsets are connected using {\it{subsumption}}{} (or its complement) or {\it{instance}}{}. \end{itemize} In the following subsections, we describe 3 alternative question patterns depending on the mapping relations that are used.
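The extension of the antonym-pairs by means of {\it similarity} mentioned above can be sketched as follows, assuming that \texttt{satellites} maps each synset to the set of synsets related to it via {\it similarity}; again, Python is used only for illustration.
\begin{footnotesize}
\begin{verbatim}
def extend_antonym_pairs(antonym_pairs, satellites):
    """Extend each antonym-pair by replacing either member with any synset
    related to it via similarity (the original pair is also kept)."""
    extended = set()
    for first, second in antonym_pairs:
        for a in {first} | satellites.get(first, set()):
            for b in {second} | satellites.get(second, set()):
                extended.add((a, b))
    return extended
\end{verbatim}
\end{footnotesize}
In the example of Figure \ref{fig:antonymPairs}, where each of \synset{hot}{1}{a} and \synset{cold}{1}{a} has 5 satellites, this expansion produces the 36 antonym-pairs mentioned in Subsection \ref{subsection:WordNet}.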
\subsection{Antonym Pattern \#1} \label{subsection:Antonym1} \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=adjectiveMappingClass1]| [ \equivalenceMappingTikZ{\SUMOClassTikZ{Birth}} ] : & |[name=adjective1]| \langle \synsetTikZ{birth}{2}{n} \rangle & \hspace{40pt} & |[name=adjective2]| \langle \synsetTikZ{death}{1}{n} \rangle : & |[name=adjectiveMappingClass2]| [ \equivalenceMappingTikZ{\SUMOClassTikZ{Death}} ] \\[-15pt] & & ? & & \\[-40pt] |[name=adjectiveMapping1]| & & / & & |[name=adjectiveMapping2]| \\ }; \draw[latex-latex] (adjective1) -- node {\(/\)} (adjective2); \draw[To-,dotted] (adjectiveMappingClass1) -- (adjectiveMapping1.center); \draw[-To,dotted] (adjectiveMapping1.center) -- (adjectiveMapping2.center) -- (adjectiveMappingClass2); \end{tikzpicture} \caption{Antonym pattern \#1: \synset{birth}{2}{n} and \synset{death}{1}{n}} \label{fig:AntonymPattern1BirthDeath} \end{figure} The first question pattern based on antonymy is focused on the 186 antonym-pairs where both synsets are connected using {\it{equivalence}}{} (or its complement). In this case, we assume that all the SUMO{} objects represented by the statement obtained from the first synset are different from all the SUMO{} objects represented by the statement obtained from the second synset. Formally, this implies that we consider the variables used in the Adimen-SUMO{} statements proposed for the translation of the mapping information to be universally quantified. For example, the antonym-synsets \synset{birth}{2}{n} and \synset{death}{1}{n} are respectively connected to \equivalenceMappingOfConcept{\SUMOClass{Birth}} and \equivalenceMappingOfConcept{\SUMOClass{Death}} (see Figure \ref{fig:AntonymPattern1BirthDeath}), from which we obtain the following statements: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \predicate{\$instance} \; \variable{X} \; \constant{Birth} ) & \label{subCQ:birth} \\ \tab\tab & ( \predicate{\$instance} \; \variable{Y} \; \constant{Death} ) & \label{subCQ:death} \end{flalign} \end{footnotesize} \hspace{-5pt}By considering \textVariable{X} and \textVariable{Y} to be universally quantified, the following truth-test results from the combination of statements (\ref{subCQ:birth}) and (\ref{subCQ:death}): \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{forall} ( \variable{X} \; \variable{Y} ) & \label{CQ:BirthDeath} \\ & \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Birth} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{Y} \; \constant{Death} ) ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{not} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{equal} \; \variable{X} \; \variable{Y} ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}The above CQ states that any two SUMO{} objects that are instances of \SUMOClass{Birth} and \SUMOClass{Death} respectively are inevitably different. The corresponding falsity-test is obtained by negating (\ref{CQ:BirthDeath}), which states that some SUMO{} object exists which is an instance of \SUMOClass{Birth} and \SUMOClass{Death} at the same time.
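The construction of these tests from a pair of equivalence-mapped classes is mechanical, and can be illustrated with a short Python sketch that simply prints a conjecture with the shape of (\ref{CQ:BirthDeath}) for any two SUMO{} classes (the function, its plain-text output format and the variable names are illustrative assumptions of ours and not the actual generation code):
\begin{footnotesize}
\begin{verbatim}
# Minimal sketch: building the Antonym Pattern #1 truth-test for two SUMO
# classes to which the antonym synsets are connected by equivalence.  The
# corresponding falsity-test is simply the negation of the returned formula.
def antonym_pattern1_truth_test(class1, class2):
    return ("( forall ( ?X ?Y )\n"
            "   ( =>\n"
            "      ( and\n"
            "         ( $instance ?X " + class1 + " )\n"
            "         ( $instance ?Y " + class2 + " ) )\n"
            "      ( not\n"
            "         ( equal ?X ?Y ) ) ) )")

print(antonym_pattern1_truth_test("Birth", "Death"))
\end{verbatim}
\end{footnotesize}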
This problem remains unsolved in TPTP-SUMO{} and Adimen-SUMO{} (v2.2 and v2.6) due to the lack of information in the ontology. From this question pattern, we obtain 71 different problems. \subsection{Antonym Pattern \#2} \label{subsection:Antonym2} \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=nounMappingSuperClass1]| [ \subsumptionMappingTikZ{\SUMOClassTikZ{GeographicArea}} ] : & |[name=noun1]| \langle \synsetTikZ{rural\_area}{1}{n} \rangle & \hspace{40pt} & |[name=noun2]| \langle \synsetTikZ{urban\_area}{1}{n} \rangle & |[name=nounMapping2]| : [ \equivalenceMappingTikZ{\SUMOClassTikZ{City}} ] \\[-25pt] & & \hspace{-10pt} ? & & \\[-15pt] |[name=nounMapping1]| [ X ] & & & & \\ }; \draw[-To,dotted] (nounMapping1) -- (nounMappingSuperClass1); \draw[latex-latex] (noun1) -- node {\(/\)} (noun2); \draw[To-To,dotted] (nounMapping1) -- node {\(/\)} (nounMapping2.south); \end{tikzpicture} \caption{Antonym pattern \#2: \synset{rural\_area}{1}{n} and \synset{urban\_area}{1}{n}} \label{fig:AntonymPattern2RuralUrban} \end{figure} The second question pattern is focused on the 2,542 antonym-pairs where {\it{equivalence}}{} (or its complement) is mixed with {\it{subsumption}}{} (or its complement) or {\it{instance}}{}. As in the previous case, we consider the variable in the Adimen-SUMO{} statement proposed for the translation of equivalence mapping information to be universally quantified. On the contrary, the variable in the Adimen-SUMO{} statement translating {\it{subsumption}}{} or {\it{instance}}{} mapping information is considered to be existentially quantified because the information provided by these mapping relations is weaker than the information provided by {\it{equivalence}}{}. Since we are using both universally and existentially quantified variables, there are two additional options: we may nest the universally quantified statement inside the formula obtained from the existentially quantified statement, or nest the existentially quantified statement inside the formula that is derived from the universally quantified statement. From these two options, we choose the one that yields stronger truth-tests, which is the first. 
For example, the antonym synsets \synset{rural\_area}{1}{n} and \synset{urban\_area}{1}{n} are connected to \subsumptionMappingOfConcept{\SUMOClass{GeographicArea}} and \equivalenceMappingOfConcept{\SUMOClass{City}} respectively (see Figure \ref{fig:AntonymPattern2RuralUrban}), from which we obtain the following statements: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \predicate{\$instance} \; \variable{X} \; \constant{GeographicArea} ) & \label{subCQ:ruralArea} \\[5pt] & ( \predicate{\$instance} \; \variable{Y} \; \constant{City} ) & \label{subCQ:urbanArea} \end{flalign} \end{footnotesize} \hspace{-5pt}Consequently, the Adimen-SUMO{} statement that is obtained for \synset{urban\_area}{1}{n} is nested into the Adimen-SUMO{} statement that is obtained for \synset{rural\_area}{1}{n}: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{exists} ( \variable{X} ) & \label{CQ:GeographicAreaCity} \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{GeographicArea} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{forall} ( \variable{Y} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{Y} \; \constant{City} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{not} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{equal} \; \variable{X} \; \variable{Y} ) ) ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}The above CQ states that there exists some SUMO{} object which is an instance of \SUMOClass{GeographicArea} such that it is different from any SUMO{} object that is an instance of \SUMOClass{City}. It is worth noting that if the previous statement holds true, it implies that every SUMO{} object that is an instance of \SUMOClass{City} is different from some SUMO{} object that is an instance of \SUMOClass{GeographicArea}: in particular, all the SUMO{} objects that are instances of \SUMOClass{City} would be different from a single SUMO{} object that is an instance of \SUMOClass{GeographicArea}. Hence, the truth-test in (\ref{CQ:GeographicAreaCity}) is stronger than the conjecture that results by nesting the existentially quantified statement into the formula obtained from the universally quantified statement. Although \SUMOClass{City} is a subclass of \SUMOClass{GeographicArea}, the truth-test defined in (\ref{CQ:GeographicAreaCity}) is classified as passing only in Adimen-SUMO{} v2.6 since \SUMOClass{GeographicArea} has other subclasses that are disjoint with \SUMOClass{City}: for example, \SUMOClass{WaterArea}. Therefore, the proposed problem is {\it solved} and {\it entailed} in Adimen-SUMO{} v2.6. In summary, we obtain 489 problems from this second question pattern based on {\it antonymy}.
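Before moving to the next pattern, the above strength claim can be made explicit in standard first-order notation: writing $A(x)$ for ``$x$ is an instance of \SUMOClass{GeographicArea}'' and $B(y)$ for ``$y$ is an instance of \SUMOClass{City}'' (abbreviations of ours), the nesting used in (\ref{CQ:GeographicAreaCity}) entails the alternative nesting, but not conversely:
\vspace{-\baselineskip}
\begin{footnotesize}
\begin{flalign*}
\tab\tab & \exists x \, \big( A(x) \wedge \forall y \, ( B(y) \rightarrow x \neq y ) \big) \;\models\; \forall y \, \big( B(y) \rightarrow \exists x \, ( A(x) \wedge x \neq y ) \big) &
\end{flalign*}
\end{footnotesize}
\hspace{-5pt}since the single witness $x$ provided by the left-hand formula works uniformly for every $y$, whereas the right-hand formula allows the witness to depend on $y$.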
\subsection{Antonym Pattern \#3} \label{subsection:Antonym3} \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=adjectiveMappingSuperClass1]| [ \subsumptionMappingTikZ{\SUMOClassTikZ{Coloring}} ] : & |[name=adjective1]| \langle \synsetTikZ{stained}{1}{a} \rangle & \hspace{40pt} & |[name=adjective2]| \langle \synsetTikZ{unstained}{1}{a} \rangle & |[name=adjectiveMappingSuperClass2]| : [ \negatedSubsumptionMappingTikZ{\SUMOClassTikZ{SurfaceChange}} ] \\[-15pt] & & \hspace{30pt} ? & & \\[-40pt] |[name=adjectiveMapping1]| [ X ] & & & & |[name=adjectiveMapping2]| [ Y ] \\ }; \draw[-To,dotted] (adjectiveMapping1) -- (adjectiveMappingSuperClass1); \draw[-To,dotted] (adjectiveMapping2) -- (adjectiveMappingSuperClass2); \draw[latex-latex] (adjective1) -- node {\(/\)} (adjective2); \draw[To-To,dotted] (adjectiveMapping1) -- node {\(/\)} (adjectiveMapping2); \end{tikzpicture} \caption{Antonym pattern \#3: \synset{stained}{1}{a} and \synset{unstained}{1}{a}} \label{fig:AntonymPattern3ColoringSurfaceChange} \end{figure} In the third question pattern based on antonymy, we focus on the 81,834 antonym-pairs where both synsets are connected using {\it{subsumption}}{} (or its complement) or {\it{instance}}{}. As before, we consider the variables used in Adimen-SUMO{} statements to be existentially quantified, which implies that we consider that some of the SUMO{} objects represented by the statements obtained from the mapping information of the antonym synsets are not equal. For example, the antonym synsets \synset{stained}{1}{a} and \synset{unstained}{1}{a} are connected respectively to \subsumptionMappingOfConcept{\SUMOClass{Coloring}} and \negatedSubsumptionMappingOfConcept{\SUMOClass{SurfaceChange}} (see Figure \ref{fig:AntonymPattern3ColoringSurfaceChange}), from which we obtain the following statements: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \predicate{\$instance} \; \variable{X} \; \constant{Coloring} ) & \label{subCQ:stained} \\[5pt] & ( \connective{not} & \label{subCQ:unstained} \\ & \hspace{20pt} ( \predicate{\$instance} \; \variable{Y} \; \constant{SurfaceChange} ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}Therefore, we propose the following truth-test \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{exists} ( \variable{X} \; \variable{Y} ) & \label{CQ:ColoringNotSurfaceChange} \\ & \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{Coloring} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{not} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{Y} \; \constant{SurfaceChange} ) ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{not} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{equal} \; \variable{X} \; \variable{Y} ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}stating that two different SUMO{} objects exist such that the first is an instance of \SUMOClass{Coloring} and the second one is not an instance of \SUMOClass{SurfaceChange}.
The corresponding falsity-test that is obtained by negating (\ref{CQ:ColoringNotSurfaceChange}) states that any two SUMO{} objects such that the first one is an instance of \SUMOClass{Coloring} and the second one is not an instance of \SUMOClass{SurfaceChange} are equal. Although \SUMOClass{Coloring} is a subclass of \SUMOClass{SurfaceChange}, the truth-test defined in (\ref{CQ:ColoringNotSurfaceChange}) is classified as passing only in Adimen-SUMO{} v2.6. Therefore, the proposed problem is {\it solved} and {\it entailed} in Adimen-SUMO{} v2.6, while it is {\it unsolved} in TPTP-SUMO{} and Adimen-SUMO{} v2.2. Using this third question pattern, we obtain 2,444 problems. \section{Process Patterns} \label{section:ProcessPatterns} In this section, we describe the problems that are obtained from the {\it Morphosemantic Links} database \cite{FOC09} of {WordNet}{} for the validation of the knowledge in the ontology. As in the case of the question patterns based on antonymy, we assume that both {WordNet}{} relation-pairs and their mappings into SUMO{} are correct. Among the 14 semantic relations between morphologically related verbs and nouns provided by the {\it Morphosemantic Links}, we concentrate on {\it agent}, {\it instrument} and {\it result}, which relate a process (verb) with its corresponding agent/instrument/result (noun). For each pair of synsets connected by the above relations, we propose the creation of a new problem such that its truth-test states the same affirmation in terms of SUMO{}. For this purpose, we make proper use of the SUMO{} relations \SUMOIndividualRelation{agent}, \SUMOIndividualRelation{instrument} and \SUMOIndividualRelation{result} that link a SUMO{} process (i.e., an instance of the SUMO{} class \textConstant{Process}) to its corresponding agent, instrument and result, which are restricted respectively to being instances of the SUMO{} classes \SUMOClass{Agent}, \SUMOClass{Physical} and \SUMOClass{Entity}. That is, the SUMO{} relations \SUMOIndividualRelation{agent}, \SUMOIndividualRelation{instrument} and \SUMOIndividualRelation{result} connect two SUMO{} objects. Consequently, it is unfeasible to apply our second proposal for the translation of the mapping information described in Subsection \ref{subsection:AdimenSUMOStatements}, since the connected concepts are not SUMO{} classes. Depending on the mapping relations that are used to relate the verb and noun synsets, we introduce 4 different question patterns by means of our first proposal for the translation of the mapping information. \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle},ampersand replacement=\&] (s) { |[name=verbMappingClass]| [ \equivalenceMappingTikZOfConcept{\SUMOClassTikZ{EducationalProcess}} ] : \& |[name=verb]| \langle \synsetTikZ{instruct}{1}{v} \rangle \& \hspace{60pt} \& |[name=noun]| \langle \synsetTikZ{instructor}{1}{n} \rangle \& |[name=nounMappingClass]| : [ \equivalenceMappingTikZOfConcept{\SUMOIndividualAttributeTikZ{Teacher}} ] \\[-20pt] \& \& \hspace{10pt} [\SUMOIndividualRelationTikZ{agent}]?
\& \& \\[-40pt] |[name=verbMapping]| \& \& \& \& |[name=nounMapping]| \\ }; \draw[-latex] (verb) -- node[auto] {\(\langle agent \rangle\)} (noun); \draw[-To,dotted] (verbMappingClass) -- (verbMapping.center) -- (nounMapping.center) -- (nounMappingClass); \end{tikzpicture} } \caption{Process pattern: verb and noun synsets connected by {\it{equivalence}}{}} \label{fig:ProcessPattern1} \end{figure} In the {\it Morphosemantic Links} database, there are 5,295 relation-pairs of synsets where one of the relations {\it agent}, {\it instrument} or {\it result} is used. Among those pairs, there are 5,098 relation-pairs such that none of the synsets is connected to a SUMO{} relation and, additionally, none of the synsets is connected using the complementary of {\it{equivalence}}{} or {\it{subsumption}}{}. Therefore, we use those 5,098 relation-pairs for creating problems. For example, the synsets \synset{instruct}{1}{v} and \synset{instructor}{1}{n} are related by {\it agent} and connected respectively to \equivalenceMappingOfConcept{\SUMOClass{EducationalProcess}} and \equivalenceMappingOfConcept{\SUMOIndividualAttribute{Teacher}} (see Figure \ref{fig:ProcessPattern1}). From the mapping information of those synsets, we obtain the following statements: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \predicate{\$instance} \; \variable{X} \; \constant{EducationalProcess} ) & \label{subCQ:instruct} \\[5pt] & ( \predicate{attribute} \; \variable{Y} \; \constant{Teacher} ) & \label{subCQ:instructor} \end{flalign} \end{footnotesize} \hspace{-5pt}Thus, we must combine the above statements using the SUMO{} relation \SUMOIndividualRelation{agent} and quantify their variables in order to create a problem. However, unlike the case of {\it event} question patterns, we cannot consider both variables in Adimen-SUMO{} statements to be universally quantified when both synsets are connected by {\it{equivalence}}{} (see Subsection \ref{subsection:Event1}): in our example, it is not true that all the SUMO{} objects with \SUMOIndividualAttribute{Teacher} as an attribute are the agent of all the instances of \SUMOClass{EducationalProcess}.
At most, we can state that all the instances of \SUMOClass{EducationalProcess} have a SUMO{} object with the attribute \SUMOIndividualAttribute{Teacher} as agent and, at the same time, all the SUMO{} objects with \SUMOIndividualAttribute{Teacher} as an attribute are the agent of some instance of \SUMOClass{EducationalProcess}, as proposed in the next conjecture (truth-test): \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \tab\tab & ( \connective{and} & \label{CQ:AgentEducationalProcessTeacher} \\ & \hspace{20pt} ( \connective{forall} ( \variable{X} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{EducationalProcess} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{exists} ( \variable{Y} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{attribute} \; \variable{Y} \; \constant{Teacher} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{agent} \; \variable{X} \; \variable{Y} ) ) ) ) ) & \nonumber \\ & \hspace{20pt} ( \connective{forall} ( \variable{Y} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} ( \connective{=>} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{attribute} \; \variable{Y} \; \constant{Teacher} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{exists} ( \variable{X} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \connective{and} & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{\$instance} \; \variable{X} \; \constant{EducationalProcess} ) & \nonumber \\ & \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} \hspace{20pt} ( \predicate{agent} \; \variable{X} \; \variable{Y} ) ) ) ) ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}The corresponding falsity-test states that either some instance of the SUMO{} class \SUMOClass{EducationalProcess} exists that has no agent with \SUMOIndividualAttribute{Teacher} as an attribute, or some SUMO{} object with \SUMOIndividualAttribute{Teacher} as an attribute exists that is not the agent of any instance of \SUMOClass{EducationalProcess}.
\begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=verbMappingSuperClass]| [ \equivalenceMappingTikZOfConcept{\SUMOClassTikZ{Cooling}} ] : & |[name=verb]| \langle \synsetTikZ{cool}{1}{v} \rangle & \hspace{90pt} & |[name=noun]| \langle \synsetTikZ{cooler}{1}{n} \rangle & |[name=nounMappingSuperClass]| : [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{Refrigerator}} ] \\[-35pt] & & \hspace{-10pt} \rotatebox{-11}{[$\SUMOIndividualRelationTikZ{instrument}$]?} & & \\[-10pt] & & & & |[name=nounMapping]| [ Y ] \\ }; \draw[-To,dotted] (nounMapping) -- (nounMappingSuperClass); \draw[-latex] (verb) -- node[auto] {\(\langle instrument \rangle\)} (noun); \draw[-To,dotted] (verbMappingSuperClass.south) -- (nounMapping); \end{tikzpicture} \caption{Process pattern: verb and noun synsets connected by {\it{equivalence}}{} and {\it{subsumption}}{}/{\it{instance}}{} respectively} \label{fig:ProcessPattern2} \end{figure} Next, we summarize the proposed question patterns and the number of problems that result from them. However, the resulting problems are organized into 3 categories ---{\it Agent}, {\it Instrument} and {\it Result}--- depending on the semantic relation with the purpose of analyzing the knowledge in SUMO{} about each of these relations: \begin{itemize} \item The first question pattern focuses on relation-pairs where both synsets are connected by {\it{equivalence}}{}, as the example in Figure \ref{fig:ProcessPattern1}. From this pattern, we obtain 13 problems where 2 problems belong to the Agent category, 3 problems to the Instrument category and 8 problems to the Result category. \item The second question pattern focuses on relation-pairs where the verb synset is connected by {\it{equivalence}}{} and the noun synset is connected by {\it{subsumption}}{} or {\it{instance}}{}, as per the example in Figure \ref{fig:ProcessPattern2}. The truth-tests of the proposed problems state that all the SUMO{} objects that can be assigned to the verb synset have some of the SUMO{} objects that can be assigned to the noun synset as agent/instrument/result, which corresponds to the first half of the problem proposed by the first process question pattern (see conjecture (\ref{CQ:AgentEducationalProcessTeacher})). Using this question pattern, we obtain 197 problems, from which 137, 30 and 30 problems belong respectively to the Agent, Instrument and Result categories. 
\begin{figure}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={50pt,between origins},nodes={asymmetrical rectangle},ampersand replacement=\&] (s) { |[name=verbMappingSuperClass]| [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{ComposingMusic}} ] : \& |[name=verb]| \langle \synsetTikZ{compose}{2}{v} \rangle \& \hspace{50pt} \& |[name=noun]| \langle \synsetTikZ{composition}{4}{n} \rangle \& |[name=nounMappingSuperClass]| : [ \equivalenceMappingTikZOfConcept{\SUMOClassTikZ{MusicalComposition}} ] \\[-20pt] \& \& \hspace{-10pt} \rotatebox{7}{[$\SUMOIndividualRelationTikZ{result}$]?} \& \& \\[-25pt] |[name=verbMapping]| [ X ] \& \& \& \& \\ }; \draw[-To,dotted] (verbMapping) -- (verbMappingSuperClass); \draw[-latex] (verb) -- node[auto] {\(\langle result \rangle\)} (noun); \draw[-To,dotted] (verbMapping) -- (nounMappingSuperClass.south); \end{tikzpicture} } \caption{Process pattern: verb and noun synsets connected by {\it{subsumption}}{}/{\it{instance}}{} and {\it{equivalence}}{} respectively} \label{fig:ProcessPattern3} \end{figure} \begin{figure}[t] \centering \begin{tikzpicture}[>=triangle 60] \matrix[matrix of math nodes,column sep={-5pt},row sep={70pt,between origins},nodes={asymmetrical rectangle}] (s) { |[name=verbMappingSuperClass]| [ \subsumptionMappingTikZOfConcept{\SUMOIndividualAttributeTikZ{Permission}} ] : & |[name=verb]| \langle \synsetTikZ{patent}{1}{v} \rangle & \hspace{50pt} & |[name=noun]| \langle \synsetTikZ{patentee}{1}{n} \rangle & |[name=nounMappingSuperClass]| : [ \subsumptionMappingTikZOfConcept{\SUMOClassTikZ{LegalAgent}} ] \\[-20pt] |[name=verbMapping]| [ X ] & & & & |[name=nounMapping]| [ Y ] \\ }; \draw[-To,dotted] (verbMapping) -- (verbMappingSuperClass); \draw[-To,dotted] (nounMapping) -- (nounMappingSuperClass); \draw[-latex] (verb) -- node[auto] {\(\langle agent \rangle\)} (noun); \draw[-To,dotted] (verbMapping) -- node[auto] {[$\SUMOIndividualRelationTikZ{agent}$]?} (nounMapping); \end{tikzpicture} \caption{Process pattern: verb and noun synsets connected by {\it{subsumption}}{}/{\it{instance}}{}} \label{fig:ProcessPattern4} \end{figure} \item The third question pattern is focused on relation-pairs where the verb synset is connected by {\it{subsumption}}{} or {\it{instance}}{}, while the noun synset is connected by {\it{equivalence}}{}, as the example in Figure \ref{fig:ProcessPattern3}. In this case, the truth-tests of the proposed problems state that all of the SUMO{} objects that can be assigned to the noun synset are the agent/instrument/result of some SUMO{} object that can be assigned to the verb synset, which corresponds to the second half of the problem proposed by the first process question pattern (see conjecture (\ref{CQ:AgentEducationalProcessTeacher})). Using this third question pattern, we obtain 137 problems, from which 27, 28 and 82 problems belong respectively to the Agent, Instrument and Result categories. \item The last question pattern is focused on relation-pairs where both the verb and noun synsets are connected by {\it{subsumption}}{} or {\it{instance}}{}, such as the example in Figure \ref{fig:ProcessPattern4}. The truth-tests of the proposed problems state that some of the SUMO{} objects that can be assigned to the verb synset have some of the SUMO{} objects that can be assigned to the noun synset as agent/instrument/result. 
From this question pattern, we obtain 1,618 problems, from which 663 problems belong to the Agent category, 287 problems to the Instrument category and 668 problems to the Result category. \end{itemize} In total, we obtain 829 problems for the Agent category, 348 problems for the Instrument category and 788 problems for the Result category. \begin{sidewaystable} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{ll;{2.5pt/2.5pt}rrrr;{2.5pt/2.5pt}rrrr;{2.5pt/2.5pt}rrrr} \hline \multicolumn{2}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{Problem category}} & \multicolumn{4}{c;{2.5pt/2.5pt}}{{\bf TPTP-SUMO{} v5.3.0}} & \multicolumn{4}{c;{2.5pt/2.5pt}}{{\bf Adimen-SUMO{} v2.2}} & \multicolumn{4}{c}{{\bf Adimen-SUMO{} v2.6}} \\ \multicolumn{2}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{}} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{\%} & \multicolumn{1}{c}{T} & \multicolumn{1}{c;{2.5pt/2.5pt}}{E} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{\%} & \multicolumn{1}{c}{T} & \multicolumn{1}{c;{2.5pt/2.5pt}}{E} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{\%} & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{E}\\ \hline \multicolumn{2}{l;{2.5pt/2.5pt}}{{\bf Truth-tests}} & & & & & & & & & & & & \\ & Multiple Mapping (151) & 0 & -~~ & - s. & - & 0 & -~~ & - s. & - & 23 & 15.23\% & 81.72 s. & 0.13 \\ & Event \#1 (24) & 0 & -~~ & - s. & - & 0 & -~~ & - s. & - & 0 & -~~ & - s. & - \\ & Event \#2 (350) & 82 & 23.43\% & 68.09 s. & 0.94 & 83 & 23.71\% & 0.66 s. & 5.46 & 83 & 23.71\% & 0.54 s. & 3.85 \\ & Event \#3 (2,011) & 580 & 28.84\% & 20.73 s. & 0.62 & 580 & 28.84\% & 0.97 s. & 5.49 & 582 & 28.94\% & 1.67 s. & 2.76 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Mapping (2,536)} & 662 & 26.10\% & 26.60 s. & 0.66 & 663 & 26.14\% & 0.93 s. & 5.48 & 688 & 27.13\% & 4.21 s. & 2.78 \\ \hdashline[2.5pt/2.5pt] & Antonym \#1 (71) & 12 & 16.90\% & 4.55 s. & 1.93 & 24 & 33.80\% & 6.87 s. & 2.59 & 44 & 61.97\% & 103.42 s. & 1.16 \\ & Antonym \#2 (489) & 66 & 13.50\% & 103.90 s. & 0.27 & 133 & 27.20\% & 22.82 s. & 0.15 & 193 & 39.47\% & 77.22 s. & 0.05 \\ & Antonym \#3 (2,444) & 83 & 3.40\% & 125.71 s. & 0.05 & 149 & 6.10\% & 56.95 s. & 0.23 & 686 & 28.07\% & 46.36 s. & 0.08 \\ & Agent (829) & 4 & 0.48\% & 62.22 s. & 0.24 & 7 & 0.84\% & 119.55 s. & 3.17 & 39 & 4.70\% & 6.28 s. & 0.49 \\ & Instrument (348) & 1 & 0.29\% & 236.39 s. & 0.00 & 1 & 0.29\% & 3.60 s. & 0.28 & 61 & 17.53\% & 45.61 s. & 0.23 \\ & Result (788) & 4 & 0.51\% & 332.67 s. & 0.01 & 11 & 1.40\% & 124.05 s. & 1.06 & 94 & 11.93\% & 11.04 s. & 0.29 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Competency (4,969)} & 170 & 3.42\% & 112.72 s. & 0.27 & 325 & 6.54\% & 42.74 s. & 0.46 & 1,117 & 22.48\% & 49.53 s. & 0.10 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Total (7,505)} & 832 & 11.09\% & 44.19 s. & 0.58 & 988 & 13.16\% & 14.68 s. & 3.83 & 1,805 & 24.05\% & 32.25 s. & 1.12 \\ \hline \multicolumn{2}{l;{2.5pt/2.5pt}}{{\bf Falsity-tests}} & & & & & & & & & & & & \\ & Multiple Mapping (151) & 1 & 0.66\% & 233.92 s. & 0.00 & 3 & 1.99\% & 15.08 s. & 0.07 & 2 & 1.32\% & 230.74 s. & 0.59 \\ & Event \#1 (24) & 0 & -~~ & - s. & - & 1 & 4.17\% & 17.73 s. & 0.06 & 7 & 29.17\% & 42.40 s. & 0.02 \\ & Event \#2 (350) & 0 & -~~ & - s. & - & 27 & 7.71\% & 33.13 s. & 0.04 & 131 & 37.43\% & 36.53 s. & 0.14 \\ & Event \#3 (2,011) & 0 & -~~ & - s. & - & 0 & -~~ & - s. & - & 646 & 32.12\% & 22.27 s. & 0.53 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Mapping (2,536)} & 1 & 0.04\% & 233.92 s. & 0.00 & 31 & 1.22\% & 30.89 s. 
& 0.04 & 786 & 30.99\% & 25.36 s. & 0.46 \\ \hdashline[2.5pt/2.5pt] & Antonym \#1 (71) & 0 & -~~ & - s. & - & 1 & 1.41\% & 4.35 s. & 0.23 & 4 & 5.63\% & 3.66 s. & 0.28 \\ & Antonym \#2 (489) & 25 & 5.11\% & 16.38 s. & 2.06 & 23 & 4.70\% & 25.36 s. & 5.10 & 21 & 4.29\% & 0.27 s. & 4.08 \\ & Antonym \#3 (2,444) & 13 & 0.53\% & 23.04 s. & 0.04 & 14 & 0.57\% & 16.91 s. & 0.06 & 1 & 0.04\% & 68.91 s. & 0.01 \\ & Agent (829) & 5 & 0.60\% & 205.67 s. & 0.01 & 2 & 0.24\% & 268.67 s. & 0.03 & 3 & 0.36\% & 402.85 s. & 0.03 \\ & Instrument (348) & 0 & -~~ & - s. & - & 2 & 0.57\% & 15.70 s. & 0.06 & 1 & 0.29\% & 595.03 s. & 0.00 \\ & Result (788) & 3 & 0.38\% & 249.40 s. & 0.00 & 12 & 1.52\% & 49.04 s. & 0.07 & 11 & 1.40\% & 186.29 s. & 0.28 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Competency (4,969)} & 46 & 0.93\% & 54.03 s. & 1.13 & 54 & 1.09\% & 36.70 s. & 2.21 & 41 & 0.83\% & 96.15 s. & 2.35 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Total (7,505)} & 47 & 0.63\% & 57.86 s. & 1.11 & 85 & 1.13\% & 34.58 s. & 1.42 & 827 & 11.02\% & 30.57 s. & 0.55 \\ \hline \multicolumn{2}{l;{2.5pt/2.5pt}}{{\bf Total (15,010)}} & {\bf 879} & {\bf 5.85\%} & {\bf 44.92 s.} & {\bf 0.61} & {\bf 1,073} & {\bf 7.14\%} & {\bf 16.26 s.} & {\bf 3.64} & {\bf 2,632} & {\bf 17.53\%} & {\bf 31.19 s.} & {\bf 0.94} \\ \hline \end{tabular} } \caption{\label{table:CompetencyComparison} Evaluating the competency of SUMO{} ontologies using Vampire v3.0} \end{sidewaystable} \section{Experimentation} \label{section:experimentation} Based on the set of CQs proposed in the previous sections, we have performed several experiments in order to evaluate the competency of SUMO{} based ontologies, the mapping between SUMO{} and {WordNet}{}, and the performance of FOL ATPs by following the methodology described in Section \ref{section:methodology}. In this experimentation, we have used an Intel\textregistered~Xeon\textregistered~CPU E5-2640v3@2.60GHz with 2GB of RAM per processor. For each CQ, we provide an ontology and the given conjecture as input to the ATP system. In the following subsections, we report on the results of these experiments.\footnote{The Adimen-SUMO{} ontology, the set of CQs and all the execution reports are freely available at \url{http://adimen.si.ehu.es}.} Additionally, we have manually analyzed some of the tests in order to evaluate the proposed CQs, as reported in the last subsection. \subsection{Evaluating the competency of SUMO{} based ontologies} In this subsection, we report on the evaluation of the competency of TPTP-SUMO{} and Adimen-SUMO{}. In the case of Adimen-SUMO{}, we also evaluate the improvement between two different versions: Adimen-SUMO{} v2.2, which is the first version we proposed, and Adimen-SUMO{} v2.6. Table \ref{table:CompetencyComparison} summarizes some results of the ATP Vampire v3.0 when evaluating TPTP-SUMO{} and Adimen-SUMO{} (v2.2 and v2.6) with an execution time limited to 600 seconds. The selection of Vampire v3.0 is due to the fact that it is the most successful ATP system in the experimentation reported in \cite{ALR16} when using the set of CQs proposed in \cite{ALR15} for the evaluation of ATPs. The execution time limit is set to 600 seconds since it is the longest time limit that has been ever used in CASC and we have obtained good practical results using 600 seconds as time limit in our preliminary experiments. CQs have been organized in two main divisions: {\it truth-tests} and {\it falsity-tests}. 
In addition, each division is organized according to the two main problem categories introduced in the previous sections (and their corresponding subcategories): {\it Mapping}, for the validation of the mapping, and {\it Competency}, for the evaluation of the knowledge in the ontologies. For each ontology and each problem (sub)category (with the total number of problems between brackets), we provide the number (\# column) and percentage (\% column) of CQs that are proved, together with the average run time (T column) and the efficiency measure that is used in CASC. This efficiency measure balances the time taken for each problem solved against the number of problems solved and it is calculated as the average of the inverses of the times for problems solved. With respect to the {\it Mapping} categories, Adimen-SUMO{} v2.6 slightly outperforms TPTP-SUMO{} and Adimen-SUMO{} v2.2 in the truth-test division ---more passing tests (688 compared to 662 and 663)--- because there is almost no difference in the Event subcategories. On the contrary, no proof is found for the Multiple Mapping category of the truth-test division using TPTP-SUMO{} or Adimen-SUMO{} v2.2, while 23 truth-tests can be classified as passing using Adimen-SUMO{} v2.6. In the case of falsity-tests, the difference is clearly larger: 786 non-passing tests using Adimen-SUMO{} v2.6 compared to 1 non-passing test using TPTP-SUMO{} and 31 non-passing tests using Adimen-SUMO{} v2.2. This result reveals that Adimen-SUMO{} v2.6 enables the detection of many defects in the mapping information which are not discovered using TPTP-SUMO{} or Adimen-SUMO{} v2.2. The results of the {\it Competency} categories show that Adimen-SUMO{} v2.6 is the most competent ontology. It outperforms TPTP-SUMO{} and Adimen-SUMO{} v2.2 in terms of competency in both the truth-test division ---more passing tests (1,117 compared to 170 and 325), since conjectures are expected to be entailed--- and the falsity-test division ---fewer non-passing tests (41 compared to 46 and 54), since conjectures are expected not to be entailed---. Further, Adimen-SUMO{} v2.6 is by far the most competent ontology in all the {\it Competency} subcategories of the truth-test division. On the contrary, TPTP-SUMO{} is the least competent ontology since Adimen-SUMO{} v2.2 clearly outperforms TPTP-SUMO{} in almost all the {\it Competency} subcategories of the truth-test division and the difference in the falsity-test division is not relevant. \begin{sidewaystable} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{lrrrrrrrrrrrrrrr} \hline \multicolumn{1}{c}{\multirow{2}{*}{{\bf Problem category}}} & \multicolumn{3}{c}{{\bf VP v2.6}} & \multicolumn{3}{c}{{\bf VP v3.0}} & \multicolumn{3}{c}{{\bf VP v4.0}} & \multicolumn{3}{c}{{\bf VP v4.1}} & \multicolumn{3}{c}{{\bf EP v2.0}} \\ \multicolumn{1}{c}{\multirow{2}{*}{}} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{E} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{E} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{E} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{E} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{E} \\ \hline Antonym \#1 (71) & 44 & 68.32 s. & 2.06 & 44 & 103.42 s. & 1.16 & 37 & 68.15 s. & 0.82 & 43 & 30.07 s. & 0.27 & 40 & 35.01 s. & 0.31 \\ Antonym \#2 (489) & 204 & 86.84 s. & 0.13 & 193 & 77.22 s. & 0.05 & 54 & 130.27 s. & 0.05 & 119 & 144.33 s. & 0.11 & 150 & 145.10 s. & 0.03 \\ Antonym \#3 (2,444) & 1,086 & 182.29 s. & 0.15 & 686 & 46.36 s.
& 0.07 & 431 & 60.32 s. & 0.06 & 851 & 194.74 s. & 0.10 & 256 & 324.63 s. & 0.01 \\ Agent (829) & 43 & 14.93 s. & 0.46 & 39 & 6.28 s. & 0.49 & 10 & 22.94 s. & 0.09 & 23 & 318.42 s. & 0.17 & 12 & 394.34 s. & 0.13 \\ Instrument (348) & 61 & 3.36 s. & 0.33 & 61 & 45.61 s. & 0.23 & 25 & 64.65 s. & 0.10 & 26 & 404.10 s. & 0.02 & 2 & 381.84 s. & 0.01 \\ Result (788) & 118 & 4.46 s. & 0.35 & 94 & 11.04 s. & 0.29 & 52 & 74.86 s. & 0.09 & 64 & 294.91 s. & 0.12 & 39 & 418.10 s. & 0.01 \\ \hdashline[2.5pt/2.5pt] Truth-tests (4,969) & 1,556 & 141.43 s. & 0.19 & 1,117 & 49.53 s. & 0.10 & 609 & 67.80 s. & 0.12 & 1,126 & 196.18 s. & 0.11 & 499 & 256.66 s. & 0.05 \\ \hdashline[2.5pt/2.5pt] Multiple Mapping (151) & 3 & 271.55 s. & 0.52 & 2 & 230.74 s. & 0.59 & 2 & 55.36 s. & 0.02 & 3 & 85.13 s. & 0.02 & 0 & - s. & - \\ Event \#1 (24) & 5 & 128.66 s. & 0.35 & 7 & 42.40 s. & 0.02 & 3 & 388.45 s. & 0.00 & 4 & 250.19 s. & 0.01 & 5 & 357.09 s. & 0.00 \\ Event \#2 (350) & 117 & 58.43 s. & 0.15 & 131 & 36.53 s. & 0.14 & 38 & 173.34 s. & 0.04 & 41 & 88.12 s. & 0.09 & 52 & 306.44 s. & 0.00 \\ Event \#3 (2,011) & 646 & 35.05 s. & 0.73 & 646 & 22.27 s. & 0.53 & 104 & 120.23 s. & 0.09 & 83 & 62.00 s. & 0.02 & 190 & 271.18 s. & 0.00 \\ \hdashline[2.5pt/2.5pt] Falsity-tests (2,536) & 771 & 40.12 s. & 0.63 & 786 & 25.36 s. & 0.46 & 147 & 136.56 s. & 0.08 & 131 & 76.45 s. & 0.05 & 247 & 280.34 s. & 0.00 \\ \hline {\bf Total (7,505)} & {\bf 2,327} & {\bf 107.86 s.} & {\bf 0.30} & {\bf 1,903} & {\bf 39.54 s.} & {\bf 0.19} & {\bf 756} & {\bf 81.56 s.} & {\bf 0.11} & {\bf 1,257} & {\bf 183.70 s.} & {\bf 0.11} & {\bf 746} & {\bf 264.50 s.} & {\bf 0.03} \\ \hline \end{tabular} } \caption{\label{table:ATPComparison} Evaluating the performance of FOL ATPs} \end{sidewaystable} Regarding efficiency, Adimen-SUMO{} v2.2 is the most efficient ontology and TPTP-SUMO{} the least efficient one according to the efficiency values: 3.64 (Adimen-SUMO{} v2.2), 0.94 (Adimen-SUMO{} v2.6) and 0.61 (TPTP-SUMO{}). The fact that Adimen-SUMO{} v2.2 outperforms Adimen-SUMO{} v2.6 in terms of efficiency is not surprising since Adimen-SUMO{} v2.2 encodes less knowledge and is less competent than Adimen-SUMO{} v2.6. Similarly, the average run times with Adimen-SUMO{} are in general shorter than those with TPTP-SUMO{}, especially in the truth-test division: 14.68 s. (Adimen-SUMO{} v2.2) and 32.25 s. (Adimen-SUMO{} v2.6) against 44.19 s. (TPTP-SUMO{}). At the same time, the average run times with Adimen-SUMO{} v2.6 are longer than the ones with Adimen-SUMO{} v2.2. These facts lead us to think that the problems that are only solved using Adimen-SUMO{} v2.6 require complex and long proofs and, additionally, they confirm the improvement of Adimen-SUMO{} v2.6 in terms of competency. \subsection{Evaluating the performance of FOL ATPs} According to the results reported in the previous section, most of the truth-tests belonging to the {\it Mapping} categories are solved in less than 2 seconds: more concretely, all the proved truth-tests belonging to the Event category. Furthermore, in additional preliminary experiments using the remaining ATPs, we checked that most of the proofs were trivial ---just involving 2 or 3 axioms--- and that there were no significant differences between the considered ATPs. Consequently, we conclude that those CQs do not enable a suitable evaluation of ATPs. Additionally, very few falsity-tests belonging to the {\it Competency} categories are proved (less than 1\% of CQs).
Further, the remaining ones are not likely to be entailed by Adimen-SUMO{}, as confirmed in Subsection \ref{subsection:evaluatingAdimenSUMO}. Therefore, we concentrate on the {\it Competency} categories of the truth-test division and the {\it Mapping} categories of the falsity-test division. In Table \ref{table:ATPComparison}, we summarize some figures from the evaluation of the different versions of Vampire (VP)\footnote{Using the following parameters: \tt{--proof tptp --output\_axiom\_names on --mode casc -t 600 -m 2048}.} and E (EP)\footnote{Using the following parameters: \tt{--auto --proof-object -s --cpu-limit=600 --memory-limit=2048 }.} introduced in Subsection \ref{subsection:ATPs} using Adimen-SUMO{} v2.6. For each ATP, we provide the number of proofs (\# column), the average run times (T column) and the CASC efficiency measure (E column) in each problem subcategory. Globally, Vampire v2.6 is the most effective ATP according to the total number of proofs (2,327 proofs), with a difference of 424 and 1,070 proofs with respect to Vampire v3.0 (second place) and Vampire v4.1 (third place) respectively. This result is different from our preliminary evaluation of ATPs reported in \cite{ALR16}. In that evaluation, Vampire v3.0 was the most effective ATP and Vampire v2.6 obtained nearly the same number of proofs for the set of CQs proposed in \cite{ALR15}, which is different from the set of CQs introduced in this work. With respect to the remaining ATPs (Vampire v4.0 and E v2.0), the number of proofs is clearly smaller. Regarding each division and problem subcategory, Vampire v2.6 is the winner in the truth-test division (1,556 proofs) and in all the truth-test problem subcategories. On the contrary, Vampire v3.0 is the winner in the falsity-test division (786 proofs) and in the Event \#1 and Event \#2 subcategories. In the case of the falsity-test division, the differences between the two most effective ATPs (Vampire v3.0 and v2.6) are smaller than the differences between the two most effective ATPs in the truth-test division (Vampire v2.6 and v4.1), but the difference between the two most effective ATPs and the remaining ones is clearly larger. The analysis of efficiency is more disparate: \begin{itemize} \item According to the CASC efficiency measure, Vampire v2.6 is also the most efficient in almost all the divisions and (sub)categories followed by Vampire v3.0, although Vampire v4.0 and Vampire v4.1 outperform Vampire v3.0 in the truth-test division. On the contrary, E is the least efficient ATP in both the truth- and the falsity-test divisions. \item Vampire v3.0 is the ATP with the lowest average run time (39.54 s.) followed by Vampire v4.0 (81.56 s.) and Vampire v2.6 (107.86 s.). However, Vampire v4.0 proves fewer CQs in comparison with Vampire v2.6, which in general is faster than Vampire v4.0 except for the third problem subcategory of Antonym (182.29 s. compared to 60.32 s.) and the Multiple Mapping category (271.55 s. compared to 55.36 s.). \item Vampire v4.0 is more efficient (lower average run time and higher efficiency value) than Vampire v4.1 and E v2.0, although Vampire v4.1 and E v2.0 are the two fastest systems in the first subcategory of Antonym problems (Antonym \#1), and Vampire v4.1 also performs faster than Vampire v4.0 in all the Event subcategories of the falsity-test division. \end{itemize} To sum up, we can conclude that our set of proposed CQs is highly heterogeneous, enabling the evaluation of a wide range of features of state-of-the-art ATPs.
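As a side note, the average run time and the CASC-style efficiency measure reported in Tables \ref{table:CompetencyComparison} and \ref{table:ATPComparison} can be reproduced from the individual run times as follows; this is a minimal Python sketch of the measure as described in the previous subsection, where the data layout and names are our own illustrative assumptions rather than the actual evaluation scripts.
\begin{footnotesize}
\begin{verbatim}
# Minimal sketch of the reported measures for one problem (sub)category.
# 'run_times' maps each CQ identifier to its run time in seconds when the
# conjecture was proved, or to None when no proof was found within the limit.
def average_run_time(run_times):
    solved = [t for t in run_times.values() if t is not None]
    return sum(solved) / len(solved) if solved else None

def casc_efficiency(run_times):
    # Average of the inverses of the run times of the solved problems.
    solved = [t for t in run_times.values() if t is not None and t > 0]
    return sum(1.0 / t for t in solved) / len(solved) if solved else None
\end{verbatim}
\end{footnotesize}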
\subsection{Evaluating Adimen-SUMO{} v2.6 and its Mapping from {WordNet}{}} \label{subsection:evaluatingAdimenSUMO} \begin{sidewaystable} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{ll;{2.5pt/2.5pt}rrrr;{2.5pt/2.5pt}rrrrr;{2.5pt/2.5pt}rrrr} \hline \multicolumn{2}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{{\bf Problem category}}} & \multicolumn{4}{c;{2.5pt/2.5pt}}{{\bf Proofs}} & \multicolumn{5}{c;{2.5pt/2.5pt}}{{\bf Coverage}} & \multicolumn{4}{c}{{\bf Difficulty}} \\ \multicolumn{2}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{}} & \multicolumn{1}{c}{\#} & \multicolumn{1}{c}{\%} & \multicolumn{1}{c}{T} & \multicolumn{1}{c;{2.5pt/2.5pt}}{E} & \multicolumn{1}{c}{{\bf N}} & \multicolumn{1}{c}{{\bf P}} & \multicolumn{1}{c}{{\bf S}} & \multicolumn{1}{c}{{\bf C}} & \multicolumn{1}{c;{2.5pt/2.5pt}}{{\bf F}} & \multicolumn{1}{c}{{\bf D}} & \multicolumn{1}{c}{{\bf N}} & \multicolumn{1}{c}{{\bf C}} & \multicolumn{1}{c}{{\bf F}} \\ \hline \multicolumn{2}{l;{2.5pt/2.5pt}}{{\bf Truth-tests}} & & & & & & & & & & & & & \\ & Multiple Mapping (151) & 27 & 17.88\% & 134.69 s. & 0.17 & 131 & 1.76\% & 23 & 106 & 25 & 0.48 & 14.44 & 8.19 & 6.26 \\ & Event \#1 (24) & 0 & 0.00\% & - s. & - & - & -~~~ & - & - & - & - & - & - & - \\ & Event \#2 (350) & 108 & 30.86\% & 0.26 s. & 4.21 & 196 & 2.64\% & 7 & 176 & 20 & 0.00 & 5.32 & 3.27 & 2.06 \\ & Event \#3 (2,011) & 582 & 28.94\% & 0.30 s. & 3.69 & 380 & 5.11\% & 88 & 378 & 2 & 0.00 & 1.78 & 1.32 & 0.45 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Mapping (2,536)} & 717 & 28.27\% & 5.35 s. & 3.61 & 606 & 8.15\% & 118 & 565 & 41 & 0.02 & 2.79 & 1.88 & 0.91 \\ \hdashline[2.5pt/2.5pt] & Antonym \#1 (71) & 44 & 61.97\% & 20.06 s. & 2.19 & 94 & 1.26\% & 26 & 67 & 27 & 0.05 & 4.14 & 1.77 & 2.36 \\ & Antonym \#2 (489) & 233 & 47.65\% & 65.27 s. & 0.13 & 601 & 8.08\% & 40 & 432 & 169 & 0.45 & 11.39 & 5.79 & 5.60 \\ & Antonym \#3 (2,444) & 1,167 & 47.75\% & 110.16 s. & 0.20 & 1,601 & 21.53\% & 652 & 1,121 & 480 & 0.44 & 13.91 & 8.86 & 5.05 \\ & Agent (829) & 43 & 5.19\% & 14.90 s. & 0.57 & 144 & 1.94\% & 14 & 102 & 42 & 0.42 & 11.46 & 6.53 & 4.93 \\ & Instrument (348) & 61 & 17.53\% & 3.34 s. & 0.39 & 161 & 2.16\% & 39 & 120 & 41 & 0.42 & 12.23 & 7.25 & 4.98 \\ & Result (788) & 118 & 14.97\% & 3.47 s. & 0.44 & 281 & 3.78\% & 32 & 200 & 81 & 0.43 & 11.49 & 6.30 & 5.19 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Competency (4,969)} & 1,666 & 33.53\% & 87.58 s. & 0.22 & 1,822 & 24.50\% & 803 & 1,266 & 556 & 0.43 & 13.00 & 7.94 & 5.06 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Total (7,505)} & 2,383 & 31.75\% & 63.47 s. & 1.24 & 1,989 & 26.74\% & 921 & 1,431 & 558 & 0.31 & 9.93 & 6.12 & 3.81 \\ \hline \multicolumn{2}{l;{2.5pt/2.5pt}}{{\bf Falsity-tests}} & & & & & & & & & & & & \\ & Multiple Mapping (151) & 3 & 1.99\% & 69.47 s. & 0.54 & 31 & 0.42\% & 1 & 20 & 11 & 0.33 & 11.33 & 6.67 & 4.67 \\ & Event \#1 (24) & 8 & 33.33\% & 176.04 s. & 0.27 & 93 & 1.25\% & 5 & 70 & 23 & 0.42 & 16.25 & 10.13 & 6.13 \\ & Event \#2 (350) & 131 & 37.43\% & 47.73 s. & 0.20 & 381 & 5.12\% & 45 & 305 & 76 & 0.41 & 14.35 & 8.16 & 6.19 \\ & Event \#3 (2,011) & 646 & 32.12\% & 31.22 s. & 0.76 & 466 & 6.27\% & 49 & 401 & 65 & 0.47 & 14.99 & 8.83 & 6.16 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Mapping (2,536)} & 788 & 25.45\% & 35.44 s. & 0.66 & 659 & 8.86\% & 100 & 541 & 118 & 0.46 & 14.89 & 8.73 & 6.16 \\ \hdashline[2.5pt/2.5pt] & Antonym \#1 (71) & 4 & 5.63\% & 1.88 s. 
& 0.77 & 30 & 0.40\% & 0 & 18 & 12 & 0.40 & 8.25 & 5.00 & 3.25 \\ & Antonym \#2 (489) & 23 & 4.70\% & 50.94 s. & 3.59 & 33 & 0.44\% & 0 & 20 & 13 & 0.43 & 2.96 & 1.57 & 1.39 \\ & Antonym \#3 (2,444) & 4 & 0.16\% & 401.56 s. & 1.79 & 47 & 0.63\% & 6 & 24 & 23 & 0.80 & 23.50 & 11.50 & 12.00 \\ & Agent (829) & 6 & 0.72\% & 273.96 s. & 0.01 & 53 & 0.71\% & 3 & 21 & 32 & 0.57 & 14.67 & 5.00 & 9.67 \\ & Instrument (348) & 1 & 0.29\% & 385.10 s. & 0.00 & 15 & 0.20\% & 4 & 5 & 10 & 0.60 & 15.00 & 5.00 & 10.00 \\ & Result (788) & 21 & 2.66\% & 286.12 s. & 0.16 & 121 & 1.63\% & 10 & 62 & 59 & 0.60 & 18.57 & 6.29 & 12.29 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Competency (4,969)} & 59 & 1.19\% & 183.44 s. & 1.68 & 199 & 2.68\% & 23 & 111 & 88 & 0.53 & 11.66 & 4.56 & 7.10 \\ \hdashline[2.5pt/2.5pt] \multicolumn{2}{l;{2.5pt/2.5pt}}{Total (7,505)} & 847 & 11.29\% & 45.88 s. & 0.75 & 764 & 10.27\% & 123 & 581 & 183 & 0.47 & 14.66 & 8.44 & 6.22 \\ \hline \multicolumn{2}{l;{2.5pt/2.5pt}}{{\bf Total (15,010)}} & {\bf 3,230} & {\bf 21.52\%} & {\bf 58.39 s.} & {\bf 1.11} & {\bf 2,149} & {\bf 28.90\%} & {\bf 1,044} & {\bf 1,542} & {\bf 607} & {\bf 0.35} & {\bf 11.16} & {\bf 6.72} & {\bf 4.44} \\ \hline \end{tabular} } \caption{\label{table:AdimenSUMOEvaluation} Evaluating the competency of Adimen-SUMO{} v2.6} \end{sidewaystable} Finally, we evaluate the competency of Adimen-SUMO{} v2.6 ---which consists of 7,437 axioms: 4,638 unit clauses (atomic formulas) and 2,799 formulae (non-atomic formulas)--- and its mapping from {WordNet}{}. For this purpose, all the ATP systems introduced in the above subsection have been used individually to experiment with the entire set of 15,010 CQs and then the outputs obtained from them have been jointly analyzed. In Table \ref{table:AdimenSUMOEvaluation}, we report the results of this experimentation and our joint analysis. These results are organized in three main parts ---Proofs, Coverage and Difficulty---, each of them consisting of several columns. In the first part (Proofs), we provide the number (\# column) of CQs that are proved by some of the ATPs, together with the corresponding percentage (\% column), the average run time (T column) and the efficiency measure (E column). In the second part (Coverage), we provide the following figures about the axioms that are used in some of the proofs provided by ATPs: \begin{itemize} \item The number (N column) and percentage (P column) of axioms that are used in some proofs. \item The number of axioms that are exclusively used in proofs of the corresponding problem subcategory (S column). \item The number of used unit clauses (C column) and formulae (F column). \end{itemize} In the last part (Difficulty), we provide some measures of how difficult it is to prove the CQs of each (sub)category: \begin{itemize} \item On one hand, we use the {\it problem difficulty rating} introduced in \cite{SuS01}, which is calculated as the ratio between the number of ATPs that fail to solve a conjecture ({\it failing rating contributors}) and the total number of ATPs that have been tried ({\it rating contributors}). Thus, this rating provides a value between 0 ({\it easy problems}, 0 failing contributors) and 1 (unknown or {\it difficult problems}, all the rating contributors are failing) for each CQ. In column D, we provide the average of the problem difficulty rating for the CQs that are successfully solved by at least one ATP among the five rating contributors. Consequently, the highest possible value in column D is 0.80, since the number of rating contributors is 5.
\item On the other hand, we report the average number of axioms (N column) that are used in each proof and the average number of unit clauses (C column) and formulae (F column). These values provide a measure about the amount and nature of knowledge that is required for solving a CQ: more concretely, the amount of explicit (unit clauses) and implicit knowledge (formulae) of the ontology that is used by ATPs. \end{itemize} Regarding the {\it Mapping} categories, we have proposed 2,536 problems, from which 1,505 have been successfully solved by ATPs (59.35\%). According to the results reported in Table \ref{table:AdimenSUMOEvaluation}, most of the truth-tests are easily solved by ATPs: in the second and third Event subcategories, the difficulty rating is 0.0 and the average runtime is 0.30 seconds or smaller. On the contrary, the difficulty of the falsity-tests is comparable to the difficulty of the CQs belonging to the {\it Competency} categories. Among the solved problems, the mapping information is validated by 717 truth-tests, while 788 falsity-tests enable the detection of some defects. For example, the synsets \synset{affirm}{3}{v} and \synset{affirmation}{2}{n} are related by \textPredicate{event} and respectively connected to \subsumptionMappingOfConcept{\SUMOClass{Communication}} and \equivalenceMappingOfConcept{\SUMOClass{Stating}}, from which we obtain the following truth-test: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{CQ:StatingCommunication} \tab\tab & ( \predicate{\$subclass} \; \constant{Stating} \; \constant{Communication} ) & \end{flalign} \end{footnotesize} \hspace{-5pt}The above CQ is entailed by Adimen-SUMO{} v2.6 since \SUMOClass{Stating} is a subclass of \SUMOClass{LinguisticCommunication}, which is in turn a subclass of \SUMOClass{Communication}. Therefore, the problem is decided to be {\it solved} and {\it entailed}, and we can conclude that the mapping of the synsets in the pair \pair{event}{\synset{affirm}{3}{v}}{\synset{affirmation}{2}{n}} is validated according to our criteria. On the contrary, we detect some defects in the mapping information of the synsets in \pair{event}{\synset{represent}{14}{v}}{\synset{representation}{1}{n}} as follows. Since \synset{represent}{14}{v} and \synset{representation}{1}{n} are connected to \equivalenceMappingOfConcept{\SUMOClass{Stating}} and \equivalenceMappingOfConcept{\SUMOClass{Imagining}} respectively, we propose the following falsity-test stating that its mapping is wrong: \vspace{-\baselineskip} \begin{footnotesize} \begin{flalign} \label{CQ:StatingImagining} \tab\tab & ( \connective{not} & \\ & \hspace{20pt} ( \predicate{equal} \; \constant{Stating} \; \constant{Imagining} ) ) & \nonumber \end{flalign} \end{footnotesize} \hspace{-5pt}ATPs can prove that the above CQ is entailed by Adimen-SUMO{} v2.6 since \SUMOClass{Stating} is a subclass of \SUMOClass{IntentionalProcess} and \SUMOClass{Dreaming} is a subclass of \SUMOClass{Imagining}, with \SUMOClass{IntentionalProcess} and \SUMOClass{Dreaming} being disjoint classes. Thus, the problem is decided to be {\it solved} and {\it incompatible} and it enables us to detect that the mapping information in the pair \pair{event}{\synset{represent}{14}{v}}{ \synset{representation}{1}{n}} is incorrect.
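For readability, the way in which we read the outcomes of the two tests of a problem throughout this section can be summarised with the following Python sketch; it only reflects our terminology and is not the actual evaluation code, and the precise decision criteria are those given in Section \ref{section:methodology}.
\begin{footnotesize}
\begin{verbatim}
# Minimal sketch of the terminology used for problem outcomes.  Each argument
# is True when the corresponding conjecture is proved to be entailed by the
# ontology within the time limit, and False otherwise.
def classify_problem(truth_test_proved, falsity_test_proved):
    if truth_test_proved and not falsity_test_proved:
        return "solved and entailed"      # passing truth-test
    if falsity_test_proved and not truth_test_proved:
        return "solved and incompatible"  # non-passing falsity-test
    if not truth_test_proved and not falsity_test_proved:
        return "unsolved"                 # both tests remain unknown
    return "both tests proved"            # would reveal an inconsistency
\end{verbatim}
\end{footnotesize}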
With respect to the {\it Competency} categories (4,969 problems), we can conclude that: \begin{itemize} \item The knowledge of Adimen-SUMO{} and {WordNet}{} seems to be well-aligned for 33.53\% of problems (1,666 truth-tests from the Antonym and Relation categories are solved). \item A quarter of the ontology (1,822 axioms, 24.50\% of total) has been used in the proof of the truth-tests. \item Only 1.19\% of problems (59 falsity-tests from the Antonym and Relation categories are proved) enable the detection of some failure or misalignment in the knowledge of Adimen-SUMO{} and {WordNet}{}, which involve a total of 199 axioms (2.68\% of the total). \item The knowledge about antonym concepts in {WordNet}{} is better covered by Adimen-SUMO{} than the knowledge about roles in events: 48.07\% of truth-tests from the Antonym category (1,444 proofs from 3,004 CQs) are proved against 11.30\% of truth-tests from the Relation category (222 proofs from 1,965 CQs). \end{itemize} In addition to evaluating the competency, {\it incompatible} problems (whose falsity-test is classified as non-passing) and {\it unsolved} problems (whose two tests are classified as unknown) provide useful information to improve the ontology. For example, ATPs find a proof neither for the CQ in (\ref{CQ:BirthDeath}) (truth-test) nor for its negation (falsity-test). By inspecting the ontology, it is easy to check that the SUMO{} classes \SUMOClass{Birth} and \SUMOClass{Death} are not axiomatized to be disjoint, as one would naturally expect. Thus, the problem consisting of (\ref{CQ:BirthDeath}) (truth-test) and its negation (falsity-test) enables the detection of missing knowledge in the ontology. Further, the quality of the problems belonging to the {\it Competency} categories can be measured through the following three indicators: \begin{itemize} \item The average difficulty rating (D column) of all the {\it Competency} subcategories in the truth-test division is at least 0.40 except for the first Antonym subcategory. In addition, the average difficulty rating is much higher (around 0.60) in the Agent, Instrument and Result subcategories of the falsity-test division, and further the maximum possible (0.80) in the last subcategory of Antonym. \item The average number of axioms that are used in each proof (N column of Difficulty part) is higher than 11 except for the case of the first Antonym subcategory (both divisions) and the second Antonym subcategory of the falsity-test division. This implies that substantial portions of the knowledge in the ontology are required for proving these CQs. \item The number of axioms that are used in proofs (N column of Coverage part) grows with (and is always greater than) the number of proofs (\# column). In addition, among the 921 axioms that are exclusively used in a single problem subcategory of the truth-test division (S column), 803 axioms correspond to its {\it Competency} categories. \end{itemize} We conclude that the first two indicators confirm that the proofs for the problems in the Antonym and Relation categories are not trivial, while the last reveals that ATPs are not repeatedly using a small subset of axioms of the ontology for constructing the proofs. Finally, from the results reported in Tables \ref{table:ATPComparison} and \ref{table:AdimenSUMOEvaluation} we can conclude that ATPs are able to prove different subsets of CQs.
In this sense, the number of truth-tests belonging to the {\it Competency} categories that are proved by at least one of the ATPs (1,666 truth-tests) is 7.07\% larger than the number of truth-tests that are proved by Vampire v2.6 (1,556 truth-tests), which is the most effective ATP. In particular, 233 truth-tests belonging to the second Antonym subcategory are proved by some of the ATPs while each ATP proves at most 204 truth-tests. Therefore, the number of CQs entailed by Adimen-SUMO{} could be larger than the number of proofs reported in Table \ref{table:AdimenSUMOEvaluation}. We could increase the number of proofs in our experiments by increasing the execution time and memory limit settings, by tuning the ATPs to Adimen-SUMO{} or by trying other ATP systems. \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l;{2.5pt/2.5pt}r;{2.5pt/2.5pt}rr;{2.5pt/2.5pt}rr;{2.5pt/2.5pt}rr;{2.5pt/2.5pt}rr;{2.5pt/2.5pt}rr} \hline \multicolumn{1}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{{\bf Problem category}}} & \multicolumn{1}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{{\bf Problems}}} & \multicolumn{2}{c;{2.5pt/2.5pt}}{{\bf Mapping}} & \multicolumn{6}{c;{2.5pt/2.5pt}}{{\bf Solutions}} & \multicolumn{2}{c}{{\bf Missing solutions}} \\ \multicolumn{1}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{}} & \multicolumn{1}{c;{2.5pt/2.5pt}}{\multirow{2}{*}{}} & \multicolumn{1}{c}{Correct} & \multicolumn{1}{c;{2.5pt/2.5pt}}{Incorrect} & \multicolumn{1}{c}{TT} & \multicolumn{1}{c}{FT} & \multicolumn{1}{c}{CM} & \multicolumn{1}{c}{IM} & \multicolumn{1}{c}{CK} & \multicolumn{1}{c;{2.5pt/2.5pt}}{IK} & \multicolumn{1}{c}{Knowledge} & \multicolumn{1}{c}{ATP} \\ \hline Multiple Mapping (151) & 1 & 1 (0) & 0 & 0 & 0 & - & - & - & - & 1 & 0 \\ Event \#1 (24) & 0 & - (-) & - & - & - & - & - & - & - & - & - \\ Event \#2 (350) & 1 & 1 (0) & 0 & 0 & 0 & - & - & - & - & 1 & 0 \\ Event \#3 (2,011) & 22 & 16 (6) & 6 & 7 & 7 & 8 & 6 & 14 & 0 & 7 & 1 \\ \hdashline[2.5pt/2.5pt] Mapping (2,536) & 24 & 18 (6) & 6 & 7 & 7 & 8 & 6 & 14 & 0 & 9 & 1 \\ \hdashline[2.5pt/2.5pt] Antonym \#1 (71) & 2 & 2 (2) & 0 & 2 & 0 & 2 & - & 1 & 0 & - & - \\ Antonym \#2 (489) & 3 & 2 (0) & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 2 & 0 \\ Antonym \#3 (2,444) & 27 & 8 (2) & 19 & 14 & 0 & 5 & 9 & 14 & 0 & 3 & 0 \\ Agent (829) & 5 & 4 (1) & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 3 & 0 \\ Instrument (348) & 2 & 2 (2) & 0 & 0 & 0 & - & - & - & - & 0 & 2 \\ Result (788) & 12 & 7 (4) & 5 & 1 & 0 & 1 & 0 & 1 & 0 & 6 & 0 \\ \hdashline[2.5pt/2.5pt] Competency (4,969) & 51 & 25 (11) & 26 & 19 & 0 & 10 & 9 & 19 & 0 & 13 & 2 \\ \hline {\bf Total (7,505)} & {\bf 75} & {\bf 43 (17)} & {\bf 32} & {\bf 26} & {\bf 7} & {\bf 18} & {\bf 15} & {\bf 33} & {\bf 0} & {\bf 22} & {\bf 3} \\ \hline \end{tabular} } \caption{\label{table:CQEvaluation} Detailed analysis of problems} \end{table} \subsection{A Complete Analysis of a Small Set of Problems} As we have already described in the above subsections, the proposed set of CQs is suitable for evaluating the competency of Adimen-SUMO{} and for detecting some mapping misalignments. These are some good indicators of the quality of the proposed CQs. However, a more detailed analysis requires a manual inspection of the conjectures, the mapping of the involved synsets, and the proofs obtained by ATPs. Thus, we have randomly selected a sample of 75 problems (1\%) following a uniform distribution. In Table \ref{table:CQEvaluation}, we summarize some figures of our detailed analysis in four main parts ---Problems, Mapping, Solutions and Missing solutions---.
In the first part (Problems), we provide the number of problems of each subcategory that have been randomly chosen. In the second part (Mapping, two columns), we provide the result of our quality analysis of the mapping between {WordNet}{} and Adimen-SUMO{}: the number of problems where both synsets are correctly connected to Adimen-SUMO{} (Correct column) and the number of problems such that at least one of the synsets is incorrectly connected (Incorrect column). In addition, we also provide the number of mappings where the two synsets are both correctly and {\it precisely} connected (Correct column, between brackets). Our criteria for classifying a mapping as {\it only correct} or as {\it correct and precise} are the following: on the one hand, we consider a mapping as {\it correct} if the semantics associated with the Adimen-SUMO{} concept and with the synset are compatible, and a correct mapping is also considered as {\it precise} if the semantics of the synset and the SUMO{} concept are equivalent. For example, the semantics of the noun synset \synset{machine}{1}{n} is {\it ``Any mechanical or electrical device that transmits or modifies energy to perform or assist in the performance of human tasks''} and the semantics of the SUMO{} class \SUMOClass{Machine} is {\it ``Machines are Devices that have a well-defined resource and result and that automatically convert the resource into the result''}. Thus, the mapping of \synset{machine}{1}{n} to \equivalenceMapping{\SUMOClass{Machine}} is classified as correct and precise. On the contrary, the semantics of the adjective synset \synset{homemade}{1}{a} is {\it ``made or produced in the home or by yourself''} and the semantics of the SUMO{} class \SUMOClass{Making} is {\it ``The subclass of Creation in which an individual Artifact or a type of Artifact is made''}. Hence, we classify the mapping of \synset{homemade}{1}{a} to \subsumptionMapping{\SUMOClass{Making}} as incorrect. On the other hand, we consider a mapping as {\it only correct} (that is, correct but not precise) when the semantics of the Adimen-SUMO{} concept is more general than the semantics of the synset. For example, the semantics of the verb synset \synset{machine}{1}{v} is {\it ``Turn, shape, mold, or otherwise finish by machinery''}. Hence, the mapping of \synset{machine}{1}{v} to \subsumptionMapping{\SUMOClass{Making}} is classified as only correct. In the third part (Solutions, six columns), we provide the number of solutions classified according to different criteria: \begin{itemize} \item In the first two columns (TT and FT columns), we provide the number of truth- and falsity-tests that are proved. \item In the next two columns, we provide the number of CQs where the mapping is correct ---both only correct or correct and precise--- (CM column) and incorrect (IM column) from the truth- and falsity-tests that are proved. \item In the last two columns (CK and IK columns), we provide the number of truth- and falsity-tests that are proved on the basis of knowledge that is classified as correct (CK column) and incorrect (IK column). 
\end{itemize} Finally, in the last part (Missing solutions, two columns) we sum up the results of our analysis of unsolved problems with a correct mapping ---either {\it only correct} or {\it correct and precise}---: the number of problems that cannot be solved because of lack of knowledge in Adimen-SUMO{} or due to a misalignment in the knowledge of {WordNet}{} and Adimen-SUMO{} (Knowledge column), and the number of problems that are entailed by the ontology although the ATPs do not find a proof within the given resource limits (ATP column). As reported at the bottom of Table \ref{table:CQEvaluation}, the synsets in 43 problems are decided to be correctly connected to Adimen-SUMO{} and, among them, the synsets in 17 problems are decided to be precisely connected to Adimen-SUMO{} (Correct column). Thus, some of the synsets are not correctly connected to Adimen-SUMO{} in 32 problems (Incorrect column). In total, 33 problems are solved: 26 problems are classified as entailed (TT column) and 7 problems are classified as incompatible (FT column). The knowledge of the ontology that is used in the proof of those 33 solved problems is decided to be correct (CK column) and, among them, the synsets in 18 problems are decided to be correctly connected (CM column). Consequently, from the 43 problems where the synsets are decided to be correctly connected, there are 25 unsolved problems and Adimen-SUMO{} lacks sufficient knowledge in 22 problems (Knowledge column). Thus, 3 problems that are entailed by Adimen-SUMO{} remain unsolved (ATP column). With respect to the 32 problems where some of the synsets are not correctly connected to Adimen-SUMO{}, 15 problems are solved (IM column) and the remaining unsolved problems (17) have not been analyzed since the resulting conjectures are senseless. Next, we summarize the main conclusions drawn from our detailed analysis: \begin{itemize} \item Almost two thirds of the problems with an incorrect mapping (20 of 32 problems) belong to the Antonym category, especially to the third Antonym subcategory (19 problems). This is mainly due to the poor mapping of {WordNet}{} adjectives and satellites. More concretely, many {WordNet}{} adjectives and satellites are connected to SUMO{} processes instead of SUMO{} attributes. \item Among the problems with a correct mapping, the number of problems with a precise mapping is very low (17 of 43 problems). However, this is not surprising due to the large difference between the number of concepts defined in Adimen-SUMO{} (3,407 concepts) and {WordNet}{} (117,659 synsets). \item Our evaluation results (i.e. number of solved problems) are penalized by the poor mapping of {WordNet}{} adjectives and satellites, especially in the third Antonym subcategory: 62.5\% of problems with a correct mapping are solved (5 of 8 problems) against 47.37\% of problems with an incorrect mapping (9 of 19 problems). \item The solutions of all the problems that have been solved (33 problems) are based on correct knowledge of the ontology (CK column), both for problems with a correct mapping and for those with an incorrect one. This means that we have not discovered incorrect knowledge in the ontology by inspecting the proofs provided by the ATPs. \item Most of the unsolved problems with a correct mapping (22 of 25 problems) are due to the lack of information in the ontology. 
However, we have also discovered 3 problems for which either the truth- or the falsity-test is entailed by Adimen-SUMO{} although it cannot be proved by ATPs within the given resources of time and memory. \end{itemize} \section{Conclusions and Future Work} \label{section:conclusions} Artificial Intelligence aims to provide computer programs with commonsense knowledge to reason about our world \cite{mccarthy1986applications}. This work offers a new practical approach towards automated commonsense reasoning with SUMO{}-based first-order logic (FOL) ontologies. Next, we review the main contributions and results reported in this paper and discuss future work. First, we have introduced a novel black-box testing methodology for FOL ontologies ---which is an evolved version of the methodology introduced in \cite{ALR15}--- that exploits {WordNet}{} and its mapping into SUMO{}. For this purpose, we have considered different interpretations of the mapping and selected the most productive option for our purposes. By following our proposal, we have obtained more than 7,500 problems (thus, more than 15,000 CQs). To the best of our knowledge, this is the largest set of problems proposed for SUMO{}-based ontologies. Secondly, we have experimentally evaluated the competency of various translations of SUMO{} into FOL ontologies ---TPTP-SUMO{}, Adimen-SUMO{} v2.2 and Adimen-SUMO{} v2.6---, the mapping between SUMO{} and {WordNet}{}, and the efficiency of several FOL ATPs. In our experimentation, we have checked the coverage of our set of problems by analyzing the axioms that are used in the proofs provided by the ATPs. Additionally, we have demonstrated that the proposed set of problems enables the evaluation of different features of the ATPs since each system is able to solve a different subset of problems using the same time and memory resources. Finally, we have manually evaluated the quality of a subset of the proposed problems when testing Adimen-SUMO{} v2.6. From our manual evaluation, we have detected a) some defects in the mapping of synsets (especially in the case of adjectives) and b) some solvable problems for which ATPs find no solution. We plan to propose the inclusion of our set of problems in the CSR domain of the TPTP problem library and in the set of eligible problems for the LTB division of CASC. All the resources that have been used and developed during this work are available in a single package, including:\footnote{The package is available at \url{http://adimen.si.ehu.es/web/AdimenSUMO}.} a) the ontologies; b) tools for the creation of tests, their experimentation and the analysis of results; and c) the resulting tests for each ontology and the output obtained from different ATPs. Regarding future work, our plan is to enlarge the proposed set of problems by following different strategies. Amongst others: \begin{itemize} \item By considering alternative proposals for the translation of the {WordNet}{}-SUMO{} mapping. \item By exploiting additional SUMO{} relations, such as meronymy, hyponymy, etc. Some preliminary work has already been introduced in \cite{AlR18,AGR18}. \item By exploiting other resources of knowledge such as EuroWordNet Top Ontology{} \cite{AAC08}, FrameNet{} \cite{REP06}, Predicate Matrix{} \cite{LLA16}, ConceptNet \cite{SpH12} or VisualGenome \cite{KZG16}. \item By following white-box testing strategies that focus on the particular representation of the knowledge \cite{AHL17}. 
\end{itemize} Furthermore, we also aim to exploit unsolved problems in order to improve Adimen-SUMO{}. For this purpose, we will have to analyze whether the classification of problems as unsolved is due to the lack of knowledge in Adimen-SUMO{}. If so, we would consider the possibility of enriching Adimen-SUMO{} by adding knowledge from {WordNet}{} or other resources. Additionally, {WordNet}{} itself and its mapping can be evaluated, for example by detecting synsets that are frequently involved in problems classified as incompatible. Finally, we plan to evaluate the knowledge in the Multilingual Central Repository (MCR) \cite{GLR12} and to check the utility of Adimen-SUMO{} v2.6 in Natural Language Processing (NLP) tasks that involve reasoning on commonsense knowledge \cite{Bos09}, such as Recognizing Textual Entailment (RTE) \cite{BoM06,DRS13,Abz17}, Natural Language Inference (NLI) \cite{BAP15} or Interpretable Semantic Textual Similarity (ISTS) \cite{LMG17}. \section*{Acknowledgements} We thank the anonymous reviewers for their valuable comments and suggestions. This work has been partially funded by the Spanish Projects TUNER (TIN2015-65308-C5-1-R) and COMMAS (TIN2013-46181-C2-2-R), the Basque Project LoRea (GIU15/30) and grant BAILab (UFI11/45). \section*{Bibliography} \bibliographystyle{abbrv}
\section{Introduction} Mechanical resonators in the quantum limit can be used to explore macroscopic quantum effects and the quantum-to-classical boundaries in such systems.\cite{Blencowe} To reach the quantum ground state of the mechanical systems, many approaches have been explored, including feedback cooling, dynamic back-action cooling, and cooling via quantum bits.\cite{Kippenberg, Metzger, QubitCooling} Mechanical resonators with a wide range of frequencies and high quality factors have been fabricated \cite{Harris, Cleland} and the interface between mechanical modes and other quantum systems has also been studied for quantum information processing.\cite{Tian2010} Among the various approaches to reaching the quantum ground state of a mechanical resonator, sideband cooling can be achieved by coupling the resonator to an optical or microwave cavity.\cite{Wilson-Rae2007, Marquardt2007, Genes, Clerk, Tian2009, Armour} With the cavity driven at the detuning $\Delta_{b}$, the cooling (heating) rate $\Gamma_{-}$ ($\Gamma_{+}$) can be written as \begin{equation} \Gamma_{\mp}=g_{0}^{2}\kappa_{0}/[\kappa_{0}^2/4+(\omega_{m}\pm\Delta_{b})^{2}]\label{eq:Gapm} \end{equation} where $\omega_{m}$ is the mechanical frequency, $\kappa_{0}$ is the cavity damping rate, and $g_{0}$ is the effective linear coupling between the mechanical mode and the cavity mode under the driving. The cooling rate reaches a maximum of $\Gamma_{-}=4g_{0}^{2}/\kappa_{0}$ when the cavity detuning is at the first red-sideband frequency with $-\Delta_{b}=\omega_{m}$. The cooling process corresponds to an energy up-conversion of the thermal phonons in the mechanical resonator to the cavity photons. Given the mechanical damping rate $\gamma_{m}$ and the thermal phonon number $n_{th}$, the steady state phonon number under the cavity cooling is \begin{equation} n_{ss}=(\Gamma_{+}+\gamma_{m}n_{th})/[(\Gamma_{-}-\Gamma_{+})+\gamma_{m}],\label{eq:nss} \end{equation} which is ultimately limited by the heating rate $\Gamma_{+}$. In recent experiments, the resolved-sideband regime with $\kappa_{0}\ll\omega_{m}$ has been demonstrated, which shows that it is promising to reach the quantum ground state in such systems.\cite{Kippenberg2009, Wang, Schliesser, Rocheleau, Teufel} Molecular adsorbates and crystal defects exist on the surface or in the bulk of mechanical resonators.\cite{Pohl, Phillips, Anderson} These structures can be modeled as two-level systems (TLS) and their acoustic and thermodynamic properties have been extensively studied in amorphous solids.\cite{Seoanez, Neeley} Recently, it was experimentally demonstrated that both the mechanical resonance and mechanical damping can be affected strongly by the TLS defects.\cite{Kuhn, Kippenberg2010} The vibration of a mechanical resonator modulates the asymmetric energy of a TLS via the deformation potential, and hence generates a coupling between the TLS and the mechanical mode. In a recent work,\cite{Remus} this coupling and its effect on the decoherence of the mechanical resonator have been thoroughly studied. In this work, we study the cavity cooling of a mechanical resonator in the presence of a TLS using the adiabatic elimination technique that is widely exploited in quantum optics. The energy spectrum of the coupled resonator-TLS system contains polariton states that are quite different from the harmonic oscillator spectrum of a bare mechanical mode.\cite{Blais} The cooling process, strongly depending on the energy spectrum, can hence be affected significantly.
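For later reference, the TLS-free expressions in Eqs.~(\ref{eq:Gapm}) and (\ref{eq:nss}) are straightforward to evaluate; the following minimal numerical sketch (illustrative only, with parameter values comparable to those adopted in Sec.~\ref{sec:results}) gives the optimal cooling rate and the corresponding steady state occupation of a bare resonator:
\begin{verbatim}
# Illustrative sketch of the bare-resonator cooling rate and steady-state
# occupation (units of the mechanical frequency; example values only).
w_m     = 1.0                 # mechanical frequency
kappa0  = 0.15 * w_m          # cavity damping rate
g0      = 0.05 * w_m          # effective linear coupling
gamma_m = 1.0e-6 * w_m        # intrinsic mechanical damping
n_th    = 10.0                # thermal phonon number
Delta_b = -w_m                # drive detuned to the first red sideband

G_minus = g0**2 * kappa0 / (kappa0**2 / 4 + (w_m + Delta_b)**2)  # cooling
G_plus  = g0**2 * kappa0 / (kappa0**2 / 4 + (w_m - Delta_b)**2)  # heating
n_ss = (G_plus + gamma_m * n_th) / ((G_minus - G_plus) + gamma_m)

print(G_minus, 4 * g0**2 / kappa0)  # both ~0.067: maximum cooling rate
print(n_ss)                         # ~1.6e-3 phonons for these values
\end{verbatim}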
As we will show, the ``simple'' approach of adding the TLS and its coupling directly to the cooling equation of the mechanical resonator does not describe the cooling accurately. We derive the master equation for the resonator-TLS system and study how the steady state will be affected by the properties of the TLS. This theory can be extended to study the cooling of a mechanical resonator coupled with more complicated structures such as multiple TLS's. The theory can also be extended to include the dynamics of the TLS in the cooling process. Our results can be used to analyze sideband cooling schemes for mechanical resonators when taking into account the defects. This paper is organized as follows. In Sec.~\ref{sec:system}, we present the Hamiltonian of the coupled system including the mechanical mode, the TLS, the cavity used in the cooling process, and the bath modes of the thermal reservoirs for each of the sub-systems. Then, in Sec.~\ref{sec:cooling}, we derive the cooling master equation for the coupled resonator-TLS system using the adiabatic elimination technique. The results from this master equation and the consequences of the presence of the TLS for the cooling will be discussed in detail in Sec.~\ref{sec:results}. Finally, discussions and conclusions will be presented in Sec.~\ref{sec:conclusion}. \section{The Coupled System\label{sec:system}} Our system consists of the mechanical resonator, the TLS defect, and a cavity that couples with the resonator via, e.g., an optomechanical force.\cite{Kippenberg} As is shown in Fig.~\ref{fig1}, the TLS couples with the resonator with the Hamiltonian \begin{equation} H_{\tau}=\hbar\omega_{m}a^{\dag}a+\hbar[\Delta_{z}/2+\lambda(a+a^{\dag})]\sigma_{z}+(\hbar\Delta_{x}/2)\sigma_{x}\label{eq:Htau} \end{equation} where $a$ ($a^{\dag}$) is the annihilation (creation) operator for the mechanical mode, $\sigma_{x,z}$ are the Pauli operators, $\Delta_{z}$ is the asymmetric energy of the TLS, and $\Delta_{x}$ is the tunneling matrix element between the two sites of the TLS. The mechanical displacement generates a strain tensor in the location of the defect, and hence generates a coupling between the resonator and the TLS. The coupling amplitude $\lambda$ is proportional to the deformation potential and the second order derivative of the mechanical displacement and has been studied in detail in previous work.\cite{Seoanez, Remus} For convenience of discussion, we rewrite the Hamiltonian as \begin{equation} H_{\tau}=\hbar\omega_{m}a^{\dag}a+(\hbar\omega_{z}/2)\bar{\sigma}_{z}+\hbar\bar{\lambda}(a\bar{\sigma}_{+}+a^{\dag}\bar{\sigma}_{-})\label{eq:Htau2} \end{equation} in terms of the rotated Pauli matrices $\bar{\sigma}_{x,z}$ with \begin{equation} \bar{\sigma}_{z}=(\Delta_{z}/\omega_{z})\sigma_{z}+(\Delta_{x}/\omega_{z})\sigma_{x}, \end{equation} where $\omega_{z}=\sqrt{\Delta_{z}^{2}+\Delta_{x}^{2}}$ is the frequency of the TLS and $\bar{\lambda}=\lambda (\Delta_{x}/\omega_{z})$ is the coupling constant projected in the rotated basis. Here, we have applied the rotating wave approximation to omit the term $\lambda(\Delta_{z}/\omega_{z})(a+a^{\dag})\bar{\sigma}_{z}$ and the counter-rotating terms which only have higher order effects on the cooling process. The Hamiltonian in Eq.~(\ref{eq:Htau2}) has the form of the Jaynes-Cummings model \cite{Blais, Raimond} that has an energy spectrum distinctly different from the phonon spectrum of a bare mechanical mode as is illustrated in Fig.~\ref{fig1}.
When adding the cavity mode into this system, we consider a strong red-detuned driving applied to the cavity, which generates an effective linear coupling between the mechanical mode and the cavity mode.\cite{Wilson-Rae2007, Tian2009} The total Hamiltonian can be written as \begin{equation} H_{t}=H_{\tau}+(-\hbar\Delta_{b})b^{\dagger}b+\hbar g_{0}(a+a^{\dagger})(b+b^{\dagger})\label{eq:Ht} \end{equation} where $\Delta_b$ is the cavity detuning and $b$ ($b^{\dag}$) is the annihilation (creation) operator of the cavity mode. In addition to the quantum components, each sub-system is subject to the environmental noise from a thermal bath, which plays an essential role in the cooling process. For example, the cavity couples with a continuous spectrum of bath modes $\{b_{k}\}$ with the coupling Hamiltonian \begin{equation} H_{cn}=\sum_k (c_k b^\dag b_{k}+c_k^{\star}b_{k}^\dag b)\label{eq:Hcn} \end{equation} where $c_k$ is the coupling coefficient. The coupling with the bath modes induces cavity damping with the damping rate $\kappa_{0}$. The mechanical resonator and the TLS also couple with bath modes in forms similar to that of $H_{cn}$. At the moment, no definite theory or experimental results are available to accurately describe the TLS decoherence. For simplicity, we will model the environment of the TLS as a thermal reservoir with the damping rate $\gamma_\tau$. Similarly, the intrinsic damping rate of the mechanical mode is assumed to be $\gamma_m$. \begin{figure} \includegraphics[width=8.5cm,clip]{fig1} \caption{\label{fig1}(Color online) (a) Mechanical resonator couples with TLS defects. (b) Double-well potential model for the TLS. (c) Energy spectrum of the coupled system. The eigenstates are the polariton doublets labeled as $|n\pm\rangle$. The solid (dashed) arrows indicate transitions between states of identical (opposite) polarizations.} \end{figure} \section{Cavity Cooling of the Coupled System\label{sec:cooling}} In this section, we study the cooling of the resonator-TLS system via a cavity mode driven by a red-detuned source. When the coupling between the resonator and the TLS is strong, the effect of the coupling on the eigenenergy spectrum of the coupled system can strongly affect the cooling process. We will study the cooling process using a master equation approach and derive the cooling equation by the adiabatic elimination technique. \subsection{Eigenbasis} The nonzero coupling between the mechanical resonator and the TLS modifies the eigenenergy spectrum of the resonator-TLS system, as is shown in Fig.~\ref{fig1}c. Let $|\uparrow,\downarrow\rangle$ be the eigenstates of the $\bar{\sigma}_{z}$ operator of the TLS and $|n\rangle$ be the Fock states of the mechanical mode. The eigenstates of the coupled system include the ground state $|0\downarrow\rangle$ and the polariton doublets \cite{Blais} \begin{equation} |n\alpha\rangle=c_{\alpha}^{n}|n\downarrow\rangle+s_{\alpha}^{n}|(n-1)\uparrow\rangle \label{eq:nalpha} \end{equation} with $n\ge1$ and $\alpha=\pm$. The coefficients of the eigenstates are \begin{eqnarray} c_{+}^{n}=-s_{-}^{n}=\cos(\delta_{n}/2) \nonumber \\ s_{+}^{n}=c_{-}^{n}=\sin(\delta_{n}/2) \label{coeff} \end{eqnarray} where $\cos(\delta_{n}/2)=\sqrt{(\omega_{tn}+\delta\omega)/2\omega_{tn}}$, $\delta\omega=\omega_{m}-\omega_{z}$ is the off-resonance between the resonator and the TLS, and $\omega_{tn}=\sqrt{\delta\omega^{2}+ 4\bar{\lambda}^{2}n} $.
The eigenenergies of the polariton states are \begin{equation} \omega_{n\alpha}=n\omega_m +(\alpha \omega_{tn}- \delta\omega)/2 \end{equation} for the $n$-th doublet. In the following, we write the quantum operators of each sub-system in terms of the eigenbasis of the coupled resonator-TLS system. For example, the mechanical annihilation operator $a$, which appears in the coupling between the cavity and the mechanical resonator in Eq.~(\ref{eq:Ht}), can be written in the eigenbasis as $a=\sum A_{\beta\alpha}^{(n)}\hat{O}_{n}^{\alpha\beta}$ with the operator \begin{equation} \hat{O}_{n}^{\alpha\beta}=|(n-1)\beta\rangle\langle n\alpha| \end{equation} defined with the polariton states, and the matrix elements \begin{equation} A_{\beta\alpha}^{(n)}=\sqrt{n}c_{\beta}^{n-1}c_{\alpha}^{n}+\sqrt{n-1}s_{\beta}^{n-1}s_{\alpha}^{n}.\label{eq:Aba} \end{equation} Similarly, the spin operators for the TLS can be written as $\bar{\sigma}_{\pm}=\sum \sigma_{\beta\alpha}^{(n)}\hat{O}_{n}^{\alpha\beta}$ with $\sigma_{\beta\alpha}^{(n)} = c_{\beta}^{n-1} s_{\alpha}^{n}$. The total master equation for this system including the cavity mode can be written as \cite{QO} \begin{equation} \frac{d\rho}{dt} = -\frac{i}{\hbar}[H_{t},\,\rho]+\frac{\kappa_{0}}{2}{\cal L}(b)\rho + \sum_{n,\alpha,\beta} \frac{\Gamma_0^{n\alpha\beta}}{2}{\cal L}_{0}^{n\alpha\beta}\rho\label{eq:rho} \end{equation} where ${\cal L}(o)\rho$ is defined as the Lindblad form for the operator $o$ with \begin{equation} {\cal L}(o)\rho=2 o\rho o^\dag - \rho o^\dag o - o^\dag o\rho, \end{equation} and the term \begin{equation} {\cal L}_{0}^{n\alpha\beta}=(n_{th}^{n\alpha\beta}+1){\cal L}(\hat{O}_{n}^{\alpha\beta})+n_{th}^{n\alpha\beta}{\cal L}(\hat{O}_{n}^{\alpha\beta\dag}) \end{equation} describes the damping between the states $|(n-1)\beta\rangle$ and $|n\alpha\rangle$ generated by the intrinsic noise reservoirs for the mechanical resonator and for the TLS. Given a flat spectrum for the intrinsic noise reservoirs with the mechanical damping rate $\gamma_{m}$ and the TLS damping rate $\gamma_{\tau}$ respectively, we can derive \begin{equation} \Gamma_0^{n\alpha\beta}=|A_{\beta\alpha}^{(n)}|^{2} \gamma_m+|\sigma_{\beta\alpha}^{(n)}|^{2}\gamma_{\tau} \end{equation} where $n_{th}^{n\alpha\beta}=(\exp{(\hbar\omega_{n\alpha\beta}/k_{B}T)}-1)^{-1}$ is the thermal occupation number for the energy separation \begin{equation} \omega_{n\alpha\beta}=\omega_{n\alpha}-\omega_{(n-1)\beta}\label{omn} \end{equation} between the states $|(n-1)\beta\rangle$ and $|n\alpha\rangle$. \subsection{Adiabatic Elimination and Master Equation} Let $\rho_{\tau}=\textrm{Tr}_{c}(\rho)$ be the reduced density matrix for the coupled resonator-TLS system by tracing over the cavity mode. The cooling master equation for the reduced density matrix can be derived by applying the adiabatic elimination technique \cite{Cirac} to Eq.~(\ref{eq:rho}) in the limit of $\kappa_{0}\gg g_{0}, \gamma_{m}, \gamma_{\tau}$. Under strong cavity damping, only density matrix components with low photon numbers: $\rho_{\tau}^{(mn)}=\langle m_{c}|\rho|n_{c}\rangle$ for $m_{c}, n_{c}=0, 1$, need to be considered. After integrating over the time variable $t$ for the duration $t>1/\kappa_{0}$, it can be shown that $\rho_{\tau}^{(01)}$, $\rho_{\tau}^{(10)}$, and $\rho_{\tau}^{(11)}$ adiabatically follow the time evolution of $\rho_{\tau}^{(00)}$. 
The terms $\rho_{\tau}^{(01)}$ and $\rho_{\tau}^{(10)}$ are of the first order of the small ratio $g_{0}/\kappa_{0}$ and the term $\rho_{\tau}^{(11)}$ is of the second order of $g_{0}/\kappa_{0}$. Hence, keeping all the terms to the second order of $g_{0}/\kappa_{0}$, we can use the time evolution of $\rho_{\tau}^{(00)}$ to approximate the cooling master equation for $\rho_{\tau}$. We derive \begin{eqnarray} \frac{d\rho_{\tau}}{dt} &=& -i[\widetilde{H}_\tau,\rho_{\tau}] /\hbar + \sum_{n,\alpha,\beta} \frac{\Gamma_0^{n\alpha\beta}}{2}{\cal L}_{0}^{n\alpha\beta}\rho_{\tau} \label{eq:rhot} \\ &+& \sum_{n,\alpha,\beta} |A_{\beta\alpha}^{(n)}|^{2}[\frac{\Gamma_{-,\alpha\beta}^{n}}{2} {\cal L}(\hat{O}_{n}^{\alpha\beta}) + \frac{\Gamma_{+,\alpha\beta}^{n}}{2}{\cal L}(\hat{O}_{n}^{\alpha\beta\dag})]\rho_{\tau} \nonumber \end{eqnarray} as the master equation for the coupled system. The first term in the equation includes the polariton Hamiltonian $\widetilde{H}_\tau=\sum_{n,\alpha }\hbar \widetilde{\omega}_{n\alpha} |n\alpha\rangle\langle n\alpha|$ with the modified polariton frequencies $\widetilde{\omega} _{n\alpha}$. The modified frequencies include small corrections of the second order of $g_{0}/\kappa_0$. The second term describes the decoherence due to the intrinsic noise reservoirs for the mechanical resonator and the TLS, as is given in Eq.~(\ref{eq:rho}). The last term in the above equation describes the cavity cooling in the eigenbasis with the cooling (heating) rates \begin{equation} \Gamma_{\mp,\alpha\beta}^{n}=g_{0}^{2}\kappa_{0}/[\kappa_{0}^2/4+(\omega_{n\alpha\beta}\pm\Delta_{b})^{2}],\label{eq:Ga12} \end{equation} where $\omega_{n\alpha\beta}$ is defined in Eq.~(\ref{omn}). The main differences between the above master equation and the standard cooling equation \cite{Wilson-Rae2007, Marquardt2007} are: 1. the cooling rates in Eq.~(\ref{eq:Ga12}) depend on the energy differences $\omega_{n\alpha\beta}$ which are state-dependent and are modified by the finite coupling constant $\bar{\lambda}$, compared with Eq.~(\ref{eq:Gapm}); 2. the matrix elements $A_{\beta\alpha}^{(n)}$, which are the projections of the annihilation operator $a$ in the polariton basis, can be \emph{quite} different from the factor $\sqrt{n}$ in the cooling equation for a bare mechanical mode. As we will see below, the matrix elements $A_{\beta\alpha}^{(n)}$ can play an essential role in the cooling process. For $\bar{\lambda} =0$, it can be shown that Eq.~(\ref{eq:rhot}) recovers the form of the cooling equation for a bare resonator.\cite{Wilson-Rae2007, Marquardt2007} For a finite $\bar{\lambda}$, the cooling process can be significantly altered. In the dispersive regime with $|\omega_m-\omega_z|\gg\bar{\lambda}$, $A_{\alpha\alpha}^{(n)}\approx\sqrt{n}$ for the transitions between states with identical polarization, but $A_{+-}^{(n)} = (\bar{\lambda}/\delta\omega)$ and $A_{-+}^{(n)}=O[(\bar{\lambda}/\delta\omega)^3]\sim0$ for the transitions between states with opposite polarization. The mechanical cooling therefore includes two separate cooling ladders, each involving states with identical polarization, plus an additional cooling for the TLS with a much smaller cooling rate $\sim\Gamma_{-}(\bar{\lambda}/\delta\omega)^{2}$, as is shown in Fig.~\ref{fig1}c.
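The suppression of the transitions between states of opposite polarization can be verified directly from the coefficients in Eq.~(\ref{coeff}) and the matrix elements in Eq.~(\ref{eq:Aba}); a minimal numerical sketch (with purely illustrative parameter values) in the dispersive regime is:
\begin{verbatim}
from math import sqrt

# Illustrative check of the matrix elements A_{beta,alpha}^{(n)} in the
# dispersive regime (units of the mechanical frequency; example values).
w_m, w_z, lam = 1.0, 0.7, 0.05   # |w_m - w_z| >> lam
dw = w_m - w_z                   # off-resonance

def cs(n):
    # cos and sin of the mixing angle for the n-th polariton doublet (n >= 1)
    w_tn = sqrt(dw**2 + 4 * lam**2 * n)
    c = sqrt((w_tn + dw) / (2 * w_tn))
    return c, sqrt(1.0 - c**2)

def A(beta, alpha, n):
    # projection of the phonon operator between |(n-1),beta> and |n,alpha>,
    # using c_+ = cos, s_+ = sin, c_- = sin, s_- = -cos (valid for n >= 2)
    cb, sb = cs(n - 1)
    ca, sa = cs(n)
    if beta < 0:
        cb, sb = sb, -cb
    if alpha < 0:
        ca, sa = sa, -ca
    return sqrt(n) * cb * ca + sqrt(n - 1) * sb * sa

n = 5
print(A(+1, +1, n), A(-1, -1, n))  # ~ sqrt(n): the two cooling ladders
print(A(+1, -1, n), A(-1, +1, n))  # ~ lam/dw and ~ 0: suppressed transitions
\end{verbatim}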
This is further confirmed by an analytical study of the cooling process in the dispersive regime where the cooling can be studied by applying the unitary transformation \cite{Blais} \begin{equation} U=\exp{[-(a\sigma_{+}-a^{\dag}\sigma_{-})\bar{\lambda}/\delta\omega]} \end{equation} to Eq.~(\ref{eq:rho}). After the transformation, the TLS becomes decoupled from the resonator but obtains an extra coupling with the cavity mode: \begin{equation} H_{\tau,c}=g_{0}(\bar{\lambda}/\delta\omega)(\sigma_{+}a+\sigma_{-} a^{\dag}) \end{equation} which is to the first order of the factor $\bar{\lambda}/\delta\omega$. This coupling generates cooling (polarization) of the TLS with a cooling rate $\sim\Gamma_{-}(\bar{\lambda}/\delta\omega)^{2}$. The cooling of the mechanical mode also recovers the results in Eqs.~(\ref{eq:Gapm}) and (\ref{eq:nss}). Detailed comparison shows that this analytical result agrees with the numerical solution for the steady state of Eq.~(\ref{eq:rhot}). In contrast, in the near-resonance regime with $|\omega_m-\omega_z|\sim0$, $A_{\alpha\pm\alpha}^{(n)} = (\sqrt{n}\pm\sqrt{n-1})/2$ for $n\ge2$ and $A_{1\pm1}^{(1)}=1/\sqrt{2}$. For large $n$, $A_{\alpha-\alpha}^{(n)}\rightarrow1/(4\sqrt{n})$, indicating the vanishing of the transitions between states with opposite polarization. For small $n$, by contrast, the transition matrix elements between states with identical and opposite polarizations are comparable, indicating a strong mixing of all the low-lying states in the eigenbasis. \section{Results\label{sec:results}} To illustrate the effect of the TLS on the cooling process, we numerically solve the steady state of Eq.~(\ref{eq:rhot}). The dependences of the steady state phonon number of the mechanical mode and the TLS polarization on the TLS frequency, the TLS damping rate, and the cavity detuning are studied. The parameters we choose are comparable with what has been achieved in recent experiments using microwave or optical cavities:\cite{Kippenberg2009, Blais} $\omega_{m}=200\, \textrm{MHz}$, mechanical damping rate $\gamma_{m}/\omega_{m}=10^{-6}$ ($Q_{m} = 10^{6}$), frequency of the TLS in the range of $\omega_{z}/\omega_{m}\in(0.5,\,1.5)$, TLS damping rate in the range of $\gamma_{\tau}/\omega_{m}\in 5\times (10^{-8},\,10^{-4})$, $\bar{\lambda}/\omega_{m}=0.05$, $g_{0}/\omega_{m}=0.05$, $\kappa_{0}/ \omega_{m} = 0.15$, and cavity detuning in the range of $-\Delta_{b}/\omega_{m}\in (0.5,\,1.5)$. The coupling constants $g_{0}$ and $\bar{\lambda}$ are chosen by estimating the geometry and material properties of the resonator.\cite{Remus} Given a bath temperature of $\sim 100\,\textrm{mK}$, the initial thermal phonon number can be $\approx 10$. \subsection{Dependence on TLS Frequency and Damping} \begin{figure} \includegraphics[width=8.5cm,clip]{fig2_v2} \caption{\label{fig:2}(a) $n_{ss}$ and (b) $\langle\bar{\sigma}_{z}\rangle_{ss}$ versus the ratio $\omega_{z}/\omega_{m}$ for $\gamma_{\tau}/\omega_{m}=2.5\times (10^{-4},10^{-5},10^{-6})$ from top to bottom. Dash-dotted curves: results for zero coupling $\bar{\lambda}=0$; dashed curves: results from the ``simple'' approach (see text in Sec.~\ref{sec:conclusion}) for $\gamma_{\tau}=2.5\,\omega_{m}\times 10^{-4}$.} \end{figure} We first study the steady state phonon number $n_{ss}$ and spin polarization $\langle\bar{\sigma}_{z}\rangle_{ss}$ as a function of the TLS frequency $\omega_{z}$. The cavity detuning is set to be at the first red sideband frequency with $-\Delta_{b}=\omega_{m}$.
In Fig.~\ref{fig:2}a, the steady state phonon number $n_{ss}$ is plotted. It can be seen that the mechanical cooling can be significantly degraded as the TLS frequency approaches the mechanical frequency. With a moderate TLS damping rate of $\gamma_{\tau}/\omega_{m} =2.5\times10^{-4}$ and a cryogenic temperature of $k_{B}T=10\,\hbar\omega_{m}$, $n_{ss}$ can be raised by nearly $50$ times due to the presence of the TLS when $\omega_z=\omega_m$; while $n_{ss}$ is only slightly increased when $\omega_z=0.5\, \omega_m$. When the two frequencies are in resonance, the thermal noise from the reservoir of the TLS can be effectively transferred to the mechanical resonator and causes severe heating of the mechanical mode. When there is a finite off-resonance between the two frequencies, the off-resonance generates an energy barrier that prevents the transfer (exchange) of energy between the two sub-systems. Hence, in the dispersive regime, the mechanical cooling is only slightly degraded. Meanwhile, the cavity cooling process extracts and damps the thermal noise in the TLS via its coupling with the mechanical mode. In Fig.~\ref{fig:2}b, the TLS polarization $\langle\bar{\sigma}_{z}\rangle_{ss}$ is plotted. The TLS is maximally polarized to approach the state $|\downarrow\rangle$ when $\omega_z\sim\omega_m$. With the above parameters, we have $\langle\bar{\sigma}_{z}\rangle_{ss}\approx-0.9$ at $\gamma_{\tau}=2.5\,\omega_{m}\times 10^{-4}$. A finite off-resonance prevents effective extraction of the thermal noise in the TLS in the dispersive regime. In Fig.~\ref{fig:2}a, we observe dips in the steady state phonon number $n_{ss}$ when the TLS frequency approaches the mechanical frequency $\omega_{m}$. The appearance of these dips reflects the partial reduction of the thermal noise transferred from the TLS to the mechanical resonator when the TLS is highly polarized. As a result, the heating of the mechanical mode by the presence of the TLS, although very effective as the two sub-systems are in resonance, is partially reduced. \begin{figure} \includegraphics[width=8.5cm, clip]{fig2_damping_v2} \caption{\label{fig:2damping}(a) $n_{ss}$ and (b) $\langle\bar{\sigma}_{z}\rangle_{ss}$ versus the ratio $\gamma_{\tau}/\omega_{m}$ for $\omega_{z}/\omega_{m}=1.3, 1, 0.7$ (diamond-dashed, solid, circle-solid).} \end{figure} We also study the dependence of the steady state behavior on the damping of the TLS. In Fig.~\ref{fig:2damping}, the steady state phonon number $n_{ss}$ and spin polarization $\langle\bar{\sigma}_{z}\rangle_{ss}$ are plotted as a function of $\gamma_{\tau}/\omega_{m}$. It can be seen that $n_{ss}$ increases sharply with the damping rate $\gamma_{\tau}$ when the TLS frequency is in resonance with the mechanical frequency, in contrast to the much slower increase when the two sub-systems are off-resonance. At a damping rate $\gamma_{\tau}/\omega_{m}>10^{-6}$, the TLS is only strongly polarized when it is in resonance with the mechanical mode (the solid curve in Fig.~\ref{fig:2damping}b). \subsection{Dependence on Cavity Detuning} Next, we study the dependence of the steady state properties on the cavity detuning. In the cavity cooling of a bare mechanical resonator, the optimal cavity detuning $\Delta_{b}^{(m)}$ to achieve best cooling (lowest achievable $n_{ss}$) is at the red sideband frequency: $-\Delta_{b}^{(m)}=\omega_{m}$. In our system due to the presence of the TLS, this behavior can be altered. 
Using Eq.~(\ref{eq:rhot}) and studying the steady state by varying the cavity detuning, we numerically obtain the optimal detuning $\Delta_{b}^{(m)}$ as a function of the TLS frequency $\omega_{z}$, which is plotted in Fig.~\ref{fig:3}a. It can be seen that the optimal detuning is shifted away from the mechanical resonance as the TLS frequency varies, and demonstrates a non-monotonic dependence on the TLS frequency. This dependence can be explained as a combined effect due to two factors. The first factor is the cooling of the mechanical resonator. Without the TLS, the cooling is optimal at the detuning $-\Delta_{b}=\omega_{m}$. The second factor is the polarization of the TLS, which is optimal when the cavity detuning is at the TLS resonance with $-\Delta_{b}=\omega_{z}$. Note that the polarization of the TLS reduces the heating of the mechanical mode by the thermal noise from the TLS and hence improves the cooling of the mechanical mode. When $\omega_{z}=\omega_{m}$, both factors reach their optimal value at the red sideband frequency, so that we have $-\Delta_{b}^{(m)}=\omega_{m}$. However, when $\omega_{z}\ne\omega_{m}$, the optimal cooling can be reached at a cavity detuning that balances the two factors, i.e. $-\Delta_{b}^{(m)}$ drifts to an intermediate value between $\omega_{m}$ and $\omega_{z}$. In the deep dispersive regime, the noise from the reservoir of the TLS is largely screened from affecting the mechanical resonator. The cooling is again dominated by the first factor so that $-\Delta_{b}^{(m)}\rightarrow\omega_{m}$ returns to the mechanical resonance. In Fig.~\ref{fig:3}b, the phonon number at the optimal detuning is plotted as a function of $\omega_{z}$, which can be compared with Fig.~\ref{fig:2}a. \begin{figure} \includegraphics[width=8.5cm,clip]{fig3_v2} \caption{\label{fig:3} (a) $\Delta_{b}^{(m)}/\omega_{m}$ and (b) $n_{ss}$ at cavity detuning $\Delta_{b}^{(m)}$ versus $\omega_{z}/\omega_{m}$ for $\gamma_{\tau}/ \omega_{m}=2.5\times (10^{-4},10^{-5},10^{-6})$ (circle, square, triangle) respectively. Thin straight line: results from the ``simple'' approach; dashed curve: results for zero coupling $\bar{\lambda}=0$.} \end{figure} \section{Discussions and conclusions\label{sec:conclusion}} One may ask whether the above theory gives different results from the ``simple approach'' in which the TLS and its coupling are directly added into the cooling master equation for a bare mechanical resonator. To see the differences between these two approaches, we study the steady state behavior using both the ``simple'' approach and the above theory. In Fig.~\ref{fig:2}, the results from the ``simple'' approach are plotted as dashed curves for $\gamma_{\tau}=2.5\,\omega_{m}\times 10^{-4}$. In Fig.~\ref{fig:3}a, the results from the ``simple'' approach are plotted as the thin straight line. We find that the dependence of the phonon number $n_{ss}$ on the TLS properties can be very different in the two approaches. In particular, as seen from Fig.~\ref{fig:3}a, the optimal cavity detuning $-\Delta_{b}^{(m)}$ in the ``simple'' approach always occurs at the first red sideband frequency; while in the above theory, $-\Delta_{b}^{(m)}$ can shift away from the red sideband frequency by as much as $10\%$ of the mechanical resonance. Note that the results from our theory agree well with the results from a brute-force solution of the total master equation in Eq.~(\ref{eq:rho}), which further confirms the validity of this theory.
We have discussed the effect of a single TLS on the cavity cooling and shown that the mechanical cooling can be degraded strongly when the frequency of the TLS falls within a narrow range near the mechanical resonance. Given the miniature size of the resonators, it is a reasonable scenario to assume that very few (e.g. one or two) TLS's exist in such a frequency regime in the entire sample.\cite{Seoanez, Remus} Furthermore, the theory developed above can also be extended to study the cooling of a resonator coupling with multiple TLS's by deriving a master equation for the total coupled system in their eigenbasis. We can also extend the theory to include the dynamics of the TLS defects. In conclusion, we have presented a theory for the cavity cooling of a mechanical resonator coupling with a TLS defect. The defect, subject to thermal noise, can add extra heating to the mechanical mode and affect the cooling process. We use the adiabatic elimination technique in the eigenbasis of the coupled resonator-TLS system and derive the cooling master equation for the coupled system. Our results show that the cooling of the resonator can be significantly affected by the thermal noise of the TLS when the TLS frequency approaches the mechanical frequency. This theory can also be extended to describe the cooling of mechanical resonators coupling with more complicated quantum structures such as multiple TLS's. \section*{Acknowledgements} This work is supported by the DARPA/MTO ORCHID program through AFOSR, NSF-DMR-0956064, NSF-CCF-0916303, and COINS.
\section{Introduction} There are currently four Earth-sized (0.9~R$_\oplus<$R$_p<$1.5~R$_\oplus$) planets in the habitable zone of their host stars that are amenable to atmospheric follow-up with the James Webb Space Telescope (\emph{JWST}): TRAPPIST-1d,e,f and LHS 1140b \citep{gillon2016temperate, gillon2017seven,dittmann2017temperate, grimm2018nature}. The Transiting Exoplanet Survey Satellite (\emph{TESS}), slated for launch in 2018, is expected to find dozens more of these temperate Earth-sized planets orbiting cool, nearby stars \citep{sullivan2015transiting}. In order to prepare for this new era of exoplanet characterization, studies have sought to assess the feasibility of detecting the atmospheres of temperate worlds, and to determine optimal observing strategies for yielding these constraints. One strategy for assessing the feasibility of characterizing temperate Earth-sized planets is to define a detection criterion and compute the number of transits needed to meet that criterion. For example, \citet{Batalha2015transiting} determine how many transmission spectra must be coadded to detect H$_2$O at S/N=15. \citet{louie2017simulated} performed a comprehensive analysis of the expected S/N return for the full \emph{TESS} planet yield observed via transmission spectroscopy using \emph{JWST}'s NIRISS SOSS. Most recently, \citet{morley2017observing} determine the number of transits needed to detect molecular features in Earth, Venus and Titan-like atmospheres at S/N=5. The consensus of this work indicates that 10$+$ transits must be co-added in order to yield significant S/N on molecular features of small Earth-like planets orbiting M-dwarf stars. In some cases though, up to 100 transits were needed to detect key atmospheric features \citep{morley2017observing}. Another strategy for assessing the observability of temperate planets is to use sophisticated retrieval algorithms to determine with what fidelity atmospheric properties can be constrained \citep[e.g.][]{benneke2012atmospheric, dewit2013constraining,Barstow2015transit,Greene2016characterizing}. \citet{benneke2012atmospheric} and \citet{Barstow2015transit} simulate observations of a GJ 1214-like system utilizing the NIRSpec prism \citep{dorner2016model}. \citet{dewit2013constraining} simulate observations of an Earth-like planet orbiting an M7V star at 15~pc utilizing the NIRSpec gratings. \citet{Greene2016characterizing} simulate observations of a mini-Neptune with NIRISS Single Object Slitless Spectroscopy (SOSS), the NIRCam long wave grism, and MIRI Low Resolution Spectroscopy (LRS). All of these studies offer insights into the kind of constraints we expect in \emph{JWST}-era spectra of exoplanets; however, they do not explore a wide range of planet types or observing strategies due to the computationally intensive nature of retrieval algorithms. To combat this, information content analysis has been leveraged to suggest ways to optimize the science yield of \emph{JWST} observations of a large variety of planets ranging from warm Neptunes to hot Jupiters \citep{batalha2016information,Howe2017information}. \citet{batalha2016information} suggest that the best modes for constraining gas-giant exoplanets' terminator temperature, metallicity, and C/O are the combination of NIRISS SOSS and NIRSpec G395H. This analysis focuses on targets brighter than J=10.5, and therefore does not include the NIRSpec prism (saturates at J=10.5).
Here, we extend this analysis toward temperate planets and include an in-depth analysis of the utility of the NIRSpec Prism to explore these worlds. We choose TRAPPIST-1 as a case study, with the goal of generally guiding observing strategies of temperate Earth-sized planets with \emph{JWST}. In \S\ref{sec:model} we describe our methodology for modeling instrument systematics, and transmission \& emission spectra. We also describe how we quantify the information content contained within an observation. In \S\ref{sec:results} we provide our results and discussions, and offer concluding remarks in \S\ref{sec:conc}. \section{Methods} \label{sec:model} \subsection{Instrument Simulations} When using \emph{JWST} to probe exoplanet atmospheres with high-precision time-series observations, the precision expected for each observing mode depends on the stellar energy distribution (SED) of the parent star. We first compute the potential systematic noise sources from each instrument mode using \texttt{PandExo}, described in \citet{batalha2017pandexo}. \texttt{PandExo} relies on the \emph{JWST} Exposure Time Calculator engine \texttt{Pandeia} \citep{Pontoppidan2016pandeia} to compute throughputs, realistic point spread functions and other instrumental effects. Both \texttt{PandExo} and \texttt{Pandeia} generally agree with the instrument teams' individual noise simulators to better than 10\% \citep{batalha2017pandexo}. First, we compute simulations for each time-series spectroscopy mode, using the standard readout patterns offered in Cycle~1. Then, we explore the expected performance for readout patterns and observing strategies that have been proposed. We bin all of our calculations to a resolving power of R=100 to facilitate direct comparisons between different instruments (later discussed in \S3.1). Lastly, we use a T=2550 K, M/H=0.4, and $\log g$=4.0 Phoenix stellar model \citep{husser2013new} for all calculations in this work because we are using TRAPPIST-1 as a case study. \subsection{Transmission \& Emission Spectra} Extensive theoretical work has been done to assess the climate, habitability, composition and detectability of the planets that transit TRAPPIST-1 \citep{Barstow2016habitable,dong2017atmospheric,bolmont2017water,unterborn2017constraining,wolf2017assessing,turbet2017modelling,morley2017observing}. Nevertheless, a model capable of predicting, \emph{a priori}, the atmospheric composition of these planets from the few known parameters (mass, radius, orbital properties) does not exist. Therefore, many of the predictions of the atmospheric composition of these planets are grounded in Solar System science. For example, \citet{morley2017observing} create an extensive grid of both primary and secondary transit spectra using the elemental ratios of Earth, Titan and Venus with different incident flux levels, surface pressures and albedos for each planet in their study. Due to the complexity and quantity of unknown parameters, here we do not aim to produce spectra that are chemically consistent in composition or temperature-pressure structure. Instead, in order to obtain estimates for constraints we might expect from a variety of atmospheres, we explore nine simple chemical prescriptions for each planet. This allows us to assess the impact of the quality and spectral coverage of \emph{JWST} data on how well atmospheric parameters can be constrained. Our transmission and emission model is described in \citep{batalha2016information,line2013near,Greene2016characterizing,line2016influence}.
All nine chemical scenarios considered here are composed of a combination of H$_2$O, CO$_2$, N$_2$ and CH$_4$, based on the dominant molecules in the atmospheres of rocky Solar System bodies and also the dominant molecules in the \citet{morley2017observing} grid. The main difference between the nine scenarios is the background gas: either H$_2$O-rich, N$_2$-rich or CO$_2$-rich. After the background gas, the three remaining species are added in equal quantities at trace levels (1\%, 0.01\% and 0.0001\%). Adding the gases in equal quantities allows any inability to detect a spectral feature to be attributed to data precision, spectral coverage, or masking by the dominant gas. All compositions are uniform with altitude. For each planet we explore temperature-pressure profiles, which we assume can be fully described by a 1D profile. For our 1D profiles we use a 5-parameter double-gray analytic formula \citep{guillot2010on, line2013near}, which for weakly irradiated systems approximates to $T_z^4 \approx 0.75\,T^4(p+2/3)$, where $p$ and $T_z$ are the height-dependent pressure and temperature. Using this scaling, we explore surface temperatures consistent with the full range of potential values given an Earth-like composition, and a range of pressures and Bond albedos from \citet{morley2017observing}. This range is particularly important in the analysis of emission spectra. We use surface temperatures of 200 K and 400 K to set our pessimistic and optimistic atmospheric constraints in emission, respectively. Clouds mute or mask the atmospheric features in transmission spectra \citep[e.g.][]{Kreidberg2014clouds, sing2016continuum}. For terrestrial planets, the models of the cloud microphysics for Earth \citep[e.g.][]{albrecht1989aerosols, tinsley2000influence}, Venus \citep[e.g.][]{knollenberg1980microphysics,allen1984cloud}, and Titan \citep[e.g.][]{mckay2001physical, rannou2006latitudinal} have all been guided by observations. For exoplanets, a general cloud model does not yet exist. Therefore, we use a grey opacity source at two different pressure levels (0.01 and 0.1 bars) to set our optimistic and pessimistic cases. Optimistically, we assume observations would be limited by the tropopause of the planet, located at 0.1 bars in the Solar System planets \citep{robinson2014common}. Pessimistically, we assume observations would be limited by the formation of high altitude clouds in slowly rotating habitable zone planets \citep{kopparapu2017habitable}. For emission, we do not include the presence of clouds because their reduced optical depths have less of an impact on the spectrum \citep{fortney2005effect}. \subsection{Information Content Theory} \citet{batalha2016information} details our information content methodology. We describe the relevant sections here. The information content is a quantity that describes how the state of knowledge of a system has increased (relative to the prior) by making a measurement \citep{shannon2001mathematical, line2012information}. Here, we are specifically interested in the posterior covariance matrices, $\mathbf{\hat{S}}$, which describe the uncertainties on each of the state vector parameters. We assume that our atmospheric state is described by $T$, $\xi_{H_{2}O}$, $\xi_{CO_{2}}$, $\xi_{CH_{4}}$, $\xi_{N_{2}}$, and $\times R_p$. $T$ is the temperature above the tropopause which we set to the planet's $T_{eq}$, $\xi_i$ is the concentration of the $i$th gas, and $\times R_p$ is a factor to account for the radius, arbitrarily set at 10 bars.
$\mathbf{\hat{S}}$ can be computed as \begin{equation} \mathbf{\hat{S}} = (\mathbf{K^TS_e^{-1}K + S_a^{-1}})^{-1} \end{equation} where $\mathbf{K}$ is the Jacobian matrix, which describes the model sensitivity, $\mathbf{S_a}$ is the \emph{a priori} covariance matrix, and $\mathbf{S_e}$ is the error covariance matrix. We assume that the observer has no prior information so that $\mathbf{K^TS_e^{-1}K \gg S_a^{-1}}$. This ensures that our calculations are driven by the model sensitivity and the \emph{JWST} data, not the prior. \begin{figure*}[ht] \centering \includegraphics[angle=0,width=0.7\linewidth]{fig1.pdf} \caption{Curves show the spectral precision on the planet spectrum as a function of J magnitude at various wavelengths. Each simulation is composed of 2 hours of total observing time. Colored opaque lines represent the \emph{Reset} and \emph{read} mode and transparent lines are for a 100\% efficient observation. The dashed vertical lines represent the J-magnitude of TRAPPIST-1, for reference. In the $8\mu$m panel, the grey line represents the pure-photon limited precision. All simulations are binned to R=100 for comparison. \textbf{Main Points}: 1) Low observing efficiency limits precision for bright targets. 2) NIRSpec Prism becomes dominated by read noise at longer wavelengths. 3) MIRI is background limited past J=10\label{fig:noise}} \end{figure*} \section{Results \& Discussion}\label{sec:results} \subsection{Instrument Systematics \& Observing Strategies} \label{sec:noise} \emph{JWST} nondestructively reads charge in a pixel as it accumulates during an integration. Each integration begins with a \emph{reset} frame, during which pixels in the subarray are reset one at a time. For time series science, each downlinked \emph{group} consists of one \emph{frame}, the result of reading each pixel once. The first \emph{frame} after a \emph{reset} will usually be used to establish a zero point, or bias level. Subsequent \emph{frames} will read the accumulated science photons. The efficiency of an observation is then $\frac{n-1}{n+1}$, where $n$ is the number of groups. If the bias level is known a priori, then efficiency is $\frac{n-1}{n}$. The MIRI detectors are somewhat more efficient at small $n$ because they can read and then reset pixels in a single frame time. For bright targets near the saturation point of the instrument, this readout pattern becomes inefficient. The opaque lines in Figure~\ref{fig:noise} show our results for the expected spectral precision for each mode as a function of J magnitude at key wavelengths (H$_2$O at 1.4 $\mu$m \& 2.5 $\mu$m, CH$_4$ at 3.3 $\mu$m, CO/CO$_2$ at $\sim$4.5$\mu$m). Each opaque curve follows the expected photon-limited relationship until the lower limit in magnitude, where the detectors begin to approach saturation. This flattening out in precision is the result of decreasing the number of groups within an integration for brighter targets, which decreases the observing efficiency to 33\% for the brightest targets. The transparent lines in Figure \ref{fig:noise} show the expected spectral precision if \emph{JWST} had 100\% observing efficiency at all magnitudes within the saturation limits of the instrument. For temperate terrestrial planets, which require very high spectral precision, we need to determine strategies to increase this observing efficiency for targets near the saturation limits of a given mode.
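As a rough illustration (example group numbers only, not tied to a specific observing mode), the duty-cycle penalty implied by these expressions can be tabulated directly:
\begin{verbatim}
# Illustrative sketch of the observing-efficiency estimates quoted above.
def efficiency(n_groups, bias_known=False):
    if bias_known:
        return (n_groups - 1) / n_groups      # zero point known a priori
    return (n_groups - 1) / (n_groups + 1)    # first frame sets the bias

for n in (2, 3, 10, 100):
    print(n, round(efficiency(n), 2), round(efficiency(n, True), 2))
# n=2 (brightest targets) -> 0.33; n=3 -> 0.50; the efficiency only
# approaches 1 for faint targets that allow many groups per integration.
\end{verbatim}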
\begin{figure*}[ht] \centering \includegraphics[angle=0,width=0.7\linewidth]{fig2.pdf} \caption{The top panel shows the presently supported readout pattern for Cycle~1. The middle and bottom panels show potential enhancements. Each of the three columns (separated by a dashed vertical line) represents one \emph{group} time. Blue always represents science time, yellow represents potential bias time. The main difference between the top and middle panel is the \emph{read} that occurs immediately before the \emph{reset} in the first column. See \S3.1 for a more thorough explanation. \textbf{Main Point:} In Cycle~1, readout patterns will limit the observing efficiency for targets near the saturation point of a particular instrument to 33\%. Enhanced readout patterns could change this to $\sim$100\% efficiency. \label{fig:rmode}} \end{figure*} Therefore, high efficiency readout patterns are under investigation for NIRISS and NIRSpec, shown in Figure \ref{fig:rmode}. Each of the three columns in Figure \ref{fig:rmode} shows what takes place during a single pixel's read and/or reset. As stated above, a full \emph{group} is the result of clocking through all the pixels one time. The top panel is the currently supported readout pattern, and the bottom two show potential enhancements. In the \emph{Read-Reset and Read} pattern, the position of the reset frame is moved to take place directly after an individual pixel is read (middle panel). Reading each pixel immediately \emph{before} the reset measures all accumulated charge and yields 100\% efficiency, but only if the reset level is known. If the reset level is not known, the \emph{Read-Reset-Read} readout pattern would have to be implemented to yield $\sim$100\% efficiency (bottom panel). If these new readout patterns are implemented, they will greatly increase the observing efficiency for targets near the saturation limits of specific high-precision time-series modes. However, these enhanced readout patterns are not currently slated to be available for observations in JWST's Cycle~1. TRAPPIST-1, J=11.3, saturates the NIRSpec Prism after the third group. Therefore, an observation with no saturation leads to a $\frac{3-1}{3+1}\sim0.50$ observing efficiency across the entire detector. Using the NIRSpec Prism is advantageous because it yields a 1-5 $\mu$m spectrum in a single transit. For targets not accessible with the Prism, observers will have to decide if they want to split up observations between modes to yield an entire 1-5 $\mu$m spectrum, or use all their time in a single observing mode. One additional caveat with the NIRSpec Prism, shown in Figure \ref{fig:noise}, is its decrease in precision toward longer wavelengths. At wavelengths less than 2.5 $\mu$m, the NIRSpec Prism attains higher spectral precision than other available instruments at identical 2MASS J magnitude. The NIRSpec Prism becomes less favorable at longer wavelengths, though, because the stellar SED dramatically drops off, causing read noise to dominate over photon noise, and because the efficiency of the observation is limited by saturation at the shorter wavelengths. To combat both these sources of decreased precision, we propose an observing strategy to partially saturate the detector. \emph{JWST} acquires sample-up-the-ramp data and will return a data product for every single group within the integration. Therefore, unless the observation is saturated at the end of the second group, a variable number of groups can be used to extract the full wavelength range, regardless of saturation.
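One way to picture this strategy is as simple per-wavelength bookkeeping: for each detector column, only the groups accumulated before the pixel crosses its full-well limit are kept. The sketch below is a toy illustration of this bookkeeping; the flux values, full-well depth, and group count are placeholders, not NIRSpec Prism numbers.
\begin{verbatim}
import numpy as np

def usable_groups(flux_per_group, full_well, n_groups):
    """Count, per wavelength column, how many groups accumulate before
    the pixel crosses the assumed full-well (saturation) limit."""
    ramp = np.cumsum(np.tile(flux_per_group, (n_groups, 1)), axis=0)
    return (ramp < full_well).sum(axis=0)

# Illustrative only: the bluest column saturates quickly, the reddest never does.
flux = np.array([4.0e4, 2.0e4, 5.0e3, 1.0e3])             # e-/group at four wavelengths
print(usable_groups(flux, full_well=6.0e4, n_groups=6))   # -> [1 2 6 6]
\end{verbatim}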
Figure \ref{fig:prism} shows the spectral precision of this variable group observing strategy (orange), versus a non-saturated \texttt{PandExo} run with 3 groups (blue) for the TRAPPIST-1 system. In this particular phase space (targets with low-efficiency observations), this proposed strategy has the potential to increase the precision at longer wavelengths by a factor of 2 for the NIRSpec Prism. Note that this observing strategy has not yet been formally introduced into the \texttt{PandExo} package. In order to obtain emission spectroscopy of cool planets, observations in the mid-IR, using MIRI LRS, will be required. There are currently no plans to increase the observing efficiency of MIRI LRS near the saturation limit, though there are other ways to increase observing efficiency that are not specific to the exoplanet case. Moreover, because the MIRI detectors are read out more efficiently than the near-IR detectors, this is less problematic (see Figure \ref{fig:noise}). One caveat of MIRI is that observations become background limited past J=10, as seen by the grey photon-limited line in the MIRI LRS panel of Figure \ref{fig:noise}. Therefore, TRAPPIST-1 is not an ideal target to study emission spectroscopy of terrestrial exoplanets and it will be especially important for \emph{TESS} to detect planets with J$<10$ to optimize observations with MIRI LRS. \begin{figure} \centering \includegraphics[angle=0,width=3in]{fig3.pdf} \caption{A comparison of two different observing strategies with the NIRSpec Prism. The blue curve shows the result of a \texttt{PandExo} run with the number of groups determined by the ``optimize'' option (ngroup=3). The orange curve shows the result of ngroup=6 for the total exposure time. \textbf{Main Point:} Partial saturation can increase precision for NIRSpec Prism observations of bright targets. \label{fig:prism}} \end{figure} \subsection{Transmission Analysis} The relationship between the uncertainties on the retrieved parameters and the number of transmission spectra needed is similar for each planet in the TRAPPIST-1 system. The differences between planets are driven by differences in temperature and gravity, which set the strength of the molecular features through the scale height, $kT/\mu g$. We choose TRAPPIST-1f to illustrate our results. TRAPPIST-1f is the outermost habitable zone planet (219 K), and has a gravity similar to that of Earth (8.33 m/s$^2$). \begin{figure*}[ht] \centering \includegraphics[angle=0,width=5in]{fig4.pdf} \caption{Uncertainties on each state vector parameter calculated from the information content analysis for TRAPPIST-1f. Each transit observation consists of a NIRSpec Prism observation with total observation time = 4 $\times t_{14}$. Purple curves correspond to an H$_2$O-rich atmosphere with 0.01\% of CO$_2$, CH$_4$, and N$_2$. Orange curves correspond to an N$_2$-rich atmosphere with 0.01\% of CO$_2$, CH$_4$, and H$_2$O. Grey curves correspond to a CO$_2$-rich atmosphere with 0.01\% of N$_2$, CH$_4$, and H$_2$O.
The upper and lower bounds of the curve are set by the optimistic (P=0.1 bar) and pessimistic (P=0.01 bar) specifications for the grey cloud top pressure. \textbf{Main Points:} 1) The greatest gain in information occurs in the first 10 transits, 2) The dominant molecular absorber is detected after the 10th transit in all cases, 3) Transit transmission spectroscopy will not constrain temperature profiles \label{fig4}} \end{figure*} Figure \ref{fig4} shows the expected uncertainty on the atmospheric parameters of TRAPPIST-1f after each transit, if it were observed with NIRSpec Prism's partial saturation strategy in transmission. The upper and lower bounds of the curve are set by the pessimistic (P=0.01 bar) and optimistic (P=0.1 bar) specifications for the grey cloud top pressure, respectively. When there is no information content in the observation, the uncertainty approaches the prior (12 dex for abundances and 1000 K for temperatures). We do not show the results for detecting N$_2$ because it is void of molecular features unless temperatures are very high \citep{schwieterman2016identifying}. Therefore, it cannot be directly detected or constrained. Temperature is difficult to constrain in transmission spectroscopy, regardless of atmospheric composition. Meaningful constraints ($<\pm 50$~K) are only achievable with 10$+$ transits. For abundances, in all our cases, the dominant \emph{absorber} is constrained by the 10th transit. Therefore, for the TRAPPIST-1 system, if no atmospheric signals are detected by the 10th transit, it is unlikely that co-adding more transits would unveil new information. However, if the dominant absorber is detected by the 10th transit, additional observations could reveal trace gases in the atmosphere at the 0.01\% level. Our results demonstrate the high potential the NIRSpec Prism has for detecting a wide variety of molecular features. H$_2$O has dominant absorption features from 1-2$\mu$m, CH$_4$ has dominant absorption features from 3-4 $\mu$m, and CO$_2$ has dominant absorption features from 4-5 $\mu$m. In Cycle~1, it is important to survey this entire parameter space. For targets too bright to be accessible with NIRSpec Prism, a combined NIRISS SOSS and NIRSpec G395H observation yields higher information content results, despite the lower precision that comes from splitting time between two modes. \begin{figure*}[ht] \centering \includegraphics[angle=0,width=5in]{fig5.pdf} \caption{Same as Figure \ref{fig4}, but for emission spectroscopy with MIRI LRS. Here, the upper and lower bounds of the curve are set by the optimistic (T=400 K) and pessimistic (T=200 K) specifications for the surface temperature at 100 and 1 bar, respectively. \textbf{Main Point:} Emission spectra of temperate planets with \emph{JWST}'s MIRI LRS are unlikely to provide strong atmospheric constraints of truly temperate planets. Future facilities, such as the \emph{Origins Space Telescope} concept, could provide the required precision and wavelength coverage needed to improve these constraints. \label{fig5}} \end{figure*} \subsection{Emission Analysis} For emission we also show the case of TRAPPIST-1f. Figure \ref{fig5} shows the constraints on the atmospheric state vector parameters. Emission spectroscopy is more sensitive to the atmospheric temperature structure than transmission spectroscopy. However, the uncertainties on temperature for emission spectroscopy (Figure \ref{fig5}) are comparable to those of transit transmission (Figure \ref{fig4}).
This is because the \emph{JWST} MIRI LRS data does not have sufficient precision to detect the small signal that comes from the emission of temperate planets at 5-12$\mu$m. The constraints on abundances are also highly driven by the prior (12 dex). No molecules are constrained to within 1 dex in fewer than 50 transits. Detecting molecular features in emission spectroscopy of truly temperate exoplanets will be very difficult with \emph{JWST}'s MIRI LRS. This conclusion is also supported by the analysis of \citet{morley2017observing}, which suggests photometry of temperate planets as an alternative to emission spectroscopy. \section{Conclusions} \label{sec:conc} Here, we used \texttt{PandExo} in combination with an information content analysis to determine optimal strategies for constraining the atmospheres of the planets in the TRAPPIST-1 system, with the ultimate goal of guiding observations of temperate terrestrial planets. We summarize our conclusions below: \begin{itemize} \item Bright targets near the saturation point of the specific instrument mode have low observing efficiency. This is especially true of observations of the TRAPPIST-1 system with NIRSpec Prism, which is a favorable mode because of its ability to get a complete spectrum (1-5 $\mu$m) in one transit. The Prism is also dominated by read noise at longer wavelengths because of this low efficiency and because the stellar SED drops towards 5 $\mu$m. While high efficiency read modes are being investigated, we outline a partial saturation strategy for the NIRSpec Prism that can increase observing efficiency and decrease the effect of read noise at long wavelengths. \item Using a partial saturation strategy with the Prism, we will be able to detect the dominant atmospheric absorber of temperate terrestrial planets by the 10th transit. If we do not detect anything by the 10th transit, it is not likely that co-adding more transits would unveil more information. If we do detect the dominant absorber by the 10th transit, more transits could reveal trace gases at the 0.01\% level. \item Emission spectroscopy with MIRI LRS is unlikely to provide strong atmospheric constraints of truly temperate (surface temperatures of 200-400 K) planets. Future missions/facilities, such as the \emph{Origins Space Telescope} concept, could provide the wavelength coverage and precision needed to place robust constraints on the atmospheres of temperate terrestrial worlds in the mid-IR. \end{itemize} \acknowledgments We thank the members of STScI's STARGATE team for their comments and discussions. Specifically, we thank Hannah Wakeford, Jonathan Fraine, and Giovanni Bruno for their helpful feedback. We also thank Eddie Bergeron for investigating more efficient detector readout modes for bright targets. NB, NK, and JV acknowledge support from Grant NNX15AC86G from NASA/GSFC for the JWST Telescope Scientist Investigation. Facilities: \facility{JWST}
\section{Introduction} \label{sec:introduction} The coupled evolution of solid-fluid interfaces due to dissolution, melting, and erosion is important in shaping geophysical features including mountains, icebergs, caverns, aquifers, and petroleum reservoirs~\cite{Fredd1998,Ortoleva1994,Malin2000,Meakin2010}. While typically slow, such evolution can lead to the dramatic appearance of sinkholes, avalanches, and other modes of rapid failure. Because of their complexity and ubiquity, both on Earth and on other celestial bodies, there is considerable need to understand the evolution of such features from a mechanistic perspective, complementing field observations, towards developing quantitative models~\cite{Malin2000,Kudrolli2016,Jerolmack2019}. Irregular surface patterns in salt crystals dissolving from below in aqueous solutions have been observed and analyzed with turbulent boundary layer models~\cite{Thomas1968,Sullivan96}. Differential growth resulting in up-facing conical cavities created within salt deposits, shaped by dissolution-driven internal flows, has been investigated in the context of land subsidence and structure collapse~\cite{Gechter2008}. Opacity of the medium only allowed intermittent and indirect inferences to be drawn in these studies. Recent studies of two-phase model systems with rapidly dissolving non-crystalline hard candy have illuminated the formation of pinnacles in karsts, furrows in sandstone, and ice scallops~\cite{Huang2015,Nakouzi2015,Cohen2016,Huang2020,Wykes2018,Pegler2020,Cohen2020,bushuk_holland_stanton_stern_gray_2019}. Significant progress has been made in describing the observed outer envelopes of the structures, with counter-intuitive growth of singularities versus blunting of sharp tips depending on the shapes of the initial surfaces and flow conditions~\cite{Pegler2020}. While the occurrence of surface cavities was noted as an undesirable outcome in these studies, it was not explored. Moreover, cavity growth is of interest in an even broader class of systems, as in fluid-mediated pitting corrosion which leads to deep isolated holes and failure in metal structures. Here, we focus on the evolution of vertical solid-liquid interfaces as a result of coupled fluid flow and dissolution of a solid. Figure~\ref{fig:alcove} shows motivational examples of alcoves - cave-like features that occur above ground on cliffs - found in the American Southwest and thought to result from fluid flow and weathering. Dissolution of calcium carbonate, and the repeated freezing and expansion cycles which weaken and loosen pieces, are known to lead to alcoves within the cliffs as they erode. However, the minimal conditions under which they can occur and the detailed mechanism by which they evolve, beyond these general descriptions, remain unclear. Therefore, we investigate if alcoves can be produced by using a highly simplified two-phase model composed of a homogeneous noncrystalline solid and a dissolving fluid phase in a gravitational field. The relatively high transparency and dissolution speed of the medium enable us to obtain detailed information on the evolution of the shapes, with complementary optical and x-ray measurements, and on the flow fields, in a matter of hours to a day. Specifically, we show that perturbations in a vertical dissolving interface develop into alcoves shaped by the interactions with the surrounding fluid as a result of a density inversion instability at the underside of surface perturbations.
In contrast, up-facing horizontal surfaces are observed to dissolve more uniformly and much more slowly, even when surface indentations and material defects are present, under otherwise similar conditions. \begin{figure} \begin{center} \includegraphics[width=.9\linewidth]{Fig1.pdf} \caption{Examples of alcoves observed in (a) Grand Staircase-Escalante National Monument, Utah, and (b) Montezuma Castle National Monument, Arizona which has pre-Columbian dwellings built within. The cliffs are approximately 20 and 30 meters high for scale, respectively. Photos taken on June 21, 2019 and June 24, 2019, respectively. } \label{fig:alcove} \end{center} \end{figure} \section{Materials and Methods} The solids used in our study are prepared with 5 parts sucrose, 3 parts light corn syrup, and 2 parts distilled water by volume, similar to recipes in other studies~\cite{Huang2015,Huang2020}. These ingredients are stirred and heated together, while keeping the temperature at approximately $160^\circ$C, {\color{black} until} 7/8 of the original volume remains after evaporation, corresponding to a density of $1.3 \pm 0.1$ g/cm$^3$. The {\color{black} hot liquid} is poured into a 3D printed rectangular transparent mold with internal width $W$, length $L$, and height $H$. Typically, we use width $W = 4$\,cm, length $L=6$\,cm, and height $H=0.3$\,cm to standardize our measurements, and larger molds to check any mold dimension dependence. The {\color{black} hot liquid} solidifies with a flat surface in gravity, with prescribed surface indentations imposed as needed during the final stages of cooling as the glass transition is approached. Air bubbles from submicron to a few millimeters in size generated during the mixing and heating process can be trapped in, and on, the surface of the {\color{black} hot liquid} as it cools. The larger visible bubbles were eliminated by trial and error, and degassing as needed. Surface indentations are also added if desired as the liquid hardens, using solid impalers with prescribed shapes. Using this preparation method, we obtain a homogeneous noncrystalline solid with a uniform shade of color, and density $\rho_s = 1.41 \pm 0.06$\,g/cm$^3$. The mold with the solid within is then placed in a transparent acrylic rectangular container filled with distilled water and oriented as desired with one surface exposed. A surfactant is added to reduce air bubbles in the fluid. A sufficiently large bath is used so that the density of the bath solution, even after all the solid dissolves, varied {\color{black} by} less than 0.5\%. The experiments were performed in a laboratory with temperature $T$ in the range $21^\circ$C to $25.1^\circ$C as noted. The solid is imaged with a megapixel Pixelink color camera through the sidewalls of the container by back-lighting with a uniform LED panel. The image intensity is mapped to the thickness of the solid enabling us to obtain a map of the surface dynamically (see Appendix~\ref{sec:img}). The optical measurements are also complemented with snapshots of the solid with a Varian Medical Systems micro x-ray CT instrument. This requires us to pull the solid and the mold out of the bath, and was mostly conducted to illustrate and check the overall shape of partially dissolved solids. \section{Observation of Alcove Growth} \begin{figure} \begin{center} \includegraphics[width=17cm]{Fig2.pdf} \caption{(a) Image of solid sugar block with two circular indentations contained within a transparent mold. ($L=14$\,cm; $W=9$\,cm; $H=1$\,cm).
Several other small and large defects are also made visible by reflected light. (b) Schematic cross-section of the sugar block contained within the mold placed vertically in an aqueous bath, and the Cartesian coordinate system. (c) The surface shows development of a number of alcoves after being immersed for an hour. (d) A surface rendering obtained with x-ray scanning displayed at a $60^\circ$ angle. The alcove shown corresponds to the one marked with the dashed box, corresponding to a 3\,cm by 3\,cm area, in (c). (e) The same surface plot shown from the side. } \label{fig:intro} \end{center} \end{figure} Figure~\ref{fig:intro}(a) shows an example of a surface prepared with two large circular indentations besides a number of smaller surface imperfections formed during the molding process. The solid and the mold are immersed while oriented vertically in the center of a bath container for 60\,minutes as shown schematically in Fig.~\ref{fig:intro}(b). An image of the partially dissolved solid where a number of pits or alcove-like features have appeared is shown in Fig.~\ref{fig:intro}(c). Comparing Fig.~\ref{fig:intro}(a) and Fig.~\ref{fig:intro}(c), the largest alcoves correspond to the two initial circular indentations, but many other smaller ones are also observed. A surface rendering of a typical alcove obtained with x-ray scanning - corresponding to the region indicated by Fig.~\ref{fig:intro}(c) - is shown in Fig.~\ref{fig:intro}(d) and (e) as viewed at a 60$^\circ$ angle and from the side, respectively. A sharp transition from the vertical interface to the ceiling can be observed, whereas the floor is strongly sloped and meets the vertical interface with a further gradual change of slope. The alcoves are wider near the top and appear conical below, giving rise to an overall inverted triangle appearance with flat ceilings, reminiscent of the alcoves seen in Fig.~\ref{fig:alcove}. \begin{figure} \begin{center} \includegraphics[width=18cm]{Fig3.pdf} \caption{(a) The alcove outline and vertical mid-profile show the evolution of the ceiling and floor. {\color{black} The outline corresponds to a contour $h = 0.1H$ from the vertical surface, and its roughness is predominantly due to imaging artifacts. } (b) The average change in ceiling height as a function of time based on 5 spontaneously formed alcoves under similar conditions. The ceiling is measured to recede upward at a constant rate, ranging between $\dot{\eta}$ = 6.0 mm/hr and $\dot{\eta}$ = 10.7 mm/hr, until the back wall is reached (vertical dashed line), before slowing down. The measured rate averaged over the 5 examples, $\dot{\eta}$ = 8.4 mm/hr, is consistent with the estimated rate of 9.4 mm/hr from Eq.~(\ref{eq:rrates}) corresponding to a solutal Rayleigh-B\'enard instability.} \label{fig:time} \end{center} \end{figure} From a snapshot alone it is difficult to match each of the alcoves with a defect, but it is evident that defects play an important part in their formation. Therefore, we performed complementary experiments with a 6 cm by 4 cm by 0.3 cm solid with an initial 0.1\,cm fluid-filled gap between the dissolving interface and container boundary to enable clear visualization. Figure~\ref{fig:time}(a) shows the evolution of the outline and vertical transect of an alcove observed using optical imaging at various stages of development. {\color{black} The calibration used to map the image intensity to the local surface height can be found in Appendix~\ref{app:smooth}.} The corresponding movie can be found in the supplementary documentation~\cite{sup-doc}.
Here, an essentially flat surface is observed to develop an alcove which grows longer, wider, and deeper until it encounters the back wall of the mold. Even beyond that point, the alcove is observed to continue to grow longer and wider. Thus, the alcove is observed to evolve {\color{black} to have an inverted triangle appearance} with a sharp transition to the horizontal roof from the vertical face, and a smooth sloping floor which gradually transitions to the vertical surface, similar to that shown in Fig.~\ref{fig:intro}(d,e). This evolution indicates that all the alcove-like features observed in Fig.~\ref{fig:intro}(c) are similar in nature, and that the smaller ones start to form later as the dissolving interface reaches small air bubbles trapped inside the dissolving solid. While some surface indentations develop into alcoves, adjacent vertical regions become optically smooth compared to even the initial surface, which may have small pits and grooves formed during the cooling process. Thus, it appears that not all surface perturbations become unstable, and the overall appearance of the isolated alcoves is different from the irregular scallops which develop over a surface melting or dissolving from below~\cite{Sullivan96,Meakin2010}. Further comparing the cross-sections at different time intervals, we observe that the ceiling recedes the fastest of all the surfaces. We measure the change in the ceiling height as a function of time $t$ for 5 different examples of spontaneously formed alcoves in our experiments (see Fig.~\ref{fig:time}(b)). While there is some variation, the time dependence is more or less similar, with essentially linear growth before a slowing down once the alcove reaches the back wall of the mold {\color{black} at $16.3 \pm 0.3$\,min in the various cases}. We thus postulate that alcoves start to develop at inverted regions corresponding to indentations, or bubble defects with a critical size, whereupon the ceilings recede rapidly and spread horizontally as the solid dissolves. \section{Boundary Layer Analysis} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{Fig4.pdf} \caption{(a) Image of plumes descending from the ceiling of an alcove ($t= 21$\,min). {\color{black} A few stray air bubbles which appear as small circles are also visible. The scale bar corresponds to 5\,mm. } (b) Intensity profiles as a function of horizontal distance corresponding to the dashed box shown in (a), with locations of plumes indicated by arrows. The average plume spacing is observed to be approximately 2\,mm, consistent with the critical wavelength $\lambda_c$ estimate corresponding to the solutal Rayleigh-B\'enard instability. } \label{fig:plume} \end{figure} When a solid is immersed in a solvent bath, a concentration gradient of the solute develops at the interface, forming a boundary layer, which grows by diffusion~\cite{Garner1961}. This boundary layer is stable in gravity when the dissolving interface is below the solution and the solution density increases with solute concentration. If the dissolving surface is tilted or of finite size, boundary layer flows can develop along the slope under the action of gravity, but remain attached to the solid interface. Such conditions shape the dissolving body at large scales~\cite{Wykes2018,Pegler2020,Pegler2021}. In contrast, if the solid is above the solvent bath, a density inversion occurs rapidly as the boundary layer becomes denser than the solvent below.
Thus, gravity can destabilize the boundary layer, generating descending plumes similar to the miscible Rayleigh-Taylor instability~\cite{Chandrasekhar}. {\color{black} Because diffusion and the fluid viscosity can oppose the destabilizing effect of gravity, the concentrated boundary layer becomes unstable only when its thickness exceeds a critical value.} Then, the instability is more accurately identified with the solutal Rayleigh-B\'enard instability~\cite{Sullivan96,Philippi2019}. The boundary layer is continuously regenerated by dissolution between the emission of successive plumes, so that its thickness remains on average close to the critical value, as shown experimentally~\cite{Sullivan96,Wykes2018,Cohen2020} and numerically~\cite{Philippi2019}. As we argue and demonstrate in the following, this analysis, developed for the dissolution of inverted surfaces, can be applied here to the ceiling of the alcoves. While it is not possible for us to directly show the boundary layer, we can visualize the descending plumes when they can be contrasted with a bright background. This is possible by using an alcove with a clear back wall corresponding to that of the mold. As shown in Fig.~\ref{fig:plume}(a), faint plumes descending from the ceiling can be directly observed in the closeup image and in the associated movie in the supplementary documentation~\cite{sup-doc}. Plotting the average intensity across the region marked by the dashed box in Fig.~\ref{fig:plume}(a), we observe 7-8 clearly visible plumes, corresponding to an approximate spacing of 2\,mm. Similar plumes have been observed descending from spherical candy blocks as well, giving rise to nonuniform dissolution of the solid with a smooth top-half surface, and a flatter bottom with sharp edges due to flow separation~\cite{Wykes2018}. These plumes rapidly take away the dense solution from the boundary, allowing fresh fluid to enter the region and further dissolution to occur. Thus, this instability allows a relatively rapid recession of inverted dissolving boundaries relative to other orientations, as noted in Fig.~\ref{fig:time}(a). The critical thickness of the concentration boundary layer $\delta_c$ that can cause the solutal Rayleigh-B\'enard instability between a rigid boundary and a stress-free boundary is given by~\cite{Sullivan96,Wykes2018}: \begin{equation} \delta_c = \left(\frac{{\rm Ra}_c \, \mu \, \rho_f \, D}{g \, \Delta\rho}\right)^{1/3}, \label{Eq1} \end{equation} where Ra$_c = 1101$ is the critical Rayleigh number corresponding to rigid-free mixed boundary conditions~\cite{Chandrasekhar}, $g = 9.8$\,m/s$^2$ is the gravitational acceleration, $\Delta \rho$ the density difference across the layer, $\mu$ the kinematic viscosity, $\rho_f$ the density of the far-field fluid, and $D$ the molecular diffusion constant of the dissolved species. During the entire dissolution process, the thickness of the concentration boundary layer remains close to its critical value $\delta_c$ due to the emission of the plumes. Because the solid-fluid material system is similar to those used in Ref.~\cite{Wykes2018}, we follow estimates there that $\Delta \rho = 300$ kg/m$^3$, and $D = 4.3 \times 10^{-10}$\,m$^2$/s. Then, assuming a viscosity of the sugar solution averaged between the saturation value ($\mu = 7.7 \times 10^{-4}$\,m$^2$/s) next to the solid, and water ($\mu = 1.0 \times 10^{-6}$\,m$^2$/s) outside the boundary layer, we find $\delta_c \approx 0.40$\,mm.
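For reference, this estimate can be reproduced directly from the quantities quoted above; the short sketch below simply evaluates Eq.~(\ref{Eq1}) with those values, taking the far-field fluid to be fresh water and using the arithmetic mean of the two viscosities, as in the text.
\begin{verbatim}
# Boundary-layer estimate from Eq. (1); values as quoted in the text.
Ra_c  = 1101                     # critical Rayleigh number, rigid-free boundaries
g     = 9.8                      # m/s^2
drho  = 300.0                    # kg/m^3, density difference across the layer
rho_f = 1000.0                   # kg/m^3, far-field (fresh water) density
D     = 4.3e-10                  # m^2/s, diffusivity of dissolved sucrose
nu    = 0.5 * (7.7e-4 + 1.0e-6)  # m^2/s, mean of saturated-solution and water values

delta_c = (Ra_c * nu * rho_f * D / (g * drho)) ** (1.0 / 3.0)
print(delta_c * 1e3)             # ~0.40 mm
print(5 * delta_c * 1e3)         # critical wavelength ~2 mm (discussed below)
\end{verbatim}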
The Schmidt number is used to estimate the relative magnitude of momentum diffusivity to mass diffusivity, and is given by ${\rm Sc} = \mu/D$. {\color{black} Substituting in the viscosity corresponding to water to obtain a lower bound,} we have Sc $\approx 10^3$. Hence, viscosity dominates diffusion and the solute can be expected to be confined within the boundary layer. Under these conditions, the critical wavelength of the instability is $\lambda_c \approx 5\, \delta_c = 2$\,mm. This estimate is remarkably consistent with the mean plume spacing seen in the example shown in Fig.~\ref{fig:plume}. Thus, $\delta_c$ can provide a critical length scale for an inverted region needed to trigger a flow instability which can lead to enhanced dissolution. Conversely, smaller perturbations may not be sufficient for the instability to develop and would thus smooth out over time as the surface dissolves stably. The recession rate in terms of the location of the interface $\eta$ relative to the initial surface is obtained by writing two conservation laws at the interface. The conservation of mass gives: $$-\rho_s \frac{d\eta}{dt}=\rho_l\,\left(\mathbf{u_i}\cdot \mathbf{n} - \frac{d\eta}{dt}\right)\, ,$$ with $\mathbf{u_i}$ the fluid velocity at the interface, $\mathbf{n}$ a unit vector normal to the interface, and $\rho_l$ the density of the liquid at the interface. Then, the conservation of the solute gives: $$\rho_s \frac{d\eta}{dt}=\left(\frac{d\eta}{dt}-\mathbf{u_i}\cdot \mathbf{n} \right)\, c_i + D\,\mathbf{\nabla} c \cdot \mathbf{n}\, , $$ where $c$ is the mass concentration of the solute in the solution, and $c_i$ is the concentration of the solute at the interface. The last term corresponds to the diffusive flux at the interface according to Fick's law. For fast dissolving species, the interface concentration $c_i$ is, to a good approximation, very close to the saturation concentration $c_{sat}$~\cite{Philippi2019}, and thus the liquid density $\rho_l$ is also close to the saturation density $\rho_{sat}$. The diffusive flux $D\,\mathbf{\nabla} c \cdot \mathbf{n}$ can then be approximated by $D\,c_{sat}/\delta$ if the bath can be approximated as fresh water. By combining these last two equations (eliminating $\mathbf{u_i}\cdot \mathbf{n}$ between them), we obtain: \begin{equation} \frac{d\eta}{dt} = \frac{D \, c_{sat}}{\rho_s \, \delta \, (1-c_{sat}/\rho_{sat})}\,, \label{eq:rrates} \end{equation} where the saturation concentration of sucrose is $c_{sat} =0.67$, $\rho_s=940$\,kg/m$^3$, and the liquid density at saturation $\rho_{sat} = 1300$ kg/m$^3$~\cite{Wykes2018}, while starting with distilled water as the solvent. Plugging in the diffusivity and concentrations corresponding to our experiments, and using $\delta = \delta_c$, we estimate the recession rate implied by the solutal Rayleigh-B\'enard instability to be $\dot{\eta} = {d\eta}/{dt} = 9.4$\,mm/hr. This {\color{black} estimated value of $\dot{\eta}$} is remarkably consistent with the measured rate $\approx 8.4$\,mm/hr at which the ceiling recedes upward in the examples shown in Fig.~\ref{fig:time}(b). The agreement observed with the plumes and the recession rate thus clearly identifies the nature of the instability leading to alcoves as being consistent with the solutal Rayleigh-B\'enard instability in our experiments. \section{Alcove Evolution} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{Fig5.pdf} \caption{(a) Contour corresponding to $h = 0.1H$ to show the progression of the alcove opening shape and vertical cross-section.
{\color{black} The angles $\phi = - 20 \pm 5^\circ$ and $20 \pm 5^\circ$ at which triangular corners develop are also shown.} (b) The area contained within the contour for the case shown vertically, and when a similar initial interface is placed horizontally (see Appendix~\ref{sec:hori}). Dissolution progresses much more rapidly when the interface is vertical. {\color{black} A fit to Eq.~(\ref{eq:area}) with $K = 1.647$, 95\% confidence bounds (1.609, 1.685), and $R_o = 6.17$\,mm is observed to capture the overall increase in alcove opening area in the vertical case surprisingly well, considering that a circular opening is assumed}. } \label{fig:vertical} \end{figure} {\color{black} Having established predictions for} the conditions under which the alcoves initiate, we next examine the evolution of the alcove opening shape. We simplify and standardize the surface perturbations that lead to alcoves by starting with a cylindrical cavity of radius $R_o = 5$\,mm in the center of an $L = 60$\,mm by $W = 40$\,mm by $H = 3$\,mm mold, while minimizing the bubble defects in the solid. We place the vertical interface a distance $d_{gap} = 1$\,mm parallel to a sidewall of the bath container to simplify imaging, and distilled water is then added to start the dissolution. As long as fluid is allowed to pass across, and $d_{gap}$ is greater than the width of the boundary layer flow, the relative location of the dissolving interface in relation to a sidewall is found to have no particular effect on the phenomena of interest. Figure~\ref{fig:vertical}(a) shows the contours in the $x-y$ plane corresponding to a depth of $0.1H$ from the vertical surface at 240\,s time intervals to quantify the development of the alcove opening. We observe that the ceiling of the cavity flattens over time, while the bottom tapers and becomes elongated as it grows to resemble the ones seen in Fig.~\ref{fig:intro}(c) and Fig.~\ref{fig:time}(a). Measured from the initial center ($x=y=0$\,cm), {\color{black} the alcove opening develops peaks or triangular corners at approximately $\phi = -20^\circ$ and $\phi = 20^\circ$ relative to the $x$-axis}. Thus, the shape grows rather symmetrically about the flow direction, considering the imperfections in the solids. Some imperfections in the initial circular shape and a few air bubbles can be noted in the images as dissolution progresses. But these imperfections appear not to affect the evolution of the shape downstream, or the overall development of the alcove. Fig.~\ref{fig:vertical}(b) shows the area of the alcove opening $A(t)$ enclosed by the contours corresponding to a small depth $h = 0.1H$ from the vertical surface. We observe that $A(t)$ grows increasingly rapidly with time. Further comparing the opening area of the horizontal cavity as a function of time in Fig.~\ref{fig:vertical}(b), we observe that it does not grow significantly over the same time interval (see Appendix~\ref{sec:hori}). While not circular, the alcove grows in all directions. {\color{black}The recession rate $\dot{\eta}$ given by Eq.~(\ref{eq:rrates}) is only strictly valid at the inverted alcove ceiling. Nonetheless, we assume that $\dot{\eta}$ provides a first order estimate of the magnitude of the gravity-driven dissolution rate all along the alcove contour (see Fig.~\ref{fig:time}(a)).
Then, the opening area of the alcove $A(t)$ as a function of time $t$ can be estimated as}, \begin{equation} A(t) = \pi (R_o + K \dot{\eta}\, t)^2, \label{eq:area} \end{equation} where $R_o$ is the radius of the indentation at time $t=0$, and $K$ is a geometric fitting parameter of O(1). The measured area of the alcove enclosed by the contours is plotted along with the fits to Eq.~(\ref{eq:area}) in Fig.~\ref{fig:vertical}(b). {\color{black} The fit appears reasonable with $K \approx 1.65$ and $R_o = 6.17$\,mm, implying that the average contour expansion speed is essentially constant. (A somewhat larger $R_o$ fit value is used compared to the initially prepared 5\,mm circular radius because the surface starts to dissolve as soon as the mold is placed in the water bath, before the imaging starts.) Averaging further over four examples of alcove growth, we find $K$ in the range 1.44 to 2.77. We also compare Eq.~(\ref{eq:area}) with the area evolution corresponding to the spontaneously formed alcove shown in Fig.~\ref{fig:time}(a). As can be seen from Fig.~\ref{fig:area2}, we observe good agreement with $K \approx 1.1$ and $R_o = 0$\,cm, since the initial size of a spontaneously formed alcove is negligible. Therefore, $\dot{\eta}$ given by Eq.~(\ref{eq:rrates}) gives a surprisingly good estimate of the growth of the alcove opening formed under different conditions, even though the hypothesis of uniform expansion does not apply to the contours at all times. A systematic overestimate may be expected because the model assumes that the alcove remains cylindrical, whereas the floors are sloped significantly, giving rise to a larger area when considering the contour given by $0.1H$.} One observes from the central cross-sectional view shown in Fig.~\ref{fig:vertical}(a) that the transition upstream from the vertical interface above the alcove to the ceiling remains sharp even as it retreats upwards. In contrast, the floor begins to slope and the edge rounds out, leading to a gradual transition downstream over time. \begin{figure} \begin{center} \includegraphics[width=7cm]{Fig6.pdf} \caption{{\color{black} The alcove opening area corresponding to the example shown in Fig.~\ref{fig:time}(a). Eq.~(\ref{eq:area}) with $K = 1.104$, 95\% confidence bounds $(1.073, 1.137)$, and $R_o = 0$\,mm is observed to describe the area evolution. } } \label{fig:area2} \end{center} \end{figure} The evolution of the floor slope may be viewed as being similar to that reported in the development of pinnacles~\cite{Huang2020,Pegler2020}, as when an attached flow shapes dissolution over large scales. There it was found that the pinnacle tip {\color{black} descends at a rate proportional to the curvature of the tip to the 1/4 power.} {\color{black} However, the overall geometries are different because of the convex tip shape in the case of the pinnacles, and the concave shape at the base of the alcove. A more recent theoretical study in two dimensions~\cite{Pegler2021} addresses the shape evolution of a soluble body with an attached boundary layer flow. The predicted shape evolution is qualitatively similar to the floor change plotted in Fig.~\ref{fig:vertical}(a) in the $y-z$ plane. Nonetheless, this model does not capture the triangular shape observed experimentally in the $x-y$ plane. } Assuming that the thickness of the solid block is the appropriate length scale in our system, we note a faster retreat of the floor downwards, as the solid thickness decreases (see Fig.~\ref{fig:vertical}(a)).
In contrast, the retreat of the floor downwards slows down and even stops over time in the example shown in Fig.~\ref{fig:time}(a), as the alcove depth increases starting from a small perturbation. \section{Gravity current and boundary separation} \label{sec:flow} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{Fig7.pdf} \caption{(a) An image of the partially dissolved surface contained within a 6\,cm by 4\,cm vertically oriented mold at time $t = 20$\,min with fluorescent tracers added to visualize flow. (b) A zoomed-in view of the fluid flow illustrated by using tracks of the tracers over a $\Delta t = 2$\,s time period. A gravity-driven downward current is observed along the dissolving surface. Circulation can be also observed inside the alcove. (c) A central alcove cross-section showing the circulating currents within the alcove. The flow is observed to detach from the edge where the vertical dissolving surface meets the alcove ceiling. (d) A magnified view of the region containing the counterclockwise rotating vortex observed near the ceiling, the flow reattachment, and the clockwise rotating vortex further downstream, with tracer tracks denoted in white, blue, and green, respectively. (e) A $\Delta t = 2$\,s long exposure image processed with maximum intensity and background subtraction {\color{black} reveals} the dominant flow path for the gravity current. The white bar in (a,b) corresponds to 1\,cm, and the width of the sugar block in (c-d) corresponds to 3\,mm for scale.} \label{fig:flow} \end{figure} We investigate the impact of the fluid flow on the shaping of the dissolving interface by adding micro-sphere tracers to the solution and using a laser sheet with wavelength 532\,nm to uniformly illuminate the area of interest. The latex spheres with diameter $d_{ms} = 15\,\mu$m and density $\rho_{ms} = 1.1$\,g/cm$^3$ fluoresce, and a bandpass filter is placed in front of the camera to block directly reflected light from entering the camera. This enables us to capture an image of the tracers with a higher degree of contrast compared to the background. Sucrose was added to the initial bath to match the tracer density while doing these measurements, to reduce any settling due to density differences between the tracers and the fluid. Figure~\ref{fig:flow}(a) shows an image from the front while de-focusing the laser sheet and illuminating the entire $d_{gap}=1$\,mm fluid gap between the dissolving interface and the front boundary from the side. Figure~\ref{fig:flow}(b) then shows streaks made by the tracers, obtained by adding successive images over a 2\,s interval using the maximum intensity function in ImageJ. The flow can be observed to be more or less uniformly downward outside the alcove but also converges weakly and then diverges as it passes the alcove. Further, flows in and out of the projected plane are also clearly visible inside the alcove. Thus, it can be noted that while the flow is symmetric about the vertical axis, the flow inside the alcove is asymmetric in the flow direction, and does not follow the interface near the ceiling. \begin{figure} \centering \includegraphics[width=\textwidth]{Fig8.pdf} \caption{(a) Average downward velocity of the gravity current along the dissolving vertical surface just above the alcove over time. The data is averaged over 5 examples and error bars correspond to the root mean square deviations from the mean. (b) The total length of the interface decreases over time as the solid dissolves, leading to the decrease in gravity current speed.
(c) The fraction of the dissolving solid above the alcove also decreases proportionately. (d) The thickness of the dissolving solid above the alcove corresponds to the step size through which the gravity current passes as it enters the alcove region. (e) The reattachment length $L_R$ decreases as the step size decreases over time. Inset: Sketch of the flow and the corresponding $L_R$. (f) The ratio of the reattachment length to the step size remains approximately constant as a function of the boundary layer Reynolds number calculated over the duration of the dissolution. Re$_{bl}$ remains moderate over the entire duration.} \label{fig:flow-analy} \end{figure} To obtain a full picture of the flow, we visualize a central vertical thin ($\approx 100\,\mu$m) section through the alcove, by swapping the positions of the camera and laser sheet to be from the side and front, respectively. A movie of the flow can be found in the supplementary documentation~\cite{sup-doc}. Figure~\ref{fig:flow}(c) then shows the flow visualized using the tracers over a 2\,s time interval in the orthogonal plane. A further zoomed-in side view of the region near the ceiling is shown in Fig.~\ref{fig:flow}(d), where the tracers tracked over time are also shown. It is evident that there is clear flow separation as the gravity-driven flow detaches from the surface at the edge where the vertical surface meets the ceiling of the alcove, before reattaching to the back wall some distance down (see Fig.~\ref{fig:flow}(e)). From these complementary views, we conclude that the flow has a rich three-dimensional structure with a predominantly downward flow which detaches at the alcove ceiling and reattaches inside the back of the alcove before flowing down and out. Further, two sets of circulating currents or vortices are present inside the alcove. One gives rise to a return flow upwards in front of the alcove face, away from the interface itself. The other, which is located between the ceiling and the point where the flow detaches from the ceiling, gives rise to flow directed outward along the alcove ceiling. The velocity $v_y$ along the interface was obtained by measuring the average length of the 10 longest streaks by eye from movies recorded at 24 frames per second, in one minute time intervals over the course of the entire interface dissolution, and further averaging over 3 trials. The flowing layer was observed to be right adjacent to the dissolving interface and is no more than a fraction of a millimeter in width. The measured speeds are plotted in Fig.~\ref{fig:flow-analy}(a), and are observed to be approximately 2.5 mm/s initially, decreasing over time. We attribute this decrease in speed to a finite size effect. As the interface dissolves, its total length inside the mold decreases after a few minutes because of the relatively faster dissolution of the solid at the top and bottom edges of the mold interface (see Fig.~\ref{fig:flow}(a)). We quantify this by plotting the total length of the dissolving solid interface through the center in Fig.~\ref{fig:flow-analy}(b), and the fractional length of the dissolving solid interface above the alcove in Fig.~\ref{fig:flow-analy}(c). Both can be observed to decrease similarly, and a one-to-one correspondence can be noted with the decrease in the flow speed. To understand this, we can note that the speed of the currents is determined by gravity acting on the solute-concentrated solution, and by the viscous drag of the flow across the interface and through the bath.
Because the dissolving part of the interface decreases while the effective length of the recirculating flows through the bath remains unchanged, this can give rise to an overall slowing down of the flow, as observed here. The slowing down of the flow due to finite size effects leads to a decrease in alcove ceiling recession, as can be noted in the overlap of the contour lines at later times in Fig.~\ref{fig:vertical}(a). Thus, the overall flow across the entire interface and the dissolution at the ceiling due to density inversion are both important in determining the overall evolution of the dissolving interface. The Reynolds number of the flow is given by ${\rm Re} = {\rho \, U l}/{\mu}$, where $\rho$ is the density of the solution, $\mu$ the viscosity, $U$ is the velocity scale, and $l$ is a length scale. Then, assuming $\mu > 8.90 \times 10^{-4}$\,Pa\,s, the viscosity of water at $25^\circ$C, and {\color{black} $l \sim H = 0.4$\,cm}, we have Re $< 2.4$. Thus, the boundary layer flows in our experiments are in the low-Re laminar flow regime, as also confirmed by tracer motion near the dissolving surface in the movies. At similar low Re numbers, vortices have been reported in the case of flow past a back step~\cite{Matsui75}, and are a result of the momentum of the fluid as it moves past the sharp discontinuity. To quantify the detachment of the flow, we measure the reattachment length $L_R$, the distance from the ceiling to where the flow reattaches and moves downward next to the back wall, as shown in the inset to Fig.~\ref{fig:flow-analy}(e). Figure~\ref{fig:flow-analy}(d) and (e) show that the step length and the reattachment length of the flow are observed to decrease over time as the solid dissolves. The ratio of the reattachment length to the step size is observed to remain roughly constant over the duration of the dissolution, as the Reynolds number of the boundary layer flow ${\rm Re}_{\rm bl}$ decreases slowly (see Fig.~\ref{fig:flow-analy}(f)). {\color{black} In calculating ${\rm Re}_{\rm bl}$, we have assumed that the relevant density and viscosity of the fluid are those of water, that the length scale corresponding to the boundary layer is given by the fast-moving layer in Fig.~\ref{fig:flow}(e), which is of order 100\,$\mu$m, and that the flow speed $U$ is given by $v_y$ plotted in Fig.~\ref{fig:flow-analy}(a).} Combining our observations on the evolution of the alcove shape and gravity currents, one can surmise that the alcove shapes evolve because of the two different flows at the ceiling and on the other sides of the alcove. The flow separation and the counter-rotating vortex lead to a boundary layer which preserves the sharp discontinuity as the ceiling dissolves at a rate set largely by the solutal Rayleigh-B\'enard instability that causes the ceiling to grow laterally. In contrast, the flow does not separate from the boundary near the sides and the floor, giving rise to dissolution rates set there by the flow tangential to the surface at those locations. This causes those surfaces to round out over time, giving rise to a gradual change in slope. {\color{black} Because the dissolution increases with tangential flow, the shapes can be expected to elongate along the flow direction. Hence this flow can give rise to the smooth conical surfaces below the ceiling which elongate over time.
} \section{Dissolution of Smooth and Rough Interfaces} \label{app:smooth} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig9.pdf} \caption{The average thickness of the solid $\eta$ measured over time $t$ for the two orientations ($Q_{imp} = 0$\,mL/hr). The recession rates $\dot{\eta}$ are approximately 0.53\,mm/hr and 5.3\,mm/hr, respectively.} \label{fig:Fig3} \end{figure} In complementary experiments we examined the dissolution of solids with sufficiently smooth surfaces to see if the dissolving interface is indeed stable under otherwise similar preparation and environmental conditions. Solid blocks were prepared to be free of large bubble defects, and studied while the dissolving interface is oriented parallel or perpendicular to gravity. No alcoves were observed to grow, and the surfaces remained smooth and planar as they dissolved. Fig.~\ref{fig:Fig3}(a) shows a plot of the measured dissolved layer thickness $\eta(t)$ corresponding to the two orientations. The recession rate $\dot{\eta}$ is obtained using the slope of the dashed line fitted to the initial increase of $\eta(t)$, and is approximately 5.3\,mm/hr and 0.53\,mm/hr in the vertical and horizontal case, respectively. Thus, the dissolution is clearly greater in the case of the vertical orientation because the convective currents which carry away the dense solution in the boundary layer are stronger than in the horizontal case. While a dissolution-driven current clearly occurs when the interface is vertical, driving the rapid descent of the denser solution, a slower circulation current is set up as well in the horizontal case. This occurs because of the finite size of the dissolving surface compared to the bath: as the sucrose diffuses up, the denser fluid slowly moves to the sides of the bath container. Thus, even for the horizontal interface, dissolution is not diffusion limited but assisted by convection, albeit much less so compared with a vertical interface. We then tested whether {\it indentations} of a critical size, and not just heterogeneity, are needed to observe the growth of alcoves, by adding glass beads with a mean diameter of 0.2\,mm to the solid. This was accomplished by preparing the {\color{black} hot sucrose liquid} with the same recipe as noted earlier, and by adding beads corresponding to a volume fraction $V_f = 0.16$ as it sets. We find that although the glass beads protrude and peel off as the solid dissolves while the interface is oriented vertically, no alcoves form except where trapped air bubbles of diameter $d_b \gtrsim 0.63$\,mm are present. Alcoves were then observed to develop with inverted triangular shapes and to form at rates similar to those observed without the beads added~\cite{sup-doc}. It can be noted that this minimal defect size for alcove development is close to the critical size $\delta_c \approx 0.40$\,mm needed to trigger the solutal Rayleigh-B\'enard instability according to Eq.~\ref{Eq1}. Thus, indentations caused by trapped bubbles are apparently sufficiently large that the solutal Rayleigh-B\'enard instability can develop. The roughness and protrusions due to the additional granular phase in these experiments appear to be unimportant to the development of the overall alcove shapes. These data show the robustness of the alcove feature even in the presence of additional non-dissolving solids, as is often the case in natural sediments.
\section{Conclusions} We have demonstrated that alcoves can develop on vertical solid-fluid interfaces as the solid phase dissolves when indentations of a critical size are present on the surface. Unlike previous demonstrations of scallop-like indentations which cover the undersides of dissolving surfaces, these features are isolated, and small perturbations and protrusions are polished away as the interface dissolves. The alcoves are shown to develop starting at inverted surfaces (or ceilings) of sufficient size as they dissolve, leading to a boundary layer with a higher density than the solution below. From the observation of plumes descending from the ceiling, their spacing, and the observed recession rate of the ceiling, we deduce that the instability occurs when the boundary layer exceeds a length scale set by the critical Rayleigh number corresponding to mixed boundary conditions. Thus, we identify the mechanism which leads to the formation of alcoves as being driven by the solutal Rayleigh-B\'enard instability, as a result of the balance between the diffusion of the solute and the buildup of boundary layer density due to dissolution. By visualizing the fluid phase near the dissolving interface, we demonstrate that a gravity current develops as the density of the solution increases with solute concentration along the entire dissolving interface, and not just at the ceiling. As the fluid sinks, fresh fluid from the bath replaces it, and thus a rapid gravity current is set up moving downwards along the interface. We show that the boundary layer flow is in the low-Reynolds number laminar regime, and that a boundary flow separation occurs at the edge where the vertical interface meets the ceiling of the indentations, which enforces the discontinuous change in slope at the leading edge of the indentation. The dissolution resulting from the solutal Rayleigh-B\'enard instability, and the direction of the vortex which develops past the ceiling inside the indentation, appear to work in concert to maintain the sharp edge and widening of the ceiling as it continues to retreat upwards along the vertical interface. The continuing fluid flow down the side and floor of the indentation remains attached to the surface and serves to smooth sharp edges downstream, leading to a gradual change in slope. The combination of the density inversion instability at the ceiling and the shaping of the dissolving surfaces by the boundary layer flow results in the triangular-shaped alcove with a wide ceiling and a sloping back and floor. The evolution of these shapes is found to be robust even when additional heterogeneity in the form of non-dissolving glass beads is present in the solid phase. Although the flow geometry is quite different from that in the confined triangular cavities that occur {\it inside} salt deposits studied by Gechter et al.~\cite{Gechter2008}, an analogy can be observed. The appearance of dissolution at the ceiling due to the solutal Rayleigh-B\'enard instability leads to an efficient transfer of dissolved solute to regions where flow is rapid, in turn giving rise to the triangular shapes. The flows in and out of the alcove are, however, different from those in the formation of conical enclosed cavities in salt deposits. The rapid gravity currents over the entire open interface further shape the dissolving boundary, leading to a recession of the alcove floor with a downward slope polished by the attached boundary layer flow, consistent with recent studies of dissolving pinnacles~\cite{Wykes2018,Pegler2021}.
Further modeling is needed to draw quantitative comparisons between the features observed in these various interfaces shaped by dissolution and flow. Finally, we note that our two-phase physical model system is highly simplified compared with alcoves observed in nature. Natural cliffs have far more complex rock chemistry and heterogeneity, and can experience a wide range of physical weathering, from temperature variation to rain and wind, in addition to chemical and biological weathering. Dissolution by surface runoff fed by rainfall may play a role in the initiation of surface patterns~\cite{Guerin2020}. Nonetheless, recent experiments with dissolving solids in an aqueous bath demonstrate that convective dissolution reproduces the shape of the pinnacles created by rain in Karst regions~\cite{Huang2020}. {\color{black} In these studies, as in ours, dissolving sugar results in considerable variation in viscosity and diffusivity, which is not the case in typical rock dissolution.} The universality of the overall alcove shapes that evolve, largely independent of the initial conditions in our system, points to the underlying robustness of such features in dissolving solids shaped by gravity-driven boundary layer flows. Exploring these connections more deeply remains an avenue for further research.
\section{Introduction}\Label{int} There exist a wide variety of results concerned with the structure of the automorphism group of a given geometric structure. In Riemannian Geometry, the classical Myers-Steenrod theorem \cite{MS39} states that the group of all isometries of a Riemannian manifold is a Lie group. H.\ Cartan \cite{HCa} proved an analogous result for the group of holomorphic automorphisms of a bounded domain in $\mathbb{C}^N$. Cartan's techniques have in turn been used to establish general results for groups of diffeomorphisms of real or complex manifolds, see e.g.\ \cite{BM0}. In this paper, we consider an analogous question for CR manifolds (that one can think of as a boundary or CR version of Cartan's Theorem mentioned above): {\em Under what conditions on a real-analytic CR manifold $M$ is the group $\text{\rm Aut}_\text{\rm CR} (M)$ of all real-analytic CR automorphisms of $M$ a Lie group in an appropriate topology?} Here for every $r\in \mathbb{N} \cup \{\infty,\omega\}$, we equip $\text{\rm Aut}_\text{\rm CR} (M)$ with a natural topology that we call ``compact-open $\6C^r$ topology'', which is defined as follows. For open subsets $\Omega \subset \mathbb{R}^n$ and $\Omega' \subset \mathbb{R}^{n'}$, consider the space $\6C^r(\Omega,\Omega')$ of all maps of class $\6C^r$ from $\Omega$ to $\Omega'$. If $r\in \mathbb{N} \cup \{\infty\}$, $\6C^r(\Omega,\Omega')$ is equipped with the topology of uniform convergence on compacta together with all partial derivatives of order up to $r$. In case $r=\omega$, the space $\mathcal{C}^{\omega}(\Omega,\Omega')$ is equipped with its topology as an inductive limit of Fr\'echet spaces of holomorphic maps between open neighborhoods of $\Omega$ and $\Omega'$ in $\mathbb{C}^n$ and $\mathbb{C}^{n'}$ respectively. The compact-open $\6C^r$ topology on $\text{\rm Aut}_\text{\rm CR} (M)$ is now induced by the appropriate topology relative to the coordinate charts for the maps and their inverses (see e.g.\ \cite{BRWZ} for a more detailed discussion). For brevity, we adopt the order $k<\infty<\omega$ for any integer $k$. In this paper, we exhibit general sufficient conditions on $M$ that provide an affirmative answer to the above question. We begin with the following special case of our more general results, that is particularly easy to state: \begin{Cor}\label{t:lieh} Let $M$ be a compact real-analytic hypersurface in a Stein manifold of complex dimension at least two. Then the group $\text{\rm Aut}_\text{\rm CR}(M)$ of all $($global$)$ real-analytic CR automorphisms of $M$ is a Lie group in the compact-open $\mathcal{C}^{\omega}$ topology and the action $\text{\rm Aut}_\text{\rm CR}(M)\times M \to M$ is real-analytic. Furthermore, the compact-open $\6C^r$ topologies on $\text{\rm Aut}_\text{\rm CR}(M)$ coincide for $r = \infty,\omega$ and $r\ge k$, where $k$ is an integer depending only on $M$. \end{Cor} Corollary~\ref{t:lieh} is a direct consequence of the following more general result that also applies to CR manifolds of higher codimension. In the following statement, the notions of essential finiteness, finite nondegeneracy and minimality must be understood in the sense of \cite{BERbook} (see also Section~\ref{s:perturb} for more details). \begin{Thm}\label{t:unify} Let $M$ be a real-analytic CR manifold. 
Assume that $M$ has finitely many connected components, is minimal everywhere, and that there exists a compact subset $K\subset M$ such that: \begin{enumerate} \item[(i)] $M$ is essentially finite at all points of $K$; \item[(ii)] $M$ is finitely nondegenerate at all points of $M\setminus K$. \end{enumerate} Then $\text{\rm Aut}_\text{\rm CR}(M)$ is a Lie group in the compact-open $\mathcal{C}^{\omega}$ topology and the action $\text{\rm Aut}_\text{\rm CR}(M)\times M \to M$ is real-analytic. Furthermore, the compact-open $\6C^r$ topologies on $\text{\rm Aut}_\text{\rm CR}(M)$ coincide for $r = \infty,\omega$ and $r\ge k$, where $k$ is an integer depending only on $M$. \end{Thm} Theorem~\ref{t:unify} provides a generalization of all known corresponding results for real-analytic CR manifolds. It also covers new situations, such as in Corollary~\ref{t:lieh}; indeed, any compact real-analytic hypersurface in a Stein manifold is essentially finite and minimal at {\em each} of its points (see e.g.\ \cite{DF, BERbook}). For the case of real hypersurfaces whose Levi form is nondegenerate at every point, the conclusion of Theorem~\ref{t:unify} follows from the work of E.\ Cartan \cite{Ca1, Ca2}, Chern-Moser \cite{CM}, Tanaka \cite{Ta} and Burns-Shnider \cite{BS}. For the case of Levi-degenerate CR manifolds, the same conclusion was recently obtained by Baouendi, Rothschild, Winkelmann and the third author \cite{BRWZ} for the class of finitely nondegenerate minimal CR manifolds, which corresponds here to our Theorem~\ref{t:unify} with $K=\emptyset$. (We should point out that the results of the papers just mentioned also apply to merely smooth CR manifolds, based on the previous work \cite{KZ05}, but in this paper we shall focus on the real-analytic category.) In addition to the compact hypersurface case considered in Corollary~\ref{t:lieh}, an important class of CR manifolds for which the previously known results do not apply and for which the conclusion of Theorem~\ref{t:unify} holds is that of {\em compact} minimal real-analytic CR submanifolds embedded in a Stein manifold. Again, the condition of essential finiteness holds here at every point, see \cite{DF, BERbook} (whereas the condition of finite nondegeneracy holds only outside a proper real-analytic subvariety which need not be empty in general). Therefore, taking $K=M$ in Theorem~\ref{t:unify}, we obtain the following extension of Corollary~\ref{t:lieh} to higher codimension: \begin{Cor}\label{t:liehighcodim} Let $M$ be a compact real-analytic CR submanifold in a Stein manifold. Assume that $M$ is minimal at every point. Then the group $\text{\rm Aut}_\text{\rm CR}(M)$ of all $($global$)$ real-analytic CR automorphisms of $M$ is a Lie group in the compact-open $\mathcal{C}^{\omega}$ topology and the action $\text{\rm Aut}_\text{\rm CR}(M)\times M \to M$ is real-analytic. Furthermore, the compact-open $\6C^r$ topologies on $\text{\rm Aut}_\text{\rm CR}(M)$ coincide for $r = \infty,\omega$ and $r\ge k$, where $k$ is an integer depending only on $M$. \end{Cor} On the other hand, Theorem~\ref{t:unify} also applies to cases with $M$ noncompact that are not covered by previously known results. Let us illustrate this with an example: \begin{Exa} The hypersurface $M\subset \mathbb{C}^2$ given by $$|z|^2-|w|^4 = 1$$ is Levi-nondegenerate at all its points except the circle $S^1\times \{0\}\subset M$, where $M$ is essentially finite.
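For the reader's convenience, here is a short verification of the first claim (a routine computation, included only as an illustration; the symbols $\rho$ and $L$ below are introduced solely for this purpose). With the defining function $\rho:=z\bar z-w^2\bar w^2-1$, the vector field $L:=\rho_w\,\partial_z-\rho_z\,\partial_w$ spans $T^{1,0}M$, and since $\rho_{z\bar w}=\rho_{w\bar z}=0$, the Levi form of $\rho$ evaluated on $L$ equals $$\rho_{z\bar z}\,|\rho_w|^2+\rho_{w\bar w}\,|\rho_z|^2 \;=\; 4|w|^6-4|w|^2|z|^2 \;=\; 4|w|^2\bigl(|w|^4-|z|^2\bigr)\;=\;-4|w|^2 \quad\text{on } M,$$ which vanishes precisely on the circle $S^1\times\{0\}$.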
Hence Theorem~\ref{t:unify}, applied with $K:= S^1\times \{0\}$, yields that $\text{\rm Aut}_\text{\rm CR} (M)$ is a Lie group. On the other hand, $M$ is not finitely nondegenerate at any point of $K$ and hence the results from \cite{BRWZ} do not apply to $M$. \end{Exa} Our proof of Theorem~\ref{t:unify} makes use of the recent developments providing a relationship between various notions and results concerning jet parametrization of local CR diffeomorphisms \cite{BER97, Z97, BER99b, E, KZ05, LM05} and Lie group structures on (local and) global groups of automorphisms of CR manifolds \cite{BRWZ}. In the next section, we give in Theorem~\ref{main-technical} new sufficient conditions on a connected real-analytic CR manifold $M$, in terms of local jet parametrization properties of CR automorphisms, that ensure that $\text{\rm Aut}_\text{\rm CR}(M)$ is a Lie group. The remainder of the paper is then devoted to proving that, under the assumptions of Theorem~\ref{t:unify}, the conditions of Theorem~\ref{main-technical} are fulfilled. To this end, we establish, following the analysis of the first two authors' paper \cite{LM05}, a new parametrization theorem (Theorem~\ref{t:jetparamthm}) for local CR automorphisms that may be of independent interest. The proof of Theorem~\ref{t:unify} is given in Section \ref{ss:final}. We conclude the paper by giving in Section \ref{s:lmz} an alternative proof of Corollary~\ref{t:liehighcodim}, following \cite{Z97}, that does not make use of Theorem~\ref{main-technical} but requires compactness of the manifold $M$. \bigskip \noindent {\bf Acknowledgement.} The third author would like to thank A. Isaev for helpful discussions. \section{New sufficient conditions for the automorphism group to be a Lie group}\label{s:jetparam} Let $M$ be a real-analytic manifold and $k$ a positive integer. We use the notation $G^k(M)$ for the fiber bundle of all $k$-jets of local real-analytic diffeomorphisms of $M$. For every point $p\in M$, we denote by $G_p^k(M)$ the fiber of $G^k(M)$ at $p$. Given a germ of a local real-analytic diffeomorphism $h\colon (M,p)\to M$, we write $j^k_p h\in G_p^k(M)$ for the corresponding $k$-jet. For instance, $j^k_p{\sf id}\,$ is the $k$-jet of the identity map of $M$, regarded as a germ at $p$. In local coordinates, $j^k_p h$ is given by the source $p$, the target $h(p)$ and the collection of all partial derivatives of $h$ at $p$ up to order $k$. (See e.g.\ \cite{GG} for more details on this terminology.) We now fix an arbitrary set $\6S$ of germs of local real-analytic diffeomorphisms $h\colon (M,p)\to M$ with possibly varying reference point $p\in M$ and, as in \cite{BRWZ}, consider the following condition. \begin{Def}\Label{abstract-param} Let $k$ be a positive integer and $p_0\in M$. We say that $\6S$ {\em has the real-analytic jet parametrization property of order $k$ at $p_0$} if there exist open neighbourhoods $\Omega'$ of $p_0$ in $M$, $\Omega''$ of $j^k_{p_0}{\sf id}$ in $G^k(M)$ and a real-analytic map $\Psi \colon \Omega'\times \Omega''\to M$ such that, for every germ $h\colon (M,p)\to M$ in $\6S$ with $p\in \Omega'$ and $j_p^k h\in \Omega''$, the identity $h(\cdot) \equiv \Psi (\cdot,j_p^k h)$ holds in the sense of germs at $p$. \end{Def} The following theorem is one of the key ingredients in the proof of Theorem~\ref{t:unify}. \begin{Thm}\Label{main-technical} Let $M$ be a connected real-analytic CR manifold.
Assume that there exist an integer $k$ and a compact subset $K\subset M$ such that the following hold: \begin{enumerate} \item[(i)] For every $p_0\in K$, the set of all germs at $p_0$ of local CR diffeomorphisms of $M$ has the real-analytic jet parametrization property of order $k$ at $p_0$; \item[(ii)] The set of all germs of local CR diffeomorphisms of $M$ at all points has, at every point $p_0\in M\setminus K$, the real-analytic jet parametrization property of some finite order, possibly depending on $p_0$. \end{enumerate} Then $\text{\rm Aut}_\text{\rm CR}(M)$ is a Lie group in the compact-open $\mathcal{C}^{\omega}$ topology and the action $\text{\rm Aut}_\text{\rm CR}(M)\times M \to M$ is real-analytic. Furthermore, the compact-open $\6C^r$ topologies on $\text{\rm Aut}_\text{\rm CR}(M)$ coincide for $r = \infty,\omega$ and $r\ge k$, where $k$ is an integer depending only on $M$. \end{Thm} In the case $K=\emptyset$, Theorem~\ref{main-technical} is contained in \cite{BRWZ}. Heuristically speaking, the points of $M\setminus K$ in Theorem~\ref{main-technical} (ii) fulfil a ``strong'' jet parametrization property (namely, a so-called complete system in the sense of \cite{KZ05,BRWZ}). In Theorem~\ref{main-technical}, we allow some points to satisfy a weaker property (namely condition (i)), but we have to pay the price by requiring that all these points lie in a compact subset of $M$. \begin{proof}[Proof of Theorem~{\rm \ref{main-technical}}] Let $K\subset M$ be the compact subset as in Theorem~\ref{main-technical}; without loss of generality we may assume that $K$ is nonempty. We first apply the parametrization property for the set of all germs at a fixed point $p_0\in K$, which holds in view of (i). By Definition~\ref{abstract-param}, for every fixed $p_0\in K$, we can find open neighbourhoods $\Omega'$ of $p_0$ in $M$, $\Omega''$ of $j^k_{p_0}{\sf id}$ in $G^k(M)$ and a real-analytic map $\Psi \colon \Omega'\times \Omega''\to M$ such that, for every germ $h\colon (M,p_0)\to M$ of a local CR diffeomorphism of $M$ with $j_{p_0}^k h\in \Omega''$, we have the identity $h(\cdot) \equiv \Psi (\cdot,j_{p_0}^k h)$ in the sense of germs at $p_0$. Let $\2\Omega'$ (resp.\ $\2\Omega''$) be a smaller neighbourhood of $p_0$ in $\Omega'$ (resp.\ of $j^k_{p_0}{\sf id}$ in $\Omega''$) which is relatively compact in $\Omega'$ (resp.\ $\Omega''$), chosen for every $p_0\in K$. Without loss of generality, all neighbourhoods here are connected. Using the compactness of $K$ and passing to a finite subcovering, we obtain a finite collection of points $p_1,\ldots,p_s\in K$, the corresponding neighbourhoods $$\Omega'_m \supset \supset \2\Omega'_m \ni p_m, \quad \Omega''_m \supset \supset \2\Omega''_m \ni j^k_{p_m}{\sf id},$$ and real-analytic maps $\Psi_m \colon \Omega'_m\times \Omega''_m\to M$ for $m=1,\ldots,s$, such that $(\2\Omega'_m)$ is a covering of $K$. We next define neighbourhoods $\6U$ and $\2{\6U}$ of the identity mapping in $\text{\rm Aut}_\text{\rm CR}(M)$ with respect to the compact-open $\6C^k$ topology as follows: \begin{equation}\Label{u-def} \2{\6U} := \{ g\in \text{\rm Aut}_\text{\rm CR}(M) : j^k_{p_m} g \in \2\Omega''_m, \, 1\le m\le s\}, \quad {\6U} := \{ g\in \2{\6U} : g^{-1} \in \2{\6U} \}. \end{equation} It is clear from the definition of the topology chosen that $\6U$ is indeed an open set. Obviously the same conclusion holds for the compact-open $\mathcal{C}^{\infty}$ topology as well as for the compact-open $\mathcal{C}^r$ topology for any $r \ge k$.
Our main step of the proof will be to show that $\6U$ is relatively compact in $\text{\rm Aut}_\text{\rm CR}(M)$. We shall prove it with respect to the compact-open $\6C^k$ topology, which is Fr\'echet and hence, in particular, metrizable. Thus it suffices to prove that the closure of $\6U$ is sequentially compact. Let $(f_n)$ be any sequence in $\6U$, for which we shall prove that there exists a convergent subsequence. In view of \eqref{u-def}, we have $$j^k_{p_m} f_n \in \2\Omega''_m \subset\subset \Omega''_m \subset G^k(M), \quad m=1,\dots,s,$$ for every $n$. Hence, passing to a subsequence, we may assume that $j^k_{p_m} f_n$ converges to some $\Lambda_m\in \Omega''_m$ for each $m=1,\dots,s$. Following the strategy of \cite{BRWZ}, we denote by $\6O$ the open set of points $q\in M$ with the property that $(f_n)$ converges in the compact-open $\mathcal{C}^{\omega}$ (and hence any $\6C^r$ with $r\ge k$) topology in a neighbourhood $V$ of $q$ in $M$ to a map $f\colon V\to M$ such that the Jacobian of $f$ at $q$ is nonzero. We want to show that $\6O$ is nonempty and closed in $M$. By our construction, we have $f_n(\cdot) \equiv \Psi_1 (\cdot,j_{p_1}^k f_n)$ in the sense of germs at $p_1$ and hence, by the identity principle for real-analytic functions, all over $\Omega'_1$. Since $j^k_{p_1} f_n$ converges to $\Lambda_1 \in \Omega''_1$, it clearly follows that $f_n|_{\Omega'_1}\to f:=\Psi_1(\cdot,\Lambda_1)$ as $n\to +\infty$ in the compact-open $\mathcal{C}^{\omega}$ topology on $\Omega'_1$. In particular, we also have $j^k_{p_1}f_n\to j_{p_1}^kf$ and since $j_{p_1}^kf \in \Omega''_1\subset G^k(M)$, we immediately see that $p_1\in \6O$, proving that $\6O$ is nonempty. To show that $\6O$ is closed, let $q_0$ be any point in the closure of $\6O$ in $M$. We now distinguish two cases. {\bf Case 1}. $q_0\notin K$. Here we can repeat the arguments of the proof of \cite[Lemma 3.3]{BRWZ} to show that $q_0\in \6O$. {\bf Case 2}. $q_0\in K$. Here we only have the restricted parametrization given by (i) and hence cannot use the same arguments as in Case 1; instead, we use our construction. Since the neighbourhoods $\2\Omega'_m$, $m=1,\ldots,s$, cover $K$, we have $q_0\in \2\Omega'_{m_0} \subset \subset \Omega'_{m_0}$ for some $m_0$ and let $p_{m_0}\in K$ be the corresponding point. The sequence of the $k$-jets $\Lambda_{m_0}^n:= j^k_{p_{m_0}} f_n$ converges to $\Lambda_{m_0}$ by our assumptions above and therefore \begin{equation}\Label{m0par} f_n(\cdot) \equiv \Psi_{m_0} (\cdot,\Lambda_{m_0}^n) \to \Psi_{m_0} (\cdot,\Lambda_{m_0}), \end{equation} which immediately implies that $q_0\in \6O$. Summarizing, we have shown that $\6O$ is nonempty, open and closed in $M$ and therefore $\6O=M$, i.e.\ $(f_n)$ converges on $M$ to a real-analytic map $f\colon M\to M$ which is automatically CR. Furthermore, by our construction of $\6U$, also the sequence of the inverses $f_n^{-1}$ is in $\6U$. Hence similar arguments show that this sequence converges to another real-analytic CR self-map $g$ of $M$. Then it follows that $g\circ f = f\circ g = {\sf id}\,$ and therefore $f\in \text{\rm Aut}_\text{\rm CR}(M)$. This completes the proof that the chosen neighbourhood $\6U$ of ${\sf id}\,$ in $\text{\rm Aut}_\text{\rm CR}(M)$ is relatively compact. Since any $g\in \text{\rm Aut}_\text{\rm CR}(M)$ has $g\6U$ as its neighbourhood, it follows that the whole group $\text{\rm Aut}_\text{\rm CR}(M)$ is locally compact. 
As in \cite{BRWZ}, we make use of the following theorem of Bochner-Montgomery \cite{BM}, \cite[Theorem 2, p.~208]{MZ}: \begin{Thm}[Bochner-Montgomery]\Label{bm1} Let $G$ be a locally compact topological group acting effectively and continuously on a smooth manifold $M$ by smooth diffeomorphisms. Then $G$ is a Lie group and the action $G\times M\to M$ is smooth. \end{Thm} Indeed, we have just shown that $G:= \text{\rm Aut}_\text{\rm CR}(M)$ is locally compact. Since the action $\text{\rm Aut}_\text{\rm CR}(M)\times M \to M$ is obviously effective, Theorem~\ref{bm1} shows that $\text{\rm Aut}_\text{\rm CR}(M)$ is a Lie group and its action is smooth. The coincidence of the compact-open $\6C^r$ topologies on $\text{\rm Aut}_\text{\rm CR}(M)$ for $r\ge k$ also follows from the proof. Finally the analyticity of the action follows from another result of Bochner-Montgomery \cite{BM0}: \begin{Thm}[Bochner-Montgomery]\Label{bm2} Let $G$ be a Lie group acting continuously on a real-analytic manifold $M$ by real-analytic diffeomorphisms. Then the action $G\times M\to M$ is real-analytic. \end{Thm} The proof of Theorem~\ref{main-technical} is complete. \end{proof} \section{Parametrization of local CR diffeomorphisms}\label{s:perturb} In order to deduce Theorem~\ref{t:unify} from Theorem~\ref{main-technical}, we will establish a jet parametrization property of local CR diffeomorphisms for a certain class of real-analytic CR submanifolds in complex space. Such a property has already been established by the first two authors in \cite{LM05} for an appropriate class of CR manifolds for local CR diffeomorphisms {\em which furthermore fix a given point of the manifold}. However, in view of Definition~\ref{abstract-param}, we need to extend such a parametrization property to local CR diffeomorphisms {\em which do not necessarily fix a base point}. In what follows, we make the above statements precise and show how they may be derived from the analysis given in the paper \cite{LM05}. The class of germs of real-analytic generic submanifolds we shall consider in this paper is the one introduced by the first two authors in \cite{LM05}, denoted by ${\mathcal C}$, whose definition we now recall. Denote by $(M,p_0)$ a germ of a real-analytic generic submanifold of $\mathbb{C}^N$ (or, more generally, of any complex manifold) of CR dimension $n$ and real codimension $d$, i.e.\ $N=n+d$, $T_{p_0}M + iT_{p_0}M = T_{p_0}\mathbb{C}^N$ and $n=\dim_\mathbb{C} (T_{p_0}M\cap iT_{p_0}M)$. Let $\rho =(\rho_1,\ldots,\rho_d)$ be a real-analytic vector valued defining function for $M$ in some neighbourhood $U$ of $p_0$ in $\mathbb{C}^N$ satisfying $\partial \rho_1\wedge \ldots \wedge \partial \rho_d\not =0$. Using standard notation, we write $\rho$ as a convergent power series (after shrinking $U$ if necessary) \[\rho(Z,\1{Z})=\sum_{\alpha,\beta\in\mathbb{N}^N} \rho_{\alpha\beta}(Z-p_0)^\alpha (\1{Z-p_0})^\beta,\quad Z\in U,\] where $\rho_{\alpha,\beta}\in \mathbb{C}^d$ satisfy $\rho_{\alpha,\beta} = \overline{\rho_{\beta,\alpha}}$, and complexify it to the power series \[\rho(Z,\zeta)=\sum \rho_{\alpha\beta}(Z-p_0)^\alpha (\zeta-\1p_0)^\beta, \quad \partial\rho_1\wedge\cdots\wedge\partial\rho_d\ne0, \] with $(Z,\zeta)\in\mathbb{C}^N\times\mathbb{C}^N$, which we still denote by $\rho$. It is easy to see that the complexification $\rho(Z,\zeta)$ is still convergent in a suitable neighbourhood of $(p_0,\1p_0)$ that (after shrinking $U$ again if necessary) can be chosen of the form $U\times\1U\subset \mathbb{C}^N\times \mathbb{C}^N$. 
Recall that the {\em Segre variety} $S_q$ of a point $q\in U$ is the $n$-dimensional complex submanifold of $U$ given by $S_q:=\{Z\in U:\rho (Z,\1q)=0 \}$. Furthermore, the {\em complexification of $M$} is defined to be the $2n+d$-dimensional complex submanifold of $U\times\1U$ given by \begin{equation}\label{complexification} {\mathcal M}:=\{(Z,\zeta)\in U\times\1U :\rho (Z,\zeta)=0\} = \left\{ (Z,\zeta)\in U\times\1U \colon Z \in S_{\bar \zeta} \right\}. \end{equation} For every integer $k$ and for $q\in \mathbb{C}^N$, we denote by $J^{k,n}_q(\mathbb{C}^N)$ the space of all jets at $q$ of order $k$ of $n$-dimensional complex submanifolds of $\mathbb{C}^N$ passing through $q$. For every $q\in M$ sufficiently close to $p_0$, we consider the anti-holomorphic map $\pi_q^k$ defined as follows: \begin{equation}\label{e:segredef} \pi_q^k \colon S_q \to J^{k,n}_q(\mathbb{C}^N), \quad \pi_q^k (\xi) = j^k_q S_\xi, \end{equation} where $j^k_q S_\xi$ denotes the $k$-jet at $q$ of the submanifold $S_\xi$ (see e.g.\ \cite{Z99} for more details on jets of complex submanifolds used here, and also \cite{LM05}). Following \cite{LM05}, we say that the germ $(M,p_0)$ belongs to the class $\mathcal{C}$ if {\em the anti-holomorphic map $\pi^k_{p_0}$ is generically of full rank $n= \dim S_{p_0}$ in any neighborhood of $p_0$, for $k$ sufficiently large}. For $(M,p_0)\in \mathcal{C}$, we denote by $\kappa_M(p_0)$ the smallest integer $k$ for which the map $\pi_{p_0}^k$ is of generic rank $n$. Since the Segre varieties are associated to $(M,p_0)$ in a biholomorphically invariant way, the integer $\kappa_M(p_0)$ is a biholomorphic invariant of the germ $(M,p_0)$. Note furthermore that the condition for a germ of real-analytic generic submanifold to belong to the class $\mathcal{C}$ is an {\em open} condition in the sense that, if $M\in \mathcal{C}$ is given by the equation $\rho(Z,\bar Z)=0$ as above, then $\2M\in\mathcal{C}$ for any $\2M$ given by the equation $\2\rho(Z,\bar Z)=0$ with $\2\rho$ sufficiently close to $\rho$ in the $\mathcal{C}^{\infty}$ topology (see e.g.\ \cite{GG} for details on this topology; here it is enough to assume that $\2\rho$ is close to $\rho$ in the $\mathcal{C}^{k}$-topology for a suitable $k$). In particular, there exists a neighbourhood $V$ of $p_0$ in $M$ such that $(M,q)\in\mathcal{C}$ for all $q\in V$ and moreover, it is clear from the definition that $\kappa_M(q)$ is {\em upper semi-continuous} on $V$. We also recall that $M$ is {\em essentially finite} (resp.\ {\em finitely nondegenerate}) at $p_0$ if the map $\pi^k_{p_0}$ is {\em finite} near $p_0$ (resp.\ an {\em immersion} at $p_0$) for $k$ sufficiently large (see \cite{BHR, BERbook} for more details). It follows that finite nondegeneracy of $M$ at $p_0$ implies essential finiteness of $M$ at $p_0$ which in turn implies that $(M,p_0)\in \mathcal{C}$. Recall also that $M$ is {\em minimal} at $p_0$ if there does not exist any CR submanifold of lower dimension contained in $M$ and passing through $p_0$ with the same CR dimension as that of $M$ (see \cite{Tu, BERbook}). For a real-analytic CR submanifold $M\subset \mathbb{C}^N$ that is not necessarily generic and for a point $p_0\in M$, we say that $(M,p_0)$ is in the class $\mathcal{C}$ if it is in the class $\mathcal{C}$ when considered as a generic submanifold of its intrinsic complexification, i.e.\ the minimal germ of a complex submanifold of $\mathbb{C}^N$ containing $(M,p_0)$ (see e.g.\ \cite{BERbook} for this notion). 
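As a simple illustration of these notions (a standard example, recalled here only for the reader's convenience), consider the hyperquadric $M_0:=\{(z,w)\in\mathbb{C}^2:{\sf Im}\,w=|z|^2\}$. For $q=(a,b)\in M_0$ one finds $S_q=\{(z,w): w=\bar b+2i\bar a\,z\}$, and for $\xi=(\xi_1,\xi_2)\in S_q$ the $1$-jet at $q$ of the graph $S_\xi=\{w=\bar\xi_2+2i\bar\xi_1 z\}$ is determined by the slope $2i\bar\xi_1$. Hence $\pi^1_q$ is an anti-holomorphic immersion of rank $1=n$ everywhere on $S_q$, so that $M_0$ is finitely nondegenerate at every point; in particular $(M_0,q)\in\mathcal{C}$ with $\kappa_{M_0}(q)=1$ for every $q\in M_0$.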
We should also note that the local nondegeneracy conditions introduced above are defined in the same way for abstract real-analytic CR manifolds, since such manifolds can always be locally embedded in some complex Euclidean space $\mathbb{C}^q$ for some integer $q$, see e.g.\ \cite{BERbook}. Finally, we refer the reader to \cite{LM05} for examples of manifolds that belong to the class $\mathcal{C}$, as well as for a more thorough discussion of the relation between this nondegeneracy condition and other well-known nondegeneracy conditions such as essential finiteness and finite nondegeneracy. We only stress in this paper the following fact that will be used implicitly in the proofs of Corollaries~\ref{t:lieh} and \ref{t:liehighcodim} and that follows from a result of \cite{DF}: {\em for every compact real-analytic CR submanifold $\Sigma$ embedded in some Stein manifold and for every $q\in \Sigma$, $\Sigma$ is essentially finite at $q$, and in particular $(\Sigma, q)\in {\mathcal C}$} (see \cite{LM05} for more details). The following parametrization theorem is the second main ingredient of the proof of Theorem~\ref{t:unify}. \begin{Thm}\label{t:jetparamthm} Let $M\subset \mathbb{C}^N$ be a real-analytic CR submanifold of codimension $d$ and $p_0\in M$. Assume that $(M,p_0)$ is minimal and belongs to the class ${\mathcal C}$ and set $\ell_0:=2(d+1)\kappa_M(p_0)$. Then the set of all germs $h\colon (M,p_0)\to M$ of local CR diffeomorphisms of $M$ has the real-analytic jet parametrization property of order $\ell_0$ at $p_0$. \end{Thm} As mentioned above, the difference between Theorem \ref{t:jetparamthm} and \cite[Theorem 7.3]{LM05} is due to the fact that the parametrization theorem given in \cite{LM05} is obtained for the set of germs $h\colon (M,p_0)\to (M,p_0)$ of local CR diffeomorphisms with fixed source $p_0$ but also with fixed target $p_0$. The version given here by Theorem~\ref{t:jetparamthm} allows one to parametrize local CR diffeomorphisms which send the point $p_0$ to a varying target point $p\in M$ (close to $p_0$) that has to be regarded as an additional parameter. In Theorem~\ref{t:deformation} below, we will provide a deformation version of \cite[Theorem 7.3]{LM05} which allows us to treat this additional parameter. (Note that it is not always possible to parametrize, in a proper sense, the germs of all local CR diffeomorphisms with both source and target varying; see Remark~\ref{cannot} below.) Before we proceed, we need to introduce some further terminology. Given a real-analytic manifold $E$ and a point $p_0\in \mathbb{C}^N$, a {\em real-analytic family of germs at $p_0$ of real-analytic generic submanifolds} $(M_\epsilon)_{\epsilon\in E}$ of $\mathbb{C}^N$ is given by a family of convergent power series mappings in $Z$ and $\bar Z$ centered at $p_0$, $\rho(Z,\bar Z;\epsilon)= (\rho_1(Z,\bar Z;\epsilon),\ldots,\rho_{d}(Z,\bar Z;\epsilon))$ with $\rho(p_0,\bar p_0;\epsilon)=0$ and $\partial\rho_1(\cdot;\epsilon)\wedge\cdots\wedge\partial\rho_{d}(\cdot;\epsilon)\ne 0$ for every $\epsilon \in E$, such that there exists a neighbourhood of $\left\{ p_0 \right\} \times E\subset \mathbb{C}^N\times E$ on which $\rho(Z,\bar Z; \epsilon)$ is real-analytic in all its arguments. In particular, for each $\epsilon\in E$, the set $\{Z\in \mathbb{C}^N:\rho (Z,\bar Z;\epsilon)=0\}$ defines a germ at $p_0$ of a real-analytic generic submanifold $M_\epsilon\subset \mathbb{C}^N$ of codimension $d$.
Given a fixed germ of a real-analytic generic submanifold $M\subset \mathbb{C}^N$ through $p_0$ and $(M_\epsilon)_{\epsilon\in E}$ a real-analytic family through $p_0$ as defined above, we say that $(M_\epsilon)_{\epsilon\in E}$ is a real-analytic {\em deformation} of $(M,p_0)$ if there exists $\epsilon_0\in E$ such that $(M,p_0)=(M_{\epsilon_0},p_0)$. We are now ready to state the following result. \begin{Thm}\label{t:deformation} Let $(M,p_{0})$ be a germ of a real-analytic generic submanifold of codimension $d$ that is minimal and in the class ${\mathcal C}$, and set $\ell_0 = 2 (d+1) \kappa_{M} (p_0)$. Let $(M_\epsilon)_{\epsilon \in E}$ be a real-analytic deformation of the germ $(M,p_0)$ $($parametrized by some real-analytic manifold $E)$ with $(M_{\epsilon_0},p_0)=(M,p_0)$ for some $\epsilon_0\in E$. Then there exist open neighbourhoods $U_0$ of $\epsilon_0$ in $E$, $U_1$ of $p_0$ in $\mathbb{C}^N$ and $\Omega$ of $j_{p_0}^{\ell_0} {\sf Id}$ in $G^{\ell_0}_{p_0}(\mathbb{C}^N)$, and a real-analytic map $\Psi(Z, \Lambda; \epsilon ) \colon U_1 \times \Omega \times U_{0} \to \mathbb{C}^N$, holomorphic in its first factor, such that for every germ of a biholomorphic map $H\colon (\mathbb{C}^N,p_0) \to (\mathbb{C}^N,p_0)$ sending $(M_\epsilon, p_0 )$ for some $\epsilon\in U_0$ into $(M,p_0)$ with $j_{p_0}^{\ell_0} H \in \Omega$, we have \[ H(Z) = \Psi (Z, j_{p_0}^{\ell_0} H; \epsilon ),\ {\rm for}\ Z\in \mathbb{C}^N\, {\rm close}\ {\rm to}\ p_0. \] \end{Thm} \begin{Rem}\label{cannot} It is natural to ask whether Theorem~\ref{t:deformation} remains true with the target manifold $(M,p_0)$ also varying. Such a result holds for finitely nondegenerate manifolds \cite{BER99b, KZ05}. However, it {\em cannot} hold for the more general class $\mathcal{C}$ (even in the real-analytic case) as the example with $M\subset \mathbb{C}_{(z,w)}^2$ given by ${\sf Im}\, w=|z|^4$ shows, see \cite[Example~1.5]{KZ05}. \end{Rem} Let us now show how Theorem~\ref{t:jetparamthm} follows from Theorem~\ref{t:deformation}. \begin{proof}[Proof of Theorem~{\rm \ref{t:jetparamthm}} assuming Theorem~{\rm \ref{t:deformation}}] Without loss of generality, we may assume that $M$ is generic. Let $\rho=\rho (Z,\bar Z)$ be a real-analytic vector valued defining equation for $M$ in a neighbourhood $U$ of $p_0$. Consider the real-analytic deformation of the germ $(M,p_0)$ obtained by varying the base point in some small neighbourhood $\widetilde U\subset U$, i.e.\ the deformation defined by the family $(M_{p})_{p\in \widetilde U}$, where $M_p$ is the germ at $p_0$ of the real-analytic generic submanifold given by $\{Z\in \mathbb{C}^N:\rho (Z-p_0+p,\bar Z-\bar{p}_0+\bar p)=0\}$. Applying Theorem~\ref{t:deformation} to this deformation, one easily derives the following: \begin{Pro}\label{p:jetparaminv} Under the assumptions of Theorem~{\rm \ref{t:jetparamthm}}, the set of all germs $h\colon (M,p)\to (M,p_0)$ of local CR diffeomorphisms of $M$ with variable source point $p$ has the real-analytic jet parametrization property of order $\ell_0$ at $p_0$. \end{Pro} The conclusion of Theorem~\ref{t:jetparamthm} then follows easily from Proposition~\ref{p:jetparaminv} and an application of the inverse function theorem. We leave the details to the reader. \end{proof} \section{Proof of Theorem~\ref{t:deformation}} We assume that we are in the setting of Theorem~\ref{t:deformation}. Without loss of generality, suppose that $p_0$ coincides with the origin in $\mathbb{C}^N$ and set $n=N-d$.
Consider the given real-analytic family $(M_\epsilon)_{\epsilon\in E}$ and $\epsilon_0\in E$ satisfying $(M_{\epsilon_0},0)=(M,0)$. \subsection{Normal coordinates and Segre mappings for the deformation} The first basic fact needed for the construction of a mapping $\Psi$ satisfying the conclusion of Theorem~\ref{t:deformation} is the choice of a certain set of coordinates (the so-called ``normal coordinates'') for each manifold $M_\epsilon$ near the origin and depending real-analytically on $\epsilon$ for $\epsilon$ close to $\epsilon_0$. The coordinates we need are obtained from the standard construction of the normal coordinates ({\em cf.} e.g. \cite{BERbook}): \begin{Lem} \label{L:normalorabnormal} Let $(M_\epsilon)_{\epsilon\in E}$ be a real-analytic family of real-analytic generic submanifolds through the origin in $\mathbb{C}^N$ of codimension $d$ and $\epsilon_0\in E$ as above. Then there exist germs of real-analytic maps $$Z\colon (\mathbb{C}^N\times E, (0,\epsilon_0))\to (\mathbb{C}^N,0) \text{ and } Q\colon (\mathbb{C}^n\times\mathbb{C}^n\times\mathbb{C}^d\times {E},(0,0,0,\epsilon_0))\to (\mathbb{C}^d,0),$$ holomorphic in all their components except $E$, such that for every fixed $\epsilon\in E$ sufficiently close to $\epsilon_0$, the following holds: \begin{itemize} \item[(i)] ${Z}(0;\epsilon) = 0$ and the map $Z(\cdot;\epsilon)\colon (\mathbb{C}^N,0)\to (\mathbb{C}^N,0)$ is locally biholomorphic near $0$; \item[(ii)] in the local coordinates ${Z}(\cdot;\epsilon)=(z,w) \in \mathbb{C}^n\times \mathbb{C}^d$ near $0$, the manifold $M_\epsilon$ is given by \begin{equation}\label{star} w - Q(z,\1{{z}}, \1{{ w}}; \epsilon) = 0; \end{equation} \item[(iii)] one has $Q(z,0, \tau;\epsilon) \equiv Q(0, \chi, \tau;\epsilon) \equiv \tau$. \end{itemize} \end{Lem} We fix an open neighbourhood $U_0$ of $\epsilon_0$ in $E$ so that Lemma~\ref{L:normalorabnormal} holds. After possibly shrinking $U_0$ we may assume that for every $\epsilon\in U_0$, $M_\epsilon$ is minimal at $0$ and that $\kappa_{M_\epsilon} (0) \leq \kappa_M (0)$; we set $Q(z,\chi,\tau) := Q(z,\chi,\tau,\epsilon_0)$. The next tools we need are the Segre mappings associated with the manifolds $M_\epsilon$, $\epsilon \in U_0$. Recall that for every integer $k\ge 1$, the $k$-th Segre (germ of a) mapping $$v_\epsilon^k\colon (\mathbb{C}^{kn},0)\to (\mathbb{C}^n\times\mathbb{C}^d,0)$$ associated to $(M_\epsilon,0)$ and the chosen normal coordinates is defined inductively as follows (see \cite{BER99b}): \begin{equation} v_\epsilon^1 (t^{1}) := (t^1,0), \quad v_\epsilon^{k+1}(t^{[k+1]}) := \left( t^{k+1}, Q\big(t^{k+1},\1{v_\epsilon^{k}} ( t^{[k]} ); \epsilon \big)\right), \label{e:segreparm} \end{equation} where $t^k \in \mathbb{C}^n$, $t^{[k]}: = (t^1,\dots,t^{k})\in\mathbb{C}^{kn}$. Here and throughout the paper, for any power series mapping $\theta$, we denote by $\bar{\theta}$ the power series obtained from $\theta$ by taking complex conjugates of its coefficients. For every $\epsilon \in U_0$ and a germ of a biholomorphic map $H\colon (\mathbb{C}^N,0) \to (\mathbb{C}^N,0)$ sending $(M_\epsilon,0)$ into $(M,0)$, we define \begin{equation}\label{e:given} H_\epsilon \colon (\mathbb{C}^N,0) \to (\mathbb{C}^N,0), \quad H_\epsilon: = H(Z(\cdot;\epsilon)^{-1}), \end{equation} where $Z(\cdot;\epsilon)^{-1}\colon (\mathbb{C}^N,0)\to (\mathbb{C}^N,0)$ is the local inverse of $Z(\cdot;\epsilon);$ $H_\epsilon$ sends $(M_\epsilon,0)$ written in the $Z$-coordinates into $(M,0)$, and $M_\epsilon$ is given by \eqref{star}. 
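To fix ideas about the Segre mappings \eqref{e:segreparm}, consider the model case, with the deformation parameter suppressed, of the hyperquadric given in normal coordinates in $\mathbb{C}^2$ by $w=Q(z,\bar z,\bar w)$ with $Q(z,\chi,\tau)=\tau+2iz\chi$ (an illustrative choice only, not one of the manifolds studied above). Then \eqref{e:segreparm} gives \[ v^1(t^1)=(t^1,0),\qquad v^2(t^{[2]})=\bigl(t^2,\,2it^1t^2\bigr),\qquad v^3(t^{[3]})=\bigl(t^3,\,2it^2(t^3-t^1)\bigr), \] and already $v^2$ is generically of rank $2=N$, reflecting the minimality of the hyperquadric at the origin.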
It is clear from the construction of the above coordinates and from the Inverse Function Theorem that it is enough to prove the parametrization property for all our mappings $H_\epsilon$ for $\epsilon$ sufficiently close to $\epsilon_0$ to obtain the conclusion of Theorem~\ref{t:deformation}. After choosing normal coordinates $Z'=(z',w')\in \mathbb{C}^n\times \mathbb{C}^d$ for the target manifold $M$ at $p_0$, which are fixed here, we write $H_\epsilon=(f_\epsilon,g_\epsilon)\in \mathbb{C}^n\times \mathbb{C}^d$ and also denote by $\6M_{\epsilon}$ the germ at the origin of the complexification of $M_\epsilon$. In our coordinates, it is defined as the germ of the complex submanifold of $\mathbb{C}^N\times\mathbb{C}^N$ at the origin given by $$\6M_\epsilon=\{(Z,\zeta)=((z,w),(\chi,\tau))\in \mathbb{C}^{n}\times \mathbb{C}^d\times \mathbb{C}^{n}\times \mathbb{C}^d: w=Q(z,{\chi},{\tau};\epsilon)\}.$$ \subsection{Reflection identities with parameters}\label{s:identities} We now want to state a version with parameters of the reflection identities given in \cite[Propositions 9.1 and 9.2, Lemma 9.3 and Proposition 9.4]{LM05}. For this, as in \cite{LM05}, it is convenient to introduce the following notation. For every positive integer $k$, we denote by $J_{0,0}^k(\mathbb{C}^N)$ the space of all jets at the origin of order $k$ of holomorphic mappings from $\mathbb{C}^N$ into itself and fixing the origin. In our normal coordinates $Z = (Z_1,\dots,Z_N)$ in $\mathbb{C}^N$, we identify a jet ${\mathcal J}\in J_{0,0}^{k} (\mathbb{C}^N)$ with a polynomial map of the form \begin{equation}\label{jet} {\mathcal J}={\mathcal J} (Z)=\sum_{\alpha \in \mathbb{N}^N,\ 1\leq|\alpha|\leq k}\frac{\Lambda^k_{\alpha}}{\alpha !}\, Z^{\alpha}, \end{equation} where $\Lambda^k_{\alpha}\in \mathbb{C}^N$. We thus have, for a jet ${\mathcal J}\in J_{0,0}^k(\mathbb{C}^N)$, the coordinates \begin{equation}\label{e:jetnotation} \Lambda^k:=(\Lambda^k_{\alpha})_{1\leq|\alpha|\leq k} \end{equation} given by \eqref{jet}. Given a germ of a holomorphic map $h\colon (\mathbb{C}^N,0)\to (\mathbb{C}^N,0)$, $h=h(t)$, for $t$ sufficiently small we use for the $k$-jet of $h$ at $t$ the notation $j_t^kh=:(t,h(t),\widehat j_t^kh)$ (which is defined as a germ at $0$). Moreover, since $h(0)=0$, we may also identify $j^k_0h$ with $\widehat j^k_0h$, which we will freely do in the sequel. Given the normal coordinates $(z,w)\in\mathbb{C}^n\times\mathbb{C}^d=\mathbb{C}^N$, we consider a special component of a jet $\Lambda^k \in J_{0,0}^k(\mathbb{C}^N)$ defined as follows. Denote the set of all multiindices of length one having $0$ from the $(n+1)$-th to the $N$-th component by $S$, and the projection onto the first $n$ coordinates by ${\rm proj}_1\colon \mathbb{C}^N\to \mathbb{C}^n$ (that is, ${\rm proj}_1(z,w)=z$). Then set \begin{equation}\label{e:crapagain} \widetilde \Lambda^1:=({\rm proj}_1(\Lambda_{\alpha}))_{\alpha \in S}. \end{equation} Note that for any local holomorphic map $$(\mathbb{C}^n \times \mathbb{C}^d,0) \ni (z,w) \mapsto h(z,w)=(f(z,w),g(z,w))\in (\mathbb{C}^n \times \mathbb{C}^d,0),$$ if $\jetm{0}{k}h=\Lambda^k$, then $\widetilde \Lambda^1=(\frac{\partial f}{\partial z}(0))$. We can therefore identify $\widetilde \Lambda^1$ with an $n\times n$ matrix or equivalently with an element of $J_{0,0}^1(\mathbb{C}^n)$. Throughout the paper, given any jet $\lambda^k\in J_{0,0}^k(\mathbb{C}^N)$, $\widetilde \lambda^1$ will always denote the component of $\lambda^k$ defined by \eqref{e:crapagain}.
In addition, for every positive integer $r$ and an open neighborhood $U_0$ of $\epsilon_0$ in $E$, we denote by $\6S_r=\6S_r(U_0)$ the ring of germs at $\{0\}\times U_0$ of real-analytic functions on $\mathbb{C}^r \times E$ that are holomorphic in their first argument. Recall that this is the space of all real-analytic functions that are defined in a connected open neighbourhood (depending on the function) of $\{0\}\times U_0$ in $\mathbb{C}^r \times E$ (and holomorphic in their first argument). We now collect the following versions of the reflection identities of \cite[Section 9]{LM05} with parameters that are necessary in order to complete the proof of Theorem~\ref{t:deformation}. The first basic identity given by \eqref{e:fundamental1} is standard and may be obtained by complexifying the inclusion $H_\epsilon(M_\epsilon)\subset M$ and applying the vector fields tangent to $\6M_\epsilon$. \begin{Pro}\label{p:reflection1} In the above setting, there exists a polynomial $\6D=\6D(Z,\zeta,\epsilon, \Lambda^1)\in \6S_{2N}[\Lambda^1]$ and, for every $\alpha \in \mathbb{N}^n\setminus \{0\}$, a $\mathbb{C}^d$-valued polynomial map ${\mathcal P}_{\alpha}={\mathcal P}_{\alpha}(Z,\zeta,\epsilon,\Lambda^{|\alpha|})$ whose components are in the ring $\6S_{2N}[\Lambda^{|\alpha|}]$ such that for $\epsilon \in U_0$ and for every map $H_\epsilon\colon (M_\epsilon,0) \to (M,0)$ the following holds: \begin{enumerate} \item[{\rm (i)}] $\6D (0,0,\epsilon,\Lambda^1)={{\text{\rm det}}}\, \widetilde \Lambda^1$; \item[{\rm (ii)}] for all $(Z,\zeta)\in \6M_\epsilon$ near $0$, \begin{equation}\label{e:fundamental1} (\6D(Z,\zeta,\epsilon,\widehat j_{\zeta}^{1} \overline{H}_\epsilon ))^{2|\alpha|-1}\, \bar{Q}_{\chi^{\alpha}}(\bar{f}_\epsilon(\zeta),H_\epsilon(Z))={\mathcal P}_{\alpha}(Z,\zeta,\epsilon,\widehat j_{\zeta}^{\left| \alpha \right|} \overline{H}_\epsilon). \end{equation} \end{enumerate} \end{Pro} The next identity given by \eqref{e:fundamental2} involves the (transversal) derivatives of the mappings $H_\epsilon$ and follows easily from differentiating \eqref{e:fundamental1} and applying the chain rule. \begin{Pro}\label{p:reflection2} For any $\mu \in \mathbb{N}^d\setminus \{0\}$ and $\alpha \in \mathbb{N}^n\setminus\{0\}$, there exist a $\mathbb{C}^d$-valued polynomial map ${\mathcal T}_{\mu,\alpha}(Z,\zeta,Z',\zeta',\epsilon,\lambda^{|\mu|-1},\Lambda^{|\mu|})$ whose components belong to the ring $\6S_{4N}[\lambda^{|\mu|-1},\Lambda^{|\mu|}]$ and a $\mathbb{C}^d$-valued polynomial map ${\mathcal Q}_{\mu, \alpha}(Z,\zeta,\epsilon,\Lambda^{|\alpha|+|\mu|})$ whose components are in the ring $\6S_{2N}[\Lambda^{|\alpha|+|\mu|}]$, such that for $\epsilon \in U_0$, for every map $H_\epsilon\colon (M_{\epsilon},0)\to (M,0)$ and for any $(Z,\zeta)\in \6M_\epsilon$ close to the origin, the following relation holds: \begin{equation}\label{e:fundamental2} \frac{\partial^{|\mu|}H_\epsilon}{\partial w^\mu}(Z)\cdot \bar{Q}_{\chi^{\alpha},Z}(\bar{f}_\epsilon(\zeta),H_\epsilon(Z)) =(*)_1+(*)_2, \end{equation} where \begin{equation}\label{e:*1} (*)_1:= {\mathcal T}_{\mu, \alpha}\left(Z,\zeta,H_\epsilon(Z),\overline{H}_\epsilon(\zeta), \epsilon,\widehat j_{Z}^{\left| \mu \right| -1} H_\epsilon , \widehat j_{\zeta}^{\left| \mu \right|} \overline{H}_\epsilon \right) \end{equation} and \begin{equation}\label{e:*2} (*)_2:=\frac{{\mathcal Q}_{\mu,\alpha}(Z,\zeta, \epsilon, \widehat j_{\zeta}^{|\alpha| + |\mu|} \overline{H}_\epsilon)}{(\6D(Z,\zeta,\epsilon, \widehat j_{\zeta}^{1} \overline{H}_\epsilon ))^{2|\alpha|+|\mu|-1}}.
\end{equation} \end{Pro} In the next lemma, we observe that for any given map $H_\epsilon=(f_\epsilon,g_\epsilon)$, the (transversal) derivatives of the normal component $g_\epsilon$ can be expressed (in a universal way) through the (transversal) derivatives of the components of $f_\epsilon$ and some other terms that have to be seen as remainders. In particular, this lemma will allow us (as in \cite{LM05}) to derive the desired parametrizations of the maps $H_\epsilon$ and their derivatives on each Segre set from the corresponding parametrizations of the maps $f_\epsilon$ and their derivatives. \begin{Lem}\label{l:gderivative} For any $\mu \in \mathbb{N}^d\setminus \{0\}$, there exists a $\mathbb{C}^d$-valued polynomial map \[W_\mu={W}_{\mu}\left( Z,\zeta,Z',\zeta',\epsilon, \lambda^{|\mu|-1},\Lambda^{|\mu|} \right),\] whose components belong to the ring $\6S_{4N}[\lambda^{|\mu|-1},\Lambda^{|\mu|}]$ and such that for $\epsilon \in U_0$, for every map $H_\epsilon\colon (M_{\epsilon},0) \to (M,0)$ and for any $(Z,\zeta)\in \6M_\epsilon$ close to the origin, the identity \begin{equation}\label{e:gmu} \frac{\partial^{|\mu|}g_\epsilon}{\partial w^\mu}(Z)=\frac{\partial^{|\mu|}f_\epsilon}{\partial w^\mu}(Z)\cdot Q_z(f_\epsilon(Z),\overline{H}_\epsilon(\zeta))+(*)_3 \end{equation} holds with \begin{equation}\label{e:*3} (*)_3:=W_\mu \left(Z,\zeta,H_\epsilon(Z),\overline{H}_\epsilon(\zeta), \epsilon, \widehat j_{Z}^{\left| \mu \right| - 1}H_\epsilon , \widehat j_{\zeta}^{\left| \mu \right| } \overline{H}_\epsilon \right). \end{equation} \end{Lem} The next statement is obtained as a direct combination of Lemma~\ref{l:gderivative} and Proposition~\ref{p:reflection2} and provides the form of the system of equations fulfilled by any (transversal) derivative of $f_\epsilon$. \begin{Pro}\label{p:reflection3} For any $\mu \in \mathbb{N}^d\setminus \{0\}$ and $\alpha \in \mathbb{N}^n\setminus\{0\}$, there exists a $\mathbb{C}^d$-valued polynomial map ${\mathcal T}'_{\mu,\alpha}(Z,\zeta,Z',\zeta',\epsilon,\lambda^{|\mu|-1},\Lambda^{|\mu|})$ whose components belong to the ring $\6S_{4N}[\lambda^{|\mu|-1},\Lambda^{|\mu|}]$ such that for $\epsilon \in U_0$ and for every map $H_\epsilon\colon (M_{\epsilon},0)\to(M,0)$ the following relation holds for $(Z,\zeta)\in \6M_\epsilon$ close to $0$: \begin{equation}\label{e:fundamental3} \frac{\partial^{|\mu|}f_\epsilon}{\partial w^\mu}(Z)\cdot \left(\bar{Q}_{\chi^{\alpha},z}(\bar{f}_\epsilon(\zeta),H_\epsilon(Z))+ Q_z(f_\epsilon(Z),\overline{H}_\epsilon(\zeta))\cdot \bar{Q}_{\chi^{\alpha},w}(\bar{f}_\epsilon(\zeta),H_\epsilon(Z))\right)=(*)_1'+(*)_2, \end{equation} where $(*)_2$ is given by \eqref{e:*2} and $(*)_1'$ is given by \begin{equation}\label{e:*1'} (*)_1':={\mathcal T}_{\mu,\alpha}'\left(Z,\zeta,H_\epsilon(Z),\overline{H}_\epsilon(\zeta),\epsilon, \widehat j_{Z}^{\left| \mu \right| - 1} H_\epsilon, \widehat j_{\zeta}^{\left| \mu \right|}{\overline{H}_\epsilon} \right). \end{equation} \end{Pro} Since the proofs of the above relations are analogous to those given in \cite{LM05}, we leave the details to the reader. We should point out that, in the reflection identities with parameters mentioned above, the most relevant fact is the location of the parameter $\epsilon$ in the identities. Indeed, the parameter $\epsilon$ always appears in an appropriate place so that the results concerning the parametrization of solutions of singular analytic systems given in the next subsection will be applicable.
This crucial fact explains why we can follow the analysis of \cite{LM05} in order to derive Theorem~\ref{t:deformation}. \subsection{Parametrization of solutions of singular analytic systems}\label{s:parametrization} We state here the two versions of the parametrization results for singular systems needed for the proof of Theorem~\ref{t:deformation}. The first one is needed to have a parametrization of the compositions $H_\epsilon\circ v^j_\epsilon$ for all integers $j$, where $v^j_\epsilon$ is defined by \eqref{e:segreparm}. \begin{Thm} \label{c:corparcn2} Let $A:(\mathbb{C}^m,0)\to \mathbb{C}^m$ be a germ of a holomorphic map of generic rank $m$, $X$ a real-analytic manifold, $Y$ a complex manifold and $b=b(z,x,y)$ a $\mathbb{C}^m$-valued real-analytic map defined on an open neighbourhood $V$ of $\{0\}\times X\times Y$ in $\mathbb{C}^m\times X\times Y$, holomorphic in $(z,y)$. Then there exists a real-analytic map $\Gamma=\Gamma(z,\lambda,x,y)\colon \mathbb{C}^m\times\mathsf{GL_m}(\C)\times {X}\times Y\to \mathbb{C}^m$, defined on an open neighbourhood $\Omega$ of $\{0\}\times \mathsf{GL_m}(\C)\times {X}\times Y$, holomorphic in all its components except $X$, satisfying the following properties: \begin{enumerate} \item[(i)] If $u:(\mathbb{C}^m,0)\to(\mathbb{C}^m,0)$ is a germ of a biholomorphism satisfying $A(u(z)) = b(z,x_{0},y_0)$ for some $(x_{0},y_0)\in {X}\times Y$, then necessarily $u(z) = \Gamma(z,j_0^1u,x_0,y_0)$; \item[(ii)] For every $\lambda \in \mathsf{GL_m}(\C) $ and $(x_0,y_0)\in X\times Y$, the map $\Gamma$ satisfies $\Gamma (0,\lambda,x_0,y_0)=0$ and $\displaystyle \frac{\partial \Gamma}{\partial z} (0,\lambda,x_0,y_0)=\lambda$. \end{enumerate} \end{Thm} The statement given by Theorem~\ref{c:corparcn2} follows directly from \cite[Corollary 3.2]{LM05} after an obvious complexification argument. The second version given below is needed to get a parametrization of the mappings $(\partial^\beta H_\epsilon)\circ v^j_\epsilon$ for all integers $j$ and all multiindices $\beta \in \mathbb{N}^N \setminus \{0\}$. \begin{Pro} \label{t:lineqn2} Let $\Theta$ be an $r\times r$ matrix with holomorphic coefficients near the origin in $\mathbb{C}^m$, $m,r\geq 1$, such that $\Theta$ is of generic rank $r$. Let $X$ be a real-analytic manifold and $Y$ a complex manifold. Assume that $c\colon \mathbb{C}^m\times X\times Y \to \mathbb{C}^m$ and $b\colon \mathbb{C}^m\times X\times Y \to\mathbb{C}^r$ are real-analytic maps defined on some neighbourhood $V$ of $\{ 0 \}\times X\times Y$ such that $(z,y)\mapsto b(z,x,y)$ and $(z,y)\mapsto c(z,x,y)$ are holomorphic on $V_{x} = \{ (z,y)\in\mathbb{C}^m \times Y \colon (z,x,y) \in V \}$ for each $x\in X$. Assume furthermore that $c$ satisfies \[ c(0,x,y) = 0, \quad {\text{\rm det}} \, c_z (0,x,y) \neq 0, \quad {\rm for}\ {\rm every}\ (x,y)\in X\times Y. \] Then there exists a real-analytic map $\Gamma\colon \mathbb{C}^m\times X\times Y\to \mathbb{C}^r$ defined on a neighbourhood of $\{0\} \times X\times Y$, holomorphic in all its components except $X$, such that if $u\colon (\mathbb{C}^m,0)\to\mathbb{C}^r$ is a germ of a holomorphic map satisfying $ \Theta(c(z,x_0,y_0)) \cdot u(z) = b(z,x_0,y_0)$ for some $(x_0,y_0)\in X\times Y$, then $u(z)= \Gamma(z,x_{0},y_0)$. \end{Pro} The statement given by Proposition~\ref{t:lineqn2} follows from \cite[Proposition 6.3]{LM05} and again a simple complexification argument. 
\subsection{Completion of the proof of Theorem~\ref{t:deformation}} With the statements given in Sections \ref{s:identities} and \ref{s:parametrization} at our disposal, we can follow the plan of the proof of \cite[Theorem 7.3]{LM05} to get the needed parametrization of the maps $H_\epsilon$ restricted to any Segre set. More precisely, the reader may verify that after applying the above statements as in \cite{LM05}, one obtains the following: \begin{Pro} In the above setting and shrinking the neighbourhood $U_0$ if necessary, for every positive integer $j$, there exists a real-analytic map \[ \Psi_j \colon \mathbb{C}^{nj}\times {U_0} \times J_{0,0}^{j \kappa_M (0)} (\mathbb{C}^N)\to \mathbb{C}^N, \] defined in a neighbourhood of $\{ 0 \} \times U_0 \times W_j $, where $W_j$ is an open set in the jet space containing all the jets $($at $0)$ of the maps $H_\epsilon$ for $\epsilon \in U_0$, that is holomorphic in its first factor and satisfying in addition \begin{equation}\label{e:sicksicksick} \left( H_\epsilon \circ v_\epsilon^{j}\right) \left( t^{ [j]} \right) = \Psi_j \left( t^{[j]}, \epsilon, j_0^{j \kappa_M (0)} H_\epsilon\, \right), \end{equation} for all $t^{[j]}$ sufficiently close to the origin. \end{Pro} We are now in a position to finish the proof of Theorem~\ref{t:deformation}. For this, recall first that $\ell_0=2(d+1)\kappa_M(0)$ and consider the equation \eqref{e:sicksicksick} for $j = 2 (d +1)$ that we localize near the point $(\epsilon_0,j_0^{\ell_0}{\sf Id})\in E \times J_{0,0}^{\ell_0} (\mathbb{C}^N)$. Shrinking $U_0$ if necessary, there exist open neighbourhoods $O\subset \mathbb{C}^{2n(d+1)}$ of the origin and $O'\subset J_{0,0}^{\ell_0} (\mathbb{C}^N)$ of $j_0^{\ell_0}{\sf Id}$ such that $\Psi_{2(d+1)}$ is defined over $O\times U_0\times O'$ and such that for every $\epsilon \in U_0$ satisfying $j_0^{\ell_0}H_\epsilon \in O'$, the identity \eqref{e:sicksicksick} holds (with $j=2(d+1)$) for $t^{[2(d+1)]}$ sufficiently close to the origin. The rest of the proof closely follows the lines of \cite[Section~4]{KZ05}; it consists of using a version of the implicit function theorem with singularities \cite[Lemma~3.4]{KZ05} and resolving the obtained singularities by using \cite[Lemma~4.3]{KZ05}. The differences between the situation treated in the present paper and that of \cite{KZ05} are the parameter dependence which is real-analytic in our case (instead of smooth in \cite{KZ05}) and the absence of the error terms in the formula \eqref{e:sicksicksick} (in contrast to \cite{KZ05}). The details are left to the reader. \section{Proof of Theorem~\ref{t:unify}}\label{ss:final} \begin{proof}[Proof of Theorem~{\rm \ref{t:unify}}] Suppose first that $M$ is a connected real-analytic CR submanifold in $\mathbb{C}^N$. Then we claim that the conclusion of Theorem~\ref{t:unify} follows from the conjunction of Theorem~\ref{main-technical} and Theorem~\ref{t:jetparamthm}. Indeed, assumption (i) of Theorem~\ref{t:unify} and Theorem~\ref{t:jetparamthm} imply that assumption (i) of Theorem~\ref{main-technical} is satisfied. (Note that the upper semi-continuity of the integer $\kappa_M(p)$ on $p\in K\subset M$ in Theorem~\ref{t:jetparamthm} is also used here in order to deduce the existence of the integer $k$ satisfying the conclusions of Theorem~\ref{main-technical} (i)). Furthermore, assumption (ii) of Theorem~\ref{t:unify}, together with the results of \cite{BER99b, KZ05}, implies that assumption (ii) of Theorem~\ref{main-technical} is also satisfied. This proves the claim.
If $M$ is not connected, we may repeat the arguments of the proof of \cite[Theorem 6.2]{BRWZ} since $M$ is assumed to have finitely many connected components. Finally, when $M$ is an abstract real-analytic CR manifold, the proof is the same as before since it is based on purely local arguments and since any such manifold can locally be embedded as a CR submanifold of some complex Euclidean space $\mathbb{C}^q$ for some integer $q$ (see e.g.\ \cite{BERbook}). The proof of the theorem is complete. \end{proof} \section{An elementary proof of Corollary~\ref{t:liehighcodim}}\label{s:lmz} We conclude this paper by providing an elementary proof of Corollary~\ref{t:liehighcodim} which avoids the use of Theorem~\ref{main-technical} and rather follows the proof of \cite[Corollary 1.3]{Z97}. Note that in any case one has to make use of Theorem~\ref{t:jetparamthm}. \begin{proof}[Proof of Corollary~{\rm \ref{t:liehighcodim}}] Since $M$ is compact (and everywhere minimal) and embedded in some Stein manifold, we may apply Theorem~\ref{t:jetparamthm} to conclude that there exist finitely many points $p_1,\ldots,p_k\in M$ and open neighbourhoods $\Omega'_j$ of $p_j$ in $M$ covering $M$ such that for every $h\in\text{\rm Aut}_\text{\rm CR}(M)$ sufficiently close to the identity mapping, say in an open neighbourhood $\mathcal{N}$ of it, Theorem~\ref{t:jetparamthm} holds at all points $p_j$ with a parametrization $\Psi_j$ defined in a neighborhood of $\Omega'_j\times\{p_j\}$ with jet order $\ell_j$. Set $\ell =\max_j \ell_j$. As in \cite{Z97}, our goal is to show that the image of the neighbourhood $\mathcal{N}\subset \text{\rm Aut}_\text{\rm CR}(M)$ under the homeomorphism (onto its image) \[ h \mapsto \eta(h) = \left( j_{p_1}^{\ell} h, \ldots, j_{p_k}^{\ell} h, j_{p_1}^{\ell} h^{-1},\ldots, j_{p_k}^{\ell} h^{-1}\right) \in \left( G_{p_1}^{\ell} (M)\times \ldots \times G_{p_k}^{\ell} (M)\right)^2 =: \mathcal{Y}^2 \] is a real-analytic subset of the target space, and that the group law is real-analytic. But it is easy to single out the points in the image which give rise to a global automorphism of $M$. Any $ (\alpha,\beta)= \left( \alpha_1,\ldots, \alpha_k, \beta_1,\ldots,\beta_k \right) \in \mathcal{Y}^2$ belongs to $\eta ({\mathcal N})$ if and only if for every $j, m = 1,\ldots, k$, the following identities are satisfied: \[\begin{gathered} \Psi_j( \cdot , \alpha_j) = \Psi_m (\cdot, \alpha_m), \, \Psi_j( \cdot , \beta_j) = \Psi_m (\cdot, \beta_m) \text{ on } \Omega'_j \cap \Omega'_m, \\ \Psi_m \left( \Psi_{m} \left( \cdot, \alpha_m \right),\beta_m \right) = \Psi_m \left( \Psi_{m} \left( \cdot, \beta_m \right),\alpha_m \right) = {\sf Id} \quad \text{ near } p_m, \\ \alpha_j = j_{p_j}^{\ell} \left(\Psi_j (\cdot, \alpha_j) \right), \, \beta_m = j_{p_m}^{\ell} \left( \Psi_m (\cdot, \beta_m) \right). \end{gathered}\] From this, it is clear that $\eta ({\mathcal N})$ is a real-analytic subset of $\mathcal{Y}^2$, and again following~\cite{Z97}, we see that the group law is indeed real-analytic. This concludes the proof of Corollary~\ref{t:liehighcodim}. \end{proof}
\section{Introduction} Many block ciphers used in real-life applications are \emph{translation based ciphers}, which are essentially iterated block ciphers obtained by the composition of several ``round functions'', that is, key-dependent permutations of the message/cipher space. This class of ciphers was introduced by Caranti, Dalla Volta and Sala in~\cite{CDVS09}, and contains well-known ciphers like AES~\cite{AES}, SERPENT~\cite{SERPENT} and PRESENT~\cite{PRESENT}, the latter being one of the most common lightweight ciphers (i.e., ciphers able to run on devices with very low computing power). Since 1975, when Coppersmith and Grossman~\cite{copp} defined a set of functions which can be adapted for constructing a block cipher, and studied the permutation group generated by them, much attention has been devoted to the group generated by the round functions of a block cipher. In this context, Kaliski, Rivest and Sherman~\cite{KRS88} proved that if such a group is too small, then the cipher is vulnerable to certain cryptanalytic attacks. Later, Paterson~\cite{Pat} showed that the imprimitivity of the group can be used to construct a trapdoor that may be difficult to detect. For a translation based cipher, in~\cite{CDVS09}, the authors provided two cryptographic conditions on the S-Boxes (namely, weak differential uniformity and strong anti-invariance) which guarantee the primitivity of the group generated by the round functions of the cipher. Furthermore, in~\cite{ONAN}, using the O'Nan-Scott classification of finite primitive groups, together with another cryptographic assumption, it was proved that the group in question is the alternating group. Unfortunately, neither of these results is applicable to the PRESENT cipher. Motivated by this, in the present paper we continue the study of the group generated by the round functions of a translation based cipher. The paper is organised as follows. In Section~\ref{sec:tbciphers}, we recall some definitions together with a number of known properties and results on translation based ciphers. In Section~\ref{sec:primitive}, we deal with primitive groups containing an abelian regular subgroup. In particular, we prove the primitivity of the group $G$ generated by the round functions of a translation based cipher under cryptographic assumptions different from those of the result given in~\cite{CDVS09}. More precisely, we consider the differential uniformity, which allows us to relax the strong anti-invariance hypothesis. Then, in Section~\ref{sec:main}, we provide some additional conditions from which it follows that $G$ is the alternating group. This is, for instance, the case when, under the same hypotheses, a round of the cipher is {\em strongly} proper and consists of $m$-bit S-Boxes, with $m=3,4$ or $5$. As an immediate consequence, we deduce that the round functions of some lightweight ciphers, such as PRESENT~\cite{PRESENT}, RECTANGLE~\cite{RECTANGLE} and PRINTcipher~\cite{PRINTcipher}, generate the alternating group. Finally, in Section \ref{sec:finrem}, we highlight the relationship between the non-linearity of a permutation and strong anti-invariance. \section{Translation based ciphers} \label{sec:tbciphers} Let $\mathcal{C}$ be a block cipher acting on a message space $V=({\mathbb F}_2)^{d}$, for some $d\geq1$, and suppose that $V$ coincides with the ciphertext space. Denote by $\mathcal K$ its key space.
Then any key $k\in\mathcal K$ induces a permutation $\tau_k$ on $V$ and $\mathcal{C}=\{\tau_k\,|\,k\in\mathcal{K}\}$. We are interested in determining the subgroup $$\Gamma=\Gamma(\mathcal C)=\langle \tau_k\mid k\in\mathcal K\rangle$$ of the symmetric group on $V$ generated by all permutations $\tau_k$. Unfortunately, the study of $\Gamma$ appears a difficult problem. However, most modern block ciphers are iterated ciphers, i.e., obtained by a composition of several rounds. This allows to investigate an easier permutation group related to $\Gamma$. Assume therefore that $\mathcal C$ is an iterated block cipher. Then each $\tau_k$ is a composition of some permutations of $V$, say $\tau_{k,1},\dots,\tau_{k,l}$. For any round $h$, let $$\Gamma_h=\Gamma_h(\mathcal C)=\langle\tau_{k,h}\mid k\in\mathcal K\rangle.$$ Thus we can define a new group containing $\Gamma$, that is $$\Gamma_{\infty}=\Gamma_{\infty}(\mathcal C)=\langle\Gamma_h\mid h\in\{1,\dots,l\}\rangle,$$ which is also known as the {\it group generated by the round functions}. To better understand the structure of $\Gamma_{\infty}$, we refer to translation based ciphers \cite{CDVS09,ACDVS}. This is a class of iterated block ciphers including some well-known ciphers, as for instance AES \cite{AES} and SERPENT \cite{SERPENT}. In order to recall the definition of a translation based cipher $\mathcal{C}$ and cite some results on $\Gamma_{\infty}(\mathcal{C})$, we first fix the notation. Let $m,n>1$ and $$V=V_1\oplus\dots\oplus V_n,$$ where each $V_i$ is isomorphic to $({\mathbb F}_2)^m$. As usual, we denote by $\mbox{\rm Sym}(V)$ and $\mbox{\rm Alt}(V)$ the symmetric group and the alternating group on $V$, respectively. Given $v\in V$, we write $\sigma_{v} \in \mbox{\rm Sym}(V)$ for the translation of $V$ mapping $x$ to $x+v$ and denote by $T(V)=\{\sigma_v \mid v\in V\}$ the group of all translations of $V$. Clearly $T(V)$ is an elementary abelian regular subgroup of $\mbox{\rm Sym}(V)$. Also, we denote by $\mbox{\rm GL}(V)$ the group of linear permutations on $V$ and by $\mbox{\rm AGL}(V)=\mbox{\rm GL}(V)\ltimes T(V)$ the group of affine transformations of $V$. Any $\gamma\in \rm{Sym}(V)$ is called a {\it bricklayer transformation} if, for any $i\in \{1,\ldots,n\}$ and $v=v_1+ \dots + v_n$, with $v_i\in V_i$, there exists a {\it brick} $\gamma_i\in{\rm Sym}(V_i)$ such that $$v\gamma=v_1\gamma_1+\dots+ v_n\gamma_n.$$ In symmetric cryptography, the permutation $\gamma$ is traditionally called parallel S-Box, and each $\gamma_i$ is an $m$-bit S-Box. A linear map $\lambda:V\rightarrow V$ is a {\it mixing layer} when it is used in composition with a bricklayer transformation. We say that a nontrivial proper subspace $W$ of $V$ is a {\em wall} if it is a sum of some of the subspaces $V_i$. A linear permutation $\lambda\in \mbox{\rm GL}(V)$ is then a {\it proper mixing layer} if no wall is $\lambda$-invariant. We also say that $\lambda$ is a {\it strongly proper mixing layer} if there are no walls $W$ and $W'$ such that $W\lambda=W'$. 
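When a mixing layer is given explicitly as a bit permutation, as happens for several lightweight ciphers, properness and strong properness can be tested by directly enumerating the $2^{n}-2$ walls. The following Python sketch is included purely as an illustration of these definitions (it is not part of the original analysis); it checks the two properties for a linear layer presented as a list \texttt{lam}, where bit $b$ is sent to position \texttt{lam[b]}, and it is applied to a block cyclic shift of the kind used as mixing layer in Example~\ref{example} below, which is proper but not strongly proper.
\begin{verbatim}
from itertools import combinations

m, n = 4, 4          # n bricks of m bits each, acting on d = m*n bits
d = m * n

def wall_bits(I):
    """Bit positions of the wall W_I, the direct sum of the blocks V_i, i in I."""
    return {m * i + b for i in I for b in range(m)}

def image_bits(bits, lam):
    return {lam[b] for b in bits}

def is_wall(bits):
    """A set of bit positions spans a wall iff it is a full union of blocks."""
    hit = [sum(1 for b in bits if b // m == i) for i in range(n)]
    return 0 < len(bits) < d and all(h in (0, m) for h in hit)

def is_proper(lam):
    """No wall is lambda-invariant."""
    return all(image_bits(wall_bits(I), lam) != wall_bits(I)
               for r in range(1, n) for I in combinations(range(n), r))

def is_strongly_proper(lam):
    """No wall is mapped onto a wall."""
    return all(not is_wall(image_bits(wall_bits(I), lam))
               for r in range(1, n) for I in combinations(range(n), r))

# Block cyclic shift V_i -> V_{i+1}: a proper but not strongly proper layer.
shift = [(b + m) % d for b in range(d)]
print(is_proper(shift), is_strongly_proper(shift))   # expected: True False
\end{verbatim}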
\begin{definition}[see \cite{CDVS09}] An iterated block cipher $\mathcal{C}=\{\tau_k \mid k \in \mathcal{K} \}$ over ${\mathbb F}_2$ is translation based (tb, for short) if the following hold: \begin{itemize} \item[$(1)$] each $\tau_k$ is the composition of a finite number, say $l$, of round functions $\tau_{k,h}$, with $k\in\mathcal K$ and $h\in\{1,\dots,l\}$, such that each $\tau_{k,h}$ can be written as a composition $\gamma_h\lambda_h\sigma_{\phi(k,h)}$ of three permutations of $V$, where \begin{itemize} \item[-] $\gamma_h$ is a bricklayer transformation not depending on $k$ and $0\gamma_h=0$, \item[-] $\lambda_h$ is a linear permutation not depending on $k$, \item[-] $\phi: \mathcal K\times\{1,\dots,l\}\rightarrow V$ is the key scheduling function, so that $\phi(k,h)$ is the $h$-th {\it round key}, given by the master key $k$; \end{itemize} \item[$(2)$] at least one round is a proper round, that is, \begin{itemize} \item[-] $\lambda_{h}$ is a proper mixing layer for some $h$, and \item[-] the map $\phi_{h}: \mathcal K\rightarrow V$ given by $k\rightarrow\phi(k,h)$ is surjective. \end{itemize} \end{itemize} \end{definition} As noted in \cite[Remark 3.3]{CDVS09} the assumption $0\gamma_h=0$ in (1) is not restrictive. Indeed, we can always include $0\gamma_h$ in the round key addition of the previous round. In what follows, for a fixed round $h$, we drop the round index $h$ and denote by $\rho\sigma_k$, with $\rho=\gamma\lambda$, the corresponding round function. Furthermore, we refer to a {\em strongly proper round} whenever the mixing layer of a proper round is strongly proper. \begin{lemma}\label{even} Let $\mathcal{C}$ be any tb cipher. Then $\Gamma_\infty(\mathcal{C})$ contains only even permutations. \end{lemma} \begin{proof} Let $\tau_{k,h}=\gamma\lambda\sigma_k$ be an arbitrary round function. Clearly $\sigma_k$ is an even permutation and, by \cite[Lemma 2]{We1}, $\lambda$ is also even. We show that $\gamma$ is even, from which the claim follows. For any $i\in\{1,\ldots, n\}$, let $\overline{\gamma}_i$ be the permutation of $V$ given by $$(x_1,\ldots,x_n)\overline{\gamma}_i=(x_1,\ldots,x_{i-1},x_i\gamma_i,x_{i+1},\ldots,x_n).$$ Notice that $\gamma=\overline{\gamma}_1\overline{\gamma}_2\ldots\overline{\gamma}_n$. Also, if $\gamma_i$ is the product of $t$ transpositions, then $\overline{\gamma}_i$ is the product of $t\cdot 2^{m(n-1)}= t\cdot 2^{d-m}$ transpositions. Thus each $\overline{\gamma}_i$ is an even permutation and so is $\gamma$, as required. \end{proof} Let $G$ be a transitive permutation group on a set $X$. Recall that a partition $\mathcal{B}$ of $X$ is trivial if $\mathcal{B}=\{X\}$ or $\mathcal{B}=\{\{x\}\,|\,x\in X\}$, and it is $G$-invariant if $Yg\in \mathcal{B}$ for any $Y\in \mathcal{B}$ and $g\in G$. The group $G$ is then called {\em primitive} if it has no nontrivial $G$-invariant partition of $X$. On the other hand, if a nontrivial $G$-invariant partition exists, the group is called imprimitive and the partition is a block system of $G$ on $X$. We now collect together some results which can be found in \cite{ACDVS} (see Lemma 3.4, Lemma 3.5 and Proposition 3.6). \begin{lemma}\label{lambda inverse} Let $\mathcal{C}$ be a tb cipher with a proper round $h$. Then \begin{itemize} \item[(i)] $\Gamma_h(\mathcal{C})=\langle\rho,T(V)\rangle;$ \item[(ii)] $\Gamma_h(\mathcal{C})$ is imprimitive on $V$ if and only if there exists a nontrivial proper subspace $U$ of $V$ such that $(u+v)\gamma+v\gamma\in U\lambda^{-1}$, for all $u\in U$ and $v\in V$. 
A block system is then of the form $\{U+v\,|\,v\in V\}$. \end{itemize} \end{lemma} Let $\delta$ and $m$ be positive integers, and let $f:({\mathbb F}_2)^m\rightarrow({\mathbb F}_2)^m$ be a vectorial Boolean function. Denote by $\hat{f}_u(x)=f(x+u)+f(x)$ the derivative of $f$ in the direction of $u\in({\mathbb F}_2)^m$. Recall that $f$ is {\em differentially $\delta$-uniform} (or $\delta$-uniform, for short) if $$|\{x\in ({\mathbb F}_2)^m: \hat{f}_u(x)=v\}|\leq\delta,$$ for all $u,v\in ({\mathbb F}_2)^m$, with $u\neq 0$. By \cite[Fact 3]{CDVS09}, it follows that $$|{\rm Im}(\hat{f}_u)|\geq\frac{2^{m}}{\delta}.$$ Following \cite{CDVS09}, we say that $f$ is {\it weakly $\delta$-uniform} if $$|{\rm Im}(\hat{f}_u)|>\frac{2^{m-1}}{\delta},$$ for all $u \in(\mathbb F_2)^m\backslash\{0\}$. Of course, every $\delta$-uniform function is weakly $\delta$-uniform. Given $1\leq r<m$, we also say that $f$ is {\it $r$-anti-invariant} if $f(0)=0$ and, for any subspace $U$ of $(\mathbb F_2)^m$ such that $f(U)=U$, either $\dim(U)<m-r$ or $U =(\mathbb F_2)^m$. Furthermore, $f$ is said to be {\it strongly $r$-anti-invariant} if, for any two subspaces $U$ and $W$ of $(\mathbb F_2)^m$ such that $f(U)=W$, then either $\dim(U)=\dim(W)<m-r$ or $U =W=(\mathbb F_2)^m$. The next result is Theorem 4.4 of \cite{CDVS09}. \begin{theorem}\label{Th 4.4} Let $\mathcal{C}$ be a tb cipher with a proper round $h$. Suppose that, for some $1\leq r<m$, each brick of $\gamma$ is \begin{itemize} \item[$(i)$] weakly $2^r$-uniform, and \item[$(ii)$] strongly $r$-anti-invariant. \end{itemize} Then $\Gamma_h(\mathcal{C})$ is primitive, and hence so is $\Gamma_{\infty}(\mathcal{C})$. \end{theorem} \subsection{Some applications to real-life Cryptography} ${\rm AES}$ and ${\rm SERPENT}$ are two translation based ciphers used in real-life applications and their bricks satisfy the hypotheses of Theorem \ref{Th 4.4}; as a consequence, $\Gamma_\infty({\rm AES})$ and $\Gamma_\infty({\rm SERPENT})$ are primitive groups \cite{CDVS09}. Actually, in \cite{We1} and \cite{We2}, with an ad hoc proof, it has been proved respectively that $\Gamma_\infty({\rm AES})=\mbox{\rm Alt}(V)$ and $\Gamma_\infty({\rm SERPENT})=\mbox{\rm Alt}(V)$. See also \cite{ONAN, SW} for an AES-like cipher, \cite{GOST} for a GOST-like cipher, and \cite{werdes,kasumi} for ${\rm DES}$ and ${\rm KASUMI}$ respectively. Other interesting translation based ciphers are those of type \emph{lightweight}, i.e., ciphers designed to run on devices with very low computing power. The most used in real life applications is PRESENT~\cite{PRESENT} and we would like to apply similar techniques, in order to investigate $\Gamma_\infty({\rm PRESENT})$. In the case of PRESENT, we have $V=(\mathbb F_2)^{64}$. The S-Box used in PRESENT is always the same. It is a 4-bit S-Box $\gamma:({\mathbb F}_2)^4 \to ({\mathbb F}_2)^4$ and its action in hexadecimal notation is given in Table \ref{tab:gamma}. The mixing layer of PRESENT is given in Table \ref{tab:gl} and it is proper. 
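For the small S-Boxes used in lightweight ciphers, the notions above can be checked exhaustively. The following Python sketch is offered only as an illustration (the check reported in Remark~\ref{present} below was carried out with MAGMA); it computes the differential uniformity, tests weak $4$-uniformity and tests strong $1$-anti-invariance of the PRESENT S-Box of Table~\ref{tab:gamma}. As in Remark~\ref{present}, the anti-invariance is computed on the equivalent permutation sending $0$ to $0$.
\begin{verbatim}
from itertools import combinations

m = 4
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]      # PRESENT S-Box (Table 1)

def derivative_image(f, u):
    """Im(f_u^hat) = {f(x+u)+f(x)}, with + the XOR on (F_2)^m."""
    return {f[x ^ u] ^ f[x] for x in range(2**m)}

def diff_uniformity(f):
    """Smallest delta such that f is differentially delta-uniform."""
    return max(sum(1 for x in range(2**m) if f[x ^ u] ^ f[x] == v)
               for u in range(1, 2**m) for v in range(2**m))

def is_weakly_uniform(f, delta):
    """|Im(f_u^hat)| > 2^(m-1)/delta for every u != 0."""
    return all(len(derivative_image(f, u)) > 2**(m - 1) / delta
               for u in range(1, 2**m))

def subspaces_of_dim(k):
    """All k-dimensional subspaces of (F_2)^m, as frozensets of integers."""
    found = set()
    for gens in combinations(range(1, 2**m), k):
        span = {0}
        for gv in gens:
            span |= {x ^ gv for x in span}
        if len(span) == 2**k:
            found.add(frozenset(span))
    return found

def strongly_1_anti_invariant(f):
    """No hyperplane is mapped onto a hyperplane by the 0-fixing equivalent."""
    f0 = [f[x] ^ f[0] for x in range(2**m)]
    hyper = subspaces_of_dim(m - 1)
    return all(frozenset(f0[x] for x in U) not in hyper for U in hyper)

print("differential uniformity  :", diff_uniformity(S))            # 4
print("weakly 4-uniform         :", is_weakly_uniform(S, 4))       # True
print("strongly 1-anti-invariant:", strongly_1_anti_invariant(S))  # True
\end{verbatim}
The printed values agree with the properties stated in Remark~\ref{present} below.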
\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $x$&0&1&2&3&4&5&6&7&8&9&A&B&C&D&E&F\\ \hline $x\gamma$&C&5&6&B&9&0&A&D&3&E&F&8&4&7&1&2\\ \hline \end{tabular}\caption{PRESENT S-Box}\label{tab:gamma} \end{table} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $i$&0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15\\ $i\lambda$&0&16&32&48&1&17&33&49&2&18&34&50&3&19&35&51\\ \hline \hline $i$&16&17&18&19&20&21&22&23&24&25&26&27&28&29&30&31\\ $i\lambda$&4&20&36&52&5&21&37&53&6&22&38&54&7&23&39&55\\ \hline \hline $i$&32&33&34&35&36&37&38&39&40&41&42&43&44&45&46&47\\ $i\lambda$&8&24&40&56&9&25&41&57&10&26&42&58&11&27&43&59\\ \hline \hline $i$&48&49&50&51&52&53&54&55&56&57&58&59&60&61&62&63\\ $i\lambda$&12&28&44&60&13&29&45&61&14&30&46&62&15&31&47&63\\ \hline \end{tabular}\caption{PRESENT mixing layer}\label{tab:gl} \end{table} \begin{remark}\label{present} {\rm By a computer check (using, for instance, MAGMA \cite{MAGMA}) it is possible to see that the PRESENT mixing layer is strongly proper and its S-Box is weakly $4$-uniform and strongly $1$-anti-invariant (the strongly anti-invariance is computed with an equivalent permutation that sends $0$ to $0$).} \end{remark} By the previous remark, the S-Box of PRESENT does not satisfy the hypotheses of Theorem \ref{Th 4.4}. Nevertheless, in the next section, we will see that $\Gamma_\infty({\rm PRESENT})$ is primitive, as well. \section{Primitive groups with an abelian regular subgroup} \label{sec:primitive} Our first result is similar to Theorem \ref{Th 4.4}. We now consider the differential uniformity instead of the weakly differential uniformity and this leads to relax the assumption on the strongly anti-invariance. \begin{theorem} \label{primitivity} Let $\mathcal{C}$ be a tb cipher with a proper round $h$. Suppose that, for some $1<r<m$, each brick of $\gamma$ is \begin{itemize} \item[$(i)$] $2^r$-uniform, and \item[$(ii)$] strongly $(r-1)$-anti-invariant. \end{itemize} Then $\Gamma_h(\mathcal{C})$ is primitive, and hence so is $\Gamma_{\infty}(\mathcal{C})$. \end{theorem} \begin{proof} Suppose that $\Gamma_h(\mathcal C)$ is imprimitive. Then, by $(ii)$ of Lemma \ref{lambda inverse}, there exists a nontrivial proper subspace $U$ of $V$ such that $\{U+v\mid v\in V\}$ is a block system of $\Gamma_h(\mathcal C)$ on $V$. Recall that $\gamma\lambda\in \Gamma_h(\mathcal C)$, by $(i)$ of Lemma \ref{lambda inverse}. Thus $U\gamma\lambda=U+v$, for some $v\in V$. Since $0\gamma\lambda=0$ we deduce that $U+v=U$, and therefore $W=U\gamma=U\lambda^{-1}$ is a subspace of $V$. Let $\pi_i$ be the projection $V\rightarrow V_i$ and denote by $I$ the set of all $i$ such that $\pi_i(U)\neq \{0\}$. Clearly $I\neq\emptyset$. Then either $U\cap V_i=V_i$ for all $i\in I$, or there exists $j\in I$ such that $U\cap V_j\subset V_j$. In the first case, $U=\oplus_{i\in I} V_i$ is a wall. This gives $U\gamma=U$ and consequently $U\lambda=U$, which is impossible because $\lambda$ is a proper mixing layer. Assume $U\cap V_j\subset V_j$, for some $j\in I$. We claim that $U\cap V_j$ is nontrivial. Notice that $\pi_j(U)\ne \{0\}$, so we can consider $u\in U$ such that $\pi_j(u)=u_j\neq 0$. By Lemma \ref{lambda inverse} $(ii)$, for any $v_j\in V_j$, we have $(u+v_j)\gamma+v_j\gamma\in W$. Since $W$ is a subspace, it follows that $w=u\gamma+(u+v_j)\gamma+v_j\gamma\in W$. Actually $w=u_j\gamma_j+(u_j+v_j)\gamma_j+v_j\gamma_j\in W\cap V_j=(U\cap V_j)\gamma_j$. 
If $w=0$ for all $v_j$, then the map $\hat{\gamma_j}_{u_j}$ is constant, against the fact that $\gamma_j$ is $2^r$-uniform and $r<m$. Hence $w\neq 0$ for some $v_j$, and $0\neq w\gamma_j^{-1}\in U\cap V_j$. Now $U\cap V_j$ and $W\cap V_j$ are nontrivial proper subspaces of $V_j$ such that $(U\cap V_j)\gamma_j=W\cap V_j$. Moreover, $\gamma_j$ is strongly $(r-1)$-anti-invariant. Thus \begin{equation}\label{dim} \dim(U\cap V_j)=\dim (W\cap V_j)<m-r+1. \end{equation} Let $u\in (U\cap V_j)\backslash\{0\}$. Again by $(ii)$ of Lemma \ref{lambda inverse}, we have $(u+v_j)\gamma_j+v_j\gamma_j=(u+v_j)\gamma+v_j\gamma\in W\cap V_j$ for any $v_j\in V_j$. Then ${\rm Im}(\hat{\gamma_j}_u)\subseteq W\cap V_j$, where $|{\rm Im}(\hat{\gamma_j}_{u})|\geq 2^{m-r}$ since $\gamma_j$ is $2^r$-uniform. However $0\notin{\rm Im}(\hat{\gamma_j}_{u})$: otherwise $(u+v_j)\gamma_j=v_j\gamma_j$ and $u=0$, since $\gamma_j$ is a permutation. It follows that $|W\cap V_j|\ge 2^{m-r}+1$, which implies that $\dim( W\cap V_j)\ge m-r+1$, in contradiction to (\ref{dim}). This proves that $\Gamma_h(\mathcal C)$ is primitive. \end{proof} \begin{remark}\label{recprin} {\rm The previous theorem, together with Remark \ref{present}, allows us to conclude that $\Gamma_{\infty}({\rm PRESENT})$ is primitive. Similarly for the lightweight ciphers RECTANGLE~\cite{RECTANGLE} and PRINTcipher~\cite{PRINTcipher}. Indeed, these are tb ciphers with a strongly proper mixing layer. Furthermore, the RECTANGLE 4-bit S-Box and the PRINTcipher 3-bit S-Box satisfy the hypothesis of Theorem \ref{primitivity}.} \end{remark} Notice that the structure of $\Gamma_h(\mathcal{C})$ is also known. In fact, by $(i)$ of Lemma \ref{lambda inverse}, the group $\Gamma_h(\mathcal{C})$ contains $T(V)$ which is an {\em elementary} abelian regular subgroup. We are thus able to apply the characterization of finite primitive groups with an abelian regular subgroup \cite[Theorem 1.1, see also Lemma 3.6 for more details]{Li}. For reader's convenience, we restate it in the particular case where the degree is $2^d$, for some $d\geq 1$. \begin{theorem}\label{Li} Let $G$ be a primitive permutation group of degree $2^d$, with $d\geq1$. Then $G$ contains an abelian regular subgroup $T$ if and only if either \begin{itemize} \item[$(1)$] $G\leq \mbox{\rm AGL}(d,2)$, or \item[$(2)$] $G=(S_1\times\ldots\times S_c).O.P\quad and\quad T=T_1\times\ldots\times T_c,$ where $c\geq 1$ divides $d$, each $T_i<S_i$ with $|T_i|=2^{d/c}$, the $S_i$ are all conjugate, $O\leq \mbox{\rm Out}(S_1)\times\ldots\times \mbox{\rm Out}(S_c)$, $P$ permutes transitively the $S_i$, and one of the following holds: \begin{itemize} \item[$(i)$] $S_i\simeq \mbox{\rm PGL}(d,q)$ and $T_i$ is a cyclic group of order $(q^d-1)/(q-1)$, for $q$ a prime power, or \item[$(ii)$] $S_i\simeq \mbox{\rm Alt}(2^{d/c})$ or $\mbox{\rm Sym}(2^{d/c})$ and $T_i$ is an abelian group of order $2^{d/c}$. \end{itemize} \end{itemize} \end{theorem} In part (2) the notation $G=(S_1\times\ldots\times S_c).O.P$ denotes that $N=S_1\times\ldots\times S_c$ is normal in $G$ and $G/N$ is an extension of the group $O$ by the group $P$. Also, according to the O'Nan-Scott classification of finite primitive groups, the group in (1) is of affine type, while the group in (2) is either almost simple, if $c=1$, or a wreath product (in the product action), if $c>1$. As an immediate consequence of Theorem \ref{Li}, we have the following refinement when $T$ is elementary abelian. 
\begin{corollary}\label{Liref} Let $G$ be a primitive permutation group of degree $2^d$, with $d\geq 1$. Assume that $G$ contains an elementary abelian regular subgroup $T$. Then one of the following holds: \begin{itemize} \item[$(1)$] $G$ is of affine type, that is, $G\leq \mbox{\rm AGL}(d,2)$; \item[$(2)$] $G\simeq \mbox{\rm Alt}(2^d)$ or $\mbox{\rm Sym}(2^d)$; \item[$(3)$] $G$ is a wreath product, that is, $$G=(S_1\times\ldots\times S_c).O.P\quad and\quad T=T_1\times\ldots\times T_c,$$ where $c>1$ divides $d$, each $T_i$ is an abelian subgroup of $S_i$ of order $2^{d/c}$ with $S_i\simeq \mbox{\rm Alt}(2^{d/c})$ or $\mbox{\rm Sym}(2^{d/c})$, the $S_i$ are all conjugate, $O\leq \mbox{\rm Out}(S_1)\times\ldots\times \mbox{\rm Out}(S_c)$, and $P$ permutes transitively the $S_i$. \end{itemize} In particular, if $d\leq 5$, then $G$ cannot be a wreath product. \end{corollary} \begin{proof} Suppose that $G$ is not affine. Then $G$ is as in part (2) of Theorem \ref{Li}. Let us consider the case when $G$ satisfies $(i)$. Since $T$ is an elementary abelian $2$-group and each $T_i$ is cyclic, it follows that $|T_i|=2$. Hence $$(q^d-1)/(q-1)=q^{d-1}+q^{d-2}+\ldots+q+1=2,$$ which is impossible. Thus $G$ satisfies $(ii)$ and, of course, we have $G\simeq \mbox{\rm Alt}(2^d)$ or $\mbox{\rm Sym}(2^d)$ provided that $c=1$. Finally, if $d\leq 5$, then the socle $\mathrm{soc}(S_i)$ of each $S_i$ is the Klein group, if $S_i\simeq \mbox{\rm Alt}(4)$ or $\mbox{\rm Sym}(4)$, and it is trivial otherwise. On the other hand, by Lemma 3.6 in \cite{Li} (see the proof), the socle of $G$ is a non-abelian group given by $\mathrm{soc}(S_1)\times\ldots\times \mathrm{soc}(S_c)$. Therefore $(3)$ cannot occur in this case. \end{proof} Next we will show that a primitive group on $V=(\mathbb{F}_2)^d$ cannot be of affine type, if $d$ is small and the group is generated by two abelian regular subgroups. First we recall a few preliminary results. By \cite[Theorem 1]{CDVS06}, for any abelian regular subgroup $T$ of the affine group on $(V,+)$, there is a structure of an associative, commutative, nilpotent ring $(V,\circ,\cdot)$ on $V$, where the circle operation is given by $$x \circ v=x+v+x \cdot v,$$ for all $x,v\in V$. If $T$ is also elementary, then $(V,\circ)$ is a vector space over $\mathbb{F}_2$ such that $T$ is the related translation group. Let $T_+$ and $T_\circ$ be the translation groups with respect to the operations $+$ and $\circ$, respectively, and let us consider the following subspaces $$U_\circ=\{v\in V\mid x\cdot v=0\text{ for all }x\in V\}$$ and $$W_\circ=\langle x\cdot v\mid x,v\in V\rangle$$ of $V$. By \cite[Proposition 2.1.6]{Ca15}, if $T_+\neq T_\circ$, then $1\leq \dim(U_\circ)\leq d-2$. Hence, we have $d>2$. Furthermore, slight modifications of Theorem 2.1.18 and Corollary 2.1.29, in \cite{Ca15}, allows us to state that $W_\circ\leq U_\circ$ when $d=3,4$ or $5$; in addition, if $\dim(U_\circ)=d-2$, then $\dim(W_\circ)=1$. It follows that $\dim(W_\circ)=1$ if $d=3$ or $4$, and $\dim(W_\circ)=1$ or $2$ if $d=5$. \begin{proposition}\label{affine} Let $d=3,4$ or $5$, and let $G$ be a primitive group on $V=(\mathbb{F}_2)^d$. If $G$ is generated by two elementary abelian regular subgroups, then $G\simeq \mbox{\rm Alt}(V)$ or $\mbox{\rm Sym}(V)$. \end{proposition} \begin{proof} Suppose that $G$ is neither isomorphic to $\mbox{\rm Sym}(V)$ nor to $\mbox{\rm Alt}(V)$. Then, by Corollary \ref{Liref}, we may assume $G\leq \mbox{\rm AGL}(V,+)$. 
Since $G$ is generated by two elementary abelian regular subgroups, by \cite[Theorem 1]{CDVS06}, we may also assume $G=\langle T_\circ,T_{\scalebox{0.4}{$\square$}}\rangle$, where $T_\circ$ and $T_{\scalebox{0.4}{$\square$}}$ are the translations groups with respect to the operations $\circ$ and \scalebox{0.6}{$\square$}. Suppose $T_\circ=T_+$. Since $\dim(U_{\scalebox{0.4}{$\square$}})\geq 1$, there exists $x\in V\backslash\{0\}$ such that $x$ \scalebox{0.6}{$\square$} $v=x+v$, for all $v\in V$. Thus $\{\{0,x\}+v\mid v\in V\}$ is a block system for $G$ and $G$ is imprimitive, a contradiction. Similarly, if $T_{\scalebox{0.4}{$\square$}}=T_+$. Hence $T_\circ$ and $T_{\scalebox{0.4}{$\square$}}$ are both different from $T_+$, and therefore we can apply the results mentioned above. Keeping the same notation, we have $\dim(W_\circ +W_{\scalebox{0.4}{$\square$}})<d$, so that $W_\circ +W_{\scalebox{0.4}{$\square$}}$ is a proper subspace of $V$. For any $u\in V$, let $\tau_u^\circ$ and $\tau_u^{\scalebox{0.4}{$\square$}}$ be the translations by $u$ with respect to $\circ$ and \scalebox{0.6}{$\square$}, respectively. Then $\tau_u^\circ=\kappa_u^\circ\sigma_u$, for some $\kappa_u^\circ\in \mbox{\rm GL}(V,+)$ and $\sigma_u\in T_+$. Take any $x\in W_\circ +W_{\scalebox{0.4}{$\square$}}$. Recall that $x\circ u=x+u+x\cdot u$. Thus $$x\cdot u=x\tau_u^\circ+x+u=x\kappa_u^\circ+x$$ and, since $x\cdot u\in W_{\circ}$, we have $x\kappa_u^\circ=x\cdot u+ x\in W_\circ +W_{\scalebox{0.4}{$\square$}}$. It follows that $$((W_\circ +W_{\scalebox{0.4}{$\square$}})+v)\tau^\circ_u=(W_\circ +W_{\scalebox{0.4}{$\square$}})\kappa_u^\circ+v\kappa_u^\circ+u=(W_\circ +W_{\scalebox{0.4}{$\square$}})+v\kappa_u^\circ+u,$$ for all $v\in V$. Clearly, the same holds for $\tau^{\scalebox{0.4}{$\square$}}_u$. This proves that $\{(W_\circ +W_{\scalebox{0.4}{$\square$}})+v\,|\,v\in V\}$ is a block system for $G$, our final contradiction. \end{proof} We point out that the previous result cannot be extended to $d=6$, indeed a counterexample can be constructed with the aid of MAGMA \cite{MAGMA}. \section{The main results} \label{sec:main} In this section, for a tb cipher $\mathcal C$ over $(\mathbb{F}_2)^{mn}$ with a strongly proper round $h$, we wonder when $\Gamma_{\infty}(\mathcal C)$ is the alternating group. Of course, it is enough to consider $\Gamma_h(\mathcal C)$. Thus, assuming that $\Gamma_h(\mathcal C)$ is primitive, by Corollary \ref{Liref} and Lemma \ref{even} we can restrict our attention to the cases where $\Gamma_h(\mathcal C)$ is of affine type or a wreath product. Following \cite{Ca15}, we say that a vectorial Boolean function $f$\,:\,$V\rightarrow V$ is {\em anti-crooked} (AC, for short) if, for any $a\in V\backslash\{0\}$, the set $$\mathrm{Im}(\hat{f}_a) = \{f(x+a)+ f(x)\mid x\in V \}$$ is not an affine subspace of $V$. In \cite{ACDVS}, part (2) of Theorem 4.5, the AC condition has been used to avoid that $\Gamma_{\infty}(\mathcal{C})$ is of affine type. This result remains valid for $\Gamma_{h}(\mathcal{C})$. \begin{proposition}\label{Th 4.5} Let $\mathcal{C}$ be a tb cipher with a proper round $h$ where any brick is AC. If $\Gamma_{h}(\mathcal{C})$ is primitive, then it is not of affine type. \end{proposition} However the bricks of some tb ciphers, as for example the S-Box of PRESENT, are not AC. We therefore provide the following alternative condition. \begin{proposition}\label{gruppetto} Let $\mathcal C$ be a tb cipher over $V=(\mathbb{F}_2)^{mn}$, with $m\geq 3$ and $n\geq 2$. 
Suppose that there exists a brick $\gamma_i$ corresponding to a proper round $h$ such that \begin{equation}\label{W's cond} \mbox{\rm Alt}(V_i)\subseteq \langle T(V_i),\gamma_iT(V_i)\gamma_i^{-1}\rangle. \end{equation} If $\Gamma_h(\mathcal C)$ is a primitive group, then it is not of affine type. \end{proposition} \begin{proof} Suppose to the contrary that $\Gamma_h(\mathcal C)\leq\mbox{\rm AGL}(mn,2)$ (see Corollary \ref{Liref}). Let $\tau_i\in \mbox{\rm Alt}(V_i)$ be a 3-cycle and, for any $x=(x_1,\ldots,x_{n})\in V$, define $\tau\in Sym(V)$ to be the permutation $$x \mapsto (x_1,\ldots,x_{i-1},x_i\tau_i,x_{i+1},\ldots, x_{n}).$$ The claim will follow once it is shown that $\tau\in \Gamma_h(\mathcal C)$. In fact, the minimal degree of $\mbox{\rm AGL}(mn,2)$ is $2^{mn-1}$, where the minimal degree is the minimum number of elements moved by a non-identity permutation (see, for instance, \cite[Section 3.3]{DixMor}). On the other hand, $\tau$ moves exactly $3\cdot 2^{m(n-1)}$ elements. This is impossible because $3\cdot 2^{m(n-1)}<2^{mn-1}$, for any $m\geq 3$ and $n\geq 2$. By (\ref{W's cond}), we have $\tau_i=\tau_{i_1}\tau_{i_2}\ldots \tau_{i_s}$ where each $\tau_{i_r}=\sigma_{k_i}$ or $\gamma_i \sigma_{k'_i}\gamma_i^{-1}$, for some $k_i,k'_i\in V_i$. Since $h$ is a proper round, we can consider round keys $k,k'\in V$ such that $k=(0,\ldots,0,k_i,0\ldots,0)$ and $k'=(0,\ldots,0,k'_i,0\ldots,0)$. Put $\rho_k=\rho\sigma_k$, and similarly for $k'$. Then, for any $x\in V$, we have $$ x\rho_{k'}^{-1}\rho_{k-k'}=x( \sigma_{k'}\lambda^{-1}\gamma^{-1}\gamma \lambda \sigma_{k-k'})=x \sigma_{k} $$and $$ x\rho_{k'}\rho_{k\lambda-k'}^{-1}=x(\gamma \lambda \sigma_{k'} \sigma_{k\lambda-k'}\lambda^{-1}\gamma^{-1})=(x\gamma\lambda+k\lambda)\lambda^{-1}\gamma^{-1}=x\gamma\sigma_{k}\gamma^{-1}. $$ It follows that the permutation $$x \mapsto (x_1,\ldots,x_{i-1},x_i\tau_{i_r},x_{i+1},\ldots, x_{n})$$ belongs to $\Gamma_h(\mathcal C)=\langle\rho, T(V)\rangle$. We conclude therefore that $\tau\in\Gamma_h(\mathcal C)$, as desired. \end{proof} \begin{remark} {\rm By a computer check on $4$-bit S-Boxes, one can see that there exist maps which are AC but do not satisfy condition (\ref{W's cond}) and, conversely, maps satisfying condition (\ref{W's cond}) which are not AC.} \end{remark} In \cite[Section 7]{ACDVS}, with $\mathcal C$ as in Theorem \ref{Th 4.4}, it has been shown that $\Gamma_h(\mathcal{C})$ cannot be a wreath product (provided that the proper round is strongly proper). In the next result we recall the proof and extend it to tb ciphers satisfying Theorem \ref{primitivity}. \begin{proposition}\label{altnotpres} Let $\mathcal{C}$ be a tb cipher with a strongly proper round $h$. If the hypotheses of Theorem $\ref{primitivity}$ (or Theorem $\ref{Th 4.4}$, respectively) are satisfied, then the primitive group $\Gamma_h(\mathcal{C})$ is not a wreath product. \end{proposition} \begin{proof} Of course the group $\Gamma_{h}(\mathcal{C})=\langle \rho,T(V)\rangle$ is primitive, by Theorem \ref{primitivity} (or Theorem \ref{Th 4.4}, respectively). Recall that $\rho=\gamma\lambda$, where $\lambda$ is a strongly proper mixing layer. Suppose that $G=\Gamma_h(\mathcal C)=(S_1\times\ldots\times S_c).O.P$ is the group given in (3) of Corollary \ref{Liref}. Let $S=S_1\times\ldots\times S_c$. Then $T\leq S$ and $G/S$ is a cyclic group generated by $\rho S$. 
Thus, arguing as in Section 7 of \cite{ACDVS} (see also \cite[Section 6]{ONAN}), we have $$V=W_1\oplus\ldots\oplus W_c$$ where each $W_i=0T_i\subseteq 0S_i$ is a subspace of $V$ such that $W_i\rho=W_{i+1}$, for $i=1,\ldots, c-1$, and $W_{c}\rho=W_1$. Furthermore, for any $k\in\{1,\ldots,c\}$, we obtain \begin{equation}\label{lambda1} \hat{\gamma}_u(v)=(u+v)\gamma+v\gamma\in W_{k+1}\lambda^{-1} \end{equation} for all $u\in W_k$ and $v\in V$. Let $\pi_i$ be the projection of $V$ onto $V_i$ and take $I_k$ to be the set of all $i$ such that $\pi_i(W_k)\neq \{0\}$. Clearly $I_k\neq\emptyset$. Assume first $W_k\cap V_i=V_i$ for all $k\in\{1,\ldots,c\}$ and $i\in I_k$. Thus $W_k=\oplus_{i\in I_k} V_i$ is a wall, for any $k$. In particular, $W_k\gamma=W_k$. Since $W_k\rho=W_{k+1}$, it follows that $W_k\lambda=W_{k+1}$ which is impossible, being $\lambda$ strongly proper. We may therefore assume $W_k\cap V_j\subset V_j$, for some $k\in\{1,\ldots, c\}$ and $j\in I_k$. Now, if Theorem $\ref{primitivity}$ holds, then we get a contradiction arguing as in Theorem \ref{primitivity} with $U=W_k, W=W_k\gamma=W_{k+1}\lambda^{-1}$, and using (\ref{lambda1}) instead of Lemma \ref{lambda inverse}. Similarly, by \cite[Section 7, part (II)]{ACDVS}, we have a contradiction when Theorem $\ref{Th 4.4}$ is satisfied. \end{proof} Notice that in Proposition \ref{altnotpres} it is essential that the proper round is strongly proper. \begin{example}\label{example} {\rm Let $V=V_1\oplus V_2\oplus V_3\oplus V_4$, where each $V_i=({\mathbb F}_2)^4$. Consider a tb cipher $\mathcal C$ over $V$ with the inversion map in ${\mathbb F}_{2^4}$ as S-Box for any round $h$, i.e. $\gamma_i:x\mapsto x^{2^4-2}$. Suppose that there is a unique mixing layer given by the following matrix $$ \lambda=\left[\begin{array}{ccccc} 0&I&0&0\\ 0&0&I&0\\ 0&0&0&I\\ I&0&0&0\\ \end{array}\right] $$ where 0 and $I$ are respectively the zero and identity matrices acting on $({\mathbb F}_2)^4$. It is known that $\gamma_i$ is weakly $2$-uniform and strongly $1$-anti-invariant, namely $\gamma_i$ satisfies the hypotheses of Theorem \ref{Th 4.4}. It is also easy to verify that $\lambda$ is proper. On the other hand, $\lambda$ sends $V_i$ to $V_{i+1}$, for $i=1,2,3,$ and $V_4$ to $V_1$. Thus $\lambda$ is not strongly proper. Furthermore, a computer check shows that $\Gamma_h(\mathcal{C})=\Gamma_{\infty}(\mathcal{C})$ is a wreath product. } \end{example} We can now sum up our conclusions about the simplicity of $\Gamma_{\infty}(\mathcal{C})$, as follows. \begin{theorem}[see also \cite{ACDVS}]\label{main1} Let $\mathcal{C}$ be a tb cipher over $V=(\mathbb{F}_2)^{mn}$, with a strongly proper round $h$ such that the corresponding bricks are AC and satisfy the hypotheses of Theorem $\ref{primitivity}$ (or Theorem $\ref{Th 4.4}$, respectively). Then $\Gamma_{\infty}(\mathcal{C})=\mbox{\rm Alt}(V )$. \end{theorem} \begin{proof} By Theorem $\ref{primitivity}$ (or Theorem \ref{Th 4.4}, respectively), the group $\Gamma_{h}(\mathcal{C})$ is primitive. Thus the claim follows applying first Corollary \ref{Liref}, and then Propositions \ref{Th 4.5} and \ref{altnotpres}. \end{proof} \begin{theorem}\label{main2} Let $\mathcal{C}$ be a tb cipher over $V=(\mathbb{F}_2)^{mn}$, with $m\geq 3,n\geq 2$ and a strongly proper round $h$ such that the corresponding bricks satisfy the hypotheses of Theorem $\ref{primitivity}$ (or Theorem $\ref{Th 4.4}$, respectively). Suppose further that one of these bricks satisfies condition $(\ref{W's cond})$ of Proposition $\ref{gruppetto}$. 
Then $\Gamma_{\infty}(\mathcal{C})=\mbox{\rm Alt}(V )$. \end{theorem} \begin{proof} As in Theorem $\ref{main1}$, applying Proposition \ref{gruppetto} instead of Proposition~\ref{Th 4.5}. \end{proof} \begin{corollary}\label{cormain2} Let $\mathcal{C}$ be a tb cipher over $V=(\mathbb{F}_2)^{mn}$ with a strongly proper round $h$ such that the corresponding bricks satisfy the hypotheses of Theorem $\ref{primitivity}$ (or Theorem $\ref{Th 4.4}$, respectively). Suppose $m=3,4$ or $5$, and $n\geq 2$. Then $\Gamma_{\infty}(\mathcal{C})=\mbox{\rm Alt}(V )$. \end{corollary} \begin{proof} Let $\gamma_i$ be any brick in the round $h$. Put $V_i=(\mathbb{F}_2)^{m}$ and $$G=\langle \sigma_k, \gamma_i\sigma_k\gamma_i^{-1} \mid k\in V_i\rangle.$$ According to Theorem \ref{main2}, it is enough to prove that $\mbox{\rm Alt}(V_i)\subseteq G$. Actually, by Corollary \ref{Liref} and Proposition \ref{affine}, it suffices to show that $G$ is primitive. In fact, $G$ is a subgroup of $\mbox{\rm Sym}(V_i)$ generated by two elementary abelian regular subgroups, namely $T(V_i)$ and $\gamma_i T(V_i)\gamma_i^{-1}$. Suppose to the contrary that $G$ is imprimitive. Let $U$ be a nontrivial proper subspace of $V_i$ such that $\{U+v\mid v\in V_i\}$ is a block system of $G$ on $V_i$. Then $(U+v)\gamma_i\sigma_k\gamma_i^{-1}=U+w$, with $v,w\in V_i$. It follows that $v\gamma_i\sigma_k\gamma_i^{-1}\in U+w$ and so $(U+v)\gamma_i\sigma_k\gamma_i^{-1}=U+v\gamma_i\sigma_k\gamma_i^{-1}=U+(v\gamma_i+k)\gamma_i^{-1}$. Hence, $$(U+v)\gamma_i\sigma_k\gamma_i^{-1}+(v\gamma_i+k)\gamma_i^{-1}=U.$$ Now, if $v=0$ and $k\in U\gamma_i$, then $U\gamma_i\sigma_k\gamma_i^{-1}=U$ because $0\gamma_i=0$. Hence $U\gamma_i+k=U\gamma_i$, which implies that $U\gamma_i$ is a subspace of $V_i$. On the other hand, if $k=v\gamma_i$, we have $((U+v)\gamma_i+v\gamma_i)\gamma_i^{-1}=U$. Thus $\mathrm{Im}(\hat{\gamma_i}_{u})\cup\{0\}\subseteq U\gamma_i$, for any $u\in U\backslash\{0\}$. However $0\notin{\rm Im}(\hat{\gamma_i}_{u})$ and $\gamma_i$ is $2^r$-uniform, so that $|U\gamma_i|\ge 2^{m-r}+1$. In particular $\dim(U\gamma_i)\ge m-r+1$, in contradiction to the assumption that $\gamma_i$ is strongly $(r-1)$-anti-invariant. \end{proof} The next corollary is an immediate consequence of Corollary \ref{cormain2}, Remark \ref{present} and Remark \ref{recprin}. \begin{corollary} The round functions of PRESENT, RECTANGLE and PRINTcipher generate the alternating group. \end{corollary} \section{Final remarks}\label{sec:finrem} Vectorial Boolean functions used as S-Boxes in block ciphers must have low differential uniformity, to prevent differential cryptanalysis (see \cite{crittoanalisi1,crittoanalisi2}), and high non-linearity, to prevent linear cryptanalysis (see \cite{lincrit}). Differentially $2$-uniform functions, also called \emph{Almost Perfect Non-linear (APN) functions}, are optimal. The most common dimensions used for S-Boxes are even, often powers of $2$. Unfortunately, no 4-bit APN permutation exists, and no APN permutation on $({\mathbb F}_2)^m$ with $m$ even was known until a $6$-bit APN permutation was constructed in \cite{apnseidim} (to date this is the only known example of an APN permutation in even dimension). For this reason, the permutations used as S-Boxes in block ciphers are $4$-uniform. The following computational result has been obtained by looking at all the affine equivalence classes of $4$-bit S-Boxes. \begin{fact}\label{4uni-1anti} Let $\gamma:(\mathbb F_2)^4 \rightarrow (\mathbb F_2)^4$ be a $4$-bit S-Box.
If $\gamma$ is $4$-uniform, then it is strongly $1$-anti-invariant. \end{fact} It would be interesting to give a direct proof of Fact \ref{4uni-1anti}. Indeed, by Theorem \ref{primitivity} and Corollary \ref{cormain2}, this implies that the group generated by the round functions of a tb cipher with a strongly proper round and bricks of dimension 4 is the alternating group, provided that the bricks of the strongly proper round are $4$-uniform. Next we relate the strongly 1-anti-invariance with the non-linearity, that is another algebraic property of vectorial Boolean functions. More precisely, we will show that, for a permutation, the strongly 1-anti-invariance is equivalent to the fact that the permutation has non-linearity greater than $0$. For any $m\geq 1$, let $f$ and $g$ be $m$-variable Boolean functions. The Hamming distance between $f$ and $g$ is $$\mathrm{d}(f,g)=|\{x\in ({\mathbb F}_2)^m\mid f(x)\neq g(x)\}|.$$ Recall also that $f$ can be represented as a multivariate polynomial over ${\mathbb F}_2$. This polynomial is called the Algebraic Normal Form (ANF) of $f$ (see for instance \cite{carlet}) and its degree is the (algebraic) degree of $f$ $\deg(f)$. The affine functions are Boolean functions of degree at most $1$, and they form a set which we denote by $\mathcal A_m$. The {\it non-linearity} of $f$ is then given by $$\mathcal{N}(f)={\rm min}\{{\rm d}(f,\alpha)\mid \alpha \in \mathcal A_m\}.$$ For a vectorial Boolean function $f:(\mathbb{F}_2)^m \to (\mathbb{F}_2)^m$, we denote by $_vf$ the component $\sum_{i=1}^m v_if_i$ of $f$, where $f_1,\dots,f_m$ are the coordinate functions of $f$, for all $v\in(\mathbb{F}_2)^m\backslash\{0\}$. Thus, we have $$ \mathcal{N}(f)={\rm min}\{\mathcal N(_vf)\mid v\in(\mathbb{F}_2)^m\backslash\{0\}\}. $$ The following result is Proposition 5 of \cite{Ca16}. \begin{proposition}\label{prop:Va} Let $f:(\mathbb{F}_2)^m\rightarrow(\mathbb{F}_2)^m$ be a vectorial Boolean function and $a\in (\mathbb{F}_2)^m\backslash\{0\}$. Let $V_a$ be the vector space $\{v\,\in(\mathbb{F}_2)^m\,|\,\deg(_v\hat{f}_a)=0\}$. Then $af+V_a^{\perp}$ is the smallest affine subspace of $(\mathbb{F}_2)^m$ containing $\mathrm{Im}(\hat{f}_a)$. \end{proposition} We are now ready to prove the above announced result. \begin{proposition}\label{nonlinanti} Let $m>1$ and let $\gamma$ be a permutation of $(\mathbb{F}_2)^m$ such that $0\gamma=0$. Then $\mathcal N(\gamma)\ne 0$ if and only if $\gamma$ is strongly 1-anti-invariant. \end{proposition} \begin{proof} Let $\mathcal N(\gamma)\ne 0$ and suppose that $\gamma$ is not strongly 1-anti-invariant. Then there exist $U,\,W<(\mathbb{F}_2)^m$ such that $\dim(U)=\dim(W)=m-1$ and $U\gamma=W$. As the non-linearity of a vectorial Boolean function is affine invariant (see for instance \cite{carlet}), without loss of generality, we may assume $U=W=\langle e_1,\ldots,e_{m-1}\rangle$, where $e_i$ denotes the vector with $1$ in the $i$-th position and $0$ elsewhere. Thus $(\mathbb{F}_2)^m=U\cupdot (U+e_m)$, in other words $U$ and $(U+e_m)$ give a partition of $(\mathbb{F}_2)^m$. Let $u \in U$ with $u\neq 0$. Since $U\gamma=U$, we have $(U+e_m)\gamma=U+e_m$ which implies that $(u+v)\gamma+v\gamma \in U$, for all $v\in (\mathbb{F}_2)^m$. Hence $\mathrm{Im}(\hat{\gamma}_u)\subset U$ and, by Proposition \ref{prop:Va}, we have $V_u^\perp+u\gamma\subseteq U$. But $u\gamma\in U$, so that $V_u^\perp\subseteq U$ and $U^\perp=\langle e_m\rangle\subseteq V_u$. 
It follows that $\langle e_m\rangle\subseteq V_{e_i}$ for all $e_i$ with $1\le i\le m-1$, i.e., the derivative $(\widehat{{\gamma}_{m}})_{e_i}$ is constant. This implies that in the ANF representation of $\gamma_{m}$ the variable $x_i$ can appear only in the part of degree $1$, that is, $\gamma_{m}$ has degree at most $1$, contradicting the fact that $\mathcal N(\gamma)\ne 0$. Conversely, let $\gamma$ be strongly 1-anti-invariant and suppose $\mathcal N(\gamma)=0$. As before, let $U=\langle e_1,\ldots,e_{m-1}\rangle$ and $(\mathbb{F}_2)^m=U\cupdot (U+e_m)$. Without loss of generality, we may assume $\gamma_{m}=x_m$. Since $\gamma$ is a permutation, it is easy to verify that $U\gamma=U$, which is a contradiction. \end{proof} By the previous proposition, an S-Box $\gamma$ is strongly 1-anti-invariant if $\gamma$ does not have any affine component. In general, the minimum requirement to prevent linear and differential attacks on a block cipher is the 4-differential uniformity and the nonzero non-linearity of the S-Boxes. Lightweight translation based ciphers are often designed using $m$-bit S-Boxes with $m<5$, in order to have a low implementation complexity in hardware. So, for this type of cipher, by Corollary \ref{cormain2} and Proposition \ref{nonlinanti}, the choice of a strongly proper mixing layer allows one to avoid some attacks based on the order or the imprimitivity of the group generated by the round functions. \subsection*{Acknowledgements} The authors would like to thank Ralph Wernsdorf for his useful comments after reading an earlier version of the manuscript.
\section{Introduction} Quantum spin systems play a very important role in condensed matter physics, because of their underlying rich physics, such as the spin liquid state~\cite{Anderson1987} and the valence-bond solid (VBS) state~\cite{Read1989}. Typically, subjected to external magnetic field, the magnetization process of the spin systems can exhibit anomalous phenomena. Among them two kinds of nonanalytic magnetization behaviors have attracted many interests. One is the magnetization plateau, which usually accompanies with the spin excitation gap and has been found in many systems, such as the frustrated spin systems~\cite{Ono2003,Honecker2004}, and quasi-periodic systems with nontrivial topological property~\cite{Hida1993, Hu2014}. The other is the magnetization jump, which exhibits discontinuity in the magnetization density. The magnetization jump was first proposed by N\'eel~\cite{ap_neel} in the system with the Ising-like anisotropic exchange interaction, and then also investigated in various of lattice spin systems in different dimensions ~\cite{Kohno1997,Sakai1999,Aligia2000,Schulenburg2002,Dmitriev2006,Kalinov2006,Heidrich-Meisner2009,Kolezhuk2012,Albarracin2014,Kishine2014,Hiroki2015,Morita2016}. Most of these model systems involve anisotropy or frustration. Experimentally, the magnetization jump was first confirmed in hydrated copper compound ~\cite{Poulis1951}, and then was found in many kinds of magnetic materials ~\cite{Moller1977,Hardy2003,Ghivelder2004,Yoshii2007,Diop2016,Manago2016,Maji2010}. However, understanding the mechanism of the magnetization jump in an intuitive way is still in exploration. Many explanations have been presented for this issue, such as the magnetic domain reorientation~\cite{Moller1977,Hardy2003,Maji2010}, the spin-flop transition~\cite{Gerhardt1998, Sakai1999,Hiroki2015}, the formation of bound magnon pairs~\cite{Dmitriev2006}, and the macroscopically large degeneracy at the critical value of the external magnetic field~\cite{Schulenburg2002,Richter2004}. Recently, field driven phase transition has been proposed in one-dimensional (1D) $J-Q_{2}$ model~\cite{Adam2015,Adam2016}. This model was first introduced by Sandvik~\cite{Sandvik2007} to construct a spin valence-bond-solid (VBS) state without frustration. In the presence of external magnetic field, the numerical results by employing the exact diagonalization and the stochastic series expansion quantum Monte Carlo (QMC) method~\cite{Syljuasen2002} show that the magnetization curve of the model displays a sharp jump from a finite value to the saturated magnetization density at certain critical magnetic field. In their work, the origin of the magnetization jump is explained as the onset of attractive interactions between magnons, according to the analytical results for two magnons on a ferromagnetic background. However, one notes that the anisotropic exchange effect, which is usually closely related to the magnetization jump, has not been considered in Ref.~\cite{Adam2016}. In the present work, we numerically investigate the one-dimensional $J-Q_{2}$ model with XXZ anisotropy using DMRG method. We obtain a novel anisotropy dependent magnetization phase diagram with considerable physics. It shows that the magnetization jump behavior can be evidently influenced (either depressed or enhanced) by anisotropy. Interestingly, if the anisotropy strength $g$ is large enough, e.g. $g>4$ in units of $J$, a direct jump from a non-polarized to a fully-polarized state occurs. 
We emphasize that this direct magnetization jump in the strongly anisotropic case is reported here for the first time and is absent in the isotropic case. We systematically explore the mechanism of the magnetization jump in the whole parameter regime by analysing the properties of the $N$-magnon state, i.e. its ground state energy, correlation function and long-range order. Focusing on the excitation energy per magnon for the $N$-magnon state and the corresponding excitation energy difference between the (\textit{N}+1)- and \textit{N}-magnon states, we determine the critical magnetization density and external magnetic field at which the magnetization jump appears. Analysis of the system's energy in the whole magnetization process indicates that a magnetic domain forms in the \textit{jumped-over} states. This reveals that the magnetization jump shown in this work is due to the formation of a magnetic domain, within which all spins point in a uniform direction. This understanding is also supported by analytical calculations in some limiting cases, e.g., the $g\rightarrow\infty$ and few-magnon limits. In addition to the energetic considerations, we further analyse the correlation function for each magnetization sector and different parameters. We find that while the states visited in the magnetization process are Heisenberg-like without long-range order, all the \textit{jumped-over} states have antiferromagnetic or N\'eel long-range orders, or a mixture of both. The paper is organized as follows. In the following section we introduce the anisotropic $J-Q_{2}$ model and the numerical method we used. In the section ``Results", the magnetization jump behavior in different parameter regimes is illustrated and a novel anisotropy-dependent phase diagram is presented. In the section ``Discussion", we analyse the mechanism of the magnetization jump both in the few-magnon limit and in the whole magnetization process. \section{Model Hamiltonian and Numerical Method} \label{sec_2} The anisotropic $J-Q_{2}$ model in the presence of an external magnetic field is described by the Hamiltonian \begin{equation}\label{ham_jq} H=-J\sum_{i}P_{i,i+1}-Q\sum_{i}P_{i,i+1}P_{i+2,i+3}-h\sum_{i}S_{i}^{z}, \end{equation} where $P_{i,j}\equiv\frac{1}{4}-\left(S_{i}^{x}S_{j}^{x} +S_{i}^{y}S_{j}^{y}+g S_{i}^{z}S_{j}^{z}\right)$ and $g$ is the $XXZ$ anisotropy. $J$ is the Heisenberg exchange constant, $Q$ is the coupling strength of the nearest pairs, and $h$ is the external magnetic field. $g =1$ recovers the isotropic limit. In the isotropic limit without magnetic field, the competition between the $J$ and $Q$ terms leads to a ground state phase transition from the Heisenberg ground state to the doubly degenerate VBS phase~\cite{Tang2011}. In this paper, we are more interested in the adiabatic magnetization process of the system subjected to the external magnetic field. To describe the magnetization process, we define the magnetization density as \begin{equation} m=\frac{2}{L}\sum_{i}\langle S_{i}^{z}\rangle, \end{equation} where $L$ is the system size. It can readily be seen that the system is always fully magnetized ($m = 1$) if $g < -1$. On the other hand, $g > -1$ is a non-trivial case, in which we can analytically determine the behavior of $m$ in some limiting cases: $m=0$ for $h=0$ and $m=1$ if $h$ is large enough. In this work, we explore how the magnetization density $m$ interpolates from zero to saturation between these two limits. Of course, calculating $m$ for a general value of $h$ requires numerical methods.
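Although the system sizes of interest require DMRG, the magnetization process is easy to illustrate on a very small chain by brute-force exact diagonalization. The following Python sketch (with illustrative parameters only; it is not the DMRG code used for the results below) builds the Hamiltonian of Eq.~(\ref{ham_jq}) at $h=0$ for $L=8$, computes the lowest energy $E(N)$ in each sector with $N$ flipped spins, and obtains the zero-temperature magnetization curve by minimizing $E(N)-h(L/2-N)$ over $N$.
\begin{verbatim}
import numpy as np

L, J, Q, g = 8, 1.0, 1.5, 0.5        # illustrative parameters for a tiny chain
I2, sz = np.eye(2), np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
sm = sp.T                                  # S^-

def site(op, i):
    """Embed a single-site operator at site i in the full 2^L-dimensional space."""
    out = np.array([[1.0]])
    for k in range(L):
        out = np.kron(out, op if k == i else I2)
    return out

SZ = [site(sz, i) for i in range(L)]
SP = [site(sp, i) for i in range(L)]
SM = [site(sm, i) for i in range(L)]

def P(i, j):
    """P_{ij} = 1/4 - (S^x_iS^x_j + S^y_iS^y_j + g S^z_iS^z_j), periodic indices."""
    i, j = i % L, j % L
    return (0.25 * np.eye(2**L)
            - 0.5 * (SP[i] @ SM[j] + SM[i] @ SP[j]) - g * SZ[i] @ SZ[j])

# H(h=0); the Zeeman term is added sector by sector below.
H0 = sum(-J * P(i, i + 1) - Q * P(i, i + 1) @ P(i + 2, i + 3) for i in range(L))

# Ground-state energy E(N) in each sector with N down spins (= N magnons);
# total S^z is conserved, so H0 is block diagonal in the z basis.
# Spin-flip symmetry gives E(N) = E(L-N), so N <= L/2 suffices for h >= 0.
E = []
for N in range(L // 2 + 1):
    idx = [s for s in range(2**L) if bin(s).count("1") == N]
    E.append(np.linalg.eigvalsh(H0[np.ix_(idx, idx)])[0])

# Zero-temperature magnetization: pick the sector minimizing E(N,h) = E(N) - h(L/2 - N).
for h in np.linspace(0.0, 6.0, 61):
    N = min(range(L // 2 + 1), key=lambda n: E[n] - h * (L / 2 - n))
    print(f"h = {h:5.2f}   m = {1 - 2 * N / L:6.3f}")
\end{verbatim}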
In practice, we numerically employ the density matrix renormalization group (DMRG) method {\cite{white1992, white1993}}, which is extremely powerful for the one-dimensional systems. We perform the calculation for systems with different lattice sizes up to 240, to obtain the physics in the thermodynamic limit. The periodic boundary condition (PBC) is adopted and the DMRG many-body states $M$ are kept dynamically \cite{Legeza2003} in order to control the truncation error. In DMRG calculations, the computational cost is in the order of $M^3$. There are two ways to choose $M$, one is to fix $M$, in this case the truncation error is different for different steps. The other is to fix truncation error, and in this case the number of the kept many-body states changes. In this work, in order to reduce the computational cost, we choose the latter, and dynamically control $M$ up to 2000, to guarantee the truncation error $\varepsilon<10^{-8}$ in the whole calculations we performed. In the rest of the paper, we use $J=1$ as the energy scale and restrict $Q$ and $h$ to positive values. \section{Results} \label{sec_3} \begin{figure}[!tb] \centering \includegraphics[width=0.9\columnwidth]{./fig_1.eps} \caption{DMRG calculation of the magnetization density $m$ as a function of external field $h$. The system size $L=120$. (a-b) The isotropic case with $g=1$ and different coupling $Q$. (c) The anisotropic case exampled by $Q=1.5$ in different $g$. Here $h_{\rm sat}$ is the critical field when the magnetization density $m$ goes to its saturated value. } \label{fig:mz} \end{figure} \begin{figure}[!tb] \centering \includegraphics[width=0.9\columnwidth]{./fig_2.eps} \caption{ Magnetization phase diagram consisting of four phases according to the behaviors of the magnetization jump processes: i) the ferromagnetic (FM) phase; ii) the no magnetization jump (N-MJ) phase; iii) the partially- to fully-polarized magnetization jump (PF-MJ) phase; iv) the non- to fully-polarized magnetization jump (NF-MJ) phase. (a) Magnetization phase boundaries with different system size. (b) Magnetization phase diagram shown by the critical magnetization $m_c$ in \{$Q,g$\} space with system size $L=80$. The white dashed-lines are phase boundaries for $L=80$ for comparison.} \label{fig:phase_diag} \end{figure} We first revisit the magnetization property in the isotropic case using DMRG calculation. The magnetization process in different strength of nearest pair coupling $Q$ is shown in Fig.~\ref{fig:mz} (a) and (b). When $Q=0$, the system is the spin-1/2 Heisenberg chain, and its zero temperature magnetization curve is continuous and smooth. Here the small jumps and plateaus come from the finite size effect and will disappear in the thermodynamic limit. For a small $Q=0.2$, comparing to $Q=0$, $m$ changes rapidly near the saturated magnetization, but still, goes smoothly to $m=1$ at the same saturated field $h_{\rm sat}$ without a macroscopic magnetization jump. Further increasing $Q$ to $0.4$, the magnetization density $m$ changes suddenly from a partially-polarized value $m_c$ to $m=1$, and the saturated field $h_{\rm sat}$ is also larger than that for the smooth magnetization curves. This sharp jump of the magnetization curve indicates a ground state phase transition induced by the external field. The above results obtained by DMRG calculation agree with that in Ref.\cite{Adam2016} well. 
The difference is that in our DMRG calculation, the zero temperature case can be directly addressed, while the QMC method requires an extrapolation from small finite temperatures to zero. In the presence of a magnetization jump, the critical field $h_{\rm sat}$ increases as the coupling strength $Q$ increases, and the critical magnetization $m_c$ is smaller for a larger $Q$. However, as shown in Fig.~\ref{fig:mz}(b), even if $Q$ is sufficiently large, $m_c$ does not decrease indefinitely but converges to a nonzero value. This implies that there is no direct magnetization jump from $m=0$ to $m=1$ even when $Q\rightarrow\infty$. Then we extend our investigation to the general case with a tunable anisotropy $g$. Since $g < -1$ is a trivial case as mentioned above, we need only discuss the case of $g > -1$. In Fig.~\ref{fig:mz}(c), we show the magnetization curves with a fixed typical value of $Q$ (i.e. $Q = 1.5$) for several different values of the anisotropy $g$. When $g=-0.5$, the magnetization density $m$ increases gradually from $0$ to $1$ without a macroscopic magnetization jump. For larger values of anisotropy $g=0.5$ and $2.0$, we can observe sharp jumps from a finite $m_c$ to the saturated magnetization density. Furthermore, when $g$ is sufficiently large ($g=4.0$), a direct jump from $m=0$ to the fully-polarized state occurs. We again point out that this novel phenomenon cannot be observed in the isotropic system. According to the different behaviors of the magnetization process, we can summarize our main results in a phase diagram consisting of four regions, as shown in Fig.~\ref{fig:phase_diag}(a). When $g<-1$, the system is in the ferromagnetic (FM) phase, and the magnetization property is trivial. When $g>-1$, the magnetization curve of the system has three different shapes: there is i) no magnetization jump (N-MJ), ii) a partially- to fully-polarized magnetization jump (PF-MJ), iii) a non-polarized to fully-polarized magnetization jump (NF-MJ). The phase boundaries obtained by DMRG show a good convergence as the system size increases, indicating that these phases are stable in the thermodynamic limit. From these boundaries, we see that both the pair coupling $Q$ and the anisotropy $g>-1$ can enhance the magnetization jump. Furthermore, the critical anisotropy $g$ for both boundaries seems to converge in the large $Q$ limit. We show a visual variation of the critical magnetization density $m_{c}$ in $\{Q,g\}$ space in Fig.~\ref{fig:phase_diag}(b); one can see that $m_{c}$ decreases with increasing pair coupling $Q$ or anisotropy $g$, which means that the magnetization jump is enhanced.
However, Iaizzi \textit{et al.} also pointed out that the formation of a bound state between two magnons is not sufficient condition for the macroscopic magnetization jump, and furthermore, an effectively attractive interaction between the magnon pairs or a cluster including macroscopic number of magnons is needed \cite{Adam2016,Heidrich2006,Kecke2007,Heidrich2009}. In order to demonstrate this, in the following we consider the few magnon limit up to four magnons. \begin{figure}[!tb] \centering \includegraphics[width=0.9\columnwidth]{./fig_3.eps} \caption{ $\tilde{E}(2)-2\tilde{E}(1)$ as a function of (a) $g$ for different $Q$ , (b) $Q$ for different $g$. The results are obtained by exact diagonalization in the few-magnon basis for system size $L=128$. (c) Phase boundary between the N-MJ and PF-MJ phase. The blue solid-line is obtained in the few magnon limit. The symbols are obtained using DMRG with different system sizes. The black dashed line describes the asymptotic value at which the magnetization jump appears in the large $Q$ limit.} \label{fig:e1e2_compare} \end{figure} From Fig.\ref{fig:mz} we can see that with the increase of coupling strength $Q$ or anisotropy $g$, the magnetization jump first appears near $m=1$. Thus we can analyse the origin of the magnetization jump in the ferromagnetism background. In the system with up to two magnons, we can easily get the ground state energy of the system (details in supplementary material). For convenience, the $N$-magnon excitation energy is defined as \begin{equation}\label{eq:tilde_E} \tilde{E}(N)=E(N)-E(0), \end{equation} where $E(N)$ is the ground state energy of the system with $N$ magnons and without external magnetic field. The information from the value of $\tilde{E}(2)-2\tilde{E}(1)$ helps us to understand the mechanism of the magnetization jump in the few magnon limit. The negative value of $\tilde{E}(2)-2\tilde{E}(1)$ indicates that the effective interaction between the two magnons is attractive, and thus the magnetization curve exhibits a macroscopic magnetization jump near the saturated magnetization. In contrast, if $\tilde{E}(2)-2\tilde{E}(1)>0$, the effective interaction is repulsive and there is no signal of magnetization jump for the few magnon limit. $\tilde{E}(2)-2\tilde{E}(1)=0$ is the critical case, in which the two-magnon system is in an effectively noninteracting magnon ground state. In Fig.~\ref{fig:e1e2_compare}(a), (b), we show the results of the quantity $\tilde{E}(2)-2\tilde{E}(1)$ for the system with $L=128$, which is an example size with negligible size effect. The magnetization density curve is smooth and continuous if the pair coupling $Q=0$ because the system has no magnetization jump according to Fig.~\ref{fig:phase_diag}. Correspondingly, $\tilde{E}(2)-2\tilde{E}(1)$ is almost independent of $g$ and always positive as shown in Fig.\ref{fig:e1e2_compare}(a). However, for a very small $Q=0.05$, $\tilde{E}(2)-2\tilde{E}(1)$ is positive for small values of $g$, but negative when $g$ is large enough. As the anisotropy $g$ increases, the effective interaction between magnons changes from repulsive to attractive. This means the magnetization jump can be induced by the anisotropy. The boundary between the N-MJ phase and the PF-MJ phase in Fig.~\ref{fig:phase_diag} can be determined by a critical $g$ when $\tilde{E}(2)-2\tilde{E}(1)=0$. 
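A minimal Python sketch of such a few-magnon diagonalization is given below; it constructs the $h=0$ Hamiltonian directly in the $N$-magnon basis for $N=0,1,2$ and evaluates $\tilde{E}(2)-2\tilde{E}(1)$. The parameters are illustrative, and the sketch is meant only to make the procedure behind Fig.~\ref{fig:e1e2_compare} explicit; it is not the code used to produce the figures.
\begin{verbatim}
from itertools import combinations
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import eigsh

L, J, Q, g = 128, 1.0, 1.5, 1.0   # illustrative; Q > Q_c(g=1) = 2/9

def szbit(state, i):
    # bit 1 = down spin (magnon), bit 0 = up spin
    return -0.5 if (state >> i) & 1 else 0.5

def apply_P(state, i, j):
    """Terms (coeff, state') of P_{ij} = 1/4 - SxSx - SySy - g SzSz acting on |state>."""
    out = [(0.25 - g * szbit(state, i) * szbit(state, j), state)]
    if ((state >> i) & 1) != ((state >> j) & 1):   # antiparallel pair is exchanged
        out.append((-0.5, state ^ (1 << i) ^ (1 << j)))
    return out

def apply_H0(state):
    """Action of H0 = -J sum P_{i,i+1} - Q sum P_{i,i+1}P_{i+2,i+3} (PBC)."""
    terms = []
    for i in range(L):
        for c, s in apply_P(state, i, (i + 1) % L):
            terms.append((-J * c, s))
        for c1, s1 in apply_P(state, (i + 2) % L, (i + 3) % L):
            for c2, s2 in apply_P(s1, i, (i + 1) % L):
                terms.append((-Q * c1 * c2, s2))
    return terms

def sector_ground_energy(N):
    """Lowest eigenvalue of H0 in the N-magnon sector (a few seconds for N = 2)."""
    basis = [sum(1 << p for p in pos) for pos in combinations(range(L), N)]
    index = {s: k for k, s in enumerate(basis)}
    data = {}
    for k, s in enumerate(basis):
        for c, s2 in apply_H0(s):
            key = (index[s2], k)
            data[key] = data.get(key, 0.0) + c
    rows, cols = zip(*data)
    H = coo_matrix((list(data.values()), (rows, cols)),
                   shape=(len(basis), len(basis))).tocsr()
    return H[0, 0] if len(basis) == 1 else eigsh(H, k=1, which="SA")[0][0]

E0, E1, E2 = (sector_ground_energy(N) for N in range(3))
print("E~(2) - 2 E~(1) =", (E2 - E0) - 2 * (E1 - E0))
\end{verbatim}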
Coming back to Fig.~\ref{fig:e1e2_compare}(a), from the curves for different $Q$ we can also conclude that the anisotropy $g$ needed for a magnetization jump is smaller when $Q$ is larger, in agreement with the results by DMRG (see Fig.~\ref{fig:phase_diag}). Similar to Fig.~\ref{fig:e1e2_compare}(a), we show $\tilde{E}(2)-2\tilde{E}(1)$ as a function of $Q$ for different $g$ in Fig.~\ref{fig:e1e2_compare}(b). In the isotropic case with $g=1$, $\tilde{E}(2)-2\tilde{E}(1)>0$ for small $Q$, but becomes negative for large $Q$. A magnetization phase transition from the N-MJ phase to the PF-MJ phase occurs at the critical $Q_c(g=1)=2/9$, in agreement with the result in Ref.~\cite{Adam2016}. Notably, our large-scale DMRG calculation gives exactly the same critical $Q_c$. Different curves for decreasing $g$ show that the magnetization jump exists in the anisotropic case, and the critical value of $Q$ is larger for smaller $g$. However, when $g$ is too small ($g=-0.6$), the curve of $\tilde{E}(2)-2\tilde{E}(1)$ goes up as $Q$ increases, and there is no crossing with $\tilde{E}(2)-2\tilde{E}(1)=0$. In this case, the effective interactions between two magnons are always repulsive, and there is no signal for the magnetization jump from the two-magnon state to the saturated state. From Figs.~\ref{fig:e1e2_compare}(a) and (b), we can get the critical $g$ and $Q$ corresponding to $\tilde{E}(2)-2\tilde{E}(1)=0$. Thus we can obtain the phase boundary between the N-MJ and PF-MJ phases as shown in Fig.~\ref{fig:e1e2_compare}(c). We can see that the phase boundary obtained in the few-magnon limit agrees perfectly with the numerical results of the DMRG calculation. We also notice that the asymptotic behavior of this curve can be analytically evaluated, namely, $g_c$ approaches $\left( -4+ \sqrt{7} \right)/3$ in the large-$Q$ limit (details in supplementary material). \begin{figure}[!tb] \centering \includegraphics[width=0.9\columnwidth]{./fig_4.eps} \caption{Probability $P(V)$ of the magnon occupied volume $V$ for the system with (a) two magnons, (b) three magnons, (c) four magnons. (d) $V_{p}$ as a function of anisotropy $g$. In the calculation we take the system size $L=64$ and $Q=1.5$.} \label{fig:multi_magnons} \end{figure} In order to further unveil the origin of the macroscopic magnetization jump, the analysis of a system with many (a macroscopic number of) magnons is necessary. We use $N$ intervals $d_{1},d_{2},\cdots,d_{N}$, which describe the distances between nearest-neighbor magnons for an $N$-magnon state, to mark the different configurations of the $N$-magnon state. Due to the periodic boundary condition, only $N-1$ intervals are independent. Thus, to describe the distribution feature of the magnons, we define the magnon occupied volume as $V=\sum_{i}'d_{i}$\cite{Kecke2007}, where the prime means that the summation discards the largest interval. Obviously, a small value of $V$ indicates a preference for magnon condensation, while a large value corresponds to the magnon-separated case. Using the exact diagonalization method, we can get the probability $P(V)$ of the system with up to four magnons. The probability of the magnon occupied volume $V$ for a state $\left|\psi\right\rangle$ is defined as: \begin{equation} P(V)=\sum_{\sum_{i}^{\prime}d_{i}=V}\left|C_{d_{1},d_{2},\cdots,d_{N}}\right|^{2}, \end{equation} where $C_{d_{1},d_{2},\cdots,d_{N}}=\left\langle \psi\big|d_{1},d_{2},\cdots,d_{N}\right\rangle$. We plot the probability $P(V)$ for the ground state as a function of the magnon occupied volume $V$ (see Fig.~\ref{fig:multi_magnons}).
From Figs.~\ref{fig:multi_magnons}(a), (b) and (c) we note that all the curves have a maximum, and we denote the corresponding value of $V$ by $V_p$. For the two-magnon system shown in Fig.~\ref{fig:multi_magnons}(a), we can see that $V_p=31$ when $g<-0.156$; in this case the two magnons tend to be separated and the effective interaction between magnons is repulsive. For $g>-0.156$, as $g$ increases, $V_p$ decreases to 2, which means the two magnons tend to condense and the effective interaction between magnons becomes attractive. At the threshold value $g=-0.156$, $P(V)$ is almost flat, indicating that the magnons are effectively free. In Figs.~\ref{fig:multi_magnons}(b) and (c) we show the distribution for three and four magnons. When $g=-0.8$, $V_{p}=34$ for the three-magnon case and $V_{p}=40$ for the four-magnon case. The large value of $V_{p}$ means that the magnons prefer to disperse. As $g$ increases, $V_{p}$ shifts toward smaller values. At $g=0.8$, $V_{p}=5$ for the three-magnon case and $V_{p}=7$ for the four-magnon case. This result indicates that the magnons tend to form a many-magnon bound state. In Fig.~\ref{fig:multi_magnons}(d) we plot $V_{p}$ as a function of the anisotropy $g$. It shows that $V_{p}$ shifts toward smaller values as $g$ increases, which means the magnons tend to form a bound state under strong anisotropy. Moreover, for all magnon numbers, $V_{p}$ drops dramatically at a certain $g$, which indicates that the formation of the bound state is quite rapid. This result provides a profound insight into the magnetization jump observed in the magnetization process. \begin{figure}[!tb] \centering \includegraphics[width=0.9\columnwidth]{./fig_5.eps} \caption{The energy per magnon $e(N)$ in (a) the N-MJ phase ($g=-0.5$), (c) the PF-MJ phase ($g=0.5$), and (e) the NF-MJ phase ($g=4.0$). (b), (d) and (f) are the corresponding energy difference $\Delta e(N)$ for (a), (c), and (e), respectively. For all the curves $Q=1.5$ and $L=120$.} \label{fig:energy_per_magnon} \end{figure} \subsection{The magnetization jump in the whole magnetization process} The analysis of the effective interaction between magnons in the few-magnon limit already gives a clue to the origin of the magnetization jump. In this subsection, we explicitly investigate the magnetization process in the presence of the external field. In this case, states with an arbitrary magnon number $N$ have to be considered. The energy of the $N$-magnon state subjected to the external field $h$ is \begin{eqnarray} E(N,h)=E(N)-h\langle S^z_{tot}\rangle, \label{energy_in_field} \end{eqnarray} where the magnon number $N$ is equal to the number of down spins, and $\langle S^z_{tot}\rangle=\sum_i \langle S^z_i\rangle$ is equal to $L/2-N$. For simplicity, we use $E(N)$ instead of $E(N,0)$ here and hereafter. \begin{figure}[!tb] \centering \includegraphics[width=0.7\columnwidth]{./fig_6.eps} \caption{Comparison between (a) the magnetization curve and (b) the rotated plot for $\Delta e(N)$ as a function of $m=1-2N/L$. The $N$-magnon states with positive (negative) $\Delta e(N)$ correspond to the continuous part (sharp jump) of the magnetization curve. Here $g=0.5$, $Q=1.5$, and $L=120$.} \label{fig:comparison} \end{figure} If an $N$-magnon state is the ground state of the system at some external magnetic field $h$ during the magnetization process, then its energy $E(N,h)$ must satisfy $E(N,h) < E(M,h)$ for any $M \neq N$.
Note that for the model we investigate, the magnetization density $m$ increases monotonically as $h$ increases, and there is only one jump from some critical magnetization $m_c$ to the saturated magnetization. Therefore, the condition $E(N,h) < E(M,h)$ can be rewritten as \begin{equation} E(N,h) < E(0,h) \label{neq01} \end{equation} for $M < N$, and \begin{equation} E(N,h)<E(N+1,h) \label{neq02} \end{equation} for $M > N$. Inserting Eq.~(\ref{energy_in_field}) into the conditions Eqs.~(\ref{neq01}) and (\ref{neq02}), one can easily obtain the requirements on the external field $h$: \begin{eqnarray} h&<&-\tilde{E}(N)/N, \label{neq11} \\ h&>&E(N)-E(N+1), \label{neq12} \end{eqnarray} where $\tilde{E}(N)$ is defined in Eq.~(\ref{eq:tilde_E}). Combining Eqs.~(\ref{neq11}) and (\ref{neq12}), one further obtains \begin{eqnarray} e(N)<e(N+1), \label{condition} \end{eqnarray} where $e(N) = \tilde{E}(N)/N$ is the excitation energy per magnon for the $N$-magnon state in the absence of $h$. If the relation in Eq.~(\ref{condition}) cannot be satisfied, the $N$-magnon state can never be the ground state during the magnetization process. This is the origin of the macroscopic magnetization jump from the energy perspective. More specifically, we define the difference of the excitation energy per magnon, $\Delta e(N)=e(N+1)-e(N)$, as the criterion determining whether the $N$-magnon state appears during the magnetization process. When $\Delta e(N)>0$, the $N$-magnon state can be the ground state, and corresponds to the continuous part of the magnetization curve. Conversely, the $N$-magnon state with $\Delta e(N)<0$ cannot be the ground state, and corresponds to the macroscopic magnetization jump. By taking $N=2$, one can also understand the reason why the phase boundary between the N-MJ and PF-MJ phases can be determined by comparing the excitation energies in the few-magnon limit. In Fig.~\ref{fig:energy_per_magnon}, we show the excitation energy per magnon $e(N)$ and the energy difference $\Delta e(N)$ for $Q=1.5$ and several different $g$ as examples. Here $e(N)$ and $\Delta e(N)$ are numerically obtained by DMRG for each $N$-magnon state. In the N-MJ phase (e.g. $g=-0.5$), where the magnetization curve of the system is smooth and continuous (see Fig.~\ref{fig:mz}(c)), the excitation energy per magnon $e(N)$ increases monotonically as the number $N$ increases, as shown in Fig.~\ref{fig:energy_per_magnon}(a). In this case, the energy difference $\Delta e(N)$ shown in Fig.~\ref{fig:energy_per_magnon}(b) is always positive, i.e., Eq.~(\ref{condition}) is always satisfied. We also notice that $e(N)>e(1)$ holds for all these states. It means that the energy of the $N$-magnon state is larger than that of $N$ free magnons. In this sense the effective interaction between magnons is always repulsive. In the PF-MJ phase ($g=0.5$), as shown in Fig.~\ref{fig:energy_per_magnon}(c), as $N$ increases, the excitation energy per magnon $e(N)$ decreases for smaller $N$ but increases for larger $N$. As shown in Fig.~\ref{fig:energy_per_magnon}(d), there exists a region where the energy difference $\Delta e(N)<0$, and adding a magnon to the $N$-magnon state decreases the average energy of the magnons. This indicates the condensation of magnons, and the formation of a magnetic domain in these $N$-magnon states. These states cannot be the ground state of the system in the magnetization process, and correspondingly the magnetization curve has a macroscopic jump.
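The criterion in Eq.~(\ref{condition}) can be applied mechanically to a table of zero-field energies $E(N)$, such as the DMRG data behind Fig.~\ref{fig:energy_per_magnon}. A minimal Python sketch of this bookkeeping follows (the saturation-field helper assumes a PF-MJ-like case in which $\Delta e(N)$ eventually becomes positive):
\begin{verbatim}
import numpy as np

def magnon_analysis(E):
    # E[N]: zero-field ground-state energy of the N-magnon sector, N = 0..Nmax.
    E = np.asarray(E, dtype=float)
    N = np.arange(len(E))
    e = np.full(len(E), np.nan)
    e[1:] = (E[1:] - E[0]) / N[1:]      # e(N) = (E(N) - E(0)) / N
    de = e[2:] - e[1:-1]                # Delta e(N) = e(N+1) - e(N)
    jumped_over = [int(n) for n, d in zip(N[1:-1], de) if d < 0]
    return e, de, jumped_over

def saturation_field(e, de):
    # h_sat = -e(N_c), where N_c is the smallest N with Delta e(N) > 0,
    # so that Delta e(N_c - 1) < 0 < Delta e(N_c) when a jump is present.
    Nc = next(n for n, d in enumerate(de, start=1) if d > 0)
    return -e[Nc]
\end{verbatim}
The magnon numbers flagged as jumped over are exactly those with $\Delta e(N)<0$, i.e. the states skipped by the macroscopic jump.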
Figs.~\ref{fig:energy_per_magnon}(e) and (f) show the results for the NF-MJ phase ($g=4.0$) with the magnetization jump from $m=0$ to $1$. In this phase, the excitation energy per magnon $e(N)$ decreases monotonically as the number $N$ increases, and the energy difference $\Delta e(N)$ is negative for arbitrary magnetization density. We can further understand $\Delta e(N)$ in a more explicit way by directly comparing the magnetization curve and the energy difference $\Delta e(N)$ as a function of $N$. Notice that the magnetization is $m=1-2N/L$, so Figs.~\ref{fig:comparison}(a) and (b) indeed have the same $y$-axis. As shown in Fig.~\ref{fig:comparison}(a), for a magnetization curve in the PF-MJ phase, there is a macroscopic jump from a critical $m_c$ to $m=1$. Correspondingly, the value of $\Delta e(N)$ shown in Fig.~\ref{fig:comparison}(b) has a transition from positive to negative at exactly the same critical magnetization density $m_{c}$. The agreement of $m_c$ is marked by the horizontal dashed line. Moreover, by considering the critical case of Eq.~(\ref{neq11}), we can also get the critical field $h_{\rm sat}=-e(N)$, where the critical magnon number $N$ satisfies $\Delta e(N-1)<0<\Delta e(N)$. We can retrieve the magnetization phase diagram by plotting the critical magnetization $m_c$ in the parameter space \{$Q$, $g$\}, as shown in Fig.~\ref{fig:phase_diag}(b). When the magnetization curve is smooth and continuous, $m_c$ should be $1$ in the thermodynamic limit, indicating there is no magnetization jump. However, for a finite-size system, we have $m_c=1-2/L$ because of a microscopic quantized jump. Nevertheless, the N-MJ phase denoted by the darkest blue is distinct in Fig.~\ref{fig:phase_diag}(b). For a fixed $g>(-4+\sqrt{7})/3$, the magnetization jump appears as $Q$ increases to the critical value, and $m_{c}$ decreases with increasing $Q$. Finally, when $g$ and $Q$ are both sufficiently large, the system is in the NF-MJ phase with $m_c=0$. All these phases and the corresponding phase boundaries are explicit and clear. \subsection{Understanding the direct magnetization jump in the large anisotropy limit} From the macroscopic viewpoint, the direct magnetization jump can be understood in an analytical and intuitive way in the large $g$ limit. When the anisotropy is large enough, the system enters the NF-MJ phase, as shown in Fig.~\ref{fig:phase_diag}. In this limit, dividing both sides by $g^{2}$, the Hamiltonian described by Eq.~(\ref{ham_jq}) reads (details in the supplementary material) \begin{equation} H/g^2 = -Q\sum_{i}S_{i}^{z}S_{i+1}^{z}S_{i+2}^{z}S_{i+3}^{z} + O\left(1/g\right) + O\left(1/g^2\right)-h^{\prime}\sum_{i}S_{i}^{z}, \end{equation} where $h^{\prime}=h/g^{2}$. By neglecting the $O(1/g)$ and $O(1/g^{2})$ terms, we have an effective Hamiltonian in the large $g$ limit \begin{equation}\label{eq:limit_ham} \mathcal{H}_{g\rightarrow\infty}=-Q\sum_{i}S_{i}^{z}S_{i+1}^{z}S_{i+2}^{z}S_{i+3}^{z}-h^{\prime}\sum_{i}S_{i}^{z}. \end{equation} Equation~(\ref{eq:limit_ham}) describes a classical Hamiltonian without quantum fluctuations, so we can easily obtain the ground state energy and the spin configuration of the system. The elementary unit of this Hamiltonian is a bond of 4 sites, and the total energy of the system is the sum over all the bonds. A bond contributes negative energy $-E_b$ when the numbers of both up and down spins are even, where $E_b = Q/16$ is the bond energy.
Conversely, when the numbers of both up and down spins are odd, a bond has positive energy $+E_b$. We list all possible spin arrangements of a single bond in Table~\ref{table:bond}. \begin{table} \centering \begin{tabular}{c|c} \hline Bond energy & Possible spin configurations \tabularnewline \hline \hline $-E_{b}$ & $\uparrow\uparrow\uparrow\uparrow$ \tabularnewline \cline{2-2} & $\downarrow\downarrow\downarrow\downarrow$\tabularnewline \cline{2-2} & $\uparrow\downarrow\uparrow\downarrow$, $\downarrow\uparrow\downarrow\uparrow$\tabularnewline \cline{2-2} & $\uparrow\uparrow\downarrow\downarrow$, $\uparrow\downarrow\downarrow\uparrow$, $\downarrow\downarrow\uparrow\uparrow$, $\downarrow\uparrow\uparrow\downarrow$\tabularnewline \hline $+E_{b}$ & $\uparrow\uparrow\uparrow\downarrow$, $\uparrow\uparrow\downarrow\uparrow$, $\uparrow\downarrow\uparrow\uparrow$, $\downarrow\uparrow\uparrow\uparrow$\tabularnewline \cline{2-2} & $\downarrow\downarrow\downarrow\uparrow$, $\downarrow\downarrow\uparrow\downarrow$, $\downarrow\uparrow\downarrow\downarrow$, $\uparrow\downarrow\downarrow\downarrow$\tabularnewline \hline \end{tabular} \caption{The energy and possible spin configurations for a single bond of the effective Hamiltonian described by Eq.~(\ref{eq:limit_ham}). } \label{table:bond} \end{table} Without loss of generality, we consider a system with an even number of sites $L$ and external magnetic field $h^{\prime}=0$. The ground state of the system has magnetization $m=0$ or $m=\pm1$, with ground state energy $E_{g\rightarrow\infty}=-L E_b$, since there are $L$ bonds under PBCs. For $m=0$, the spin configuration can be a 2-fold degenerate spin pattern $\left|\cdots\uparrow\downarrow\uparrow\downarrow\cdots\right\rangle$, or a 4-fold degenerate spin pattern $\left|\cdots\uparrow\uparrow\downarrow\downarrow\cdots\right\rangle$. For $m=1$ ($-1$), the spin configuration can be $\left|\cdots\uparrow\uparrow\uparrow\uparrow\cdots\right\rangle$ ($\left|\cdots\downarrow\downarrow\downarrow\downarrow\cdots\right\rangle$). Introducing infinitesimal quantum fluctuations, the ground state has $m=0$ when $h^{\prime}=0$, and becomes fully polarized under a small magnetic field. Thus, the direct jump is the only choice for the magnetization process. Furthermore, we consider the \textit{jumped-over} spin states with magnetization $0<m<1$. To minimize the energy, the spin pattern has to be separated into two regions: i) a magnon-full region with $m=0$ and ii) a fully-polarized domain region with $m=1$. Therefore, in this case, all the bonds within the same region have negative energy, and only the bonds across the two regions contribute positive energy. For example, the spin structure can be $\left|\cdots\downarrow\downarrow\uparrow\uparrow + \uparrow\uparrow\uparrow\uparrow \cdots\right\rangle$, and only the bond $\left|\cdots\downarrow\uparrow\uparrow + \uparrow \cdots\right\rangle$ that connects the two separated parts of the system contributes positive energy $+E_b$. Therefore, in the large $g$ limit, for all the \textit{jumped-over} states with magnetization $0<m<1$, the lowest-energy state has magnetic domains. For this special model, we can also conclude that all the states with magnetic domains cannot be the ground state of the system. In other words, all the states with magnetic domains are jumped over during the magnetization process. We expect this point is not only valid in the large $g$ limit, but is also crucial for general values of $g$.
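This bond-counting argument can be checked by brute force for a small periodic chain. The following minimal Python sketch enumerates all classical $S^{z}$ configurations of Eq.~(\ref{eq:limit_ham}) at $h^{\prime}=0$ and verifies that every configuration with $0<|m|<1$ lies strictly above the ground-state energy $-LE_{b}$:
\begin{verbatim}
from itertools import product

def classical_energy(spins, Q):
    # Energy of the effective Hamiltonian at h' = 0 with S^z_i = +/- 1/2, PBC.
    L = len(spins)
    return -Q * sum(spins[i] * spins[(i + 1) % L] * spins[(i + 2) % L]
                    * spins[(i + 3) % L] for i in range(L))

def jumped_over_states_have_higher_energy(L=12, Q=1.5):
    # Lowest classical energy at each magnetization density m = 2*Sz_tot/L.
    Emin = {}
    for config in product((0.5, -0.5), repeat=L):
        m = 2.0 * sum(config) / L
        Emin[m] = min(Emin.get(m, float("inf")), classical_energy(config, Q))
    E0 = -L * Q / 16.0   # ground-state energy at m = 0 and m = +/-1 (= -L*E_b)
    return all(E > E0 for m, E in Emin.items() if 0.0 < abs(m) < 1.0)
\end{verbatim}
Every intermediate-magnetization configuration contains at least one positive-energy bond, so the check returns \texttt{True}; the argument in the text is the general version of this observation.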
\subsection{Correlation functions} In the previous subsections, we found that the states with a magnetic domain structure are jumped over during the magnetization process. In this subsection, we are interested in the difference between the structures of the \textit{jumped-over} states and the experienced states. To unveil the physics of the magnetization jump beyond the energy perspective, we investigate the spin-spin correlation function: \begin{eqnarray} C_{S}(r)=\left<S^z_0 S^z_r\right>-\left<S^z_0\right>\left<S^z_r\right>, \end{eqnarray} where $r$ is the real-space distance. \begin{figure}[!tb] \centering \includegraphics[width=0.9\columnwidth]{./fig_7.eps} \caption{ (a) Spin-spin correlation function in the long-range limit as a function of $m=1-2N/L$ for $Q=1.5$, different anisotropy $g$, and different system sizes. The inset is a zoom-in for $g=-0.5$. (b) The schematic phase diagram for a fixed nonzero $Q$ in the absence of external field $h$. In each inset, the black solid line represents the schematic spin-spin correlation function $C_{S}\left(r\right)$ (details in the supplementary material), and the magenta dashed line denotes $C_{S}(r)=0$.} \label{fig:long_range_order} \end{figure} We plot the long-range correlation function $C_S(\infty)$ as a function of the magnetization density $m=1-2N/L$ in Fig.~\ref{fig:long_range_order}(a). Here we define $C_S(\infty)=[C_S(L/2)+C_S(L/2-1)]/2$ to remove the strong oscillations when $g$ is very large. For the N-MJ phase without magnetization jump, $C_S(\infty)$ is very small for all the magnetization densities, and its amplitude decreases as $L$ increases (see inset). Therefore, in the thermodynamic limit $C_S(\infty)$ is zero, and there is no LRO in this phase. In the PF-MJ phase ($g=0.5$), $C_S(\infty)$ approaches 0 for small magnetization densities, where the magnetization curve is continuous. For the \textit{jumped-over} states at larger $m$, $C_S(\infty)$ is nonzero and shows convergence for different system sizes. Therefore, in the thermodynamic limit, the \textit{jumped-over} states have AFM-LRO because of the formation of magnetic domains. In the NF-MJ phase ($g=4.0$), $C_S(\infty)$ is nonzero for larger magnetization densities. In particular, for $m$ between 0.7 and 1, $C_S(\infty)$ is the same as in the PF-MJ phase, independent of the system size, as these $N$-magnon states share the same domain structure. However, different from the PF-MJ phase, the spin-spin correlation function has large oscillations in the long-range limit for those states with magnetization densities from $m = 0.1$ to $0.3$. The states near $m=0$ have nearly zero $C_S(\infty)$; there is no AFM-LRO or domain, but the spin-spin correlation function shows long-range N\'eel oscillations (details in the supplementary material), as the large $g$ drives the system toward the classical limit. \section{Conclusion} \label{sec_5} In this work, we systematically investigate the adiabatic magnetization properties of the 1D anisotropic $J-Q_{2}$ model at zero temperature using the DMRG method. We have found that the anisotropy $g$ plays a crucial role in the magnetization process.
The characteristics of the magnetization behavior can be summarized by a magnetization phase diagram consisting of four phases: the FM phase, the N-MJ phase without magnetization jump, the PF-MJ phase with a partially- to fully-polarized magnetization jump, and especially the NF-MJ phase with a direct magnetization jump from the non-polarized to the fully-polarized state, which does not exist in the isotropic $J-Q_{2}$ model. We further study the origin of the magnetization jump. In the few-magnon limit, we analyse the system with up to four magnons and find indications that the attractive interaction between magnons drives the formation of the magnetization jump. From energy considerations for the $N$-magnon states, we point out that the origin of the magnetization jump is the condensation of magnons. For the direct magnetization jump, which is absent in the isotropic system, the analysis in the limit of infinitely large anisotropy shows that the magnetic domain plays an important role in the magnetization jump. By explicitly investigating the spin-spin correlation function, we confirm that the spins condense and form magnetic domains in those \textit{jumped-over} states. A schematic phase diagram is shown in Fig.~\ref{fig:long_range_order}(b) for a fixed non-zero pair coupling: i) If the magnetization curve is continuous, the corresponding ground states of the system cannot have any long-range order; ii) States with long-range order (e.g. antiferromagnetic or N\'eel long-range order, or their mixture) cannot be the ground state of the system during the magnetization process, and therefore the magnetization jump arises. This reminds us of the fact that the 1D spin-1/2 chain cannot have a stable long-range ordered ground state \cite{Landau1958} with continuous symmetry breaking due to the strong quantum fluctuations. Therefore, the conclusion obtained here is not only valid for the $J-Q_{2}$ model we study, but should also hold as a general conjecture for a wide range of 1D spin models and materials.
\section{Background} \label{sec:paper-crit-path-pape-back} Critical paths have long been used to guide heuristic algorithms for scheduling directed acyclic graphs (DAGs) of tasks~\cite{hu1961parallel,lockyer1969introduction,kohler1975preliminary}. A task graph is a DAG where the vertices represent computational tasks and the edges represent data dependencies between the tasks. When scheduling for homogeneous parallel architectures, weights are assigned to the vertices of the graph to represent the execution time of the tasks, and to the edges to represent the communication cost of data flow between tasks. We consider the problem of static scheduling of task graphs, where the execution time and communication costs can be estimated with some accuracy prior to scheduling. In common with existing literature on static scheduling, we do not consider the case of computation or communication costs that are strongly dependent on unpredictable data inputs or run-time artifacts. The conventional definition of the critical path of a DAG in static task scheduling is \textit{the longest path from the entry node to the exit node in the task graph}. On homogeneous parallel computers the standard algorithm for finding the longest path in a DAG can be used to find the critical path. However, for heterogeneous parallel architectures, finding the critical path is more difficult. The execution time of each task can differ widely between processors, which means that the length of a path through the graph depends not just on the tasks in the path, but also on which of the heterogeneous processors each task is allocated to. Two approaches are commonly used to estimate the critical path in scheduling algorithms for heterogeneous architectures. Popular scheduling approaches such as HEFT and CPOP~\cite{topcuoglu2002performance} assign an execution cost to each task that is the average of the task's execution times across the processors. They also assume that the communication cost between pairs of processors depends purely on the quantity of transferred data, so that costs between different pairs of processors are all the same. These two assumptions provide a single execution cost for each task and a single communication cost for each edge, so that the standard critical path algorithm for homogeneous architectures can be used. However, in cases where task execution times differ widely on different processors, this can be inaccurate. For example, a GPU might be an order of magnitude faster than a CPU on an array processing task, but absolutely hopeless for single-threaded code. The average of the CPU and GPU execution times for each task may be a multiple of the execution cost on the best architecture for that task. A second common approach is to assume that the \textit{entire} task graph will be executed on a single type of processor, and use the execution costs for that type of processor to compute a critical path~\cite{daoud2008high}. A heterogeneous machine may contain several different types of processor, and a critical path will need to be computed for each type. However, selecting the processor that results in the shortest critical path gives some estimate of the minimum possible time needed to execute the code. This heuristic may work well on heterogeneous machines where some processors are simply more powerful versions of others.
However, where different types of processors --- such as CPUs and GPUs --- are suited to different types of tasks, this approach may result in an estimate of the critical path length that is much longer than can be achieved by choosing the best processor for each task. Neither of these approaches to estimating the critical path is entirely satisfactory. Both can result in very misleading paths being identified as critical, as we show in section~\ref{sec:paper-crit-path-expe-resu}. A better heuristic for identifying the critical path should take into account the different execution times of tasks on different processors and the cost of data communication between processors. \section{Conclusion} \label{sec:conc} In this paper, we have designed, implemented and tested a critical path finding algorithm (\textit{CEFT}) that finds the true critical path of an application for heterogeneous processors. The critical paths it produces are shown to be of better quality than those produced by the state-of-the-art CPOP algorithm. We show that the critical paths produced by our algorithm are always at least as long as the ones produced by CPOP for the RGG-classic workload. Our experiments show that when the heterogeneity is better expressed in the workloads (RGG-high), our paths are shorter than CPOP's paths in 83.99\% of the experiments. We also extend our critical path finding algorithm into a DAG scheduling algorithm (\textit{CEFT-CPOP}) by substituting the path found by our algorithm (with its corresponding partial assignment) into the CPOP algorithm. We compare the efficacy of our algorithm mainly against CPOP through the use of makespan-related comparison metrics such as schedule length ratio (SLR), speedup and slack. It is evident from the results that our algorithm outperforms CPOP even as a scheduling algorithm, in nearly all aspects. For the RGG-classic, RGG-low, RGG-medium and RGG-high workloads, our algorithm (CEFT-CPOP) produces smaller makespans in 15.9\%, 75.94\%, 90.29\% and 89.69\% of the experiments respectively. We also consistently produce smaller SLR and slack values than CPOP. In some cases, as explained in section~\ref{sec:paper-crit-path-resu-anal}, our algorithm outperforms HEFT in terms of SLR and makespans, but falls short of HEFT's capability to produce the tightest schedules (lowest slack values). We observe similarly consistent results from four real-world benchmarks: Fast Fourier Transform (FFT), Gaussian Elimination (GE), Molecular Dynamics code (MD) and the Epigenomics Workflow (EW). One of the biggest impediments for CEFT-CPOP on the road to lower makespans is that it has been extended to function as a scheduling algorithm using CPOP. Although this helps us find relatively good schedules, we believe the extension provided by CPOP is still a limiting factor. We also believe that, by extending our algorithm into a full scheduling algorithm without task duplication, one can attain even better results in terms of obtaining smaller makespans. \section{Experimental Setup} \label{sec:paper-crit-path-expe-resu} In this section we present a statistical comparison of our algorithm with the current state-of-the-art algorithms (CPOP and HEFT), both as a critical path finding algorithm and in terms of its ability to be adapted into a scheduling algorithm. In section~\ref{sec:rand-gene-grap} we outline the workloads on which the experiments are based and describe the experimental setup.
Section~\ref{sec:paper-crit-path-comp-metr} defines the metrics based on which the effectiveness of our algorithm as a critical path finding algorithm is evaluated. The experimental set-up consists of a dual socket system consisting of Intel Xeon E5620 CPU running at 2.4 GHz with 24GB DDR3 RAM. The system is running Linux kernel ver 3.0.40-1. The code was compiled using GCC version 4.7.1 with `-O3' optimization flag. \subsection{Randomly generated workloads} \label{sec:rand-gene-grap} In order to not bias the results towards any particular application, we present comparisons of our algorithm to its contemporaries on synthetically generated random graphs. We use a modified version of the random graph generator from~\cite{topcuoglu2002performance}. In the next subsection, we present four comparison metrics on which the relative performance of the three algorithms is compared. We generated four sets of input \underline{r}andomly \underline{g}enerated \underline{g}raphs (RGG) using the random graph generator: \textit{RGG-classic}, \textit{RGG-low}, \textit{RGG-medium} and \textit{RGG-high}. \textit{RGG-classic} is the first set of input application graphs that we generated to mimic the random graphs generated in the work presented by~\cite{topcuoglu2002performance} and \cite{arabnejad2014list}. These graphs use the heterogeneity factor that is embedded in them to generate the execution times of a given task on the different processors. Following on from the random graph generator used in the literature, the execution time for task $t_i$ on processor $p_j$ is randomly chosen from the following range: \begin{equation} w_i \times (1-\dfrac{\beta}{2}) \leq w_{i,j} \leq w_i \times (1+\dfrac{\beta}{2}) \end{equation} \begin{figure}[t!] \centering \subfloat[Application graph]{ \includegraphics[angle=0, scale=0.25]{figures/drawn/rgg-low-sample-application-crop.pdf} \label{fig:rgg-dag-app} } \qquad \centering \subfloat[Resource graph]{ \includegraphics[angle=0, scale=0.25]{figures/drawn/rgg-low-sample-resource-crop.pdf} \label{fig:rgg-dag-res} } \caption{Sample graphs with 2 node-weights} \label{fig:rgg-dag} \end{figure} \noindent The possible range of values for $\beta$ is $0 \leq \beta \leq 1$, which means that $w_{i,j}$ can only possibly take values between $\dfrac{w_i}{2}$ and $\dfrac{3 \times w_i}{2}$. This implies that for any processor graph, a task can only be 3 times as fast on the fastest processor as it is on the slowest processor. This level of heterogeneity might not be representative of clusters which have certain processors with hardware accelerators. This is the major source of inspiration for us to generate the other three workloads. In the case of \textit{RGG-low}, \textit{RGG-medium} and \textit{RGG-high}, we use a modified version of the random graph generator. Every task in the modified graph from each of these workloads contains two node-weights as shown in figure~\ref{fig:rgg-dag}. Table~\ref{tab:rgg-dag-exec-time} shows the corresponding execution times of the tasks from figure~\ref{fig:rgg-dag-app} on the processors from figure~\ref{fig:rgg-dag-res}. \ifdefined \begin{table}[b!] \else \begin{table}[t!] 
\fi \centering \begin{tabular}{|c|c|c|} \hline & \textbf{P1} & \textbf{P2} \\ \hline \textbf{T1} & 6 & 35.25\\ \hline \textbf{T2} & 60.18 & 10 \\ \hline \textbf{T3} & 9.5 & 15.8 \\ \hline \textbf{T4} & 25.35 & 6 \\ \hline \end{tabular} \caption{Execution time for the application and processor graph from figure~\ref{fig:rgg-dag}} \label{tab:rgg-dag-exec-time} \end{table} \begin{equation} Cost(t_i, p_j)= \left[\dfrac{w^t_1(t_i)}{W^r_1(p_j)} + \dfrac{w^t_0(t_i)}{W^r_0(p_j)}\right] \label{eq:latency} \end{equation} These execution times are calculated using a simple two-part cost model based on equation~\ref{eq:latency}. Every task and resource has two weights. The execution time of a task on a resource is given by the sum of the ratios of the corresponding node-weights. We draw inspiration for this cost model from~\cite{vasudevan2014improved}. This cost model yields higher variability in execution times, with some tasks being fast on certain processors, while those processors are not universally faster for all tasks in the application graph. Consider the graphs from figure~\ref{fig:rgg-dag}; the values of the node-weights of the tasks and processors determine the execution times of these tasks. We generated the same set of six processor graphs for the RGG-low, RGG-medium and RGG-high workloads. While creating these processor graphs, the values for the two node-weights are chosen from two intervals: $\{\mathcal{I}_1, \mathcal{I}_2\}$. At every node, a random number between 0 and 1 is chosen and if it is lower than $\beta$, the first node-weight is chosen from $\mathcal{I}_1$ and the second node-weight is chosen from $\mathcal{I}_2$. If it is higher than $\beta$, however, the two intervals are interchanged. This process of using the intervals to fill in the node-weights of the nodes in the graph is adopted for the application graphs too. For the workloads mentioned above, the following intervals were used: \begin{itemize} \item Resource graph -- $\mathcal{I}_1 = \{10^2, 10^3\}$ and $\mathcal{I}_2=\{10^3, 10^4\}$ \item \textit{RGG-low} -- $\mathcal{I}_1 = \{10^2, 10^3\}$ and $\mathcal{I}_2=\{10^3, 10^4\}$ \item \textit{RGG-medium} -- $\mathcal{I}_1 = \{10^2, 10^3\}$ and $\mathcal{I}_2=\{10^4, 10^5\}$ \item \textit{RGG-high} -- $\mathcal{I}_1 = \{10^2, 10^3\}$ and $\mathcal{I}_2=\{10^5, 10^6\}$ \end{itemize} This kind of workload generation enables us to create workloads that have significantly different execution times. These workloads have the same structure, but differ in the execution times of the tasks as discussed in the previous paragraph. In order to generate the structure of the graphs, we use the random graph generator from the literature with the following parameters: \begin{itemize} \item $n$ -- Number of tasks in the graph; -- $\textbf{\{128, 256, 512, 1024, 2048, 4096, 8192, 16384\}}$ \item $o$ -- The average outdegree of a node in the graph; -- $\textbf{\{2, 4, 8\}}$ \item $c$ -- Communication-to-Computation ratio (CCR)\ifdefined. It is the ratio of the weight of an edge leaving a vertex (i.e. communication cost) to the vertex weight (i.e. the computation cost). 
In order to incorporate heterogeneity in the communication backbone, the weight is chosen randomly in the range $[w_i \times c \times ( 1-\dfrac{\beta}{2} ), w_i \times c \times ( 1+\dfrac{\beta}{2} )]$, where $w_i$ represents the computation cost or the weight of the vertex and $\beta$ denotes the heterogeneity factor as described below; \fi-- $\textbf{\{0.001, 0.01, 0.1, 1, 5, 10\}}$ \item $\alpha$ -- Shape parameter of the graph\ifdefined. The height of the graph depends on this parameter as $\dfrac{\sqrt{n}}{\alpha}$. The width of the graph is randomly chosen from a uniform distribution with a mean equal to $\alpha \times \sqrt{n}$. Hence smaller values of $\alpha$ give tall and skinny graphs, while larger values of $\alpha$ give short and fat graphs; \fi-- $\textbf{\{0.1, 0.25, 0.75, 1.0\}}$ \item $\beta$ -- Heterogeneity factor of the graph\ifdefined. This parameter dictates the weights of the vertices in the graphs (i.e. computation costs), which are randomly chosen from the following range: \begin{equation} w_i \times (1-\dfrac{\beta}{2}) \leq w_{i,j} \leq w_i \times (1+\dfrac{\beta}{2}) \end{equation} \noindent where $w_i$ is the weight of the vertex or the computation cost, which is chosen randomly from a uniform distribution in the range $[0, 2 \times w_{DAG}]$. $w_{DAG}$ is the average computation cost of the graph and is chosen randomly. This is the way heterogeneity is incorporated into the application graphs throughout this paper unless otherwise stated explicitly; \fi-- $\textbf{\{10, 25, 50, 75, 95\}}$ \item $\gamma$ -- Skewness parameter of the graph\ifdefined. It denotes how computation is spread across the graph. Smaller values of $\gamma$ give uniformly distributed graphs while larger values give skewed graphs where pockets of the graph are more computationally intensive compared to other parts of the graph; \fi-- $\textbf{\{0.1, 0.25, 0.5, 0.75, 0.95\}}$ \end{itemize} With the above configuration of parameters, a total of 14400 graphs were created. Each of these randomly generated graphs is scheduled on six different processor graphs ($p$ -- $\textbf{\{2, 4, 8, 16, 32, 64\}}$; where $p$ is the number of processors). This amounts to 86400 experiments (an experiment corresponds to an input application DAG and processor graph pair) for every workload and a total of 345600 experiments across all the workloads. To the best of our knowledge, our experiments are the only ones to use application graphs with such a large number of nodes (between 128 and 16384) as benchmarks. Previous evaluations of other heuristics such as HEFT and PEFT use a maximum of 500 nodes in the randomly generated graphs. \ifdefined\footnote{To explore the different workloads further, we encourage the readers to download the code from Github at \url{https://github.com/aravind-vasudevan/graphgen} and experiment with the different input parameters.}\fi \ifdefined \else \vspace{-0.2cm} \fi \subsection{Real world graphs} \label{sec:paper-crit-path-real-worl-grap} In addition to the variants of the randomly generated graphs (RGG-classic, RGG-low, RGG-medium and RGG-high), we evaluate the performance of our algorithm on real-world applications, namely Gaussian elimination~\cite{cosnard1988parallel}, Fast Fourier Transform~\cite{topcuoglu1999task}, Molecular dynamics~\cite{kim1988general} and Epigenomics workflow~\cite{bharathi2008characterization}. 
As is a common trend in the scheduling research community~\cite{topcuoglu2002performance,arabnejad2014list}, we generate graphs based on the known structure of these real-world applications. These graphs are generated with differing values of some of the parameters discussed in section~\ref{sec:rand-gene-grap}. Since the structure is known for these applications, the $\alpha$ parameter of the graphs cannot be changed. The ranges of values for the other parameters are as follows: $\{\textbf{0.001, 0.01, 0.1, 0.5, 1, 5, 10}\}$ for $c$ (CCR) and $\{\textbf{10, 25, 50, 75, 95}\}$ for $\beta$ (heterogeneity). These real-world applications are also run on the six different processor graphs mentioned in the previous section. \ifdefined \begin{figure}[t] \centering \hspace*{\fill}% \csubfloat[Gaussian Elimination]{ \includegraphics[angle=0, scale=0.15]{./figures/drawn/gaussian.pdf} \label{fig:gaussian} } \centerhfill \csubfloat[Fast Fourier Transform]{ \includegraphics[angle=0, scale=0.2]{./figures/drawn/fft.pdf} \label{fig:fft} } \hspace*{\fill}% \caption[Application DAGs for Gaussian Elimination and Fast Fourier Transform]{Application DAGs for Gaussian Elimination and Fast Fourier Transform. Redrawn from~\cite{wu1990hypertool,cosnard1988parallel,topcuoglu2002performance}} \label{fig:gaussian-fft} \end{figure} \subsubsection{Fast Fourier Transform (FFT)} \label{sec:paper-crit-path-fft} Figure~\ref{fig:fft} shows the task graph of another real-world application, the \textit{Fast Fourier Transform}. The FFT algorithm can be split into two parts: recursive calls and the butterfly operation~\cite{topcuoglu2002performance}, represented by the dashed line in the figure. All the tasks above the line represent the recursive calls and the ones below are the butterfly operation tasks. For a given input vector of size $m$ which is a power of two, there are $2 \times m-1$ recursive calls and $m \times \log_2 m$ butterfly operations. This application is unique in that all the paths in it are \textit{critical paths} and they all have the same weight~\cite{cosnard1988parallel}. \subsubsection{Gaussian Elimination (GE)} \label{sec:paper-crit-path-gaus} \textit{Gaussian Elimination}~\cite{wu1990hypertool,cosnard1988parallel} is an algorithm which solves a linear system of equations by performing a sequence of operations on the associated matrix of coefficients. Figure~\ref{fig:gaussian} shows the task graph for the Gaussian Elimination algorithm operating on a matrix of size 5. The total number of tasks in a Gaussian Elimination graph is given by $\dfrac{m^2+m-2}{2}$; in the case of the figure, the number of tasks when $m=5$ is 14. \subsubsection{Molecular Dynamics (MD)} \label{sec:paper-crit-path-mole-dyna} \begin{figure}[hb!] \centering \includegraphics[angle=0, scale=0.45]{./figures/drawn/molecular_cropped.pdf} \caption[Application DAG for Molecular dynamics code]{Application DAG for Molecular dynamics code. Redrawn from~\cite{kim1988general}} \label{fig:molecular-dynamics} \end{figure} A commonly used application in the literature is the modified molecular dynamics code from~\cite{kim1988general}. The task graph of this code is presented in Figure~\ref{fig:molecular-dynamics}. This application serves as a benchmark for scheduling algorithms due to the shape of its irregular task graph. The task graph was modified by Browne~\cite{kim1988general} from its original structure in order to increase the number of tasks and edges. 
They also modified the computation and communication times of the tasks and edges while synthetically generating the architecture on which this task graph was run. This was done in an attempt to increase the variability in the graph. In a similar vein, all the scheduling algorithms presented in this paper were tested on synthetically generated application and processor graphs, unless explicitly stated otherwise. \subsubsection{Epigenomics Workflow (EW)} \label{sec:paper-crit-path-epig-work} The epigenomics workflow is a data processing pipeline that automates the execution of various genome sequencing operations. It maps the epigenetic state of the human cells on a genome-wide scale. Parts of this application can be split into independent chunks (split on inputs) which can be executed in parallel. The outputs from these independent chunks are further processed to filter noise and contaminating sequences. The graph has a very compact parallel structure and is generally wider than it is taller. \else \vspace{-0.2cm} \fi \subsection{Comparison metrics} \label{sec:paper-crit-path-comp-metr} We compare the algorithms based on the following comparison metrics : critical path length (CPL), schedule length (makespan), speedup, schedule length ratio (SLR), slack and a pairwise comparison of number of occurrences of better solutions which are common heuristics used to compare the performance of scheduling algorithms~\cite{topcuoglu2002performance,kwok1996dynamic,arabnejad2014list,ahmad1998exploiting,topcuoglu1999task}. \ifdefined \subsubsection{Critical path length (CPL)} \label{sec:paper-crit-path-crit-path-leng} As we have discussed in section~\ref{sec:paper-crit-path-form}, the critical path is the longest path from the entry node to the exit node in the application graph. The length of the critical path in turn becomes a key metric as it serves as a hard lower bound for the schedule length (makespan). As our algorithm is primarily a critical path finding algorithm, this metric is of key importance and we compare the lengths of the paths produced by our algorithm and CPOP for a given input application graph and processor graph pair. HEFT is not a critical path based scheduling algorithm and hence we cannot present the statistics for it under this comparison metric. \subsubsection{Speedup} \label{sec:paper-crit-path-effi} Speedup is defined as the ratio of the sequential execution time to the parallel execution time (\textit{makespan}). The sequential execution time is calculated by assigning all tasks onto the processor which minimizes the total execution time of the task graph as shown in the following equation : \begin{equation} \label{eq:spee} Speedup = \frac{min_{p_j\in P}[\Sigma_{t_i \in T}compCost(t_i, p_j)]}{makespan} \end{equation} \noindent In equation~\ref{eq:spee}, the numerator represents the sequential execution time of the input application graph for the given processor graph. This value is independent of the choice of the scheduling algorithm and is therefore a constant for all the three algorithms (our critical path algorithm, CPOP and HEFT) under scrutiny here. This implies that the speedup is the makespan normalised against the sequential execution time which is constant across all the algorithms compared. Hence, speedup is often used as a better replacement metric for the makespan as it returns a normalised score. 
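As a concrete illustration of Eq.~(\ref{eq:spee}), a minimal Python sketch is given below; the matrix name \texttt{comp\_cost} is illustrative, holding the execution cost of every task on every processor.
\begin{verbatim}
import numpy as np

def speedup(comp_cost, makespan):
    # comp_cost[i, j]: execution time of task t_i on processor p_j.
    # Numerator of the speedup: best single-processor sequential time,
    # i.e. the minimum over processors of the summed task costs.
    sequential = np.sum(comp_cost, axis=0).min()
    return sequential / makespan
\end{verbatim}
Because the numerator is fixed by the input graphs alone, differences in speedup between algorithms reflect differences in makespan only.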
\subsubsection{Scheduling length ratio (SLR)} \label{sec:paper-crit-path-sche-leng-rati} The most commonly used metric to compare the performance of scheduling algorithms is the \textit{makespan}. Its use as a comparison metric has been well established in the literature~\cite{topcuoglu2002performance, kwok1996dynamic, topcuoglu1999task, braun2001comparison}. However, in order to normalize the schedule length against any topology/processor graph, we adopt the normalized schedule length (NSL)~\cite{daoud2008high}, which is also called the scheduling length ratio (SLR). It is defined as follows: \begin{equation} \label{eq:slr} SLR = \dfrac{makespan}{\underset{t_i \in CP}{\Sigma}min_{p_j \in P}[compCost(t_i, p_j)]} \end{equation} \noindent where $CP$ is the critical path. The denominator\footnote{The denominator of SLR is often confused with the numerator of speedup. They are not the same, as the task sets to which they are applied are different. In the denominator of the SLR only tasks from the CP are considered, while the numerator of speedup considers all the tasks in the task graph.} of the equation gives the sum of the computation costs of the critical path tasks assuming they are assigned to the processors which minimize their individual execution times. The SLR of an application DAG (under an optimal assignment or using any other scheduling algorithm) cannot be less than one as no valid schedule of tasks on the processors can produce a smaller makespan than the denominator. Since the critical path serves as a lower bound for the makespan, one can see that the denominator might be smaller\footnote{It is equal in the case where the input application graph is a linear DAG.} than the \textit{true} critical path length, as this formulation ignores communication costs and hence produces shorter critical path lengths than the true critical path length. \subsubsection{Slack} \label{sec:paper-crit-path-slac-metr} Slack is a commonly used metric in the context of comparing scheduling algorithms~\cite{shi2006robust}. It captures the ability of a schedule to deal with delays in the execution of some tasks. It reflects how accommodating a schedule is and acts as a proxy for robustness in the scheduling algorithms literature~\cite{boloni2002robust}. The slack of a task is the time window within which the task can be delayed without extending the makespan. Slack is defined as \begin{equation} Slack = \dfrac{\mathlarger{\mathlarger{\sum}}_{t_i \in V}\left[M-b_{level}(t_i)-t_{level}(t_i)\right]}{|\mathcal{T}|} \end{equation} \noindent where $M$ is the makespan. It is important to note that makespan and slack are conflicting metrics. Makespan is representative of the efficiency of a scheduling algorithm in terms of its capability to lower the execution time of the application DAG, whereas slack is representative of the forgiving nature of the schedule.\footnote{If one were to reduce the problem \textit{ad absurdum}, a schedule that never finishes or finishes at infinity would have the highest (infinite) slack.} Static algorithms, however, deal with the time dependence of certain application DAGs by using stochastic models where task execution times are random variables, as discussed in~\cite{adam1974comparison}. Braun et al. also suggest that scheduling algorithms having higher slack are more \textit{robust} for DAGs that employ a stochastic model for task execution times. In our experiments, we do not use any such dynamic DAGs as our workloads consist entirely of static DAGs. 
In this case, a higher slack usually means that the schedule is not \textit{tight} enough. However, if a scheduling algorithm creates a schedule with low SLR and high slack, it means that there is still scope for improvement in the schedule and hence even lower SLR values can be obtained. \fi \section{Results and analysis} \label{sec:paper-crit-path-resu-anal} \ifdefined \begin{table}[t!] \resizebox{\linewidth}{!}{% \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Workload} & \textbf{\# of experiments} & \textbf{CPL(\%)} & \textbf{makespan(\%)}\\ \hline \multirow{3}{*}{RGG-classic}& 99346 & Longer & 60.06 & 26.95 \\ \cline{2-5} & 99346 & Equal & 39.93 & 57.12\\ \cline{2-5} & 99346 & Shorter & 0 & 15.9\\ \hline \multirow{3}{*}{RGG-low}& 100800 & Longer & 40.61 & 23.15\\ \cline{2-5} & 100800 & Equal & 0.46 & 0.89\\ \cline{2-5} & 100800 & Shorter & 58.92 & 75.94\\ \hline \multirow{3}{*}{RGG-medium}& 100800 & Longer & 16.52 & 7.96\\ \cline{2-5} & 100800 & Equal & 0.33 & 1.74\\ \cline{2-5} & 100800 & Shorter & 83.14 & 90.29\\ \hline \multirow{3}{*}{RGG-high}& 100800 & Longer & 15.20 & 7.66\\ \cline{2-5} & 100800 & Equal & 0.8 & 2.64\\ \cline{2-5} & 100800 & Shorter & 83.99 & 89.69\\ \hline \end{tabular} } \caption{Percentage of instances in the experiments where CEFT's CPL and makespan are longer, equal or shorter than CPOP's corresponding values} \label{tab:ceft-cpop} \end{table} \else \begin{table}[t!] \centering \resizebox{0.85\linewidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline \textbf{Workload} & & \textbf{CPL(\%)} & \textbf{makespan(\%)}\\ \hline \multirow{3}{*}{RGG-classic}& Longer & 60.06 & 26.95 \\ \cline{2-4} & Equal & 39.93 & 57.12\\ \cline{2-4} & Shorter & 0 & 15.9\\ \hline \multirow{3}{*}{RGG-low}& Longer & 40.61 & 23.15\\ \cline{2-4} & Equal & 0.46 & 0.89\\ \cline{2-4} & Shorter & 58.92 & 75.94\\ \hline \multirow{3}{*}{RGG-medium}& Longer & 16.52 & 7.96\\ \cline{2-4} & Equal & 0.33 & 1.74\\ \cline{2-4} & Shorter & 83.14 & 90.29\\ \hline \multirow{3}{*}{RGG-high}& Longer & 15.20 & 7.66\\ \cline{2-4} & Equal & 0.8 & 2.64\\ \cline{2-4} & Shorter & 83.99 & 89.69\\ \hline \end{tabular} } \caption{Percentage of instances in the experiments where CEFT's CPL and makespan are longer, equal or shorter than CPOP's corresponding values} \label{tab:ceft-cpop} \end{table} \fi \begin{figure}[t!] \centering \includegraphics[width=0.92\linewidth]{figures/results/cpl-comparison.pdf} \caption{Percentage of instances CEFT's CPL is longer, equal or shorter compared to CPOP's CPL} \label{fig:cpl-comparison} \end{figure} \begin{figure}[b!] 
\centering \includegraphics[width=0.92\linewidth]{figures/results/makespan-comparison.pdf} \caption{Percentage of instances CEFT's makespan is longer, equal or shorter compared to CPOP's makespan} \label{fig:makespan-comparison} \end{figure} \ifdefined \else \begin{figure*}[h] \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-classic/Number_Of_Resources-speedup.pdf} \label{fig:rgg-clas-res-speedup} } \centerhfill \csubfloat[RGG-low]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-low/Number_Of_Resources-speedup.pdf} \label{fig:rgg-low-res-speedup} } \centerhfill \csubfloat[RGG-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-medium/Number_Of_Resources-speedup.pdf} \label{fig:rgg-med-res-speedup} } \centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-high/Number_Of_Resources-speedup.pdf} \label{fig:rgg-high-res-speedup} } \hspace*{\fill}% \caption{Comparing speedup across different workloads in terms of the number of processors in the processor graph. Higher is better.} \label{fig:res-speedup} \end{figure*} \begin{figure*}[t!] \centering \begin{minipage}{.49\textwidth} \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.49\linewidth]{figures/results/RGG-classic/Alpha-cpl.pdf} \label{fig:rgg-clas-res-cpl} } \centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.49\linewidth]{figures/results/RGG-low/Alpha-cpl.pdf} \label{fig:rgg-high-cpl} } \hspace*{\fill}% \caption{Comparing the lengths of the critical paths across RGG-classic and RGG-high workloads in terms of $\alpha$ of the application graph} \label{fig:alpha-cpl} \end{minipage} \hfill \begin{minipage}{.24\textwidth} \includegraphics[width=.97\linewidth]{figures/results/RGG-medium/Beta-cpl.pdf} \caption{Comparing CPL for RGG-medium in terms of different values of $\beta$ in the input graphs} \label{fig:cpl-not-changed-for-beta} \end{minipage} \hfill \begin{minipage}{.24\textwidth} \centering \includegraphics[width=.97\linewidth]{figures/results/RGG-high/Number_Of_Tasks-speedup.pdf} \caption{Comparing speedup for RGG-high in terms of number of tasks in the input graphs. Higher is better.} \label{fig:task-speedup} \end{minipage}% \end{figure*} \fi In this section we compare the performance of our critical path finding algorithm (CEFT) against the current state-of-the-art critical path algorithm (CPOP). We also present a brief comparison of the extension of our critical path algorithm (CEFT-CPOP) that functions as a scheduling algorithm, and compare its results against CPOP. Since the only difference between CEFT-CPOP and CPOP is the method by which the critical paths are found and mapped, makespan-related metrics between these two algorithms help us clearly understand the effects of finding the right critical path. Table~\ref{tab:ceft-cpop} compares CEFT and CPOP in terms of the critical path lengths produced and the corresponding makespans. Figures~\ref{fig:cpl-comparison} and \ref{fig:makespan-comparison} put table~\ref{tab:ceft-cpop} into graphical context. We can observe from these graphs that CEFT produces critical paths that are either longer than or of the same length as CPOP's in the classic workload. However, when heterogeneity is better expressed, we produce shorter critical paths in about 83\% of the cases. This is similarly reflected in the corresponding makespans produced by CEFT. 
Note however, that the table only provides the percentage of the number of instances in which path lengths and corresponding makespans are \textit{longer}, \textit{equal} or \textit{shorter} and discloses nothing about the relative quality of the solutions obtained. Figures~\ref{fig:rgg-clas-res-cpl} and \ref{fig:rgg-high-cpl} on the other hand help understand the relative quality of the solutions obtained by the two algorithms. Both the plots shown here are scatter plots. As the density of the points in the scatter plot is so high, we chose to offset the points that are on the line corresponding to a particular $\alpha$, by a small random amount (in the x-axis; within a preset range) to form a ``bar'' that better displays how the ratios are distributed. All the points inside the bar correspond to the value of $\alpha$ that the bar sits on top of. As the graphs become wider (with increasing values of $\alpha$), the critical path lengths found by CEFT become shorter. This stems from the fact that, while no other application graph parameter changes, the increase in the width of the graph gives rise to more shorter paths from the source task to the exit task. Since the objective of CEFT is to find the longest shortest path from all the possible paths, the critical path lengths produced by it decrease as well. This holds true in the case of the high heterogeneity workloads as well (RGG-high). \footnote{\ifdefined At this juncture, we have to mention that this way of representing the critical path length ratio is a bit misleading. The density of the points in the lower portions of the graph is not clearly visible and hence it seems like CEFT always produces longer critical paths than CPOP. While this is true in the case of RGG-classic, the CPL produced by our algorithm (CEFT) is shorter in 83.99\% of the experiments in RGG-high. Another note on the graph, is that it appears as if t\else T\fi here are some values that look like they are below the zero line (which is impossible since the critical path length ratio between any two algorithms can never be negative). This is because the plot uses an 'x' marker to plot the points.} \ifdefined \begin{figure*}[ht!] \centering \begin{minipage}{.46\textwidth} \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.49\linewidth]{figures/results/RGG-classic/Alpha-cpl.pdf} \label{fig:rgg-clas-res-cpl} } \centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.49\linewidth]{figures/results/RGG-low/Alpha-cpl.pdf} \label{fig:rgg-high-cpl} } \hspace*{\fill}% \caption{Comparing the lengths of the critical paths across RGG-classic and RGG-high workloads in terms of $\alpha$ of the application graph} \label{fig:alpha-cpl} \end{minipage} \hfill \begin{minipage}{.22\textwidth} \includegraphics[width=.97\linewidth]{figures/results/RGG-medium/Beta-cpl.pdf} \caption{Comparing CPL for RGG-medium in terms of different values of $\beta$ in the input graphs} \label{fig:cpl-not-changed-for-beta} \end{minipage} \hfill \begin{minipage}{.22\textwidth} \centering \includegraphics[width=.97\linewidth]{figures/results/RGG-high/Number_Of_Tasks-speedup.pdf} \caption{Comparing speedup for RGG-high in terms of number of tasks in the input graphs. 
Higher is better.} \label{fig:task-speedup} \end{minipage}% \end{figure*} \fi \ifdefined \begin{figure*}[ht] \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-classic/Number_Of_Resources-speedup.pdf} \label{fig:rgg-clas-res-speedup} } \centerhfill \csubfloat[RGG-low]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-low/Number_Of_Resources-speedup.pdf} \label{fig:rgg-low-res-speedup} } \centerhfill \csubfloat[RGG-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-medium/Number_Of_Resources-speedup.pdf} \label{fig:rgg-med-res-speedup} } \centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-high/Number_Of_Resources-speedup.pdf} \label{fig:rgg-high-res-speedup} } \hspace*{\fill}% \caption{Comparing speedup across different workloads in terms of the number of processors in the processor graph. Higher is better.} \label{fig:res-speedup} \end{figure*} \fi It is evident from table~\ref{tab:ceft-cpop} that as heterogeneity in the workload becomes more apparent, CEFT outperforms CPOP in terms of both the critical path length and the makespan. In RGG-classic, CEFT never produces a critical path that is shorter than the critical paths produced by CPOP, which results in makespans that are longer in 26.95\% of the experiments. Another interesting thing to note here is that even in the case where heterogeneity is well expressed (RGG-high), our algorithm performs worse than CPOP in only 7.66\% of the experiments. \ifdefined \else \begin{figure*}[h] \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-classic/Beta-slr.pdf} \label{fig:rgg-clas-beta-slr} }\centerhfill \csubfloat[RGG-low]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-low/Beta-slr.pdf} \label{fig:rgg-low-beta-slr} }\centerhfill \csubfloat[RGG-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-medium/Beta-slr.pdf} \label{fig:rgg-med-beta-slr} }\centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-high/Beta-slr.pdf} \label{fig:rgg-high-beta-slr} }\hspace*{\fill} \caption{Comparing SLR across different workloads in terms of $\beta$ of the input graphs. Lower is better.} \label{fig:Beta-slr} \end{figure*} \fi In the most heterogeneous workload, RGG-high, CEFT produces shorter critical paths in 83.99\% of the experiments, which leads to shorter makespans in 89.69\% of the experiments. There seems to be a strong correlation between shorter critical path lengths and shorter makespans. However, one cannot conclude that shorter critical path lengths always result in shorter makespans, as it is important to identify the \textit{correct} shorter critical path that leads to shorter makespans. From the results in this table, our algorithm does well in terms of selecting the correct critical paths. 
\ifdefined \begin{figure*}[ht] \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-classic/Beta-slr.pdf} \label{fig:rgg-clas-beta-slr} }\centerhfill \csubfloat[RGG-low]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-low/Beta-slr.pdf} \label{fig:rgg-low-beta-slr} }\centerhfill \csubfloat[RGG-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-medium/Beta-slr.pdf} \label{fig:rgg-med-beta-slr} }\centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-high/Beta-slr.pdf} \label{fig:rgg-high-beta-slr} }\hspace*{\fill} \caption{Comparing SLR across different workloads in terms of $\beta$ of the input graphs. Lower is better.} \label{fig:Beta-slr} \end{figure*} \begin{figure*}[t] \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-classic/Beta-speedup.pdf} \label{fig:rgg-clas-beta-speedup} } \centerhfill \csubfloat[RGG-low]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-low/Beta-speedup.pdf} \label{fig:rgg-low-beta-speedup} } \centerhfill \csubfloat[RGG-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-medium/Beta-speedup.pdf} \label{fig:rgg-med-beta-speedup} } \centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.22\linewidth]{figures/results/RGG-high/Beta-speedup.pdf} \label{fig:rgg-high-beta-speedup} } \hspace*{\fill}% \caption{Comparing speedup across different workloads in terms of $\beta$ of the input graphs. Higher is better.} \label{fig:beta-speedup} \end{figure*} \fi Figure~\ref{fig:res-speedup} shows how the speedup metric \ifdefined from section~\ref{sec:paper-crit-path-effi} \fi varies as the number of processors is varied. As the number of processors is increased, the average speedup achieved is naturally higher, which is clearly reflected in the graphs. In the standard workload, RGG-classic, all the algorithms perform nearly equally. When the heterogeneity is increased, it is evident from figures~\ref{fig:rgg-med-res-speedup} and \ref{fig:rgg-high-res-speedup} that the average speedups achieved by CPOP become progressively smaller as the number of processors increases. This is mainly because CPOP assigns all the tasks from the critical path onto a single processor. Assigning the entire critical path to a single processor might prove to be an excellent choice in scenarios where the communication-to-computation ratio (CCR), and hence the communication cost, is very high. But in the majority of cases, where the computation costs dominate the communication costs, this proves to be a wrong decision and the makespans suffer as a result. This trend of CPOP not being able to catch up with CEFT-CPOP is evident in graphs based on metrics such as the number of tasks, $\alpha$, $\beta$, etc. Figure~\ref{fig:task-speedup} highlights another interesting result. Upon careful inspection, the average speedup of CEFT-CPOP is the highest among all three comparison algorithms until the number of tasks crosses 1024 (incidentally, 512 is the highest number of tasks in synthetic workloads that have been used for testing the efficiency of scheduling algorithms previously~\cite{arabnejad2014list}). In figure~\ref{fig:Beta-slr}, we compare the schedule length ratio metric across the different workloads in terms of $\beta$. It is evident from the graphs that RGG-classic and RGG-low exhibit similar SLR patterns (RGG-low has a slightly lower SLR value on average).
It is interesting to note, however, that in RGG-medium and RGG-high our algorithm produces the lowest average SLR value when $\beta \approx 50$. Setting $\beta$ to values close to 50 during the input graph generation generates a good mix of tasks that require the different types of processors in the processor graph\footnote{This is better understood by referring back to our discussion of the two intervals $\mathcal{I}_1$ and $\mathcal{I}_2$ from section~\ref{sec:rand-gene-grap}. When $\beta\approx50$, approximately as many tasks use $\mathcal{I}_1$ for their first node weight and $\mathcal{I}_2$ for their second as use them the other way around. This results in the most varied execution times across tasks. As $\beta$ moves farther away from the mean value of 50, the resulting graphs have more tasks that conform to a specific ordering of the two intervals, making the tasks more similar and leading to a less varied execution time table. A small illustrative sketch of this mixing is given below.}. When there is such a good mix of the types of tasks, it is easier to schedule them onto different types of processors, as contention for the same kind of processor is low. However, when $\beta$ moves away from 50 in either direction, it leads to increased demand for a certain type of processor, hence increasing the contention among tasks. This leads to increased makespan values when $\beta$ is farther away from 50. However, the critical path lengths calculated by our algorithm remain unaffected, as shown in figure~\ref{fig:cpl-not-changed-for-beta}, since one does not need to account for processor availability while calculating the critical path (this is equivalent to scheduling a linear DAG where all processors are available whenever a task is ready to be scheduled). \ifdefined This notion of the graph generator producing graphs that could potentially lead to lower makespans is further accentuated by figure~\ref{fig:beta-speedup}. In RGG-classic, since heterogeneity is incorporated differently (recall our discussion about the conventional way of implementing a random graph generator, like the one presented by Topcuoglu et al. in \cite{topcuoglu2002performance}), the speedup values across the different algorithms are very similar and we do not observe the U-shaped curve from figure~\ref{fig:rgg-high-beta-slr}. However, in the heavier workloads, we can clearly see the curve forming again. Once again, CPOP's method of calculating the critical path lets it down. As CPOP assigns all the tasks from the critical path (which might be composed of any number of different types of tasks, i.e. tasks requiring different amounts of the different types of processors) onto the same processor, the makespan suffers, which leads to reduced speedup. \fi Another important parameter in the graph generation process is $\alpha$, which dictates the \textit{width} of the graph. Lower values of $\alpha$ produce tall, skinny graphs, while larger values produce wider graphs. It is evident from figure~\ref{fig:alph-slr} that the average SLR of the schedules found by our algorithm is lower than that of both CPOP and HEFT for all the different values of $\alpha$.
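The role of $\beta$ described in the footnote above can be made concrete with a small illustrative sketch. The function below is a hypothetical stand-in for the relevant part of our graph generator; the names, the example intervals and the interpretation of $\beta$ as a percentage are assumptions made purely for illustration. With $\beta$ near 50 the two interval orderings are equally likely, producing the most varied execution-time table, while values far from 50 make most tasks conform to a single ordering.

\begin{verbatim}
import random

# Illustrative sketch only; the actual generator is described in the
# section on randomly generated graphs. beta is assumed to be a
# percentage in [0, 100].
def task_node_weights(beta, I1=(10.0, 20.0), I2=(40.0, 80.0)):
    # With probability beta/100 a task draws its first node weight from
    # I1 and its second from I2; otherwise the interval ordering is
    # swapped, which is what beta far from 50 makes dominant.
    first, second = (I1, I2) if random.random() < beta / 100.0 else (I2, I1)
    return random.uniform(*first), random.uniform(*second)

# beta = 50 gives an even mix of the two orderings; beta = 5 or
# beta = 95 makes the tasks far more uniform, as discussed above.
\end{verbatim}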
\begin{figure*}[t] \centering \hspace*{\fill}% \csubfloat[Alpha - SLR]{ \includegraphics[width=0.25\linewidth]{figures/results/RGG-classic/Alpha-slr.pdf} \label{fig:alph-slr} }% \centerhfill \csubfloat[CCR - SLR]{ \includegraphics[width=0.25\linewidth]{figures/results/RGG-classic/CCR-slr.pdf} \label{fig:ccr-slr} }% \centerhfill \csubfloat[CCR - Slack]{ \includegraphics[width=0.25\linewidth]{figures/results/RGG-classic/CCR-slack.pdf} \label{fig:ccr-slack} }% \hspace*{\fill}% \caption{Comparing Slack and SLR for RGG-classic in terms of $\alpha$ and CCR of the input graphs. Lower is better for SLR.} \label{fig:alph-ccr-slr} \end{figure*} For smaller values of $\alpha$, our algorithm's SLR is lower than CPOP's SLR by $\sim$19\% and lower than HEFT's SLR by $\sim$6\%, which we denote by the tuple $[19, 6]$. This gap, however, shrinks as the graph becomes wider, with the gap between our algorithm and HEFT vanishing at high values of $\alpha$. We do not highlight the results obtained from the other workloads as they exhibit patterns similar to RGG-classic. As the workload gets heavier (RGG-classic to RGG-high), the gap between the average SLR produced by our algorithm (CEFT-CPOP) and that of CPOP widens. In the case of RGG-high, for smaller values of $\alpha$, CEFT-CPOP's SLR is lower than CPOP's SLR by $\sim$34\%. In terms of robustness, the value of slack increases for all three algorithms with increasing values of $\alpha$. The schedules produced for thinner graphs have a lower tolerance for delays in the execution of individual tasks. In the trivial case of the thinnest graph (which is a linear DAG), any schedule produced by a static scheduling algorithm will have zero slack, as there is no possibility to overlap computation and communication (due to the serial nature of the graph). As the graph gets wider, there is more scope for overlapping computation with communication, which helps the schedules accommodate delays in task execution; this in turn increases the slack as the graphs get wider. \begin{figure}[b] \centering \hspace*{\fill}% \csubfloat[Number of tasks]{ \includegraphics[width=0.49\linewidth]{figures/results/RGG-classic/Number_Of_Tasks-slr.pdf} \label{fig:rgg-clas-task-slr} } \centerhfill \csubfloat[Number of resources]{ \includegraphics[width=0.49\linewidth]{figures/results/RGG-classic/Number_Of_Resources-slr.pdf} \label{fig:rgg-clas-res-slr} } \hspace*{\fill}% \caption{Comparing SLR across different workloads in terms of number of tasks and resources. Lower is better.} \label{fig:task-res-slr} \end{figure} As the value of the communication-to-computation ratio (CCR) increases, interprocessor communication overhead dominates computation and hence the performance of all three scheduling algorithms tends to degrade. This is shown using the schedule length ratio (SLR) metric in figure~\ref{fig:ccr-slr}. For lower CCR values, our algorithm produces SLRs that are lower than CPOP's by $\sim13\%$, with HEFT producing better average SLRs. CEFT-CPOP continues to outperform CPOP for all CCR values and produces an average SLR similar to HEFT's for extremely large CCR values. Slack, on the other hand, which is a measure of robustness\ifdefined (section~\ref{sec:paper-crit-path-slac-metr})\fi, decreases for all three scheduling algorithms with increasing CCR values. This trend of the schedules becoming less tolerant to delays for increasing values of CCR is highlighted in figure~\ref{fig:ccr-slack}.
Our algorithm provides the highest slack from the three algorithms compared here, while HEFT provides the lowest\ifdefined\footnote{Heterogeneous Earliest Finish Time (HEFT) is a \underline{greedy} list scheduling heuristic as we have discussed before. It provides the lowest slack compared to all the algorithms thereby making it less robust, but more efficient in terms of schedule length. However, our algorithm provides a slightly higher slack than HEFT, while still providing a much lower SLR value compared to HEFT. This leads us to believe that there is more room for optimization (as higher slack usually translates to larger windows of time in which a tasks start time can be moved) with the schedules generated by CEFT-CPOP which can be utilised to further decrease its makespan.}\fi. The slack produced by CPOP and CEFT-CPOP are similar (our algorithm produces slacks that are $\sim 1\%$ -- $\sim 2\%$ larger). \begin{figure*}[ht] \centering \hspace*{\fill}% \csubfloat[FFT-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/FFT_15-CCR-slr.pdf} \label{fig:fft-med-ccr-slr} } \centerhfill \csubfloat[GE-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/Gauss_14-CCR-slr.pdf} \label{fig:ge-med-ccr-slr} } \centerhfill \csubfloat[MD-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/MolecularDynamics_41-CCR-slr.pdf} \label{fig:md-med-ccr-slr} } \centerhfill \csubfloat[EW-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/Epigenomics_24-CCR-slr.pdf} \label{fig:ew-med-ccr-slr} } \hspace*{\fill}% \caption{Comparing SLR across the different real-world benchmarks (medium variants) in terms of CCR of the input graphs. Lower is better.} \label{fig:realworld-med-ccr-slr} \end{figure*} \begin{figure*}[t] \centering \hspace*{\fill}% \csubfloat[FFT-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/FFT_15-CCR-speedup.pdf} \label{fig:fft-clas-ccr-speedup} } \centerhfill \csubfloat[GE-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/Gauss_14-CCR-speedup.pdf} \label{fig:ge-clas-ccr-speedup} } \centerhfill \csubfloat[MD-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/MolecularDynamics_41-CCR-speedup.pdf} \label{fig:md-clas-ccr-speedup} } \centerhfill \csubfloat[EW-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/Epigenomics_24-CCR-speedup.pdf} \label{fig:ew-clas-ccr-speedup} } \hspace*{\fill}% \caption{Comparing speedup across the different real-world benchmarks (classic variants) in terms of CCR of the input graphs. Higher is better.} \label{fig:realworld-clas-ccr-speedup} \end{figure*} In figure~\ref{fig:task-res-slr}, we present how the SLR varies with increasing number of tasks in the application DAG and increasing number of resources in the resource graph. Arabnejad et al. suggest that the decrease in performance of the scheduling algorithms with the increase in number of tasks is due to a marked increase in the number of concurrent tasks. According to them, algorithms that have lookahead features tend to suffer more as these algorithms tend to base the decision of scheduling the current task heavily on its children tasks. For some DAGs that have many concurrent tasks to schedule, the processor load is substantially changed by the concurrent tasks to be scheduled after the current task. 
Therefore, the conditions at the time of scheduling the current task are different from the conditions at the time of scheduling its child tasks. This implies that the decision made by the algorithm to schedule the parent task might no longer be valid, leading to poorer solutions. Our algorithm, in spite of incorporating lookahead features, provides the lowest SLR of the three algorithms compared for smaller numbers of tasks ($n=128$ to $n=1024$). For larger graphs, HEFT manages to produce better makespans and hence better SLR values, but our algorithm continues to outperform CPOP. \ifdefined \else For more results and analysis, we encourage the reader to consult the longer version of this paper on arXiv~\cite{ceft-arxiv}. \fi \ifdefined \begin{figure*}[ht] \centering \hspace*{\fill}% \csubfloat[FFT-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/FFT_15-CCR-slr.pdf} \label{fig:fft-clas-ccr-slr} } \centerhfill \csubfloat[GE-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/Gauss_14-CCR-slr.pdf} \label{fig:ge-clas-ccr-slr} } \centerhfill \csubfloat[MD-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/MolecularDynamics_41-CCR-slr.pdf} \label{fig:md-clas-ccr-slr} } \centerhfill \csubfloat[EW-classic]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-classic/Epigenomics_24-CCR-slr.pdf} \label{fig:ew-clas-ccr-slr} } \hspace*{\fill}% \caption{Comparing SLR across the different real-world benchmarks (classic variants) in terms of CCR of the input graphs. Lower is better.} \label{fig:realworld-clas-ccr-slr} \end{figure*} \fi \subsection{Real World Benchmarks} \label{sec:real-worl-benc} \ifdefined \begin{figure*}[ht] \centering \hspace*{\fill}% \csubfloat[FFT-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/FFT_15-CCR-speedup.pdf} \label{fig:fft-med-ccr-speedup} } \centerhfill \csubfloat[GE-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/Gauss_14-CCR-speedup.pdf} \label{fig:ge-med-ccr-speedup} } \centerhfill \csubfloat[MD-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/MolecularDynamics_41-CCR-speedup.pdf} \label{fig:md-med-ccr-speedup} } \centerhfill \csubfloat[EW-medium]{ \includegraphics[width=0.22\linewidth]{figures/results/benchmarks-medium/Epigenomics_24-CCR-speedup.pdf} \label{fig:ew-med-ccr-speedup} } \hspace*{\fill}% \caption{Comparing speedup across the different real-world benchmarks (medium variants) in terms of CCR of the input graphs. Higher is better.} \label{fig:realworld-med-ccr-speedup} \end{figure*} \fi \ifdefined Figures~\ref{fig:realworld-clas-ccr-slr},~\ref{fig:realworld-med-ccr-speedup} and~\ref{fig:realworld-med-ccr-slr} show the performance of the three algorithms in terms of the schedule length ratio (SLR) and speedup \else Figure~\ref{fig:realworld-med-ccr-slr} shows the performance of the three algorithms in terms of the schedule length ratio (SLR) \fi for the four real-world benchmarks from section~\ref{sec:paper-crit-path-real-worl-grap}: Fast Fourier Transform (FFT), Gaussian Elimination (GE), Molecular Dynamics code (MD) and the Epigenomics Workflow (EW). The values in the SLR graphs are the primary motivating factors for exploring the effectiveness of our algorithm using randomly generated graphs.
The schedule length ratio (SLR) is the ratio of the makespan to the length of the critical path (ignoring communication costs) when it is mapped onto the fastest processor. SLR is hence used as a metric to measure the length of the schedule relative to the critical path. Lower values of SLR imply that the makespan is comparable to the length of the critical path, and hence scheduling algorithms with lower SLR values are preferred. Intuitively, applications that exhibit higher SLR (applications whose optimal makespan is much larger than the length of the critical path) are useful for testing the effectiveness of scheduling algorithms, as they require the scheduling algorithm to make the right decision for a larger percentage of the total number of tasks in the application. \ifdefined Figure~\ref{fig:realworld-clas-ccr-slr} shows that the average SLR values of the different algorithms on the real-world benchmarks are much lower than the SLR values of the algorithms on the modified versions (medium variants; generated in a similar fashion to RGG-medium using the structure of the real world graphs, as discussed in sections~\ref{sec:paper-crit-path-real-worl-grap} and~\ref{sec:rand-gene-grap}). The graphs for the medium variants of the real-world graphs are shown in the paper as they are representative of the three randomly generated variants (low, medium and high).\fi From figure~\ref{fig:realworld-med-ccr-slr}, we can see that the performance (SLR) of all the algorithms on the real-world benchmarks suffers as the CCR increases. As explained earlier, the SLR is the ratio between the makespan and the length of the critical path. As communication costs increase well beyond computation costs ($CCR \gg 1.0$), the tasks from the critical paths are mapped onto the same processor in an attempt to minimize the critical path length. The makespans, however, increase as a general consequence of increased communication costs. This explains why the average SLR values go up as communication costs increase. \ifdefined In the classic versions of the real-world benchmarks (figure~\ref{fig:realworld-clas-ccr-slr}), CEFT-CPOP produces the lowest average SLR values. Across all four real-world benchmarks, CEFT produces critical paths of the same length as CPOP in $\sim 97.28\%$ of the cases. This is reflected in the SLR values in figure~\ref{fig:realworld-clas-ccr-slr}. Owing to the small number of tasks, HEFT outperforms CEFT by a small margin in all the real-world benchmarks except GE (this is further supported by the trend from figure~\ref{fig:rgg-clas-task-slr}, where HEFT produces lower SLRs as the number of tasks increases).
However, when the heterogeneity is generated using our method (as discussed in section~\ref{sec:rand-gene-grap}; medium variant of the real-world benchmarks), CEFT produces critical paths that are shorter than CPOP's in $\sim 73.8\%$ which lead to better makespans in $\sim 77.77\%$ of the cases.\fi \subsection{HEFT Ranking Function with CEFT} \label{sec:heft-rank-func-with-ceft} \ifdefined \begin{figure*}[ht] \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-classic/Alpha-speedup.pdf} \label{fig:all-rgg-clas-alpha-speedup} } \centerhfill \csubfloat[RGG-low]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-low/Alpha-speedup.pdf} \label{fig:all-rgg-low-alpha-speedup} } \centerhfill \csubfloat[RGG-medium]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-medium/Alpha-speedup.pdf} \label{fig:all-rgg-med-alpha-speedup} } \centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-high/Alpha-speedup.pdf} \label{fig:all-rgg-high-alpha-speedup} } \hspace*{\fill}% \caption{Comparing speedup across different workloads in terms of $\alpha$ of the input graphs. Higher is better.} \label{fig:all-alpha-speedup} \end{figure*} \begin{figure*}[ht] \centering \hspace*{\fill}% \csubfloat[RGG-classic]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-classic/Alpha-slr.pdf} \label{fig:all-rgg-clas-alpha-slr} } \centerhfill \csubfloat[RGG-low]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-low/Alpha-slr.pdf} \label{fig:all-rgg-low-alpha-slr} } \centerhfill \csubfloat[RGG-medium]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-medium/Alpha-slr.pdf} \label{fig:all-rgg-med-alpha-slr} } \centerhfill \csubfloat[RGG-high]{ \includegraphics[width=0.22\linewidth]{figures/results-all/RGG-high/Alpha-slr.pdf} \label{fig:all-rgg-high-alpha-slr} } \hspace*{\fill}% \caption{Comparing slr across different workloads in terms of Alpha of the input graphs. Lower is better.} \label{fig:all-alpha-slr} \end{figure*} \fi Topcuoglu et al.~\cite{topcuoglu2002performance} calculate two heuristics for assigning priorities to tasks: downward rank ($rank_d$) and upward rank ($rank_u$). The downward rank of a task $t_i$ is the length of the longest path from the entry task to $t_i$ in the DAG. Consequently, the upward rank ($rank_u$) of task $t_i$ is the length of the longest path from $t_i$ to an exit node. Both these ranks are calculated using average execution times and average communication times. In order to calculate the priorities more accurately we propose two new ranking schemes called $rank_{ceft-down}$ and $rank_{ceft-up}$. For the former, we use the CEFT dynamic programming array that has been calculated by traversing the application graph in a topological order. For every task, we choose the processor that minimizes the CEFT value and use that as its downward rank. \ifdefined As explained in section~\ref{sec:paper-crit-path-crit-path-pape-dyna-prog-solu}, CEFT calculates the length of the critical path from the source task to a given task using accurate execution times, which serves as the primary motivation for modifying $rank_d$ in this manner.\fi In order to calculate the upward rank using CEFT ($rank_{ceft-up}$), we transpose the application graph (invert the edges, keeping the vertices same) and run the CEFT algorithm on this newly transposed graph. 
We then employ a similar strategy as before: we assign every task to the processor that minimizes its CEFT value and use that value as its upward rank. In all the graphs presented in this section, the bars labelled HEFT refer to the default HEFT algorithm using the upward rank $rank_u$, while CEFT-HEFT-UP refers to the HEFT algorithm using $rank_{ceft-up}$. HEFT-DOWN refers to HEFT using the downward rank $rank_d$, while CEFT-HEFT-DOWN refers to HEFT using $rank_{ceft-down}$. Figure~\ref{fig:all-alpha-speedup} compares the average \textit{speedup} obtained by these algorithms with the three algorithms from the previous section. It is evident from the graphs that these variants perform very similarly to the HEFT variants. In the classic variant of the randomly generated benchmarks, HEFT produces an average speedup $\sim5\%$ greater than CEFT-HEFT-UP, while HEFT-DOWN and CEFT-HEFT-DOWN produce very similar speedups, with CEFT-HEFT-DOWN winning marginally for wider graphs. However, in the other variants of the workload, we can see that the upward ranking function calculated with accurate computation and communication costs yields marginally better results than HEFT.\ifdefined Figure~\ref{fig:all-alpha-slr} shows the corresponding SLR values of all the algorithms.\fi \section{Defining a critical path for heterogeneous processors} \label{sec:paper-crit-path-form} As we saw in the previous section, identifying the critical path of a task graph on a heterogeneous parallel computer is complicated because the execution and communication times vary depending on the allocation of tasks to processors. Existing approaches use simplifying assumptions, such as taking the average execution time of a task across all processors. For a restricted version of the problem, we can find a more accurate estimate than previous approaches. If we assume that communication costs between processors are zero, then for each task we simply choose the processor allocation that minimizes the execution time of the task. This provides us with a single minimal execution time for each task, allowing us to find the critical path using the standard algorithm for homogeneous architectures (a short sketch of this simplified approach is given below). This same approach can also be used if we make the same assumption about communication costs as Topcuoglu et al.~\cite{topcuoglu2002performance}: the communication cost is the same irrespective of the source and destination processor, even if the source and destination are the same processor. Where communication costs are entirely independent of the allocation of tasks, we can simply allocate each task to the processor that minimizes its execution time. Although this approach is no more complex than Topcuoglu et al.'s and is likely more accurate, to our knowledge we are the first to propose it. However, this assumption that communication costs are independent of the processor allocation of tasks is unsatisfactory for two main reasons. First, on large-scale parallel computers, communicating with a nearby processing element is typically faster than sending the same data over a long distance. Second, there is the important case where two dependent tasks are allocated to the same processor, which can eliminate the communication cost between them entirely. A critical path for heterogeneous parallel computers should ideally take account of heterogeneous communication costs as well as computation costs. However, when we consider both costs together, there is no longer a simple strategy for choosing the best allocation of tasks to processors that will minimize the critical path.
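For concreteness, the following is a minimal sketch (with hypothetical function and variable names) of the simplified approach described above for the case where communication costs are independent of the task-to-processor allocation: each task is reduced to its minimum execution time over all processors, after which the standard longest-path computation for homogeneous machines yields the critical path length.

\begin{verbatim}
# Minimal sketch: critical path length under the zero-communication
# (or allocation-independent communication) assumption discussed above.
def critical_path_zero_comm(tasks, parents, comp_cost):
    """tasks: vertex ids in topological order;
    parents[t]: list of predecessors of t;
    comp_cost[t][p]: execution time of task t on processor p."""
    longest = {}                       # longest path ending at each task
    for t in tasks:
        w = min(comp_cost[t])          # fastest processor for this task
        best = max((longest[p] for p in parents[t]), default=0.0)
        longest[t] = best + w
    return max(longest.values())       # critical path length

# Example: a diamond-shaped DAG on two processor classes.
comp = {0: [2, 5], 1: [4, 3], 2: [6, 9], 3: [1, 2]}
preds = {0: [], 1: [0], 2: [0], 3: [1, 2]}
print(critical_path_zero_comm([0, 1, 2, 3], preds, comp))  # 2 + 6 + 1 = 9
\end{verbatim}

The general case, where communication costs depend on the chosen processors, is what the remainder of this section and our dynamic programming algorithm address.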
There is an exponential number of possible allocations of processors to tasks. In section~\ref{sec:paper-crit-path-crit-path-pape-dyna-prog-solu} we present a polynomial time dynamic programming algorithm that finds a critical path considering both heterogeneous computation and communication costs. First, we present several definitions to formalize the problem. \subsection{Definitions} \label{sec:paper-crit-path-pape-defi} Table~\ref{tab:nota} gives a list of notations and their descriptions that we will be using for the rest of this paper. A task graph is a weighted directed acyclic graph $G_t(V_t,E_t)$, in which each vertex $v_t \in V_t$ represents a program statement (or group of statements) or a kernel in the application, and each edge in \mbox{$E_t \subseteq (V_t \times V_t)$} represents communication between two vertices. The system resources (the processor graph) are represented by a weighted undirected graph $G_r(V_r,C_r)$, where each vertex $v_r \in V_r$ represents a processing element and each edge in $C_r \subseteq (V_r \times V_r)$ represents a communication link. For the sake of simplicity, in the rest of this paper we refer to a task $v_t \in V_t$ as $t_i$ and a processor $v_r \in V_r$ as $p_j$. \ifdefined \begin{mydef} We define an \textbf{assignment} of a task as its mapping onto a processor for execution. It differs from a schedule in the sense that we do not have to specify an order of execution, as we deal with one task and one processor. The \textbf{assignment} of a path is the set of individual assignments of all the tasks in the given path\footnote{In the rest of this paper, we use \textbf{assignment} and \textbf{mapping} interchangeably.}. \end{mydef} \begin{table}[t!] \begin{center} \resizebox{\linewidth}{!}{% \begin{tabular}{|c|c|} \hline \textbf{Notation} & \textbf{Description} \\ \hline DP & The dynamic programming array \\ \hline $|\mathcal{R}|$ & Number of processors \\ \hline $|\mathcal{T}|$ & Number of tasks \\ \hline $|\mathcal{E}|$ & Number of edges \\ \hline $\mathcal{P}(t_i)$ & Parents of the task $t_i$ \\ \hline $compCost(t_i, p_j)$ & Execution time of task $t_i$ on processor $p_j$ \\ \hline $\mathcal{L}(p_i)$ & Communication startup time of processor $p_i$\\ \hline \end{tabular} } \caption{List of notations} \label{tab:nota} \end{center} \end{table} \begin{mydef} $\mathcal{P}(t_i)$ denotes the set of parents (commonly referred to as the set of immediate predecessors in the literature) of task $t_i$ in the given DAG. Any task that has no parent is called a \textit{source task} or an \textit{entry task}. Similarly, any task that has no children is identified as a \textit{leaf node} or an \textit{exit task}. \end{mydef} \else \begin{table}[t!]
\begin{center} \resizebox{\linewidth}{!}{% \begin{tabular}{|c|c|} \hline \textbf{Notation} & \textbf{Description} \\ \hline $compCost(t_i, p_j)$ & Execution time of task $t_i$ on processor $p_j$ \\ \hline $\mathcal{L}(p_i)$ & Communication startup time of processor $p_i$\\ \hline DP & The dynamic programming array \\ \hline $|\mathcal{R}|$ & Number of processors \\ \hline $|\mathcal{T}|$ & Number of tasks \\ \hline $|\mathcal{E}|$ & Number of edges \\ \hline $\mathcal{P}(t_i)$ & Parents of the task $t_i$ \\ \hline $EST(t_i, p_j)$ & Earliest start time of task $t_i$ on processor $p_j$\\ \hline $EFT(t_i, p_j)$ & Earliest finish time of task $t_i$ on processor $p_j$\\ \hline \end{tabular} } \caption{List of notations} \label{tab:nota} \end{center} \end{table} \fi \begin{mydef} We define \textbf{communication cost} between task $t_k$ on processor $p_l$ and task $t_i$ on processor $p_j$ as, \begin{equation} \label{eq:crit-path-comm-cost} C_{comm}(\{t_k, p_l\}, \{t_i, p_j\}) \\ = \begin{cases} \mathcal{L}(p_l) + \dfrac{data_{t_k, t_i}}{c_{p_l, p_j}} & \text{, $p_j \neq p_l$} \\ 0 & \text{, $p_j = p_l$} \end{cases} \end{equation} \noindent where $\mathcal{L}(p_l)$ is the setup time associated with a processor every time it has to send data; $data_{t_k, t_i}$ represents the data size that has to be sent from task $t_k$ to task $t_i$. Similarly, $c_{p_l, p_j}$ is the bandwidth between processor $p_l$ and $p_j$. \end{mydef} \noindent The critical-path of a DAG is conventionally~\cite{arabnejad2014list} defined as following, \begin{mydef} \label{def:crit-path-defi} \textbf{Critical-Path} (CP) of a DAG is the longest path from the entry node to the exit node in the application graph. The minimum critical-path length ($CP_{MIN}$) is computed by considering the minimum computational costs of each task in the critical path. \end{mydef} This definition of a critical-path suggests that the critical-path is a property \textit{only} of the application DAG. Although this holds true in the homogeneous setting, it breaks down in heterogeneous setting. We examine this through the following lemmas and their proofs. \begin{lemma} \label{lemm:crit-path-inde} The critical path cannot be independent of its mapping to the processors in a heterogeneous parallel machine \end{lemma} \begin{proof} In a heterogeneous setting, the execution time of tasks in the DAG is given by $compCost(t_i, p_j)$ which is a cell in a two-dimensional matrix of order $|\mathcal{T}|\times|\mathcal{R}|$. This implies that we cannot reduce the vector $compCost(t_i)$ into a scalar value and hence the vertex weights of the DAG cannot be known a priori as the weights do not exist independent of a mapping for the tasks onto processors. Weights cannot be given a single value independent of a mapping, and hence the critical path can not exist independent of its mapping. \end{proof} \subsection{Defining the critical path} In section~\ref{sec:paper-crit-path-pape-back}, we saw that existing definitions of the critical path are inadequate for the heterogeneous execution setting. The definition of critical-path, which was valid in the homogeneous setting, is being used to estimate the critical path in the heterogeneous setting. In this section, we will attempt to redefine the critical-path for the newer setting. \ifdefined Traditionally, the \textit{earliest finish time} of a task is the earliest time at which it can finish under a legal partial schedule and is defined as follows. 
\begin{mydef} Earliest Start Time (EST) is defined as the earliest time in the schedule at which a given task $t_i$ can start \begin{equation} \label{eq:crit-path-est} EST(t_i, p_j) = max(avail[j], \max_{t_m \in pred(t_i)}(AFT(t_m) + c_{m,i})) \end{equation} \end{mydef} \begin{mydef} Consequently, Earliest Finish Time (EFT) is defined as the earliest time at which the task $t_i$ can finish, \begin{equation} \label{eq:crit-path-eft} EFT(t_i, p_j) = EST(t_i, p_j) + w(t_i, p_j) \end{equation} \noindent where $w(t_i, p_j)$ is the execution time of task $t_i$ on processor $p_j$. \end{mydef} Although this definition conveys the meaning of the earliest start time and end times, they are not adequate when it comes to defining the earliest start and finish times of tasks when calculating the critical-path. Several pieces of literature~\cite{kwok1996dynamic,shi2006scheduling,arabnejad2014list} have redefined the earliest finish time to suit their needs. In order to define the critical-path in a heterogeneous setting, we also redefine the start and finish times of tasks. \fi \begin{mydef} A \textbf{Critical-Path} (CP) is the longest path in the DAG when it has a corresponding optimal partial assignment \end{mydef} This definition of the critical-path stems from Lemma~\ref{lemm:crit-path-inde} which states that the CP cannot be independent of its partial assignment. Hence, we define our CP as the path that has the longest path length when all the tasks in that path are mapped in the most effective way possible \begin{mydef} From these two restrictions imposed by the new definition of the CP, we redefine our earliest finish time as Critical Earliest Finish Time (CEFT) : \begin{multline} \label{eq:crit-path-ceft} DP(t_i, p_j) = \\\textbf{max}_{T_k \in \mathcal{P}(t_i)} \{ \textbf{min}_{p_l \in \{0, \cdots, |\mathcal{R}|-1\}} \{compCost(t_i, p_j)\\ + DP(T_k, p_l)+ comm(\{t_i, p_j\}, \{T_k, p_l\})\} \} \end{multline} \end{mydef} Note that the above definition~\ref{eq:crit-path-ceft} is entirely satisfactory in two circumstances: when we consider the length of a single path within the task graph and when tasks can be duplicated onto multiple processors. When we consider multiple paths through the graph simultaneously and task duplication is not possible, the problem becomes more difficult. In the next section we deal with the case where paths are considered in isolation or tasks may be duplicated. \section{Introduction} \label{sec:paper-crit-path-intr} \IEEEPARstart{S}{cheduling} of tasks onto resources is one of the most fundamental problems in parallel computing. A \textit{critical path} is the longest chain of dependent tasks in a graph. It is impossible to execute the graph in less time than the length of the critical path, even with infinite resources. Many successful scheduling algorithms~\cite{kohler1975preliminary, topcuoglu2002performance} prioritise tasks on the critical path. Although the critical path is well defined when the computing resources are all of the same type, a problem arises on heterogeneous parallel computers. A given task usually has a different execution time depending on the type of computing resource upon which it is executed. For example, a heterogeneous parallel machine may consist of a small number of powerful processors that execute tasks quickly, and a larger number of low-power processors that are more energy efficient. 
Without a fixed execution time for each task, the critical path is poorly defined\footnote{However, if there is no communication cost, the problem is simple to solve. One can simply form a new graph by placing every task on a processor that minimizes its finish time and calculate the longest path in this resultant graph.}. In addition, communication time between processors typically varies in real machines. For example, communicating between cores on a single chip usually has a low cost, whereas communicating over the network in a cluster is much slower and hence has a higher cost. \ifdefined Existing algorithms to compute the critical path of a graph for heterogeneous machines make simplifying assumptions. One simple strategy is to take each of the execution times of a given task on various processors and average them~\cite{kwok1996dynamic}. Another is to assume that all tasks on the critical path are executed on a single processor, and to simply choose the processor that minimizes the critical path length \cite{topcuoglu2002performance}. \fi In this paper we propose a definition of the critical path for heterogeneous processors that is much closer to the intuitive idea of the shortest possible execution time, based purely on dependencies. A practical problem with our definition is that for each task we need to consider all of the different processors that it can be executed on. This could potentially lead to an exponential search space. However, we provide a dynamic programming algorithm that can consider all possible allocations in polynomial time. Our main contributions are: \begin{itemize} \item We propose a new definition of the critical path for task graphs on parallel computers with heterogeneous execution and communication times. \item We provide a novel dynamic programming algorithm for finding our critical path (Critical Earliest Finish Time (CEFT)) in $O(\mathcal{P}^2|\mathcal{E}|)$ time, where $\mathcal{P}$ is the number of classes of processors and $|\mathcal{E}|$ is the number of edges in the task graph. We evaluate our new approach and find that the quality of the critical paths is better than that of the paths produced by the state-of-the-art CPOP algorithm, with our critical paths being shorter in most cases. \item We also extend our critical path finding algorithm into a DAG scheduling algorithm (\textit{CEFT-CPOP}) by substituting the path found by our algorithm (with its corresponding partial assignment) into the CPOP algorithm. Our experiments suggest that our algorithm (CEFT-CPOP) produces smaller makespans in 15.9\%, 75.94\%, 90.29\% and 89.69\% of the experiments in four different models of parallel workloads. \end{itemize} \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This work was supported by Science Foundation Ireland grant 12/IA/1381 and IRC Enterprise Partnership Scheme in collaboration with IBM Research, Ireland. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{A dynamic programming solution} \label{sec:paper-crit-path-crit-path-pape-dyna-prog-solu} \begin{algorithm}[t!]
\scriptsize \caption{Identify \& map the critical-path of a given DAG onto a set of processors} \label{algo:map-path} \begin{algorithmic}[1] \Require Given application graph is a DAG and vertex IDs are in topological order \Function{FindAndMapCriticalPath}{$G_t, G_r$} \For {$t_i=0 \cdots |\mathcal{T}|$} \If {current task is a source task} \State Set DP($t_i$, $p_j$) as the execution time of $t_i$ on each processor \Else \For {$p_j=0 \cdots |\mathcal{R}|$} \For {$t_k \in pred(t_i)$} \For {$p_l=0 \cdots |\mathcal{R}|$} \State $t_i$ is the current task under investigation \State $p_j$ is the current processor to which $t_i$ is mapped currently \State $t_k$ is a parent of $t_i$ \State $p_l$ is the processor to which $t_k$ is mapped currently \Let{$commCost$}{$comm(\{t_k, p_l\},\{t_i,p_j\})$} \Let{$compCost$}{$compCost(t_i,p_j)+DP(t_k,p_l)$} \Let{$totalCost$}{$compCost+commCost$} \EndFor \State Choose the processor $p_l$ that minimizes the EFT of $t_k$ \EndFor \State From among these minimized choices of $(t_k, p_l)$ pairs, choose the one that maximizes eq~\ref{eq:crit-path-ceft} and call it $(t_k^{max},p_j^{min})$ \Let{$DP(t_i,p_j)$}{$totalCost$ belonging to the $(t_k^{max},p_j^{min})$ pair} \Let{$DP(t_i,p_j).path$}{$DP(t_k^{max},p_j^{min}).path$} \State $DP(t_i,p_j).path.push\_back((t_i,p_j))$ \EndFor \EndIf \EndFor \For {$t_s \in listOfSinks$} \For {$p_s=0 \cdots |\mathcal{R}|$} \Let{$cost$}{$DP(t_s,p_s)$} \EndFor \Let{$p_s^{min}$}{$p_s$ that minimizes $cost$} \EndFor \State From among these minimized costs, choose the task $t_s^{max}$ that maximizes the minimized cost \State The critical-path is the path represented by $DP(t_s^{max},p_s^{min}).path$ \EndFunction \end{algorithmic} \end{algorithm} Our definition~\ref{eq:crit-path-ceft} of the critical earliest finish time (CEFT) includes an optimal mapping of tasks to processors, and allows us to define a more accurate critical path for heterogeneous architectures than previous approaches. However, there are an exponential number of allocations of tasks to processors, so any algorithm that considers all mappings individually will require exponential time. In this section, we present a dynamic programming approach that computes the length of a path in the dependence graph using our CEFT-based definition of dependence length. Using this approach we formulate a polynomial time algorithm that finds the CEFT-longest dependence path in the task graph. In the case where tasks can be duplicated to reduce communication costs, a longest path is also a critical path in the task graph. Algorithm~\ref{algo:map-path} traverses the vertices of the task graph in topological order. The algorithm computes the critical earliest finish time ($CEFT(t_i,p_j)$) of each task, $t_i$, on each of the processors, $p_j$, in the machine. Note that where the machine contains groups of multiple identical processors (with the same computation and communication times) the entire group can be considered a single processor for the purposes of computing a critical path. A critical path computes a lower bound on the execution time of the task graph based solely on the length of dependence chains rather than on resource constraints. Therefore, having multiple instances of the same class of processor does not affect the critical path. Where a task, $t_i$ has no predecessors (i.e. a source task) its $CEFT(t_i,p_j)$ on processor $p_j$ is simply the execution time of that task on the processor. 
Where a task has one or more parents in the task graph, the task cannot start executing until each predecessor task, $t_k$, has completed and its results have been communicated. However, we do not have a single critical earliest finish time for $t_k$. Instead, we have a separate $CEFT(t_k,p_l)$ for each processor $p_l$. To compute $CEFT(t_i, p_j)$, that is, the earliest finish time of task $t_i$ on processor $p_j$, we consider each of the possible processor allocations, $p_l$, of the predecessor task $t_k$. We select the allocation of $t_k$ to $p_l$ that results in the lowest value for $CEFT(t_i, p_j)$, taking into account the execution time of task $t_i$ on processor $p_j$ and the communication time between processors $p_l$ and $p_j$. Where tasks $t_k$ and $t_i$ are allocated to the same processor, we assume that communication costs are zero. The resulting algorithm is significantly more complicated than the critical path algorithm for homogeneous architectures and has a higher time complexity. However, the time complexity is polynomial, and we find the critical earliest finish time for each task without having to separately enumerate all possible allocations of all tasks. The objective of our algorithm is to redefine the earliest finish time to more closely represent the intuitive idea behind the shortest possible execution time based on dependencies. To this end, we do not fix the allocation/mapping of a task to a processor as we progress through the algorithm by iterating over the dynamic programming array. We instead compute where the parents of the current task should be mapped to minimize the $CEFT$ of the current task on the current processor. This is done in lines 16--18 of the algorithm. We set up our loops to iterate over four variables: $t_i$ (the current task), $p_j$ (the current task's current processor), $t_k$ (the current task's current parent) and $p_l$ (the current parent's current processor). From lines 6--12, it is evident that for every combination of a current task and a specific processor for the current task ($t_i, p_j$), we examine all possible assignments of all possible parents and choose the set of assignments of the parents which minimizes the earliest finish time of task $t_i$ on processor $p_j$. We stress that this \textit{does not} fix the assignment of the current task to the current processor; it simply examines it. This also does not fix the assignment of any of the parents of task $t_i$. The algorithm only fixes the assignment of the parent that has led to the minimization of the earliest finish time of task $t_i$, locally to the pair $(t_i, p_j)$ under consideration\ifdefined\footnote{This gives us the major added benefit of incorporating lookahead features at a much lower complexity.~\cite{arabnejad2014list} claim to incorporate lookahead features into their algorithm by calculating an \underline{o}ptimistic \underline{c}ost \underline{t}able (OCT) at $O(|\mathcal{R}|^2|\mathcal{E}|)$ complexity. The original lookahead based scheduling algorithm~\cite{bittencourt2010dag} incorporates lookahead features at a much higher complexity of $O(|\mathcal{T}|^4|\mathcal{R}|^3)$. We incorporate lookahead features into our algorithm at a much lower cost in complexity. We ensure that tasks being inspected are not immediately assigned to a processor. Hence any given task has a set of potential finish times, based on the assignment of its parent and of itself.
Only when all exit tasks have been visited and one is chosen do we fix the assignment of all the tasks in the CP. In doing so, we gain greater flexibility and the ability to find the right critical path.}\fi. This is reflected in lines 19--20. Once the latest completing parent ($t_k^{max}$) has been identified, we copy the path information from this parent and append the pair ($t_i, p_j$) onto this path information. Lines 21--26 show how the critical-path can be fixed by examining the dynamic programming array ($DP$) entries of all the sink/exit nodes and identifying $DP(t_s^{max},p_s^{min})$. This gives an idea of how the critical-path is always in a state of flux and is not fixed until the algorithm finishes. \begin{figure}[t!] \centering \includegraphics[scale=0.15]{figures/drawn/task-dupl-grap} \caption{Section of a sample application graph} \label{fig:paper-crit-path-task-dupl-grap} \end{figure} \subsection{Task duplication} \label{sec:paper-crit-path-task-dupl} Our algorithm finds the critical path by identifying the longest dependence path through the task graph. We use our CEFT definition of path length, meaning the time required for computation and inter-processor communication, assuming an optimal allocation of the tasks on the path to various classes of processors. However, this definition of the longest path contains an important assumption. We assume that we can compute the length of two different paths independently. However, the length of a path depends on the allocation of the tasks in the path to processors. If a single task appears on multiple different paths, the different paths may require a different allocation of that task to minimize path length. For instance, let us consider the section of an application graph shown in Figure~\ref{fig:paper-crit-path-task-dupl-grap}. Let us assume that all the tasks denoted using concentric circles are on the critical path of the application. For the sake of simplicity, let us assume that the amount of data to be transferred from task $t_j$ to tasks $t_k$ and $t_l$ is $\infty$. In this scenario, when $t_k$ is being evaluated, its parent $t_j$ will be assigned to the same processor that $t_k$ is assigned to, and the same holds true in the case of $t_l$. When the final critical path is decided as we reach the exit tasks of this application graph, there might be a situation where $t_k$ and $t_l$ are assigned to different processors. If this situation arises, the task $t_j$ (even though it is not on the critical path) has to be assigned to two different processors to make sure that $t_k$ and $t_l$ stay on the critical path. Many existing scheduling algorithms use task duplication~\cite{kruatrachue1988grain,ahmad1998exploiting} to reduce communication costs on heterogeneous parallel architectures. Where a parent task has more than one successor, it can sometimes improve the schedule to duplicate the parent so that identical copies of the task execute in parallel on two different processors. This can reduce communication time between processors, particularly on heterogeneous architectures where different tasks are often suited to different types of processor. Where task duplication is used, our Algorithm~\ref{algo:map-path} will compute a correct critical path in all cases. However, where the subsequent scheduling approach does not allow task duplication, our algorithm may result in an overly optimistic critical path length. In the case where task duplication is not allowed, each task must be allocated to exactly one processor.
As with the case where tasks may be duplicated, we must deal with two sets of costs when computing path lengths. We must consider the cost of executing a given task on a given processor, and the cost of the communication between tasks on each possible pair of processors. Where the communication costs between processors can vary arbitrarily, this is equivalent to the Partitioned Boolean Quadratic Problem (PBQP), which is known to be NP-complete~\cite{Scholz:2002}. Thus, in the absence of task duplication, finding a critical path in a task graph on parallel architectures with heterogeneous execution times and communication costs is NP-complete. \section{Complexity analysis} \label{sec:paper-crit-path-crit-path-pape-comp-anal} In this section we analyse the space and time complexity of the dynamic programming method of algorithm~\ref{algo:map-path} proposed in Section~\ref{sec:paper-crit-path-crit-path-pape-dyna-prog-solu}. The outermost loop in the algorithm runs from $t_i=0 \cdots |\mathcal{T}|$. This loop inspects all the tasks in the DAG. The second level loop inspects all possible processors for the current task $t_i$. This implies that $p_j$ runs from $0 \cdots |\mathcal{R}|$. For each \mbox{(task, processor)} pair, we need to inspect every parent of $t_i$, as the algorithm fixes the parents' processors based on their child's requirements. This necessitates that $t_k$ runs over $pred(t_i)$. To fix a parent's processor, we need to inspect all the processors again to see which processor for the parent gives the earliest start for the current child. Hence, $p_l$ runs from $0 \cdots |\mathcal{R}|$. Taking the upper limits of these nested loops, the complexity of the entire algorithm is $|\mathcal{T}|\times|\mathcal{R}|\times n_{pred(t_i)}\times|\mathcal{R}|$, where $n_{pred(t_i)}$ is the number of parents of a given task. In the general case, this can be taken to be the average in-degree of the application DAG. The average in-degree of a DAG can be further written as $|\mathcal{E}|/|\mathcal{T}|$. Hence the complexity of the algorithm can be simplified to $O(|\mathcal{R}|^2|\mathcal{E}|)$. In the worst case, where the DAG is a fully connected graph, the number of edges in the graph is equal to $|\mathcal{T}|^2$. In this case, the complexity of the algorithm increases to $O(|\mathcal{T}|^2|\mathcal{R}|^2)$, which is higher than the $O(|\mathcal{T}|^2|\mathcal{R}|)$ complexity of other list scheduling heuristics such as HEFT and CPOP. However, if the processors can be divided into $\mathcal{P}$ classes (where processors in each class have identical computation and communication costs), then the algorithm only needs to deal with the number of such classes of processors rather than $|\mathcal{R}|$. This is feasible because our algorithm is a critical path finding algorithm and hence does not need to keep track of processor availability. When mapping a task from the critical path, all processors can be treated as free. This greatly reduces the computational complexity of our algorithm from $O(|\mathcal{R}|^2|\mathcal{E}|)$ to $O(\mathcal{P}^2|\mathcal{E}|)$, where $\mathcal{P}$ is the number of types of processors. The space complexity of the algorithm at first glance is $O(|\mathcal{R}||\mathcal{T}|)$, as $DP$ is a two-dimensional array of size $|\mathcal{T}|\times|\mathcal{R}|$. But this can be further reduced by storing the path information of only a frontier that moves down the DAG.
Since we incorporate the path information from the previous states into the current state, we can ignore the state information of all the $DP$ elements that have been absorbed into other $DP$ elements. This corresponds to a frontier that moves down the DAG. Hence, the space complexity can be reduced to $O(\beta|\mathcal{R}|)$, where $\beta$ is the width parameter of the graph. \section{From critical path to makespan: CEFT-CPOP} \label{sec:paper-crit-path-from-crit-path-to-make} \begin{algorithm}[t!] \scriptsize \caption{The critical path on a processor (CPOP) algorithm} \label{algo:cpop} \begin{algorithmic}[1] \Function{CPOP}{} \State Set the comp costs of tasks and comm costs of edges to mean values \State Compute $rank_u$ of tasks by traversing graph upward, starting from exit task \State Compute $rank_d$ of tasks by traversing graph downward, from entry task \State Compute $priority(t_i)=rank_d(t_i)+rank_u(t_i)$ for each task $t_i$ in the graph \State $|CP|=priority(t_{entry})$, where $t_{entry}$ is the entry task \State $SET_{CP}=\{t_{entry}\}$, where $SET_{CP}$ is the set of tasks on the critical path \State $t_k \leftarrow t_{entry}$ \While{$t_k$ is not the exit task} \State Select $t_j$ where (($t_j \in succ(t_k)$) and ($priority(t_j)==|CP|$)) \State $SET_{CP}=SET_{CP} \bigcup \{t_j\}$ \State $t_k \leftarrow t_j$ \EndWhile \State Select the critical-path processor ($p_{cp}$) that minimizes $\sum_{t_i \in SET_{CP}} w_{i,j}$ \State Initialize the priority queue with the entry task \While{there is an unscheduled task in the priority queue} \State Select the highest priority task $t_i$ from the priority queue \If{$t_i \in SET_{CP}$} \State Assign the task $t_i$ to $p_{cp}$ \Else \State Assign the task $t_i$ to the processor $p_j$ which minimizes $EFT(t_i, p_j)$ \EndIf \State Update the priority-queue with the successors of $t_i$, if they become ready tasks \EndWhile \EndFunction \end{algorithmic} \end{algorithm} CEFT is a critical path finding algorithm for application DAGs on heterogeneous processors. We extend this critical path finding algorithm into a DAG scheduling algorithm for heterogeneous processors by incorporating the critical path obtained from CEFT into CPOP. We have cleverly named this \textit{CEFT-CPOP}. Let us recall the CPOP algorithm from the brief discussion in section~\ref{sec:paper-crit-path-rela-work}. It is a critical path based list scheduling algorithm that calculates its critical path based on mean values of the computation and communication costs, as shown in line 2 of algorithm~\ref{algo:cpop}. In lines 3--5, the authors of CPOP calculate the priority function, which orders the tasks according to their relative importance. The entry task is added to the CP and the graph is then traversed downward from the entry task. Then, a child $t_j$ of the entry task that has the same priority value as the entry task is added to the critical path. Next, $t_j$'s children are examined, the one with the same priority value is added to the path, and the algorithm continues until it reaches the exit task. This path is then assigned to a single processor $p_{cp}$ in line 13 of the algorithm, in an attempt to produce the smallest possible critical path length for the tasks in the critical path\ifdefined\footnote{As we have discussed before, we believe that the choice of tasks in this path is faulty, as it is based on average values.}\fi.
Once the path has been assigned to the processor that minimizes the path length, a priority queue initialized with the entry task is examined. The task that has the highest priority function value is popped out of this queue. If this task is part of the critical path calculated earlier, it is scheduled on $p_{cp}$; otherwise it is scheduled on the processor $p_j$ which minimizes the $EFT(t_i,p_j)$. If any of the successors of the task that was just scheduled are now ready to be scheduled, they are added to the priority queue, and the algorithm continues until all the tasks in the priority queue have been scheduled. To extend our critical path finding algorithm into a scheduling algorithm, the only modification we make to the CPOP algorithm concerns how the critical path is found. Hence, we remove lines 2 -- 13 of the CPOP algorithm and assign $SET_{CP}$ to the critical path found by our algorithm. The rest of the algorithm remains the same. Our main comparison in terms of makespan and related metrics is between CEFT-CPOP and CPOP. This provides a fair basis for comparing the effectiveness of the critical path, as the only difference between the two algorithms is the way the critical paths are calculated. We also provide a comparison against HEFT, to show how far our results are from the state-of-the-art scheduling algorithm.

\section{Related Work}
\label{sec:paper-crit-path-rela-work}

In the past, the intractability of finding optimal solutions for the DAG scheduling problem has been well explored~\cite{kohler1975preliminary,michael1979computers,bruno1976computer}. As a result, efforts in the recent past have focused on finding sub-optimal solutions in shorter runtimes, using heuristics. On the one hand, heuristic solutions based on guided search of the solution space have been studied in~\cite{daoud2005gats,gao2008hybrid,sathappan2011modified,sanyal2005match,orsila2006parameterizing,pan2015improved}, but these are generally computationally intensive. On the other hand, list scheduling algorithms, which are not as computationally expensive, produce results not far from the optimal~\cite{braun2001comparison}. In this section, we introduce key critical path based static list scheduling algorithms and methods of calculating the critical path. The idea of using critical paths in heuristics for scheduling DAGs has existed for a long time~\cite{kohler1975preliminary,hu1961parallel,lockyer1969introduction}. The conventional definition of the critical path, as given in Definition~\ref{def:crit-path-defi}, is as follows: \textit{the \textbf{Critical-Path} (CP) of a DAG is the longest path from the entry node to the exit node in the application graph}. Existing algorithms to compute the critical path of a graph for heterogeneous machines make simplifying assumptions. As mentioned before, a simple strategy is to take the execution times of a given task on the various processors and average them \cite{kwok1996dynamic}. Another~\cite{topcuoglu2002performance} is to assign all tasks on the critical path to a single processor, and to simply choose the processor that minimizes the critical path length. The latter approach also avoids having to consider communication costs because all tasks are assumed to be on the same processor. However, for some scenarios these algorithms perform poorly, as we discuss in section~\ref{sec:paper-crit-path-expe-resu}.
In general, the approach of calculating critical paths based on averages can give a result that is longer or shorter than the \textit{true} critical path. Moreover, adding a new processor can radically change the critical path, which is not handled well by the average-based approach. \ifdefined In the past decade, there have been two \ifdefined\footnote{These algorithms are DAG scheduling algorithms that are based on the idea of critical path. Since the critical path is an integral part of their work, they define said critical path, hence making it relevant to the work presented in this paper. At this juncture, we would like to stress that our algorithm, presented in the previous sections, is strictly a critical path finding algorithm which can be extended to form a DAG scheduling algorithm.}\fi main critical path based scheduling algorithms: the Dynamic Critical Path algorithm (DCP)~\cite{kwok1996dynamic} and the Critical Path On a Processor (CPOP) algorithm~\cite{topcuoglu2002performance}. Kwok et al. in 1996 developed the DCP algorithm, which used the idea of the critical path to solve the problem of DAG scheduling. In their algorithm, the authors do not calculate the critical path of the application before scheduling. During the scheduling process, tasks from the graph can get dynamically added to the critical path (CP). In order to distinguish the CP at an intermediate step in scheduling from the original CP, Kwok et al. term the CP at an intermediate step the \textit{Dynamic Critical Path} (DCP). They then proceed to construct a theoretical basis by which they either remove nodes from the DCP or add nodes to it so as to monotonically reduce the schedule length. That is, at every consecutive step of the scheduling process, the intermediate schedule length (the DCPL) remains the same or decreases. The CPOP algorithm borrows a lot of ideas from its superior counterpart, HEFT~\cite{topcuoglu2002performance}. It differs from HEFT by redefining its ranking function. The rank of every task is calculated as the sum of its static upward rank and static downward rank. These two ranks signify the distance of the given task from the exit task and the entry task respectively, and they are in turn used to calculate the critical path. However, using this ranking function has an advantage: the rank scores of the entry task and the exit task are the same and equal to the length of the critical path. The critical path can then be easily found by traversing the graph depth-first and looking for tasks that have the same ranking score. Once the critical path is found, it is scheduled onto the single processor $p_j \in P$ that minimizes $\sum_{t_i \in CP} w(t_i, p_j)$. This is the second biggest shortcoming of CPOP: by restricting the tasks from the critical path to a single processor, the authors take away the ability to explore different assignments of the tasks in the CP that could potentially give a lower schedule length and hence a lower makespan. Once the CP is scheduled, the processor selection phase proceeds as defined in HEFT. For each task $t_i$ that is not on the CP, the processor $p_j$ that minimizes the earliest finish time of task $t_i$ on processor $p_j$ is chosen. \fi
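For concreteness, the sketch below (in R) computes the mean-cost upward and downward ranks, the resulting priorities, and the equal-priority critical path used by algorithm~\ref{algo:cpop}. The recurrences follow the standard definitions of $rank_u$ and $rank_d$ from HEFT/CPOP~\cite{topcuoglu2002performance}; the data layout (mean task costs \texttt{w\_bar}, mean edge costs \texttt{c\_bar}, adjacency lists, tasks in topological order with task 1 the entry task) is an illustrative assumption rather than a reference implementation.
\begin{verbatim}
# Illustrative sketch of CPOP's ranking (standard HEFT/CPOP recurrences).
# w_bar[i]   : mean computation cost of task i
# c_bar[i,j] : mean communication cost of edge (i, j)
# succ, pred : adjacency lists; tasks topologically ordered, 1 = entry task
cpop_priorities <- function(w_bar, c_bar, succ, pred) {
  n <- length(w_bar)
  rank_u <- numeric(n)
  rank_d <- numeric(n)
  for (i in n:1) {                      # upward rank, from the exit task
    up <- vapply(succ[[i]], function(j) c_bar[i, j] + rank_u[j], numeric(1))
    rank_u[i] <- w_bar[i] + if (length(up) > 0) max(up) else 0
  }
  for (i in 1:n) {                      # downward rank, from the entry task
    dn <- vapply(pred[[i]], function(j) rank_d[j] + w_bar[j] + c_bar[j, i],
                 numeric(1))
    rank_d[i] <- if (length(dn) > 0) max(dn) else 0
  }
  list(priority = rank_u + rank_d, cp_length = rank_u[1] + rank_d[1])
}

# Walk down the DAG, always following a child whose priority equals |CP|
extract_cp <- function(priority, succ, cp_length, tol = 1e-9) {
  path <- 1
  t_k  <- 1
  while (length(succ[[t_k]]) > 0) {
    kids <- succ[[t_k]]
    t_k  <- kids[which(abs(priority[kids] - cp_length) < tol)[1]]
    path <- c(path, t_k)
  }
  path
}
\end{verbatim}
If several children share the critical-path priority, the path extracted above depends on which matching child is visited first; this tie-breaking choice is implementation dependent.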
\section{Introduction} Topic models are probabilistic models for dimensionality reduction of count data \citep{blei2003latent}. They are widely used in modern biostatistics, finding application in population genetics, genome-wide association studies, metabolomics, and microbiota studies \citep{al2019inference, reder2021supervised, leite2020you, gonzalez2019cistopic, sankaran2019latent}. These models are appealing because they are more expressive than clustering yet have simple interpretations \citep{airoldi2014introduction}. Like clustering, topic models provide a small set of ``prototypical'' data points; this enables summarization of the overall collection. Unlike clustering, where each sample must belong to exactly one cluster, topic models support varying grades of membership. Therefore, samples are allowed to smoothly blend from one prototype to another. Alternatively, topic models can be viewed as a form of constrained dimensionality reduction, where factors and loadings are constrained to lie on the probability simplex \citep{carbonetto2021non}. The sum-to-one constraint can make the results more interpretable than standard PCA, NMF, or factor analysis: each sample can be written as a mixture of underlying types, and each topic is a probability distribution across data dimensions. For example, for microbiota data, each topic can be interpreted as a sub-community of bacteria and each sample is a mixture of a few underlying sub-communities. Like most clustering and dimensionality reduction methods, topic models come with a hyperparameter, $K$, that controls the complexity of the resulting fit, and choosing a good value of $K$ to aid downstream analysis remains a challenge. Past work has focused on automatic selection of this hyperparameter, typically by referring to the marginal likelihood of a test set \citep{wallach2009evaluation, kass1995bayes}. In this study, we explore an alternative, a process we call \emph{topic alignment} (Figure \ref{fig:annotated_alignment}), which is based on describing how models fit across a range of $K$ relate to one another. \begin{figure} \centering \includegraphics[width=\textwidth]{figure/sketches/alto_sketches_annotated_alignment.png} \caption{How to read a topic alignment. Construction of weights is discussed in Section \ref{sec:alignment} and paths are defined in Subsection \ref{subsec:paths}.} \label{fig:annotated_alignment} \end{figure} This reframing has appeared in previous literature, though typically in the context of new models, rather than new algorithms applied to existing models. For example, a hierarchical extension of topic models \citep{blei2003hierarchical} provides a similar multiscale interpretation of topic structure. However, computational challenges have made these models somewhat difficult to extend and apply, compared to fixed $K$ topic models. In the hierarchical clustering context, a comparison across choices of $K$ is central to the HOPACH algorithm \citep{pollard2005cluster}, which evaluates cluster stability using a bootstrap procedure. Instead of introducing a novel multiscale model, we focus on post-estimation comparison of an existing ensemble. This is in the spirit of methodology for comparing clusterings \citep{meilua2007comparing, wagner2007comparing}, which introduce metrics for navigating the space of clustering results. Similarly, a description of the relationship between models across choices of $K$ is provided by graphical posterior predictive analysis \citep{gelman2004exploratory, gelman2013philosophy}. 
A posterior predictive check can highlight the lack of fit at particular choices of $K$, in addition to guiding the selection of $K$. We also note a connection to Tukey’s process of iterative data structuration \citep{tukey1977exploratory, holmes1993comment, holmes2018modern}. Alignment of models across scales naturally supports a coarse-to-fine analysis, ensuring that subtle patterns can be related to their overall context. First, this helps navigate the interpretability-expressivity trade-off associated with different choices of $K$. Models with small $K$ tend to be more interpretable, but may suppress interesting variation in the data. Conversely, models with large $K$ are more faithful to the data, but can be overwhelming to the analyst. By streamlining comparison across $K$, we get the best of both worlds — topics at large values of $K$ can be interpreted in context of the coarser ones to which they relate. Second, topic alignment is still relevant to the challenge of choosing $K$. In a way that is made precise in Section \ref{sec:diagnostics}, true topics tend to be more stable across choices of $K$, while spurious ones are more transient. Finally, alignment can help practitioners discover mis-specifications in topic models. For example, it is biologically plausible that microbiota data deviate from the topic model generative mechanism in the following ways: \begin{itemize} \item Elevated heterogeneity: Topic models assume that all samples are a mixture of a few underlying sub-communities. If samples have more heterogeneity than expected — e.g., due to unmodeled external factors — then topic models may be inappropriate, even for large $K$. \item Strain switching: There may be strains of a species that compete for the same ecological niche. If one strain is successful, then the other would be expected to be absent. This can result in sharp differences in strains within an otherwise well-defined community structure. \end{itemize} In Section \ref{sec:simulations}, we generate data inspired by these phenomena and apply topic alignment to them. We describe the degree to which the resulting alignments reflect underlying heterogeneity or switching. As long as the mis-specification is not too subtle, topic alignment can suggest specific structure to incorporate into follow-up analysis. In the remainder of this paper, we present the following contributions: \begin{itemize} \item The design of algorithms and diagnostics to support the comparison of topic models fit across a range of scales $K$. \item An analysis of the properties of these algorithms and diagnostics, using simulation experiments across several generative mechanisms. \item An illustration of topic alignment applied to a microbiota data analysis problem. \item The release of an R package, \texttt{alto}, implementing these methods. \end{itemize} Sections \ref{sec:background} and \ref{sec:methods} review relevant background material and present algorithms and diagnostics for topic alignment, respectively. Subsection \ref{subsec:package} briefly describes the \texttt{alto} package and the workflow that it supports. Section \ref{sec:simulations} presents a suite of simulation experiments, with an emphasis on exploring model mis-specification through alignment. Section \ref{sec:analysis} describes the application of topic alignment to a data analysis problem associated with the vaginal microbiota. 
This is a setting where high-level structure is dominated by a few well-known species, but where additional, systematic variation is present at finer scale.

\section{Background}
\label{sec:background}

We first review topic models. Then, we summarize approaches to compare probability distributions, which are used in Section \ref{sec:methods}.

\subsection{Latent Dirichlet Allocation}

Latent Dirichlet Allocation (LDA) is a flexible way to summarize high-dimensional count data \citep{blei2003latent}. Suppose that the data are made up of $N$ samples $x_{i} \in \naturals^{D}$. For example, in text analysis, these are the counts of $D$ words across $N$ documents\footnote{A table of all notation is given in the supplementary materials.}. In the data analysis given in Section \ref{sec:analysis}, these are the counts of $D$ Amplicon Sequence Variants (ASVs)\footnote{This is the number of times specific regions of the 16S rRNA gene have been sequenced -- see \cite{callahan_replication_2017} for details of 16S sequencing technology.} across $N$ samples collected from the study participants. Let $n_{i} = \sum_{d}x_{id}$ be the total count of sample $i$. Then, LDA supposes that each $x_{i}$ is drawn independently according to
\begin{align*}
x_i \vert \gamma_i &\sim \textnormal{Mult}\left(n_{i}, B\gamma_{i}\right) \\
\gamma_{i} &\sim \textnormal{Dir}\left(\lambda_{\gamma} \cdot \*1_{K}\right),
\end{align*}
where the $K$ columns $\beta_{k}$ of $B$ lie in the $D$-dimensional simplex $\simplex^{D}$ and are themselves drawn independently from
\begin{align*}
\beta_{k} \sim \textnormal{Dir}\left(\lambda_{\beta}\cdot \*1_{D}\right).
\end{align*}
In this mechanism, $\gamma_{i}\in \simplex^{K}$ can be interpreted as mixed-membership weights, with each $\gamma_{ik}$ giving the degree to which sample $i$ ``belongs'' to topic $k$. Since each $\gamma_{i}$ can vary continuously through the simplex, the model is more flexible than a simple clustering model, which would assign each sample to exactly one of $K$ clusters (i.e., the simplex corners). The three hyperparameters in this model are the number of topics $K$ and the prior parameters $\lambda_{\gamma}, \lambda_{\beta}$. Large $\lambda_{\gamma}$ and $\lambda_{\beta}$ result in Dirichlet distributions that place more mass near the uniform distribution. Small $\lambda_{\gamma}$ and $\lambda_{\beta}$ place more mass on edges and corners of the simplex, resulting in sparser $\gamma_{i}$ or $\beta_k$, respectively. In the case of microbiota analysis, each $\beta_{k}$ corresponds to a pattern of ASV abundance. Each sample $i$ is a mixture of these underlying communities, with mixing weights $\gamma_{i}$. Note that, though the topics are amenable to compositional interpretations (the $\beta_k$ lie on the simplex), the original count data are modeled directly, rather than initially transformed to centered-log-ratios, for example. This makes it possible to account for differential uncertainty in samples with high and low sequencing depth, and it decreases the amount of processing that takes place between raw data and final interpretation, reducing the risk of analysis errors.
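To make the generative mechanism concrete, the following sketch simulates from it in base R. The \texttt{rdirichlet()} helper (Dirichlet draws via normalized Gamma variables) and the particular parameter values in the example call are our own illustrative choices; this is not code from the \texttt{alto} package.
\begin{verbatim}
# Dirichlet draws via normalized Gamma variables
rdirichlet <- function(n, alpha) {
  g <- matrix(rgamma(n * length(alpha), shape = alpha),
              nrow = n, byrow = TRUE)
  g / rowSums(g)
}

# Simulate N samples from the LDA mechanism described above
simulate_lda <- function(N, D, K, n_i, lambda_gamma, lambda_beta) {
  B     <- t(rdirichlet(K, rep(lambda_beta, D)))   # D x K topic matrix
  gamma <- rdirichlet(N, rep(lambda_gamma, K))     # N x K memberships
  x <- t(sapply(seq_len(N), function(i) {
    rmultinom(1, n_i, as.vector(B %*% gamma[i, ])) # x_i | gamma_i
  }))
  list(x = x, B = B, gamma = gamma)
}

# e.g., 250 samples over 1000 dimensions with 5 topics (illustrative values)
sim <- simulate_lda(N = 250, D = 1000, K = 5, n_i = 5000,
                    lambda_gamma = 0.5, lambda_beta = 0.1)
\end{verbatim}
Counts generated in this way are the kind of input that the alignment methods of Section \ref{sec:methods} operate on.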
\subsection{Simplex Distances and Optimal Transport}

We next review methods for comparing probability distributions. These are useful in the LDA context, because the parameters $\gamma_i$ and $\beta_k$ all lie on the probability simplex. We first consider distances on the simplex. Let $p, q \in \simplex^{D}$ (i.e., two discrete probability distributions over $D$ categories). The Jensen-Shannon Divergence (JSD) between them is defined as
\begin{align*}
JSD\left(p, q\right) := \frac{1}{2}\left[\text{KL}\left(p\vert\vert \frac{1}{2}\left(p + q\right)\right) + \text{KL}\left(q \vert \vert \frac{1}{2}\left(p + q\right)\right)\right],
\end{align*}
where $\text{KL}\left(a \vert\vert b\right) := \sum_{i}a_i \log\left(\frac{a_i}{b_i}\right)$ is the Kullback-Leibler divergence between $a$ and $b$. The JSD can be viewed as a symmetrized version of the Kullback-Leibler divergence, allowing it to serve as a distance measure. Intuitively, for $p$ and $q$ to have low JSD to one another, samples from either distribution should have high probability under the averaged distribution $\frac{1}{2}\left(p + q\right)$. Alternatively, the cosine similarity $\text{cossim}\left(p, q\right) := \frac{p^{T}q}{\|p\|_{2}\|q\|_{2}}$ may be used. The numerator here is large when both $p$ and $q$ place high mass on the same coordinates, and the denominator is smallest when both $p$ and $q$ are close to uniform. Both the JSD and cosine similarity treat all coordinates of $\simplex^{D}$ symmetrically. They are also only defined when $p$ and $q$ have the same number of categories $D$. Alternatively, we may relax these constraints, requiring instead only a notion of pairwise similarity between coordinates in $p$ and $q$. This is formalized in optimal transport, which assigns costs for ``transporting mass'' between pairs of coordinates. Represent the costs of transporting mass between the $D$ coordinates of $p$ and the $D^\prime$ coordinates of $q$ by a matrix $C \in \reals_{+}^{D \times D^\prime}$. Then, the optimal transport between $p$ and $q$ is the coupling $\Pi$ minimizing
\begin{align*}
&\min_{\Pi \in \mathcal{U}\left(p, q\right)} \left<C,\Pi\right> \\
\mathcal{U}\left(p, q\right) := &\{\Pi\in \reals^{D \times D^\prime}_{+} : \Pi \*1_{D^\prime} = p \text{ and } \Pi^{T} \*1_{D} = q\},
\end{align*}
where $\left<A, B\right>$ is shorthand for the Frobenius inner product, $\text{tr}\left(A^T B\right)$. The smaller the transport cost $\left<C, \Pi\right>$, the more similar the distributions $p$ and $q$, with respect to the costs induced by $C$. A useful analogy is due to Kantorovich \citep{peyre2019computational}. Imagine there are $D$ mines and $D^\prime$ factories. An amount $p_i$ of raw material is produced by mine $i$; on the other hand, factory $j$ requires $q_j$ total input. Suppose $C$ captures the transport costs between all pairs of mines $i$ and factories $j$. Then, the optimal transport plan $\Pi$ specifies how much material produced by mine $i$ should be shipped to factory $j$.

\section{Methods}
\label{sec:methods}

In this section, we set up the problem of topic alignment, provide associated algorithms, and discuss an R package implementation. Although more general treatments are possible, we focus on the case that the topics are derived from a sequence of models with increasing $K$. Alignment across a sequence of models supports multiscale analysis: topics from models with large $K$ distinguish between subtle variations in samples, and an alignment shows how these topics are related to overview topics derived at small $K$. In Section \ref{sec:discussion}, we discuss how the methods proposed here could be generalized and applied for other purposes than multiscale analysis.
For example, topic alignment could be used to compare topics identified in different environment (\textit{i.e.}, datasets) or across different modalities (\textit{i.e.}, different types of data have been collected on the same samples). \subsection{Topic Alignment} \label{sec:alignment} Suppose we have estimated topics across an ensemble of LDA models $\mathcal{M}$. The topic alignment problem consists of constructing a weighted graph whose nodes are topics from across models and whose edge weights reflect the similarity between the topics. Formally, let $V$ be the set of topics across all models in $\mathcal{M}$. We suppose the investigator has specified pairs $e = \left(v, v^\prime\right) \in E$, where, $v, v^\prime \in V$ and $E$ is the set of edges in the topic alignment graph, of topics of interest to compare. Then, an alignment should provide weights $w: E \to \reals_{+} $ that are large when $v$ and $v^\prime$ have similar estimated parameters, and low otherwise. The graph $\left(V, E, w\right)$ contains the result of the topic alignment. Let $k\left(v\right)$ denote the topic associated with node $v \in V$, and suppose it lies in model $m \in \mathcal{M}$. Write $\gamma\left(v\right) := \left(\gamma_{i k\left(v\right)}^m\right) \in \reals^N_{+}$ for the vector of mixed memberships associated with this topic. Similarly, set $\beta\left(v\right) := \beta_{k\left(v\right)}^m \in \simplex^{D}$. \subsection{Algorithms} \subsubsection{Weight estimation} We propose two methods for estimating weights $w\left(e\right)$, one using sample composition ($\gamma_{i}$) and another using topic composition ($\beta_{k}$). We call the approaches \emph{ product alignment} and \emph{transport alignment}, respectively. In product alignment, we set $w\left(e\right) = \gamma\left(v\right)^T\gamma\left(v^\prime\right)$. Intuitively, if two topics have a similar pattern of $\gamma_{ik}$ across samples $i$, then they are given a high weight (Figure \ref{fig:combined_alignment}a). Further, topics that have small $\gamma_{ik}$ across all samples are given lower weight, regardless of their similarity. In transport alignment, we compute $w\left(e\right)$ by solving a collection of optimal transport problems (Figure \ref{fig:combined_alignment}b). Consider two subsets $V_{p}, V_{q} \subset V$ with $V_{p} \cap V_{q} = \varnothing $; we take these two sets to be all topics $v$ from models $m$ and $m'$. Let $p = \left(\gamma\left(v\right)^T \*1_{N}\right)_{v \in V_{p}}$ and $q = \left(\gamma\left(v\right)^T \*1_{N}\right)_{v \in V_{q}}$. These summarize the ``mass'' of each topic across all samples, within each of the two sets. For example, these will both sum to $N$ if the $V_p$ and $V_q$ equal to the sets of topics from two models, since each $\gamma_i$ lies in the simplex. Define the cost of transporting mass from node $v$ to $v^\prime$ by $C\left(v, v^\prime\right) := JSD\left(\beta\left(v\right), \beta\left(v^\prime\right)\right)$. This ensures that weights are lower between topics with very different distributions, regardless of sample weights $\gamma_{ik}$. Arrange these costs into a matrix $C$ of size $\absarg{V_p} \times \absarg{V_q}$. 
The weight matrix $W$ between pairs of topics in $V_{p}$ and $V_{q}$ is the $\reals^{\absarg{V_p} \times \absarg{V_q}}_{+}$ matrix formed by solving the transport problem \begin{align*} &\min_{W \in \mathcal{U}\left(p, q\right)} \left<C,W\right> \\ \mathcal{U}\left(p, q\right) := &\{W\in \reals^{\absarg{V_p} \times \absarg{V_q}}_{+} : W \*1_{\absarg{V_q}} = p \text{ and } W^{T} \*1_{\absarg{V_p}} = q\}. \end{align*} We note that in the case that $V_p$ and $V_q$ contain topics from models $m$ and $m + 1$, it is natural to construct a directed graph, with edges from topics in model $m$ to those in $m + 1$. In this case, we refer to the topic subsets as $V_{m}, V_{m + 1}$, respectively. For a directed graph, it is possible to normalize weights according to either the total inflow or outflow for each node. We will use these normalized weights in the computations of the topic orderings given in Supplementary Section 1 and in the computations of some of the diagnostic scores given in Section \ref{sec:diagnostics}. Specifically, we normalize weights for edges flowing out of $v$ according to $\wout\left(v, v^\prime\right) = \frac{w\left(v, v^\prime\right)}{\sum_{\tilde{v} : v \to \tilde{v}}w\left(v, \tilde{v}\right)}$. Similarly, normalization for edges flowing into $v$ is defined by $\win\left(v^\prime, v\right) = \frac{w\left(v^\prime, v\right)}{\sum_{\tilde{v} : \tilde{v} \to v} w\left(\tilde{v}, v\right)}$. \begin{figure} \centering \includegraphics[width=\textwidth]{figure/sketches/alto_sketches.png} \caption{Top panels (a-b) illustrate topic alignment for product (a) and transport (b) alignments. The bottom panels (a-c) illustrate the diagnostic scores characterizing the alignment. (a) Each vertical column corresponds to a topic. Each circle encodes weights $\gamma_{iv}$ for a single sample $i$. The width of the links between circles encodes the product $\gamma_{iv}\gamma_{iv^\prime}$. Note that this product is large only if both $\gamma_{iv}$ and $\gamma_{iv^\prime}$ are large. The product alignment between two topics is high if the sum of products across all $N$ is large. (b) Each vertical bar describes a single topic $v$. The heights of bars provide the weights $\sum_{i} \gamma_{iv}$ for each topic $v$; their locations encode $\beta_{v} \in \Delta^D$. Green and purple topics are estimated by LDA models with $K = 2$ and 3 topics, respectively. In transport alignment, the mass from the green bars is redistributed to the purple bars and alignment weights are derived from the associated optimal transport plan. (c) To assign a path to a topic $v$, the edges $e^\ast\left(v\right)$ from which topics $v$ derive most of their weight are identified. (d) A topic has a high coherence score if all normalized weights ($\win$ and $\wout$) between this topic and topics on the same path are large. (e) A topic has a high refinement score if the downstream alignment structure is ``tree-like'', i.e. if all descendant topics recognize $v$ as their main parent. Note that a topic $v$ (highlighted by a black outline here) may have a low coherence score but a high refinement score.} \label{fig:combined_alignment} \end{figure} Figure \ref{fig:combined_true_lda} provides visualizations of product and transport alignments on simulated data. Note that topics are not returned by the LDA fit in a specific order. Consequently, topics connected by high weights across models may have different index $k$ within their respective model. 
For visualization purposes, we order topics within each model such that similar topics are close to each other. The ordering procedure is described in Supplementary Section 1. \subsubsection{Paths} \label{subsec:paths} Topic reordering places topics with high alignment weights next to one another, giving the appearance of chains of mutually similar topics. To highlight this phenomenon, we partition the alignment graph into a collection of paths. The partition is grown iteratively, adding topics to existing subsets based on alignment weights. Let $\text{Path}\left(v\right)$ be the path ID associated with topic $v$, and let $M$ be the model with the largest number of topics. For each topic $v \in V_M$, we initialize $\text{Path}\left(v\right) = k\left(v\right)$. Suppose $\text{Path}\left(v\right)$ is known for all $v \in V_{m + 1}$. Then, the path membership $\text{Path}\left(v\right)$ of a node $v \in V_m$ is set to $\text{Path}\left(v^{\ast}\right)$, where \begin{align*} v^\ast &:= \arg\max_{v' \in V_{\left(m + 1\right):M}} (\wout\left(v, v^\prime\right) + \win\left(v, v^\prime\right)), \end{align*} is the topic from one of the levels $m + 1, \dots, M$ that shares the highest total normalized weight with $v$. \subsection{Diagnostics} \label{sec:diagnostics} We next propose three diagnostic measures that compactly describe the results of a topic alignment. These statistics reflect the added value of introducing each additional topic, the specificity of ancestor-descendant ties, and the coherence of topics across $K$. In addition to summarizing the alignment, these statistics can also serve to diagnose model mis-specification in the original fits. \subsubsection{Number of paths} Paths found by the iteration of Subsection \ref{subsec:paths} connect the most similar topics across resolutions. Spurious topics introduced at high resolution tend to be different from one another, limiting their ability to maintain a path. Instead, they connect to more stable paths. Consequently, counting the number of paths at a given resolution provides an indication of the number of true topics. Formally, the number of paths for a model $m$ is the size of the set $\{\text{Path}\left(v\right) : v \in V_m\}$ In simulations below, we find that, when a topic model is appropriate, the true value $K$ is captured by a plateau in the number of paths (Figure \ref{fig:combined_true_lda}a). Hence, this metric can be used analogously to the identification of an ``elbow'' from a scree plot. Further, consistently slow growth in the number of paths identified may indicate departures from the assumed LDA model. Examples of both phenomena are provided in Section \ref{sec:simulations}. The number of paths is a property of a model within the alignment. In contrast, the scores introduced below focus on individual topics. \subsubsection{Topic Coherence} We call a topic \emph{coherent} if it is found in models fitted across a range of values of $K$. When coherent topics are recovered across multiple levels of an alignment, there is more evidence that the discovered structure is real, because it is not sensitive to the particular $K$ of the model used. Topic coherence is defined in the context of paths. It measures the similarity between a given topic $v$ and the other topics on the same path $\mathcal{P}\left(v\right) = \{v^\prime: \text{Path}\left(v^\prime\right) = \text{Path}\left(v\right)\}$. 
It is defined as \begin{align*} c(v) = \frac{1}{|\mathcal{P}\left(v\right)|} \sum_{v' \in \mathcal{P}\left(v\right)} \min\left(\win\left(v, v'\right), \wout\left(v, v'\right) \right). \end{align*} Our simulations illustrate how this score can be used to identify ``good'' values of $K$ in LDA as well as detect departures from assumed LDA structure. Note that coherence focuses solely on the path containing a topic. We introduce another measure, topic refinement, to reflect the richer branching pattern downstream of a topic. \subsubsection{Topic Refinement}\label{sec:refinement} A topic identified at a small value of $K$ may have low coherence but still be a useful topic if it is the sole ancestor of topics in subsequent models. We expect true topics and compromises between true topics to have this property. We introduce the \emph{refinement} score to identify such topics. Recall that for a node $v'$, $\win(v, v')$ measures the extent to which mass at $v'$ flows from parent node $v$. For each $v$, the refinement score is a weighted average of $\win(v, v')$ over all its children $v'$. More formally, collect topics into levels $V_{1}, \dots, V_{M}$. We define the refinement score of node $v$ in level $m$ as \begin{align} \label{eq:refinement} r\left(v\right) &= \frac{|V_m|}{M-m}\sum_{m'=m+1}^M \ \sum_{v_{m'}^\prime \in V_{m'}} \wout\left(v, v_{m'}^\prime\right)\win\left(v, v_{m'}^\prime\right). \end{align} To better understand this score, we can establish its properties in some simple cases (proofs given in the supplementary materials). Continuing to assume that node $v$ is in level $m$, \begin{itemize} \item The refinement score is maximized ($r(v) = |V_m|$) if and only if $w(v, v'_{m'}) > 0$ implies $w(u, v'_{m'}) = 0$ for any $u \in V_m \setminus \{v\}$. This condition means that every descendant of $v$ has $v$ as its sole parent in level $m$. \item The refinement score is minimized ($r(v) \to 0$) when all of the descendants of $v$ descend primarily from other nodes in level $m$ (i.e., the score is smallest for nodes that don't have any descendants that recognize them as parents at all). Indeed, suppose that for every $v' \in V_{m'}$, we have fixed weights $w(v, v')$. Then $r(v) \to 0$ when for each $w$ s.t. $w(v, v')> 0$, $w(u, v') \to \infty$ for some $u \in V_m \setminus \{v\}$. \item The refinement score is defined such that $r(v) = 1$ if all the weights in the graph are equal, which indicates an absence of topic structure in the data. \end{itemize} \subsubsection{Comparing diagnostics} The diagnostics measure different properties of an alignment. Both low coherence / high refinement and low refinement / high coherence combinations are possible, although in the examples below the diagnostics tend to track each other. We would expect the refinement score to be high but the coherence score to be low in the case that the alignment plot has a branching structure. On the other hand, the refinement score can be small for a topic with high coherence if that topic doesn't have many descendants. We discuss this further and provide examples in the Supplementary Section 4. Overall, the coherence score describes how ``good'' or ``trustworthy'' a topic is; topics with high coherence scores appear consistently across levels. This is true even if the refinement score is low — in that case, the refinement score is likely to be low simply because the topic is present at low frequency. 
On the other hand, the combination of high refinement and low coherence scores suggests that the topic is a mixture of several high-coherence topics. These topics can still be useful to the analyst, as they simply represent a coarser-grained summary of the data.

\subsection{R package}
\label{subsec:package}

We have released an R package, \texttt{alto}, to support \emph{al}ignment of \emph{to}pics from LDA models. The package provides functions for
\begin{itemize}
\item Fitting a set of topic models.
\item Aligning topics across a collection of models, identifying paths, and computing coherence and refinement measures from the alignment.
\item Visualizing the resulting alignment object.
\end{itemize}
The design emphasizes the modularity of the alignment workflow, and separate functions are given for each of the steps above. To illustrate, we include an example use of the package on random multinomial data.
\begin{verbatim}
library(purrr)
library(alto)

# simulate data and fit models
x <- rmultinom(20, 5000, rep(0.1, 500))
lda_params <- setNames(map(1:10, ~ list(k = .)), 1:10)
lda_models <- run_lda_models(x, lda_params)

# perform alignment and plot
result <- align_topics(lda_models)
plot(result)
\end{verbatim}
Note that \texttt{result} is an S4 object of class \texttt{alignment} with its own plot method. This class is associated with accessor functions for extracting the underlying model parameters (\texttt{models()}), alignment weights (\texttt{weights()}), and topic-level diagnostics (\texttt{topics()}). In addition to the product and transport methods that are currently implemented, the package allows users to pass in arbitrary functions for computing weights between sets of topics. Further, in addition to computing alignments across a sequence of increasing $K$, the package implements topic comparison and weight construction over arbitrary topic graphs. Examples of these functions, as well as all data analysis and simulations described here, are available as package vignettes. The package homepage is available at \url{lasy.github.io/alto/} and its source code can be found at \url{github.com/lasy/alto}.

\section{Simulations}
\label{sec:simulations}

In this section, we study the extent to which learned topic alignments and their associated diagnostics distinguish between types of variation that can arise in count data. We apply the methods in a few controlled settings, verifying that derived interpretations are consistent with the known generative mechanism. We investigate alignment when simulating from true LDA models as well as under certain types of mis-specification. The latter cases indicate the extent to which alignment can inform model assessment.

\subsection{Latent Dirichlet Allocation}

If the data were in fact simulated from an LDA model with $K$ topics, what would the associated topic alignment and diagnostics look like? We simulate $N = 250$ samples $x_i \in \naturals^{D}$ from an LDA model with $K = 5$ true topics and $D = 1000$. For mixed memberships, we draw $\gamma_{i} \sim \textnormal{Dir}\left(0.5 \cdot \*1_{K}\right)$, while topics are assumed sparser, with $\beta_{k} \sim \textnormal{Dir}\left(0.1 \cdot \*1_{D}\right)$. These parameters have been chosen to maintain simplicity while exhibiting both high-dimensionality in $x_i$ and sparse structure in $\gamma_i$ and $\beta_k$.
At this scale, alignments can be made interactively: computation of product and transport alignments each takes 5 - 6 seconds on a laptop with a 3.1 GHz Intel Core i5 processor and 8GB memory (in contrast, to fit LDA models with $K \in \{2, \dots, 10\}$ requires 352 seconds). We provide the true $\lambda_{\gamma}$ and $\lambda_{\beta}$ hyperparameters. In practice, these would be chosen quantitatively according to marginal likelihood or qualitatively to enforce a desired level of sparsity. However, providing the true hyperparameters allows us to concentrate on the properties of alignment in an ideal case. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figure/combined_true_lda.png} \caption{Alignments for data simulated from LDA with $K = 5$. Parts (a) and (c) are estimated using product and transport alignment, respectively. Rectangles correspond to topics, and their sizes give the mass $\sum_{i} \gamma_{ik}$. Vertical sections give fitted models. The width of links encodes the weights $w\left(e\right)$. Topics and edges are colored to show paths. Parts (b) and (d) give $\beta_{kd}$, colored in according to (a) and (c), respectively. Each column encodes a topic, each row is a dimension, and circle size is proportional to $\beta_{kd}$. Sets of topics from one model are grouped into panels. Circles with $\beta_{kd} < 0.001$ are omitted. Dimensions $d$ are sorted according to $\text{Distinctiveness}\left(d\right) := \min_{l \neq k} \beta_{kd} \log \frac{\beta_{kd}}{\beta_{ld}}+\beta_{ld}-\beta_{kd}$, as in \citep{dey2017visualizing}, but with $k, l$ varying over topics from multiple models. Only the 25 most distinctive dimensions are displayed.} \label{fig:combined_true_lda} \end{figure} With this setup, we simulate 200 datasets and fit models with $K \in \left\{2,\dots, 10\right\}$ topics. Each set of models is aligned using both the product and transport methods. The product and transport alignments from a randomly chosen replicate are shown in Figure \ref{fig:combined_true_lda}. The primary distinguishing feature between product and transport alignments is the sparsity in weights estimated using the transport approach. Both alignments provide hints that $K = 5$: \begin{itemize} \item The number of paths (i.e. number of distinct colors) remains 5 for $K > 5$. \item For $K \leq 5$, most mass is conserved along a few major paths. For $K > 5$, this structure fragments and each topic tends to align with multiple descendant topics. \end{itemize} Supplemental Figures 4 - 9 provide ten additional replicates, along with ten replicates of a simulation from a null model in which the counts are drawn from independent multinomials whose means come from a $\textnormal{Dir}(\*1_{D})$ distribution. Each of the three diagnostics are shown for each of the simulated datasets. Clear differences are visible across all diagnostics for the data generated under the null \textit{vs.} topic model. In the topic model, the number of paths generally plateaus at 5. In the null model, the number of paths continues to increase as we add more topics. The coherence scores are all around 0 and the refinement scores are all around 1 in the null model. In the topic model, topics with low coherence and low refinement scores emerge for $K > 5$. These topics are likely spurious. The other topics (matching the true topics) have high coherence and refinement scores. We next present a more systematic description of the diagnostics across all 200 simulation replicates. 
Figure \ref{fig:gradient-combined}a counts the number of paths at each $K$. Up to $K = 5$, and for most simulation runs, each new topic created a new path. For $K = 5, 6$, nearly all alignments estimated that 5 paths were present, though for larger $K$, additional topics were sometimes added to this subset. Transport alignment tended to more frequently overestimate the number of paths. For example, transport alignment occasionally found up to 8 paths when $K = 9$, while product alignment rarely estimated more than 5 topics. Figures \ref{fig:gradient-combined}b-c show topic-wise coherence and refinement scores as a function of $K$ in the alignment. The lower envelope of the distributions for both coherence and refinement scores show an abrupt drop-off for $K > 5$ across both alignments, reflecting the low coherence and refinement of newly estimated topics with less similarity to the $K = 5$ true topics. For $K < 5$, refinement scores remain high as topics in these models are parents of true topics. Overall, three practical rules of thumb are (1) a plateau in the number of paths indicates that the true number of topics has been reached, (2) a rapid drop-off in coherence or refinement scores indicates low-dimensional structure, and (3) topics with high coherence or refinement scores are more likely to reflect true topic structure. To evaluate the extent to which these practical rules guide selection of $K$, we have repeated these simulations with datasets of increasing sample size. We observe that the probability of choosing the true $K$ increases with the sample size (Supplementary Figure 10). These simulations are a sanity check — in the case that data are exactly generated by an LDA model with a known $K$, then alignment can help identify it. However, a number of methods are available for selecting $K$ when models are correctly specified, and real data are unlikely to so perfectly correspond to a proposed generative mechanism. In the spirit of ``all models are wrong, but some are useful,'' we consider, in the next two sections, scenarios where an LDA model is fit to data that are not simulated from the LDA mechanism, but where alignment can nonetheless inform an understanding of the essential latent structure. \subsection{LDA with background variation} To begin describing properties of alignment in this approximate regime, we simulate data from the case where sample compositions exhibit an extra level of heterogeneity not present in LDA. We suppose that most, but not all, variation in latent sample compositions lies on a $K$-dimensional subspace spanned by $K$ topics $B$. The closer the compositions lie to this subspace, the closer the LDA model is to being correct. Specifically, we simulate from \begin{align*} x_{i} \vert B, \gamma_{i}, \nu_i &\sim \textnormal{Mult}\left(n_{i}, \alpha B\gamma_{i} + \left(1 - \alpha\right)\nu_i\right) \\ \nu_{i} &\sim \textnormal{Dir}\left(\lambda_{\nu}\right) \\ \gamma_i &\sim \textnormal{Dir}\left(\lambda_{\gamma}\right) \\ \beta_{k} &\sim \textnormal{Dir}\left(\lambda_{\beta}\right). \end{align*} This generative mechanism is identical to that of LDA, except that instead of being centered around $B\gamma_{i}$, sample $i$ is centered around $\alpha B\gamma_i + \left(1 - \alpha\right)\nu_i$ for a $\nu_i\in \Delta^{D}$ drawn without reference to the $K$ topics in $B$. As before, we simulate with $N = 250, D = 1000, K = 5$. For each $\alpha \in \{0, 0.05, \dots, 1\}$, we generate 50 datasets and then fit and align topic models with $K \in \{1, \dots, 10\}$. 
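The sketch below simulates one such dataset in base R, mirroring the mechanism just described; it repeats the \texttt{rdirichlet()} helper from the earlier sketch so that the snippet is self-contained. The background concentration \texttt{lambda\_nu}, the total count \texttt{n\_i}, and the default values of $\lambda_\gamma$ and $\lambda_\beta$ (taken from the earlier LDA simulation) are illustrative assumptions.
\begin{verbatim}
# Dirichlet draws via normalized Gamma variables (as in the earlier sketch)
rdirichlet <- function(n, alpha) {
  g <- matrix(rgamma(n * length(alpha), shape = alpha),
              nrow = n, byrow = TRUE)
  g / rowSums(g)
}

# LDA with background variation: sample i is centered at
# alpha * B gamma_i + (1 - alpha) * nu_i
simulate_lda_background <- function(N, D, K, n_i, alpha,
                                    lambda_gamma = 0.5, lambda_beta = 0.1,
                                    lambda_nu = 1) {
  B     <- t(rdirichlet(K, rep(lambda_beta, D)))   # D x K topics
  gamma <- rdirichlet(N, rep(lambda_gamma, K))     # N x K memberships
  nu    <- rdirichlet(N, rep(lambda_nu, D))        # N x D background
  x <- t(sapply(seq_len(N), function(i) {
    mu <- alpha * as.vector(B %*% gamma[i, ]) + (1 - alpha) * nu[i, ]
    rmultinom(1, n_i, mu)
  }))
  list(x = x, B = B, gamma = gamma, nu = nu)
}

# one replicate at alpha = 0.5 (illustrative sequencing depth n_i)
sim <- simulate_lda_background(N = 250, D = 1000, K = 5,
                               n_i = 5000, alpha = 0.5)
\end{verbatim}
Setting \texttt{alpha = 1} recovers the exact LDA mechanism, while \texttt{alpha = 0} yields the null model of unstructured multinomial draws.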
Randomly chosen alignments for a range of $\alpha$ are given in Supplementary Figure 2. For large $\alpha$, most mass is concentrated in 5 core paths, and there is limited exchange from one topic to another. For small $\alpha$, mass is more evenly distributed across branches and a high degree of exchange is present. The number of paths across $K$ for each $\alpha$ is shown in Figure \ref{fig:gradient-combined}d. At $\alpha = 0$ (data simulated from random multinomials), there is no plateau in the number of paths. As $\alpha$ increases, a plateau at $K = 5$ emerges and becomes increasingly well-defined. The definition of paths appears effective at distinguishing low- from high-rank sample compositions: a gradual increase in the number of paths, without any visible plateau, would suggest that an LDA model is missing true sample-to-sample variation, even with large choices of $K$.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figure/gradient-combined.png}
\caption{Diagnostic measures when background variation is and is not present. a) The number of estimated paths across simulations from an LDA model with $K = 5$. The circle size encodes the number of replicates for which that number of paths was identified. The product method tends to be more conservative, and is less prone to overestimate the number of topics, compared to the transport method. b) Coherence and c) refinement scores for topics fitted to data from an LDA model. Points represent estimated topics from across replicates. Color encodes similarity to a true underlying topic, which would be unknown in reality. d) The estimated number of paths varies as a function of background variation $\alpha$. The closer the data are to being drawn from an LDA model, the faster the initial increase in the number of estimated paths and the more definitive the plateau. e) For small $\alpha$, coherence drops off starting at $K = 1$, with no visible increases. For larger $\alpha$, a subset of topics has elevated coherence and the largest average topic coherence occurs at the true latent dimensionality. f) Refinement scores are higher and exhibit a larger range when the LDA model is approximately correct. The range and trend of the refinement scores can be used to distinguish between datasets that have more or less unmodeled heterogeneity.}
\label{fig:gradient-combined}
\end{figure}
The distribution of coherence scores also shows differences depending on $\alpha$ (Figure \ref{fig:gradient-combined}e). For large $\alpha$ (\textit{i.e.}, generative mechanisms closer to LDA), the upper envelope of coherence scores rapidly increases up to $K = 5$. For $K > 5$, the lower envelope rapidly drops off while the upper envelope slightly decays. For small $\alpha$, there is no local maximum in the distribution of coherence scores and all topics have small scores. This suggests that, when the underlying LDA model is closer to being correct, the associated alignment includes more coherent topics, with a peak coherence around the true latent dimensionality. Figure \ref{fig:gradient-combined}f shows the analogous display for refinement. In the small $\alpha$ case (no true topics), all topics have essentially the same refinement score for all $K$, and the score is as expected if there is no relationship among the topics. In contrast, in the approximately low-rank (large $\alpha$) case, a larger spread in scores is visible.
In that case, topics with high refinement scores for $K \geq 5$ have high similarity with the true topics, and the $K = 5$ transition is marked by a drop-off in the lower envelope of refinement scores. Further, reading each panel from bottom to top (increasing topic structure), we find that the upper envelope of refinement scores noticeably increases. Altogether, these diagnostics suggest that alignment can detect departures from the underlying topic model assumption that samples are concentrated on a $K$-dimensional topic simplex, across a range of candidate $K$. Paths with low coherence can be a warning flag. Further, low refinement scores and the absence of any plateau in the number of paths may suggest that observations exhibit higher sample-to-sample variation than an LDA model alone may capture. Since it is possible to simulate new data from each fitted LDA model, these guidelines can be formalized into a graphical posterior predictive check. Posterior predictive samples can be drawn from the model with the largest $K$, which has the most flexibility. Aligning topic models fit to these data could provide a reference distribution for each of the diagnostics, and comparing the observed measures with this reference can give evidence for or against model fit. \subsection{Strain switching} \label{subsec:strain_switching} Our final simulation studies whether alignment can detect mis-specifications in topic modeling due to highly correlated topics. Our setup is motivated by the strain switching phenomenon observed in some microbiota environments \citep{jeganathan2021statistical}. In this situation, there are strains that can be exchanged between what are otherwise similar communities. These strains can be thought of as being functionally equivalent, competing for a niche within an ecosystem. The consequence is that two nearly identical communities may be present in the ecosystem, but with systematic differences for some strains. From a topic modeling perspective, these communities have anti-correlated topic memberships -- only one of the competing strains can be present in a sample at a time. The existence of these communities can be detected by comparing topics estimated at different scales. At coarse scale, two communities may be indistinguishable from one another, swamped by larger variations in species signatures across the ecosystem. At finer scale, the subsets of strains that distinguish them may become apparent after close inspection of the estimated topics. Our goal is to study the extent to which alignment can support this multiscale analysis. Our simulation mechanism first draws $\gamma_{i}$ and $\beta_{k}$ as in the simulations above. Instead of directly using $\beta_{k}$, however, perturbed versions $\tilde{\beta}_{k}^{r}$ are generated for $r \leq R_{k}$, a pre-specified number of perturbed replicates $R_{k}$. The perturbation mechanism is given in Supplementary Algorithm 1. The resulting $\tilde{\beta}_{k}^{r}$ and $\tilde{\beta}_{k}^{r^{\prime}}$ differ only on a subset of $S$ coordinates and can be viewed as functionally equivalent sub-communities. 
Given perturbed topics, sample $i$ is drawn by first randomly selecting one perturbed version from each of the $K$ topics, \begin{align*} \beta_{k}^{i} &\sim \textnormal{Unif}\left(\left\{\tilde{\beta}_{k}^{1}, \dots, \tilde{\beta}_{k}^{R_k}\right\}\right) \end{align*} binding the results into a $K$ column matrix $B_{i}$, and then drawing \begin{align*} x_{i} &\sim \textnormal{Mult}\left(n_{i}, B_{i}\gamma_{i}\right) \end{align*} as in standard LDA. We set $K = 5$ and $\left(R_{1}, \dots, R_{5}\right) = \left(2, 2, 1, 1, 1\right)$. We draw $N = 250$ samples with dimension $D = 1000$. We use $S = 230$; results with varying $S$ are given in the supplement. In the microbiota interpretation, our samples include counts of 1000 species each, and 5 underlying community types are present. Two versions of the first two types are present, differing on a subset of $S$ competing species. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figure/equivalence-combined.png} \caption{Results from the strain switching simulation. a) An alignment from one replicate with $S = 230$. Only the 200 most distinctive dimensions are displayed. The purple-dark blue and green-light blue pairs of branches correspond to two perturbed versions of the same underlying community, as suggested by b) the similar columns of $\beta_{kd}$ for $K = 6,7$. c) Cosine similarities between known and estimated topics across increasingly finer-scale models $m$. Rows 1-2 and 3-4 corresponding to perturbed versions of two underlying communities. For $K = 5$, the estimated topics do not distinguish between versions. At $K = 6$, rows 3 and 4 are slightly distinguished from one another, and at $K = 7$, both sets of perturbed topics are detected.} \label{fig:equivalence-combined} \end{figure} The resulting alignment is given in Figures \ref{fig:equivalence-combined}a-b. The learned topics for $K = 5$ to $7$ are given in the right panel. We note that, at $K = 6$, the purple and blue topics have similar weights across many, but not all species. Likewise, at $K = 7$, the brown-orange and brown-green topic signatures are similar. The accompanying flow diagram shows that, in both cases, the pairs of similar topics had been merged when $K = 5$, suggesting that the model begins to detect perturbed versions of the same topics once $K$ is increased. To attribute these differences to the known perturbation mechanism, we compute the cosine similarity between estimated and true topics. Figure \ref{fig:equivalence-combined}c shows the cosine similarity $\xi_{kk'}^m := \text{cossim}\left(\beta_{k}, \hat{\beta}_{k^\prime}^m\right)$ for models $m$ with 5 to 7 topics. Each row corresponds to a true topic; rows 1-2 and 3-4 are perturbed versions of two underlying sub-communities, respectively. For $K = 5$, the patterns of cosine similarities across rows 1-2 and 3-4 are similar, suggesting that the estimated topics are not sensitive to strain switching. However, for $K = 6$ and 7, new topics emerge that distinguish between the pairs of nearly equivalent sub-communities. The off-diagonal elements for the two squares indicates that the newly estimated topics remain similar to both versions of the underlying mechanism. However, since only a subset of species is perturbed in each version, some remaining similarity is to be expected. \section{Data Analysis} \label{sec:analysis} We applied topic alignment to vaginal microbiota composition data; the results are given in Figure \ref{fig:microbiota_figure}. 
The data are ASV counts from longitudinal samples collected throughout pregnancy in 135 individuals \citep{callahan_replication_2017}. In most individuals, the vaginal microbiota have low heterogeneity compared to other human microbiotas: one of four Lactobacillus species (\textit{crispatus}, \textit{iners}, \textit{gasserii} or \textit{jensenii}) completely dominates the flora. However, some individuals may present ``dysbiosis,'' defined by a high compositional diversity and the absence of Lactobacillus dominance. Topic analysis offers an opportunity to identify sub-communities that may co-exist within these diverse non-\textit{Lactobacillus} communities. Applying topic alignment to these data, we observe that the number of paths (Figure \ref{fig:microbiota_figure}a) shows a small plateau around $K = 12$ with both methods (product and transport). As in the simulations, the number of paths are lower and the plateau is stronger when paths are identified using product rather than transport alignment. A small plateau is likely indicative that the data generation process does not strictly follow the LDA model assumption. However, most of the identified topics around $K=12$ are coherent across $K$ (Figure \ref{fig:microbiota_figure}b-c). The distribution of refinement scores (Figure \ref{fig:microbiota_figure}d) shows, for both alignment methods, the emergence of low refinement score topics from $K = 14$. This supports the idea that a higher number of topics is likely over-fitting the data. Further, the median refinement score is highest for $K = 7$ when using the product method. This suggests that topics identified at that resolution are a mixture of true, higher resolution topics. In summary, a biologist may interpret this analysis by stating that topic models with $K = 12$ provides the best summary of the sub-communities found in the vaginal microbiota. Among those 12 sub-communities, two (topic 11 and 12, with low coherence scores) might be ``spurious'' in the sense that they may not represent well-defined sub-communities but instead capture a set of bacteria that may be sample-specific (background noise). The analyst could also choose to model their data at a coarser resolution by setting $K = 7$. At that resolution, they would identify four coherent \textit{Lactobacillus}-dominated topics and three non-\textit{Lactobacillus} dominated topics. Among these three topics, one topic has a high coherence score and is composed of specific species of Gardnerella and Atopobium. The other two topics, with lower coherence scores but high refinement scores, identify two distinct mixtures of sub-communities which are revealed at higher resolution. These results are useful from a biological perspective because they provide a more detailed, and yet still succinct, description of the vaginal microbiota structure. Historically, vaginal microbiota data have been clustered into five community state types (CST), four of them corresponding to one of the four most prevalent \textit{Lactobacillus} species, and the fifth one being ``everything else.'' Our analysis is consistent with this interpretation, but provides a more precise view into the structure of the fifth state. \begin{figure} \centering \includegraphics[width=\textwidth]{figure/microbiome_figure.png} \caption{(a) Number of paths for each number of estimated topics in the LDA model. Number of paths identified by the transport method are shown in blue, those identified by the product method are shown in red. 
(b-c) Transport alignment of topics across $K$ where topics are colored by paths (b) or by their coherence score (c). (d) Small colored dots show the refinement score of each topic across $K$ for both alignment methods. Colors match path colors in panel (b). The gray ribbon shows the envelope of the coherence scores (min to max), while the thick black line shows the median coherence score. (e) Topic composition (dot size shows estimated $\beta$) for $K \in \{3,7,12,18\}$. Topics ($x$-axis) are colored by path (see panel b). Species ($y$-axis) are ordered by the topic with the highest $\beta$ for that species in model $K = 18$.} \label{fig:microbiota_figure} \end{figure} \section{Comparison with alternatives} \label{sec:alternatives} In this section, we present and discuss analyses comparing topic alignment with alternative methods (perplexity) or models (hierarchical LDA). \subsection{Perplexity} Here, we evaluate train and test perplexity for each fitted model across simulations. Perplexity is a measure of the probability of test samples under a fitted model (see supplementary equation 4.1). For each simulation setup, we compute perplexity both on the data used to train the model and an independent sample with the same topics $\beta_{k}$. Supplementary Figure 11 shows that, when the data are generated via LDA, an ``elbow'' in train and test perplexities highlights the correct choice of $K = 5$. In the case of data generated with background noise (Supplementary Figure 12), a subtle drop-off around $K = 5$ is visible at small $\alpha$ and grows more apparent and concentrated around $K = 5$ as the noise decreases. For strain switching (Supplementary Figure 13), an elbow at $K = 5$ is visible, but even for large $S$, no indication of switching emerges. Test perplexity never increases after the true $K = 5$, but the location of the ``elbow'' nonetheless suggests the correct $K$ in most cases. While perplexity can be used to inform the selection of the number of topics, alignment can provide relevant, complementary information. For example, perplexity is defined on subsets of samples, and so, unlike coherence or refinement, it cannot be used to evaluate the quality of individual topics. Further, in the case that the optimal perplexity appears at a large $K$, it can still be worthwhile to use topics at a smaller $K$ to guide interpretation of aligned topics at the optimal, larger $K$. Perplexity alone does not support such a details-on-demand analysis. Finally, though subtle differences in perplexity curves for true LDA vs. mis-specified models are apparent (e.g., subtle decreases in strain-switching perplexity after $K = 5$), variations across types of mis-specification are more clearly evident through model alignment. \subsection{Hierarchical LDA (hLDA)} In this section, we contrast the proposed topic alignment with hierarchical LDA \citep{blei2003hierarchical}. While topic alignment visualizations are similar to visualizations of hierarchical structures, it is important to note that topic alignment is not a hierarchical method. Topic alignment relies on different assumptions and fulfills a different purpose than hierarchical topic models (hLDA). Applying these methods to the same datasets leads to different results, interpretations, and conclusions. First, hLDA assumes that topics follow a strict tree-like hierarchical structure: child-topics have only one parent.
In contrast, the alignment structure is not tree-like and topics at higher resolution may be connected to several topics at lower resolution. Second, in the hLDA framework, samples belong to a single path in the hierarchy; they can only be composed of topics that are part of the same branch. In contrast, topic alignment only describes relationships between topics at different resolutions. Within a resolution, samples are described as mixtures of topics. Third, because hLDA is a more complex model, it has two additional hyperparameters (the depth, and the concentration parameter for introducing new topics) compared to LDA. Consequently, deploying hLDA on datasets requires additional effort to identify optimal values for these parameters. Finally, in hLDA, we can interpret child-topics as sub-topics, and the hierarchical structure as a topic taxonomy. For example, in the context of analyzing a magazine corpus, \textit{football}, \textit{tennis}, and \textit{climbing} could be sub-topics of a \textit{sport} topic. Specific terms characterize these sub-topics (e.g., ``harness'' for \textit{climbing}, or ``racket'' for \textit{tennis}), while the \textit{sport} parent topic might be characterized by terms such as ``competition,'' ``training,'' or ``fitness'' which we expect to find in documents related to either \textit{football}, \textit{climbing}, or \textit{tennis}. Topic alignment may also lead to a similar interpretation of topic relationships, but exclusively for topics with high refinement scores. Topic alignment is a post-estimation method aimed at guiding scientists in their exploratory analyses when modeling their data with topic models. There are no assumptions regarding the relationships between topics at different resolutions. These relationships and the diagnostic scores provide information to users for interpreting their data. This is especially useful if the data generation process does not strictly follow the LDA assumptions and when perplexity curves do not show a clear elbow. Importantly, for microbiota structure analyses, the hLDA assumptions are not in agreement with observed data and current understanding of microbial biology. Even if bacterial sub-communities were organized hierarchically (e.g., because of strain switching), we would still expect sub-communities from different branches of the hierarchy to co-exist within a given ecosystem (i.e., within a sample). This results in more complex interpretations; Supplementary Figure 11 demonstrates how hLDA introduces a degree of redundancy to account for mixtures across branches. Finally, current implementations of hLDA \citep{tomotopy} are not well suited for analyses of microbiota composition for two practical reasons. First, they require samples to be provided in a corpus format, as opposed to a matrix of counts. Given current library depths, transforming ASV counts into text files leads to large files (7+ GB). Second, the time required to fit a single hLDA model is larger than that required to fit LDA models at multiple resolutions and align the topics. For example, fitting hLDA on a subset of the vaginal microbiota data takes just under a minute. In comparison, it takes approximately 20-25 seconds to fit 15 LDA models and perform the topic alignment on the same dataset.
The resulting estimates provide a multiscale view of count data, showing how topics from large and small $K$ models compare and contrast with one another. We framed the alignment problem as the construction of the appropriate weighted graph whose nodes represent topics and whose edges encode topic similarity. We provided algorithms for estimating weights based on either the inner product or the optimal transport between fitted model parameters. Based on these alignment weights, we proposed diagnostics describing (1) the extent to which any given topic persists across a range of $K$ (coherence score) and (2) the definitiveness with which finely-resolved topics emerge from coarser ones (refinement score). We studied the properties of the proposed methods through a series of simulations, emphasizing the potential for alignment to detect biologically plausible departures from the LDA generative mechanism. We also applied the overall workflow to a vaginal microbiota dataset and recovered both known, high-level CSTs, and novel finer-grained sub-community structure. We note several limitations and opportunities for future study. We have not provided any theoretical guarantees about the estimated alignment weights or diagnostics. In order to make our approach applicable to the ensembles of fitted LDA models that are most frequent in practice, we have deliberately avoided proposing an overarching multiscale model. Requiring a new model would increase the burden for adoption -- it is easier to compute post-estimation statistics within a familiar workflow. Nonetheless, though beyond our scope, it would be worthwhile to understand the behavior of alignment weights or diagnostics in such a multiscale setting where model parameters are assumed to be drawn from a plausible distribution. Further, we have not incorporated any interactive visualization principles to streamline the analysis of the final alignment data structure. The static views provided by our package describe a single aspect of alignment at a time, showing the alignment weights, the estimated model parameters, and diagnostics in isolation from one another. It would be useful to link these views interactively. For example, the ``top'' species associated along each branch could be highlighted interactively, or the species whose distributions change the most from one topic to the next. Also absent from our views are any visualizations of how individual samples relate to the topic alignment overall. Finally, we note that, though we have focused on the case of increasing $K$, the principle of computing summaries that characterize an ensemble of models is more generally applicable. For example, the choice of hyperparameters $\lambda_{\gamma}, \lambda_{\beta}$ controls the sparsity of the posterior mixed membership and topic estimates. A view of which estimates are most strongly influenced by these hyperparameters would be informative. Further, in the data integration context, it may be simpler to relate separate models fit across data modalities rather than to construct a new global model for each new combination of component modalities. Similarly, for datasets collected across multiple sites or environments, alignment may provide a compromise between fitting a separate model per site, which fails to pool any shared information, and implementing a full hierarchical model, which can be a labor-intensive exercise. 
In these cases, the sets $V_{p}$ and $V_{q}$ for alignment contain not just topics from adjacent models, but topics from across a larger ensemble. As the types of data incorporated in biostatistical studies grow in number and complexity, flexible techniques for dimensionality reduction and visualization will continue to be an important component of the data analysis workflow. Exploratory analysis can guide the critical examination of complex problems, and topic alignment is a simple but useful addition to the toolbox available for count data. \section{Software} \label{sec5} The R package \texttt{alto} is available at \url{lasy.github.io/alto}. Simulations and data analysis can be reproduced through package vignettes. Scripts for reproducing simulations in a high-performance computing environment are available at \url{github.com/krisrs1128/topic_align}. \section{Supplementary Material} \label{sec6} Supplementary figures, algorithms, and proofs are available online at \url{http://biostatistics.oxfordjournals.org}. \section*{Acknowledgments} The authors thank Prof. Susan Holmes and Prof. Karl Rohe for fruitful discussions and constructive feedback on the manuscript. {\it Conflict of Interest}: None declared. \section*{Funding} This work was supported by the Bill and Melinda Gates Foundation grant OPP1189205-2019 (L.S.). \bibliographystyle{biorefs} \section*{Notations} \begin{tabular}{rl} $K$ & The number of topic in a model. The resolution. \\ $k$ & The topic index. $k \in 1,..,K$ \\ $D$ & The number of features in the dataset (i.e. number of words or number of ASVs). \\ & The number of columns in the data. \\ $d$ & The feature index. $d \in {1,..,D}$ \\ $N$ & The number of samples / documents in the data. \\ $i$ & The sample index. $i \in {1,..,N}$ \\ $x_{id}$ & The number (count) of feature $d$ in sample $i$ \\ $x_i$ & The vector of counts for sample $i$. $x_i \in \naturals^{D} $\\ $n_i$ & The total count of features in sample $i$ ($n_{i} = \sum_{d}x_{id}$ ) \\ $\Delta^{K}$ & The $K$-dimensional simplex. \\ $\*1_{K}$ & A vector of $K$ ones. \\ $\gamma_i$ & The mixed-membership weights for sample $i$. $\gamma_i \in \Delta^K$ \\ $\beta_k$ & The composition of topic $k$. $\beta_{k} \in \simplex^{D}$. \\ $B$ & The $D \times K$ matrix describing the topic composition. $\beta_k$ is the $k^{\text{th}}$ column of $B$. \\ $\lambda_{\gamma}$ & The parameter of the Dirichlet distribution from which the $\gamma_i$ are drawn. \\ $\lambda_{\beta}$ & The parameter of the Dirichlet distribution from which the $\beta_k$ are drawn. \\ $m \in \mathcal{M}$ & A single model ($m$) within the larger ensemble of all models ($\mathcal{M}$) \\ $M$ & The number of models considered in topic alignment \\ $V$ & The set of topics across all models. \\ $v$ & A specific topic within $V$. \\ $e(v, v')$ & A pair of topics. \\ $E$ & The set of all pairs of topics. $e \in E$ \\ $w$ & $ : E \to \reals_{+} $ The alignment weights. \\ $w(e)$ & The alignment weight for the pair $e$. \\ $\win, \wout$ & Normalized alignment weights, when weights are associated with directed edges. \\ $k(v)$ & The topic index associated with node $v$. \\ $\gamma(v)$ & $:= \left(\gamma_{i k\left(v\right)}^m\right) \in \reals^N_{+}$ is the vector of mixed membership associated with topic $v$. \\ $\beta\left(v\right)$ & $ := \beta_{k\left(v\right)}^m \in \simplex^{D}$ is the vector of topic composition for topic $v$. \\ $V_p$ & A subset of topics. $V_p \subset V$ \\ $V_m$ & The topics of model $m$. \\ $p, q$ & In general, two distributions. 
In all examples, these correspond to the locations and weights associated with topics $v$ in the graph defining an alignment. \\ $C$ & A matrix of transport costs from $D$ coordinates of $p$ to $D^\prime$ coordinates of $q$. In examples, C holds the JSDs between topics across two models. $C \in \reals_{+}^{D \times D^\prime} $ \\ $\Pi$ & The optimal transport map between two distributions. $\Pi \in \reals_{+}^{D \times D^\prime}$\\ $ \text{Path}\left(v\right) $ & The path ID associated with topic $v$. \\ $ \mathcal{P}\left(v\right) $ & $= \{v^\prime: \text{Path}\left(v^\prime\right) = \text{Path}\left(v\right)\}$. The set of topics with the same path ID as $v$. \\ $\psi_{m}$ & The permutation used to reorder topics for model $m$. \\ $\Psi_{m}$ & The set of potential permutations for reordering topics in model $m$. \\ $c\left(v\right)$ & The coherence score associated with topic $v$. \\ $r\left(v\right)$ & The refinement score associated with topic $v$. \\ $\nu_i$ & The sample-specific background noise multinomial parameter in the background noise simulation. \\ $\alpha$ & The extent of LDA structure in the background noise simulation. $\alpha = 1$ gives a true LDA model, while $\alpha = 0$ corresponds to the multinomial null model. \\ \end{tabular} \end{document} \subsection{"specific topic of paper"} [paragraph on how microbiome data is typically analyzed] sequencing results: table of ASV counts for each sample. The number of ASVs is typically very large (on the order of 1,000-10,000), often larger than the number of samples. Consequently, it is often useful to use dimension reduction techniques, such as PCA/PCoA or clustering, to find features of the microbiome that may be associated with specific outcomes of interest. Recently, topic analysis, or Latent Dirichlet Allocation (LDA), has been used to model microbiome composition. Topic analysis was introduced by Blei et al [cite] to identify topics in written documents. This statistical method is particularly suited to model microbiome composition because xxxx Pros: dimension reduction, interpretable, mixed membership (so more flexible than clustering) [explain a bit more about topic models, betas, gammas, maybe add an illustration?] However, similarly to clustering methods, topic models are defined [better word than defined?] by a few key hyper-parameters such as the number of topics ($K$) or the expected sparsity of topics ($\alpha$). Scientists relying on topic models or on clustering are faced with the task of choosing an appropriate value for these hyper-parameters. \subsection{"gap in knowledge, motivations"} While several solutions based on cross-validation (clustering and topic analyses) [cite] or based on the tightness of clusters (clustering) [cite] have been proposed to choose the appropriate number of topics or clusters, these methods often solely provide an optimal value without providing context or knowledge on the clusters or the topics. [that's not very well explained - let's think how we can explain better what they lack and what we offer] For example, none of these solutions provide a method to evaluate whether some clusters or some topics might be more robust than others, or how a change of parameter impacts the clustering or the topic composition. \subsection{"overview of proposed methods, results and conclusions"} Here, we propose a method to align topics across $K$ such that robust topics or clusters might be identified.
We developed an interactive R package \texttt{alto} implementing our method to align and identify robust topics, and applied our method to synthetic and real-world data (here, we used microbiome data) to evaluate its performance and discuss how results may be interpreted. The problem of comparing and contrasting LDA models fitted across a range of $K$ is related to the large literature on selecting $K$ in various unsupervised contexts. For example, () discuss selection of the number of clusters in clustering algorithms, and () give methods for selecting the dimensionality in dimensionality-reduction problems. However, we view the problem of selecting an optimal model as distinct from comparing across an ensemble. The former question focuses on whether there are differences between models, the latter focuses on where. In this spirit, the proposed alignment technique has more in common with methodology for comparing clusterings Wagner and Wagner (2007), visual model criticism Gabry et al. (2019); Pregibon (1981), and posterior predictive analysis Gelman (2004); Gelman and Shalizi (2013), for example. Though the LDA model is flexible, it can be challenging to interpret, especially in settings where $K$ is large. Existing work has addressed this difficulty by either modifying the model's generative mechanism or proposing improved visualization of the model's output. For example, \citet{singer2014interpretability} modify the original LDA mechanism by compartmentalizing interpretation across subsets of columns. Specifically, the original $D$ columns can be divided into a collection of user-specified blocks of dimension $\tilde{D}_{1}, \dots, \tilde{D}_{B}$ such that $\sum_{b}\tilde{D}_{b} = D$. Each block is modeled with its own topics, \begin{align*} p\left(x_i \vert \left(\gamma_{ib}\right), B_{b}\right) = \prod_{b} p\left(x_{iD_{b}} \vert \gamma_{ib}, B_b\right) \end{align*} where the density within each block $b$ is modeled using LDA. Since each block can be viewed separately from the others, this approach can be more directly interpretable in many problems. An alternative, proposed by \citet{blei2003hierarchical}, modifies the generative mechanism by basing it on an interpretable tree structure. This approach begins by imagining a depth $L$ tree with topics $\left(\beta_{k}^{l}\right)_{k = 1}^{K_{l}}$ at nodes on level $l$ in the tree. Each sample $i$ is associated with a path $\nu_i$ from the root of the tree to a leaf. Simulating $\gamma_i \sim \textnormal{Dir}\left(\lambda_{\gamma} 1_L\right)$ gives mixture weights over nodes on this path. The sample's counts are then drawn according to, \begin{align*} x_{i} \vert \gamma_i, \left(\beta_{k}^{l}\right)_{k, l} \sim \textnormal{Mult}\left(n_i, \sum_{l =1}^{L} \gamma_{il} \beta_{\nu_i\left(l\right)}^{l}\right) \end{align*} The model is completed by specifying a prior on the paths $\nu_i$ and the topics $\beta_{k}^{l}$. Briefly, the $\nu_{i}$ are induced through a nested Chinese Restaurant Process while the $\beta_{k}^{l}$ are drawn from a Dirichlet distribution; we refer the reader to \citet{blei2003hierarchical} for details. Note that, since the $\beta_k^{l}$ near the root of the tree will be shared by many samples, they tend to concentrate most of their probability on the most common terms. Topics near the leaves will account for systematic, but rarer, departures from these coarser, foundational topics. This multiresolution structure can facilitate interpretation, allowing construction of tree visualizations with each topic's most common words placed within nodes.
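To make this generative mechanism concrete, a minimal simulation sketch is given below; the small Dirichlet sampler, the representation of the per-level topics as a list \texttt{beta} of $D \times K_l$ matrices, and all object names are illustrative assumptions rather than part of the model of \citet{blei2003hierarchical}.
\begin{verbatim}
# Sketch: draw one document's counts from the tree-based mechanism above,
# given its path nu_i (one topic index per level) and per-level topics.
rdirichlet <- function(alpha) { g <- rgamma(length(alpha), shape = alpha); g / sum(g) }

draw_hlda_doc <- function(n_i, nu_i, beta, lambda_gamma = 1) {
  L       <- length(nu_i)
  gamma_i <- rdirichlet(rep(lambda_gamma, L))   # mixture weights over levels
  mix     <- Reduce(`+`, lapply(seq_len(L),
                    function(l) gamma_i[l] * beta[[l]][, nu_i[l]]))
  as.vector(rmultinom(1, size = n_i, prob = mix))
}
\end{verbatim}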
Possible titles (* = my favorite; @ = Laura's favorite), \begin{enumerate} \item Characterizing Fine-Grained Variation in Topic Models using Alignment (@) \item Alto: An R Package for Aligning Topics at Varying Resolution (*, @) \item Alignment Strategies for Resolving Differences in Topic Models \item Interpreting Topic Models by Modulating Topic Resolution \item Interpreting Topic Models by Aligning Topics at Varying Resolution (@) \item Improving Topic Model Interpretability through Refinement \item Choosing and Comparing K: Strategies for Topic Models (but we don't really do this) \item Hierarchical Interpretations from Flat Topic Models \item Multiresolution Analysis of Count Data using Topic Alignment (*, @) \item Comparing Ks in Topic Analysis using Alignment (*) \end{enumerate} \section{Results} \subsection{Real-world dataset: vaginal microbiome samples} [FIGURE: topic "Sankey" for different Ks] [FIGURE: topic "Sankey" for different alphas (on a different dataset?)] [FIGURE: topic "Sankey" for different modalities (on a different dataset?)] \section{Discussion} \section{Methods and Materials} \subsection{Topic analyses} Our methods for topic alignment are built on the solutions that result from fitting an LDA model. We fit LDA using the {\tt topicmodels} package in R \cite{}. The two matrices that are important to our later analyses are the sample composition matrices, which we will denote $\Gamma$, and the topic composition matrices, which we will denote $B$. If we have $n$ documents, $p$ words, and $k$ topics, $\Gamma \in [0,1]^{n \times k}$, and $\Gamma_{ij}$ represents what fraction of document $i$ is composed of topic $j$. We will similarly have $B \in [0,1]^{p \times k}$, the column sums of $B$ will be equal to 1, and the $j$th column of $B$ is a probability vector describing the word composition of the $j$th topic. \subsection{Topic alignment based on sample composition (gammas)} Our aim is to quantify the extent to which a topic in one LDA solution is aligned with a topic in another LDA solution, using the sample composition matrices corresponding to the two solutions. To do this, we proceed by analogy to hard clustering. Suppose that $\Gamma_1 \in \{0,1\}^{n \times k_1}$ and $\Gamma_2 \in \{0,1\}^{n \times k_2}$ are matrices such that $(\Gamma_1)_{ij}$ is the indicator that sample $i$ is a member of the $j$th cluster in clustering solution 1, and similarly for $\Gamma_2$ and clustering solution 2. We can then take $(\Gamma_1^T \Gamma_1)^{-1} \Gamma_1^T \Gamma_2$ to be a matrix whose $ij$th element gives the similarity between cluster $i$ in clustering solution 1 and cluster $j$ in clustering solution 2. The intuition here is that $(\Gamma_1^T \Gamma_2)_{ij}$ is the number of samples that are members of both cluster $i$ in clustering solution 1 and cluster $j$ in clustering solution 2. $(\Gamma_1^T \Gamma_1)$ is a diagonal matrix whose $ii$th element is the number of samples present in cluster $i$ in clustering solution 1. Therefore, $(\Gamma_1^T \Gamma_1)^{-1} \Gamma_1^T \Gamma_2$ is a matrix whose rows sum to 1, and the $i$th row can be interpreted as a vector of alignment scores between cluster $i$ in clustering solution 1 and all of the clusters in clustering solution 2. To adapt this intuition to topic models, where every sample is assigned a weight for each topic instead of being classified as being in exactly one topic, we can do something very similar.
Suppose now that $\Gamma_1 \in [0,1]^{n \times k_1}$ and $\Gamma_2 \in [0,1]^{n \times k_2}$ are the sample composition matrices for two LDA solutions. For topic alignment scores, we can use \begin{align*} \text{diag}(\Gamma_1^T \Gamma_2 \mathbf 1_{k_2})^{-1} \Gamma_1^T \Gamma_2. \end{align*} The $\text{diag}(\Gamma_1^T \Gamma_2 \mathbf 1_{k_2})^{-1}$ part of the expression is analogous to $(\Gamma_1^T \Gamma_1)^{-1}$ from hard clustering, and serves to ensure that the row sums of the resulting expression are all equal to 1. The $\Gamma_1^T \Gamma_2$ part of the expression is exactly the same as its counterpart in hard clustering, and serves as an ``unnormalized'' measure of the similarity between two topics. As before, the $i$th row of this matrix gives us a vector of alignment scores whose sum is 1 and that tells us about the similarity between topic $i$ in LDA solution 1 and all of the topics in LDA solution 2. \subsection{Topic alignment based on topic composition (betas)} Our second aim is to quantify the extent to which a topic in one LDA solution is aligned with a topic in another LDA solution, using the topic composition matrices corresponding to the two solutions. Our strategy is... and is motivated by... [write down how the weights and normalized weights are computed - optimization problem] [LSY: should we have how the synthetic data are simulated in the method? And just keep the performance on simulated data in the results? I'd move items 1-5 that Kris wrote (I assume it was Kris) in the simulation part to the Methods] Each rectangle corresponds to one topic, and their associated models increase in resolution proceeding from left to right. The height of a topic is proportional to the total document mass of that topic, i.e., $\sum_i \gamma_{ik}^{(m)}$. The width of links between pairs of topics encodes their alignment. \textit{Right}: The topic signatures $\beta_{k}^{(m)}$ associated with models $m$ with $K = 2$ to 10 topics. Topics with high alignments in the left plot have similar signatures on the right. The left side of each figure is a flow diagram, representing both the total mass across topics and the alignment weights $w\left(e\right)$. Each rectangle corresponds to a single topic, and its size corresponds to its mass $\sum_{i} \gamma_{ik}$. Vertical sections of the flow correspond to fitted models. The model with only one topic is given on the far left, while the model with 10 topics is on the far right. The width of links encodes the topic alignment weights $w\left(e\right)$. Topics and edges are colored to show paths. The right side of each figure gives topic weights $\beta_{kd}$ across dimensions $d$. Each column corresponds to one topic, each row is a dimension, and the size of each circle is proportional to $\beta_{kd}$. The colors of topics correspond to those in the flow. Sets of topics corresponding to one model are grouped into panels. Topic-dimension combinations with $\beta_{kd} < 0.001$ are omitted to prevent cluttering the display. Dimensions $d$ are sorted according to the measure \begin{align*} \text{Distinctiveness}\left(d\right) := \min_{l \neq k} \left[\beta_{kd} \log \frac{\beta_{kd}}{\beta_{ld}}+\beta_{ld}-\beta_{kd}\right], \end{align*} which is the same as that proposed by \citet{dey2017visualizing}, with the exception that now $k, l$ vary over topics from multiple models. Only the 25 most distinctive dimensions are displayed.
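Returning to the sample-composition (gamma-based) alignment scores defined above, the following sketch computes them directly; the matrices \texttt{Gamma1} and \texttt{Gamma2} and the function name are assumptions made for illustration only.
\begin{verbatim}
# Sketch of the gamma-based alignment scores: rows of the result sum to one
# and give, for each topic of solution 1, its alignment with every topic of
# solution 2.
align_gammas <- function(Gamma1, Gamma2) {
  W <- t(Gamma1) %*% Gamma2     # k1 x k2 unnormalized similarities
  W / rowSums(W)                # equivalent to diag(Gamma1' Gamma2 1)^{-1} W
}
\end{verbatim}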
When using transport alignment, there is a detectable increase in the number of estimated key topics for $K \geq 5$; however, the rate of increase is much slower than for $K < 5$. Therefore, a practical rule-of-thumb would be to place higher trust in topics that have high refinement scores after the upper envelope plateaus. Therefore, a drop-off in refinement scores can be used to detect the introduction of spurious topics. Further, for larger $K$, the overall fraction of points with low refinement scores increases. This means that if we are considering an edge connecting nodes at $k < k'$, where $k \le k_0$, the nodes connected by the edge could either be true topic/true topic, mixture of topics/true topic, true topic/arbitrary split of that topic, or mixture of topics/arbitrary split of one of the topics in the mixture. In all of these cases, we expect the node at $k'$ to have only one parent at level $k$. On the other hand, if we are considering an edge connecting nodes at $k$ and $k'$, where $k_0 < k < k'$, then in addition to all of the possibilities listed before, we can have the nodes at the two levels be different arbitrary splits of the same topic. In that case, if we expect a different arbitrary split in the different levels, we expect the node at $k'$ to have more than one parent in level $k$. \section*{Supplementary Materials} These supplemental materials provide further theoretical and experimental results that do not appear in the main paper. In more detail, these supplemental sections describe, \begin{itemize} \item Section 1: A table of notation. \item Additional conceptual discussion of alignment and diagnostic measures, \begin{itemize} \item Section 2: A description of the topic reordering strategy used to ensure that alignment visualizations do not become tangled as models increase in resolution. \item Section 3: Properties of the refinement score. Proves the maximization and minimization results given in the main manuscript. Also derives refinement scores in the case that weights are equal. \item Section 4: Contrasting diagnostics. Provides a simple example where the refinement score is large, but not the coherence score, and vice versa. \item Section 5: Comparisons of alignment diagrams from data drawn from true LDA and null multinomial generative mechanisms. Shades results in according to either path ID or the proposed diagnostic measures. \end{itemize} \item Further simulation results and commentary, \begin{itemize} \item Section 6: Provides alignment visualizations corresponding to the background noise simulation in the main text. \item Section 7: Visualizes the convergence of diagnostic measures as the number of samples increases. Suggests a form of consistency, albeit in a limited case. \item Section 8: Discusses strain switching properties across number of switched species S and all simulation replicates. Also provides specific algorithm for strain switching. \end{itemize} \item Discussion of related methods, supporting Section 6 of the main text. \begin{itemize} \item Section 9: Perplexity measures across simulation experiments. Provides a complementary approach to model selection. \item Section 10: Application of hierarchical LDA to the vaginal microbiota dataset. Discusses interpretation of fitted parameters and contrasts this with our proposed alignment. \end{itemize} \end{itemize} \section{Notation} \begin{longtable}{rp{12cm}} \textbf{Notation} & \textbf{Interpretation} \\ \hline $N$ & The total number of samples. \\ $D$ & The dimensionality of each sample. 
\\ $x_i$ & The vector of counts for sample $i$. $x_i \in \naturals^{D} $\\ $n_i$ & The total count of sample $i$. That is, $n_i = \sum_{d = 1}^{D} x_{id}$. \\ $\Delta^{K}$ & The $K$-dimensional simplex. \\ $\gamma_i$ & The topic memberships for sample $i$. $\gamma_i \in \Delta^{K}$\\ $K$ & The number of topics. \\ $\*1_{K}$ & A vector of $K$ ones. \\ $\beta_{k}$ & The composition of topic $k$. $\beta_{k} \in \Delta^{D}$\\ $B$ & A $D \times K$ matrix where the $k^{\text{th}}$ column is $\beta_{k}$ (composition of topic $k$). \\ $\lambda_{\gamma}, \lambda_{\beta}$ & Hyperparameters of the Dirichlet distributions for $\gamma_{i}$ and $\beta_{k}$, respectively. \\ $p, q$ & In general, two distributions. In all examples, these correspond to the locations and weights associated with topics $v$ in the graph defining an alignment. \\ $C$ & A matrix of transport costs from $D$ coordinates of $p$ to $D^\prime$ coordinates of $q$. In examples, C holds the JSDs between topics across two models. $C \in \reals_{+}^{D \times D^\prime} $ \\ $\Pi$ & The optimal transport map between two distributions. $\Pi \in \reals_{+}^{D \times D^\prime}$\\ $m \in \mathcal{M}$ & A single model ($m$) within the larger ensemble of all models ($\mathcal{M}$) \\ $V, E$ & The vertices and edges representing topics across all models and the potential alignments between them. \\ $v$ & A specific topic within $V$. \\ $e(v, v')$ & A pair of topics. $e \in E$ \\ $w$ & $: E \to \reals^{+}$ The alignment weights associated with the pairs of topics in $E$. \\ $w(e)$ & The alignment weight for the pair $e$. \\ $W$ & The matrix of weights $w\left(e\right)$ for all edges within a specified subset. \\ $\win, \wout$ & Normalized alignment weights, when weights are associated with directed edges. \\ $k\left(v\right)$ & The index of topic $v$ within the subset $V_m$ of topics derived from model $m$. \\ $\gamma\left(v\right)$ & $ \in \reals_{+}^{N}$ The vector of memberships $\gamma_{ik\left(v\right)}$ for topic $v$ across all $N$ samples $i$. \\ $V_{p}, V_{q}$ & Two subsets of topics. When these are written as $V_{m}$ and $V_{m+ 1}$, these are subsets from two models with $m$ and $m + 1$ topics, respectively. \\ $\psi_{m}$ & The permutation used to reorder topics for model $m$. \\ $\Psi_{m}$ & The set of potential permutations for reordering topics in model $m$. \\ $\text{Path}\left(v\right)$ & The (scalar) path identity associated with topic $v$. \\ $\mathcal{P}\left(v\right)$ & The subset of vertices with the same path identity as topic $v$. \\ $c\left(v\right)$ & The coherence score associated with topic $v$. \\ $r\left(v\right)$ & The refinement score associated with topic $v$. \\ $\nu_i$ & The sample-specific background noise multinomial parameter in the background noise simulation. \\ $\alpha$ & The extent of LDA structure in the background noise simulation. $\alpha = 1$ gives a true LDA model, while $\alpha = 0$ corresponds to the multinomial null model. \\ \hline \caption{Glossary of notation used in this paper.} \label{tab:notation} \end{longtable} \section{Topic ordering} Topics are not returned by the LDA fit in a specific order. Consequently, topics connected by high weights across models may have different indices $k$ within their respective models. For visualization purposes, it is useful to order topics within each model such that similar topics are close to each other (Supplementary Figure \ref{fig:reordering}).
The ordering procedure seeks the optimal permutation of topic indices $\psi_{1:M}^\ast$ such that the distance between strongly connected consecutive topics is minimized: \begin{align*} \arg \min_{\psi_{1:M} \in \Psi_{1:M}} \sum_{m = 1} ^{M - 1} \sum_{e \in E_{m, m + 1}} \absarg{\psi_{m}\left[k\left(v\right)\right] - \psi_{m + 1}\left[k\left(v'\right)\right]} w\left(e\right), \end{align*} where the optimization is taken over the set of possible topic permutations $\Psi_{m}$ of topic labels in each model $m$ and $E_{m, m+1}$ is the set of edges between topics in models $m$ and $m + 1$. Finally, the reordered topic label for node $v$ at level $m$ is given by $k\left(v\right) \leftarrow \psi_{m}^\ast\left(k\left(v\right)\right)$. For example, in Supplementary Figure \ref{fig:reordering}, suppose that the topics $v \in V_{m}$ for the purple topic have values of $k\left(v\right)$ of $1, \dots, 4$, arranged from top to bottom. Then, the associated permutation $\psi\left(v\right)$ is $\left(3, 1, 2, 4\right)$, also proceeding from top to bottom. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figure/sketches/alto_sketches_re-ordering.png} \caption{Given the high alignment weights with topic A, the index of topic 3 is permuted such that this topic becomes the first one of its model.} \label{fig:reordering} \end{figure} Instead of searching over all possible permutations, we approximate the optimal solution across a sequence of $M$ models by applying a forward and a backward pass, both of which rank the centers of gravity of a topic based on the weights connecting it to topics from the previous (forward pass) or next (backward pass) model. We find that additional forward and backward passes have little impact on the rankings. Specifically, the set of topic indices is updated using Algorithm \ref{alg:reorder}. \begin{algorithm}[H] \For{m = 2:M}{ $k^\prime\left(v_m\right) := \text{rank} \left(\sum_{v_{m-1} \in V_{m-1}} k\left(v_{m-1}\right) \win\left(v_{m-1}, v_{m}\right)\right), \forall v_m \in V_m$ } \For{m = M:2}{ $k^\prime\left(v_{m-1}\right) := \text{rank} \left(\sum_{v_{m} \in V_{m}} k\left(v_{m}\right) \wout\left(v_{m-1}, v_{m}\right) \right), \forall v_{m-1} \in V_{m-1}$ } \label{alg:reorder} \caption{Forward and backward pass for the topic ordering algorithm. In the forward pass, topics are indexed so that they are close to the source topics from which they draw the most weight, while in the backward pass, they are placed near their high weight descendants.} \end{algorithm} \section{Properties of the refinement score} Our definition of the refinement score is \begin{align*} r(v) &:= \frac{|V_l|}{L - l} \sum_{l' = l + 1}^L \sum_{v'_{l'} \in V_{l'}} \wout(v, v'_{l'})\win(v, v'_{l'})\\ &= \frac{|V_l|}{L-l} \sum_{l' = l+1}^L \sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{(\sum_{u \in V_l} w(u, v'_{l'}))(\sum_{w \in V_{l'}} w(v, w))} \end{align*} where $V_{l}$ is the set of all nodes in level $l$. Here we give proofs for the assertions about the refinement score given in the main text. \subsection*{Maximizing $r(v)$} Suppose $v$ is in level $l$, and suppose further that $w(v, v'_{l'}) > 0$ implies $w(u, v'_{l'}) = 0$ for any $u \in V_l \setminus \{v\}$. This means that every node in level $l'$ has only one parent in level $l$.
In that case, $\sum_{u \in V_l} w(u, v'_{l'}) = w(v, v'_{l'})$, and we can write the inner sum in the definition of $r(v)$ as \begin{align*} \sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{(\sum_{u \in V_l} w(u, v'_{l'}))(\sum_{w \in V_{l'}} w(v, w))} &= \frac{1}{\sum_{w \in V_{l'}}w(v,w)} \sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{\sum_{u \in V_l} w(u, v'_{l'})} \\ &= \frac{1}{\sum_{w \in V_{l'}}w(v,w)} \sum_{v'_{l'} \in V_{l'}} w(v, v'_{l'}) = 1 \end{align*} Then, since each of the $L - l$ deeper levels contributes an inner sum of one, the overall value for the refinement score is \begin{align*} r(v) &= \frac{|V_l|}{L-l} \sum_{l' = l + 1}^L\sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{(\sum_{u \in V_l} w(u, v'_{l'}))(\sum_{w \in V_{l'}} w(v, w))}\\ &= \frac{|V_l|}{L-l}\sum_{l'=l+1}^L 1 = |V_l| \end{align*} This is the largest $r(v)$ can be, as can be seen by noting that if $w(u,v'_{l'}) > 0$ for some $u \in V_l \setminus \{v\}$, we will have \begin{align*} \sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{(\sum_{u \in V_l} w(u, v'_{l'}))(\sum_{w \in V_{l'}} w(v, w))} &= \frac{1}{\sum_{w \in V_{l'}}w(v,w)} \sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{\sum_{u \in V_l} w(u, v'_{l'})} \\ &< \frac{1}{\sum_{w \in V_{l'}}w(v,w)} \sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{w(v, v'_{l'})} \\ &= \frac{1}{\sum_{w \in V_{l'}}w(v,w)} \sum_{v'_{l'} \in V_{l'}} w(v, v'_{l'}) = 1 \end{align*} Therefore, a node will have a refinement score of $|V_l|$ if and only if every node in level $l' > l$ has only one parent in level $l$. Note that if all the refinement scores take their maximum values, then each node will have only one parent in the previous level, and the graph visualized will be a tree. However, the parent-child relationships ($l' - l = 1$) do not have to be consistent with the ancestor-descendant ($l' - l > 1$) relationships for all of the refinement scores to be maximal. \subsection*{Minimizing $r(v)$} Suppose we want to minimize $r(v)$ for a node $v$ in level $l$. We can write the inner sum in the definition of $r(v)$ as \begin{align*} \frac{1}{\sum_{w \in V_{l'}} w(v,w)} &\sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})^2}{\sum_{u \in V_l} w(u, v'_{l'})} \\ &= \frac{1}{\sum_{w \in V_{l'}} w(v,w)} \sum_{v'_{l'} \in V_{l'}} \frac{w(v, v'_{l'})}{1 + \sum_{u \in V_l \setminus \{v\}} w(u, v'_{l'}) / w(v, v'_{l'})} \end{align*} Supposing that the weights $w(v, w)$, $w \in V_{l'}$ are fixed, the quantity above goes to zero when for each $v'_{l'}$ s.t. $w(v, v'_{l'})> 0$, $w(u, v'_{l'}) \to \infty$ for some $u \in V_l \setminus \{v\}$. The refinement score $r(v)$ is an average over these values, and so the refinement score will also go to zero. Therefore, for $r(v)$ to be small, all the descendants of $v$ need to primarily descend from some other node in the same level as $v$. \subsection*{Refinement scores when all the weights are equal} One of the ``edge'' cases we are particularly interested in is one in which all the weights are equal. This is our intuition about what will happen if the clusters in the different levels don't correspond to each other at all.
If all the edges are equal, no matter what $l$ is, we will have $\win(v, v'_{l'}) = \frac{1}{|V_l|}$, and so the expression for the refinement score simplifies to \begin{align*} r(v) &= \frac{|V_l|}{L-l} \sum_{l' = l + 1}^L \sum_{v'_{l'} \in V_{l'}} \wout(v, v'_{l'}) \win(v, v'_{l'})\\ &= \frac{|V_l|}{|V_l|(L-l)} \sum_{l' = l + 1}^L \sum_{v'_{l'} \in V_{l'}} \wout(v, v'_{l'}) = 1 \end{align*} Overall, these results show us how the refinement scores work, and give us some insight into how the weights that we don't visualize enter into the refinement score calculations. For example, if we had an alignment graph for which all the weights between subsequent levels were equal, we could still have a node with a relatively high refinement score if the weights that we didn't see satisfied the criteria for maximizing the refinement score (each node in a later level has only one ancestor in the level of the node we are interested in). On the other hand, the alignment graph could look like a tree when we just look at the weights between subsequent levels, but if the weights in the levels we don't see are either all equal or such that the node we are interested in doesn't have descendants in the later levels, its refinement score could be very small. \section{Comparing diagnostics} The diagnostics measure different properties of an alignment. Both low coherence / high refinement and low refinement / high coherence combinations are possible, although in the examples below the diagnostics tend to track each other. We would expect the refinement score to be high but the coherence score to be low in the case that the alignment plot has a branching structure. If we do the calculations for an alignment (or piece of an alignment) as shown in Supplementary Figure \ref{fig:coherence_refinement_toy}, we can see that the refinement score for the highlighted node will be 1 (the largest possible value for that node), but the coherence score will be $\frac{1}{2}(p + p^2)$. Plugging in similar numbers, where each of the children has only $a$ as an ancestor in that level, with more branching downstream of the highlighted node shows that as the branching below that node increases, the coherence score for $a$ decreases but the refinement score stays at 1. On the other hand, the refinement score can be small for a topic with high coherence if that topic doesn't have many descendants. As we see in the example in Supplementary Figure \ref{fig:coherence_refinement_toy}, the highlighted node has a coherence score of 1 because it is connected with weight 1 to the only other node on the same path. On the other hand, it has a low refinement score of $\frac{3 \delta}{1 + \delta}$ because all of the nodes in the subsequent level are much more closely aligned to the other competing topics. Overall, the coherence score describes how ``good'' or ``trustworthy'' a topic is; topics with high coherence scores appear consistently across levels. This is true even if the refinement score is low -- in that case, the refinement score is likely to be low simply because the topic is present at low frequency. On the other hand, the combination of high refinement and low coherence score suggests that the topic is a mixture of several high-coherence topics. These topics can still be useful to the analyst, as they simply represent a coarser-grained summary of the data.
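The following sketch computes the refinement score from a set of alignment weight matrices and can be used to check the limiting cases derived above numerically; the list representation \texttt{W}, with one matrix per deeper level, and the function name are assumptions made for this illustration.
\begin{verbatim}
# Sketch: refinement scores for the topics of level l, given a list W whose
# element W[[j]] is the |V_l| x |V_l'| weight matrix between level l and the
# j-th deeper level l'.  Layout and names are assumed for this illustration.
refinement <- function(W) {
  contrib <- sapply(W, function(w) {
    w_out <- w / rowSums(w)                 # normalize over descendants
    w_in  <- sweep(w, 2, colSums(w), "/")   # normalize over parents
    rowSums(w_out * w_in)
  })
  (nrow(W[[1]]) / length(W)) * rowSums(as.matrix(contrib))
}
\end{verbatim}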
\begin{figure} \centering \includegraphics[height=.25\textwidth]{figure/sketches/cr1.png} \hspace{.1\textwidth} \includegraphics[height=.25\textwidth]{figure/sketches/cr2.png} \caption{Examples of situations where coherence and refinement scores are not aligned. In the first example, the coherence score for the highlighted node is 1 but the refinement score is $3 \delta / (1 + \delta)$. In the second example, the refinement score for the highlighted node is 1 but the coherence score is $\frac{1}{2} (p + p^2)$.} \label{fig:coherence_refinement_toy} \end{figure} \newpage \section{Comparing alignments of LDA-generated \textit{vs} null model datasets} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/new_sims/product-paths-250.png} \caption{Product alignment colored by path for simulated data. Left two columns correspond to data coming from the true LDA model with $K = 5$, and right two columns correspond to data coming from a null model.} \label{fig:product_paths} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/new_sims/product-coherence-250.png} \caption{Product alignment colored by coherence score for simulated data. Left two columns correspond to data coming from the true LDA model with $K = 5$, and right two columns correspond to data coming from a null model.} \label{fig:product_coherence} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/new_sims/product-refinement-250.png} \caption{Product alignment colored by refinement score for simulated data. Left two columns correspond to data coming from the true LDA model with $K = 5$, and right two columns correspond to data coming from a null model.} \label{fig:product_refinement} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/new_sims/transport-paths-250.png} \caption{Transport alignment colored by path for simulated data. Left two columns correspond to data coming from the true LDA model with $K = 5$, and right two columns correspond to data coming from a null model.} \label{fig:transport_paths} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/new_sims/transport-coherence-250.png} \caption{Transport alignment colored by coherence score for simulated data. Left two columns correspond to data coming from the true LDA model with $K = 5$, and right two columns correspond to data coming from a null model.} \label{fig:transport_coherence} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/new_sims/transport-refinement-250.png} \caption{Transport alignment colored by refinement score for simulated data. Left two columns correspond to data coming from the true LDA model with $K = 5$, and right two columns correspond to data coming from a null model.} \label{fig:transport_refinement} \end{figure} \newpage \section{Example of alignments with increasing level of background noise} \begin{figure}[H] \centering \includegraphics[width=0.48\textwidth]{figure/gradient_flow-1.png} \includegraphics[width=0.48\textwidth]{figure/gradient_flow-2.png} \includegraphics[width=0.48\textwidth]{figure/gradient_flow-3.png} \includegraphics[width=0.48\textwidth]{figure/gradient_flow-4.png} \caption{Flows for LDA with background variation at levels $\alpha \in \{0, 0.4, 0.6, 1\}$. 
A more definitive topic structure emerges for larger $\alpha$, with less exchange between neighboring branches.} \label{fig:lda_flow_gradients} \end{figure} \newpage \section{Convergence of diagnostics as $N$ increases} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/summary_alto_asymptotic_behavior.png} \caption{Summary scores as the number of samples ($N$) in simulated datasets increases (vertical panels). For each $N$, 50 datasets were generated and topic alignment was performed on each dataset. Each line represents the score summary for one dataset. Panels (a) and (b) show the minimum of these scores for each simulated dataset and model. We chose to show the minimum of the scores because we observed in simulations that ``spurious'' topics introduced at higher resolution were characterized by low coherence and/or refinement scores. Consequently, the minimum of the scores allows us to identify drop-offs in the lower envelope for the scores. Panel (c) shows the number of paths identified at each resolution. Panel (d) shows the distribution of the number of paths at which a plateau is identified in panel (c).} \label{fig:asymptotic_behavior} \end{figure} \newpage \section{Strain switching across $S$} In this appendix, we extend the discussion of strain switching. We provide details of the perturbation mechanism (Algorithm \ref{alg:perturbations}) and investigate the sensitivity of topic alignment across a wider range of $S$. We simulate strain-switching data for $S \in \{10, 30, \dots, 230\}$. For each choice of $S$, we generate 50 datasets and align topic models across a range $K = 2, \dots, 10$ of topics. To gauge sensitivity to perturbed topics, we measure cosine similarities across simulation replicates. If strain switching cannot be detected, then we expect $\xi_{1k}^{m} \approx \xi_{2k}^{m}$ and $\xi_{3k}^{m} \approx \xi_{4k}^{m}$ for all $k, m$ -- the estimated topics will lack specificity for any member of the equivalent pairs. Figure \ref{fig:equivalence_similarities} shows the estimation specificity, $\frac{1}{K}\sum_{k = 1}^{K}\left(\absarg{\xi_{1k}^m - \xi_{2k}^m} + \absarg{\xi_{3k}^m - \xi_{4k}^m}\right)$, for each of the 50 replicates for each $S$. This statistic quantifies the difference between rows 1-2 and 3-4 visible in the heatmap of topic similarities, but across all simulation replicates. \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{figure/equivalence_similarities.png} \caption{The ability of models to detect strain switching as a function of $K$ and $S$. Model resolution increases across panels moving from left to right. Within each panel, the number of swapped species $S$ is plotted against the estimation specificity defined above. The larger the subset $S$, the higher the specificity. Further, high-resolution models can more easily distinguish perturbed topics, as indicated by the steeper slopes for panels on the right.} \label{fig:equivalence_similarities} \end{figure} As expected, larger perturbations are more easily detected. For models with $K \leq 5$, there is a small increase in estimation specificity as $S$ increases; strain switching might have a small effect on the dominant signatures in the data. For $K > 5$, the specificity as a function of $S$ steepens -- more highly resolved topics can more easily distinguish between perturbed topics.
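For concreteness, the sketch below computes the cosine similarities $\xi_{kk'}^{m}$ and the specificity statistic for a single model; the inputs \texttt{beta\_true} (a $D \times 4$ matrix whose columns 1-2 and 3-4 form the two perturbed pairs) and \texttt{beta\_hat} (the $D \times K$ estimated topics) are assumed for illustration.
\begin{verbatim}
# Sketch: cosine similarities between true and estimated topics, and the
# specificity statistic averaged over estimated topics, as defined above.
cossim <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))

specificity <- function(beta_true, beta_hat) {
  K  <- ncol(beta_hat)
  xi <- outer(1:4, 1:K,
              Vectorize(function(k, kp) cossim(beta_true[, k], beta_hat[, kp])))
  mean(abs(xi[1, ] - xi[2, ]) + abs(xi[3, ] - xi[4, ]))
}
\end{verbatim}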
\begin{algorithm} \KwData{Topics $\beta_{k}$, subset size $S$ to perturb, the number $\tilde{K}$ of topics to perturb, number of perturbations $R$.} \For{$k \leq \tilde{K}$}{ Sample $S$ coordinates to perturb, and define a mapping $\pi$ such that $\pi\left(s\right)$ provides the $s^{th}$ perturbed index. \\ \For{$r \leq R$}{ For the subset $S$, draw $\nu_{k}^{r} \sim \textnormal{Dir}\left(\lambda_{S} 1_{\absarg{S}}\right)$\\ Renormalize $\nu_{k}^{r} := \frac{\|\beta_{k}\left[S\right]\|_{1}}{\|\nu_{k}^{r}\|_{1}}\nu_{k}^{r}$ \\ Perturb $\beta_{k}$ at coordinates specified by $S$, \begin{align*} \tilde{\beta}_{kd}^{r} := \begin{cases} \beta_{kd} & \text {if } d \notin S \\ \nu_{k\pi\left(s\right)}^{r} & \text {otherwise.} \end{cases} \end{align*} } } \caption{Strategy for generating perturbed topics.} \label{alg:perturbations} \end{algorithm} \newpage \section{Perplexity comparisons} \label{sec:perplexity} Perplexity is defined as \begin{align} \label{eq:perplexity} \text{perplexity}\left(x_{1}^{\ast}, \dots, x_{n}^{\ast}\right) = \exp\left(-\frac{\sum_{i = 1}^{n} \log p\left(x^{\ast}_{i}\right)}{\sum_{i = 1}^{n} N_i}\right), \end{align} where $N_i$ is the total count of document $i$. Hence, test documents $x_{i}^{\ast}$ with low likelihood-per-read under the fitted model $p$ have high perplexity. \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{figure/lda-perplexity.png} \caption{Perplexity for train and test samples for data generated by a true LDA model with $K = 5$ topics. The ``elbow'' in train and test perplexity can be used to detect the true value of $K$.} \label{fig:lda-perplexity} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{figure/gradient-perplexity.png} \caption{Perplexity for train and test samples from data generated by an LDA model with varying levels of background noise. Panel titles match the $\alpha$ from the corresponding simulation in the main text. For smaller $\alpha$, the ``elbow'' in perplexity sometimes appears at incorrect values of $K$ (e.g., 6 - 7 for $\alpha = 0.2$ and 4 - 5 for $\alpha = 0.4$). The specific locations of these drop-offs are dependent on the $\lambda_{\nu}$ hyperparameter generating this background noise.} \label{fig:gradient-perplexity} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure/switching-perplexity.png} \caption{In- and out-of-sample perplexity for data generated according to the strain switching setup. Rows provide different values of $S$, the number of switched strains. Perplexity at small $K$ is slightly larger when $S$ is large. Further, for large $S$, perplexity continues to decrease slightly even beyond the ``elbow'' at $K = 5$. However, no structure appears at $K = 7$ that would suggest that two of the topics may exhibit switching behavior.} \label{fig:switching-perplexity} \end{figure} \newpage \section{Hierarchical LDA comparison} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure/hlda_m1.png} \caption{Hierarchical LDA (hLDA) model of the vaginal microbiome data. } \vspace{-2em} \flushright \emph{(continued)\\[.25em]} \hrule \end{figure} \begin{figure}[t!] \ContinuedFloat \caption{ (Caption continued.) hLDA was fit to a subset of the vaginal microbiome data with a depth of 4 and a concentration parameter (gamma) of 0.1. (Top) Hierarchical structure of hLDA topics. (Middle) Topic composition for each vertical path of the hierarchical structure.
There is a vertical path for each leaf of the hierarchical tree, and paths are labeled and colored according to the leaf topic number. Each path is shown in a vertical panel. Topics from each vertical path are on the $x$-axis, ordered by depth, starting with the root topic on the left (this implies that the topic composition of the root topic is repeated for each vertical path). Features (``words'') are shown vertically. The dot size is proportional to the proportion of a feature in a topic. The dot color is set to match the color of the topics in the top panel. (Bottom) Sample composition in terms of topic proportion. Panels and the $x$-axis are the same as in the middle panel. Each horizontal line represents a sample. Given that samples can only be composed of topics on a given vertical path, samples have been assigned to their path and ordered by path. Colors match the topic colors in the top panel. Transparency is inversely proportional to topic proportion in each sample.} \hrule \end{figure}
\section{Introduction} Carnot's theorem can be considered as a generalization of Ceva's theorem. The theorem of Carnot gives a necessary and sufficient condition for two points on each side of a triangle to lie on a common conic. \begin{theorem}[Carnot's theorem] Let $\triangle A B C$ be a triangle and let $A_1$, $A_2$ be the points on the line $B C$, $B_1$, $B_2$ on the line $C A$ and $C_1$ and $C_2$ on the line $A B$. The points $A_1$, $A_2$, $B_1$, $B_2$, $C_1$ and $C_2$ lie on the same conic $\mathcal{C}$ if and only if \begin{equation}\label{r}\frac{\overrightarrow{A C_1}}{\overrightarrow{C_1 B}} \cdot \frac{\overrightarrow{A C_2}}{\overrightarrow{C_2 B}} \cdot \frac{\overrightarrow{B A_1}}{\overrightarrow{A_1 C}} \cdot \frac{\overrightarrow{B A_2}}{\overrightarrow{A_2 C}}\cdot \frac{\overrightarrow{C B_1}}{\overrightarrow{B_1 A}}\cdot \frac{\overrightarrow{C B_2}}{\overrightarrow{B_2 A}}=1.\end{equation} \end{theorem} \medskip In Section 2 we give a classical proof of Carnot's theorem, using the theorems of Menelaus and Pascal. This proof can be found in \cite{Hatt}. We also study some natural points and lines involved in the configuration and their relations to the side lines of triangle $\triangle A B C$. Theorems \ref{t2} and \ref{t5} summarize these results. These theorems are generalizations of classical Euclidean theorems for the incircle of a triangle. In Section 3 we give a synthetic proof of the following statement (see Figure \ref{bradley}), which was first formulated in \cite{Bradley}: \begin{theorem}[Bradley's theorem]\label{glavna} There is a conic $\mathcal{D}$ such that the lines $A A_1$, $A A_2$, $B B_1$, $B B_2$, $C C_1$ and $C C_2$ are tangents of $\mathcal{D}$ if and only if the points $A_1$, $A_2$, $B_1$, $B_2$, $C_1$ and $C_2$ lie on the same conic $\mathcal{C}$. \end{theorem} \begin{figure}[h!h!h!] \centerline{\includegraphics[width=0.8\textwidth]{bradley.pdf}} \caption{Bradley's theorem} \label{bradley} \end{figure} Our goal is to prove an equivalent statement, Corollary \ref{bradley1}, which together with the Poncelet Triangle theorem implies Bradley's theorem. In the paper \cite{Bradley}, Bradley formulated the following conjecture (see Figure \ref{bradleygs}): \begin{theorem}[Bradley's theorem about quadrilaterals]\label{glavnag} Let $ A B C D$ and $P Q R S$ be quadrilaterals which are in axial perspective, that is, the points $T= A B\cap PQ$, $U = BC\cap QR$, $V = CD\cap RS$, $W = DA\cap SP$ are collinear. The other twelve intersections of the sides of the quadrilaterals are marked with notation exemplified by $13 = AB\cap RS$, $42 = DA\cap QR$ etc., in such a way that number $1$ corresponds to the sides $A B$ and $P Q$, $2$ to $B C$ and $Q R$, $3$ to $C D$ and $R S$ and $4$ to $D A$ and $S P$. Then there exist four conics $\mathcal{C}_1$, $\mathcal{C}_2$, $\mathcal{C}_3$ and $\mathcal{C}_4$ such that the points $23$, $24$, $32$, $34$, $42$, $43$ lie on conic $\mathcal{C}_1$, the points $13$, $14$, $31$, $34$, $41$, $43$ lie on conic $\mathcal{C}_2$, the points $12$, $14$, $21$, $24$, $41$, $42$ lie on conic $\mathcal{C}_3$ and $12$, $13$, $21$, $23$, $31$, $32$ lie on conic $\mathcal{C}_4$. \end{theorem} \begin{figure}[h!h!h!] \centerline{\includegraphics[width=0.8\textwidth]{bradleyg.pdf}} \caption{Bradley's theorem about quadrilaterals} \label{bradleygs} \end{figure} This theorem is proved in Section 4. \medskip Theorems of Ceva, Menelaus and Carnot are used in \cite{JRG} as 'prototheorems' to build new theorems that involve lines and conics.
It is shown in \cite{JRG} and \cite{Gerb} that any oriented triangulated 2-manifold can be a frame. This procedure works for the theorems studied in this paper as well. The deep relations between classical projective geometry and more advanced topics in mathematics and computer science are explained in \textit{Perspectives on Projective Geometry}, an inspiring book by J\"{u}rgen Richter-Gebert, \cite{Gerb}. The software `Cinderella', developed by Ulrich Kortenkamp and J\"{u}rgen Richter-Gebert, is used as an experimental tool for discovering new results about Carnot's configuration. \section{Carnot's theorem} We start this section with a proof of Carnot's theorem. \begin{figure}[!h!h] \centerline{\includegraphics[width=0.5\textwidth]{carnot1.pdf}} \caption{The Carnot theorem} \label{carnot1} \end{figure} \noindent \textbf{{Proof of Carnot's theorem:}} Let the points $A_1$, $A_2$, $B_1$, $B_2$, $C_1$ and $C_2$ lie on the same conic $\mathcal{C}$ and let $L$ be the intersection of the lines $A_1 C_1$ and $A C$, $M$ the intersection of the lines $B_1 C_2$ and $B C$ and $N$ the intersection of the lines $A_2 B_2$ and $A B$, Figure \ref{carnot1}. By the Pascal theorem, the points $L$, $M$ and $N$ lie on the same line, and from the Menelaus theorem the following holds:\begin{equation}\label{r1} \frac{\overrightarrow{A L}}{\overrightarrow{L C}}\cdot \frac{\overrightarrow{C M}}{\overrightarrow{M B}}\cdot \frac{\overrightarrow{B N}}{\overrightarrow{N A}}=-1. \end{equation} Applying the Menelaus theorem three times for the lines $A_1 C_1$, $B_1 C_2$ and $A_2 B_2$ and $\triangle A B C$, we obtain: \begin{equation}\label{r2} \fbox{\parbox{7 mm}{\Large{$\frac{\overrightarrow{A L}}{\overrightarrow{L C}}$}}}\cdot \frac{\overrightarrow{C A_1}}{\overrightarrow{A_1 B}}\cdot \frac{\overrightarrow{B C_1}}{\overrightarrow{C_1 A}}=-1,\end{equation} \begin{equation}\label{r3} \frac{\overrightarrow{A B_1}}{\overrightarrow{B_1 C}}\cdot \fbox{\parbox{7 mm}{\Large{$\frac{\overrightarrow{C M}}{\overrightarrow{M B}}$}}}\cdot \frac{\overrightarrow{B C_2}}{\overrightarrow{C_2 A}}=-1, \end{equation} \begin{equation}\label{r4} \frac{\overrightarrow{A B_2}}{\overrightarrow{B_2 C}}\cdot \frac{\overrightarrow{C A_2}}{\overrightarrow{A_2 B}}\cdot \fbox{\parbox{7 mm}{\Large{$\frac{\overrightarrow{B N}}{\overrightarrow{N A}}$}}}=-1. \end{equation} Multiplying the relations (\ref{r2}), (\ref{r3}) and (\ref{r4}) and dividing by (\ref{r1}) yields the relation (\ref{r}). In the opposite direction, the proof is similar. By the Menelaus theorem, the relations (\ref{r2}), (\ref{r3}) and (\ref{r4}) hold. From the relations (\ref{r2}), (\ref{r3}), (\ref{r4}) and (\ref{r}) one can easily deduce the relation (\ref{r1}), so by the converse of the Menelaus theorem, the points $L$, $M$ and $N$ lie on the same line. The converse of the Pascal theorem then implies that the points $A_1$, $A_2$, $B_1$, $B_2$, $C_1$ and $C_2$ lie on the same conic. \hfill $\square$ \medskip Let $D_1$, $D_2$, $E_1$, $E_2$, $F_1$ and $F_2$ be the second intersection points of the conic $\mathcal{C}$ and the lines $A A_1$, $A A_2$, $B B_1$, $B B_2$, $C C_1$ and $C C_2$, respectively. Let $B_3$ be the intersection point of the lines $A_1 C_2$ and $F_1 D_2$ and $B_4$ be the intersection point of the lines $C_1 A_2$ and $D_1 F_2$. The points $C_3$, $C_4$, $A_3$ and $A_4$ are defined analogously. Let $E_3$ be the intersection point of the lines $A_1 C_1$ and $D_2 F_2$ and $E_4$ be the intersection point of the lines $A_2 C_2$ and $D_1 F_1$.
The points $F_3$, $F_4$, $D_3$ and $D_4$ are defined analogously. \begin{theorem}\label{t1} The points $B_3$, $B_4$, $E_3$ and $E_4$ lie on the line $C A$, the points $C_3$, $C_4$, $F_3$ and $F_4$ lie on the line $A B$ and the points $A_3$, $A_4$, $D_3$ and $D_4$ lie on the line $B C$. \end{theorem} \noindent \textbf{Proof:} We shall prove that $B_3$ lies on the line $C A$. Let $R$ be the intersection of $A_1 C_2$ and $A C$ and $R'$ the intersection of the lines $F_1 D_2$ and $A C$. \begin{figure}[h!h!h!] \centerline{\includegraphics[width=\textwidth]{Carnot2.pdf}} \caption{Theorem \ref{t1}} \label{carnot2} \end{figure} From the Menelaus theorem for the line $A_1 C_2$ and $\triangle A B C$, we obtain: \begin{equation}\label{j1} \frac{\overrightarrow{C R}}{\overrightarrow{R A}}=-\frac{\overrightarrow{C_2 B}}{\overrightarrow{A C_2}} \cdot \frac{ \overrightarrow{A_1 C}}{\overrightarrow{B A_1}}. \end{equation} \medskip Let $X$ be the intersection point of the lines $A A_2$ and $C C_1$. The Menelaus theorem for the line $F_1 D_2$ and $\triangle A X C$ yields: \begin{equation}\label{j3} \frac{\overrightarrow{C R'}}{\overrightarrow{R' A}}\cdot \frac{\overrightarrow{A D_2}}{\overrightarrow{D_2 X}}\cdot \frac{\overrightarrow{X F_1}}{\overrightarrow{F_1 C}}=-1. \end{equation} From the Carnot theorem for the conic $\mathcal{C}$ and $\triangle A X C$ we obtain: \begin{equation}\label{j4} \frac{\overrightarrow{A D_2}}{\overrightarrow{D_2 X}} \cdot \frac{\overrightarrow{A A_2}}{\overrightarrow{A_2 X}} \cdot \frac{\overrightarrow{X F_1}}{\overrightarrow{F_1 C}} \cdot \frac{\overrightarrow{X C_1}}{\overrightarrow{C_1 C}} \cdot \frac{\overrightarrow{C B_1}}{\overrightarrow{B_1 A}} \cdot \frac{\overrightarrow{C B_2}}{\overrightarrow{B_2 A}}=1. \end{equation} By the Law of Sines we have: $$\overrightarrow{A_2 X} \sin \sphericalangle A_2 X C= \overrightarrow{A_2 C} \sin \sphericalangle B C C_1$$ and $$\overrightarrow{C_1 C} \sin \sphericalangle B C C_1=\overrightarrow{C_1 B} \sin \sphericalangle \beta.$$ From these two equations one can deduce: \begin{equation}\label{j5} \overrightarrow{A_2 X} \cdot \overrightarrow{C_1 C} \sin \sphericalangle A_2 X C = \overrightarrow{A_2 C} \cdot \overrightarrow{C_1 B} \sin \sphericalangle \beta. \end{equation} Similarly, the following equality holds: \begin{equation}\label{j6} \overrightarrow{A A_2} \cdot \overrightarrow{X C_1 } \sin \sphericalangle C_1 X A = \overrightarrow{B A_2 } \cdot \overrightarrow{A C_1} \sin \sphericalangle \beta. \end{equation} From (\ref{j5}) and (\ref{j6}) (using the equality $\sphericalangle C_1 X A=\sphericalangle A_2 X C$) we conclude that: \begin{equation}\label{j7} \frac{\overrightarrow{A A_2}}{\overrightarrow{A_2 X}} \cdot \frac{\overrightarrow{X C_1}}{\overrightarrow{C_1 C}}=\frac{\overrightarrow{B A_2}}{\overrightarrow{A_2 C}}\cdot\frac{\overrightarrow{A C_1}}{\overrightarrow{C_1 B}}. \end{equation} Now, from the relations (\ref{j3}), (\ref{j4}) and (\ref{j7}) we have: $$\frac{\overrightarrow{C R'}}{\overrightarrow{R' A}}=\frac{\overrightarrow{B A_2}}{\overrightarrow{A_2 C}}\cdot\frac{\overrightarrow{A C_1}}{\overrightarrow{C_1 B}}\cdot \frac{\overrightarrow{C B_1}}{\overrightarrow{B_1 A}} \cdot \frac{\overrightarrow{C B_2}}{\overrightarrow{B_2 A}}.$$ But the Carnot relation (\ref{r}) implies $$\frac{\overrightarrow{C R'}}{\overrightarrow{R' A}}=-\frac{\overrightarrow{C_2 B}}{\overrightarrow{A C_2}} \cdot \frac{ \overrightarrow{A_1 C}}{\overrightarrow{B A_1}},$$ which together with (\ref{j1}) gives $R\equiv R'\equiv B_3$. The proof for the other points is analogous.
\hfill $\square$ \begin{figure}[h!h!h!] \centerline{\includegraphics[width=\textwidth]{Carnotprave.pdf}} \caption{Theorem \ref{t2}} \label{carnot3} \end{figure} From Pascal's theorem we obtain the following result (see Figure \ref{carnot3}): \begin{theorem} \label{t2} The following 8 triples of points ($A_3$, $B_3$, $C_3$), ($D_3$, $E_3$, $C_4$), ($A_3$, $E_4$, $F_3$), ($D_3$, $B_3$, $F_4$), ($A_4$, $E_3$, $F_4$), ($D_4$, $E_3$, $C_3$), ($D_4$, $B_4$, $F_3$), and ($A_4$, $B_4$, $C_4$) are collinear. \end{theorem} In the sequel, we encounter relations of higher order. We use the theorem of Carnot to prove that certain points in the configuration lie on the same conic. \begin{theorem}\label{t4} The points $D_3$, $D_4$, $E_3$, $E_4$, $F_3$ and $F_4$ lie on the same conic $\mathcal{D}$. \end{theorem} \begin{figure}[h!h!h!] \centerline{\includegraphics[width=0.8 \textwidth]{Carnot4.pdf}} \caption{Theorem \ref{t4}} \label{carnot4} \end{figure} \noindent \textbf{Proof:} From the proof of Theorem \ref{t1} we also deduce that: $$\frac{\overrightarrow{C E_3}}{\overrightarrow{E_3 A}}=-\frac{\overrightarrow{C_1 B}}{\overrightarrow{A C_1}} \cdot \frac{ \overrightarrow{A_1 C}}{\overrightarrow{B A_1}},\, \frac{\overrightarrow{A F_3}}{\overrightarrow{F_3 B}}=-\frac{\overrightarrow{A_1 C}}{\overrightarrow{B A_1}} \cdot \frac{ \overrightarrow{B_1 A}}{\overrightarrow{C B_1}},\,\frac{\overrightarrow{B D_3}}{\overrightarrow{D_3 C}}=-\frac{\overrightarrow{B_1 A}}{\overrightarrow{C B_1}} \cdot \frac{ \overrightarrow{C_1 B}}{\overrightarrow{A C_1}},$$ $$\frac{\overrightarrow{C E_4}}{\overrightarrow{E_4 A}}=-\frac{\overrightarrow{C_2 B}}{\overrightarrow{A C_2}} \cdot \frac{ \overrightarrow{A_2 C}}{\overrightarrow{B A_2}},\,\frac{\overrightarrow{A F_4}}{\overrightarrow{F_4 B}}=-\frac{\overrightarrow{A_2 C}}{\overrightarrow{B A_2}} \cdot \frac{ \overrightarrow{B_2 A}}{\overrightarrow{C B_2}},\,\frac{\overrightarrow{B D_4}}{\overrightarrow{D_4 C}}=-\frac{\overrightarrow{B_2 A}}{\overrightarrow{C B_2}} \cdot \frac{ \overrightarrow{C_2 B}}{\overrightarrow{A C_2}}.$$ Then the following holds by (\ref{r}): $$\frac{\overrightarrow{C E_3}}{\overrightarrow{E_3 A}}\cdot \frac{\overrightarrow{C E_4}}{\overrightarrow{E_4 A}}\cdot \frac{\overrightarrow{A F_3}}{\overrightarrow{F_3 B}}\cdot \frac{\overrightarrow{A F_4}}{\overrightarrow{F_4 B}}\cdot \frac{\overrightarrow{B D_3}}{\overrightarrow{D_3 C}}\cdot \frac{\overrightarrow{B D_4}}{\overrightarrow{D_4 C}}=1.$$ By the converse of Carnot's theorem, the points $D_3$, $D_4$, $E_3$, $E_4$, $F_3$ and $F_4$ lie on the same conic. \hfill $\square$ In the same fashion we prove that: \begin{theorem}\label{t5} The following 4 sextuples of points $(D_3, D_4, E_3, E_4, F_3, F_4)$,\\ $(A_3, A_4, B_3, B_4, F_3, F_4)$, $(A_3, A_4, E_3, E_4, C_3, C_4)$ and $(D_3, D_4, B_3, B_4, C_3, C_4)$ each lie on a conic. \end{theorem} \begin{figure}[h!h!h!] \centerline{\includegraphics[width=\textwidth]{carnot5.pdf}} \caption{Theorem \ref{t5}} \label{carnot5} \end{figure} \section{Bradley's Theorem} In this section we give an elementary proof of Bradley's conjecture \cite{Bradley}. The first proof, given by Zolt\'{a}n Szilasi in \cite{Szilasi}, used barycentric coordinates. We use a different approach and prove several other interesting facts about Carnot's configuration. Let $X_1$ be the intersection point of the lines $A A_1$ and $B B_1$, $X_2$ that of $B B_1$ and $C C_1$ and $X_3$ that of $C C_1$ and $A A_1$.
Let $Y_1$ be the intersection point of the lines $A A_2$ and $B B_2$, $Y_2$ that of $B B_2$ and $C C_2$ and $Y_3$ that of $C C_2$ and $A A_2$. Define $T_2$ as the intersection point of the lines $X_1 Y_3$ and $X_3 Y_1$. The points $T_3$ and $T_1$ are defined analogously. \begin{theorem} $T_2$ lies on the line $B C$, $T_3$ on $C A$ and $T_1$ on $A B$. \end{theorem} \noindent \textbf{Proof:} Let $T'$ be the intersection point of the lines $X_3 Y_1$ and $B C$ and let $T''$ be the intersection point of the lines $X_1 Y_3$ and $B C$. By the Menelaus theorem applied to $\triangle A B A_1$ and the line $C C_1$ we obtain: $$\frac{\overrightarrow{A X_3}}{\overrightarrow{X_3 A_1}}=-\frac{\overrightarrow{C B}}{\overrightarrow{A_1 C}} \cdot \frac{ \overrightarrow{A C_1}}{\overrightarrow{C_1 B}}.$$ The same reasoning for $\triangle A C A_2$ and the line $B B_2$ gives: $$\frac{\overrightarrow{A_2 Y_1}}{\overrightarrow{Y_1 A}}=-\frac{\overrightarrow{B_2 C}}{\overrightarrow{A B_2}} \cdot \frac{ \overrightarrow{B A_2}}{\overrightarrow{C B}}.$$ Then from the Menelaus theorem for $\triangle A A_1 A_2$ and the line $X_3 Y_1$ we get: \begin{equation}\label{t'}\frac{\overrightarrow{A_1 T'}}{\overrightarrow{T' A_2}}=-\frac{\overrightarrow{A B_2}}{\overrightarrow{A C_1}} \cdot \frac{ \overrightarrow{A_1 C}}{\overrightarrow{B_2 C}} \cdot \frac{ \overrightarrow{C_1 B}}{\overrightarrow{B A_2}}.\end{equation} In the same fashion we prove that: \begin{equation}\label{t"}\frac{\overrightarrow{A_1 T''}}{\overrightarrow{T'' A_2}}=-\frac{\overrightarrow{A C_2}}{\overrightarrow{A B_1}} \cdot \frac{ \overrightarrow{B_1 C}}{\overrightarrow{A_2 C}} \cdot \frac{ \overrightarrow{B A_1}}{\overrightarrow{C_2 B}}.\end{equation} By the relation (\ref{r}) we conclude that: $$\frac{\overrightarrow{A_1 T'}}{\overrightarrow{T' A_2}}=\frac{\overrightarrow{A_1 T''}}{\overrightarrow{T'' A_2}}, $$ so $T'\equiv T''\equiv T_2$. For the points $T_1$ and $T_3$ the proof is analogous. \hfill $\square$ The intersections of the opposite sides of the hexagon $X_3 Y_1 Y_2 Y_3 X_1 X_2$ are precisely $T_2$, $B$ and $C$ (for instance, $Y_1 Y_2 \cap X_1 X_2 = B B_2 \cap B B_1 = B$), and since $T_2$ lies on the line $B C$, these three points are collinear. Therefore, by the converse of Pascal's theorem we get (see Figure \ref{bradley1s}): \begin{figure}[h!h!h!] \centerline{\includegraphics[width=\textwidth]{bradley1.pdf}} \caption{Corollary \ref{bradley1}} \label{bradley1s} \end{figure} \begin{corollary}\label{bradley1} The points $X_1$, $X_2$, $X_3$, $Y_1$, $Y_2$ and $Y_3$ lie on the same conic. \end{corollary} \begin{figure}[h!h!h!] \centerline{\includegraphics[width=0.7\textwidth]{bradley2.pdf}} \caption{Corollary \ref{bradley2}} \label{bradley2s} \end{figure} An immediate consequence of this fact is (see Figure \ref{bradley2s}): \begin{corollary} \label{bradley2} The points $T_1$, $T_2$ and $T_3$ lie on the same line. \end{corollary} Bradley's theorem \ref{glavna} directly follows from Corollary \ref{bradley1} and the Poncelet triangle theorem \cite[Theorem 5, p.184-185]{Pra}, see Figure \ref{bradley3s}. \begin{figure}[h!h!h!] \centerline{\includegraphics[width=0.75\textwidth]{bradley3.pdf}} \caption{Bradley's theorem} \label{bradley3s} \end{figure} \section{Proof of Theorem \ref{glavnag}} In this section we give the proof of Theorem \ref{glavnag}. The proof illustrates a nice application of the Menelaus and the Carnot theorems. \noindent \textbf{Proof:} We prove that the points $23$, $24$, $32$, $34$, $42$, $43$ lie on conic $\mathcal{C}_1$. The proof for the other conics is analogous. \begin{figure}[h!h!h!]
\centerline{\includegraphics[width=0.8\textwidth]{bradleyq.pdf}} \caption{Bradley's theorem about quadrilaterals} \label{bradleyqs} \end{figure} Let $X$ be the intersection point of the lines $A D$ and $B C$. We apply the Menelaus theorem for $\triangle X D C$ and the lines $S W$, $R U$, $S V$ and $V W$ (that is, the lines $S P$, $Q R$, $R S$ and the axis of perspective) and get: \begin{equation}\label{g1} \frac{\overrightarrow{X W}}{\overrightarrow{W D}}\cdot \frac{\overrightarrow{D (34)}}{\overrightarrow{(34) C}}\cdot \frac{\overrightarrow{C (24)}}{\overrightarrow{(24) X}}=-1, \end{equation} \begin{equation}\label{g2} \frac{\overrightarrow{X (42)}}{\overrightarrow{(42) D}}\cdot \frac{\overrightarrow{D (32)}}{\overrightarrow{(32) C}}\cdot \frac{\overrightarrow{C U}}{\overrightarrow{U X}}=-1, \end{equation} \begin{equation}\label{g3} \frac{\overrightarrow{X (43)}}{\overrightarrow{(43) D}}\cdot\frac{\overrightarrow{D V}}{\overrightarrow{V C}}\cdot \frac{\overrightarrow{C (23)}}{\overrightarrow{(23) X}}=-1, \end{equation} \begin{equation}\label{g4} \frac{\overrightarrow{D W}}{\overrightarrow{W X}}\cdot \frac{\overrightarrow{X U}}{\overrightarrow{U C}}\cdot \frac{\overrightarrow{C V}}{\overrightarrow{V D}}=-1. \end{equation} After multiplication of (\ref{g1}), (\ref{g2}), (\ref{g3}) and (\ref{g4}), we obtain: $$ \frac{\overrightarrow{D (34)}}{\overrightarrow{(34) C}}\cdot \frac{\overrightarrow{C (24)}}{\overrightarrow{(24) X}}\cdot \frac{\overrightarrow{X (42)}}{\overrightarrow{(42) D}}\cdot \frac{\overrightarrow{D (32)}}{\overrightarrow{(32) C}}\cdot \frac{\overrightarrow{X (43)}}{\overrightarrow{(43) D}}\cdot \frac{\overrightarrow{C (23)}}{\overrightarrow{(23) X}}=1.$$ From the converse of Carnot's theorem it follows that the points $23$, $24$, $32$, $34$, $42$, $43$ lie on the same conic. \hfill $\square$ \begin{center}\textmd{Acknowledgements } \end{center} \medskip This research was done during my stay in Switzerland. The author wishes to thank his friends the Hajdin family: Katarina, Rade, Nikola, Luka and Matija for their generous hospitality and support.
\section{Background}\label{sec:background} \subsection{Elliptic PDEs on manifolds} The results of this article are motivated by the study of elliptic PDEs, possibly nonlinear, of the form \begin{equation}\label{eq:PDE1} F(x, \nabla u(x), D^2u(x)) = 0, \quad x \in\Omega. \end{equation} \begin{definition}[Elliptic equation]\label{def:elliptic} The equation~\eqref{eq:PDE1} is \emph{elliptic} if for all $(x, p) \in \Omega \times \mathbb{R}^{d}$, \begin{equation} F(x, p, A) \leq F(x, p, B), \quad \forall A, B \in S^{d} \ \text{s.t.} \ A \geq B \end{equation} where $A \geq B$ denotes that $A - B$ is a positive definite matrix. \end{definition} The specific focus of the present article is linear divergence structure operators of the form \begin{equation} \mathcal{L}[u](x) = -\text{div}_M(A(x)\nabla_Mu(x)),\end{equation} which are defined on a compact manifold $M$. These equations are elliptic if $A$ is a symmetric positive definite matrix. Given sets $\Omega\subset\mathbb{R}^2$, $\Omega'\subset M$ and local coordinates $y:\Omega\to\Omega'$ we can locally recast this as the following linear divergence structure operator in Euclidean space: \begin{equation}\label{eq:localcoords} \mathcal{L}[u] = -\frac{1}{\sqrt{\det G}}\nabla\cdot\left(\sqrt{\det G}A G^{-1} \nabla u\right)\end{equation} where $G$ is the metric tensor~\cite{cabre}. \subsection{Approximation of elliptic operators} To build approximation schemes for the PDE~\eqref{eq:PDE}, we begin with a point cloud $\mathcal{G}^h\subset M$ discretizing the underlying manifold and let \begin{equation}\label{eq:h} h = \sup\limits_{x\in M}\min\limits_{y\in\mathcal{G}^h} d_{M}(x,y) \end{equation} denote the characteristic (geodesic) distance between discretization nodes. In particular, this guarantees that any ball of radius $h$ on the manifold will contain at least one discretization point. In this manuscript, we will consider finite difference discretizations of the PDE~\eqref{eq:PDE} of the form \begin{equation}\label{eq:approx1} F^h[u](x) \equiv F^h \left( x,u(x),u(x)-u(\cdot) \right) = 0, \quad x\in \mathcal{G}^h. \end{equation} Critically, the approximation scheme~\eqref{eq:approx1} needs to be \emph{consistent} with the underlying PDE~\eqref{eq:PDE}. \begin{definition}[Consistency error]\label{consistency} We say that the approximation $F^h$ of the PDE operator $F$ has consistency error $\mathcal{O} \left( h^{\alpha} \right)$ if for every smooth $\phi\in C^{2,1}(M)$ there exists a constant $C$ such that \[ \|F^h(x,\phi(x),\phi(x)-\phi(\cdot)) - F \left(x, \nabla\phi(x), D^2 \phi(x) \right)\|_{L^\infty(\mathcal{G}^h)} \leq C h^{\alpha} \] for every sufficiently small $h>0$. \end{definition} \begin{remark} In this article, we assume solutions are $C^{2,1}$. It is also possible to design approximation schemes that depend on higher-order derivatives of the solution; indeed, this assumption is typically needed for schemes with higher-order consistency error ($\alpha>2$). The schemes analyzed in this article are required to satisfy an additional monotonicity assumption, which limits the consistency error to at most second order ($\alpha \leq 2$). See~\cite[Theorem~4]{ObermanSINUM}. \end{remark} Another concept that has proved important in the numerical analysis of fully nonlinear elliptic equations is \emph{monotonicity}~\cite{BSNum}. At its essence, monotone schemes reflect at the discrete level the elliptic structure of the underlying PDE.
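As a simple one-dimensional illustration of Definition~\ref{consistency}, consider the classical centered approximation of $-u''$, written in the form~\eqref{eq:approx1}. The following Python sketch is our own illustration (the test function and grid sizes are arbitrary choices): the scheme places the nonnegative coefficient $1/h^2$ on each of the differences $u(x)-u(x+h)$ and $u(x)-u(x-h)$, and the printed errors decrease by a factor of about four per halving of $h$, i.e., the consistency error is $\mathcal{O}(h^2)$.
\begin{verbatim}
import numpy as np

def Fh(phi, x, h):
    # centered scheme for -u'': nonnegative coefficients 1/h^2 multiply
    # the differences u(x) - u(x+h) and u(x) - u(x-h)
    return ((phi(x) - phi(x + h)) + (phi(x) - phi(x - h))) / h**2

phi = np.sin    # smooth test function with -phi''(x) = sin(x)
x = 0.7
for h in [0.1, 0.05, 0.025, 0.0125]:
    print(h, abs(Fh(phi, x, h) - np.sin(x)))
# the error decreases by a factor ~4 per halving of h, i.e. alpha = 2
\end{verbatim}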
Monotonicity allows one to establish key properties of the discretization including a discrete comparison principle. Even in the linear setting, monotonicity can play an important role in establishing well-posedness and stability of the approximation scheme~\eqref{eq:approx1}. \begin{definition}[Monotonicity]\label{def:monotone} The approximation scheme $F^h$ is \emph{monotone} if it is a non-decreasing function of its final two arguments. \end{definition} Closely related to monotonicity is the concept of a \emph{proper} scheme. \begin{definition} The finite difference scheme $F^h$ is \emph{proper} if it is an increasing function of its second argument. \end{definition} We note that any consistent, monotone scheme $F^h$ can be perturbed to a proper scheme by defining \[ G^h(x,u,p) = F^h(x,u,p) + \epsilon^hu \] where $\epsilon^h\to0$ as $h\to0$. Monotone, proper schemes satisfy a strong form of the discrete comparison principle~\cite[Theorem~5]{ObermanSINUM}. \begin{theorem}[Discrete comparison principle]\label{thm:discreteComparison} Let $F^h$ be a proper, monotone finite difference scheme and suppose that \[ F^h(x,u(x),u(x)-u(\cdot)) \leq F^h(x,v(x),v(x)-v(\cdot)) \] for every $x\in\mathcal{G}^h$. Then $u \leq v$. \end{theorem} Finally, we make a continuity assumption on the scheme in order to guarantee the existence of a discrete solution. \begin{definition}[Continuity]\label{def:continuous} The scheme $F^h$ is \emph{continuous} if it is continuous in its final two arguments. \end{definition} \begin{remark} We recall that the domain of the first argument of $F^h$ is the discrete set $\mathcal{G}^h$. Thus it is not meaningful to speak about continuity with respect to the first argument. \end{remark} Critically, continuous, monotone, and proper schemes always admit a unique solution~\cite[Theorem~8]{ObermanSINUM}. Moreover, under mild additional assumptions, it is easy to show that the solution can be bounded uniformly independent of $h$. \begin{lemma}[Solution bounds]\label{lem:properBounds} Suppose the PDE~\eqref{eq:PDE1} has a unique $C^{2,1}$ solution. Let $F^h$ be continuous, monotone, proper, and have consistency error $\mathcal{O}(h^\alpha)$. Suppose also that there exists a constant $C>0$, independent of $h$, such that for every $\delta>0$, \[ F^h(x,u+\delta,p) \geq F^h(x,u,p) + Ch^\alpha\delta. \] Then for every sufficiently small $h>0$, the scheme~\eqref{eq:approx1} has a unique solution $u^h$ that is uniformly bounded independent of $h$. \end{lemma} \begin{proof} Since $F^h$ is continuous, monotone, and proper, a solution $u^h$ exists by~\cite{ObermanSINUM}. Let $u$ be the exact solution of~\eqref{eq:PDE}. By consistency, we know that there exists a constant $K$, independent of $h$, such that \[-Kh^{\alpha} \leq F^h(x,u(x), u(x) - u(\cdot)) \leq Kh^{\alpha}.\] Now let $N$ be any constant and compute \begin{align*} F^h(x,u(x)+N, u(x) - u(\cdot)) &\geq F^h(x,u(x), u(x) - u(\cdot)) + Ch^{\alpha} N\\ &\geq (-K+CN)h^\alpha. \end{align*} Thus by choosing $N > K/C$, we find that \[ F^h(x,u(x)+N, u(x) - u(\cdot)) > 0 = F^h(x,u^h(x), u^h(x) - u^h(\cdot)). \] Then by the Discrete Comparison Principle~\ref{thm:discreteComparison} \[u+N \geq u^h.\] By an identical argument, we obtain \[ u-N \leq u^h. \] We conclude that \[ \|u^h\|_{L^\infty} \leq \|u\|_{L^\infty} + N \] and thus $u^h$ is uniformly bounded. \end{proof} \subsection{Tangent plane approximations} A variety of approaches are available for discretizing PDEs on manifolds.
Particularly simple are methods that allow the surface PDE to be approximated using schemes designed for PDEs in Euclidean space~\cite{ClosestPoint,TsaiManifolds}. Here we overview one particular approach, which can easily be used to design monotone approximation schemes that satisfy the assumptions of our main result on error bounds (Theorem~\ref{thm:mainconvergence}). Consider the PDE operator \begin{equation}\label{eq:PDEGen} -\text{div}_M(A(x)\nabla_Mu(x)), \quad x\in M \end{equation} at a particular point $x_0\in M$. We relate this to an equivalent PDE posed on the local tangent plane $\mathcal{T}_{x_0}$ through a careful choice of local coordinates. In general, local coordinates will introduce distortions to the differential operators. However, this problem was avoided in~\cite{HT_OTonSphere, HT_OTonSphere2} with the use of \emph{geodesic normal coordinates}, which preserve distance from the reference point $x_0$. In these coordinates the metric tensor is an identity matrix and the Christoffel symbols vanish at the point $x_0$. Given some neighbourhood $N_{x_0}\subset M$ of the point $x_0\in M$, we let $v_{x_0}:N_{x_0}\to\mathcal{T}_{x_0}$ denote geodesic normal coordinates. Because they are chosen to preserve distances from the point $x_0$, they satisfy \[ d_M(x,x_0) = \| v_{x_0}(x) - x_0 \| \] where $d_M$ represents the geodesic distance along $M$ and $\|\cdot\|$ the usual Euclidean distance on the tangent plane. We can now introduce a local projection of $u$ onto the relevant tangent plane $\mathcal{T}_{x_{0}}$ in a neighborhood of $x_0$ as follows \begin{equation}\label{eq:tangentFunction} \tilde{u}_{x_0}(z) = u\left( v_{x_0}^{-1}(z) \right). \end{equation} This allows us to re-express the PDE~\eqref{eq:PDEGen} at the point $x_0$ as an equivalent PDE on the local tangent plane. We define \begin{equation}\label{eq:Tangent} \tilde{\mathcal{L}}[\tilde{u}_{x_0}](z) = -\nabla \cdot \left( A(z) \nabla \tilde{u}_{x_{0}}(z) \right), \quad z \in \mathcal{T}_{x_{0}} \end{equation} where now $\nabla$ is the usual Euclidean differential operator. Because the particular choice of coordinates does not introduce distortions, the PDE operator will preserve its original form. In particular, \[ \mathcal{L}[u](x_0) = \tilde{\mathcal{L}}[\tilde{u}_{x_0}](x_0). \] The problem of approximating the PDE operator~\eqref{eq:PDEGen} at a point $x_0\in M$ is now reduced to the problem of approximating the operator~\eqref{eq:Tangent} at the point $x_0$ in the local tangent plane. This immediately allows one to make use of any existing method for designing monotone approximations of PDEs in Euclidean space. \subsection{Monotone approximation schemes} There is a growing body of literature on the design of monotone finite difference approximations for PDEs (both linear and nonlinear) in Euclidean space~\cite{benamou2014monotone, BenamouDuval, BFO_OTNum, mirebeau,chenwanlin, FroeseTransport, FroeseMeshfreeEigs, FO_MATheory, fastfinitedifference, FO_FilteredSchemes, HS_Quadtree, HamfeldtBVP2, hamfeldt2, HL_LagrangianGraphs, HL_ThreeDimensions, junliu, Nochetto_MAConverge, ObermanSINUM, ObermanEigenvalues}. Many of these are specifically constructed on Cartesian grids; however, recent work on generalized or meshfree finite difference methods demonstrates how these can be adapted to unstructured grids~\cite{FroeseMeshfreeEigs, Seibold}.
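In practice, the neighbors used by such schemes are first mapped to the tangent plane through the coordinates $v_{x_0}$ of the previous subsection. On the unit sphere, for instance, $v_{x_0}$ is realized by the logarithm map; the following Python sketch is our own illustration under the assumption $M = S^2$ with the round metric (it is not code from the cited works).
\begin{verbatim}
import numpy as np

def log_map_sphere(p, q):
    # geodesic normal coordinates of q in the tangent plane at p (unit sphere):
    # returns a tangent vector v at p with |v| = d_M(p, q)
    cos_t = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(cos_t)          # geodesic distance d_M(p, q)
    w = q - cos_t * p                 # tangential component of q at p
    n = np.linalg.norm(w)
    return np.zeros(3) if n < 1e-14 else (theta / n) * w

x0 = np.array([0.0, 0.0, 1.0])                    # reference point
x  = np.array([np.sin(0.3), 0.0, np.cos(0.3)])    # nearby point on the sphere
v  = log_map_sphere(x0, x)
print(np.linalg.norm(v))                          # 0.3 = d_M(x, x0)
\end{verbatim}
Once the neighbors of $x_0$ have been mapped to $\mathcal{T}_{x_0}$ in this way, the Euclidean constructions reviewed below apply verbatim on the tangent plane.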
We briefly review the procedure for designing monotone generalized finite difference methods for approximating linear divergence structure operators of the form \begin{equation}\label{eq:linearEuclidean} -\nabla\cdot (A(x)\nabla u(x))\end{equation} in Euclidean space. Generalization to non-divergence structure operators, many nonlinear elliptic equations, and first-order operators is straightforward~\cite{FroeseMeshfreeEigs, HT_OTonSphere2, Seibold}. Consider the problem of approximating~\eqref{eq:linearEuclidean} at a point $x_0$. It is natural to want to use values of $u$ at the ``nearest neighbors'' to accomplish this. Surprisingly, though, given any fixed stencil width, it is always possible to find a linear elliptic PDE operator that does not admit a consistent, monotone discretization on that stencil~\cite{Kocan,MotzkinWasow}. For general degenerate elliptic operators, it is sometimes necessary to allow the stencil to grow wider as the grid is refined in order to achieve both consistency and monotonicity. We attempt to discretize~\eqref{eq:linearEuclidean} at $x_0$ using points within some search neighborhood. To this end, we associate a search radius $r\geq h$ to the point cloud $\mathcal{G}^h$ and require $r\to0$ as $h\to0$. We define by \[ \mathcal{N}(x_0) = \{x\in\mathcal{G}^h \mid \|x-x_0\| < r\} \] the set of neighbors to the reference point $x_0$. A common choice is $r = \sqrt{h}$, which ensures that as the point cloud is refined ($h\to0$), the number of neighboring points grows without bound. For uniformly elliptic operators, a choice of $r = \mathcal{O}(h)$ may be sufficient. We notice that the PDE operator can be written in the form \begin{equation}\label{eq:opform} -\nabla\cdot(A\nabla u) = -\sum\limits_{k \in K} \frac{\partial}{\partial x_{k_1}}\left(a_{k}\frac{\partial u}{\partial x_{k_2}}\right) \end{equation} where \[ K = \{1, \ldots, d\}^2. \] This motivates us to seek a finite difference approximation of the form \begin{equation}\label{eq:fdform} -\nabla\cdot(A\nabla u)(x_0) \approx -\sum\limits_{k\in K}\sum\limits_{x\in\mathcal{N}(x_0)}\sum\limits_{y\in\mathcal{N}(x_0)}c_{k}(x,y) a_k(y)u(x).\end{equation} The monotonicity condition requires that the coefficient of $u(x)$ be non-positive for each $x\neq x_0$. This leads to the set of linear inequality constraints \[ \sum\limits_{k\in K}\sum\limits_{y\in\mathcal{N}(x_0)}c_k(x,y)a_k(y) \geq 0, \quad x \in \mathcal{N}(x_0) \backslash \{x_0\}. \] To achieve consistency, we Taylor expand the terms in~\eqref{eq:fdform} about the reference point $x_0$. We then compare the coefficients of each term with the desired operator~\eqref{eq:opform}, which leads to a system of linear equations that must be satisfied by the coefficients $c_k(x,y)$. In typical implementations, one possibility is to exploit the structure of the underlying PDE to set many of the coefficients $c_k(x,y)$ to zero \emph{a priori} and obtain closed form expressions for the (small) number of non-zero coefficients~\cite{FroeseMeshfreeEigs}. Another option is to use simple analytical and computational optimization tools to numerically determine the values of the coefficients and establish bounds needed to ensure consistency~\cite{HL_ThreeDimensions,Seibold}. \emph{Example.} As an illustrative example, which easily generalizes to higher dimensions, consider the problem of approximating the one-dimensional elliptic divergence-structure operator \begin{equation}\label{eq:1d} -(a(x)u'(x))' \end{equation} at the point $x_0 \in \mathbb{R}$.
Given a set of discretization points \[\ldots < x_{-2} < x_{-1} < x_0 < x_1 < x_2 < \ldots\] we define $h_i = x_i-x_0$. Note that for $x_i\in \mathcal{N}(x_0)$, we have $\abs{h_i} \leq r$. Then we seek an approximation of the form \[ L^h[u](x_0) \equiv -\sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}a(x_j)u(x_i). \] We first Taylor expand about $x_j$: \begin{align*} L^h[u](x_0)& = -\sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}a(x_j)\left(u(x_j) + (h_i-h_j)u'(x_j) + \frac{1}{2}(h_i-h_j)^2u''(x_j) + \mathcal{O}(r^3)\right). \end{align*} Next we multiply this out and Taylor expand the resulting products about the point $x_0$: \begin{align*} L^h[u](x_0)& = -\sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}\left((au)(x_0) + h_j(au)'(x_0) + \frac{1}{2}h_j^2(au)''(x_0) + \mathcal{O}(r^3)\right)\\ &\phantom{=}-\sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}(h_i-h_j)\left((au')(x_0) + h_j(au')'(x_0) + \mathcal{O}(r^2)\right)\\ &\phantom{=}-\sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}\frac{1}{2}c_{ij}(h_i-h_j)^2\left((au'')(x_0) + \mathcal{O}(r)\right). \end{align*} Finally, we compare this with the desired operator~\eqref{eq:1d} to obtain the following system of linear equations for the coefficients $c_{ij}$: \begin{equation}\label{eq:system}\begin{split} \sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij} = 0\\ \sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}h_j = 0\\ \sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}\frac{1}{2}c_{ij}h_j^2 = 0\\ \sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}(h_i-h_j) = 0\\ \sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}h_j(h_i-h_j) = 1\\ \sum\limits_{x_i\in\mathcal{N}(x_0)}\sum\limits_{x_j\in\mathcal{N}(x_0)}\frac{1}{2}c_{ij}(h_i-h_j)^2 = 0. \end{split}\end{equation} These are coupled to the monotonicity condition \begin{equation}\label{eq:mon1D} \sum\limits_{x_j\in\mathcal{N}(x_0)}c_{ij}a(x_j) \geq 0, \quad x_i\in\mathcal{N}(x_0)\backslash\{x_0\}. \end{equation} The constrained linear system~\eqref{eq:system}-\eqref{eq:mon1D} can be analyzed and solved numerically using optimization techniques as in~\cite{HL_ThreeDimensions,Seibold} or solved exactly as in~\cite{FroeseMeshfreeEigs}. Consider the special case of equally spaced nodes $x_i = ih$ for which \[ \mathcal{N}(x_0) = \{x_i \mid i\in\{-2,-1,0,1,2\}\}. \] The consistency and monotonicity conditions lead to an underdetermined system. One particular solution is \[ c_{2,1} = c_{-2,-1} = \frac{1}{4h^2}, \quad c_{0,\pm1} = -\frac{1}{4h^2} \] with $c_{ij} = 0$ otherwise. This leads to the approximation \begin{align*} (au')'(x_0) &\approx \frac{a(x_1)u(x_2) + a(x_{-1})u(x_{-2}) - a(x_1)u(x_0) - a(x_{-1})u(x_0)}{4h^2}\\ &= \frac{1}{2h}\left(a(x_1)\frac{u(x_2)-u(x_0)}{2h} - a(x_{-1})\frac{u(x_0)-u(x_{-2})}{2h}\right), \end{align*} which has a simple interpretation in terms of standard centered differences. \section{Empirical Convergence Rates in One Dimension}\label{sec:empirical} This section will consider the very simple example of Laplace's equation on the one-dimensional torus $\mathbb{T}^1$: \begin{equation}\label{eq:torus1D} \begin{cases} -u''(x) = 0, & x \in \mathbb{T}^1\\ u(0) = 0, \end{cases} \end{equation} which has the trivial solution $u(x) = 0$. 
We use this toy problem to demonstrate several surprising properties of consistent and monotone approximations on compact manifolds, which motivate and validate the main results presented in the remainder of this article. In particular, we observe that: \begin{enumerate} \item[1.] Consistent, monotone, proper schemes need not converge to the true solution unless the solvability condition~\eqref{eq:solvability} is carefully taken into account. \item[2.] Typical approaches for proving convergence rates for linear elliptic PDEs with Dirichlet boundary conditions fail on compact manifolds. \item[3.] Actual error bounds achieved by convergent schemes can be asymptotically worse than the truncation error of the finite difference approximation. \item[4.] A simple consistent scheme for the gradient need not produce a convergent approximation of the gradient when applied to a numerically obtained solution. \end{enumerate} \subsection{A non-convergent scheme} We begin by describing a natural ``textbook'' approach to attempting to solve~\eqref{eq:torus1D} numerically, which does not lead to a convergent scheme. Consider the uniform discretization of the one-dimensional torus \[ x_i = i h, \quad i = 0, \ldots, n-1 \] where $h = 1/n$. Let $L^h$ be a consistent, monotone approximation of the Laplacian and let $f^h$ be a consistent approximation of the right-hand side (which is zero in this case). We would like to solve the discrete system \begin{equation}\label{eq:torus1D_discrete1} L^h(x_i,u^h(x_i),u^h(x_i)-u^h(\cdot)) = f^h(x_i), \quad i = 0, \ldots, n-1. \end{equation} However, this does not enforce the additional uniqueness constraint $u^h(0) = 0$. Adding this as an additional equation leads to an over-determined system. Instead, a natural approach is to replace the equation~\eqref{eq:torus1D_discrete1} at $x_0 = 0$ with this additional constraint. This leads to the system \begin{equation}\label{eq:torus1D_discrete2} \begin{cases} L^h(x_i,u^h(x_i),u^h(x_i)-u^h(\cdot)) = f^h(x_i), \quad i = 1, \ldots, n-1\\ u^h(x_0) = 0. \end{cases} \end{equation} As a specific implementation, we consider a wide-stencil approximation of the Laplacian, which mimics the type of scheme that is often necessary for monotonicity in higher dimension~\cite{Kocan,MotzkinWasow}. We also make the scheme proper, which ensures that the system~\eqref{eq:torus1D_discrete2} has a unique solution~\cite{ObermanSINUM}. Let $n=4^k$ be a perfect square (where $k\in\mathbb{N}$). We will build schemes with stencil width $\sqrt{n} = 2^k$. Define \begin{equation}\label{eq:torus1D_laplacian} L^h(x_i,u(x_i),u(x_i)-u(\cdot)) = -\frac{u(x_{i+\sqrt{n}}) + u(x_{i-\sqrt{n}}) - 2u(x_i)}{nh^2} + h(1+x_i)u(x_i) \end{equation} and \begin{equation} f^h(x_i) = h. \end{equation} The resulting approximation~\eqref{eq:torus1D_discrete1} is consistent with~\eqref{eq:torus1D}, monotone, and proper. The use of wide stencils degrades the truncation error of the usual centered scheme from $\mathcal{O}(h^2)$ to $\mathcal{O}(h)$, which is of the same order as the consistency error introduced by the proper term and the approximation of the right-hand side. Nevertheless, the discrete solution obtained by solving the system~\eqref{eq:torus1D_discrete2} does \emph{not} converge to the true solution of~\eqref{eq:torus1D}. 
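To make this failure mode reproducible, the following minimal Python sketch is our own implementation of the construction above (the dense linear solve and the variable names are illustrative choices). It assembles the wide-stencil operator~\eqref{eq:torus1D_laplacian} with periodic indexing, replaces the equation at $x_0$ by $u^h(x_0)=0$ as in~\eqref{eq:torus1D_discrete2}, and reports the maximum error against the exact solution $u=0$.
\begin{verbatim}
import numpy as np

def textbook_scheme_error(k):
    # scheme (eq:torus1D_discrete2): n = 4**k grid points,
    # wide stencil of width m = sqrt(n) = 2**k
    n, m = 4**k, 2**k
    h = 1.0 / n
    x = h * np.arange(n)
    A = np.zeros((n, n))
    rhs = np.full(n, h)                        # f^h(x_i) = h
    for i in range(1, n):
        A[i, i] = 2.0 / (n * h**2) + h * (1 + x[i])
        A[i, (i + m) % n] = -1.0 / (n * h**2)  # periodic (torus) indexing
        A[i, (i - m) % n] = -1.0 / (n * h**2)
    A[0, 0], rhs[0] = 1.0, 0.0                 # replace the PDE by u(x_0) = 0
    u = np.linalg.solve(A, rhs)
    return np.abs(u).max()                     # exact solution is u = 0

for k in range(2, 6):
    print(4**k, textbook_scheme_error(k))
# consistent with the discussion above, this error does not tend to zero
\end{verbatim}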
\begin{figure}% \subfigure[]{\includegraphics[width=0.45\textwidth]{torus1D_errorNoSolvability}\label{fig:error1}} \subfigure[]{\includegraphics[width=0.45\textwidth]{torus1D_truncation}\label{fig:truncation}} \caption{\subref{fig:error1}~Maximum error and \subref{fig:truncation}~effective maximum truncation error in the solution of~\eqref{eq:torus1D_discrete2}.}% \label{fig:torus1D_noConvergence}% \end{figure} An issue that arises in this approach is that even though $L^h$ and $f^h$ are consistent with the original equation, they are not designed in a way that attempts to mimic the solvability condition~\eqref{eq:solvability} at the discrete level. As a result, all the work of imposing this compatibility condition must be made up for at the single point $x_0=0$ where no approximation of the Laplacian is explicitly enforced in~\eqref{eq:torus1D_discrete2}. This is evident in Figure~\ref{fig:truncation}, which plots the value of $L^h(x_0,u^h(x_0),u^h(x_0)-u^h(\cdot))$ (the ``effective'' truncation error of the scheme). This does not converge to zero as the grid is refined. In other words, the failure to incorporate the solvability condition at the discrete level has led to a scheme that is effectively inconsistent. Enforcing a solvability condition at the discrete level is not straightforward: the discrete condition may not be known explicitly, and in many problems even the continuous solvability condition is not known explicitly~\cite{HL_LagrangianGraphs}. The solution we propose and analyze in this work is to automatically ``spread out'' the effects of the solvability condition by first solving a discrete system that is consistent with Laplace's equation at every grid point, then enforcing the uniqueness constraint in a second step. The resulting procedure is \begin{equation}\label{eq:torus1D_discrete3} \begin{cases} L^h(x_i,v^h(x_i),v^h(x_i)-v^h(\cdot)) = f^h(x_i), & i = 0, \ldots, n-1\\ u^h(x_i) = v^h(x_i) - v^h(x_0), & i = 0, \ldots, n-1. \end{cases} \end{equation} We notice that the resulting discrete solution satisfies the system \begin{equation}\label{eq:torus1D_discrete4} L^h(x_i,u^h(x_i),u^h(x_i)-u^h(\cdot)) = f^h(x_i) - h(1+x_i)v^h(x_0), \quad i = 0, \ldots, n-1. \end{equation} This is consistent at all grid points since the first-step solution $v^h$ is uniformly bounded (Lemma~\ref{lem:properBounds}). Moreover, the resulting solution automatically satisfies the uniqueness condition $u^h(0) = 0$ by construction. \subsection{The Dirichlet Problem} We are interested in establishing error bounds for solutions of~\eqref{eq:torus1D_discrete4} (and, of course, generalizations to non-trivial higher-dimensional problems). To gain intuition and inspiration, we first review a standard approach to establishing error bounds for monotone schemes approximating the Dirichlet problem. Consider as an example Poisson's equation with Dirichlet boundary conditions on a domain $\Omega\subset\mathbb{R}^d$. \begin{equation} \begin{cases} - \Delta u(x) + f(x) = 0, & x \in \Omega \\ u(x) - g(x) = 0, & x \in \partial \Omega. \end{cases} \end{equation} Suppose, in addition, that we have a consistent, monotone discretization scheme \begin{equation} \begin{cases} L^h(x,u^h(x),u^h(x)-u^h(\cdot)) + f^h(x) = 0, & x \in \Omega\cap\mathcal{G}^h \\ u^h(x) - g(x) = 0, & x \in \partial \Omega \cap\mathcal{G}^h \end{cases} \end{equation} with truncation error on the exact solution given by \[ L^h(x,u(x),u(x)-u(\cdot)) + f^h(x) = \tau^h(x), \quad \abs{\tau^h(x)} \leq Ch^\alpha. 
\] Let $z^h = u - u^h$ denote the solution error. We notice that $z^h$ satisfies the discrete system \begin{equation}\label{eq:errorEqn} \begin{cases} L^h(x,z^h(x),z^h(x)-z^h(\cdot)) = \tau^h(x), & x \in \Omega\cap\mathcal{G}^h\\ z^h(x) = 0, & x \in \partial\Omega\cap\mathcal{G}^h. \end{cases} \end{equation} If the discrete linear operator and the underlying grid are sufficiently structured, we may be able to explicitly determine its eigenvectors and eigenvalues. In this case, we immediately obtain error bounds via \begin{equation}\label{eq:matrixNorm} \|z^h\| \leq \|(L^h)^{-1}\| \|\tau^h\|. \end{equation} If the discrete problem does not have a simple enough structure, we can instead choose some bounded $w$ such that \[ \begin{cases} L^h(x,w(x),w(x)-w(\cdot)) \geq 1, & x \in \Omega\cap\mathcal{G}^h \\ w(x) = 0, & x \in \partial \Omega \cap\mathcal{G}^h. \end{cases} \] This can always be accomplished for a consistent approximation of a well-posed PDE. For example, we may choose $w$ to be the solution of the homogeneous Dirichlet problem \begin{equation}\label{eq:wDirichlet} \begin{cases} -\Delta w(x) = \frac{3}{2}, & x \in \Omega\\ w(x) = 0, & x \in \partial\Omega. \end{cases} \end{equation} Now we define the auxiliary grid functions \[ v^h_\pm(x) = \pm z^h(x) - w(x)\,\| L^h(x,z^h(x),z^h(x)-z^h(\cdot)) \|_{L^\infty(\Omega\cap\mathcal{G}^h)}. \] We notice that \[L^h(x,v^h_\pm(x),v^h_\pm(x)-v^h_\pm(\cdot)) \leq 0, \quad x \in \Omega\cap\mathcal{G}^h. \] Applying the discrete maximum principle, we obtain \begin{align*} \pm z^h(x) - & w(x)\,\| L^h(x,z^h(x),z^h(x)-z^h(\cdot)) \|_{L^\infty(\Omega\cap\mathcal{G}^h)} \\ &\leq \max\limits_{x\in\partial\Omega\cap\mathcal{G}^h}\left\{\pm z^h(x) - w(x)\,\| L^h(x,z^h(x),z^h(x)-z^h(\cdot)) \|_{L^\infty(\Omega\cap\mathcal{G}^h)}\right\}\\ &= 0. \end{align*} Thus we find that \begin{equation}\label{eq:errorDirichlet} \|z^h\|_{L^\infty(\Omega\cap\mathcal{G}^h)} \leq \|w\|_{L^\infty(\Omega)}\| L^h(x,z^h(x),z^h(x)-z^h(\cdot)) \|_{L^\infty(\Omega\cap\mathcal{G}^h)} \leq C\|w\|_{L^\infty(\Omega)} h^\alpha. \end{equation} In other words, the solution error is proportional to the truncation error of the underlying approximation scheme. \subsection{Error bounds on the 1D torus} It is natural to try to adapt the techniques used for the Dirichlet problem to error bounds for PDEs on manifolds without boundary. Indeed, we may attempt to interpret~\eqref{eq:torus1D} as the ``one-point'' Dirichlet problem \[ \begin{cases} -u''(x) = 0, & x \in \mathbb{T}^1 \backslash \{0\}\\ u(x) = 0, & x = 0. \end{cases} \] However, this is not a well-posed PDE and attempting to solve an analog of~\eqref{eq:wDirichlet} for the auxiliary function $w$ will not lead to a function that is smooth on the torus. We might attempt to carry this argument through at the discrete level, noticing that the solution $u^h$ of~\eqref{eq:torus1D_discrete3} does satisfy the following discrete version of a one-point Dirichlet problem \[ \begin{cases} L^h(x_i,u^h(x_i),u^h(x_i)-u^h(\cdot)) = f^h(x_i) - h(1+x_i)v^h(x_0), & i = 1, \ldots, n-1\\ u^h(x_0) = 0. \end{cases} \] The resulting discrete linear system involves a strictly diagonally dominant $M$-matrix. However, standard bounds on the inverse of such a matrix~\cite{ChengHuang_Mmatrix} yield the estimate \[ \|(L^h)^{-1}\|_\infty \leq \mathcal{O}\left(\frac{1}{h}\right), \] which cannot provide any convergence guarantees when substituted into~\eqref{eq:matrixNorm}. 
The degradation of this bound as $h\to0$ is due to the fact that, while the scheme~\eqref{eq:torus1D_discrete4} is proper, it is not uniformly proper as $h\to0$. Instead, we attempt to utilize the techniques outlined above, which requires us to construct a function $w^h$ satisfying the system \begin{equation}\label{eq:discreteW} \begin{cases} L^h(x_i,w^h(x_i),w^h(x_i)-w^h(\cdot)) = 1, & i = 1, \ldots, n-1\\ w^h(x_0) = 0. \end{cases} \end{equation} As this is a proper scheme, it does admit a unique solution. However, the numerically obtained solution is not uniformly bounded as the grid is refined (Figure~\ref{fig:w}) and the resulting estimate in~\eqref{eq:errorDirichlet} does not provide a useful error bound. \begin{figure}[htp] \includegraphics[width=0.45\textwidth]{torus1D_w} \caption{The maximum norm of the auxiliary function $w^h$ obtained from~\eqref{eq:discreteW}.} \label{fig:w} \end{figure} The approach we will use in \autoref{sec:convergence} to obtain error bounds involves effectively expanding the ``Dirichlet'' condition $u(x_0) = 0$ onto a larger set, which shrinks to a point as the grid is refined. A downside to this approach is that it degrades the error bounds from the size of the truncation error $h^\alpha$ to the asymptotically worse rate of $h^{\alpha/(d+1)}$. Surprisingly, though, our simple one-dimensional example indicates that this may be the best we can hope for. Consider again the discrete solution $u^h$ obtained by solving~\eqref{eq:torus1D_discrete4}, which has a truncation error of $\mathcal{O}(h)$ at all grid points on the one-dimensional torus and exactly satisfies the uniqueness condition $u^h(0) = 0$. We solve this system numerically and present the error in Figure~\ref{fig:converge1}. This example does display numerical convergence to the true solution. However, the observed accuracy is only $\mathcal{O}(\sqrt{h})$, precisely what is predicted by Theorem~\ref{thm:mainconvergence}. \begin{figure}[htp] \centering \includegraphics[width=0.45\textwidth]{torus1D_error} \caption{Maximum error in the solution of~\eqref{eq:torus1D_discrete4} on $\mathbb{T}^1$.} \label{fig:converge1} \end{figure} \subsection{Convergence of gradients} Finally, we recall the important and well-known fact in numerical analysis that pointwise convergence of an approximation does not imply convergence of gradients. In particular, if we consider the approximation $u^h$ obtained by solving~\eqref{eq:torus1D_discrete3}, we might try to obtain information about the solution derivative by using the standard centered difference scheme \[ u'(x_i) \approx \frac{u^h(x_{i+1})-u^h(x_{i-1})}{2h}. \] However, this fails to converge to the true solution derivative $u'(x) = 0$ as the grid is refined; see Figure~\ref{fig:deriv1D}. \begin{figure}% \centering \includegraphics[width=0.45\textwidth]{torus1D_ux}% \caption{Maximum error in a centered difference approximation of $u'(x)$ obtained from the solution of the scheme~\eqref{eq:torus1D_discrete3}.}% \label{fig:deriv1D}% \end{figure} This non-convergence is perhaps unsurprising given that $u^h$ is a low-accuracy approximation to $u$. Indeed, a closer look at the centered difference scheme reveals that \[ \frac{u^h(x_{i+1})-u^h(x_{i-1})}{2h} = \frac{u(x_{i+1})-u(x_{i-1}) + \mathcal{O}(\sqrt{h})}{2h}. \] The theoretical error of this approximation is potentially as large as $\mathcal{O}\left(h^{-1/2}\right)$, which is unbounded as $h\to0$. Nevertheless, the numerical solution $u^h$ does still contain information about the true solution derivative.
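A small synthetic experiment makes this point concrete; the sketch below is our own illustration, and the noise model together with the stencil width $m \approx h^{-3/4}$ (so that $mh \approx h^{1/4}$) are illustrative choices rather than the precise construction developed later. The exact solution $u \equiv 0$ is polluted by a random $\mathcal{O}(\sqrt{h})$ error: the centered difference at spacing $h$ then diverges like $h^{-1/2}$, while a difference taken over the wider baseline $mh$ converges.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for k in [6, 8, 10, 12, 14]:
    n = 2**k; h = 1.0 / n
    # u = 0 is the exact solution; u^h carries an O(sqrt(h)) error
    uh = np.sqrt(h) * rng.uniform(-1.0, 1.0, n)
    centered = (np.roll(uh, -1) - np.roll(uh, 1)) / (2 * h)
    m = max(1, round(h ** -0.75))        # stencil width: m*h ~ h^{1/4}
    wide = (np.roll(uh, -m) - np.roll(uh, m)) / (2 * m * h)
    print(h, np.abs(centered).max(), np.abs(wide).max())
# the centered-difference error grows like h^{-1/2};
# the wide-stencil error decays (here like h^{1/4})
\end{verbatim}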
In order to obtain this information, we will require approximations of the gradient that utilize sufficiently wide stencils to overcome potential high-frequency components in the solution error. This has the effect of making the size of the denominator in the finite difference approximation larger than the solution error, which leads to a convergent approximation as $h\to0$. This idea will be developed in \autoref{sec:mapping}. \section{Convergence Rate Bounds}\label{sec:convergence} We now establish error bounds for a class of consistent, monotone approximation schemes for~\eqref{eq:PDE},~\eqref{eq:uniqueness}. The main result is presented in Theorem~\ref{thm:mainconvergence}. The approach we take here is to construct barrier functions, which are shown to bound the error via the discrete comparison principle. Importantly, the error estimates we obtain are consistent with the empirical convergence rates observed in~\autoref{sec:empirical}. \subsection{Hypotheses on Geometry and PDE} We begin with the hypotheses on the geometry $M$ and PDE~\eqref{eq:PDE} that are required by our convergence result. \begin{hypothesis}[Conditions on PDE and manifold] \label{hyp:convergence} The Riemannian manifold $M$ and PDE~\eqref{eq:PDE} satisfy: \begin{enumerate} \item The manifold $M$ is a 2D compact and connected orientable surface without boundary. \item The matrix $A(x)\in C^2(M)$ is symmetric positive definite. \item The function $f(x)\in C^1(M)$ satisfies $\int_M f(x)\,dx = 0$. \end{enumerate} \end{hypothesis} \begin{remark} The compactness of the 2D manifold $M$ implies that it is geodesically complete, has injectivity radius strictly bounded away from zero, and that the sectional curvature (equivalent to the Gaussian curvature in 2D) is bounded from above and below~\cite{LeeManifolds}. \end{remark} \begin{remark} The assumption that $M$ is $2$-dimensional can quite easily be generalized, since nothing in our convergence proof fundamentally depends upon the dimension $d$ (though the convergence bound does vary with $d$). We emphasize, however, that the lack of boundary on our manifold $M$ is an essential point in this paper. \end{remark} \subsection{Approximation Scheme} Next, we describe the class of approximation schemes that are covered by our convergence result. The starting point of the scheme is the idea that the uniqueness constraint~\eqref{eq:uniqueness} should be posed at the point $x_0$, with a reasonable discrete approximation of the PDE posed on other grid points. However, as discussed in \autoref{sec:empirical}, this approach may not yield a convergent scheme. Instead, we will create a small cap around $x_0$ and fix the values of $u$ at all points in this cap. To construct an appropriate scheme, we begin with any finite difference approximation $L^h(x,u(x)-u(\cdot))$ of the PDE operator~\eqref{eq:PDEoperator} that is defined for $x\in\mathcal{G}^h$ and that satisfies the following hypotheses. \begin{hypothesis}[Conditions on discretization scheme] \label{hyp:scheme} We require the scheme $L^h$ to satisfy the following conditions: \begin{enumerate} \item $L^h$ is linear in its final argument. \item $L^h$ is monotone. \item There exist constants $C, \alpha>0$ such that for every smooth $\phi\in C^{2,1}(M)$ the consistency error is bounded by \[ \left\vert L^h(x,\phi(x)-\phi(\cdot)) - L(x,D\phi(x),D^2\phi(x)) \right\vert \leq C[\phi]_{C^{2,1}(M)} h^{\alpha}, \quad x \in \mathcal{G}^h.
\] \end{enumerate} \end{hypothesis} Next we define some regions in the manifold $M$ that will be used to create ``caps'' where $u$ is fixed in this scheme, and where additional conditions will be posed on barrier functions. Choose any $0<\gamma < \alpha$. Define the regions \begin{align*} b^h &= \left\{x \in M \mid d_{M}(x,x_0) < h^{\gamma} \right\}\\ S^h &= \left\{x \in M \mid h^{\gamma} \leq d_{M}(x,x_0) \leq 2h^{\gamma} \right\}\\ B^h &= M \setminus (b^h \cup S^h). \end{align*} See Figure~\ref{fig:setup}. \begin{figure}[htp] \centering \includegraphics[width=0.8\textwidth]{setup} \caption{The construction of a small cap about $x_0$ on the manifold $M$} \label{fig:setup} \end{figure} We then define the modified scheme $F^h$ as follows: \begin{equation}\label{eq:schemeDef} F^h(x,u(x),u(x)-u(\cdot)) \equiv \begin{cases} L^h(x,u(x)-u(\cdot)) + h^{\alpha} u(x) + f(x), & x \in B^h\cap\mathcal{G}^h \\ u(x), & x \in (S^h\cup b^h) \cap \mathcal{G}^h. \end{cases} \end{equation} \begin{remark} The condition $u(x) = 0,\, x \in S^h \cup b^h$ can be relaxed provided the resulting discrete solution has a uniformly bounded Lipschitz constant in this region and the values of $u$ are close to zero. Pinning the value to zero has the particularly strong effect of setting the local Lipschitz constant to zero. \end{remark} Note that the discretization $F^h$ is automatically proper by construction. Therefore, this scheme has a uniformly bounded solution by Lemma~\ref{lem:properBounds}. \begin{lemma}\label{thm:boundedness} Under the assumptions of Hypotheses~\ref{hyp:convergence} and~\ref{hyp:scheme}, the discrete scheme \begin{equation}\label{eq:scheme} F^h(x,u^h(x),u^h(x)-u^h(\cdot)) = 0\end{equation} has a unique solution $u^h$ that is bounded uniformly independent of $h$ for sufficiently small $h>0$. \end{lemma} \subsection{Convergence Rates} The idea in this section is to establish the convergence of the discrete solution of a monotone (and proper) scheme to the unique solution of the underlying PDE. We accomplish this by constructing a barrier function $\phi^h$ such that \begin{equation} F^h[-\phi^h] \leq F^h[u^h - u] \leq F^h[\phi^h] \end{equation} and then by invoking the discrete comparison principle to conclude that \begin{equation} -\phi^h \leq u^h-u \leq \phi^h. \end{equation} The barrier function can be chosen to satisfy $\phi^h = \mathcal{O} \left( h^{\alpha/(d+1)} \right)$. In this article, we explicitly treat the case $d=2$. In Section~\ref{sec:empirical}, we saw for $\mathbb{T}^1$ that the empirical convergence rate was $\mathcal{O} \left( h^{\alpha/2} \right)$, which is consistent with our theoretical error bound when $d=1$. The factor $(d+1)$ appears because there is a contribution of $d$ from the dimension of the underlying manifold (which arises due to the solvability condition~\eqref{eq:solvability}), and a contribution of $1$ from deriving a Lipschitz bound (also constrained by the solvability condition). Thus, we see that it is the solvability condition on the manifold without boundary that leads to the overall reduced convergence rate of a monotone and proper discretization. We state the main convergence result: \begin{thm}[Convergence Rate Bounds]\label{thm:mainconvergence} Under the assumptions of Hypotheses~\ref{hyp:convergence} and~\ref{hyp:scheme}, let $u \in C^{2,1}(M)$ be the solution of~\eqref{eq:PDE},~\eqref{eq:uniqueness}.
Then the discrete solution $u^h$ solving~\eqref{eq:scheme} satisfies \begin{equation} \left\Vert u^h - u \right\Vert_{L^\infty(\mathcal{G}^h)} \leq Ch^{\alpha / 3}, \end{equation} where $C>0$ is a constant independent of $h$. \end{thm} \begin{remark} This convergence rate can be extended to more general $d$-dimensional manifolds as follows: \begin{equation} \left\Vert u^h - u \right\Vert_{L^\infty(\mathcal{G}^h)} \leq Ch^{\alpha / (d+1)}. \end{equation} \end{remark} \subsubsection{Construction of barrier functions} We now define the barrier functions $\phi^h$ by solving a linear PDE on the manifold $M$ with an appropriately chosen (small) right-hand side $f^h$ that satisfies the solvability condition~\eqref{eq:solvability}. In particular, given a fixed $K_0>0$ (which will be determined later), we let $\phi^h$ be the solution of the PDE \begin{equation}\label{eq:phiplus} \begin{cases} \mathcal{L} [\phi^h](x) = f^h(x), \quad x \in M \\ \phi^h (x_0) = K_0 h^{\gamma}. \end{cases} \end{equation} We emphasize that while the barrier function ${\phi}^h$ depends on the grid parameter $h$, it is the solution of the PDE on the continuous level. Now we outline the construction of an appropriate function $f^h$; see Figures~\ref{fig:setup} and~\ref{fig:barrier} for two complementary visualizations of the resulting function $f^h(x)$. Let $K_1>0$ be a fixed constant, to be determined later. We let $\abs{U} = \int_U dx$ denote the volume of a set $U\subset M$ and note that in two dimensions, \[ \abs{B^h} = \mathcal{O}(1), \quad \abs{S^h},\,\abs{b^h} = \mathcal{O}(h^{2\gamma}). \] We define the following real numbers \begin{align*} Q^h &= \int_{S^h} \cos\left(\pi\frac{d(x,x_0)-h^\gamma}{h^\gamma}\right)\,dx\\ A^h &= \abs{B^h}\frac{2\abs{b^h}+\abs{S^h}+Q^h}{2\abs{B^h}+\abs{S^h}-Q^h}. \end{align*} We record the fact that $\abs{Q^h} \leq \abs{S^h} = \mathcal{O}(h^{2\gamma})$ and $A^h \geq ch^{2\gamma}$ for some $c>0$. Finally, we introduce a smooth cutoff function \[ \psi^h(t) = -\frac{K_1h^\alpha}{2}\left(\frac{1}{A^h} + \frac{1}{\abs{B^h}}\right)\cos\left(\pi\frac{t-h^\gamma}{h^\gamma}\right) + \frac{K_1h^\alpha}{2}\left(\frac{1}{\abs{B^h}}- \frac{1}{A^h}\right).\] Now we define the right-hand side function by \begin{equation}\label{eq:fh} f^h(x) = \begin{cases} \dfrac{K_1h^\alpha}{\left\vert B^h \right\vert}, & x \in B^h \\ \psi^h(d(x,x_0)), & x \in S^h \\ -\dfrac{K_1h^\alpha}{A^h}, & x \in b^h. \end{cases} \end{equation} In particular, this is chosen to be on the order of the local truncation error of~\eqref{eq:schemeDef} throughout most of the domain, but is allowed to take on larger values in the small cap $S^h\cup b^h$ in order to ensure the solvability condition is satisfied. See Figure~\ref{fig:barrier}. \begin{figure}[htp] \includegraphics[width=\textwidth]{barrier} \caption{The construction of the function $f^h$ from a ``side profile'' parametrized by distance from the point $x_0$} \label{fig:barrier} \end{figure} \subsubsection{Properties of the barrier function equation} Next we verify several key properties of the right-hand side function $f^h$, which will in turn be used to produce estimates on the barrier function $\phi^h$. \begin{lemma}[Mean-zero]\label{lem:meanZero} For every sufficiently small $h>0$, the function $f^h$ defined in~\eqref{eq:fh} satisfies the solvability condition~\eqref{eq:solvability} \[ \int_M f^h(x)\,dx = 0.
\] \end{lemma} \begin{proof} We can directly compute \begin{align*} \int_{M} f^h(x)dx &= \int_{B^h} \frac{K_1 h^{\alpha}}{\left\vert B^h \right\vert} dx + \int_{S^h} \psi^h \left( d(x,x_0) \right)dx - \int_{b^h} \frac{K_1 h^{\alpha}}{A^h} dx\\ &= K_1 h^{\alpha} \left( 1 - \frac{Q^h}{2} \left(\frac{1}{A^h} + \frac{1}{\left\vert B^h \right\vert} \right) + \frac{\left\vert S^h \right\vert}{2} \left(\frac{1}{\left\vert B^h \right\vert} - \frac{1}{A^h} \right)- \frac{\left\vert b^h \right\vert}{A^h} \right)\\ &=\frac{K_1 h^{\alpha}}{A^h \left\vert B^h \right\vert} \left( A^h \left\vert B^h \right\vert - \frac{Q^h}{2} \left(\left\vert B^h \right\vert + A^h\right) + \frac{\abs{S^h}}{2} \left( A^h - \left\vert B^h \right\vert\right) - \left\vert b^h \right\vert \left\vert B^h \right\vert \right)\\ &= \frac{K_1 h^{\alpha}}{A^h \left\vert B^h \right\vert} \left(\frac{A^h}{2}\left(2\abs{B^h}+\abs{S^h}-Q^h\right) - \frac{\abs{B^h}}{2}\left(2\abs{b^h}+\abs{S^h}+Q^h\right)\right). \end{align*} Then by substituting in the value of $A^h$, we obtain \[\int_{M} f^h(x)dx = 0. \] \end{proof} \begin{lemma}[Regularity of right-hand side]\label{lem:fhC1} For every sufficiently small $h>0$, $f^h \in C^1(M)$. \end{lemma} \begin{proof} First we recall that $f^h$ is constant in the regions $b^h$ and $B^h$ respectively. In the region $S^h$, we can easily verify that \[ \lim\limits_{d(x,x_0) \downarrow h^{\gamma}} \psi^h \left( d(x,x_0) \right) = -\frac{K_1 h^{\alpha}}{A^h}, \quad \lim\limits_{d(x,x_0) \uparrow 2h^{\gamma}} \psi^h \left( d(x,x_0) \right) = \frac{K_1 h^{\alpha}}{\left\vert B^h \right\vert},\] which coincide with the values in $b^h$ and $B^h$ respectively. Next, we note that \[ \frac{d}{dt} \psi^{h} (t) = \frac{\pi K_1 h^{\alpha}}{2 h^{\gamma}} \left( \frac{1}{A^{h}} + \frac{1}{\left\vert B^{h} \right\vert} \right) \sin \left( \pi \frac{t - h^{\gamma}}{h^{\gamma}} \right). \] Thus we readily verify that \[\lim\limits_{t \downarrow h^{\gamma}} \frac{d}{dt}\psi^{h}(t) = 0, \quad\lim\limits_{t \uparrow 2h^{\gamma}} \frac{d}{dt}\psi^{h}(t) = 0.\] Finally, we produce an explicit Lipschitz bound. \begin{align*} \left\vert \nabla_{M} \psi^h (d(x,x_0)) \right\vert &\leq \max\limits_t \left\vert \frac{d}{dt} \psi^h(t) \right\vert \\ &= \frac{\pi K_1 h^{\alpha}}{2 h^{\gamma}} \left( \frac{1}{A^{h}} + \frac{1}{\left\vert B^{h} \right\vert} \right). \end{align*} Using our previous observations about the size of $A^h$ and $\abs{B^h}$, we conclude that \[ \left\vert \nabla_{M} f^h(x) \right\vert \leq \frac{\pi K_1 h^{\alpha}}{2 h^{\gamma}} \left( \frac{1}{ch^{2\gamma}} + \frac{1}{\left\vert B^{h} \right\vert} \right) = \mathcal{O}(h^{\alpha-3\gamma}).\] \end{proof} \begin{lemma}[$L^2$ norm bounds]\label{lem:fhL2} There exists a constant $C>0$ such that for every sufficiently small $h>0$, \[ \|f^h\|_{L^2(M)} \leq Ch^{\alpha-\gamma}. \] \begin{proof} We can directly compute \begin{align*} \left\Vert f^h \right\Vert_{L^{2}(M)} &\leq \left( \int_{S^h \cup b^h} \left( \frac{K_1 h^{\alpha}}{A^h} \right)^{2}dx +\int_{B^h} \left( \frac{K_1 h^{\alpha}}{\left\vert B^h \right\vert} \right)^2 dx \right)^{1/2}\\ &= K_1 h^{\alpha} \left( \frac{\left\vert S^h \cup b^h \right\vert}{(A^h)^2} + \left\vert M \right\vert \right)^{1/2}\\ &\leq Ch^{\alpha}\left(\frac{h^{2\gamma}}{h^{4\gamma}} +\left\vert M \right\vert \right)^{1/2}. \end{align*} Here we have used the fact that $\left\vert S^h \cup b^h \right\vert = \mathcal{O}(h^{2\gamma})$ and $A^h \geq c h^{2\gamma}$ for some constant $c>0$. 
We conclude that \[ \left\Vert f^h \right\Vert_{L^{2}(M)} \leq C h^{\alpha - \gamma} \left( 1 + \mathcal{O} \left( h^{\gamma} \right) \right). \] \end{proof} \end{lemma} Using these properties of $f^h$, we are now able to establish existence of the barrier functions $\phi^h$. \begin{lemma}[Existence of barrier function]\label{lem:phiExists} There exists a function $\phi^h\in C^3(M)$ satisfying~\eqref{eq:phiplus}. \end{lemma} \begin{proof} Recall that $f^h\in C^1(M)$ for any $h>0$. Then by~\cite[Theorem 4.7]{aubin} we have the existence of a solution $\phi^h\in C^3(M)$ to the PDE \[\mathcal{L} \phi^h(x) = f^h(x),\] which is unique up to an additive constant. The condition $\phi^h(x_0) = K_0h^{\gamma}$ fixes the constant. \end{proof} \subsubsection{Local coordinate patches} Our goal is to use regularity results for linearly elliptic PDEs in Euclidean space in order to develop estimates for the barrier function $\phi^h$, which solves a linearly elliptic PDE on the manifold $M$. In order to do this, we will need the ability to locally re-express the barrier equation~\eqref{eq:phiplus} as a uniformly elliptic PDE in Euclidean space. \begin{lemma}[PDE on local coordinate patches]\label{lem:localPDE} Under the assumptions of Hypothesis~\ref{hyp:convergence}, there exists some $r>0$ such that for every $x_0\in M$ there exists a bounded region $\Omega\subset\mathbb{R}^2$ and set of coordinates $y:\Omega\to B(x_0,r)$ corresponding to a metric tensor $G\in C^2(M)$ such that the PDE operator~\eqref{eq:PDEoperator} can be expressed as \[ \mathcal{L}[\phi] = -\nabla\cdot\left((\det A)^{1/2} \nabla\phi\right). \] \end{lemma} \begin{proof} Let $x_0 \in M$ and fix any $r<r_I$ where $r_I$ is the injectivity radius of the manifold $M$. Then we can consider a bounded set $\Omega\subset\mathbb{R}^2$ and a set of coordinates $y:\Omega \to B(x_0,r)$. In local coordinates~\cite{cabre}, the PDE operator~\eqref{eq:PDEoperator} takes the form \[ \mathcal{L}[\phi] = \frac{-1}{\sqrt{\det G}}\nabla \cdot\left(\sqrt{\det G} AG^{-1}\nabla \phi\right), \quad y \in \Omega. \] Now we choose a local metric such that $G = (\det A)^{-1/2}A$. We note that $G\in C^2(M)$ is strictly positive definite since $A$ has both these properties. We note that $\det(G) = 1$ so that the PDE in local coordinates becomes \[ \mathcal{L}[\phi] = -\nabla\cdot\left((\det A)^{1/2} \nabla\phi\right). \] This is a uniformly elliptic operator since $A$ is positive definite. \end{proof} Importantly, because our manifold is compact, we can cover it with finitely many coordinate patches. \begin{lemma}[Finite covering of the manifold]\label{lem:geodesicballs} For every $r>0$, there exists a finite set of geodesic balls $\left\{ B_{r}^{i} \right\}_{i=1}^n$ such that \[ M \subseteq \bigcup\limits_{i=1}^n B_r^i. \] \end{lemma} \subsubsection{Properties of barrier function} We can now use standard regularity results for uniformly elliptic PDEs in Euclidean space to deduce key properties of the barrier function $\phi^h$. \begin{lemma}[Bounds on barrier function]\label{lem:phiBounded} There exists a constant $C>0$ such that for all sufficiently small $h>0$ \[ \|\phi^h\|_{L^\infty(M)} \leq C(h^\gamma + h^{\alpha-\gamma}). \] \end{lemma} \begin{proof} Since $\phi^h$ is continuous on a compact manifold, it achieves a maximum and minimum at some points $x, y\in M$. 
Then since $M$ is connected, we can use Lemmas~\ref{lem:localPDE}-\ref{lem:geodesicballs} to construct a finite set of balls of radius $r/4$: $\left\{B_{r/4}^i\right\}_{i=1}^n$ such that \[ x\in B_{r/4}^n, \quad y \in B_{r/4}^1, \quad B_{r/4}^i \cap B_{r/4}^{i+1} \neq \emptyset. \] On each corresponding (larger) ball $B_r^i$ of radius $r$, we can interpret the barrier equation~\eqref{eq:phiplus} as a uniformly elliptic divergence structure PDE on a local coordinate patch in $\mathbb{R}^2$. Now we denote \begin{equation} \bar{\phi}^h(x) \equiv \phi^h(x) - \min_{M} \phi^h. \end{equation} This is non-negative, which allows us to apply the de Giorgi-Nash-Moser Harnack inequality, which applies to PDEs in divergence form. Taking $q=4$ in~\cite[Theorems~8.17-8.18]{gilbargtrudinger}, there exists a constant $C>0$ such that for every $i=1, \ldots, n$ we have \[ \sup_{B_{r/4}^i} \bar{\phi}^h \leq C \left( \inf_{B_{r/4}^i} \bar{\phi}^h + \left\Vert f^h \right\Vert_{L^{2}(M)} \right). \] Recalling that $\bar{\phi}^h(y) = 0$, we find that \[ \sup_{B_{r/4}^1} \bar{\phi}^h \leq C_1 \|f^h\|_{L^2(M)}.\] Now we use this to obtain an estimate in the ball $B_{r/4}^2$, which overlaps with $B_{r/4}^1$. \begin{align*} \sup_{B_{r/4}^2} \bar{\phi}^h &\leq C_1 \left( \inf_{B_{r/4}^2} \bar{\phi}^h + \left\Vert f^h \right\Vert_{L^{2}(M)} \right) \\ &\leq C_1 \left( \sup_{B_{r/4}^1} \bar{\phi}^h + \left\Vert f^h \right\Vert_{L^{2}(M)} \right) \\ &\leq C_2\|f^h\|_{L^2(M)}. \end{align*} Continuing this chaining argument $n$ times, we find that \[ \|\bar{\phi}^h\|_{L^\infty(M)} = \bar{\phi}^h(x) \leq C_n\|f^h\|_{L^2(M)}. \] By Lemma~\ref{lem:fhL2}, $\|f^h\|_{L^2(M)} = \mathcal{O}(h^{\alpha-\gamma})$. We recall also that \[ \min\limits_M \phi^h \leq \phi^h(x_0) = K_0h^\gamma, \] which completes the proof. \end{proof} \begin{lemma}[Derivative bounds]\label{lem:phideriv} There exists a constant $C>0$ such that for all sufficiently small $h>0$ \[ \left\Vert \phi^h \right\Vert_{C^{1} (M)} +\left\Vert \phi^h \right\Vert_{C^{2} (M)} + \left[ \phi^h \right]_{C^{2,1} (M)} \leq C (h^\gamma+h^{\alpha-3\gamma}) . \] \end{lemma} \begin{proof} As in the previous lemma, we can use Lemmas~\ref{lem:localPDE}-\ref{lem:geodesicballs} to construct a finite set of balls of radius $r/2$: $\left\{B_{r/2}^i\right\}_{i=1}^n$ such that on each corresponding (larger) ball $B_r^i$ of radius $r$, we can interpret the barrier equation~\eqref{eq:phiplus} as a uniformly elliptic divergence structure PDE on a local coordinate patch in $\mathbb{R}^2$. We now apply a classical interior regularity result for uniformly elliptic PDE~\cite[Corollary~6.3]{gilbargtrudinger}. In particular, there exists a constant $C>0$ such that for every $i=1, \ldots, n$ we have \[ \left\Vert \phi^h \right\Vert_{C^{1} (B_{r/2}^i)} + \left\Vert \phi^h \right\Vert_{C^{2} (B_{r/2}^i)} + \left[ \phi^h \right]_{C^{2,1} (B_{r/2}^i)} \leq C \left( \left\Vert \phi^h \right\Vert_{L^{\infty}(B_r^i)} + \left\Vert f^h \right\Vert_{C^{0, 1}(B_r^i)} \right). \] Then a corresponding H\"{o}lder estimate over the entire manifold is obtained by summing the estimates over the $n$ coordinate patches. Thus we find that \[ \left\Vert \phi^h \right\Vert_{C^{1} (M)} + \left\Vert \phi^h \right\Vert_{C^{2} (M)} + \left[ \phi^h \right]_{C^{2,1} (M)} \leq C' \left( \left\Vert \phi^h \right\Vert_{L^{\infty}(M)} + \left\Vert f^h \right\Vert_{C^{0, 1}(M)} \right). 
\] We recall from Lemmas~\ref{lem:fhC1} and~\ref{lem:phiBounded} the estimates \[ \left\Vert f^h \right\Vert_{C^{0, 1}(M)} = \mathcal{O}(h^{\alpha-3\gamma}), \quad \left\Vert \phi^h \right\Vert_{L^{\infty}(M)} = \mathcal{O}(h^\gamma+h^{\alpha-\gamma}), \] which completes the proof. \end{proof} \subsubsection{Convergence rates} The preceding regularity results allow us to select a value for $\gamma$ (which determines the radius of the small cap about $x_0$) that ensures that the family of barrier functions $\phi^h$ is uniformly Lipschitz continuous. \begin{corollary}[Lipschitz bounds]\label{lem:lipschitz} Let $\gamma \leq \alpha/3$. Then there exists a constant $K_\phi >0$ such that for all sufficiently small $h>0$, \[ \abs{\nabla\phi^h}_{C^0(M)} \leq K_\phi. \] \end{corollary} The requirement of Corollary~\ref{lem:lipschitz}, combined with the fact that the barrier function scales like $h^{\gamma} + h^{\alpha-\gamma}$ (Lemma~\ref{lem:phiBounded}), suggests $\gamma=\alpha/3$ as an optimal choice. Now we prove the main result. \begin{proof}[Proof of Theorem \ref{thm:mainconvergence}] We substitute both the error $u^h-u$ and the barrier $\phi^h$ into the scheme~\eqref{eq:schemeDef} at all $x\in\mathcal{G}^h$. {\bf Case 1}: Let $x\in B^h\cap\mathcal{G}^h$. Then we can use the linearity of the scheme to compute \begin{align*} F^h(x,&u^h(x)-u(x),u^h(x)-u(x)-u^h(\cdot)+u(\cdot)) \\ &= \left(L^h(u^h(x)-u^h(\cdot)) + h^\alpha u^h(x) + f(x)\right)-\left(L^h(u(x)-u(\cdot)) + h^\alpha u(x)\right) \\ &\leq -L(x,\nabla u(x), D^2u(x)) + C\left[u(x)\right]_{C^{2,1}(M)}h^\alpha + h^\alpha\|u\|_{L^\infty(M)}\\ &= f(x) + C_1h^\alpha. \end{align*} Above, we have used the fact that $u$ solves the linear PDE~\eqref{eq:PDE} and $u^h$ solves the scheme~\eqref{eq:scheme}. Similarly, we can compute \begin{align*} F^h(x,\phi^h(x),\phi^h(x)-\phi^h(\cdot)) &= L^h(\phi^h(x)-\phi^h(\cdot)) + h^\alpha\phi^h + f(x)\\ &\geq f^h(x) - [\phi^h]_{C^{2,1}(M)}h^\alpha - \|\phi^h\|_{L^\infty(M)}h^\alpha + f(x)\\ &\geq \frac{K_1h^\alpha}{\abs{B^h}}-C_2h^\alpha(h^\gamma + h^{\alpha-3\gamma}+h^{\alpha-\gamma})+f(x) \end{align*} where we utilize the regularity bounds in Lemmas~\ref{lem:phiBounded}-\ref{lem:phideriv}. Making the particular choice of $\gamma = \alpha/3$ yields \[ F^h(x,\phi^h(x),\phi^h(x)-\phi^h(\cdot)) \geq h^\alpha\left(\frac{K_1}{\abs{B^h}}-C_2\right)-C_3h^{4\alpha/3} + f(x). \] Then if we make the choice $K_1 > (C_1+C_2+1)\abs{B^h}$ when we define the barrier functions in~\eqref{eq:phiplus}, we find that \[ F^h(x,u^h(x)-u(x),u^h(x)-u(x)-u^h(\cdot)+u(\cdot)) < F^h(x,\phi^h(x),\phi^h(x)-\phi^h(\cdot)) \] for sufficiently small $h>0$. {\bf Case 2}: Let $x\in (b^h\cup S^h)\cap\mathcal{G}^h$. Recalling that $u(x_0) = 0$ and that $u$ is Lipschitz continuous with constant $K_u$, we can bound $u$ in this region by \[ \abs{u(x)} \leq K_u d_M(x,x_0) \leq 2K_uh^\gamma. \] Since $u^h(x) = 0$ uniformly in this small cap, we have \[ F^h(x,u^h(x)-u(x),u^h(x)-u(x)-u^h(\cdot)+u(\cdot)) = u^h(x)-u(x) \leq 2K_uh^\gamma. \] Similarly, we recall that $\phi^h(x_0) = K_0h^\gamma$, so that by the Lipschitz bound of Corollary~\ref{lem:lipschitz}, \[ F^h(x,\phi^h(x),\phi^h(x)-\phi^h(\cdot)) = \phi^h(x) \geq (K_0-2K_\phi)h^\gamma. \] Then if we make the choice $K_0 > 2(K_u+K_\phi)$ in the definition of the barrier function~\eqref{eq:phiplus}, we find that \[ F^h(x,u^h(x)-u(x),u^h(x)-u(x)-u^h(\cdot)+u(\cdot)) < F^h(x,\phi^h(x),\phi^h(x)-\phi^h(\cdot)) \] for sufficiently small $h>0$.
Combining these two cases, we find that \[ F^h(x,u^h(x)-u(x),u^h(x)-u(x)-u^h(\cdot)+u(\cdot)) < F^h(x,\phi^h(x),\phi^h(x)-\phi^h(\cdot)) \] for all $x\in\mathcal{G}^h$ and sufficiently small $h>0$. This allows us to appeal to the Discrete Comparison Principle (Theorem~\ref{thm:discreteComparison}) to conclude that \[ u^h(x)-u(x) \leq \phi^h(x), \quad x \in \mathcal{G}^h. \] Combined with the maximum bound on $\phi^h$ (Lemma~\ref{lem:phiBounded}) applied to the case $\gamma=\alpha/3$, we obtain the result \[ u^h(x)-u(x) \leq Ch^{\alpha/3}. \] We can do the same procedure using $u(x)-u^h(x)$ to obtain the final result \[ \|u^h-u\|_{L^\infty(\mathcal{G}^h)} \leq Ch^{\alpha/3}. \] \end{proof} \section{Approximation of Solution Gradients}\label{sec:mapping} Having established convergence rates for the solution $u^h$ of the discrete operator, we can now use them to construct convergent approximations of the solution gradient, with explicit convergence rates. Given a function $u\in C^1(M)$ and its values on a discrete set of points $\mathcal{G}^h$, the design of approximations to its first derivatives is a well-studied ``textbook'' problem. However, as discussed in \autoref{sec:empirical}, consistent approximations for first derivatives may not produce correct results when they are applied to a discrete approximation $u^h$ instead of the limiting function $u$. Here we describe a framework for producing a family of convergent approximations of the gradient, which are based on a given discrete approximation $u^h$ with error bounds. We provide error bounds for the resulting gradient approximations; unsurprisingly, these are controlled by the $L^\infty$ error of the discrete approximation $u^h$. Combined with the convergence rate bounds of Theorem~\ref{thm:mainconvergence}, this immediately provides a provably convergent method for approximating the gradient of the solution to a divergence-structure linear elliptic PDE~\eqref{eq:PDE} on a compact manifold. Let $x_0\in M$ be any point on the manifold and let $\nu\in\mathcal{T}_{x_{0}}$ be a unit vector in the tangent plane. We focus on the construction of a discrete approximation to $\frac{\partial u(x_0)}{\partial \nu}$, the first directional derivative of $u$ in the direction $\nu$. By projecting into the tangent plane as described in \autoref{sec:background}, this is equivalent to constructing convergent approximations of a first directional derivative in $\mathbb{R}^d$. We will consider finite difference approximations of the form \begin{equation}\label{eq:fd1} \mathcal{D}_\nu u(x_0) = \frac{1}{r}\sum\limits_{i=1}^k a_i (u(x_i)-u(x_0)) \end{equation} where $x_i\in\mathcal{G}^h$ are discretization points satisfying $\abs{x_i-x_0} = \mathcal{O}(r)$ and $r \geq h$ denotes the stencil width of this approximation. We make the following assumptions on the discrete solution $u^h$ and the gradient approximation~\eqref{eq:fd1}. \begin{hypothesis}[Conditions on gradient approximation] \label{hyp:gradient} We make the following assumptions on the approximations: \begin{enumerate} \item There exists $p>0$ such that at every point $x\in\mathcal{G}^h$, the discrete approximation $u^h$ satisfies \[ u(x) = u^h(x) + \mathcal{O}(h^p). \] \item The stencil width satisfies $r \geq h$ for every $h>0$.
\item There exist constants $C_1, C_2>0$ such that for every $\mathcal{G}^h$ there exist points $x_1, \ldots, x_k \in \mathcal{G}^h$ satisfying \[ C_1 r \leq \abs{x_i-x_0} \leq C_2 r, \quad i = 1, \ldots, k.\] \item There exists $\beta>0$ such that the gradient approximation applied to the limiting function $u$ satisfies \[ \mathcal{D}_\nu u(x_0) = \frac{\partial u(x_0)}{\partial\nu} + \mathcal{O}(r^\beta). \] \item The coefficients in the gradient approximation satisfy $a_i = \mathcal{O}(1)$ as $h\to 0$. \end{enumerate} \end{hypothesis} Under these assumptions, we can immediately provide error bounds for the gradient approximation applied to the discrete solution $u^h$. Moreover, we can use these bounds to determine an optimal stencil width $r$ as a function of $h$. \begin{theorem}[Error bounds for gradient]\label{thm:errorGrad} Suppose the assumptions of Hypothesis~\ref{hyp:gradient} hold and choose $r = \mathcal{O}\left(h^{p/(\beta+1)}\right)$. Then \[ \frac{\partial u(x_0)}{\partial\nu} = \mathcal{D}_\nu u^h(x_0) + \mathcal{O}\left(h^{\frac{p\beta}{\beta+1}}\right). \] \end{theorem} \begin{corollary}[Error bounds for solution gradient]\label{cor:errorGrad} Assume the conditions of Hypotheses~\ref{hyp:convergence},~\ref{hyp:scheme},~\ref{hyp:gradient} are satisfied. Let $u$ be the solution of the PDE~\eqref{eq:PDE}, $u^h$ be the solution of the approximation scheme~\eqref{eq:schemeDef}, and $r = \mathcal{O}\left(h^{\frac{\alpha}{(d+1)(\beta+1)}}\right)$. Then \[ \frac{\partial u(x_0)}{\partial\nu} = \mathcal{D}_\nu u^h(x_0) + \mathcal{O}\left(h^{\frac{\alpha\beta}{(d+1)(\beta+1)}}\right). \] \end{corollary} \begin{proof}[Proof of Theorem~\ref{thm:errorGrad}] We can substitute directly into the approximation scheme to compute \begin{align*} \frac{\partial u(x_0)}{\partial\nu} &= \mathcal{D}_\nu u(x_0) + \mathcal{O}(r^\beta)\\ &= \frac{1}{r}\sum\limits_{i=1}^k a_i (u(x_i)-u(x_0)) + \mathcal{O}(r^\beta)\\ &= \mathcal{D}_\nu u^h(x_0) + \frac{1}{r} \mathcal{O}(h^p) + \mathcal{O}(r^\beta)\\ &= \mathcal{D}_\nu u^h(x_0) + \mathcal{O}\left(\frac{h^p}{h^{p/(\beta+1)}} + h^{\frac{p\beta}{\beta+1}}\right)\\ &= \mathcal{D}_\nu u^h(x_0) + \mathcal{O}\left(h^{\frac{p\beta}{\beta+1}}\right). \end{align*} \end{proof} We conclude by demonstrating with several examples that the assumptions of Hypothesis~\ref{hyp:gradient} are extremely reasonable. \subsection{One dimension} Consider first the simple case $d=1$ that was examined in \autoref{sec:empirical}. We would like to approximate the derivative by a standard centered difference approximation of the form \[ \mathcal{D}_x u^h(x_0) = \frac{u^h(x_0+r) - u^h(x_0-r)}{2r}. \] This centered scheme has second-order accuracy, so $\beta=2$. Recall that when we attempted to do this with a narrow stencil $r=h$, the result failed to correctly approximate the solution derivative as $h\to0$. Instead, we can consider the stencil width proposed by Corollary~\ref{cor:errorGrad} of \[ r = \mathcal{O}\left(h^{\alpha/6}\right). \] For the wide stencil scheme explored in~\eqref{eq:torus1D_discrete3}, the stencil width of $r = \mathcal{O}(h^{1/6})$ yields a convergent derivative approximation with error bounded by $\mathcal{O}(h^{1/3})$; a short numerical sketch of this one-dimensional recovery is given below. \subsection{Two dimensions} In higher dimensions, gradient approximations of arbitrary order are easily generated via Taylor expansion provided the function $u$ is sufficiently regular. We describe explicitly the case of a second-order approximation ($\beta=2$) in two dimensions ($d=2$), but the procedure is easily generalized.
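Before turning to the two-dimensional construction, we give a minimal numerical sketch of the one-dimensional recovery just described. The sketch is written in Python; the discrete solution is mimicked by a synthetic $\mathcal{O}(h^p)$ perturbation of a smooth function rather than by an actual solve of the scheme~\eqref{eq:scheme}, so all names and parameter values below are illustrative assumptions only.
\begin{verbatim}
# Minimal sketch: recovering u'(x0) from an O(h^p)-accurate discrete solution.
# The perturbation below is synthetic and only mimics the discretization error.
import numpy as np

def centered_difference(uh, x, x0, r):
    # Second-order centered difference (beta = 2) with stencil width r,
    # evaluated at the grid points nearest to x0 + r and x0 - r.
    i_plus = np.argmin(np.abs(x - (x0 + r)))
    i_minus = np.argmin(np.abs(x - (x0 - r)))
    return (uh[i_plus] - uh[i_minus]) / (x[i_plus] - x[i_minus])

rng = np.random.default_rng(0)
x0, p, beta = 1.0, 1.0, 2.0            # illustrative accuracy p and order beta

for h in [1e-2, 1e-3, 1e-4]:
    x = np.arange(0.0, 2.0 * np.pi, h)
    uh = np.sin(x) + h**p * rng.uniform(-1.0, 1.0, x.size)  # u = sin plus O(h^p) error
    narrow = centered_difference(uh, x, x0, h)               # stencil width r = h
    wide = centered_difference(uh, x, x0, h**(p / (beta + 1)))  # r = h^{p/(beta+1)}
    print(h, abs(narrow - np.cos(x0)), abs(wide - np.cos(x0)),
          h**(p * beta / (beta + 1)))  # last column: predicted error scale
\end{verbatim}
For $p=1$ and $\beta=2$, the wide-stencil errors printed by the sketch decrease at roughly the predicted rate $h^{p\beta/(\beta+1)}=h^{2/3}$ of Theorem~\ref{thm:errorGrad}, while the narrow-stencil errors do not decrease.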
Suppose without loss of generality that the local coordinates $(x,y)$ on the tangent plane are chosen so that $\nu=(1,0)$. Since $u^h$ is obtained by solving a monotone approximation of a second-order linear elliptic equation, its truncation error is at most second-order ($\alpha \leq 2$); see~\cite{ObermanSINUM}. Thus we expect to require a search radius \[ r = \mathcal{O}\left(h^{\frac{\alpha}{3(\beta+1)}}\right) \geq \mathcal{O}\left(h^{\frac{2}{3(\beta+1)}}\right) \geq \mathcal{O}\left(h^{2/3}\right). \] We consider four points $(x_i,y_i) \in \mathcal{G}^h$, $i=1, \ldots, 4$ chosen to belong to the four balls of radius $h$ centered at $(r, h)$, $(-r,h)$, $(-r,-h)$ and $(r,-h)$. Note that such points exist from the definition of $h$ and the resulting points satisfy \[ r-h \leq \abs{x_i} \leq r+h, \quad \abs{y_i} \leq 2h. \] Thus the requirements of Hypothesis~\ref{hyp:gradient} are satisfied by these four points. Now we seek an approximation of the first directional derivative of the form \begin{align*} \mathcal{D}_\nu u_0 &= \frac{1}{r}\sum\limits_{i=1}^4 a_i(u_i-u_0) \\ &= \frac{1}{r}\sum\limits_{i=1}^4 a_i\left[\frac{\partial u_0}{\partial x}x_i + \frac{\partial u_0}{\partial y}y_i + \frac{1}{2}\frac{\partial^2u}{\partial x^2}x_i^2 + \frac{\partial^2u}{\partial x\partial y}x_iy_i + \mathcal{O}\left(y_i^2 + x_i^3\right)\right]. \end{align*} We obtain the desired approximation by solving the system of equations \[ \begin{cases} &\sum\limits_{i=1}^4 a_ix_i = r\\ &\sum\limits_{i=1}^4 a_iy_i = 0\\ &\sum\limits_{i=1}^4 a_ix_i^2 = 0\\ &\sum\limits_{i=1}^4 a_ix_iy_i = 0 \end{cases} \] for the coefficients $a_i$. By inspection, given that $x_i = \mathcal{O}(r)$ and $y_i = \mathcal{O}(h)$, the coefficients satisfy $a_i = \mathcal{O}(1)$. Then the resulting error in the approximation is \[\mathcal{O}\left(\frac{h^2}{r} + r^2\right). \] Since the stencil width satisfies $r \geq h^{2/3}$, we have $h \leq r^{3/2}$ and obtain an approximation error of $\mathcal{O}(r^2)$. This corresponds to $\beta = 2$. Using the gradient approximation just described in combination with the numerical method~\eqref{eq:schemeDef}, we can approximate the gradient of solutions to~\eqref{eq:PDE} with an error bound of $\mathcal{O}\left(h^{\frac{2\alpha}{9}}\right)$. Using the highest-order monotone scheme possible for a second-order linear elliptic equation ($\alpha=2$) yields a gradient approximation with accuracy $\mathcal{O}\left(h^{4/9}\right)$. We can also consider higher-order approximations of the gradient ($\beta>2$), which are easily generated by considering higher-order Taylor expansions that rely on a larger number of grid points $x_i$, $i = 1, \ldots, k$. Note that there is \emph{no} monotonicity requirement on the approximation of the gradient terms. In this case, we can achieve error bounds of $\mathcal{O}\left(h^{\frac{\alpha\beta}{3(\beta+1)}}\right)$. It is worth noting that as we take higher-order approximations ($\beta\to\infty$), we find that the best error bound we can achieve is $\mathcal{O}\left(h^{\alpha/3}\right)$. That is, the $L^\infty$ error bound guaranteed by Theorem~\ref{thm:mainconvergence} is also the best possible error bound we can expect on approximations of the solution gradient. \section{Conclusion}\label{sec:conclusion} In this manuscript, we studied convergence rates of monotone finite difference approximations for uniformly elliptic PDEs on compact manifolds. 
When applied to the Dirichlet problem, solutions of monotone finite difference schemes are expected to converge with an error proportional to their formal consistency error. We demonstrated empirically that on manifolds without boundary, convergence rates can be lower than the formal consistency error. We then derived explicit error bounds by carefully constructing barrier functions and exploiting the fact that monotone and proper schemes have a discrete comparison principle. The barrier functions solved a linear elliptic PDE in divergence form with a right-hand side proportional to the formal consistency error of the scheme in the majority of the domain. However, because of the need to satisfy an additional solvability condition, the right-hand side was permitted to become larger in a small cap on the manifold. This resulted in a barrier function that was asymptotically larger than the formal consistency error. Because the scaling of the volume of the small cap was dependent on dimension, we found that specific convergence rates depend on the dimension of the underlying manifold. In particular, the reduction in accuracy becomes worse as the dimension increases. Next, we demonstrated that knowledge of convergence rates can be used to design convergent approximations of the solution gradient through the use of wide finite difference stencils. We described a family of discrete gradients, with the optimal convergence rate in the gradient bounded by the $L^\infty$ convergence rate of the discrete solution. Further work will involve utilizing convergence rates for linear elliptic PDEs to prove error bounds for the solutions of fully nonlinear elliptic PDEs. This would apply, for example, to PDEs arising from solving the Optimal Transport problem on the sphere, which is of particular interest due to its application to optical design problems~\cite{Wang_Reflector} and mesh generation~\cite{Weller_OTonSphere}. The results of this article also highlight the ongoing need to design higher-order numerical methods for elliptic PDEs. \bibliographystyle{plain}
\section{\label{sec1}Introduction} Optical absorption in semiconductors has received increasing attention due to its broad range of applications, including solar cells\cite{Zhou2016, Ma2018, Wang2020a}, photodetection\cite{Tagliabue2018, Guo2020}, and biosensing\cite{Rodrigo2015, Tan2018, Ren2020, Zhao2020}, to name a few. In particular, guaranteeing high absorption in optically thin semiconductors is the key to reducing the carrier extraction time and enhancing the device performance. However, there is a ceiling on absorption ($50\%$) in any ultrathin free-standing optical film\cite{Kim2016}. To break this upper limit, a variety of strategies with the aid of metals, including the Dallenbach\cite{Dallenbach1938}, the Salisbury\cite{Salisbury1952}, and the metasurface perfect absorbers\cite{Landy2008, Liu2010, Xiong2017, Luo2018, Cheng2020} (based on periodic subwavelength resonant unit cells), have been proposed, in which a commonly used metal back mirror provides the required one-port configuration for perfect absorption, but at the cost of a broadband reflection outside the absorption band\cite{Alaee2017}. More importantly, the substantial optical absorption occurring in the metal components leads to significant photothermal conversion; namely, Joule heat rather than photo-induced carriers is generated, which is not favored in most optical and optoelectronic devices. In recent years, the so-called all-dielectric meta-optics that utilizes semiconductors (Si, Ge, GaAs, etc.) with the Mie resonances excited in their patterned subwavelength resonators has emerged as a new milestone in the field of nanophotonics and metasurfaces\cite{Kuznetsov2016, Jahani2016, Decker2016, Koshelev2018, Li2019, Huang2021}. The high refractive index and low loss characteristics make these materials great candidates for the localization of light. Under these circumstances, achieving high absorption by directly using the Mie resonances and further overlapping these modes has been predicted in photonic crystals and metasurfaces, without the aid of metals\cite{Piper2014, Ming2017, Tian2018, Tian2020, Mitrofanov2020, Hale2020, Fan2021}. The concept of degenerate critical coupling is developed with quite strict criteria: 1) the resonator structure should support two modes of odd and even symmetries that are degenerate in frequency, $\omega_{1}=\omega_{2}$; 2) the radiation rate of the two modes should exactly match the dissipative loss rate of the absorbing material at the resonant wavelength, $\gamma_{1}=\gamma_{2}=\delta$. Both modes are critically coupled under such conditions, which renders the absorption perfect in a two-port configuration. As a matter of fact, such degenerate requirements are not easy to fulfill, and the perfect absorption in some previous works indeed employs overlapping of multiple Mie resonances. In this work, we revisit the concept of degenerate critical coupling and demonstrate perfect absorption in ultrathin semiconductor metasurfaces based on free-standing GaAs nanocylinders in the near-infrared. The proposed structure is a symmetric two-mode, two-port system with a symmetry plane lying in the center of the nanocylinders perpendicular to the cylindrical axis, which is obviously different from the conventional absorbers with a metal reflector mirror.
Two modes, i.e., the electric dipole and magnetic dipole, possess opposite symmetries and reach their respective critical coupling state where the radiation rate is equal to the dissipative loss rate of GaAs, rendering their maximum contribution of $50\%$ at a specific radius (height) of the nanocylinders, and finally achieve the perfect absorption as a whole. The temporal coupled-mode theory (TCMT) together with rigorous multipole analysis is applied to separate the contributions of the two modes to the perfect absorption. This work lays out the foundation for the next generation high-efficiency optical and optoelectronic devices, in which the bulky semiconductors can be replaced with ultra thin perfect absorption semiconductor metasurfaces. \section{\label{sec2}Theoretical formalism of degenerate critical coupling} TCMT is used to derive the degenerate critical coupling conditions for perfect absorption. Here we consider the input-output behaviors of a mirror-symmetric resonator structure which supports two modes of opposite symmetries and couples with the outside through two identical ports. If there is no dissipative loss in the system, the dynamical equations can be given as\cite{Piper2014, Suh2004}, \begin{eqnarray} \frac{da}{dt}&=&(i\omega_{0}-\gamma)a+D^{T}|s_{+}\rangle,\label{eq1} \\ |s_{-}\rangle&=&C|s_{+}\rangle+Da,\label{eq2} \end{eqnarray} where $a$ is a vector (rather than a number in single-mode system) which represents the resonance amplitudes, with $|a_{j}|^2$ corresponding to the energy stored in the $j$th mode. $\omega_0$ and $\gamma$ are real diagonal matrices describing the resonance frequency and radiation loss, respectively. $|s_{+}\rangle$ and $|s_{-}\rangle$ are the amplitudes of incoming and outgoing waves. $C$ is the scattering matrix of the direct process, which describes the transmission and reflection between the two ports in the absence of the resonator structure, and $D$ is the coupling matrix account for coupling between the ports and the modes, with $D^{\dagger}D=2\gamma$ and $CD^{\ast}=-D$, due to the time-reversal symmetry and energy conservation arguments. For the case where the incoming wave with unit amplitude is only incident from a single port, the energy stored in the resonator structure can be derived from Eqs. (\ref{eq1}) and (\ref{eq2}), \begin{eqnarray} |a|^{2}=\sum_{j=1}^{2}\frac{\gamma_{j}}{(\omega-\omega_{j})^2+\gamma_{j}^{2}}.\label{eq3} \end{eqnarray} When a dissipative loss is loaded to the above system and the basic symmetry of the resonator structure is unchanged, the loss of each mode remains independent, thus the Eq. (\ref{eq1}) can be simply modified to $da/dt=(i\omega_{0}-\gamma-\delta)a+D^{T}|s_{+}\rangle$, by adding a real diagonal matrix $\delta$, which gives the dissipative loss rate. Then the stored energy $|a|^{2}$ can be updated by amending $\gamma^{2}$ in the denominator with $(\gamma+\delta)^2$, and the absorption in the system is expressed as \begin{eqnarray} A=\sum_{j=1}^{2}\frac{2\delta_{j}\gamma_{j}}{(\omega-\omega_{j})^2+(\gamma_{j}+\delta_{j})^{2}}.\label{eq4} \end{eqnarray} It can be seen from Eq. (\ref{eq4}) that the total absorption is the sum of contributions of each mode, which is the result of their opposite symmetry properties. Each of the two terms in Eq. 
(\ref{eq4}) will achieve a theoretical maximum of $50\%$ when the radiation rate of the mode exactly matches the dissipative loss rate of the material at the resonant wavelength, $\omega=\omega_{j}$, $\gamma_{j}=\delta_{j}$, which is the classic critical coupling condition for a single-mode, two-port system\cite{Wang2019, Xiao2020, Wang2020}. If the resonant frequencies of the two modes are far apart, $|\omega_{1}-\omega_{2}|\gg\gamma_{j}$, the total absorption is mainly contributed by only one mode and is again limited to $50\%$. However, if the two modes are degenerate in frequency, $\omega=\omega_{1}=\omega_{2}$, and simultaneously the radiation rate equals the dissipative loss rate, $\gamma_{1}=\delta_{1}$, $\gamma_{2}=\delta_{2}$, the entire system reaches the so-called degenerate critical coupling state, and perfect absorption of $100\%$ is achieved. It must be pointed out that the dissipative loss rate is an intrinsic property of the absorbing material, which imposes the essential constraint $\delta_{1}=\delta_{2}=\delta$ at the same wavelength. In combination with the above analysis, the requirements for the degenerate critical coupling are quite strict, i.e., $\omega=\omega_{1}=\omega_{2}$, and $\gamma_{1}=\gamma_{2}=\delta$, which have been excessively relaxed in some previous works. \section{\label{sec3}Numerical results and discussions} As an illustration of the theoretical concept of degenerate critical coupling presented above, we consider a two-port system of the free-standing GaAs metasurface absorber, which comprises a GaAs nanocylinder in each unit cell, as shown in Fig. \ref{fig1}. The free-standing GaAs nanocylinders show mirror symmetry about the $x$-$y$ plane lying in the center of the nanocylinders. The proposed structure is designed so that it supports two resonance modes, an electric dipole and a magnetic dipole, when the structure is illuminated by $x$-polarized incident plane waves propagating along the $-z$ axis. We utilize the finite-difference time-domain (FDTD) method to numerically simulate the optical responses of the proposed structure. In simulations, the GaAs nanocylinders have periodicity $p=650$ nm, radius $r$, and height $h$, and the wavelength-dependent parameters of the GaAs material (Palik data) are used\cite{Palik1998}. The spectral range in the simulation is from 700 nm to 1100 nm in the near infrared. \begin{figure*}[htbp] \centering \includegraphics[scale=0.60]{fig1.eps} \caption{\label{fig1} Schematic of the proposed ultrathin semiconductor metasurface absorber comprising the free-standing GaAs nanocylinders with the radius $r$, the height $h$, and the periodicity $p$.} \end{figure*} To achieve the perfect absorption in the proposed structure, we conduct a search in the three-dimensional space of parameters including the wavelength $\lambda$, the radius $r$, and the height $h$ of the GaAs nanocylinders. Fig. \ref{fig2}(a) shows the simulated absorption of the structure as a function of wavelength and the radius of the GaAs nanocylinders with a fixed height $h= 140$ nm, while the electric dipole and magnetic dipole modes are marked by dashed lines. As the radius increases, it is observed that the absorption of the structure for each single mode never exceeds $50\%$, and the absorption is considerably enhanced when the two modes start to overlap.
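This enhancement is precisely what the coupled-mode expression of Eq. (\ref{eq4}) predicts. As a minimal numerical sketch (written in Python, with illustrative dimensionless rates rather than the fitted parameters of the GaAs structure), one can evaluate Eq. (\ref{eq4}) directly and verify that the peak absorption reaches unity only when the two modes are degenerate and critically coupled, whereas two well-separated, critically coupled modes each contribute at most $50\%$.
\begin{verbatim}
# Minimal sketch of the two-mode TCMT absorption of Eq. (4); all rates are
# illustrative dimensionless values, not fitted parameters of the structure.
import numpy as np

def absorption(w, w_res, gamma, delta):
    # A(w) = sum_j 2*delta_j*gamma_j / ((w - w_j)^2 + (gamma_j + delta_j)^2)
    w = np.atleast_1d(w)[:, None]
    return np.sum(2.0 * delta * gamma / ((w - w_res)**2 + (gamma + delta)**2),
                  axis=1)

w = np.linspace(0.9, 1.1, 2001)
g = 0.01 * np.ones(2)        # radiation rates; critical coupling means delta = g

# degenerate critical coupling: w1 = w2 and gamma_j = delta_j -> peak A = 1
A_degenerate = absorption(w, np.array([1.0, 1.0]), g, g)
# well-separated modes: each critically coupled mode contributes at most 1/2
A_separated = absorption(w, np.array([0.95, 1.05]), g, g)

print(A_degenerate.max())    # approximately 1.0
print(A_separated.max())     # approximately 0.5
\end{verbatim}
The fitted rates reported below for the GaAs metasurface correspond exactly to this degenerate, critically coupled situation.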
Once the two resonance modes cross at the wavelength of 878.236 nm, near the bandgap edge of GaAs where the intrinsic absorption of GaAs approaches zero, the perfect absorption is achieved. When the radius continues to increase, leaving the degenerate critical coupling position, the absorption shows a downward trend. In Fig. \ref{fig2}(b), the peak absorption of $99.067\%$ occurs at 878.236 nm for the nanocylinder radius $r= 170$ nm and the height $h=140$ nm, while both transmission and reflection are significantly suppressed near this wavelength of the mode crossing. Based on the TCMT, the theoretical absorption spectrum agrees excellently with the simulated absorption spectrum in the vicinity of resonance. By fitting the absorption spectrum using Eq. (\ref{eq4}), the radiation rate and dissipative loss rate of the structure are obtained as $\gamma_{1}=\gamma_{2}=\delta_{1}=\delta_{2}=47.04\,$THz at the resonance wavelength. Hence the condition of degenerate critical coupling is exactly fulfilled in this case, accounting for the perfect absorption of the structure. To further unveil the contributions of the electric dipole and magnetic dipole modes to the perfect absorption under the degenerate coupling condition, the multipole decompositions of the scattering cross sections of the GaAs arrays are conducted, where the multipolar contributions from electric dipole, magnetic dipole, toroidal dipole, electric quadrupole, and magnetic quadrupole are shown in Fig. \ref{fig3}(a). The electric dipole and magnetic dipole modes dominate at 878.236 nm and spectrally oscillate in phase with each other, while the contributions of higher-order resonances including toroidal dipole, electric quadrupole, and magnetic quadrupole can be neglected within this wavelength region. According to the corresponding field distributions of the resonances in the $x$-$z$ plane inside each unit cell in Fig. \ref{fig3}(b), the electric dipole resonance is excited along the $x$ direction, and the excitation of the magnetic dipole resonance is in the $y$ direction and forms a displacement current loop in the $x$-$z$ plane, satisfying the opposite symmetry requirement of the TCMT. Through the degenerate critical coupling of the two modes with opposite symmetry, the absorption of the structure is significantly enhanced at the wavelength of mode crossing. \begin{figure}[htbp] \centering \includegraphics[scale=0.50]{fig2.eps} \caption{\label{fig2} (a) The simulated absorption spectra of the proposed structure as a function of wavelength and radius of the GaAs nanocylinders, with the nanocylinder height $h=140$ nm. The electric dipole and magnetic dipole modes are marked by dashed lines. (b) The simulated transmission, reflection, and absorption spectra near the mode-crossing wavelength of 878.236 nm, and the theoretical absorption spectrum is denoted by the dashed line. } \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.50]{fig3.eps} \caption{\label{fig3} (a) The multipole decomposition of the scattering cross sections, including the multipolar contributions from electric dipole (ED), magnetic dipole (MD), toroidal dipole (TD), electric quadrupole (EQ), and magnetic quadrupole (MQ). (b) The corresponding electric and magnetic field distributions in the $x$-$z$ plane at the resonance wavelength of 878.236 nm, and the arrows indicate the displacement currents.
} \end{figure} When searching for the optimal geometry for the perfect absorption, the absorption spectra as a function of the GaAs nanocylinder height $h$ under normal incidence are also illustrated in Fig. \ref{fig4}, in addition to the dependence of absorption on the nanocylinder radius $r$ presented above. Similarly to the case in Fig. \ref{fig2}, the electric dipole and magnetic dipole modes of opposite symmetry are simultaneously excited within the wavelength range of interest. It can be clearly observed that as the height $h$ increases with fixed $r=170$ nm, the electric dipole and magnetic dipole modes experience wavelength redshifts, leading to a change from separation to crossing and finally to separation again, accompanied by different levels of absorption enhancement. At the point of the mode crossing of 878.236 nm and $h=140$ nm, the conditions for degenerate critical coupling can be fulfilled, giving rise to the perfect absorption of the structure. Thus the geometry optimization and in-plane symmetry control of the absorption structures provide one simple and general way to achieve and fine-tune the degenerate condition for perfect absorption. \begin{figure}[htbp] \centering \includegraphics[scale=0.50]{fig4.eps} \caption{\label{fig4} The absorption spectra of the proposed structure as a function of wavelength and the height of the GaAs nanocylinders, with the nanocylinder radius $r=170$ nm. The electric dipole and magnetic dipole modes are marked by dashed lines.} \end{figure} The dependence of the absorption performance of the proposed structure on the incident polarization and angle is also investigated. Fig. \ref{fig5} illustrates the simulated absorption spectra as a function of incident angle under TM and TE polarizations when the nanocylinder radius and height are fixed as $r= 170$ nm and $h=140$ nm. Comparing Fig. \ref{fig5}(a) and \ref{fig5}(b), it can be seen that the absorption spectra are almost insensitive to the polarization in the vicinity of wavelength 878.236 nm, showing high tolerance to the polarization. In particular, the high absorption arising from the crossing electric dipole and magnetic dipole modes near this wavelength can be maintained within an incident angle range of $25^{\circ}$. \begin{figure}[htbp] \centering \includegraphics[scale=0.50]{fig5.eps} \caption{\label{fig5} The absorption spectra of the proposed structure with the GaAs nanocylinder radius $r=170$ nm and height $h=140$ nm under (a) TM and (b) TE polarization.} \end{figure} \section{\label{sec4}Conclusions} In conclusion, a kind of ultrathin semiconductor metasurface operating in the near infrared is theoretically and numerically demonstrated for perfect absorption based on the concept of degenerate critical coupling. In the two-port system comprising the free-standing GaAs nanocylinders, the simultaneous excitation of the electric dipole and magnetic dipole modes with opposite symmetry leads to the enhanced absorption. Through geometry optimization, such as adjusting the radius and height of the GaAs nanocylinders, the electric dipole and magnetic dipole modes reach their respective critical coupling state at the same resonance wavelength. At the point of mode crossing, the strict condition for degenerate critical coupling can be fulfilled and the perfect absorption is achieved in the exemplary structure, breaking the limit of $50\%$ absorption in free-standing films.
The TCMT analysis and the multipole decompositions are also carried out, verifying the mechanism of the degenerate critical coupling. In addition, the proposed structure shows polarization-independent and angle-insensitive properties. This work provides guidance for ultrathin perfect-absorption semiconductor metasurfaces that overcome the absorption limitation of free-standing films and the various drawbacks of metal-based solutions, and holds great potential for future high-efficiency optical and optoelectronic devices. \begin{acknowledgments} This work is supported by the National Natural Science Foundation of China (Grants No. 11847132, No. 11947065, No. 61901164, and No. 12004084), the Natural Science Foundation of Jiangxi Province (Grant No. 20202BAB211007), the Interdisciplinary Innovation Fund of Nanchang University (Grant No. 2019-9166-27060003), the Natural Science Research Project of Guizhou Minzu University (Grant No. GZMU[2019]YB22), and the China Scholarship Council (Grant No. 202008420045). The authors would also like to thank Dr. S. Li for her guidance on the effective multipole expansion. W.C. and X.W. contributed equally to this work. \end{acknowledgments}
\section{Introduction} Maser lines from different molecular species, including water, hydroxyl, and methanol, are common observational phenomena associated with massive star forming regions. The relation between different types of masers found around young stellar objects may yield important information about the evolutionary state of regions (e.g., Szymczak \& Gerard 2004; Breen et al. 2010). Moreover, excitation of different maser species may occur within overlapping ranges of physical conditions, thus, masers of different species originating from the same volume of gas can help narrow the physical conditions of specific regions (e.g., Edris et al. 2005; see also Fish 2007). Masers from hydroxyl (OH) transitions are the prototypical example of astrophysical masers; indeed, astrophysical masers were first detected in OH (Weaver et al. 1965). The ground state lines at 1612, 1665, 1667, and 1720$\,$MHz are the most commonly observed OH masers, however, masers from a number of exited states have also been detected (e.g., Baudry \& Desmurs 2002). OH masers have been found in a variety of environments, from galactic star forming regions (e.g., Argon et al. 2000; Fish et al. 2005) and supernova remnants (e.g., Brogan et al. 2000), to extragalactic environments (e.g., Darling \& Giovanelli 2002; Baan et al. 1982). In the case of massive star forming regions, many OH masers are found associated with compact H{$\,$\small II} regions (e.g., Fish et al. 2005), however a significant fraction of OH masers are also associated with earlier phases of massive star formation (e.g., Forster \& Caswell 2000). One maser line that is studied in massive star forming regions corresponds to the F = 3$^-$ $-$ 3$^+$ hyperfine transition of the $^2\Pi_{3/2}$ (J = 5/2) excited state of OH at a frequency of 6.035$\,$GHz (Knowles et al. 1976). First detected by Yen et al. (1969) toward W3(OH), maser lines from this transition have been found associated with massive star formation in galactic and extragalactic environments (Caswell \& Vaile 1995; Caswell 1995). The 6.035$\,$GHz OH line shows a large flux density range in galactic maser sources, from more than 100$\,$Jy to $\sim 0.1\,$Jy (e.g., Baudry et al. 1997). The linewidths are narrow; for example, most of the 6.035$\,$GHz OH masers in the sample of Baudry et al. (1997) were more narrow than 0.35$\,$\kms~(some were narrower than 0.18\kms) with no correlation between linewidth and peak intensity. Many 6.035$\,$GHz maser lines show a high fraction of circular polarization consistent with Zeeman pairs, allowing for the measurement of magnetic field strengths (e.g., Caswell \& Vaile 1995; Caswell et al. 2009; Fish \& Sjouwerman 2010). Most models agree that gas densities $\ga 10^7$~cm$^{-3}$ are necessary to enable the population inversion (e.g., Baudry et al.~1997, Cragg et al.~2002). \newpage The 6.035$\,$GHz OH masers often exhibit variability on timescales of months to years. Nevertheless, some maser sources have shown relatively constant spectral profiles, in particular, the OH maser in W3(OH) showed almost no variability over two decades (Baudry et al. 1997). A remarkable characteristic of 6.035$\,$GHz OH masers is their association with 6.7$\,$GHz CH$_3$OH masers. For example, Caswell (1997) reported interferometric observations of a sample of 30 massive star forming regions and found multiple examples of groups of 6.035$\,$GHz OH and 6.7$\,$GHz CH$_3$OH masers coincident within $\sim 1$\arcsec~(see also Etoka et al. 2005). 
The association between these two maser species is particularly interesting in the context of the discovery of periodic CH$_3$OH maser flares in massive star forming regions (Goedhart et al. 2004, 2009; van der Walt et al. 2009; van der Walt 2011; Szymczak et al. 2011). Among the periodic maser flare sources known, IRAS$\,$18566+0408 (a massive star forming region at a distance of 6.7$\,$kpc) is unique because it exhibits periodic flares of 6.7$\,$GHz CH$_{3}$OH {\it and} 6$\,$cm H$_{2}$CO masers (Araya et al. 2010). Here we present the results of monitoring observations of the 6.035$\,$GHz OH maser in IRAS$\,$18566+0408. \section{Observations and Data Reduction} The observations were conducted with the 305$\,$m Arecibo Telescope\footnote{The Arecibo Observatory is operated by SRI International under a cooperative agreement with the National Science Foundation (AST-1100968), and in alliance with Ana G. M\'endez-Universidad Metropolitana, and the Universities Space Research Association.} in Puerto Rico between October 2008 and January 2010. We monitored the 6.035$\,$GHz main excited-state line of hydroxyl (OH; $\nu_0 = 6035.0932\,$MHz, $^2\Pi_{3/2}$, J = 5/2, F = 3$^-$ $-$ 3$^+$)\footnote{JPL spectra line catalog (Pickett et al. 1998) accessed through the database for astronomical spectroscopy (splatalogue.net).} toward the young massive stellar object IRAS$\,$18566+0408 (pointing position, R.A. = 18\h59\m09.98\s, Decl. = +04\d12\am15.6\as, J2000) for a total of 24 epochs. We used the WAPP spectrometer, two orthogonal linear polarization setup, 9-level sampling, 3.125$\,$MHz (155\kms) bandwidth, and 2048 channels per polarization, resulting in a final channel separation of 1.53$\,$kHz (0.076\kms). We observed in position-switching (on-off) mode, with integration times of 1 to 5 minutes on-source per run. The reference (off) position was selected to cover the same hour-angle and declination as the on-source observations, with angular offsets of 2 to 6 minutes East from the Right Ascension of the source. The center bandpass LSR velocity was set to 85\kms. Data reduction was done in IDL using specialized reduction routines provided by the Arecibo Observatory. After checking for consistency, we averaged the polarizations and subtracted linear baselines. The spectra were imported to CLASS\footnote{CLASS is part of the GILDAS software package developed by IRAM.} to measure line parameters and for further analysis. The cryogenics system of the C-Band High receiver of the Arecibo Telescope was not always available, thus, the system temperatures ranged from $\sim$30$\,$K (when the cryogenics were operational) to more than 200$\,$K (when the cryogenics were turned off). This resulted in rms noise in the spectra of $\sim 20\,$mJy (with cryogenics) to more than 100$\,$mJy (without cryogenics). The calibrator B1857+129 was observed in every run for pointing and system checking (1$\,$min on-source observations). The pointing was better than 15\arcsec~(typically better than 10\arcsec). We measured a telescope beam size of $\sim 44$\arcsec~(at 6.6$\,$GHz), and a gain of $\sim6\,$K$\,$Jy$^{-1}$. We also observed the 6.035$\,$GHz OH maser source G34.26+0.15 (pointing position, R.A. = 18\h53\m18.5\s, Decl. = +01\d14\am59\as, J2000; 40\kms~LSR central bandpass velocity) in most of the runs with the same spectral setup as the IRAS$\,$18566+0408 observations. We observed G34.26+0.15 for system checking; in particular, as a positive control for detection of OH masers with the warm C-Band High receiver. 
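As a quick consistency check on the spectral setup quoted above (simple Doppler arithmetic only; not part of the observing or reduction pipeline), the quoted channel separation and velocity coverage can be reproduced with the following minimal Python sketch.
\begin{verbatim}
# Minimal sketch: reproduce the quoted channel separation and velocity
# coverage of the WAPP setup at the 6035.0932 MHz OH rest frequency.
bandwidth_hz = 3.125e6
n_channels = 2048
rest_freq_hz = 6035.0932e6
c_km_s = 299792.458

channel_hz = bandwidth_hz / n_channels
channel_kms = c_km_s * channel_hz / rest_freq_hz
coverage_kms = c_km_s * bandwidth_hz / rest_freq_hz

print(round(channel_hz / 1e3, 2), "kHz per channel")    # 1.53 kHz
print(round(channel_kms, 3), "km/s per channel")        # 0.076 km/s
print(round(coverage_kms), "km/s total coverage")       # 155 km/s
\end{verbatim}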
We detected several OH masers in G34.26+0.15 (see Table~1). We also found weak ($\sim 50\,$mJy) broad ($FWHM \ga 10\,\text{km s}^{-1}$) OH absorption in G34.26+0.15. The absorption line was almost undetectable in the unsmoothed spectra, but after substantial smoothing (channel width of $\sim$1\kms), we were able to detect the absorption in most runs. After averaging all data and smoothing to a channel width of 1.2\kms~(6$\,$mJy rms), the line parameters of the absorption line were $S_\nu = -65\,$mJy, $V_{LSR} = 50$\kms, $FWHM = 18$\kms. Absorption overlapping with 6.035$\,$GHz maser lines has been detected toward other massive star forming regions (e.g., Baudry et al. 1997). The upper panel of Fig.~1. shows a typical spectrum of the 6.035$\,$GHz OH maser in G34.26+0.15, obtained on 2008 November 18. None of the maser components detected in G34.26+0.15 showed clear variability (see Table~1). As an example, the light curves of the two brightest components are shown in the lower panel of Fig.~1. For both velocity components, $\chi$-squared fits (with weighting for the uncertainty of each point) show that the individual data points are consistent within 3$\sigma$ of the linear fits. In other words, we did not detect significant short-term varibility (flares) in any of the G34.26+0.15 maser components. \newpage As part of our monitoring program, we also observed the OH transitions at 6.016$\,$GHz, 6.030$\,$GHz, and 6.049$\,$GHz with the same spectral configuration of the 6.035$\,$GHz observations. No lines were detected toward IRAS$\,$18566+0408 at the same rms levels of the 6.035$\,$GHz data (see Table~2). \section{Results} We detected 6.035$\,$GHz OH maser emission in IRAS$\,$18566+0408 in four out of 24 observational epochs. This is the first detection of 6.035$\,$GHz OH maser emission in IRAS$\,$18566 +0408. We list in Table~2 the rms of each run, and the line parameters of the detections. The maser at 85.8\kms~was detected at all four epochs. At the first of the four epochs (2009 March 11), a second maser was detected at 89.0\kms. The flux densities of the 85.8\kms~maser in the two orthogonally linear polarizations were consistent within 2.6$\,\sigma$ in all runs, whereas the 89.0\kms~maser showed consistent flux density between the two polarizations within 3.6$\,\sigma$. There could be some linearly polarized emission at $\la 3\,\sigma$ levels; our data are not suitable for a more precise determination\footnote{There are examples of sources with significant 6.035$\,$GHz OH linear polarization (e.g., Knowles et al. 1976).}. The telescope was not configured to record all four Stokes parameters; hence, it is not possible to extract information about the degree of circular polarization from the data. Figure~2 shows the spectra obtained on 2009 March 11 (detection of two lines), 2009 August 15 (non-detection), and 2009 November 07 (detection of a single line). It is clear from the figure that the flux density of the maser varies with time; specifically, we detected two flare events. We found no significant change in linewidth or peak velocity of the 85.8\kms~maser. \vspace{-0.5cm} \section{Discussion} We show in the upper panel of Fig.~3 the light curve of the 6.035$\,$GHz OH maser component at 85.8\kms, including the four detections and all 3$\,\sigma$ upper limits. We detected two flare events: the first a single epoch detection in 2009 March, and the second a series of three detections from 2009 September through 2009 November. 
Given the null detections before and after each of these flares obtained when the cryogenics were operational (low rms), we can restrict the two events to a maximum duration of approximately 8 and 5 months, respectively. As mentioned in the introduction, IRAS$\,$18566+0408 is the only region known to exhibit quasi--periodic 6$\,$cm H$_2$CO and 6.7$\,$GHz CH$_3$OH flares. In the lower three panels of Fig.~3 we show the light curves of two 6.7$\,$GHz CH$_{3}$OH maser components and the light curve of the 6$\,$cm H$_2$CO maser (data from Araya et al. 2010, and {\it in prep.}). While the 87.8\kms~CH$_3$OH maser component shows very similar flares to those of the H$_2$CO maser, the flares of the CH$_3$OH maser component at 86.4\kms~are not as well-defined, and the flares may have a delay of 1 to 3 months with respect to H$_2$CO. We note that in addition to the two (weak) flare events of the 86.4\kms~CH$_3$OH maser shown in Fig.~3, two other flare events of this velocity component have been detected (Araya et al. 2010). Thus, there is evidence that the 86.4\kms~CH$_3$OH maser shows quasi-periodic flares, although not as clearly defined or as regular as the H$_2$CO and 87.8\kms~CH$_3$OH maser lines. As seen in Fig.~3, the 85.8\kms~OH maser reported in this work has a similar variability behavior to the 86.4\kms~CH$_3$OH maser. Thus, the light curves of the two CH$_3$OH masers associate the 86.4\kms~maser with the OH, and the 87.8\kms~maser with the H$_2$CO. The velocity difference between the two methanol masers is quite small (1.4\kms) but they are found at opposite ends of the CH$_3$OH maser arc imaged by Araya et al. (2010), with a projected separation of $\sim 6,000\,$AU along the arc. Given the similar variability profiles and LSR velocities, both OH and 86.4\kms~CH$_3$OH masers could originate from the same volume of gas. Based on the data reported here, we cannot reliably measure the time delay between the peak of the OH and H$_2$CO flares, but we can rule out simultaneous flares (see Fig.~3). Our data are consistent with a delay of 1 to 3 months between the OH and H$_2$CO flares just as observed between the H$_2$CO and 86.4\kms~CH$_3$OH maser. Interferometric observations and a longer monitoring program are needed to confirm the association between the 86.4\kms~CH$_3$OH and 85.8\kms~OH masers. High angular resolution observations have shown an association between 6.035$\,$GHz OH and 6.7$\,$GHz CH$_3$OH masers. For example, Caswell (1997) found that both maser species often show emission at similar velocities and co-exist in elongated structures with projected sizes of $\sim $2,000 to 6,000$\,$AU. The discovery of 6.035$\,$GHz OH flares and possible correlated variability with 6.7$\,$GHz masers brings a new (time-dependent) aspect to the relation between these maser species. The physical mechanism causing the periodic flares of CH$_3$OH masers detected in a number of sources (e.g., Goedhart et al. 2004) is still unclear. However, van der Walt (2011; see also van der Walt et al. 2009) reproduced remarkably well the flare profiles observed toward G9.62+0.20E with a colliding wind binary (CWB) model, in which the flares are caused by a change in the background radio continuum modulated by the orbital parameters of a young massive binary. Based only on the detection of 6.035$\,$GHz OH flares reported here, we cannot address whether the CWB model is applicable in the case of IRAS$\,$18566+0408. Nevertheless, as discussed by Araya et al. 
(2010), the H$_2$CO and CH$_3$OH maser flares in IRAS$\,$18566+0408 are likely caused by a change in the maser gains and not by a change in the background continuum. In this scenario, the maser gains are modulated by some periodic phenomenon external to the maser regions (possibilities include periodic accretion events onto a central protobinary system). If the CH$_3$OH flares are caused by a change in the maser gain, then correlated variability with OH masers would indicate a similar excitation mechanism for 6.035$\,$GHz OH and 6.7$\,$GHz CH$_3$OH masers. Indeed, theoretical models have shown that the excitation mechanism of class II CH$_3$OH masers is infrared pumping (e.g., Cragg et al. 2005), and that the population inversion of 6.035$\,$GHz OH masers is also predominantly due to infrared radiation (Gray 2001; see also Pihlstr\"om et al. 2008, Baudry et al. 1997)\footnote{However, collisional excitation may also play a prominent role, as indicated by Cragg et al. (2002).}. The possibility that the maser flares are caused by gain variability due to changes in the infrared radiation field can qualitatively explain some of the differences between the various light curves. For example, a 70-day delay of the OH flare following the H$_2$CO flare could result from the time required for pumping photons to propagate between the H$_2$CO and the OH maser regions. A 70 light-day distance corresponds to $\sim$12,000$\,$AU, which is of the same order as the 6,000$\,$AU projected size of the CH$_3$OH maser arc reported by Araya et al. (2010). If the 87.8\kms~CH$_3$OH and the H$_2$CO maser regions are closer to the central source of infrared field variability, then an exponential amplification of a change in the infrared pumping rate would result in a clear flare signature. In contrast, maser regions located at greater distances would show a less clear flare signature due to geometrical dilution of the variable radiation field, optical depth effects, and a greater relative contribution of other sources of pumping photons. It is worth mentioning that the models of Cragg et al.~(2002) predict that the 6.035$\,$GHz OH masers appear in zones of high density and high OH column density, but relatively low gas temperature. In fact, at kinetic temperatures $>70\,$K, the line would eventually be quenched. This suggests that H$_2$CO masers may occur in warmer gas closer to the central energy source than the excited OH masers. However, the exact circumstances will become clear only after interferometric mapping of the OH maser and the dense molecular gas is conducted. Interferometric observations are also required to investigate the relation between the masers discussed here and ground state OH emission in the region, which has been detected with single-dish telescopes (Szymczak \& G\'erard 2004). For example, Edris et al. (2007) mapped the 1665 and 1667$\,$MHz masers in this source using the NRAO Green Bank Telescope (GBT; 8\arcmin~beam size). Although the LSR velocities of the OH ground state masers are similar to those of the CH$_3$OH, H$_2$CO, and 6.035$\,$GHz OH masers, the positions of the ground state OH masers obtained with the GBT do not correspond to the H$_2$CO maser position within the quoted errors (Edris et al. 2007). \section{Summary} Using the 305$\,$m Arecibo Telescope in Puerto Rico, we detected two flare events of the 6.035$\,$GHz OH maser toward the massive star forming region IRAS$\,$18566+0408. This region is the only known source of periodic H$_2$CO and CH$_3$OH maser flares.
Despite poor sampling of the OH light curve during the flares, our observations clearly show that the peaks of the OH flares were not simultaneous with the H$_2$CO peaks, but rather had delays of approximately a month or more. In contrast, the peaks of the OH flares appear to be correlated with a 6.7$\,$GHz CH$_3$OH maser at a corresponding LSR velocity. Our results strengthen the association between 6.035$\,$GHz OH and 6.7$\,$GHz CH$_3$OH masers found in previous observational work and are consistent with a similar inversion mechanism for these maser species (radiative excitation). The delay between the H$_2$CO and OH flares might be caused by the difference in arrival times of pumping photons between the two maser regions. Consequently, interferometric observations will be the natural next step to pinpoint the exact location of the OH masers. A more extended monitoring program is also needed to confirm the association between the OH and CH$_3$OH masers. \acknowledgments We thank an anonymous referee for comments that significantly improved this manuscript. E.D.A. acknowledges partial support from the WIU Foundation and the Office of Sponsored Projects (faculty proposal planning stipend and summer stipend). P.H. acknowledges partial support from NSF grant AST-0908901. S.K. acknowledges support from DGAPA grant IN-101310, UNAM.
\section{Introduction}\label{sec1} The following model for a random walk in a random environment can be found in the physics literature; see Anshelevic and Vologodskii (\citeyear{AnsVol1981}), Alexander \textit{et al.} (\citeyear{Aleetal1981}), Kawazu and Kesten (\citeyear{KawKes1984}). Let $ \{\lambda_j;j\in\mathbb{Z}\} $ be a family of positive i.i.d. random variables and $ \mathcal{A} $ the $ \sigma$-algebra generated by those random variables. Let $ \{X(t);t\geq0\} $ be a continuous-time random walk on $ \mathbb {Z} $ having the following asymptotic transition rates for $ h\rightarrow0$: \begin{eqnarray} \label{Formel1} \mathbb{P} \bigl(X(t+h)=j+1|X(t)=j,\mathcal{A}\bigr) &=& \lambda_jh+\mathrm{o}(h),\\ \mathbb{P} \bigl(X(t+h)=j-1|X(t)=j,\mathcal{A}\bigr) &=& \lambda_{j-1}h+\mathrm{o}(h),\\ \mathbb{P} \bigl(X(t+h)=j|X(t)=j,\mathcal{A}\bigr) &=& 1-(\lambda_j+\lambda_{j-1})h+\mathrm{o}(h). \end{eqnarray} In other words, the process $ \{X(t);t\geq0\} $ is a birth--death process with possibly negative population size, where, for a population with $ j $ individuals, birth occurs at rate $ \lambda_j $ and death at rate $ \lambda_{j-1} $. We will assume that the process $ \{X(t);t\geq0\} $ starts at zero at time zero. The resulting process is symmetric, in the sense that the permeability of the edge connecting the vertices $ j $ and $ j+1 $ does not depend on the direction of the motion. This physical background motivates the name `random environment' for the sequence $ \{\lambda _j;j\in\mathbb{Z}\} $.\ In what follows, we denote the distribution of the random environment on the sequence space by $ P_\lambda$. The following convergence results are described in Kawazu and Kesten (\citeyear{KawKes1984}). \begin{kk1*} If $ c:=\mathbb{E} [\lambda_0^{-1}]<\infty$, then for $ P_\lambda$-almost all environments, the distributions (after conditioning on the environment) of the processes \[ X_n(t):=\frac{1}{n}X(n^2t),\qquad t\geq0, \] converge weakly with respect to the Skorohod topology toward the distribution of the process $ \{c^{-1/2}B(t);t\geq0\} $, where $ \{B(t);t\geq0\} $ is standard Brownian motion on $ \mathbb{R} $. \end{kk1*} (See also Papanicolaou and Varadhan (\citeyear{PapVar1981}) for some related results.) \begin{kk2*} If there exists a slowly varying function $ L_1 $ such that \[ \frac{1}{nL_1(n)}\sum_{j=1}^n\frac{1}{\lambda_j}\longrightarrow1\qquad \mbox{in probability}, \] then the distributions of the processes \[ X_n(t):=\frac{1}{n}X(n^2L_1(n)t) \] converge weakly with respect to the Skorohod topology toward the distribution of standard Brownian motion. \end{kk2*} \begin{kk3*} If there exists a slowly varying function $ L_2 $ such that the sequence of random variables \[ R_n:=\frac{1}{n^{1/\alpha}L_2(n)}\sum_{j=1}^n\frac{1}{\lambda_j} \] converges in distribution toward a one-sided stable distribution $ \vartheta_\alpha$ with index $ \alpha\in(0,1) $, then the distributions of the processes \[ X_n(t):=\frac{1}{n}X\bigl(n^{(1+\alpha)/\alpha}L_2(n)t\bigr) \] converge weakly with respect to the Skorohod topology toward the distribution of a continuous self-similar process $ \{X_\ast(t);t\geq0\} $ with scaling exponent $ \eta=\frac{\alpha}{\alpha+1} $. \end{kk3*} \begin{remarks*} (1) In the next section, we will give a representation for the process $ X_\ast$ in terms of a standard Brownian motion and a stable subordinator associated with the measure $ \vartheta_\alpha$. 
(2) We note that the results from Kawazu and Kesten (\citeyear{KawKes1984}) are generalized in Kawazu (\citeyear{Kaw1989}). \end{remarks*} He considered random walks in random environments defined by the following transition asymptotics: \begin{eqnarray*} \mathbb{P} \bigl(X(t+h)=j+1|X(t)=j,\mathcal{A}\bigr) &=& (\lambda_j/\eta _j)h+\mathrm{o}(h),\\ \mathbb{P} \bigl(X(t+h)=j-1|X(t)=j,\mathcal{A}\bigr) &=& (\lambda_{j-1}/\eta _j)h+\mathrm{o}(h),\\ \mathbb{P} \bigl(X(t+h)=j|X(t)=j,\mathcal{A}\bigr) &=& 1-\bigl((\lambda_j+\lambda _{j-1})/\eta_j\bigr)h+\mathrm{o}(h), \end{eqnarray*} where $ \{\eta_j,j\in\mathbb{N}\} $ is an i.i.d. family of positive random variables satisfying suitable assumptions. Similarly to the situation studied in Kawazu and Kesten (\citeyear{KawKes1984}), the resulting random walks converge toward appropriate continuous processes after scaling. In Kesten and Spitzer (\citeyear{KesSpi1979}), new classes of continuous self-similar processes are described. Moreover, it was proven therein that those processes are weak limits of random walks in random scenery. Those random walks are defined as follows. Let $ \{\xi(x);x\in\mathbb{Z}\} $ and $ \{Z_i;i\in\mathbb{N}\} $ be two independent families of i.i.d. random variables, where the random variables $ Z_i $ are assumed to be $ \mathbb{Z} $-valued. One can think of the sequence $ \{Z_i;i\in\mathbb{N}\} $ as increments of a classical $ \mathbb{Z} $-valued random walk $ S_k:=\sum_{i=1}^kZ_i $. The stationary sequence $ \{\xi(S_k);k\in\mathbb{N}\} $ has some non-trivial long-range dependencies if the underlying random walk $ \{S_k;k\in\mathbb{N}\} $ is recurrent. This is the case, for example, if $ Z_1 $ is in the domain of attraction of an $ \alpha$-stable distribution with $ \alpha\in(1,2] $. The random sequence $ D(n):=\sum_{k=1}^n\xi (S_k) $ is called a \emph{random walk in random scenery}. In Kesten and Spitzer (\citeyear{KesSpi1979}), the following convergence result was proven for those processes. \begin{ks1*} If $ \xi(0) $ is in the domain of attraction of a $ \beta$-stable distribution with $ \beta\in(0,2] $ and if $ Z_1 $ is in the domain of attraction of an $ \alpha$-stable distribution with $ \alpha\in(0,1) $, then the distributions of the processes \[ D_n(t):=n^{-1/\beta}\sum_{k=1}^{\lfloor nt\rfloor}\xi(S_k) \] \it converge weakly with respect to the Skorohod topology toward $ \beta$-stable L\'{e}vy motion. \end{ks1*} (See also Spitzer (\citeyear{Spi1976}) for a special case.) \begin{ks2*} If $ \xi(0) $ is in the domain of attraction of a $ \beta$-stable distribution with $ \beta\in(0,2] $ and if $ Z_1 $ is in the domain of attraction of an $ \alpha$-stable distribution with $ \alpha\in(1,2] $, then the distributions of the processes \[ D_n(t):=n^{-\delta}\sum_{k=1}^{\lfloor nt\rfloor}\xi(S_k) \] converge weakly with respect to the Skorohod topology toward a continuous self-similar process~$ D_\ast$ with scaling exponent $ \delta=1-\frac{1}{\alpha}+\frac {1}{\alpha\beta}$. \end{ks2*} \begin{remark*} The statement in KS1 corresponds to the transient case and is not difficult to prove since, in that case, the sequence $ \{\xi(S_k);k\in\mathbb{N}\} $ has only weak dependencies. This is the reason why one obtains $ \beta$-stable L\'evy noise in the limit. We also mention that the case $ \beta=1 $ is still open. \end{remark*} \begin{remark*} There exist various generalizations of the results of Kesten and Spitzer (\citeyear{KesSpi1979}). 
We will only mention Shieh (\citeyear{Shi1995}), where the limiting process is generalized to higher dimensions, Lang and Nguyen (\citeyear{LanNgu1983}), which deals with multidimensional random walks and some special random scenery, Maejima (\citeyear{Mae1996}), where the random scenery belongs to the domain of attraction of an operator-stable distribution, Arai (\citeyear{Ara2001}), where the random scenery belongs to the domain of partial attraction of a semi-stable distribution, and Saigo and Takahashi (\citeyear{SaiTak2005}), where the random scenery and the random walk belong to the domain of partial attraction of semi-stable and operator semi-stable distributions. \end{remark*} In this article, we investigate whether it is possible to replace the classical random walk in the result of Kesten and Spitzer (\citeyear{KesSpi1979}) with the random walk in random environment introduced in Kawazu and Kesten (\citeyear{KawKes1984}). We will restrict our attention to the result KK3 since this is the case in which a new type of self-similar process arises in the limit. For simplicity, and in order to avoid complicating the notation, we will assume that the slowly varying function $ L_2 $ which appears in KK3 is constant and equal to one. The general case involving non-constant $ L_2 $ can be treated in a similar way. We now fix a probability space $ (\Omega,\mathcal{F},\mathbb{P} ) $ which is sufficiently large to support a family of i.i.d. random variables $ \{\lambda_j;j\in\mathbb{Z}\} $, a birth--death process $ \{X(t);t\geq0\} $ with asymptotic transition rates given by equations (1)--(3), and a family of i.i.d. random variables $ \{\xi(k),k\in\mathbb{Z}\} $. We assume that the families $ \{\xi(k),k\in\mathbb{Z}\} $ and $ \{ X(t);t\geq0\} $ are independent and that $ t\mapsto X(t) $ is cadlag $ \mathbb{P} $-almost surely. Further, we assume that $ \lambda^{-1}_1 $ is in the domain of normal attraction of a one-sided $ \alpha$-stable distribution $ \vartheta_\alpha$ with $ \alpha\in(0,1) $. Moreover, we assume that $ \xi(0) $ is in the domain of normal attraction of a $ \beta$-stable distribution $ \vartheta_\beta$ with $ \beta\in(0,2] $. Its characteristic function is given by \[ \psi(\theta)=\exp\bigl(-|\theta|^\beta\bigl(A_1+\mathrm{i}A_2\operatorname{sgn}(\theta)\bigr)\bigr) , \] where $ 0<A_1<\infty$ and $ |A_1^{-1}A_2|\leq|\tan(\uppi\beta/2)| $. For $ \beta>1 $, it follows from those assumptions that $ \mathbb{E} [\xi(0)]=0 $. For $ \beta=1 $, we make the further assumption that there exists a $ K>0 $ such that \[ \bigl|\mathbb{E} \bigl[\xi(0)\mathbh{1}_{[-\rho,\rho]}(\xi(0)) \bigr] \bigr|\leq K\qquad \mbox{for all } \rho>0 . \] We can now define the following continuous-time version of the random walk in random scenery: \[ \Xi(t):=\int_0^t\xi(X(s))\,\mathrm{d}s . \] In the following, we will use the space \[ D[0,\infty):= \{\gamma\dvtx[0,\infty)\rightarrow\mathbb{R}\dvtx\gamma \mbox{ is cadlag} \} \] with the Skorohod topology. We will prove the following theorem. \begin{theorem} \label{MT} For $ \kappa:=\frac{1}{\alpha}+\frac{1}{\beta} $ and $ k_n:=n^{(1+\alpha)/\alpha}$, the distributions of the processes \[ \Xi_n(t):=n^{-\kappa}\int_0^{k_nt}\xi(X(s))\,\mathrm{d}s \] converge weakly with respect to the Skorohod topology toward the distribution of a self-similar stochastic process $ \{\Xi_\ast(t);t\geq0\} $ with scaling exponent $ \mu=1-\frac{\alpha}{\alpha+1}+\frac{\alpha}{(\alpha+1)\beta} $.
\end{theorem} \begin{remark*} The stochastic process $ \{\Xi_\ast(t);t\geq0\} $ can be constructed as follows. Let $ Z_+ $ and $ Z_- $ be two independent copies of the $ \beta$-stable L\'{e}vy process which can be associated with the characteristic function \[ \psi(\theta)=\exp\bigl(-|\theta|^\beta\bigl(A_1+\mathrm{i}A_2\operatorname{sgn}(\theta)\bigr) \bigr) . \] Further, let $ \{L_\ast(\tau,x);\tau\geq0,x\in\mathbb{R}\} $ be the local time of the stochastic process $ \{X_\ast(\tau);\tau\geq0\} $; that is, the random variable $ L_\ast(\tau,x) $ is the derivative with respect to $ x $ of the occupation time \[ \Gamma_\ast(\tau,(-\infty,x]):=\int_0^\tau\mathbh{1}_{(-\infty,x]}(X_\ast(\sigma))\,\mathrm{d}\sigma. \] We will see in the next section that the local time exists for all but a countable number of points $ x\in\mathbb{R} $. Moreover, for all $ \tau\geq0 $, the processes \[ \{L_\ast(\tau,x-);x\geq0\} \quad \mbox{and} \quad \{L_\ast(\tau ,-(x-));x\geq0\} \] are predictable with respect to the natural filtrations of $ Z_+ $ (resp., $ Z_-$). The following integral representation of the process $ \Xi_\ast$ can be given: \[ \Xi_\ast(\tau):=\int_0^\infty L_\ast(\tau,x-)\,\mathrm{d}Z_+(x)+\int _0^\infty L_\ast(\tau,-(x-))\,\mathrm{d}Z_-(x) . \] \end{remark*} \section{The convergence of the birth--death process} The goal of this section is to prove Corollary \ref{PrinceKor}, which is the main ingredient needed to show that the finite-dimensional distributions of $ \Xi_n $ converge toward the finite-dimensional distributions of $ \Xi_\ast$. This corollary contains a statement on the weak convergence of certain functionals of the occupation times of the rescaled processes $ X_n $. A result corresponding to Corollary \ref{PrinceKor} is also proved in Kesten and Spitzer (\citeyear{KesSpi1979}); however, we have to adopt a totally different approach since we do not have such precise information on the potential theory related to the random walk~$ X $. Instead, we will understand the occupation times of $ X_n $ and prove that they converge in an appropriate sense toward the local time of the limit process $ X_\ast$. We describe some of the main arguments from the proof in Kawazu and Kesten (\citeyear{KawKes1984}) for the convergence of the processes \[ X_n(t):=\frac{1}{n}X\bigl(n^{(1+\alpha)/{\alpha}}t\bigr) \] toward the self-similar process $ X_\ast$ defined in Kawazu and Kesten (\citeyear{KawKes1984}). We can enlarge our underlying probability space $ (\Omega,\mathcal{F},\mathbb{P} ) $ in such a way that it contains a standard Brownian motion $ \{ B(t);t\geq0\} $ and a cadlag version of the stable L\'{e}vy subordinator $ \{W(x);x\in\mathbb{R}\} $ which can be associated with the one-sided $ \alpha$-stable distribution $ \vartheta_\alpha$. Furthermore, we assume that $ \{B(t);t\geq0\} $, $ \{W(x);x\in\mathbb {R}\} $, $ \{X(t);t\geq0\} $ and $ \{\xi(n);n\in\mathbb{Z}\} $ are independent. Moreover, we assume that $ W(0)=0 $ and $ B(0)=0 $ hold $ \mathbb{P} $-almost surely. In the future, we will denote by $ \{L(t,x);t\geq0,x\in\mathbb{R}\} $ the local time of the Brownian motion $ \{B(t);t\geq0\} $. The process \[ V_\ast(t):=\int_\mathbb{R}L(t,W(x))\,\mathrm{d}x \] is non-decreasing $ \mathbb{P} $-almost surely. Therefore, we can define the following pseudo-inverse: \[ W^{-1}(y):=\inf\{x\in\mathbb{R};W(x)>y\} \quad \mbox{and} \quad V_\ast^{-1}(\tau):=\inf\{t\geq0;V_\ast(t)>\tau\} . 
\] In Kawazu and Kesten (\citeyear{KawKes1984}), the following representation for the self-similar process $ X_\ast$ is given: \[ X_\ast(\tau):=W^{-1}(B(V_\ast^{-1}(\tau))). \] We now sketch the main arguments from the proof in Kawazu and Kesten (\citeyear{KawKes1984}). We will need some of those ideas in our proof of the convergence of $ \Xi_n $ toward $ \Xi_\ast$. Their approach is based on the natural scale of the birth--death process. One defines \[ S(j):= \cases{\displaystyle \sum_{k=0}^{j-1}\lambda_k^{-1} &\quad for $ j>0$,\vspace*{2pt}\cr 0 & \quad for $j=0$,\cr \displaystyle-\sum_{k=j}^{-1}\lambda_k^{-1} &\quad for $ j<0$. } \] This implies that conditioned on $ \mathcal{A}:=\{\lambda_j;j\in\mathbb {Z}\}, $ the process $ S(X(t)) $ is on natural scale (see Kawazu and Kesten (\citeyear{KawKes1984}), page 565). This means that for all $ a,b,x\in\mathbb{R} $ with $ a<x<b $, one has \[ \mathbb{P} \bigl(S(X(t)) \mbox{ hits } \{a,b\} \mbox{ first at } a \mid S(X(0))=x,\mathcal{A}\bigr)=\frac{b-x}{b-a}. \] It is then possible to represent the process $ S(X(t)) $ as the time change of standard Brownian motion $ \{B(t);t\geq0\} $ as follows. One defines $ m(\mathrm{d}x):=\sum_{i\in\mathbb{Z}}\delta_{S(i)}(\mathrm{d}x) $ and \[ V(t):=\int_\mathbb{R} L(t,x)m(\mathrm{d}x)=\sum_{i\in\mathbb{Z}}L(t,S(i)) , \] where $ \{L(t,x);t\geq0,x\in\mathbb{R}\} $ is again the local time of the standard Brownian motion $ B $. One can see that $ \{B(V^{-1}(t));t\geq0\} $ and $ \{S(X(t));t\geq0\} $ are both cadlag and have the same distribution (see Kawazu and Kesten (\citeyear{KawKes1984}), page 566). One then has to scale the above constructions. \[ S_n(x):=n^{-1/\alpha}S(\lfloor nx\rfloor),\qquad n\in\mathbb{N}, x\in\mathbb{R} , \] where, for a positive real number $ x $, we denote by $ \lfloor x\rfloor$ its integer part. It follows from the assumptions on the environment $ \{\lambda_j;j\in \mathbb{Z}\} $ that for $ n\rightarrow\infty$, the processes $ \{S_n(x);x\in\mathbb{R}\} $ converge in distribution toward an $ \alpha$-stable L\'{e}vy process $ \{W(x);x\in\mathbb{R}\} $. Moreover, the process $ W $ is strictly increasing $ \mathbb{P} $-almost surely since $ \vartheta_\alpha$ is a one-sided stable distribution and $ \alpha \in(0,1) $. By a method given in Skorohod (\citeyear{Sko1956}) and Dudley (\citeyear{Dud1968}), it is possible to construct a suitable probability space $ (\tilde{\Omega},\tilde\mathcal{F},\tilde{\mathbb{P} }) $ with suitable $ D $-valued random variables $ \tilde{S}_n $ and $ \tilde{W} $ having the properties that $ \tilde {S}_n $ converges toward $ \tilde{W} $ almost surely with respect to $ \tilde{\mathbb{P} } $ and that $ \tilde{S}_n $ and $ \tilde{W} $ have the same distributions as $ S_n $ (resp., $ W $) (see Kawazu and Kesten (\citeyear{KawKes1984}), page 567). One then defines \[ \tilde{V}_n(t):=\int_\mathbb{R} L(t,x)\tilde{m}_n(\mathrm{d}x) \quad \mbox{and}\quad \tilde{V}_\ast(t):=\int_\mathbb{R} L(t,x)\tilde{m}_\ast(\mathrm{d}x) \] with \[ \int_\mathbb{R} f(x)\tilde{m}_n(\mathrm{d}x):=\int_{\mathbb{R}}f(\tilde {S}_n(x))\,\mathrm{d}x\quad \mbox{and}\quad \int_\mathbb{R} f(x)\tilde{m}_\ast(\mathrm{d}x):=\int_{\mathbb{R}}f(\tilde {W}(x))\,\mathrm{d}x \] for all measurable $ f\geq0 $. We then define $ \tilde{S}_n^{-1} $, $ \tilde{W}^{-1} $, $ \tilde{V}_n^{-1} $ and $ \tilde{V}_\ast^{-1} $ in the same way as $ W^{-1} $ (resp., $ V_\ast^{-1} $) above. 
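Before continuing, we note that the rescaled natural scale $ S_n $ is easy to experiment with numerically. The following sketch (illustration only, with an arbitrary Pareto-type environment and arbitrary parameter values; it is not part of the argument) samples an environment $ \{\lambda_j^{-1};j\in\mathbb{Z}\} $ with tail index $ \alpha\in(0,1) $ and evaluates $ S_n(x)=n^{-1/\alpha}S(\lfloor nx\rfloor) $, whose paths approximate the strictly increasing $ \alpha$-stable subordinator $ W $ when $ n $ is large.
\begin{verbatim}
# Illustrative sketch: sample a heavy-tailed environment lambda_j^{-1} and
# evaluate the rescaled natural scale S_n(x) = n^{-1/alpha} S(floor(n x)).
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5                 # index of the one-sided stable law, in (0, 1)
n = 200                     # scaling parameter

# lambda_j^{-1} for j = -2n, ..., 2n; Pareto(alpha) tails lie in the
# domain of attraction of a one-sided alpha-stable law
lam_inv = rng.pareto(alpha, size=4 * n + 1) + 1.0

def S(j):
    """Natural scale: sum_{k=0}^{j-1} lambda_k^{-1} for j > 0, 0 for j = 0,
    and -sum_{k=j}^{-1} lambda_k^{-1} for j < 0."""
    if j > 0:
        return lam_inv[2 * n:2 * n + j].sum()
    if j == 0:
        return 0.0
    return -lam_inv[2 * n + j:2 * n].sum()

xs = np.linspace(-2.0, 2.0, 801)
S_n = np.array([S(int(np.floor(n * x))) for x in xs]) / n ** (1.0 / alpha)
# S_n is a non-decreasing step function of x; for large n its path looks
# like a realization of the alpha-stable subordinator W on [-2, 2].
\end{verbatim}
Such an experiment is, of course, only heuristic; all convergence statements used below are the distributional and almost sure ones quoted from Kawazu and Kesten (\citeyear{KawKes1984}).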
In Kawazu and Kesten (\citeyear{KawKes1984}) (see page 568) they prove that $ \{B(\tilde{V}^{-1}_n(t));t\geq0\} $ converges $ \tilde{\mathbb{P} } $-almost surely toward $ \{B(\tilde{V}^{-1}_\ast(t));t\geq0\} $ in the $ J_1 $-topology. For convenience, we define \[ \tilde{X}_n(t):=\tilde{S}_n^{-1}(B(\tilde{V}^{-1}_n(t))), \qquad \tilde{X}_\ast(t):=\tilde{W}^{-1}(B(\tilde{V}^{-1}_\ast(t))) . \] We note that the process $ \{\tilde{X}_n(t);t\geq0\} $ is defined on $ (\Omega\times\tilde{\Omega},\mathcal{F}\times\tilde\mathcal{F},\mathbb{P} \times\tilde{\mathbb{P} }) $. It is proved in Kawazu and Kesten (\citeyear{KawKes1984}) that $ \{\tilde{X}_n(t);t\geq 0\} $ converges toward $ \{\tilde{X}_\ast(t);t\geq0\} $ with respect to the $ J_1 $-topology almost surely with respect to $ \mathbb{P} \times \tilde{\mathbb{P} } $ (see page 569). Moreover, for $ B_n(t):=n^{-1/2}B(nt) $ one has that (see Kawazu and Kesten (\citeyear{KawKes1984}), page 572) \[ |X_n(t)-S_n^{-1}(B_n(V_n^{-1}(t)))|\leq1/n \] and \[ \{S_n^{-1}(B_n(V^{-1}_n(t)));t\geq0 \}\stackrel{\mathcal{D}}{=} \{\tilde{S}^{-1}_n(B(\tilde{V}_n^{-1}(t)));t\geq0 \} = \{\tilde{X}_n(t);t\geq0 \} . \] If we define $ \hat{X}_n(t):=S_n^{-1}(B_n(V^{-1}_n(t))) $, then the previous observations imply that both processes $ \{X_n(t);t\geq0\} $ and $ \{\hat{X}_n(t);t\geq0\} $ converge in distribution toward $ \{\tilde{X}_\ast(t);t\geq0\} $, which has the same distribution as $ \{X_\ast(t);t\geq0\} $. In the rest of this section, we analyze the distributional behavior of the occupation times for the process $ X_n $ (see Proposition \ref{PrinceProp}). In order to obtain this result, we prove an analogous result for the process $ \tilde{X}_n $ (see Lemma \ref{PrinceLem}), which can be reduced to Proposition \ref{CardTowardMeasureProp}. The advantage of this detour is that we can prove almost sure convergence for the occupation times of the process $ \tilde{X}_n $ toward the local time of $ \tilde{X}_\ast$ (see Proposition \ref{OkTimeLokTimeConvProp}). This result is based on the fact that we have explicit formulas for the occupation times of $ \tilde{X}_n $ and the local time of $ \tilde{X}_\ast$ (see Proposition \ref{OkTimePro} and Corollary \ref{LokTimeKor1}). The explicit expression of the occupation time of $ \tilde{X}_n $ and the local time of $ \tilde{X}_\ast$ reveals that in order to prove Proposition \ref {OkTimeLokTimeConvProp}, it is sufficient to prove the almost sure convergence of $ \tilde{S}_n $ and $ \tilde{V}_n^{-1} $ toward $ \tilde{W}_\ast$ (resp., $ \tilde{V}^{-1}_\ast$). The convergence of $ \tilde{S}_n $ toward $ \tilde{W}_\ast$ holds by construction. The convergence of $ \tilde{V}_n $ toward $ \tilde {V}_\ast$ is obtained in Lemma \ref{TimeChangeConvLem} and then used to obtain the convergence of $ \tilde{V}_n^{-1} $ toward $ \tilde{V}^{-1}_\ast$ in Lemma \ref{InversTimeChangeConvLem}. \subsection{The local times of $ X_\ast$ and $ \tilde{X}_\ast$} We define the time that the processes $ \tilde{X}_\ast$ and $ X_\ast $ spend in the measurable set $ A $ until time $ \tau$ as \[ \Gamma_\ast(\tau,A):=\int_0^\tau\mathbh{1}_{A}(X_\ast (\sigma))\,\mathrm{d}\sigma\qquad \biggl(\mbox{resp.},\ \tilde{\Gamma}_\ast(\tau,A):=\int_0^\tau\mathbh{1}_{A}(\tilde{X}_\ast(\sigma))\,\mathrm{d}\sigma\biggr). \] We denote by $ \{L_\ast(\tau,x);\tau\geq0,x\in\mathbb{R}\} $ and $ \{\tilde{L}_\ast(\tau,x);\tau\geq0,x\in\mathbb{R}\} $ the local times of $ X_\ast$ (resp., $ \tilde{X}_\ast$) if they exist. 
In this subsection, we prove that both local times exist almost surely and relate them to the local time $ \{L(t,x);t\geq0,x\in\mathbb{R}\} $ of the underlying Brownian motion $ \{B(t);t\geq0\} $. \begin{proposition} \label{LokTimePro} One has $ \mathbb{P} $-almost surely that for all $ \tau\geq0 $ and all $ x\in \mathbb{R} $, \[ \Gamma_\ast(\tau,(-\infty,x))=\int_{-\infty}^xL(V_\ast^{-1}(\tau ),W(y))\,\mathrm{d}y . \] Further, $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely for all $ \tau\geq0 $ and all $ x\in\mathbb{R} $, \[ \tilde{\Gamma}_\ast(\tau,(-\infty,x))=\int_{-\infty}^xL(\tilde {V}_\ast^{-1}(\tau),\tilde{W}(y))\,\mathrm{d}y . \] \end{proposition} \begin{pf} We have $ \mathbb{P} $-almost surely that $ x\mapsto W(x) $ is increasing. It follows that the set $ \mathcal{N}_1 $ of $ x\in\mathbb{R} $ where $ W $ is not continuous is countable. We define the set \[ \mathcal{N}_2:= \bigl\{x\in\mathbb{R}\dvtx\ell\bigl(\sigma;B(V_\ast^{-1}(\sigma ))=W(x)\bigr)>0 \bigr\} , \] where $ \ell$ denotes the Lebesgue measure on $ \mathbb{R} $. The set $ \mathcal{N}_2 $ is countable since for $ x_1\neq x_2 $, the sets $ \{\sigma;B(V_\ast^{-1}(\sigma))=W(x_1)\} $ and $\{\sigma ;B(V_\ast^{-1}(\sigma))=W(x_2)\} $ are disjoint, and there cannot be an uncountable number of disjoint subsets of $ \mathbb{R} $ with positive Lebesgue measure. Thus the set $ \mathcal{N}:=\mathcal{N}_1\cup\mathcal{N}_2 $ is countable. Since the function $ x\mapsto\Gamma_\ast(\tau,(-\infty,x)) $ is increasing and since \[ x\mapsto\int_{-\infty}^xL(V_\ast^{-1}(\tau),W(y))\,\mathrm{d}y \] is continuous, it is sufficient to prove the statement of the proposition for $ x\in\mathcal{N}^c $. The fact that $ W $ is increasing and continuous at $ x $ implies the equivalence of the statement $ W(x)>y $ with the statement $ \exists z_0<x\dvtx W(z_0)>y $. The latter statement is then equivalent to the statement $ W^{-1}(y):=\inf\{z\dvtx W(z)>y\}<x $. This then implies that $ \mathbh{1}_{(-\infty,x)}(X_\ast(\sigma)) =\mathbh{1}_{(-\infty,W(x))}(B(V_\ast^{-1}(\sigma))) $. We also note that $ t\mapsto V_\ast(t) $ is continuous and non-decreasing. This implies that $ V_\ast\circ V_\ast^{-1}=\operatorname{id}_{\mathbb{R}^+} $. In the following, we want to compute the derivative of the non-decreasing function \[ M\dvtx \sigma\mapsto\int_{-\infty}^xL(V_\ast^{-1}(\sigma),W(y))\,\mathrm{d}y . \] Since $ W $ is increasing and continuous at $ x $, we have that $ B(V_\ast^{-1}(\sigma_0))<W(x) $ implies that \[ \sigma\mapsto\int_x^\infty L(V_\ast^{-1}(\sigma),W(y))\,\mathrm{d}y \] is locally constant, say equal to $c_0$, in a neighborhood of $ \sigma_0 $. Thus \[ \sigma\mapsto\int_{-\infty}^xL(V_\ast^{-1}(\sigma),W(y))\,\mathrm{d}y = V_\ast(V_\ast^{-1}(\sigma))-c_0=\sigma-c_0 \] in a neighborhood of $ \sigma_0 $. Moreover, since $ W $ is increasing and continuous at $ x $, we have that $ B(V_\ast^{-1}(\sigma_0))>W(x)$ implies that \[ \sigma\mapsto\int_{-\infty}^xL(V_\ast^{-1}(\sigma),W(y))\,\mathrm{d}y \] is locally constant in a neighborhood of $ \sigma_0 $. It therefore turns out that \[ M'(\sigma)= \cases{ 1, &\quad if $ B(V_\ast^{-1}(\sigma))<W(x)$,\cr 0, &\quad if $ B(V_\ast^{-1}(\sigma))>W(x)$.
} \] Moreover, for all $ \sigma_1,\sigma_2\in\mathbb{R}^+ $ with $ \sigma_1\leq\sigma_2 $, we have that \[ \int_{-\infty}^xL(V_\ast^{-1}(\sigma_1),W(y))\,\mathrm{d}y \leq\int_{-\infty}^xL(V_\ast^{-1}(\sigma_2),W(y))\,\mathrm{d}y \] and \[ \int_x^\infty L(V_\ast^{-1}(\sigma_1),W(y))\,\mathrm{d}y \leq\int_x^\infty L(V_\ast^{-1}(\sigma_2),W(y))\,\mathrm{d}y . \] This implies that \begin{eqnarray*} &&\int_{-\infty}^xL(V_\ast^{-1}(\sigma_2),W(y))\,\mathrm{d}y -\int_{-\infty}^xL(V_\ast^{-1}(\sigma_1),W(y))\,\mathrm{d}y\\ &&\quad \leq V_\ast(V_\ast^{-1}(\sigma_2))-V_\ast(V_\ast^{-1}(\sigma _1))=\sigma_2-\sigma_1. \end{eqnarray*} It follows that \[ \sigma\mapsto\int_{-\infty}^xL(V_\ast^{-1}(\sigma),W(y))\,\mathrm{d}y \] is Lipschitz continuous with Lipschitz constant smaller than one. Since the set $ \{\sigma\dvtx B(V_\ast^{-1}(\sigma))=W(x)\} $ is a zero set with respect to the Lebesgue measure $ \ell$ for all $ x\in\mathcal{N}^c $, it follows that \[ \int_0^\tau\mathbh{1}_{(-\infty,x)}(X_\ast(\sigma ))\,\mathrm{d}\sigma= \int_0^\tau\mathbh{1}_{(-\infty,W(x))}(B(V_\ast ^{-1}(\sigma)))\,\mathrm{d}\sigma= \int_0^\tau M'(\sigma)\,\mathrm{d}\sigma=M(\tau) . \] The second statement is proved in the same way. \end{pf} \begin{corollary} \label{LokTimeKor1} One has $ \mathbb{P} $-almost surely that the local time $ L_\ast (\tau,x) $ is defined for all $ \tau\geq0 $ and all $ x $, where $ x\mapsto W(x) $ is continuous. Further, one has $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely that the local time $ \tilde{L}_\ast(\tau,x) $ is defined for all $ \tau\geq0 $ and all $ x $, where $ x\mapsto \tilde{W}(x) $ is continuous. In those points, one has \[ L_\ast(\tau,x)=L(V_\ast^{-1}(\tau),W(x)) \qquad \bigl(\mbox{resp.}, \ \tilde{L}_\ast(\tau,x)=L(\tilde{V}_\ast^{-1}(\tau),\tilde {W}(x))\bigr) . \] \end{corollary} \begin{pf} Differentiation in Proposition \ref{LokTimePro} proves this corollary. \end{pf} \subsection{The occupation time of $ \tilde{X}_n $} For a measurable set $ A\subset\mathbb{R} $, we define \[ \hat{\Gamma}_n(t,A):=\int_0^t \mathbh{1}_{A}(\hat {X}_n(\sigma))\,\mathrm{d}\sigma, \qquad \tilde{\Gamma}_n(t,A):=\int_0^t \mathbh{1}_{A}(\tilde {X}_n(\sigma))\,\mathrm{d}\sigma \] and \[ \Gamma_n(t,A):=\int_0^t \mathbh{1}_{A}(X_n(\sigma ))\,\mathrm{d}\sigma. \] These are the respective times that the processes $ \hat{X}_n $, $ \tilde{X}_n $ and $ X_n $ spend in the set $ A $ until time $ t $. In this section, we give an explicit expression for the occupation time of $ \tilde{X}_n $ in terms of the local time $ \{L(t,x);t\geq0,x\in\mathbb{R}\} $ of the underlying Brownian motion $ \{B(t);t\geq0\} $. \begin{proposition}\label{OkTimePro} One has $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely for all $ \tau\geq0 $ and all $ x\in\mathbb{R} $ that \[ \tilde{\Gamma}_n(\tau,\{x\})= \cases{ \displaystyle\frac{1}{n}L\biggl(\tilde{V}_n^{-1}(\tau),\tilde{S}_n\biggl(x-\frac{1}{n}\biggr)\biggr), &\quad if $ nx\in\mathbb{Z},$\cr 0, &\quad if $ nx\notin\mathbb{Z} $. } \] \end{proposition} \begin{pf} First, we note that \[ S_n^{-1}(S_n(x))=x+1/n \qquad \mbox{for all } x \mbox{ satisfying } nx \in\mathbb{Z}. \] If we use the fact that $ \{B_n(V_n^{-1}(t));t\geq0\}\}\stackrel{\mathcal{D}}{=}\{S_n(X_n(t));t\geq0\}, $ then we can\vspace*{-3pt} see that $ \{\hat{X}_n(t);t\geq0\}\stackrel{\mathcal{D}}{=}\{ X_n(t)+1/n;t\geq0\} $. Therefore, we see that $ \hat{X}_n $ only takes values in the lattice $ \frac{1}{n}\mathbb{Z} $. 
Moreover, we have that $ \tilde{S}_n $ and $ \tilde{V_n} $ have the same joint distribution as $ S_n $ and $ V_n $. Therefore, $ \hat{X}_n=S_n^{-1}(B_n(V_n^{-1}(\cdot))) $ has the same distribution as $ \tilde{X}_n=\tilde{S}_n^{-1}(B(\tilde{V}_n^{-1}(\cdot))) $. From this, it also follows that $ \tilde{X}_n $ stays for all time in the countable state space $ \{x\in\mathbb{R};nx\in\mathbb{Z}\} $. This implies that $ \tilde{\Gamma}_n(\tau,\{x\})=0 $ for $ nx\notin \mathbb{Z} $. This proves one part of the statement. For the proof of the other part of the statement, we will need the derivative of the function \[ \tilde{M}(\sigma):= \frac{1}{n}L\bigl(\tilde{V}_n^{-1}(\sigma),\tilde {S}_n(x-1/n)\bigr) . \] We first collect some useful facts which help to compute the derivative of $ \tilde{M} $.\vspace*{1pt} Since $ \tilde{S}_n $ is constant on the intervals $ [\frac {k}{n},\frac{k+1}{n}) $ for all $ k\in\mathbb{Z} $, we have \begin{equation} \label{SumGleich} \tilde{V}_n(t)=\int_\mathbb{R}L(t,\tilde{S}_n(x))\,\mathrm{d}x=\frac {1}{n}\sum_{i\in\mathbb{Z}}L\bigl(t,\tilde{S}_n(i/n)\bigr) . \end{equation} Since the $ (t,x)\mapsto L(t,x) $ is jointly continuous and non-decreasing $ \mathbb{P} $-almost surely (see Boylan (\citeyear{Boy1964}) or Getoor and Kesten (\citeyear{GetKes1972})), it follows that $ t\mapsto\tilde{V}_n(t) $ is continuous and non-decreasing $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely. This then gives rise to \begin{equation} \label{VnId} \tilde{V}_n\circ\tilde{V}_n^{-1}=\operatorname{id}_{\mathbb{R}^+} \qquad \mathbb{P} \times\tilde{\mathbb{P} }\mbox{-almost surely}. \end{equation} By construction, one has for all $ b\in\{\tilde{S}_n(x);x\in\mathbb {R}\} $ that $ \tilde{S}_n^{-1}(b)=x $ is equivalent to $ b=\tilde{S}_n(x-\frac{1}{n}) $. Moreover, one has that $ B(\tilde{V}_n^{-1}(\sigma))\in\{\tilde{S}_n(x);x\in\mathbb{R}\} $ for all $ \sigma\geq0 $ almost surely with respect to $ \mathbb{P} \times\tilde{\mathbb{P} } $. Hence, \begin{equation} \label{equival} \tilde{X}_n(\sigma)=\tilde{S}_n^{-1}(B(\tilde{V}_n^{-1}(\sigma )))=x \mbox{ is equivalent to } B(\tilde{V}_n^{-1}(\sigma))=\tilde{S}_n\biggl(x-\frac{1}{n}\biggr) . \end{equation} Moreover, the random variables $ \{\lambda_i^{-1};i\in\mathbb{N}\} $ are positive $ \mathbb{P} $-almost surely and therefore \begin{eqnarray} \label{inject} \mbox{the restriction of } x\mapsto\tilde{S}_n(x) \mbox{ to the set } \frac{1}{n}\mathbb{Z} \mbox{ is injective almost surely with respect to } \tilde{\mathbb {P} } . \end{eqnarray} Since, conditioned on $ \mathcal{A}=\sigma\{\lambda_j;j\in\mathbb{N}\} $, the process $ X $ is a Markov process, it follows that for $ nx\in\mathbb{Z} $, there exist non-negative random variables $ a_1<b_1<a_2<b_2<\cdots$ with the property \begin{eqnarray*} \{\sigma\geq0;\tilde{X}_n(\sigma)=x \}=\bigcup_{i\in\mathbb {N}}[a_i,b_i) \qquad \mathbb{P} \times\tilde{\mathbb{P} }\mbox{-a.s.} \end{eqnarray*} This implies that for all $ \sigma_0\notin\{a_i;i\in\mathbb{N}\} $, there exists a neighborhood $ \mathcal{U}(\sigma_0)$ containing $ \sigma_0 $ with the property that $ \sigma\mapsto\tilde{X}_n(\sigma)=\tilde{S}_n^{-1}(B(\tilde {V}_n^{-1}(\sigma))) $ is constant on $ \mathcal{U}(\sigma_0) $. Equations (\ref{equival}) and (\ref{inject}) then imply that $ \sigma\mapsto B(\tilde{V}_n^{-1}(\sigma)) $ must be constant on $ \mathcal{U}(\sigma_0) $. 
Therefore, for $ \sigma_0\notin\{a_i;i\in\mathbb{N}\} $ and $ B(\tilde{V}_n^{-1}(\sigma_0))\neq\tilde{S}_n(x-\frac{1}{n}) $, we have $ B(\tilde{V}_n^{-1}(\sigma))\neq\tilde{S}_n(x-\frac{1}{n}) $ for all $ \sigma$ in a neighborhood of $ \sigma_0 $. Hence \[ \sigma\mapsto L\bigl(\tilde{V}_n^{-1}(\sigma),\tilde{S}_n(x-1/n)\bigr) \] is constant in a neighborhood of $ \sigma_0$. The previous argument and the fact that $ \tilde{X}_n $\vspace*{1pt} only jumps to nearest neighbors in $ \frac{1}{n}\mathbb{Z} $ lead to the fact that $ \sigma_0\notin\{ a_i;i\in\mathbb{N}\} $ and $ B(\tilde{V}_n^{-1}(\sigma_0))=\tilde{S}_n(x-\frac{1}{n}) $ imply the existence of a suitable $ c_0>0 $ with the property \[ \sigma\mapsto\frac{1}{n}\sum_{z\neq nx-1}L\bigl(\tilde{V}_n^{-1}(\sigma ),\tilde{S}_n(z/n)\bigr)=c_0 \] in a neighborhood of $ \sigma_0$. Therefore, we can use (\ref{VnId}) to see that $ B(\tilde {V}_n^{-1}(\sigma_0))=\tilde{S}_n(x-\frac{1}{n}) $ implies that \[ \sigma\mapsto\frac{1}{n}L\bigl(\tilde{V}_n^{-1}(\sigma),\tilde {S}_n(x-1/n)\bigr)=\tilde{V}_n(\tilde{V}_n^{-1}(\sigma))-c_0=\sigma-c_0 \] in a neighborhood of $ \sigma_0 $. Consequently, the function \[ \tilde{M}(\sigma):= \frac{1}{n}L\bigl(\tilde{V}_n^{-1}(\sigma),\tilde {S}_n(x-1/n)\bigr) \] is differentiable for all $ \sigma\notin\{a_i;i\in\mathbb{N}\} $, and for $ nx\in\mathbb{Z} $, we have \[ \tilde{M}'(\sigma)= \cases{ 1, &\quad if $\displaystyle B(\tilde{V}_n^{-1}(\sigma))=\tilde{S}_n\biggl(x-\frac {1}{n}\biggr)$,\cr 0, &\quad if $\displaystyle B(\tilde{V}_n^{-1}(\sigma))\neq\tilde{S}_n\biggl(x-\frac{1}{n}\biggr)$. } \] Moreover, it is possible to prove that the function $ \tilde{M} $ is Lipschitz continuous with Lipschitz constant one. From those properties, it follows that \[ \int_0^\tau\mathbh{1}_{\{x\}}(\tilde{X}_n(\sigma ))\,\mathrm{d}\sigma= \int_0^\tau\mathbh{1}_{\{\tilde{S}_n(x-1/n)\} }(B(\tilde{V}_n^{-1}(\sigma)))\,\mathrm{d}\sigma= \int_0^\tau\tilde{M}'(\sigma)\,\mathrm{d}\sigma=\tilde{M}(\tau) . \] \upqed \end{pf} \subsection{The convergence of the occupation times} In this section, we investigate whether the occupation times of $ \tilde{X}_n $ converge toward the local time of $ \tilde{X}_\ast$ in an appropriate way as $ n\rightarrow \infty$. For this, we first need some auxiliary results. \begin{lemma} \label{TimeChangeConvLem} One has $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely that $ \tilde{V}_n(t) $ converges toward $ \tilde{V}_\ast(t) $ for all $ t\in\mathbb{R} $. \end{lemma} \begin{pf} We fix a $ T>0 $ and define $ w_o:=\sup\{x\dvtx L(T,x)>0\} $ and $ w_u:=\inf\{x\dvtx L(T,x)>0\} $. Those two random variables are defined on $ \Omega$ and do not depend on $ \tilde{\Omega} $. We know that $ \{\tilde{S}_n(x);x\in\mathbb{R}\} $ converges toward $ \{\tilde{W}(x);x\in\mathbb{R}\} $ with respect to the $ J_1 $-topology $ \tilde{\mathbb{P}} $-almost surely. We note that the local time of Brownian motion $ (t,x)\mapsto L(t,x) $ is jointly continuous $ \mathbb{P} $-almost surely (see Boylan (\citeyear{Boy1964}) or Getoor and Kesten (\citeyear{GetKes1972})). It follows that $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely $ \{L(t,\tilde {S}_n(x));x\in\mathbb{R}\} $ converges toward $ \{L(t,\tilde{W}(x));x\in\mathbb{R}\} $ with respect to the $ J_1 $-topology for all $ t\in[0,T] $.
We fix a pair $ (\omega,\tilde{\omega})\in\Omega\times\tilde {\Omega} $ with the property that $ \{L(t,\tilde{S}_n(x))(\omega,\tilde{\omega});x\in\mathbb{R}\} $ converges toward $ \{L(t,\tilde{W}(x))(\omega,\tilde{\omega});x\in\mathbb{R}\} $ with respect to the $ J_1 $-topology for all $ t\in[0,T] $. There then exist suitable $ x_u,x_o\in\mathbb{R} $ with $ \tilde {W}(x_u)\leq w_u $ and $ \tilde{W}(x_o)\geq w_o $, and there exists a sequence of increasing, absolutely continuous, surjective Lipschitz maps $ \lambda_n\dvtx [x_u,x_o]\rightarrow[x_u,x_o] $ with the properties \[ \sup_{x\in[x_u,x_o]} |L(t,\tilde{W}(x))-L(t,\tilde{S}_n(\lambda _n(x))) |\longrightarrow0 \qquad \mbox{as } n\rightarrow\infty \] and \[ \operatorname{esssup}\limits_{x\in[x_u,x_o]} |\lambda_n'(x)-1 |\longrightarrow0 \qquad \mbox{as } n\rightarrow\infty. \] We should emphasize that the derivative of the function $ \lambda_n $ may not exist everywhere. However, those points where it does not exist form a zero set since $ \lambda_n $ is an absolutely continuous Lipschitz function. By a change of variables for all $ t\in[0,T] $, one then has \begin{eqnarray*} && \int_{x_u}^{x_o} L(t,\tilde{S}_n(x))\,\mathrm{d}x-\int_{x_u}^{x_o} L(t,\tilde{S}_n(\lambda_n(x)))\,\mathrm{d}x \\ &&\quad= \int_{x_u}^{x_o} L(t,\tilde{S}_n(x)) \biggl(1-\frac{1}{\lambda _n'(\lambda_n^{-1}(x))} \biggr)\,\mathrm{d}x +\mathrm{O} \Bigl(\sup_{x\in[x_u,x_o]}|\lambda_n(x)-x| \Bigr). \end{eqnarray*} It follows from the assumptions on the sequence $ \lambda_n $ that the above difference converges toward zero. Further, for all $ t\in[0,T] $, we have that \[ \int_{\mathbb{R}} L(t,\tilde{S}_n(\lambda_n(x)))\,\mathrm{d}x\longrightarrow \int_{\mathbb{R}} L(t,\tilde{W}(x))\,\mathrm{d}x\qquad \mbox{as } n\rightarrow \infty. \] Hence, one has $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely that $ \tilde {V}_n(t) $ converges toward $ \tilde{V}_\ast(t) $ for all $ t\in[0,T] $. Thus, for every $ T>0 $, we obtain a zero set $ N_T $ in $ \Omega\times\tilde{\Omega} $ where this convergence does not hold. The lemma now follows since the union \[ N_\infty:=\bigcup_{T\in\mathbb{N}}N_T \] is also a zero set with respect to $ \mathbb{P} \times\tilde{\mathbb {P} } $. \end{pf} Let $ f\dvtx\mathbb{R}\rightarrow\mathbb{R} $ be a function. We call $ \tau\in f(\mathbb{R}) $ a \emph{critical value} for $ f $ if there exist at least two distinct points $ t_1,t_2\in\mathbb{R} $ such that $ f(t_1)=f(t_2)=\tau$. Further, we call a point $ \tau\in f(\mathbb{R}) $ a \emph{regular value} for $ f $ if it is not a critical value. It is straightforward to see that the preimages of critical values contain an open interval if the function $ f $ is non-decreasing. This implies that the set of critical values of a non-decreasing function is at most countable. \begin{lemma}\label{InversTimeChangeConvLem} One has $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely that $ \tilde {V}_n^{-1}(\tau) $ converges toward $ \tilde{V}_\ast^{-1}(\tau) $ for all regular values $ \tau$ of $ \tilde{V}_\ast$. \end{lemma} \begin{pf} We note that $ \mathbb{P} $-almost surely the local time $ L(t,x) $ of the Brownian motion $ B $ is continuous and non-decreasing in $ t $ for all $ x\in\mathbb{R} $ (see Boylan (\citeyear{Boy1964}) or Getoor and Kesten (\citeyear{GetKes1972}) for the continuity). It follows that $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely the function \[ t\mapsto\tilde{V}_\ast(t):=\int_\mathbb{R} L(t,x)\tilde{m}_\ast(\mathrm{d}x) \] is continuous and non-decreasing.
Therefore, $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely the function $ \tilde{V}_\ast^{-1}(\tau):=\inf\{t;\tilde{V}(t)>\tau\} $ is strictly increasing and right-continuous. We use Lemma \ref{TimeChangeConvLem} to fix a pair $ (\omega,\tilde {\omega})\in\Omega\times\tilde{\Omega} $ with the properties that: \begin{longlist}[(ii)] \item[(i)] $ \tau\mapsto\tilde{V}_\ast^{-1}(\tau) $ is strictly increasing and right-continuous; \item[(ii)] $ \tilde{V}_n(t) $ converges toward $ \tilde{V}_\ast(t) $ for all $ t\geq0 $. \end{longlist} Since the set where $ \tilde{V}_\ast$ is not continuous is countable, the set where $ \tilde{V}_\ast$ is continuous is dense in $ [0,\infty) $. We denote by $ K $ the set of critical values of $ \tilde{V}_\ast$. As was pointed out before, $ K $ is at most countable. For an arbitrary point $ \tau\in[0,\infty)\cap K^c $ and for any $ \epsilon>0 $, one can find points $ t_{\epsilon,0}, t_{\epsilon,1}\in(\tilde{V}_\ast^{-1}(\tau )-\epsilon,\tilde{V}_\ast^{-1}(\tau)) $ and $ t_{\epsilon,2}, t_{\epsilon,3}\in(\tilde{V}_\ast^{-1}(\tau ),\tilde{V}_\ast^{-1}(\tau)+\epsilon) $ with the property \[ \tilde{V}_\ast(t_{\epsilon,0})<\tilde{V}_\ast(t_{\epsilon ,1})<\tau<\tilde{V}_\ast(t_{\epsilon,2}) <\tilde{V}_\ast(t_{\epsilon,3}) . \] We can now choose a $ \delta>0$ such that \[ \tilde{V}_\ast(t_{\epsilon,0})+\delta<\tilde{V}_\ast(t_{\epsilon ,1})-\delta <\tilde{V}_\ast(t_{\epsilon,1})+\delta<\tau<\tilde{V}_\ast (t_{\epsilon,2})-\delta <\tilde{V}_\ast(t_{\epsilon,2})+\delta <\tilde{V}_\ast(t_{\epsilon,3})-\delta. \] Since $ \tilde{V}_n $ converges toward $ \tilde{V}_\ast$ in all points where $ \tilde{V}_\ast$ is continuous, there exists an $ n_0\in\mathbb{N} $ such that for all $ n\geq n_0 $, we have \[ \tilde{V}_n(t_{\epsilon,0})<\tilde{V}_\ast(t_{\epsilon,0})+\delta <\tilde{V}_\ast(t_{\epsilon,1})-\delta <\tilde{V}_n(t_{\epsilon,1})<\tilde{V}_\ast(t_{\epsilon,1})+\delta <\tau \] and \[ \tau<\tilde{V}_\ast(t_{\epsilon,2})-\delta<\tilde {V}_n(t_{\epsilon,2}) <\tilde{V}_\ast(t_{\epsilon,2})+\delta <\tilde{V}_\ast(t_{\epsilon,3})-\delta<\tilde{V}_n(t_{\epsilon ,3}) . \] By definition of $ t_{\epsilon,0}$, we have that $ z\leq\tilde {V}_\ast^{-1}(\tau)-\epsilon$ implies $ z\leq t_{\epsilon,0} $. From monotonicity and the first of both inequalities above, it follows that \[ \tilde{V}_n(z)\leq\tilde{V}_n(t_{\epsilon,0})\leq\tilde{V}_\ast (t_{\epsilon,0}) +\delta<\tilde{V}_\ast(t_{\epsilon,1}) . \] We have thus seen that $ z\leq\tilde{V}_\ast^{-1}(\tau)-\epsilon$ implies $ \tilde{V}_n(z)<\tilde{V}_\ast(t_{\epsilon,1}) $. If we reverse the implication, then we obtain that $ \tilde {V}_n(z)\geq\tilde{V}_\ast(t_{\epsilon,1}) $ implies $ z>\tilde{V}_\ast^{-1}(\tau)-\epsilon$. From this implication, it follows that \[ \tilde{V}_n^{-1}(\tilde{V}_\ast(t_{\epsilon,1}))=\inf\{z\dvtx\tilde {V}_n(z)>\tilde{V}_\ast(t_{\epsilon,1})\} >\tilde{V}_\ast^{-1}(\tau)-\epsilon. \] For $ z=t_{\epsilon,3} $, we have $ \tilde{V}_n(z)=\tilde {V}_n(t_{\epsilon,3})>\tilde{V}_\ast(t_{\epsilon,2}) $. In other words, there exists a $ z<\tilde{V}_\ast^{-1}(\tau )+\epsilon$ with $ \tilde{V}_n(z)>\tilde{V}_\ast(t_{\epsilon,2}) $. This proves that \[ \tilde{V}_\ast^{-1}(\tau)+\epsilon>\tilde{V}_n^{-1}(\tilde {V}_\ast(t_{\epsilon,2})) . \] Altogether, we have proven that for all $ n\geq n_0 $, \[ \tilde{V}_\ast^{-1}(\tau)-\epsilon<\tilde{V}_n^{-1}(\tilde {V}_\ast(t_{\epsilon,1})) <\tilde{V}_n^{-1}(\tilde{V}_\ast(t_{\epsilon,2}))<\tilde{V}_\ast ^{-1}(\tau)+\epsilon. 
\] By monotonicity, for all $ n\geq n_0 $ and all $ \tau'\in[\tilde{V}_\ast(t_{\epsilon,1}),\tilde{V}_\ast (t_{\epsilon,2})], $ one has \[ \tilde{V}_\ast^{-1}(\tau)-\epsilon<\tilde{V}_n^{-1}(\tau')<\tilde {V}_\ast^{-1}(\tau)+\epsilon. \] Since $ \tau\in[\tilde{V}_\ast(t_{\epsilon,1}),\tilde{V}_\ast (t_{\epsilon,2})] $, the proof is complete. \end{pf} \begin{lemma} \label{RegValLem1} For all $ \tau\geq0 $, one has that $ \tau$ is a regular value of $ \tilde{V}_\ast$ almost surely with respect to $ \mathbb{P} \times\tilde{\mathbb{P} } $. \end{lemma} \begin{pf} By the invariance properties of Brownian motion, we have that for all $ \gamma>0 $, \[ \{L(t,w);w\in\mathbb{R},t\geq0\}\stackrel{\mathcal{D}}{=} \{\gamma^{-1}L(\gamma^2t,\gamma w);w\in\mathbb{R},t\geq0\} . \] By the invariance of the $ \alpha$-stable L\'{e}vy process, we have that \begin{eqnarray*} \{L(t,\tilde{W}(x));x\in\mathbb{R},t\geq0\}&\stackrel{\mathcal{D}}{=}& \{\gamma^{-1}L(\gamma^2t,\gamma\tilde{W}(x));x\in\mathbb{R},t\geq 0\} \\ & \stackrel{\mathcal{D}}{=}& \{\gamma^{-1}L(\gamma^2t,\tilde{W}(\gamma ^\alpha x));x\in\mathbb{R},t\geq0\} . \end{eqnarray*} Substitution then yields \begin{eqnarray*} \biggl\{\int_{\mathbb{R}}L(t,\tilde{W}(x))\,\mathrm{d}x;t\geq0 \biggr\} & \stackrel{\mathcal{D}}{=}& \biggl\{\gamma^{-1}\int_{\mathbb{R}}L(\gamma ^2t,\tilde{W}(\gamma^\alpha x))\,\mathrm{d}x;t\geq0 \biggr\} \\ & \stackrel{\mathcal{D}}{=}& \biggl\{\gamma^{-1-\alpha}\int_{\mathbb {R}}L(\gamma^2t,\tilde{W}(x))\,\mathrm{d}x;t\geq0 \biggr\} . \end{eqnarray*} By definition, this means that \[ \{\tilde{V}_\ast(t);t\geq0\} \stackrel{\mathcal{D}}{=} \{\gamma^{-1-\alpha}\tilde{V}_\ast(\gamma^2t);t\geq0\} . \] We define $ \ell_\ast$ to be the image measure of the Lebesgue measure $ \ell$ with respect $ \tilde{V}_\ast$. The previous considerations imply that \[ \ell_\ast(\mathrm{d}t)\stackrel{\mathcal{D}}{=}\gamma^2\ell_\ast(\gamma^{-1-\alpha}\,\mathrm{d}t) . \] This identity implies that no $ \tau>0 $ satisfies $ \ell_\ast(\{ \tau\})>0 $ with a positive probability with respect to $ \mathbb{P} \times\tilde{\mathbb{P} } $. To a critical value $ \tau$ corresponds an interval where $ t\mapsto \tilde{V}_\ast$ is constant, which implies that $ \ell_\ast(\{\tau\})>0 $. For a particular point $ \tau>0 $, this cannot happen with positive probability. This finishes the proof of the statement. \end{pf} \begin{proposition} \label{OkTimeLokTimeConvProp} For all $ \tau\geq0 $, the sequence of functions $ x\mapsto L(\tilde {V}_n^{-1}(\tau),\tilde{S}_n(x+1/n)) $ converges toward the function $ x\mapsto L(\tilde{V}_\ast^{-1}(\tau ),\tilde{W}(x)) $ in the $ J_1 $-topology $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely. \end{proposition} \begin{pf} It is known that $ \tilde{S}_n $ converges toward $ \tilde{W} $ in the $ J_1 $-topology almost surely with respect to $ \tilde{\mathbb{P} } $. Moreover, by Lemmas \ref{InversTimeChangeConvLem} and \ref{RegValLem1}, for all $ \tau\geq0 $, the sequence $ \tilde{V}_n^{-1}(\tau) $ converges toward $ \tilde {V}_\ast^{-1}(\tau) $ almost surely with respect to $ \mathbb{P} \times\tilde{\mathbb{P} } $. The proposition follows since it is well known that $ (t,x)\mapsto L(t,x) $ is jointly continuous $ \mathbb{P} $-almost surely; see Boylan (\citeyear{Boy1964}) or Getoor and Kesten (\citeyear{GetKes1972}). 
\end{pf} \begin{lemma} \label{LokalNullMenge} For all $ k\in\mathbb{N} $, $ \theta_1,\ldots,\theta_k\in\mathbb{R} $ and all $ \tau_1,\ldots,\tau_k\geq0 $, the set \[ \mathcal{C}:= \Biggl\{c>0\dvtx\ell\Biggl(x\in\mathbb{R}; \Biggl|\sum_{i=1}^k\theta_iL(\tilde{V}_\ast ^{-1}(\tau_i),\tilde{W}(x)) \Biggr|=c \Biggr)>0 \Biggr\} \] is countable $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely, where $ \ell$ denotes the Lebesgue measure on $ \mathbb{R} $. \end{lemma} \begin{pf} It is well known that $ x\mapsto\tilde{W}(x) $ is strictly increasing $ \tilde{\mathbb{P} } $-almost surely. For $ c>0 $, we define the level-sets \[ \mathcal{N}_c:= \Biggl\{w\in\mathbb{R}; \Biggl|\sum_{i=1}^k\theta_i L(\tilde{V}_\ast^{-1}(\tau_i),w) \Biggr|=c \Biggr\} . \] Fix a strictly increasing path $ f\dvtx x\mapsto\tilde{W}(x) $ and assume that there exist an uncountable number of $ c>0 $ with the property that $ \ell(f^{-1}(\mathcal{N}_c))>0 $. For $ c\neq c' $, the sets $ f^{-1}(\mathcal{N}_c) $ and $ f^{-1}(\mathcal{N}_{c'}) $ are disjoint. We would obtain an uncountable number of disjoint sets with positive Lebesgue measure. This is, of course, not possible. \end{pf} \begin{proposition} \label{CardTowardMeasureProp} For all $ k\in\mathbb{N} $, $ \theta_1,\ldots,\theta_k\in\mathbb {R} $ and all $ \tau_1,\ldots,\tau_k\geq0 $, one has $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely that \begin{eqnarray*} &&\frac{1}{n}\operatorname{card} \Biggl\{x\in\mathbb{Z}\dvtx n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau _i,\{x/n\}) \Biggr|>c \Biggr\}\\ &&\quad\longrightarrow \ell\Biggl(x\in\mathbb{R}\dvtx \Biggl|\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau _i,x) \Biggr|>c \Biggr) \qquad \mbox{as } n\rightarrow\infty \end{eqnarray*} for all but a countable number of $ c>0 $. \end{proposition} \begin{pf} We can find a $ K>0 $ such that $ \{y\in\mathbb{R}\dvtx L(\tau_i,y)\neq0 \ {\rm for\ all}\ i=1,\ldots,k \} $ is a subset of the interval $ (\tilde{W}(-K),\tilde{W}(K)) $. By Propositions \ref {OkTimePro}, \ref{OkTimeLokTimeConvProp} and Corollary \ref{LokTimeKor1}, the sequence \begin{eqnarray*} \tilde{A}_n(x)&:=&n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{ x\}) \Biggr|\\ &\hspace*{3pt}=& \Biggl|\sum_{i=1}^k\theta_iL\bigl(\tilde{V}_n^{-1}(\tau_i),\tilde {S}_n(x-1/n)\bigr) \Biggr| \end{eqnarray*} converges $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely in the $ J_1$-topology toward \[ \tilde{A}_\ast(x):= \Biggl|\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau _i,x) \Biggr| = \Biggl|\sum_{i=1}^k\theta_iL(\tilde{V}_\ast^{-1}(\tau_i),\tilde {W}(x)) \Biggr| . \] There then exists a sequence of continuous increasing maps $ \lambda _n\dvtx[-K,K]\rightarrow[-K,K] $ such that \[ \sup_{x\in[-K,K]} |\tilde{A}_\ast(x)-\tilde{A}_n\circ\lambda _n(x) |\longrightarrow0 \qquad \mbox{as } n\rightarrow\infty \] and such that each $ \lambda_n $ is Lipschitz continuous and satisfies \[ \operatorname{esssup}\limits_{x\in[-K,K]} |\lambda_n'(x)-1 |\longrightarrow0 . \] We should emphasize that the derivative of the function $ \lambda_n $ may not exist everywhere. However, those points where the derivative does not exist form a zero set since $ \lambda_n $ is an absolutely continuous Lipschitz function. 
We note that for suitably large $ n\in\mathbb{N}$, one has \begin{eqnarray*} &&\frac{1}{n}\operatorname{card} \Biggl\{x\in\mathbb{R}; \Biggl|\sum_{i=1}^k\theta_iL\bigl(\tilde{V}_n^{-1}(\tau_i),\tilde {S}_n(x-1/n)\bigr) \Biggr|>c \Biggr\}\\ &&\quad=\ell\bigl(x\in[-K,K];\tilde{A}_n(x)>c \bigr)=\int_{-K}^K\mathbh{1}_{(c,\infty)}(\tilde{A}_n(x))\,\mathrm{d}x . \end{eqnarray*} It then follows that \begin{eqnarray*} && \frac{1}{n}\operatorname{card} \Biggl\{x\in[-K,K]; n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{x\}) \Biggr|>c \Biggr\} -\int_{-K}^K\mathbh{1}_{(c,\infty)}(\tilde{A}_n(\lambda _n(x)))\,\mathrm{d}x\\ &&\quad= \int_{-K}^K\mathbh{1}_{(c,-\infty)}(\tilde{A}_n(x))\,\mathrm{d}x \biggl(1-\frac{1}{\lambda_n'(\lambda_n^{-1}(x))} \biggr)\,\mathrm{d}x +\mathrm{O} \Bigl(\sup_{x\in[-K,K]}|\lambda_n(x)-x| \Bigr). \end{eqnarray*} By the assumptions on the sequence $ \{\lambda_n;n\in\mathbb{N}\} $, the previous difference converges toward zero. Furthermore, \[ \int_{-K}^K\mathbh{1}_{(c,\infty)}(\tilde{A}_n(\lambda _n(x)))\,\mathrm{d}x\longrightarrow \int_{-K}^K\mathbh{1}_{(c,\infty)}(\tilde{A}_\ast (x))\,\mathrm{d}x\qquad \mbox{as } n\rightarrow\infty \] whenever the set $ \{x\in[-K,K];\tilde{A}_\ast(s)=c\} $ is a zero set with respect to the Lebesgue measure $ \ell$ on $ \mathbb{R} $. Since this was proven in Lemma \ref {LokalNullMenge}, the statement of the proposition follows. \end{pf} Subsequently, we will make use of the following notation: \[ A_n^+:= \Biggl\{x\in\mathbb{Z}\dvtx\sum_{i=1}^k\theta_i\tilde{\Gamma }_n(\tau_i,\{x/n\})>0 \Biggr\} ,\qquad A_n^-:= \Biggl\{x\in\mathbb{Z}\dvtx\sum_{i=1}^k\theta_i\tilde{\Gamma }_n(\tau_i,\{x/n\})<0 \Biggr\} \] and \[ A^+:= \Biggl\{x\in\mathbb{R}\dvtx\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau _i,x)>0 \Biggr\} ,\qquad A^-:= \Biggl\{x\in\mathbb{R}\dvtx\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau _i,x)<0 \Biggr\} . \] Later, we will need the following version of Proposition \ref {CardTowardMeasureProp}. \begin{proposition} \label{SignedCardTowardMeasureProp} For all $ k\in\mathbb{N} $, $ \theta_1,\ldots,\theta_k\in\mathbb {R} $ and all $ \tau_1,\ldots,\tau_k\geq0 $, one has $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely that \[ \frac{1}{n}\operatorname{card} \Biggl\{x\in\mathbb{Z}\cap A_n^\pm\dvtx n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{x/n\}) \Biggr|>c \Biggr\} \longrightarrow \ell\Biggl(x\in\mathbb{R}\cap A^\pm\dvtx \Biggl|\sum_{i=1}^k\theta_i\tilde {L}_\ast(\tau_i,x) \Biggr|>c \Biggr) \] for all but a countable number of $ c>0 $. \end{proposition} \begin{pf} The proof uses essentially the same arguments as the proof of Proposition \ref{CardTowardMeasureProp}. \end{pf} \begin{remark*} With the same proof as for Proposition \ref {CardTowardMeasureProp}, we can show that $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely \[ \frac{1}{n}\operatorname{card} \bigl\{x\in\mathbb{Z}\dvtx n^2\tilde{\Gamma}_n^2(\tau _i,\{x/n\})>c \bigr\}\longrightarrow \ell\bigl(x\in\mathbb{R}\dvtx\tilde{L}_\ast^2(\tau_i,x)>c \bigr) \qquad \mbox{as } n\rightarrow\infty \] for all but a countable number of $ c>0 $. 
\end{remark*} \subsection{A useful lemma on integrated powers of local time} \begin{lemma} \label{PrinceLem} For $ \tau_1,\ldots,\tau_k\geq0 $ and $ \theta_1,\ldots,\theta _k\in\mathbb{R} $, the two sequences of random variables \begin{eqnarray*} &&n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl|\sum_{i=1}^k\theta_i\tilde {\Gamma}_n(\tau_i,\{x/n\}) \Biggr|^\beta\quad \mbox{and }\\ && n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl( \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{x/n\}) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{x/n\}) \Biggr) \Biggr) \end{eqnarray*} converge $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely toward the respective random variables \[ \int_{-\infty}^\infty\Biggl|\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau _i,x) \Biggr|^\beta \,\mathrm{d}x \quad \mbox{and}\quad \int_{-\infty}^\infty\Biggl( \Biggl|\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau _i,x) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau_i,x) \Biggr) \Biggr)\,\mathrm{d}x . \] \end{lemma} \begin{pf} We use the layer cake representation of the integrals (see Lieb and Loss (\citeyear{LieLos2001})) to write \[ \sum_{x\in\mathbb{Z}} \Biggl|\sum_{i=1}^k\theta_in\tilde{\Gamma }_n(\tau_i,\{x/n\}) \Biggr|^\beta= \beta\int_0^\infty c^{\beta-1}\operatorname{card} \Biggl\{x\in\mathbb{Z}\dvtx n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau _i,\{x/n\}) \Biggr|>c \Biggr\}\,\mathrm{d}c \] and \[ \int_{-\infty}^\infty\Biggl|\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau _i,x) \Biggr|^\beta \,\mathrm{d}x =\beta\int_0^\infty c^{\beta-1}\ell\Biggl(x\in\mathbb{R}\dvtx \Biggl|\sum_{i=1}^k\theta_i\tilde{L}_\ast(\tau_i,x) \Biggr|>c \Biggr)\,\mathrm{d}c . \] We note that the convergence of $ \tilde{V}_n^{-1}(\tau_i) $ toward $ \tilde{V}_\ast^{-1}(\tau_i) $ and the fact that $ t\mapsto L(t,y) $ is increasing for every $ y\in \mathbb{R} $ imply that there exists an $ n_0\in\mathbb{N} $ with \[ L(\tilde{V}_n^{-1}(\tau_i),y)\leq L\bigl(\tilde{V}_\ast^{-1}(\tau_i)+1,y\bigr)\qquad \mbox{for all } y\in\mathbb{R}, 1\leq i\leq k, n\geq n_0 . \] Moreover, for all $ i\in\{1,\ldots,k\} $, the functions $ y\mapsto L(\tilde{V}_\ast^{-1}(\tau_i)+1,y) $ are continuous and their supports are contained in $ [-K,K] $ for a suitable $ K>0 $. Hence, there exists a $ C>0 $ such that for $ n\geq n_0 $, one has \begin{eqnarray*} n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{x/n\}) \Biggr| &\leq&\Biggl|\sum_{i=1}^k\theta_iL\bigl(\tilde{V}_n^{-1}(\tau_i),\tilde {S}_n\bigl((x-1)/n\bigr)\bigr) \Biggr|\\ &\leq&\sum_{i=1}^k\theta_i\sup_{y\in\mathbb{R}}L\bigl(\tilde{V}_\ast ^{-1}(\tau_i)+1,y\bigr)\leq C. \end{eqnarray*} This implies that all of the functions \[ c\mapsto\operatorname{card} \Biggl\{x\in\mathbb{Z}\dvtx n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{x/n\}) \Biggr|>c \Biggr\} \] have support contained in $ [0,C]$. Moreover, for all $ c>0 $, we have \begin{eqnarray*} \operatorname{card} \Biggl\{x\in\mathbb{Z}\dvtx n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau _i,\{x/n\}) \Biggr|>c \Biggr\} \leq\operatorname{card} \bigl\{x\in\mathbb{Z}\dvtx-K\leq\tilde{S}_n\bigl((x-1)/n\bigr)\leq K \bigr\}. \end{eqnarray*} Since \[ \ell\bigl(x;\tilde{W}(x)\in\{-K,K\} \bigr)=0 \] and since $ \tilde{S}_n $ converges toward $ \tilde{W} $ with respect to the Skorohod metric, we have that \[ \frac{1}{n}\operatorname{card} \bigl\{x\in\mathbb{Z}\dvtx-K\leq\tilde {S}_n\bigl((x-1)/n\bigr)\leq K \bigr\} \longrightarrow\ell\bigl(x\in\mathbb{R}\dvtx-K\leq\tilde{W}(x)\leq K \bigr). 
\] This implies that there exists an $ R>0 $ such that for all $ n\in \mathbb{N} $ and all $ c>0 $, we have \[ \frac{1}{n}\operatorname{card} \Biggl\{x\in\mathbb{Z}\dvtx n \Biggl|\sum_{i=1}^k\theta_i\tilde{\Gamma}_n(\tau_i,\{x/n\}) \Biggr|>c \Biggr\} \leq R. \] The first statement of the lemma then follows from dominated convergence and Proposition \ref{CardTowardMeasureProp}.\\[0.5mm] The second statement is proved in the same way by separating the positive and the negative parts of the integrals and using the statements from Proposition \ref {SignedCardTowardMeasureProp} instead of Proposition \ref{CardTowardMeasureProp}. \end{pf} \begin{proposition} \label{PrinceProp} For $ \tau_1,\ldots,\tau_k\geq0 $ and $ \theta_1,\ldots,\theta _k\in\mathbb{R} $, the two sequences of random variables \begin{eqnarray*} &&n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl|\sum_{i=1}^k\theta_i\Gamma _n(\tau_i,\{x/n\}) \Biggr|^\beta\quad \mbox{and}\\ &&n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl( \Biggl|\sum_{i=1}^k\theta_i\Gamma _n(\tau_i,\{x/n\}) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_i\Gamma_n(\tau_i,\{x/n\}) \Biggr) \Biggr) \end{eqnarray*} converge jointly in distribution toward the respective random variables \[ \int_{-\infty}^\infty\Biggl|\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr|^\beta \,\mathrm{d}x \quad \mbox{and}\quad \int_{-\infty}^\infty\Biggl( \Biggl|\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr) \Biggr)\,\mathrm{d}x . \] \end{proposition} \begin{pf} We know that \[ \{L_\ast(t,x);t\geq0,x\in\mathbb{R} \} \stackrel{\mathcal{D}}{=} \{\tilde{L}_\ast(t,x);t\geq0,x\in\mathbb{R} \} \] and \[ \{S_n^{-1}(B_n(V_n^{-1}(t)));t\geq0 \} \stackrel{\mathcal{D}}{=} \{\tilde{S}_n^{-1}(B(\tilde{V}_n^{-1}(t)));t\geq0 \} . \] Therefore, by Lemma \ref{PrinceLem}, the sequences of random variables \begin{eqnarray*} &&n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl|\sum_{i=1}^k\theta_i\hat {\Gamma}_n(\tau_i,\{x/n\}) \Biggr|^\beta\quad \mbox{and}\\ &&n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl( \Biggl|\sum_{i=1}^k\theta_i\hat {\Gamma}_n(\tau_i,\{x/n\}) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_i\hat{\Gamma}_n(\tau_i,\{x/n\}) \Biggr) \Biggr) \end{eqnarray*} converge jointly in distribution toward the respective random variables \[ \int_{-\infty}^\infty\Biggl|\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr|^\beta \,\mathrm{d}x \quad \mbox{and}\quad \int_{-\infty}^\infty\Biggl( \Biggl|\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr) \Biggr)\,\mathrm{d}x . \] Moreover, $ S_n^{-1}(S_n(x/n))=(x+1)/n $ for all $ x\in\mathbb{Z} $. This implies that \[ \hat{X}_n(\tau)\stackrel{\mathcal{D}}{=}S_n^{-1}(S_n(X_n(\tau )))=X_n(\tau)+1/n . \] Hence, we have $ \hat{\Gamma}_n(\tau,\{x/n\})\stackrel{\mathcal{D}}{=}\Gamma_n(\tau,\{(x+1)/n\}) $ for all $ x\in\mathbb{Z} $. 
Therefore, \[ n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl|\sum_{i=1}^k\theta_i\hat {\Gamma}_n(\tau_i,\{x/n\}) \Biggr|^\beta\stackrel{\mathcal{D}}{=} n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl|\sum_{i=1}^k\theta_i\Gamma _n(\tau_i,\{x/n\}) \Biggr|^\beta \] and \begin{eqnarray*} && n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl( \Biggl|\sum_{i=1}^k\theta_i\hat {\Gamma}_n(\tau_i,\{x/n\}) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_i\hat{\Gamma}_n(\tau_i,\{x/n\}) \Biggr) \Biggr) \\ &&\quad\stackrel{\mathcal{D}}{=} n^{\beta-1}\sum_{x\in\mathbb{Z}} \Biggl( \Biggl|\sum _{i=1}^k\theta_i\Gamma_n(\tau_i,\{x/n\}) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_i\Gamma_n(\tau_i,\{x/n\}) \Biggr) \Biggr) . \end{eqnarray*} This proves the proposition. \end{pf} For the sequel, we define the occupation time \[ \Gamma(t,A):=\int_0^t\mathbh{1}_{A}(X(s))\,\mathrm{d}s \] of the process $ X $ in the measurable set $ A\subset\mathbb{R} $. Consequently, we have \[ \Xi(t)=\sum_x\Gamma(t,\{x\})\xi(x) . \] We will use this fact and the following corollary in the proofs of the next section. \begin{corollary} \label{PrinceKor} For $ \tau_1,\ldots,\tau_k\geq0 $ and $ \theta_1,\ldots,\theta _k\in\mathbb{R} $, the two sequences of random variables \begin{eqnarray*} &&n^{-1-{\beta/\alpha}}\sum_{x\in\mathbb{Z}} \Biggl|\sum _{i=1}^k\theta_i\Gamma(k_n\tau_i,\{x\}) \Biggr|^\beta\quad \mbox{and}\\ &&n^{-1-{\beta/\alpha}}\sum_{x\in\mathbb{Z}} \Biggl( \Biggl|\sum _{i=1}^k\theta_i\Gamma(k_n\tau_i,\{x\}) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_i\Gamma(k_n\tau_i,\{x\}) \Biggr) \Biggr) \end{eqnarray*} converge jointly in distribution toward the respective random variables \begin{eqnarray*} &&\int_{-\infty}^\infty\Biggl|\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr|^\beta \,\mathrm{d}x \quad \mbox{and}\\ &&\int_{-\infty}^\infty\Biggl( \Biggl|\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr|^\beta \operatorname{sgn} \Biggl(\sum_{i=1}^k\theta_iL_\ast(\tau_i,x) \Biggr) \Biggr)\,\mathrm{d}x . \end{eqnarray*} \end{corollary} \begin{pf} If we let $ k_n:=n^{(1+\alpha)/\alpha} $, then for all $ n\in \mathbb{N} $ and $ x\in\mathbb{Z} $, we have that \[ \Gamma_n(\tau,x/n)=\int_0^\tau\mathbh{1}_{\{x/n\}}(X_n(t))\,\mathrm{d}t =k_n^{-1}\int_0^{k_n\tau} \mathbh{1}_{\{x\}}(X(t))\,\mathrm{d}t = n^{-(\alpha+1)/\alpha}\Gamma(k_n\tau,\{x\}) . \] The result then follows from Proposition \ref{PrinceProp}. \end{pf} \section{The finite-dimensional distributions} In this section, we prove the convergence of the finite-dimensional distributions of $ \Xi_n $ toward the finite-dimensional distributions of $ \Xi_\ast$. In order to do so, we first compute the exact expression of the finite-dimensional distributions of $ \Xi_\ast$. The proofs in this section follow the ideas given in Kesten and Spitzer (\citeyear{KesSpi1979}). In the \hyperref[sec1]{Introduction}, we defined \[ \Xi_\ast(\tau):=\int_0^\infty L_\ast(\tau,x-)\,\mathrm{d}Z_+(x)+\int _0^\infty L_\ast(\tau,-(x-))\,\mathrm{d}Z_-(x) , \] where $ \{Z_+(t);t\geq0\} $ and $ \{Z_-(t);t\geq0\} $ are independent copies of the $ \beta$-stable L\'{e}vy process, which can be associated with the stable distribution $ \vartheta_\beta$ with characteristic function given by \[ \psi(\theta)=\exp\bigl(-|\theta|^\beta\bigl(A_1+\mathrm{i}A_2\operatorname{sgn}(\theta)\bigr) \bigr) . 
\] \begin{lemma} \label{FinitDistriLem} For $ t_1,\ldots,t_k\geq0 $ and $ \theta_1,\ldots,\theta_k\in \mathbb{R} $, we have that \begin{eqnarray*} &&\mathbb{E} \Biggl[\exp\Biggl(\mathrm{i}\sum_{j=1}^k\theta_j\Xi_\ast(t_j) \Biggr) \Biggr]\\ &&\quad= \mathbb{E} \Biggl[\exp\Biggl(-A_1\int_{-\infty}^\infty\Biggl|\sum_{j=1}^k\theta_jL_\ast (t_j,x) \Biggr|^\beta \,\mathrm{d}x \Biggr)\\ &&\qquad\hphantom{\mathbb{E} \Biggl[}{}\times\exp\Biggl(-\mathrm{i}A_2\int_{-\infty}^\infty\Biggl|\sum_{j=1}^k\theta_jL_\ast (t_j,x) \Biggr|^\beta \,\mathrm{d}x \operatorname{sgn} \Biggl(\sum_{j=1}^k\theta_jL_\ast(t_j,x) \Biggr) \Biggr) \Biggr] . \end{eqnarray*} \end{lemma} \begin{pf} The proof is similar to that given in Kesten and Spitzer (\citeyear{KesSpi1979}) (see page 16ff). Let $ \nu$ be the L\'{e}vy measure of $ Z_+ $. One can truncate the L\'{e}vy measure as follows: \[ \nu_1(B)=\nu(B\cap\{y\in\mathbb{R};|y|\leq1\}) \quad \mbox{and} \quad \nu_2(B)=\nu(B\cap\{y\in\mathbb{R};|y|>1\}). \] Let $ M(t) $ and $ A(t) $ be independent L\'{e}vy processes, with respective characteristic functions \[ \mathbb{E} \bigl[\mathrm{e}^{\mathrm{i}\theta M(t)} \bigr] =\exp\biggl(t\int_{|y|\leq1} (\mathrm{e}^{\mathrm{i}\theta y}-1-\mathrm{i}\theta y )\nu_1(\mathrm{d}y) \biggr) \] and \[ \mathbb{E} \bigl[\mathrm{e}^{\mathrm{i}\theta A(t)} \bigr]=\exp\biggl(t\int_{|y|\leq1} (\mathrm{e}^{\mathrm{i}\theta y}-1 )\nu_2(\mathrm{d}y) \biggr), \] such that \[ Z^+(t)=M(t)+A(t)+Dt , \] where $ D $ is a suitable real constant. This decomposition exists and is called the L\'{e}vy--It\^{o} representation of $ Z^+ $. The advantage of this representation is that $ M(t) $ is a martingale and has all moments and $ A(t) $ is a process with bounded variation. Since the process $ \{L_\ast (t,x-);x\geq0\} $ is left-continuous and independent with respect to the filtration $ \mathcal{F}_t $ generated by $ Z^+(t) $, the process $ \{L_\ast(t,x-);x\geq0\} $ is $ \mathcal{F}_t $-predictable. Moreover, $ \{L_\ast(t,x-);x\geq0\} $ has bounded support $ \mathbb{P} $-almost surely. Therefore, we can find a suitable sequence of partitions $ \{x_l^{(n)};l\in\mathbb{N}\}$, $n\in\mathbb{N} $, with $ x^{(n)}_l<x^{(n)}_{l+1} $ for all $ l,n\in\mathbb{N} $ satisfying \[ \lim_{l\rightarrow\infty} x_l^{(n)}=\infty\quad \mbox{and}\quad \lim_{n\rightarrow\infty}\max_{l\in\mathbb{N}} \bigl(x_{l+1}^{(n)}-x_l^{(n)} \bigr)=0 \] such that \[ \int_0^\infty L_\ast(t,x-)\,\mathrm{d}M(x)=\lim_{n\rightarrow\infty} \sum_{l=1}^\infty L_\ast\bigl(t,x_l^{(n)}-\bigr) \bigl(M\bigl(x_{l+1}^{(n)}\bigr)-M\bigl(x_l^{(n)}\bigr)\bigr) \] with probability 1 (see Meyer (\citeyear{Mey1976}), Chapter II, Section 23). Moreover, we can also assume that \[ \int_0^\infty L_\ast(t,x-)\,\mathrm{d}A(x)=\lim_{n\rightarrow\infty} \sum_{l=1}^\infty L_\ast\bigl(t,x_l^{(n)}-\bigr) \bigl(A\bigl(x_{l+1}^{(n)}\bigr)-A\bigl(x_l^{(n)}\bigr) \bigr) \] with probability 1. From those considerations, it follows that there exists a sequence of partitions $ (x_l^{(n)})_{l\in\mathbb{N}} $ such that \[ \int_0^\infty L_\ast(t,x-)\,\mathrm{d}Z_+(x)=\lim_{n\rightarrow\infty} \sum_{l=1}^\infty L_\ast\bigl(t,x_l^{(n)}-\bigr) \bigl(Z_+\bigl(x_{l+1}^{(n)}\bigr)-Z_+\bigl(x_l^{(n)}\bigr) \bigr) \] with probability 1. 
Since the increments $ D^{(n)}_l:=Z_+(x_{l+1}^{(n)})-Z_+(x_l^{(n)}),\ l\in\mathbb{N}, $ are independent and have characteristic function \[ \mathbb{E} \bigl[\mathrm{e}^{\mathrm{i}\theta D^{(n)}_l} \bigr] =\exp\bigl(-\bigl(x_{l+1}^{(n)}-x_l^{(n)}\bigr)|\theta|^\beta\bigl(A_1+\mathrm{i}A_2\cdot\operatorname{sgn}(\theta)\bigr) \bigr) \] by dominated convergence, we have \begin{eqnarray*} && \mathbb{E} \Biggl[\exp\Biggl(\mathrm{i}\sum_{j=1}^k\theta_j\int_0^\infty L_\ast (t_j,x-)\,\mathrm{d}Z_+(x) \Biggr) \Biggr]\\ &&\quad=\lim_{n\rightarrow\infty} \mathbb{E} \Biggl[\exp\Biggl( \sum_{l=1}^\infty\sum_{j=1}^k\mathrm{i}\theta_jL_\ast\bigl(t_j,x_l^{(n)}-\bigr) \bigl(Z_+\bigl(x_{l+1}^{(n)}\bigr)-Z_+\bigl(x_l^{(n)}\bigr) \bigr) \Biggr) \Biggr]\\ &&\quad=\lim_{n\rightarrow\infty} \mathbb{E} \Biggl[\exp\Biggl(-\sum_{l=1}^\infty\bigl(x_{l+1}^{(n)}-x_l^{(n)} \bigr) \Biggl|\sum_{j=1}^k\theta_jL_\ast\bigl(t_j,x_l^{(n)}-\bigr) \Biggr|^\beta\\ &&\qquad\hphantom{\lim_{n\rightarrow\infty} \mathbb{E} \Biggl[\exp\Biggl(-\sum_{l=1}^\infty} {}\times\Biggl(A_1+\mathrm{i}A_2\cdot\operatorname{sgn} \Biggl(\sum_{j=1}^k\theta_jL_\ast\bigl(t_j,x_l^{(n)}-\bigr) \Biggr) \Biggr) \Biggr) \Biggr].\\ &&\quad= \mathbb{E} \Biggl[\exp\Biggl(-A_1\int_0^\infty\Biggl| \sum_{j=1}^k\theta_jL_\ast\bigl(t_j,x_l^{(n)}\bigr) \Biggr|^\beta \,\mathrm{d}x\\ &&\qquad\hphantom{\mathbb{E} \Biggl[\exp\Biggl(} {}-\mathrm{i}A_2\int_0^\infty\Biggl|\sum_{j=1}^k\theta_jL_\ast\bigl(t_j,x_l^{(n)}\bigr) \Biggl|^\beta \operatorname{sgn} \Biggl(\sum_{j=1}^k\theta_jL_\ast\bigl(t_j,x_l^{(n)}\bigr) \Biggr)\,\mathrm{d}x \Biggr) \Biggr]. \end{eqnarray*} For $ Z_- $, one can proceed with similar arguments. \end{pf} \begin{proposition} \label{FinitDistriConvProp} The finite-dimensional distributions of the processes $ \{\Xi _n(t);t\geq0\} $ converge toward the finite-dimensional distributions of the process $ \{\Xi_\ast(t);t\geq0\} $. \end{proposition} \begin{pf} As in the previous sections, we define $ k_n:=n^{(1+\alpha)/\alpha} $ and $ \kappa:=\frac{1}{\alpha}+\frac{1}{\beta} $. We already saw that we can use the occupation time $ \{\Gamma(t,\{x\} );t\geq0,x\in\mathbb{R}\} $ of the process $ \{X(t);t\geq0\} $ to represent the process $ \{\Xi(t);t\geq 0\} $ as follows: \[ \Xi(t)=\sum_{x\in\mathbb{Z}}\Gamma(t,\{x\})\xi(x) . \] It follows that \[ \Xi_n(t)=n^{-\kappa}\Xi(k_nt)=n^{-\kappa}\sum_{x\in\mathbb {Z}}\Gamma(k_nt,\{x\})\xi(x) . \] Let $ \varphi(\theta):=\mathbb{E} [\exp(\mathrm{i}\theta\xi(1)) ] $ be the characteristic function of the scenery random variable $ \xi(1) $. It then follows from the above representation that \[ \sum_{j=1}^k\theta_j\Xi_n(t_j) =n^{-\kappa}\sum_{x\in\mathbb{Z}}\sum_{j=1}^k\theta_j\Gamma (k_nt_j,\{x\})\xi(x) \] and \begin{eqnarray*} R_n:= \mathbb{E} \Biggl[\exp\Biggl(\mathrm{i}\sum_{j=1}^k\theta_j\Xi_n(t_j) \Biggr) \Biggr] =\mathbb{E} \Biggl[\prod_{x\in\mathbb{Z}}\varphi\Biggl(n^{-\kappa}\sum _{j=1}^k\theta _j\Gamma(k_nt_j,\{x\}) \Biggr) \Biggr]. \end{eqnarray*} The random scenery $ \{\xi(z);z\in\mathbb{Z}\} $ is in the domain of attraction of a $ \beta$-stable distribution with characteristic function given by \[ \psi(\theta)=\exp\bigl(-|\theta|^\beta\bigl(A_1+\mathrm{i}A_2\cdot\operatorname{sgn}(\theta) \bigr)\bigr) . \] This implies that \[ 1-\varphi(\theta)\sim|\theta|^\beta \bigl(A_1+\mathrm{i}A_2\cdot\operatorname{sgn}(\theta)\bigr)\qquad \mbox{as } \theta\rightarrow0 . \] Thus \begin{eqnarray*} \log(\varphi(\theta)) \sim\log(\psi(\theta)) \qquad\mbox{as } \theta\rightarrow0 . 
\end{eqnarray*} Therefore, for $ |\theta|\leq1 $, we have that \begin{eqnarray*} \biggl|\frac{\log(\varphi(\theta))-\log(\psi(\theta))}{\log(\psi (\theta))} \biggr|=\mathrm{o}(\theta) . \end{eqnarray*} If we define \[ \varphi_{x,n}:=\varphi\Biggl( n^{-\kappa}\sum_{j=1}^k\theta_j\Gamma (k_nt_j,\{x\}) \Biggr) \] and \[ \psi_{x,n}:=\exp\Biggl(-n^{-\kappa\beta} \Biggl|\sum_{j=1}^k\theta_j\Gamma (k_nt_j,\{x\}) \Biggr|^\beta \Biggl(A_1+\mathrm{i}A_2\cdot\operatorname{sgn} \Biggl(\sum_{j=1}^k\theta_j\Gamma(k_nt_j,\{x\}) \Biggr) \Biggr) \Biggr) \] for all $ x\in\mathbb{Z} $, one has \begin{eqnarray*} \biggl|\frac{\log(\varphi_{x,n})-\log(\psi_{x,n})}{\log(\psi_{x,n})} \biggr| =\mathrm{o} \Biggl(n^{-\kappa}\sum_{j=1}^k\theta_j\Gamma(k_nt_j,\{x\}) \Biggr) . \end{eqnarray*} This implies that \begin{eqnarray*} \biggl|\log\biggl(\prod_{x\in\mathbb{Z}}\varphi_{x,n} \biggr) -\log\biggl(\prod_{x\in\mathbb{Z}}\psi_{x,n} \biggr) \biggr| &=& \biggl|\sum_{x\in\mathbb{Z}}\log(\varphi_{x,n})-\sum_{x\in\mathbb {Z}}\log(\psi_{x,n}) \biggr| \\ &\leq&\sum_{x\in\mathbb{Z}}\log(\psi_{x,n}) \mathrm{o} \Biggl(n^{-\kappa}\sum_{j=1}^k\theta_j\Gamma(k_nt_j,\{x\}) \Biggr). \end{eqnarray*} By Corollary \ref{PrinceKor}, the right-hand side of the previous inequality converges toward zero in probability. The continuity of the logarithm then implies that \begin{eqnarray*} \biggl|\prod_{x\in\mathbb{Z}}\varphi_{x,n}-\prod_{x\in\mathbb{Z}}\psi _{x,n} \biggr|\longrightarrow0 \qquad \mbox{in probability as } n\rightarrow\infty. \end{eqnarray*} We use this and dominated convergence to prove that the limit of the sequence $ \{R_n;n\in\mathbb{N}\} $ exists and is equal to the limit of the sequence \begin{eqnarray*} Q_n:=\mathbb{E} \Biggl[\exp\Biggl(-\sum_{x\in\mathbb{Z}}n^{-\kappa\beta} \Biggl| \sum_{j=1}^k\theta_j\Gamma(k_nt_j,\{x\}) \Biggr|^\beta \Biggl(A_1+\mathrm{i}A_2\cdot\operatorname{sgn} \Biggl(\sum_{j=1}^k\theta_j\Gamma(k_nt_j,\{x\}) \Biggr) \Biggr) \Biggr) \Biggr]. \end{eqnarray*} By Corollary \ref{PrinceKor} and Lemma \ref{FinitDistriLem}, the sequence $ \{Q_n;n\in\mathbb{N}\} $ converges toward \begin{eqnarray*} Q_\ast&:=&\mathbb{E} \Biggl[\exp\Biggl(-\int_{-\infty}^\infty\Biggl|\sum _{j=1}^k\theta _jL_\ast(t_j,x) \Biggr|^\beta \Biggl(A_1+\mathrm{i}A_2\cdot\operatorname{sgn} \Biggl(\sum_{j=1}^k\theta_jL_\ast(t_j,x) \Biggr) \Biggr)\,\mathrm{d}x \Biggr) \Biggr]\\ &=&\mathbb{E} \Biggl[\exp\Biggl(\mathrm{i}\sum_{j=1}^k\theta_j\Xi_\ast(t_j) \Biggr) \Biggr]. \end{eqnarray*} As we have seen in Lemma \ref{FinitDistriLem}, $ Q_\ast$ is the characteristic function for the finite-dimensional distributions of $ \{\Xi_\ast(t);t\geq0\} $. This completes the proof of the proposition. \end{pf} \section{The tightness} In this section, we prove that the sequence $ \{\Xi_n(t);t\geq0\} $ is tight. The proof of Theorem \ref{MT} then follows since we have already obtained the convergence of the finite-dimensional distributions in the previous section. The main proof of tightness also follows the ideas given in Kesten and Spitzer (\citeyear{KesSpi1979}). We first need some suitable inequalities for the occupation times of $ X_\ast$. However, the proofs of those inequalities differ from those given in Kesten and Spitzer (\citeyear{KesSpi1979}). 
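Purely as an illustrative aside, and not as part of the argument, the occupation-time quantities appearing below are easy to explore by simulation. The following Python sketch uses a discrete-time simple symmetric random walk as a stand-in for the process $X$ and an i.i.d.\ heavy-tailed scenery as a stand-in for $\xi$ (both are assumptions made only for illustration); it computes the occupation numbers $\Gamma(s,\{x\})$, the functional $\Xi(s)=\sum_x\Gamma(s,\{x\})\xi(x)$ and the second-moment sum controlled in Lemma~\ref{Lem3}.
\begin{verbatim}
# Illustrative Monte Carlo for occupation times of a walk in random scenery.
# A simple symmetric random walk stands in for X and a symmetrized Pareto
# scenery stands in for xi; both choices are assumptions for illustration
# only and are not the model treated in the proofs.
import numpy as np

rng = np.random.default_rng(0)

def occupation_statistics(steps):
    walk = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=steps))))
    sites, counts = np.unique(walk, return_counts=True)   # Gamma(s, {x})
    xi = rng.pareto(1.5, size=sites.size) * rng.choice([-1, 1], size=sites.size)
    Xi = float(np.sum(counts * xi))        # Xi(s) = sum_x Gamma(s,{x}) xi(x)
    m2 = float(np.sum(counts.astype(float) ** 2))  # sum_x Gamma(s,{x})^2
    return Xi, m2

for s in (10**3, 10**4, 10**5):
    Xi, m2 = occupation_statistics(s)
    # for the simple symmetric walk, sum_x Gamma(s,{x})^2 grows like s^{3/2}
    print(s, Xi, m2 / s**1.5)
\end{verbatim}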
\begin{lemma}\label{Lem2} There exists a function $ \epsilon\dvtx\mathbb{R}^+\rightarrow\mathbb {R}^+ $ with the properties $ \epsilon(A)\rightarrow0 $ as $ A\rightarrow\infty$ and \[ \mathbb{P} \bigl(\Gamma(s,\{x\})>0\ \mbox{for\ some}\ x\ \mbox{with}\ |x|>As^{ {\alpha }/({1+\alpha})} \bigr)\leq\epsilon(A) \qquad \mbox{for all } s\geq0. \] \end{lemma} \begin{pf} For a positive real number $ x $, we denote by $ \lceil x\rceil$ the smallest integer which is greater or equal to $ x $. Obviously, for all $ s\geq0 $, we have \begin{eqnarray*} && \mathbb{P} \bigl(\Gamma(s,\{x\})>0 \mbox{ for some } x\mbox{ with } |x|>As^{\alpha/(1+\alpha)} \bigr) \\ &&\quad\leq \mathbb{P} \bigl(|X(r)|> As^{\alpha/(1+\alpha)}\mbox{ for some } r\leq s \bigr) \\ &&\quad\leq \mathbb{P} \bigl(|X(r)|> A \bigl( \bigl\lceil s^{\alpha/(1+\alpha)} \bigr\rceil-1 \bigr)\mbox{ for some } r\leq\bigl\lceil s^{\alpha/(1+\alpha)} \bigr\rceil ^{(1+\alpha)/\alpha} \bigr) \\ &&\quad= \mathbb{P} \bigl( \bigl|X \bigl( \bigl\lceil s^{{\alpha}/({1+\alpha})} \bigr\rceil ^{(1+\alpha)/\alpha}u \bigr) \bigr|> A \bigl\lceil s^{\alpha/(1+\alpha)} \bigr\rceil-A \mbox{ for some } u\leq1 \bigr) \\ &&\quad\leq \mathbb{P} \Bigl(\sup_{r\leq1}\bigl|X_{n(s)}(r)\bigr|> A/2 \Bigr) \qquad \mbox{for } s>1, \end{eqnarray*} with $ n(s):= \lceil s^{\alpha/(1+\alpha)} \rceil\rightarrow \infty$ as $ s\rightarrow\infty$. Since \[ \mathbb{P} \Bigl(\sup_{r\leq1}|X_n(r)|> A/2 \Bigr)\longrightarrow\mathbb{P} \Bigl(\sup_{r\leq 1}|X_\ast(r)|> A/2 \Bigr)\qquad \mbox{as } n\rightarrow\infty, \] we can define \[ \epsilon(A):=\sup_{s\geq0}\mathbb{P} \Bigl(\sup_{r\leq1}\bigl|X_{n(s)}(r)\bigr|> A/2 \Bigr)\qquad \mbox{for all } A>0 . \] This proves the statement of the lemma. \end{pf} \begin{lemma}\label{Lem3} There exists a $ C>0 $ such that for all $ s\geq0 $, one has \[ \sum_{x\in\mathbb{Z}}\mathbb{E} [\Gamma^2(s,\{x\}) ]\sim Cs^{2-{\alpha}/({1+\alpha})} . \] \end{lemma} \begin{pf} For a positive real number $ x $, we denote by $ \lfloor x\rfloor$ its integer part. We know that for $ w(s):= \lfloor s^{\alpha/(\alpha+1)} \rfloor$, one has \begin{eqnarray*} \frac{(w(s))^{2(\alpha+1)/\alpha}}{s^2}\sum_{x\in\mathbb {Z}}\Gamma_{w(s)}^2\bigl(1,\{x/w(s)\}\bigr) =s^{-2}\sum_{x\in\mathbb{Z}}\Gamma^2 \bigl((w(s))^{(\alpha+1)/{\alpha}},\{x\} \bigr) \leq s^{-2}\sum_{x\in\mathbb{Z}}\Gamma^2(s,\{x\}) \end{eqnarray*} and \begin{eqnarray*} s^{-2}\sum_{x\in\mathbb{Z}}\Gamma^2(s,\{x\}) & \leq& s^{-2}\sum_{x\in\mathbb{Z}}\Gamma^2 \bigl(\bigl(w(s)+1\bigr)^{ ({\alpha+1})/{\alpha}},\{x\} \bigr)\\ &=& \frac{(w(s)+1)^{2(\alpha+1)/{\alpha}}}{s^2}\sum_{x\in \mathbb{Z}}\Gamma_{w(s)+1}^2\bigl(1,\bigl\{x/\bigl(w(s)+1\bigr)\bigr\}\bigr) . \end{eqnarray*} Consequently, one has \[ s^{-2}\sum_{x\in\mathbb{Z}}\mathbb{E} [\Gamma^2(s,\{x\}) ] \sim\sum_{x\in\mathbb{Z}}\mathbb{E} \bigl[\Gamma_{w(s)}^2\bigl(1,\{x/w(s)\} \bigr) \bigr] = \sum_{x\in\mathbb{Z}}\mathbb{E} \bigl[\tilde{\Gamma}_{w(s)}^2\bigl(1,\{ x/w(s)\}\bigr) \bigr] . 
\] It follows from the layer cake representation and the remark after the proof of Proposition \ref{SignedCardTowardMeasureProp} that \begin{eqnarray*} w(s)\sum_{x\in\mathbb{Z}}\tilde{\Gamma}_{w(s)}^2\bigl(1,\{x/w(s)\}\bigr) =\frac{1}{w(s)}\int_0^\infty\operatorname{card} \bigl\{x\in\mathbb{Z}\dvtx w^2(s)\tilde{\Gamma}^2_{w(s)}\bigl(1,\{x/w(s)\}\bigr)>c \bigr\}\,\mathrm{d}c \end{eqnarray*} converges $ \mathbb{P} \times\tilde{\mathbb{P} } $-almost surely toward \[ \int_0^\infty\ell\bigl(x\in\mathbb{R}\dvtx\tilde{L}^2(1,x)>c \bigr)\,\mathrm{d}c =\int_{\mathbb{R}}\tilde{L}_\ast^2(1,x)\,\mathrm{d}x. \] Dominated convergence and Fubini's theorem imply that \[ w(s)\sum_{x\in\mathbb{Z}}\mathbb{E} \bigl[\tilde{\Gamma}_{w(s)}^2\bigl(1,\{ x/w(s)\}\bigr) \bigr]\longrightarrow \int_{\mathbb{R}}\mathbb{E} [\tilde{L}_\ast^2(1,x) ]\,\mathrm{d}x\qquad \mbox{as }s\rightarrow\infty. \] Therefore, \[ w(s)s^{-2}\sum_{x\in\mathbb{Z}}\mathbb{E} [\Gamma^2(s,\{x\}) ]\longrightarrow \int_{\mathbb{R}}\mathbb{E} [\tilde{L}_\ast^2(1,x) ]\,\mathrm{d}x\qquad \mbox{as } s\rightarrow\infty. \] This proves the statement of the lemma. \end{pf} \begin{lemma} \label{Lem4} \textup{(1)} For all $ \beta\in(0,2] $ and $ \rho>0 $, there exists a $ C_1>0 $ such that as $ n\rightarrow\infty$, we have \[ \bigl|\mathbb{E} \bigl[\xi(0)\mathbh{1}_{[-\rho,\rho]} (n^{-1/\beta}\xi(0)) \bigr] \bigr|\sim C_1n^{(1-\beta)/\beta} . \] \textup{(2)} For all $ \beta\in(0,2) $ and $ \rho>0 $, there exists a $ C_2>0 $ such that as $ n\rightarrow\infty$, we have \[ \bigl|\mathbb{E} \bigl[\xi^2(0)\mathbh{1}_{[-\rho,\rho]} \bigl(n^{-{1/\beta}}\xi(0)\bigr) \bigr] \bigr|\sim C_2n^{(2-\beta)/{\beta}}. \] \end{lemma} \begin{pf} The random variable $ \xi(0) $ is in the domain of attraction of a $ \beta$-stable random variable with characteristic function given by \[ \psi(\theta)=\exp\bigl(-|\theta|^\beta\bigl(A_1+\mathrm{i}A_2\operatorname{sgn}(\theta)\bigr)\bigr) , \] with $ 0<A_1<\infty$ and $ |A_1^{-1}A_2|\leq\tan(\uppi\beta/2) $. A consequence of this setting is that for $\beta>1 $, we have $ \mathbb{E} [\xi(0)]=0 $. Further, if $ \beta\in(0,2] $, then there exist $ B_1,B_2\geq0 $ such that \[ \lim_{\rho\rightarrow\infty}\rho^\beta\mathbb{P} \bigl(\xi(0)\geq \rho\bigr)= B_1\quad \mbox{and}\quad \lim_{\rho\rightarrow\infty}\rho^\beta\mathbb{P} \bigl(\xi(0)\leq -\rho\bigr)= B_2 . \] For $ \beta=2 $, we have $ B_1=B_2=0$ since the decay of the tail probabilities is exponential in that case. For $ \beta\neq1 $, we then have that \begin{eqnarray*} \bigl|\mathbb{E} \bigl[\xi(0)\mathbh{1}_{[-\rho,\rho]}(n^{- {1}/{\beta }}\xi(0)) \bigr] \bigr| &=& \int_0^{\rho n^{{1}/{\beta}}}\mathbb{P} \bigl(|\xi(0)|\geq c\bigr)\,\mathrm{d}c\\ &\sim& (B_1+B_2)\int_0^{\rho n^{{1}/{\beta}}}c^{-\beta}\,\mathrm{d}c\\ &=& (B_1+B_2)(1-\beta)^{-1}\rho^{1-\beta}n^{({1}/{\beta })(1-\beta)}. \end{eqnarray*} This proves the first statement for $ \beta\neq1 $. For $ \beta=1 $, the statement is just our assumption from the \hyperref[sec1]{Introduction}. Moreover, by similar arguments for $ \beta\neq2 $, we have that \begin{eqnarray*} \bigl|\mathbb{E} \bigl[\xi^2(0)\mathbh{1}_{[-\rho,\rho ]}(n^{- {1}/{\beta}}\xi(0)) \bigr] \bigr| &\sim& (B_1+B_2)\int_0^{\rho n^{{1}/{\beta}}}c^{1-\beta}\,\mathrm{d}c\\ &=& (B_1+B_2)(2-\beta)^{-1}\rho^{2-\beta}n^{(1/\beta) (2-\beta)}. \end{eqnarray*} This completes the proof of the second statement. \end{pf} \begin{proposition} The distributions of the sequence $ \{\Xi_{n};n\in\mathbb{N}\} $ are tight with respect to the Skorohod topology. 
\end{proposition} \begin{pf} We follow the method given in Kesten and Spitzer (\citeyear{KesSpi1979}). Let $ \epsilon >0 $ be given. By Lemma~\ref{Lem2}, there exists an $ A>0 $ such that $ \epsilon(AT^{-\alpha/(1+\alpha)} )\leq\epsilon/4 $. This implies that \begin{eqnarray*} && \mathbb{P} \biggl(\Xi_n(t)\neq n^{-\kappa}\sum_{|x|\leq An}\Gamma (k_nt,\{x\} )\xi(x)\mbox{ for some } t\leq T \biggr) \\ &&\quad\leq \mathbb{P} \bigl(\Gamma(k_nT,\{x\})>0\mbox{ for some } x \mbox{ with } |x|>Ak_n^{\alpha/(1+\alpha)} \bigr) \\ &&\quad\leq \epsilon\bigl(AT^{-\alpha/(1+\alpha)} \bigr)\\ &&\quad\leq \epsilon/4. \end{eqnarray*} There exists a $ \rho_0>0 $ with the property that for all $ \rho >\rho_0 $ and all $ n\in\mathbb{N} $, we have \[ 3An\bigl(1-\mathbb{P} \bigl(-\rho n^{1/\beta}\leq\xi(0)\leq\rho n^{1/\beta} \bigr)\bigr) \leq\epsilon/4 . \] This is valid since for suitable $ B_1,B_2\geq0 $, we have \[ \lim_{\rho\rightarrow\infty}\rho^\beta\mathbb{P} \bigl(\xi(0)\geq \rho\bigr)= B_1 \quad \mbox{and}\quad \lim_{\rho\rightarrow\infty}\rho^\beta\mathbb{P} \bigl(\xi(0)\leq -\rho\bigr)= B_2 . \] For all $ x\in\mathbb{Z} $, we have the random variables \begin{eqnarray*} \bar{\xi}_n(x)&:=&\xi(x)\mathbh{1}_{[-\rho,\rho ]}(n^{-1/\beta}\xi(x)) , \\ E_n&:=&n^{-\kappa}\frac{1}{T}\mathbb{E} \biggl[\sum_{x\in\mathbb {Z}}\Gamma(k_nt,\{ x\})\bar{\xi}_n(x) \biggr] =n^{-\kappa}\frac{1}{T}\mathbb{E} \biggl[\sum_{x\in\mathbb{Z}} \Gamma(k_nt,\{x\})\mathbb{E} [\bar{\xi}_n(x) ] \biggr] \end{eqnarray*} and \[ \bar{\Xi}_n(t):=n^{-\kappa}\sum_{x\in\mathbb{Z}}\Gamma(k_nt,\{x\}) \bigl(\bar{\xi}_n(x)-\mathbb{E} [\bar{\xi}_n(x) ] \bigr) . \] \textit{Claim} 1. The family of random variables $ \{E_n(t);n\in\mathbb {N}\} $ is bounded. This is true since, by Lemma \ref{Lem4}, we have \begin{eqnarray*} \biggl|\sum_{x\in\mathbb{Z}}\Gamma(k_nt,\{x\})\mathbb{E} [\bar{\xi }_n(x) ] \biggr| &=& |\mathbb{E} [\bar{\xi}_n(0) ] |\sum_{x\in\mathbb{Z}}\Gamma (k_nt,\{x\})\\ &=&k_nt |\mathbb{E} [\bar{\xi}_n(0) ] | \leq Ctn^{(\alpha+1)/\alpha}n^{(1/\beta)(1-\beta)} \end{eqnarray*} and $ \frac{\alpha+1}{\alpha}+\frac{1}{\beta}(1-\beta)-\kappa=0.$\vspace*{1pt} \textit{Claim} 2. For all $ \eta>0 $, there exists an $ n_0\in\mathbb {N} $ such that for all $ n\geq n_0 $, we have \[ \mathbb{P} \biggl(\sup_{t\leq T}|\Xi_n(t)-\bar{\Xi}_n(t)-E_nt|>\frac {\eta}{2} \biggr)\leq\frac{\epsilon}{2} . \] To see this, we first note that \[ \Xi_n(t)-\bar{\Xi}_n(t)-E_nt = n^{-\kappa}\sum_{x\in\mathbb {Z}}\Gamma(k_nt,\{x\}) \bigl(\xi(x)-\bar{\xi}_n(x) \bigr) \] since \begin{eqnarray*} && \Xi_n(t)-\bar{\Xi}_n(t)-E_nt-n^{-\kappa}\sum_{x\in\mathbb {Z}}\Gamma(k_nt,\{x\}) \bigl(\xi(x)-\bar{\xi}_n(x) \bigr) \\ &&\quad= n^{-\kappa} \biggl(\sum_{x\in\mathbb{Z}}\Gamma(k_nt,\{x\})\mathbb {E} [\bar {\xi}(x) ] -\frac{t}{T}\mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\Gamma(k_nt,\{x\} )\mathbb{E} [\bar {\xi}(x) ] \biggr] \biggr)\\ &&\quad= n^{-\kappa}\mathbb{E} [\bar{\xi}(0) ] \biggl(\sum_{x\in\mathbb {Z}}\Gamma (k_nt,\{x\}) -\frac{t}{T}\mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\Gamma(k_nt,\{x\}) \biggr] \biggr)\\ &&\quad= n^{-\kappa}\mathbb{E} [\bar{\xi}(0) ] \biggl(k_nt-\frac{t}{T}k_nT \biggr)\\ &&\quad=0. 
\end{eqnarray*} Lemma \ref{Lem4} implies that \begin{eqnarray*} && \mathbb{P} \biggl(n^{-\kappa}\sum_{x\in\mathbb{Z}}\Gamma(k_nt,\{x\}) \bigl(\xi (x)-\bar{\xi}_n(x) \bigr)\neq0 \mbox{ for some } t\leq T \biggr) \\ &&\quad\leq \mathbb{P} \bigl(\Gamma(k_nT,\{x\})>0\mbox{ for some } x \mbox{ with }|x|>Ak_n^{\alpha/(1+\alpha)} \bigr) \\ &&\qquad{}+ \mathbb{P} \bigl(\xi(x)\neq\bar{\xi}_n(x)\mbox{ for some } |x|\leq Ak_n^{\alpha/(1+\alpha)} \bigr) \\ &&\quad\leq \epsilon\bigl(AT^{-\alpha/(1+\alpha)} \bigr) +3Ak_n^{\alpha/(1+\alpha)}\mathbb{P} \bigl(\xi(0)\neq\bar{\xi }_n(0) \bigr) \\ &&\quad\leq \frac{\epsilon}{4}+3An \bigl(1-\mathbb{P} \bigl(-\rho n^{1/\beta}\leq\xi(0)\leq\rho n^ {1/\beta} \bigr) \bigr) \\ &&\quad\leq \frac{\epsilon}{2}. \end{eqnarray*} \textit{Claim} 3. There exists a $ K_0>0 $ such that for all $ n\in \mathbb{N} $, we have \[ \mathbb{E} [ |\bar{\Xi}_n(t_2)-\bar{\Xi}_n(t_1) |^2 ]\leq C_0(t_2-t_1)^{2-({1+\alpha})/{\alpha}} . \] We define the $ \sigma$-field $ \mathcal{X}=\{X(t);t\geq0\} $. It then follows from the independence of $ \{X(t);t\geq0\} $ and $ \{\xi(x);x\in\mathbb {Z}\} $ that \begin{eqnarray*} && \mathbb{E} \biggl[ \biggl(\sum_{x\in\mathbb{Z}}\bigl(\Gamma(k_nt_2,\{x\})-\Gamma (k_nt_1,\{x\})\bigr)\bar{\xi}_n(x) \biggr)^2 \biggr]\\ &&\quad=\mathbb{E} \biggl[\mathbb{E} \biggl[ \biggl( \sum_{x\in\mathbb{Z}}\bigl(\Gamma(k_nt_2,\{x\}) -\Gamma(k_nt_1,\{x\})\bigr)\bar{\xi}_n(x) \biggr)^2 \bigg|\mathcal{X} \biggr] \biggr]\\ &&\quad= \mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\bigl(\Gamma(k_nt_2,\{x\}) -\Gamma(k_nt_1,\{x\})\bigr)^2\mathbb{E} [\bar{\xi}^2_n(x) |\mathcal{X} ] \biggr]\\ &&\quad= \sum_{x\in\mathbb{Z}} \mathbb{E} \bigl[\bigl(\Gamma(k_nt_2,\{x\}) -\Gamma(k_nt_1,\{x\})\bigr)^2 \bigr]\mathbb{E} [\bar{\xi}^2_n(x) ].\\ \end{eqnarray*} This implies that \begin{eqnarray*} \mathbb{E} [ |\bar{\Xi}_n(t_2)-\bar{\Xi}_n(t_1) |^2 ] &\leq& n^{-2\kappa} \sum_{x\in\mathbb{Z}}\mathbb{E} \bigl[\bigl(\Gamma(k_nt_2,\{x\})-\Gamma (k_nt_1,\{x\} )\bigr)^2 \bigr]\mathbb{E} [\bar{\xi}_n^2(x) ]\\ &=& n^{-2\kappa}\mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\bigl(\Gamma(k_nt_2,\{ x\}) -\Gamma(k_nt_1,\{x\})\bigr)^2 \biggr]\mathbb{E} [\bar{\xi}_n^2(0) ] . \end{eqnarray*} Conditioned on $ \mathcal{A}:=\{\lambda_i;i\in\mathbb{Z}\} $, the process $ X $ has the strong Markov property. Using this, we can prove that for $ t_1\leq t_2 $, the conditional distribution of $ \sum_x(\Gamma(t_2,\{x\})-\Gamma(t_1,\{x\}))^2 $ with respect to $ \mathcal{A} $ equals the conditional distribution of $ \sum_x \Gamma^2(t_2-t_1,\{x\}) $ with respect to $ \mathcal{A} $. Hence, \begin{eqnarray*} \mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\bigl(\Gamma(t_2,\{x\})-\Gamma(t_1,\{ x\})\bigr)^2 \biggr] &=&\mathbb{E} \biggl[\mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\bigl(\Gamma(t_2,\{x\} )-\Gamma(t_1,\{x\} )\bigr)^2 \big|\mathcal{A} \biggr] \biggr]\\ &=& \mathbb{E} \biggl[\mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\Gamma ^2(t_2-t_1,\{x\}) \big|\mathcal{A} \biggr] \biggr]\\ &=& \mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\Gamma^2(t_2-t_1,\{x\}) \biggr]. \end{eqnarray*} By Lemma \ref{Lem3}, it follows that \begin{eqnarray*} \mathbb{E} \biggl[\sum_{x\in\mathbb{Z}}\bigl(\Gamma(k_nt_2,\{x\})-\Gamma (k_nt_1,\{x\} )\bigr)^2 \biggr] &\leq& Ck_n^{2-\alpha/(1+\alpha)}(t_2-t_1)^{2-\alpha /(1+\alpha)} \\ &=& Cn^{2(1+\alpha)/\alpha-1}(t_2-t_1)^{2-\alpha /(1+\alpha)}. \end{eqnarray*} Moreover, we know that \[ \mathbb{E} [\bar{\xi}_n^2(0) ]\leq\tilde{C}n^{(2-\beta) (1/\beta)} . 
\] Putting this all together, we obtain \[ \mathbb{E} [ |\bar{\Xi}_n(t_2)-\bar{\Xi}_n(t_1) |^2 ] \leq C_0n^{(2-\beta)(1/\beta)} n^{-2\kappa}n^{2(1+\alpha)/\alpha-1}(t_2-t_1)^{2- {\alpha}/({1+\alpha})} . \] Since $ (2-\beta)\frac{1}{\beta}-2\kappa+2\frac{1+\alpha}{\alpha }-1=0 $, Claim 3 follows. Since $ 2-\frac{\alpha}{1+\alpha}> 1 $, the tightness in the Skorohod topology of the family $ \{\Xi_n;n\in\mathbb{N}\} $ now follows from Claims 1--3 and a theorem of Billingsley (\citeyear{Bil1968}) (see page 95). \end{pf} \section*{Acknowledgements} The authors wish to express their deepest gratitude toward the probability group and the staff of Academia Sinica and National Taiwan University for mathematical and administrational help during their visit to Taiwan. Special thanks go to Shieh Narn-Rueih, Hwang Chii-Ruey and Sheu Shuenn-Jyi for many interesting discussions on probability theory. Moreover, the authors would like to thank the referee for his very detailed report which helped to improve the manuscript.
\section{Introduction} Optomechanical systems have emerged as a formidable platform for the control and manipulation of light-matter interactions in quantum technologies~\cite{1.general-review-optomechanics-1,BowenMilburn}. From a fundamental perspective, they allow for preparing a superposition of quantum states of a macroscopic object~\cite{2.macro-superposition-1, 2.macro-superposition-2}, for producing non-classical states of the light~\cite{3.optical-nc-1, 3.optical-nc-2} and of the mechanics~\cite{4.mechanics-nc-1}, and may even lead to the detection of the quantum nature of gravity~\cite{5.quantum-gravity-1, 5.quantum-gravity-2}. On the practical side, optomechanical systems can provide hybrid architectures for quantum networking schemes~\cite{6.quantum-network-1}, enable quantum state transfer~\cite{7.quantum-state-transfer-1} and quantum distillation~\cite{8.distillation-1}, and serve as sensors for detecting small forces~\cite{9.force-1}, displacements~\cite{10.displacement-1}, masses~\cite{11.mass-1}, and accelerations~\cite{12.accelerometer-1, 12.accelerometer-2} with unprecedented precision. A crucial requirement for most of the above schemes is to prepare the mechanical oscillator near its ground state~\cite{1.general-review-optomechanics-1, 13.importance-1}. Typically, the mechanical part operates at frequencies ranging from $1$ MHz to $1$ GHz~\cite{1.general-review-optomechanics-1}. This means that sophisticated cooling techniques are indispensable for reaching the mechanical ground state~\cite{14.ground-state-1, 14.ground-state-2, 14.ground-state-3, 14.ground-state-4}. To certify the success of any cooling procedure, it is of paramount importance to measure the temperature of the system precisely. \begin{figure}[t] \includegraphics[width=0.85 \linewidth]{model_v4.png} \caption{(a) Schematic diagram of the optomechanical system. (b) General procedure for the estimation of the oscillator's temperature $T$. The mechanical object (sample) of mass $m$, temperature $T$, and frequency $\Omega$ is probed by a coherent signal, interacting nonlinearly with the oscillator. To infer $T$, we suggest a feasible measurement scheme based on homodyne detection, which delivers nearly optimal thermometry performance.}\label{fig:model} \end{figure} Thermodynamical quantities (including temperature) are challenging to define, measure, and manipulate at the quantum level~\cite{15.quantum-thermo-1}, which may even lead to reformulating the laws of thermodynamics~\cite{16.new-laws-thermo-1,16.new-laws-thermo-2,16.new-laws-thermo-3,16.new-laws-thermo-4,17.anna-review}. Concerning temperature, two main approaches may be identified for thermometry in the quantum domain: (i) the search for the optimal observable to be measured on the sample to extract information about temperature, and (ii) the design and optimization of a {\em probing} technique, where the sample is allowed to interact with an external probe, which is then measured to extract information about the temperature of the sample. The first approach~\cite{18.first-approach1,18.first-approach2,18.first-approach3,Campbell2018} is the most natural procedure for estimating temperature, and the optimal observable turns out to be the energy, as in classical physics. However, this approach may be very demanding, as it requires access to the entire system, measuring its energy, and having full knowledge of the spectrum. In the second approach, a small quantum probe interacts with the system without causing much disturbance and is then measured.
Here we may distinguish two main strategies: one may consider a probe that interacts with the system for a long time to reach equilibrium. Measuring the probe will then provide information about the temperature of the system~\cite{19.second-approach-1, 19.second-approach-2, 19.second-approach-3}. However, satisfying these conditions for fragile quantum systems may not be an easy task in practice. Alternatively, one may consider a quantum probe interacting with the system for a limited time \cite{20.third-approach-1, 20.third-approach-2, 20.third-approach-3, 20.third-approach-4, Feyles2019, Gebbia2019, Mancino2020}, so that the temperature becomes encoded in the entangled non-equilibrium system-probe quantum state. Even after tracing out the system degrees of freedom, temperature information remains mapped onto the state of the probe and may be extracted using a suitable set of measurements. Interestingly, this non-equilibrium scenario may yield enhanced precision compared to equilibrated probes~\cite{21.justification-1}. Indeed, for systems that are prone to decoherence, such as optomechanical systems, interrogating the probe on a short timescale seems to be the most suitable strategy for thermometry. Notice that for probes at equilibrium the measured quantity is the thermodynamical temperature of the sample {\em and the probe}, while for out-of-equilibrium probes one just estimates a parameter of the probe density matrix, which turns out to be determined by the initial temperature of the sample. Currently, the dominant scheme for thermometry in optomechanical systems is based on the measurement of the so-called motional sideband asymmetry ratio, i.e., $\bar{n}/(\bar{n} + 1)$ (with $\bar{n}$ being the mean phonon number)~\cite{22.sideband-asymmetry-1, 22.sideband-asymmetry-2, 22.sideband-asymmetry-3, 23.ratio-1}. Since this technique involves a strongly driven cavity, the optomechanical system is typically linearized, and thus the intrinsic nonlinear nature of the radiation-pressure optomechanical interaction cannot be addressed. In addition, even though heavily used in experiments, the motional sideband asymmetry technique may not provide the ultimate precision for thermometry. Therefore, developing new techniques for measuring the temperature of a mechanical object in the nonlinear regime at the quantum precision limit, as quantified by the quantum Fisher information, is highly desirable. In this paper, we consider an optomechanical system with no driving field, operating in the nonlinear regime. Initially, the mechanical oscillator is at thermal equilibrium at an unknown temperature. By switching on the interaction between the mechanical oscillator and the probing light, temperature information may be mapped to the quantum state of light, and it may be extracted through optical measurements, see Fig.~\ref{fig:model}. We have three main results: (i) the temperature parameter is shown to be imprinted solely as a phase diffusion process in the optical state; (ii) the quantum precision limit, set by the quantum Fisher information, is nearly saturated by placing a nonlinear Kerr medium before a homodyne detector; and (iii) by properly choosing the Kerr nonlinearity, the measurement basis becomes independent of temperature, avoiding complex adaptive sensing protocols.
Our protocol is distinct from previous proposals as it neither relies on Gaussian interactions nor requires adjustments of detunings~\cite{24.previous-works-1, 24.previous-works-2, 24.previous-works-3, 24.previous-works-4, 24.previous-works-5}. The rest of the article is organized as follows: In Sec.~\ref{sec:preliminaries}, we briefly introduce the theory of quantum parameter estimation, stressing the main equations used in the single-parameter estimation case. In Sec.~\ref{sec:themodel}, we derive the reduced density matrix of the light probe. Sec.~\ref{sec:qfi} is devoted to the study of the quantum Fisher information. In Sec.~\ref{sec:cfi}, we present the measurement strategy to be employed in order to achieve the ultimate quantum bound. Finally, we present our conclusions in Sec.~\ref{sec:conclusions}. \section{Elements of parameter estimation}\label{sec:preliminaries} Quantum parameter estimation aims to determine one or multiple quantities of interest by performing appropriate measurements and exploiting an estimator algorithm. In this work, we focus on single-parameter estimation, where the only quantity to estimate is the temperature $T$ of a mechanical oscillator, whereas the rest of the parameters are assumed to be known and fully controlled. The estimation procedure ultimately infers the quantity of interest in two essential steps: (i) gathering data by performing a specific type of measurement; and (ii) feeding the gathered data into an estimator to infer the value of the parameter. For any choice of a measurement basis, the precision of the estimation obeys the classical Cram\'{e}r-Rao inequality~\cite{25.classical-cramer-rao-1} \begin{equation} \mathrm{Var}[T] \geq \frac{1}{M \mathcal{F}_C(T)},\label{eq:classical_cramer} \end{equation} where $M$ is the total number of measurements, $\mathrm{Var}[T]$ is the variance of the estimated quantity, and $\mathcal{F}_C(T)$ is the so-called classical Fisher information obtained as~\cite{25.classical-cramer-rao-1, 26.quantum-parameter-estimation-1} \begin{equation} \mathcal{F}_C(T) = \int dx \frac{1}{p(x|T)} \left[\partial_T p(x|T) \right]^2. \label{eq:classical_fisher} \end{equation} In the above expression, $\partial_T := \partial/\partial T$, and $p(x|T)$ is the conditional probability for a measurement outcome $x$ given the temperature $T$. The equality in Eq.~\eqref{eq:classical_cramer} can be achieved when the estimator is optimal. In the asymptotic regime, where the data set is large, it is proven that the Bayesian algorithm provides the best estimator~\cite{27.estimation, 26.quantum-parameter-estimation-1}. One can further generalize the above classical inequality by optimizing over all possible Positive-Operator Valued Measure (POVM) operators $\{\Pi_x\}$, where $\int dx \Pi_x = \mathbb{I}$. This extra optimization leads to the quantum Cram\'{e}r-Rao inequality~\cite{26.quantum-parameter-estimation-1} \begin{equation} \mathrm{Var}[T] \geq \frac{1}{M \mathcal{F}_Q(T)},\label{eq:classical_quantum} \end{equation} where \begin{equation} \mathcal{F}_Q(T) := \mathrm{Tr}[\left(\partial_T\rho_T\right) L_T] = \mathrm{Tr}[\rho_T L_T^2] \geq \mathcal{F}_C,\label{eq:QFI-def} \end{equation} is the quantum Fisher information $\mathcal{F}_Q(T)$, $\rho_T$ is the density matrix parametrized by the oscillator's temperature $T$, and $L_T$ is the so-called Symmetric Logarithmic Derivative (SLD).
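Before giving the explicit form of $L_T$, it is worth noting that the classical benchmark in Eq.~\eqref{eq:classical_fisher} can be evaluated numerically for any given family of outcome distributions. A minimal Python sketch is shown below; the Gaussian model for $p(x|T)$ used there is an illustrative assumption introduced purely for demonstration, and is not the optomechanical probe studied in the following sections.
\begin{verbatim}
# Classical Fisher information integral, evaluated by a central finite
# difference in T.  The Gaussian family p(x|T) is a toy assumption.
import numpy as np

def p(x, T):
    # toy model: zero-mean Gaussian with temperature-dependent variance
    var = (2.0 * T + 1.0) / 2.0
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def classical_fisher(T, dT=1e-4):
    x = np.linspace(-30.0, 30.0, 20001)
    dx = x[1] - x[0]
    px = p(x, T)
    dp = (p(x, T + dT) - p(x, T - dT)) / (2.0 * dT)  # derivative of p(x|T)
    mask = px > 1e-15                                # avoid division by ~0
    return np.sum(dp[mask]**2 / px[mask]) * dx

# for this Gaussian family the exact value is 2/(1 + 2T)^2
print(classical_fisher(0.5), 2.0 / (1.0 + 2.0 * 0.5)**2)
\end{verbatim}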
By expressing the density matrix $\rho_T$ in its spectral decomposition, one can provide an explicit form of the SLD as follows~\cite{26.quantum-parameter-estimation-1}: \begin{equation} L_T = 2 \sum_{n,m} \frac{\langle \psi_m | \partial_T \rho_T| \psi_n \rangle}{\varrho_m + \varrho_n} | \psi_m \rangle \langle \psi_n |, \label{eq:SLD} \end{equation} where $\rho_T = \sum_n \varrho_n |\psi_n\rangle \langle \psi_n|$, and $\varrho_m + \varrho_n \neq 0$. With the above definitions in Eqs.~\eqref{eq:QFI-def}-\eqref{eq:SLD}, it is straightforward to finally obtain the quantum Fisher information in this basis, \begin{equation} \mathcal{F}_Q = 2 \sum_{n,m} \frac{|\langle \psi_m | \partial_T \rho_T| \psi_n \rangle|^2}{\varrho_m + \varrho_n}. \label{eq:qfi_pure} \end{equation} This is the definition which is employed throughout our numerical simulations. \section{The model}\label{sec:themodel} The standard nonlinear optomechanical Hamiltonian in the absence of external driving is ($\hbar = 1$): \begin{equation} \hat{H} = \Omega \hat{b}^\dagger \hat{b} - g_0 \hat{a}^\dagger\hat{a}(\hat{b}^\dagger + \hat{b}),\label{eq:opto-hamiltonian} \end{equation} where we have switched to an appropriate frame rotating at the frequency of the optical mode $\hat{a}$. The mechanical oscillator of frequency $\Omega$ and mode $\hat{b}$ couples to the light field with strength $g_0$ (see Ref.~\cite{12.accelerometer-2} for a brief review of explicit expressions for $g_0$ in different physical setups). Under this specific type of interaction, the mechanical oscillator's potential shifts its equilibrium position conditioned upon the eigenvalues $n$ of the number operator $\hat{a}^\dagger\hat{a}$~\cite{1.general-review-optomechanics-1, 3.optical-nc-1, 3.optical-nc-2}. The mechanical oscillator is assumed to be initially in a mixed thermal state at temperature $T$, the parameter to be estimated. It is convenient to represent the oscillator state in the coherent-state basis, \begin{equation} \rho_\mathrm{M}(0) = \frac{1}{\pi \bar{n}} \int |\beta\rangle\langle \beta| e^{-\frac{|\beta|^2}{\bar{n}}}d^2\beta, \end{equation} where \begin{equation} \bar{n} = \left(\mathrm{exp}\left[\frac{\Omega}{k_B T}\right] - 1\right)^{-1} \end{equation} is the phonon occupancy number and $k_B$ is the Boltzmann constant. Since $\bar{n}$ is an injective function of $T$, we will refer to the oscillator's temperature estimation using either $\bar{n}(T):=\bar{n}$ or $T$ interchangeably. Assuming full control of the light probe, we consider an initial pure state expanded in the Fock basis with known coefficients $c_k \in \mathbb{C}$ as \begin{equation} \rho_\mathrm{L}(0) = \sum_{ n,m = 0}^\infty c_n c_m^* |n\rangle\langle m|. \end{equation} Therefore, the initial state of the system becomes \begin{equation} \rho(0) = \rho_\mathrm{L}(0)\otimes\rho_\mathrm{M}(0). \end{equation} The system undergoes the time evolution \begin{equation} \rho(t) = \hat{U}(t) \rho(0) \hat{U}^\dagger(t).
\end{equation} where the time evolution operator $\hat{U}(\tau)=\exp(- i \hat{H} \tau)$ has been found to be~\cite{3.optical-nc-1, 3.optical-nc-2} \begin{equation} \hat{U}(\tau) = e^{i(g \hat{a}^\dagger \hat{a})^2(\tau - \sin \tau)} e^{ g \hat{a}^\dagger \hat{a} (\eta \hat{b}^\dagger - \eta^* \hat{b})} e^{i\tau\hat{b}^\dagger \hat{b}}.\label{eq:unitary-operator} \end{equation} In the formula above we rescaled the relevant Hamiltonian shown in Eq.~\eqref{eq:opto-hamiltonian} by the mechanical frequency $\Omega$, and consequently, we have defined $g := g_0/\Omega$, $\eta := 1 - e^{-i\tau}$, and $\tau := \Omega t$. Notice that the second exponential in the time evolution operator is a displacement operator acting on the mechanical subsystem conditioned upon the observable $\hat{a}^\dagger\hat{a}$, whereas the first and third exponentials are a nonlinear function of the photon number operator $\hat{a}^\dagger\hat{a}$ and a phase shift operating solely on the optical and the mechanical modes, respectively. One then finds the bipartite density matrix \begin{multline} \rho(\tau) = \sum_{ n,m = 0}^\infty c_n c_m^* e^{i g^2(n^2 - m^2)(\tau - \sin\tau)}|n\rangle \langle m|\otimes \\ \frac{1}{\pi \bar{n}}\int d^2\beta e^{-\frac{|\beta|^2}{\bar{n}}}e^{\frac{g(n - m)}{2}[\beta^* (e^{i\tau} - 1) - \beta(e^{-i\tau} - 1)]} |\phi_n\rangle \langle \phi_m|,\label{eq:bipartite-dynamics} \end{multline} with coherent mechanical amplitude \begin{equation} |\phi_n\rangle := |\beta e^{-i\tau} + g n \eta \rangle. \end{equation} Finally, by performing the trace over the oscillator's degrees of freedom in~\eqref{eq:bipartite-dynamics}, one can obtain the following reduced density matrix for the light field \begin{equation} \rho_\mathrm{L}(\tau) = \sum_{\substack{n=0\\m=0}}^\infty c_n c_m^* \mathcal{C}_{n,m} |n\rangle \langle m|,\label{eq:state_optomechanics1} \end{equation} where \begin{equation} \mathcal{C}_{n,m} = e^{i g^2(n^2-m^2)(\tau - \sin\tau)} e^{g^2(m-n)^2(1+2\bar{n})(\cos\tau - 1)}.\label{eq:state_optomechanics2} \end{equation} The expressions in Eqs.~\eqref{eq:state_optomechanics1}-\eqref{eq:state_optomechanics2} are the main results of this section. As evident, there are two different components in $\mathcal{C}_{n,m}$. The first exponential term is a coherent phase arising from the non-Gaussian interaction and does not depend on the temperature. The second term, however, is a phase diffusion which depends on temperature. We provide further discussion of the quantum state of the light probe below. \subsection{Features of the light probe} It is worth noting that the parameter to be estimated, namely $\bar{n}$, only arises in one of the exponentials in the reduced density matrix of the light probe, given in Eq.~\eqref{eq:state_optomechanics1}. In turn, this exponential resembles the detrimental effect of phase diffusion. The diffusion process may of course be described using the Lindblad formalism~\cite{19.diffusion-1}, but it can also be expressed as resulting from the application of a random phase shift $\hat{U}_\theta := e^{-i\theta\hat{a}^\dagger\hat{a}}$, with $\theta$ being a random number sampled from a Gaussian distribution with zero mean and standard deviation $\Delta$ \cite{19.diffusion-2, 19.diffusion-3}: \begin{align} \rho_{\mathrm{D}} & = \frac{1}{\sqrt{4\pi\Delta^2}} \int_\mathbb{R} d\theta e^{-\frac{\theta^2}{4\Delta^2}}\hat{U}_\theta\, \rho \, \hat{U}_\theta^\dagger \notag \\ & = \sum_{n,m=0}^\infty c_nc_m^* e^{-2(n - m)^2\Delta^2}|n\rangle\langle m|.
\end{align} This process results in a degradation of the off-diagonal terms in the eigenbasis of $\hat{a}^\dagger\hat{a}$, yet conserving the energy. Notably, it also mimics the more complex process arising from the full bipartite dynamics shown in Eq.~\eqref{eq:state_optomechanics1}, where the precise amount $g^2(n - m)^2(1+2\bar{n})(\cos\tau - 1)$ emerges as a consequence of the mechanical coherent-state overlap $\langle \phi_m | \phi_n \rangle$ and the relative phase from the displacement operator $e^{g(n-m)[\beta^* (e^{i\tau} - 1) - \beta(e^{-i\tau} - 1)]/2}$. \section{Quantum Fisher Information}\label{sec:qfi} The quantum Fisher information $\mathcal{F}_Q$ is, in general, a function of the tunable parameters of the system. To achieve the best precision for temperature estimation, one has to: (i) maximize $\mathcal{F}_Q$ with respect to such tunable parameters; and (ii) find the optimal measurement basis to achieve the bound given by $\mathcal{F}_Q$. Since the eigenvalue problem for the reduced quantum state in Eq.~\eqref{eq:state_optomechanics1} is analytically intractable, we rely on numerical methods for computing the quantum Fisher information. Moreover, even though a general photon distribution $c_n$ was considered for the derivation of the light probe, throughout this work we focus on a readily accessible input light, namely a coherent state with amplitude $\alpha \in \mathbb{R}$, which results in \begin{equation} c_n = e^{-\frac{\alpha^2}{2}} \frac{\alpha^n}{\sqrt{n!}}. \end{equation} \begin{figure}[t] \includegraphics[width=\linewidth]{fig_qfi_dynamics.png} \caption{(a) Quantum Fisher information $\mathcal{F}_Q(g,\tau|\bar{n})$ as functions of $g$ and $\tau$ for a given $\bar{n}$. As the figure shows, one can always adjust the set of parameters $g$ and $\tau$ in such a way as to deliver maximal quantum Fisher information. In (b), we show the quantum Fisher information as a function of $g$ for some interaction times $\tau$. As seen from the figure, a choice of $\tau = \pi$ gives the lowest $g$ needed to reach maximal quantum Fisher information. Panels (c)-(d) show the Wigner function in phase space $\{q_l,p_l\}$ for the optical quantum state with the same maximal quantum Fisher information for times $\tau = \pi$ and $\tau=\pi/10$, respectively. An evident nonlinear phase as well as an incoherent phase diffusion is observed for $\tau = \pi$, whereas as the time decreases, say $\tau = \pi/10$, the nonlinear phase vanishes. Other values are $\alpha = 2$ and $\bar{n}=1$.}\label{fig:quantum-info} \end{figure} In Fig.~\ref{fig:quantum-info}(a) we show the quantum Fisher information $\mathcal{F}_Q(g,\tau|\bar{n})$ as functions of the optomechanical coupling $g$ and the interaction time $\tau$ for a given temperature $\bar{n}$. Without loss of generality, we have fixed the coherent amplitude $\alpha = 2$, as well as the phonon occupancy number $\bar{n} = 1$. As evident from the figure, there is a vast domain where the set of controlled parameters $\{g, \tau\}$ can always be adjusted such that the quantum Fisher information is maximal. This could be understood in terms of the effective phase diffusion exponential in the reduced density matrix in Eq.~\eqref{eq:state_optomechanics1}.
To see this, let us first consider the limit $\tau \ll 1$, in which the quantum state can be approximated as \begin{equation} \rho_\mathrm{L}(\tau) \stackrel{\tau \ll 1}{\approx} \sum_{\substack{n=0\\m=0}}^\infty e^{-\alpha^2} \frac{\alpha^{n+m}}{\sqrt{n!m!}}e^{-\frac{(g\tau)^2}{2}(m-n)^2(1+2\bar{n})} |n\rangle \langle m|,\label{eq:optics-approx} \end{equation} where the optomechanical coherent phase, arising from the non-Gaussian interaction, no longer plays a role and only the diffusion process is present. In this limit, as the dependence on $\{g, \tau\}$ is only through the product $g\tau$, choosing a short interaction time $\tau$ requires a large $g$ for maximizing $\mathcal{F}_Q$. This is evident in the area in the $g$-$\tau$ plane for which the quantum Fisher information is maximal as shown in Fig.~\ref{fig:quantum-info}(a). On the other hand, as $\tau$ increases the relationship between $g$ and $\tau$ delivering maximal quantum Fisher information becomes more complex because of the phase diffusion term $\mathrm{exp}[g^2(m-n)^2(1+2\bar{n})(\cos\tau - 1)]$. It follows that, for very small values of $g$, this term goes to one and the dependence on $\bar{n}$ is lost. On the contrary, if $g$ is large, then the exponential term becomes vanishingly small, again losing its dependence on $\bar{n}$. Only for some intermediate values of $g$ is the quantum Fisher information maximal, which is evident in Fig.~\ref{fig:quantum-info}(a). Interestingly, for an interaction time of $\tau = \pi$, one can maximize the quantum Fisher information by tuning the optomechanical strength $g$ to its lowest value. This choice of the controlled parameters $\{g, \tau\}$ is of singular interest, as optomechanical systems in the nonlinear regime currently operate under weak radiation-pressure coupling. Without loss of generality, from now on we will fix $\tau = \pi$, and $g = g_\mathrm{max}$ will correspond to the optomechanical coupling that brings the quantum Fisher information to its maximal value. To support the above, in Fig.~\ref{fig:quantum-info}(b), we show the quantum Fisher information $\mathcal{F}_Q$ as a function of $g$ for different times $\tau$. As shown in the figure, different values of $g$ and $\tau$ lead to the same maximum value of the quantum Fisher information, for which $\tau = \pi$, as stated before, is the one delivering the lowest optomechanical coupling strength $g$. To illustrate the differences between optical states with the same quantum Fisher information, yet tuned with different choices of $g$ and $\tau$, we plot in Figs.~\ref{fig:quantum-info}(c)-(d) the quasiprobability Wigner function $W(q_l,p_l)$ in the phase space $\{q_l,p_l\}$ of the associated light-field quadratures. The Wigner function is numerically evaluated according to~\cite{20.qutip-1, 20.qutip-2}: \begin{eqnarray} \nonumber W(q_l,p_l) &=& \frac{1}{\pi}\int_{-\infty}^\infty\langle q_l + x|\rho_\mathrm{L}(\tau) |q_l - x \rangle e^{-2ip_l x} dx\\ \nonumber &=& \sum_{n,m=0}^\infty \frac{e^{-\alpha^2} \alpha^{n+m}}{n! m! \sqrt{2^{n+m} \pi^3}} e^{-q_l^2} \mathcal{C}_{n,m} \\ \nonumber &\times& \int_{-\infty}^\infty e^{- (2ip_lx + x^2)}\mathcal{H}_m(q_l-x)\mathcal{H}_n(q_l+x)dx , \end{eqnarray} where we have used \begin{equation} \langle n | x \rangle = \frac{e^{-\frac{x^2}{2}} \mathcal{H}_n(x)}{\sqrt{2^{n} n!} \pi^{1/4}}, \end{equation} with $\mathcal{H}_n(x)$ being the Hermite polynomials of order $n$.
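In practice, rather than carrying out the integral term by term, one can equivalently build the truncated matrix of Eq.~\eqref{eq:state_optomechanics1} and evaluate the Wigner function with a numerical library such as QuTiP~\cite{20.qutip-1, 20.qutip-2}. A minimal Python sketch along these lines is shown below; the Fock-space cutoff and the parameter values are illustrative assumptions.
\begin{verbatim}
# Wigner function of the reduced optical state, built from the truncated
# matrix elements c_n c_m* C_{n,m}; cutoff and parameters are assumptions.
import numpy as np
from math import factorial
from qutip import Qobj, wigner

def reduced_light_state(alpha, g, tau, nbar, dim=40):
    c = np.array([np.exp(-alpha**2 / 2.0) * alpha**k / np.sqrt(factorial(k))
                  for k in range(dim)])
    n = np.arange(dim)
    N, M = np.meshgrid(n, n, indexing="ij")   # N = ket index, M = bra index
    C = (np.exp(1j * g**2 * (N**2 - M**2) * (tau - np.sin(tau)))
         * np.exp(g**2 * (N - M)**2 * (1.0 + 2.0 * nbar) * (np.cos(tau) - 1.0)))
    return Qobj(np.outer(c, c) * C)

rho = reduced_light_state(alpha=2.0, g=0.3, tau=np.pi, nbar=1.0)
grid = np.linspace(-5.0, 5.0, 201)
W = wigner(rho, grid, grid)                   # W(q_l, p_l) on the grid
print(W.shape, W.min())
\end{verbatim}
Changing $\tau$ and $g$ in this sketch allows one to explore the two regimes compared in panels (c) and (d) of Fig.~\ref{fig:quantum-info}.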
In Fig.~\ref{fig:quantum-info}(c), we show the Wigner function of the light state when the interaction time is $\tau = \pi$ and $g \approx 0.3$. The nonlinear features arising from the non-Gaussian optomechanical interaction are apparent. The moderate-to-strong value of $g$ makes this case practically relevant, e.g., see Refs.~\cite{14.ground-state-3, feasibility-2, feasibility-3, feasibility-4, feasibility-5, feasibility-6} for experimental values. However, in this case, the non-Gaussian features may be challenging to detect by accessible measurement schemes, such as homodyne detection \cite{ngh1,ngh2}. On the other hand, as shown in Fig.~\ref{fig:quantum-info}(d), by considering $\tau = \pi/10$ and $g \approx 1.87$, the light state exhibits only phase diffusion features, which may be more easily detected via homodyne detection. However, this comes at the cost of larger values of $g$, which may be experimentally unfeasible. Therefore, it is highly desirable to find a measurement strategy which operates at small $g$ and yet is able to deliver excellent estimation performance. \begin{figure}[t] \includegraphics[width=\linewidth]{fig_qfi_central_v3.pdf} \caption{(a) Quantum Fisher information as a function of the oscillator's temperature $\bar{n}$ for different values of $\alpha$. The theoretical limit $\alpha \gg 1$, optimized for $\{g=g_\mathrm{max},\tau=\pi\}$, represents the maximum value that the quantum Fisher information can reach for a given $\bar{n}$. Panel (b) shows the optomechanical coupling $g_\mathrm{max}$ that maximizes the quantum Fisher information, as a function of $\bar{n}$.}\label{fig:qfi-nbar} \end{figure} In Fig.~\ref{fig:quantum-info} we kept $\bar{n}$ and $\alpha$ fixed. Now we investigate their impact on the quantum Fisher information. In Fig.~\ref{fig:qfi-nbar}(a), we show the quantum Fisher information as a function of the oscillator's temperature $\bar{n}$ for different values of the coherent amplitude $\alpha$. As the figure shows, the quantum Fisher information peaks at $\bar{n} = 0$ for any $\alpha$, while rapidly decreasing as the oscillator's temperature $\bar{n}$ grows. This can be intuitively understood since, in the limit of high oscillator temperature, i.e., $\bar{n} \gg 1$, the phase diffusion term $\mathrm{exp}[g^2(m-n)^2(1+2\bar{n})(\cos\tau - 1)]$, given in Eq.~\eqref{eq:state_optomechanics2}, goes to zero and depends only weakly on the exact value of $\bar{n}$, for all values of $\alpha$. In the opposite regime, i.e., $\bar{n} \ll 1$, the probe changes substantially as $\bar{n}$ varies. In other words, the variation of phonon excitations leads to a completely different optical phase diffusion term, and thus, one would expect better estimation and lower uncertainties for this quantity. Furthermore, as Fig.~\ref{fig:qfi-nbar}(a) shows, increasing the initial coherent amplitude $\alpha$ always benefits the precision in estimating the temperature of the oscillator; however, it quickly saturates for an initial photon number $\alpha^2 > 9$. In the limit of large $\alpha$, one can linearize the optomechanical Hamiltonian and the corresponding QFI can be analytically evaluated via the Gaussian formalism (see Appendix \ref{a:linearized} for more details about the derivation). By taking $\tau = \pi$, one gets \begin{equation} \mathcal{F}_Q \stackrel{\alpha \gg 1}{=} \frac{2}{(1 + 2\bar{n})^2}.\label{eq:limit-estimation} \end{equation} Remarkably, as seen from the figure, even for $\alpha^2 > 9$ one can almost achieve this limit.
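To make the numerical procedure explicit, the quantum Fisher information can be evaluated directly from Eq.~\eqref{eq:qfi_pure}: one builds the truncated reduced state of Eq.~\eqref{eq:state_optomechanics1}, differentiates it with respect to $\bar{n}$ by a finite difference, and sums over the eigenbasis. The Python sketch below repeats the state construction for self-containedness; the cutoff, the finite-difference step, and the parameter values are illustrative assumptions.
\begin{verbatim}
# Quantum Fisher information with respect to nbar from the SLD-based sum,
# using a finite difference of the truncated reduced state.
import numpy as np
from math import factorial

def rho_light(alpha, g, tau, nbar, dim=40):
    c = np.array([np.exp(-alpha**2 / 2.0) * alpha**k / np.sqrt(factorial(k))
                  for k in range(dim)])
    n = np.arange(dim)
    N, M = np.meshgrid(n, n, indexing="ij")
    C = (np.exp(1j * g**2 * (N**2 - M**2) * (tau - np.sin(tau)))
         * np.exp(g**2 * (N - M)**2 * (1.0 + 2.0 * nbar) * (np.cos(tau) - 1.0)))
    return np.outer(c, c) * C

def qfi_nbar(alpha, g, tau, nbar, dnbar=1e-5, dim=40):
    rho = rho_light(alpha, g, tau, nbar, dim)
    drho = (rho_light(alpha, g, tau, nbar + dnbar, dim)
            - rho_light(alpha, g, tau, nbar - dnbar, dim)) / (2.0 * dnbar)
    vals, vecs = np.linalg.eigh(rho)          # rho = sum_n q_n |psi_n><psi_n|
    A = vecs.conj().T @ drho @ vecs           # <psi_m| d rho / d nbar |psi_n>
    qfi = 0.0
    for m in range(dim):
        for k in range(dim):
            denom = vals[m] + vals[k]
            if denom > 1e-12:                 # skip the kernel of rho
                qfi += 2.0 * abs(A[m, k])**2 / denom
    return qfi

print(qfi_nbar(alpha=2.0, g=0.3, tau=np.pi, nbar=1.0))
\end{verbatim}
The finite-difference step only needs to resolve the smooth dependence of the diffusion factor on $\bar{n}$, so the result is insensitive to its precise value.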
As stated before, each point of the quantum Fisher information in Fig.~\ref{fig:qfi-nbar}(a) has been maximized using $\tau = \pi$ and $g = g_\mathrm{max}$. In Fig.~\ref{fig:qfi-nbar}(b), we depict $g_\mathrm{max}$ as a function of the temperature $\bar{n}$ for different coherent amplitudes $\alpha$. As the figure shows, comparable moderate-to-strong values of $g$ are observed for any chosen $\alpha$. The large values of $g$ when $\bar{n} \simeq 1$ can be intuitively explained by the fact that one requires stronger correlations between the light field and the oscillator in order to extract information related to the mechanics. \section{Classical Fisher Information}\label{sec:cfi} The bound given by $\mathcal{F}_Q$ sets the ultimate precision limit allowed by quantum mechanics. Nonetheless, the quantum Cram\`er-Rao theorem does not explicitly provide the optimal measurement. In order to saturate the bound one needs to implement the optimal POVM, which is given by the set of projectors onto the eigenstates of the SLD operator $L_T$, in combination with optimal estimators. It is known that for large data sets a Bayesian estimator provides optimal estimation~\cite{27.estimation, 26.quantum-parameter-estimation-1}. One of the difficult problems in quantum metrology is that the optimal measurement basis, computed from the eigenvectors of the SLD operator $L_T$, depends on the unknown parameter, here $\bar{n}$. The typical recipe for this problem is to follow complex adaptive approaches~\cite{21.adaptive-1, 21.adaptive-2, 21.adaptive-3, 21.adaptive-4, 21.adaptive-5, 21.adaptive-6} to update the measurement basis iteratively by extracting information about the exact value of the unknown parameter. In practice, to avoid such complexity, it is of significant importance to determine a fixed measurement basis which is independent of the unknown parameter and nearly saturates the quantum Fisher information. Therefore, in what follows we focus on determining an undemanding measurement which comes close to the bound. \subsection{Determining a feasible measurement} \begin{figure}[t] \includegraphics[width=\linewidth]{fig_undoing_kerr.png} \caption{(a) Wigner function of the light field for $\alpha = 3, \bar{n} = 0.25, \tau = \pi, g_\mathrm{max} \approx 0.38$. Significant negative values characterize the nonclassical nature of the light field. (b) The optical state is made to interact with a Kerr medium of nonlinear strength $\chi$. By the proper choice $\chi = 2\pi g^2_\mathrm{max}$, one can fully suppress the intrinsic coherent nonlinear phase arising from the non-Gaussian optomechanical interaction.}\label{fig:undoing-kerr} \end{figure} As depicted in Fig.~\ref{fig:qfi-nbar}(a), the estimation of the oscillator's temperature delivers larger quantum Fisher information particularly for low phonon excitations, say $0 \leq \bar{n} \leq 1$. Within this domain, the optical state may exhibit strong nonclassical features, depending on the coherent amplitude $\alpha$ and the strength of the optomechanical coupling $g$. For instance, in Fig.~\ref{fig:undoing-kerr}(a), we numerically evaluate the Wigner function of the light field for $\alpha = 3$ (near saturation of the quantum Fisher information limit in Eq.~\eqref{eq:limit-estimation}) and the experimentally achieved near-ground-state occupation $\bar{n} = 0.25$~\cite{14.ground-state-1, 14.ground-state-2}. 
As the figure shows, the light field presents distinct nonclassical features, as evidenced by the ample negativity of the Wigner function. For this scenario, it is difficult to implement the true optimal measurement basis, as the SLD may result in very complex measurement setups. Motivated by this, let us apply the following unitary operator to the quantum state of our probe \begin{equation} \hat{U}_\mathrm{K} = \mathrm{exp}\left[-\frac{i\chi}{2}(\hat{a}^\dagger\hat{a})^2\right], \end{equation} where $\chi$ is a tunable Kerr nonlinear parameter. The reason behind the application of this nonlinear Kerr unitary operation is to modulate the temperature-independent phase in the quantum state of the probe, given in Eq.~\eqref{eq:state_optomechanics2}, so as to compensate for the non-Gaussian effect of the Hamiltonian. The transformed state reads as \begin{multline} \tilde{\rho}_\mathrm{L}(\tau=\pi) = \hat{U}_\mathrm{K} \rho_\mathrm{L}(\tau = \pi) \hat{U}_\mathrm{K}^\dagger \\ = e^{-\alpha^2}\sum_{\substack{n=0\\m=0}}^\infty \frac{\alpha^{n+m}}{\sqrt{n!m!}} e^{i (n^2-m^2)\left(\pi g_\mathrm{max}^2 - \frac{\chi}{2}\right)} \\ \times e^{-2g_\mathrm{max}^2(m-n)^2(1+2\bar{n})} |n\rangle \langle m|. \end{multline} The Wigner function of $\tilde{\rho}_\mathrm{L}(\tau=\pi)$ is depicted in Fig.~\ref{fig:undoing-kerr}(b) when $\chi$ is set to $\chi = 2\pi g^2_\mathrm{max}$. Interestingly, with this choice the non-Gaussian phase is fully cancelled, resulting in an entirely positive Wigner function. Indeed, it is the Wigner function of an initially Gaussian state subject to phase diffusion. To quantify the performance of this procedure, one has to evaluate the classical Fisher information $\mathcal{F}_C$ using homodyne detection preceded by a nonlinear Kerr medium, and compare it with the ultimate precision bound given by $\mathcal{F}_Q$. To evaluate the classical Fisher information $\mathcal{F}_C$ shown in Eq.~\eqref{eq:classical_fisher}, it is straightforward to obtain the conditional probability $p(x_{\Phi_\mathrm{LO}}|\bar{n})$ as \begin{multline} p(x_{\Phi_\mathrm{LO}}|\bar{n}) = \mathrm{Tr}\left[ |x_{\Phi_\mathrm{LO}} \rangle \langle x_{\Phi_\mathrm{LO}}| \tilde{\rho}_\mathrm{L}(\tau=\pi) \right],\\ = \sum_{\substack{n=0\\m=0}}^\infty \frac{\alpha^{n+m}}{\sqrt{n!m!}} e^{i (n^2-m^2)\left(\pi g_\mathrm{max}^2 - \frac{\chi}{2}\right)}e^{-2g_\mathrm{max}^2(m-n)^2(1+2\bar{n})}\\ \times e^{-\alpha^2} e^{-x_{\Phi_\mathrm{LO}}^2} \frac{\mathcal{H}_m(x_{\Phi_\mathrm{LO}}) \mathcal{H}_n(x_{\Phi_\mathrm{LO}}) e^{i\Phi_\mathrm{LO}(m-n)}}{\sqrt{\pi 2^{(m+n)} m! n! }}, \end{multline} where $|x_{\Phi_\mathrm{LO}}\rangle$ is the eigenvector of the rotated quadrature operator $\hat{x}_{\Phi_\mathrm{LO}}$ with local oscillator phase $\Phi_\mathrm{LO}$, defined as: \begin{equation} \hat{x}_{\Phi_\mathrm{LO}} = \frac{\hat{a} e^{-i\Phi_\mathrm{LO}} + \hat{a}^\dagger e^{i\Phi_\mathrm{LO}} }{\sqrt{2}}. \end{equation} \begin{figure}[t] \includegraphics[width=\linewidth]{new_fig_cfi_central.png} \caption{(a) Fisher information ratio $\mathcal{F}_C/\mathcal{F}_Q$ as a function of the Kerr nonlinear strength $0 \leq \chi \leq 2\pi g^2_\mathrm{max}$ and the oscillator's temperature $\bar{n}$. A proper tuning of $\chi$ and the known local oscillator phase $\Phi_\mathrm{LO}$ [see panel (b)] can lead to a Fisher information ratio up to $\mathcal{F}_C \approx 0.95 \mathcal{F}_Q$. (b) Local oscillator phase $\Phi_\mathrm{LO}$ as a function of the Kerr nonlinear strength $0 \leq \chi \leq 2\pi g^2_\mathrm{max}$ and the oscillator's temperature $\bar{n}$. 
Notice that tuning $\chi = 2g^2_\mathrm{max}\pi$ makes the measurement basis independent of the unknown parameter. In panel (c), we show the Fisher ratio $\mathcal{F}_C/\mathcal{F}_Q$ as a function of $\Phi_\mathrm{LO}$ for different values of $\bar{n}$ when the Kerr nonlinearity is tuned to $\chi = 2g_\mathrm{max}^2\pi$. As seen, the measurement basis becomes independent of the unknown parameter $\bar{n}$. In contrast, in panel (d), when the Kerr medium is tuned to $\chi = 2g_\mathrm{max}^2\pi/4$, the measurement basis depends on $\bar{n}$, as the peak of the Fisher ratio shifts as the temperature varies.}\label{fig:cfi} \end{figure} In Fig.~\ref{fig:cfi}(a), we compute the Fisher information ratio $\mathcal{F}_C/\mathcal{F}_Q$ as a function of the Kerr nonlinear strength $\chi$ and the oscillator's temperature $\bar{n}$. Notice that the Kerr modulation ranges between $0 \leq \chi \leq 2\pi g^2_\mathrm{max}$, i.e., from no Kerr interaction to the value which completely cancels the phase from the non-Gaussian interaction. As is evident from the figure, the best performance is achieved when $\chi = 2\pi g^2_\mathrm{max}$, for which $\mathcal{F}_C/\mathcal{F}_Q$ reaches a near-optimal ratio of $\sim 0.95$. Two relevant cases are worth exploring. On the one hand, for low phonon excitations $\bar{n} \approx 0$, performing the homodyne detection without any Kerr modulation ($\chi = 0$) leads to a low Fisher information ratio of about $0.1$, while letting the system interact with a Kerr medium of strength $\chi = 2\pi g^2_\mathrm{max}$ raises the Fisher ratio up to $\sim 0.95$. This result can be understood by noting that estimating such values of $\bar{n} \approx 0$ demands stronger optomechanical couplings, which in turn enhance the nonclassical features arising from the non-Gaussian character of the Hamiltonian [see Fig.~\ref{fig:undoing-kerr}(a)]. Thus, to obtain better performance in the homodyne detection scheme, one needs to cancel the non-Gaussian phase contribution in Eq.~\eqref{eq:state_optomechanics2} to a significant degree. On the other hand, for larger values of $\bar{n}$, i.e., $\bar{n} \geq 1$, even modest values of the Kerr nonlinearity are enough to achieve a large $\mathcal{F}_C/\mathcal{F}_Q$ ratio. A crucial point in the above procedure for determining $\bar{n}$ is to fix $\Phi_\mathrm{LO}$, which specifies the homodyne measurement. If the optimized value of $\Phi_\mathrm{LO}$ depends on $\bar{n}$, which is unknown, then one has to resort to an adaptive approach. In such a procedure, one has to acquire some prior information about $\bar{n}$ using non-optimal measurements, i.e., taking any value for $\Phi_\mathrm{LO}$, and then use the estimated value of $\bar{n}$ to update $\Phi_\mathrm{LO}$ for a better estimation in the next iteration. By repeating this for a few iterations, one can eventually tune $\Phi_\mathrm{LO}$ near its optimal value. It is highly desirable to find an optimal measurement independent of the parameter of interest, here $\bar{n}$. To investigate this, in Fig.~\ref{fig:cfi}(b), we plot the optimal $\Phi_\mathrm{LO}$ as a function of $\bar{n}$ and $\chi$. In general, for any choice of $\chi$, the optimal local phase $\Phi_\mathrm{LO}$ varies as $\bar{n}$ changes. Remarkably, by tuning $\chi = 2g_\mathrm{max}^2\pi$, which fully cancels the effect of the non-Gaussian optomechanical interaction, the optimal $\Phi_\mathrm{LO}$ becomes zero for any value of $\bar{n}$. A minimal numerical sketch of this homodyne Fisher-information computation is given below. 
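The sketch evaluates the homodyne statistics of the Kerr-rotated state and the resulting classical Fisher information $\mathcal{F}_C=\int dx\,[\partial_{\bar{n}} p(x_{\Phi_\mathrm{LO}}|\bar{n})]^2/p(x_{\Phi_\mathrm{LO}}|\bar{n})$ by quadrature, reusing \texttt{rho\_light} and \texttt{psi} from the snippets above; the chosen $g$, cutoffs, and thresholds are illustrative assumptions rather than values read off the figures.
\begin{verbatim}
import numpy as np

def kerr(rho, chi):
    # U_K rho U_K^dagger with U_K = exp[-i*chi*(a^dag a)^2 / 2], in the Fock basis.
    n = np.arange(rho.shape[0])
    u = np.exp(-0.5j * chi * n**2)
    return u[:, None] * rho * np.conj(u)[None, :]

def homodyne_pdf(rho, phi, x):
    # p(x_phi) = sum_{nm} rho_{nm} e^{i*phi*(m-n)} psi_n(x) psi_m(x)
    N = rho.shape[0]
    k = np.arange(N)
    phases = np.exp(1j * phi * (k[None, :] - k[:, None]))
    psi_x = np.array([psi(j, x) for j in range(N)])
    return np.einsum("nm,nm,nx,mx->x", rho, phases, psi_x, psi_x).real

def classical_fi(alpha, g, nbar, chi, phi, N=40, eps=1e-4, xmax=10.0, nx=2001):
    x = np.linspace(-xmax, xmax, nx)
    prob = lambda nb: homodyne_pdf(kerr(rho_light(alpha, g, np.pi, nb, N), chi), phi, x)
    p = prob(nbar)
    dp = (prob(nbar + eps) - prob(nbar - eps)) / (2 * eps)
    good = p > 1e-12
    return np.trapz(dp[good]**2 / p[good], x[good])

# with chi = 2*pi*g^2 the phase Phi_LO = 0 should already be (close to) optimal
alpha, g, nbar = 3.0, 0.38, 0.25
print(classical_fi(alpha, g, nbar, chi=2 * np.pi * g**2, phi=0.0))
print(classical_fi(alpha, g, nbar, chi=0.0, phi=0.0))
\end{verbatim}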
This observation is important: by using a Kerr nonlinear medium with $\chi = 2g_\mathrm{max}^2\pi$, a single measurement basis can detect $\bar{n}$ over a wide range of values, avoiding complex adaptive measurement methods. To show this more concretely, in Fig.~\ref{fig:cfi}(c), we plot the Fisher ratio $\mathcal{F}_C/\mathcal{F}_Q$ as a function of $\Phi_\mathrm{LO}$ for different values of $\bar{n}$ when the Kerr nonlinearity is tuned to $\chi = 2g_\mathrm{max}^2\pi$. As the figure shows, the maximum efficiency is achieved for $\Phi_\mathrm{LO} = 0$ or $\Phi_\mathrm{LO} = \pi$ for all values of $\bar{n}$. For the sake of completeness, in Fig.~\ref{fig:cfi}(d), we plot $\mathcal{F}_C/\mathcal{F}_Q$ as a function of $\Phi_\mathrm{LO}$ when $\chi$ is tuned to a non-optimal value $\chi = 2g_\mathrm{max}^2\pi/4$ for various values of $\bar{n}$. As is evident from the figure, for different values of $\bar{n}$ the peak of the curve varies, making an adaptive strategy essential. It is also interesting to briefly discuss what happens in the limit of large $\alpha$ (i.e. for $\alpha \gg 1$). As we explain in Appendix~\ref{a:linearized}, if one considers the linearized optomechanical Hamiltonian, the classical Fisher information $\mathcal{F}_C$ for any Gaussian (general-dyne) measurement \cite{GenoniDiffusone,Serafozzi}, which comprises the special case of homodyne detection, can be analytically evaluated. Remarkably, one can show that $\mathcal{F}_C$ goes to zero in the limit $\alpha \gg 1$ for any choice of the measurement. Non-Gaussian measurements are thus necessary not only to attain the ultimate limit set by the QFI in Eq.~(\ref{eq:limit-estimation}), but also, in the limit of large $\alpha$, to obtain nonzero information about the temperature. \section{Concluding remarks}\label{sec:conclusions} In this paper, we have suggested a scheme for measuring the temperature of a mechanical oscillator, initially in a thermal state, using coherent light as a probe when the optomechanical system operates in the nonlinear regime. Remarkably, our scheme reaches a precision which almost saturates the quantum bound quantified by the quantum Fisher information. To support our results, we analytically derive the temporal evolution of the reduced density matrix of the light probe, in which we find two different contributions: (i) a coherent phase due to the intrinsic non-Gaussian interaction term; and (ii) an incoherent diffusion process. The phase diffusion contribution is the only one encoding the mechanical oscillator's temperature. This suggests that the estimation performs better at low phonon excitations, i.e., low temperature, as increasing the mean phonon number leads toward a complete loss of information regarding the optical phase. The key part of our protocol to achieve quantum-limited precision is to place a nonlinear Kerr medium before the homodyne detector. The introduction of this medium significantly increases the precision as it helps to cancel the temperature-independent coherent phase of the light probe. Hence, the measurement outcomes are solely determined by the incoherent diffusion process, which encodes the initial temperature of the mechanical oscillator. Moreover, by choosing the Kerr nonlinearity to fully cancel the coherent phase, the local oscillator phase of the homodyne detection becomes independent of the temperature. This significantly simplifies the thermometry procedure, as it avoids the use of complex adaptive sensing methods. 
\section*{Acknowledgements} AB acknowledges the National Key R\&D Program of China, Grant No. 2018YFA0306703. VM thanks the Chinese Postdoctoral Science Fund for grant 2018M643435. MGG acknowledges support from a Rita Levi-Montalcini fellowship of MIUR. MGAP is member of INdAM-GNFM.
\section{Introduction} The description of Gorenstein rings plays a central role in algebra and geometry. Gorenstein ideals in codimension 1 and 2 are known to be complete intersections. Buchsbaum and Eisenbud (\cite{BuchsbaumEisenbud77}) showed that any Gorenstein ideal of codimension 3 is generated by $2k+1$ elements, $k\geq 1$, which are the submaximal Pfaffians of a skew-symmetric matrix of size $2k+1$. Codimension 4 Gorenstein ideals are not fully understood, but there are partial characterization results due to Kustin-Miller (\cite{KustinMiller82}) and Reid (\cite{Reid13}). The non-existence of any structure theorem for Gorenstein ideals in higher codimension is a main motivation for the following generalization. Let $I$ be a Gorenstein ideal of codimension $d \geq 4$ in a regular local ring or a graded polynomial ring $T$. Instead of considering $R = T/I$ as a $T$-module and describing its minimal free resolution, the idea is to reduce the codimension by considering $R$ as an algebra over some other regular local ring $S$. More precisely, let $\varphi\colon S \rightarrow R$ be a ring homomorphism which induces on $R$ the structure of a finitely generated $S$-module. Let us further assume that $R$ is a perfect Cohen-Macaulay $S$-module of codimension $ c= \dim S - \dim_S R \leq 3$. Then the Auslander-Buchsbaum formula implies that the minimal free resolution of $R$ as an $S$-module has length $\leq 3$, and the question is whether there exists a general description of such a minimal free resolution. In codimension 1, \cite{CataneseRegular} gives a characterization for the case that $R$ is the canonical ring of a regular surface of general type, which also holds in greater generality. Gorenstein algebras of codimension 2 and their special symmetric minimal free resolutions were described in \cite{Grassi96} and \cite{Boehning05}. In this article, we focus on Gorenstein algebras whose codimension is an odd number and adapt the ideas of Buchsbaum and Eisenbud to Gorenstein algebras after imposing a mild additional condition on the homomorphism $\varphi$. Throughout this article, all considered rings are commutative Noetherian with identity. To start with, we briefly recall some notation. \begin{definition}[\cite{BuchsbaumEisenbud77}, Section 2] \label{def_alternating} Let $S$ be a ring, and let $F$ be a finitely generated free $S$-module. We call a map $f\colon F \rightarrow F^{\ast} := \Hom_S(F,S)$ \textit{alternating} if, with respect to some (and hence any) basis and dual basis of $F$ and $F^{\ast}$ respectively, the matrix corresponding to $f$ is skew-symmetric, or equivalently, if $f^\ast = -f$. \end{definition} \begin{definition}[\cite{BrunsHerzog98}, Chapter 1]\label{def_perfect} Let $S$ be a ring, and let $M$ be a finitely generated $S$-module. Then the \textit{grade} of $M$ is defined as \[ \grade M = \min\{i \mid \Ext^i_S(M,S) \neq 0\}. \] Moreover, we call $M$ \textit{perfect} if $\grade M = \pd_S M$. \end{definition} \begin{definition} Let $(S,\mathfrak{m})$ be a regular local ring, and let $R$ be a ring. Moreover, let $\varphi\colon S \rightarrow R$ be a homomorphism of rings which induces on $R$ the structure of a finitely generated $S$-module. The number $c = \dim S - \dim_S R$ is called the \textit{codimension} of $R$. We call $R$ a \textit{Gorenstein ($S$-)algebra} or \textit{relatively Gorenstein of codimension $c$} if $R$ is a perfect $S$-module and \begin{equation*} R \cong \Ext^c_S(R,S) \end{equation*} as $R$-modules. 
\end{definition} \begin{remark}\label{rem_modulestructureExt} Note that the functoriality of $\Ext_S^{c}(,S)$ induces the structure of an $R$-module on $\Ext^c_S(R,S)$. Indeed if $r \in R$ and $\mu_r$ denotes the multiplication map on $R$ by $r$, then, for any $g \in \Ext_S^{c}(R,S)$, we set $rg = \Ext_S(\mu_r,S)(g) \in \Ext_S^{c}(R,S)$. \end{remark} The main result of this article is the following: \begin{theorem}\label{thm_mainthm} Let $(S,\mathfrak{m})$ be a regular local ring, and let $R$ be any ring. Let $\varphi \colon S \rightarrow R$ be a ring homomorphism which induces on $R$ the structure of a finitely generated $S$-module. Furthermore, set $\tilde{S} = S/\ann_S R$, and let $\tilde{\varphi}\colon \tilde{S} \rightarrow R$ be the induced injective homomorphism. Assume that the condition $$ (\diamondsuit) \text{\quad $\tilde{\varphi}_p \colon \tilde{S}_p \rightarrow R_p$ is an isomorphism for all minimal primes of $\tilde{S}$ }$$ holds. Let $c = \dim S - \dim_S R$ be the codimension. If $c = 2m+1$ for some $m\geq 0$, the following conditions are equivalent: \begin{itemize} \item $R$ is a Gorenstein algebra of codimension $c$. \item There exists a minimal free resolution of $R$ as an $S$-module of type \begin{equation}\label{eq_minres} 0 \leftarrow R \xleftarrow{d_0} F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \xleftarrow{d_{m}} F_m \xleftarrow{d_{m+1}} F_m^\ast \xleftarrow{d_{m}^\ast} \cdots \leftarrow F_1^\ast \xleftarrow{d_1^\ast} F_0^\ast \leftarrow 0, \end{equation} where $F_0 = S \oplus F_0' $ for some free module $F_0'$, $d_0 = ( \varphi \mid d_0')$ and $d_{m+1}^\ast = (-1)^m d_{m+1}$. \end{itemize} \end{theorem} \begin{notation} Let $S,R$ and $\varphi$ be as in Theorem \ref{thm_mainthm}. In the following, we denote by $e_0$ the element of $F_0$ corresponding to $1_S \in S \hookrightarrow F_0$. Thus, by the choice of $d_0$ in \eqref{eq_minres}, we have $d_0(e_0) = \varphi(1_S) = 1_R$. \end{notation} \begin{remark} Let us see how Theorem \ref{thm_mainthm} recognizes the known results in codimension 1 and 3. So let us assume that $R = S/I$ for some ideal $I \subseteq S$ of codimension $c$. Then the condition $(\diamondsuit)$ is trivially satisfied. If $c=1$, then $d_1\colon F_0 = S \leftarrow F_1$ is a symmetric matrix, and hence $I$ is generated by one element. If $c = 3$, then $\rank F_1 = 1+ \rank d_2$ is an odd number as any skew-symmetric matrix has an even rank and the result reduces to (a reformulation of) the structure theorem of Buchsbaum-Eisenbud for codimension 3 Gorenstein ideals. \end{remark} \begin{remark} Note that the equivalent conditions on $R$ in Theorem \ref{thm_mainthm} imply that $R$ is a Cohen-Macaulay $S$-module. Indeed, by the Auslander-Buchsbaum formula we have $$ \depth R = \depth S - \pd_S(R) = \dim S - \pd_S(R) = \dim_S(R).$$ \end{remark} \begin{remark} Note that if $R$ is an integral domain, then so is $\tilde{S}$, and the fact that $R$ is a finitely generated $\tilde{S}$-module implies that the condition $(\diamondsuit)$ is equivalent to $Q(\tilde{S}) \cong Q(R)$, or geometrically that the corresponding integral affine schemes are birational. 
\end{remark} Let us first show how condition $(\diamondsuit)$ implies that every $S$-linear homomorphism $g\colon R \rightarrow R$ is also $R$-linear, more precisely is the multiplication by an element in $R$: \begin{proposition}\label{prop_iso} Let $S$ be a local Cohen-Macaulay ring and let $\varphi\colon S \rightarrow R$ be a ring homomorphism which induces on $R$ the structure of a finitely generated Cohen-Macaulay $S$-module. Furthermore, let $I = \ann_S R$ and set $\tilde{S} = S/I$. Assume that the induced homomorphism $\tilde{S}_p \rightarrow R_p$ is an isomorphism for every minimal prime ideal of $\tilde{S}$. Then the natural homomorphism $R \rightarrow \Hom_S(R,R)$ is an isomorphism. \end{proposition} \begin{proof} First notice that any associated prime of $R$ as an $S$-module contains $I$. Thus, the minimal primes of ${\tilde{S}}$ coincides with the minimal primes of $R$ as an $S$-module (via the identification $\Spec({\tilde{S}}) \rightarrow \Spec(S)$). On the other hand, $R$ being a Cohen-Macaulay $S$-module implies that every associated prime of $R$ as an $S$-module is minimal. Consequently, $I$ is unmixed and every associated prime is of height $\dim S - \dim R$. For a minimal prime $p \subset \tilde{S}$, we have $\tilde{S}_p \cong R_p$ as $\tilde{S}_p$-modules by assumption. Thus, \begin{equation}\label{eq_min} \Hom_{\tilde{S}_p}(R_p,R_p) \cong \tilde{S}_p \cong R_p. \end{equation} Now let $\psi\colon R \rightarrow R$ be an arbitrary $S$-linear homomorphism and set $\alpha := \psi-\psi(1)\id_R$. We want to show that $\alpha$ is the zero homomorphism. The module $N = \im(\alpha)$ is an $\tilde{S}$-submodule of $R$ and for all minimal primes of ${\tilde{S}}$ we have $N_p = 0$ by \eqref{eq_min}. Suppose that $N \neq 0$. Then $N$ has an associated prime $q$ and $N_q \neq 0$. But $q$ also being an associated prime of $R$ and hence a minimal prime of ${\tilde{S}}$ gives a contradiction. \end{proof} \begin{remark}\label{rem_isob} Let $S, R$ and $\varphi$ be as in the setting of Proposition \ref{prop_iso}. Then the statement remains correct if we consider homomorphisms from $R$ to an isomorphic $\tilde{S}$-module. More precisely, let $B$ be an $R$-module which is isomorphic to $R$ as an $\tilde{S}$-module, then the homomorphism $$ B \rightarrow \Hom_{{\tilde{S}}}(R,B)$$ sending an element $b \in B$ to the multiplication homomorphism by $b$ is an isomorphism. \end{remark} Using Proposition \ref{prop_iso} we are now able to prove one part of Theorem \ref{thm_mainthm}: \begin{proof} [Proof of Theorem \ref{thm_mainthm}, "(skew-) symmetric resolution $\Rightarrow$ Gorenstein"] Let \[ 0 \leftarrow R \xleftarrow{d_0} F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \xleftarrow{d_{m}} F_m \xleftarrow{d_{m+1}} F_m^\ast \xleftarrow{d_{m}^\ast} \cdots \leftarrow F_1^\ast \xleftarrow{d_1^\ast} F_0^\ast \leftarrow 0, \] be a minimal free resolution of $R$ as an $S$-module with $d_{m+1}^\ast = (-1)^m d_{m+1}$. By assumption we have $$ \Ext^i_S(R,S) = 0 \text{ for all } i < \depth(S) - \dim_S R = \dim S - \dim_S R = c. $$ Thus, applying the functor $\Hom_S(,S)$ to the given minimal free resolution of $R$, we obtain a complex \[ 0 \leftarrow \Ext^c_S(R,S) \leftarrow F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \xleftarrow{d_{m}} F_m \xleftarrow{(-1)^m d_{m+1}} F_m^\ast \xleftarrow{d_{m}^\ast} \cdots \leftarrow F_1^\ast \xleftarrow{d_1^\ast} F_0^\ast \leftarrow 0, \] which is a minimal free resolution of $\Ext^c_S(R,S)$. 
Comparing these two complexes, we construct a commutative diagram \\ {\centering \begin{tikzpicture}[scale=\textwidth] \matrix(m)[matrix of math nodes, row sep=3.1em, column sep=2.4em, text height=1.1ex, text depth=0.21ex] { 0 & R & F_0 & \cdots & F_m & F_m^\ast & \cdots & F_0^\ast & 0 \\ 0 & \Ext^c_S(R,S) & F_0 & \cdots & F_m & F_m^\ast & \cdots & F_0^\ast & 0 \\}; \path[->,font=\scriptsize,>=angle 90] (m-1-2) edge (m-1-1) edge[dashed] node[left] {\tiny{$u$}} (m-2-2) (m-1-3) edge node[above] {\tiny{$d_0$}} (m-1-2) (m-1-4) edge node[above] {\tiny{$d_1$}}(m-1-3) (m-1-5) edge (m-1-4) (m-1-6) edge (m-1-5) (m-1-5) edge node[left] {\tiny{$\id_{F_m}$}} (m-2-5) (m-2-2) edge (m-2-1) (m-2-3) edge (m-2-2) (m-2-6) edge (m-2-5) (m-2-5) edge (m-2-4) (m-2-4) edge node[above] {\tiny{$d_1$}} (m-2-3) (m-1-3) edge node[left] {\tiny{$\id_{F_0}$}} (m-2-3) (m-2-6) edge node[above] {\tiny{$(-1)^m d_{m+1}$}} (m-2-5) (m-1-6) edge node[above] {\tiny{$d_{m+1}$}}(m-1-5) edge node[right] {\tiny{$(-1)^m\id_{F_m^\ast}$}} (m-2-6) (m-1-7) edge (m-1-6) (m-2-7) edge (m-2-6) (m-1-8) edge (m-1-7) edge node[right] {\tiny{$(-1)^m\id_{F_0^\ast}$}} (m-2-8) (m-2-8) edge (m-2-7) (m-1-9) edge (m-1-8) (m-2-9) edge (m-2-8) ; \end{tikzpicture} } where $u$ is the $S$-linear isomorphism $u\colon R \rightarrow \Ext_{S}^{c}(R,S)$ induced by the chain maps. Now, since $u$ is also $\tilde{S}$-linear, Proposition \ref{prop_iso} and Remark \ref{rem_isob} imply that $u$ is an $R$-linear homomorphism. Hence, \begin{equation*}\label{eq_extring} R \cong \Ext_{S}^{c}(R,S), \end{equation*} showing that $R$ is a Gorenstein $S$-algebra. \end{proof} Next let us assume that $R$ is a Gorenstein algebra of odd codimension $c = 2m+1$. Then the minimal free resolution of $R$ as an $S$-module is of the form \[ 0 \leftarrow R \leftarrow F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \leftarrow F_{c-1} \xleftarrow{d_c} F_c \leftarrow 0 \] Applying the functor $\Hom_S(,S)$ to the minimal free resolution of $R$, we obtain a complex $$ 0 \rightarrow \Ext^c_S(R,S) \leftarrow F_c^\ast \xleftarrow{d_c^\ast} F_{c-1}^\ast \leftarrow \cdots \leftarrow F_0^\ast \leftarrow 0$$ which is exact by the assumption on $R$. Moreover, as $R$ is a Gorenstein S-algebra, there exists an isomorphism $\tilde{\psi} \colon \Ext^c_S(R,S) \rightarrow R$ which lifts to isomorphisms $\psi_i\colon F_{c-i} ^\ast \rightarrow F_i$. Using this chain isomorphism, we get a minimal free resolution of $R$ of the form \[ 0 \leftarrow R \xleftarrow{d_0} F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \xleftarrow{d_{m}} F_m \xleftarrow{d_{m+1}\circ \psi_{m+1}} F_m^\ast \xleftarrow{d_{m}^\ast} \cdots \leftarrow F_1^\ast \xleftarrow{d_1^\ast} F_0^\ast \leftarrow 0. \] Now, similar as in the proof of the Buchsbaum-Eisenbud structure theorem, the aim is to show the existence of an isomorphism of complexes which yields a symmetric respectively skew-symmetric map in the middle. To do so, we first introduce a (skew-)commutative multiplication on the minimal free resolution $\textbf{F} =\bigoplus F_i$ of $R$. Afterwards, we use this multiplication to define a map of chain complexes between the resolution and its dual. The last step is to show that this chain map is indeed an isomorphism of complexes. \section{A Multiplicative Structure on the Minimal Free Resolution} Let $S,R$ and $\varphi\colon S \rightarrow R$ be as in Theorem \ref{thm_mainthm}. The assumption $(\diamondsuit)$ is not necessary for the following. 
Recall that we denote by $e_0$ the element in $F_0$ corresponding to $1_S$ under the inclusion $S \hookrightarrow F_0$. From now on $\textbf{F} = (F_{\scriptscriptstyle \bullet},d)$ will denote a (fixed) minimal free resolution of $R$ as an $S$-module \[ 0 \leftarrow R \leftarrow F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \leftarrow F_{c-1} \xleftarrow{d_c} F_c \leftarrow 0. \] Moreover, we consider $\textbf{F}$ as an graded $S$-module by calling an element $x\in F_i$ homogeneous of (homological) degree $|x| = i$. We denote by $\textbf{F} \otimes \textbf{F}$ the tensor product of $\textbf{F}$ by itself with differentials $$\delta(x \otimes y) = d(x) \otimes y + (-1)^{|x|}x \otimes d(y)$$ for homogeneous elements $x,y \in \textbf{F}$. \begin{proposition}\label{prop_multiplication} There exists a chain map $\mu\colon \textbf{F} \otimes \textbf{F} \rightarrow \textbf{F}$ such that, writing $ab$ for $\mu(a\otimes b)$, the following holds: \begin{enumerate}[(i)] \item $d(fg) = d(f)g + (-1)^{|f|}fd(g)$ for any $f,g \in \textbf{F}$ homogeneous,\label{item:prop1} \item $\mu$ is homotopy-associative,\label{item:prop2} \item the element $e_0 \in F_0$ acts as a unit for $\mu$ on $\textbf{F}$, that means $e_0g = g = ge_0 $ for any $g \in \textbf{F}$, \label{item:unitelement} \item $fg = (-1)^{|f|\cdot|g|}gf$ for any $f,g \in \textbf{F}$ homogeneous, that means $\mu$ is \textit{commutative}. \label{item:commut} \end{enumerate} \end{proposition} We will prove this statement in several steps. Let $\tilde{\mu}: R \otimes_S R \rightarrow R$ be the multiplication homomorphism. First we consider the following diagram \begin{equation*} \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.9em, column sep=2.9em, text height=1.5ex, text depth=0.25ex] {0 & R \otimes R & ({\textbf{F} \otimes \textbf{F}})_0 & ({\textbf{F} \otimes \textbf{F}})_1 & \cdots \\ 0 & R & F_0 & F_1 & \cdots \\}; \path[->,font=\scriptsize,>=angle 90] (m-1-2) edge (m-1-1) edge node[left] {$\tilde{\mu}$} (m-2-2) (m-1-3) edge node[above] {$d_0 \otimes d_0$} (m-1-2) (m-1-4) edge node[above] {$\delta_1$} (m-1-3) (m-1-5) edge node[above] {$\delta_2$} (m-1-4) (m-2-2) edge (m-2-1) (m-2-3) edge node[above] {$d_0$} (m-2-2) (m-2-4) edge node[above] {$d_1$} (m-2-3) (m-2-5) edge node[above] {$d_2$} (m-2-4) ; \end{tikzpicture} \end{equation*} where the first row is a complex and the second row is exact. \begin{lemma}\label{lem_liftmultiplication} Any chain map $\mu\colon {\textbf{F} \otimes \textbf{F}} \rightarrow \textbf{F}$ which is a lift of $\tilde{\mu}$ satisfies properties \eqref{item:prop1}-\eqref{item:prop2} of Proposition \ref{prop_multiplication}. \end{lemma} \begin{proof} Property \eqref{item:prop1} is just the commutativity of the diagram above with the induced chain map $\mu$. To verify the second property we have to show that the chain map $$ \rho \ := \mu \circ (\mu \otimes \id_\textbf{F} ) - \mu \circ (\id_\textbf{F} \otimes \mu )\colon \textbf{F}^{\otimes 3} \rightarrow \textbf{F}$$ is homotopic to zero. But $\rho$ is a lift of the map $\tilde{\mu} \circ (\tilde{\mu} \otimes \id_{R} ) - \tilde{\mu} \circ (\id_{R} \otimes \tilde{\mu})\colon R^{\otimes 3} \rightarrow R$ which is the zero map since $R$ is associative. Thus $\rho$ is homotopic to zero. \end{proof} \begin{remark} Similarly one can check that the map $\mu$ in Lemma \ref{lem_liftmultiplication} is homotopy-commutative. 
\end{remark} So it remains to show that there is a lift $\mu$ of $\tilde{\mu}$ which satisfies also properties \eqref{item:unitelement} and \eqref{item:commut}. To do this, we will first introduce the symmetric square of the complex $\textbf{F}$ as in \cite{BuchsbaumEisenbud77}. Let $M$ be the submodule of $\textbf{F} \otimes \textbf{F}$ generated by the relations $$ \{f \otimes g - (-1)^{|f| \cdot |g|}g \otimes f \mid f,g \in \textbf{F} \textup{ homogeneous} \}.$$ Since $\delta(M) \subseteq M$, the module \[ S_2(\textbf{F}) = (\textbf{F} \otimes \textbf{F})/ M \] inherits the structure of a complex of $S$-modules (with induced differentials $\bar{\delta}$). Let $n \geq 0$ and set $V = \oplus_{i + j = n, \ i<j} F_i \otimes F_{j}$. Then \[ S_2(\textbf{F})_n \cong \begin{cases} V & \text{if $n$ is odd}, \\ V \oplus \bigwedge^{2}(F_{n/2}) & \text{if } n \equiv 2 \mod 4 , \\ V \oplus S_2(F_{n/2}) & \text{if } n \equiv 0 \mod 4. \end{cases} \] In particular, $S_2(\textbf{F})$ is a complex of free $S$-modules. Let $\pi\colon \textbf{F} \otimes \textbf{F} \rightarrow S_2(\textbf{F})$ be the map of chain complexes where each $\pi_i$ is the canonical projection from $(\textbf{F} \otimes \textbf{F})_i$ to $S_2(\textbf{F})_i$. From the definition of $S_2(\textbf{F})$ we see that any lift $\mu\colon \textbf{F} \otimes \textbf{F} \rightarrow \textbf{F}$ of $\tilde{\mu}$ which factors through $\pi$ satisfies property \eqref{item:commut}. \begin{proof}[Proof of Proposition \ref{prop_multiplication}] To begin with, we set $\alpha_0 := \tilde{\mu} \circ (d_0 \otimes d_0)\colon F_0 \otimes F_0 \rightarrow R$. Since $R$ is a commutative ring, $\alpha_0$ factors through $\pi_0$ yielding a homomorphism $ \gamma_0\colon S_2(\textbf{F})_0 \cong S_2(F_0) \rightarrow R$. Now, as $S_2(\textbf{F})_0$ is free, there is a map $\beta_0\colon S_2(\textbf{F})_0 \rightarrow F_0$ such that $ \gamma_0 = d_0 \circ \beta_0.$ From this, setting $\mu_0 = \beta_0 \circ \pi_0$, we obtain a commutative diagram \begin{equation}\label{diag_comm} \begin{tikzpicture}[baseline=(current bounding box.east)] \matrix (m) [matrix of math nodes,row sep=1.9em,column sep=2.1em,minimum width=2em,text height=1.5ex, text depth=0.25ex] { & R \otimes R & & & & & F_0 \otimes F_0 \\ & & & & & & \\ & & & & & \color{red}S_2(\textbf{F})_0 & \\ 0& R & & & & & F_0 \\}; \path[-stealth] (m-1-2) edge node [left] {$\tilde{\mu}$} (m-4-2) (m-4-7) edge node [above] {$d_0$} (m-4-2) (m-1-7) edge node [above] {$d_0 \otimes d_0$} (m-1-2) (m-4-2) edge (m-4-1) (m-1-7) edge node[left=0.5em]{$\alpha_0$} (m-4-2) (m-1-7) edge[red] node[red,auto] {$\pi_0$} (m-3-6) (m-3-6) edge[red] node[red,auto] {$\gamma_0$} (m-4-2) (m-3-6) edge[red] node[red,auto] {$\beta_0$} (m-4-7) (m-1-7) edge node[left] {$\mu_0$} (m-4-7); \end{tikzpicture} \end{equation} Note that for any element $g \in F_0$ we have and $\gamma_0(\pi_0(e_0 \otimes g)) = \gamma_0(\pi_0(g \otimes e_0))$ and \[ \gamma_0(\pi_0(e_0 \otimes g)) =\alpha_0(g \otimes e_0) = d_0(g).\] Hence, we can choose $\beta_0$ in such a way that $ \beta_0(\pi_0(e_0 \otimes g)) = \beta_0(\pi_0(g \otimes e_0)) = g$. Consequently, $$ \mu_0(e_0 \otimes g) = \mu_0(g \otimes e_0) = g$$ for all $g \in F_0$. By the choice of $\gamma_0$, $$ 0 \leftarrow R \xleftarrow{\gamma_0} S_2(\textbf{F})_0 \xleftarrow{\bar{\delta}_1} S_2(\textbf{F})_1 \xleftarrow{\bar{\delta}_2} \ldots$$ is a complex of free $S$-modules. 
Comparing this complex $S_2(\textbf{F})$ with $\textbf{F}$ we can extend $\beta_0$ to a map of chain complexes $\beta\colon S_2(\textbf{F}) \rightarrow \textbf{F}$: \begin{center} \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] {0 & R & S_2(\textbf{F})_0 & S_2(\textbf{F})_1 & S_2(\textbf{F})_2 & \cdots \\ 0 & R & F_0 & F_1 & F_2 & \cdots \\}; \path[->,font=\scriptsize,>=angle 90] (m-1-2) edge (m-1-1) edge node[left] {$\id$} (m-2-2) (m-1-3) edge node[above] {$\gamma_0$} (m-1-2) edge[red] node[left,red] {$\beta_0$} (m-2-3) (m-1-4) edge node[above] {$\bar{\delta}_1$} (m-1-3) edge[red] node[left,red] {$\beta_1$} (m-2-4) (m-1-5) edge node[above] {$\bar{\delta}_2$} (m-1-4) edge[red] node[left,red] {$\beta_2$} (m-2-5) (m-1-6) edge (m-1-5) (m-2-2) edge (m-2-1) (m-2-3) edge node[above] {$d_0$} (m-2-2) (m-2-4) edge node[above] {$d_1$} (m-2-3) (m-2-5) edge node[above] {$d_2$} (m-2-4) (m-2-6) edge (m-2-5) ; \end{tikzpicture} \end{center} As above, we can choose the maps $\beta_i$ successively so that $$ \beta_i(\pi_i(e_0 \otimes g)) = \beta_i(\pi_i(g \otimes e_0)) = g$$ for any $g \in F_i$: Indeed, if $g \in F_1$, then $$ \beta_0(\bar{\delta}_1(\pi_1(e_0 \otimes g))) = \beta_0(\pi_0(\delta_1(e_0 \otimes g))) = \beta_0(\pi_0(e_0 \otimes d_1(g))) = d_1(g).$$ Hence, we can choose $\beta_1$ so that $ \beta_1(\pi_1(e_0 \otimes g)) = \beta_1(\pi_1(g\otimes e_0)) = g$ and we proceed similarly for $i \geq 2$. \par \smallskip Setting $\mu_i = \beta_i \circ \pi_i$ for $i \geq 1$, we obtain a chain map $\mu\colon \textbf{F} \otimes \textbf{F} \rightarrow \textbf{F}$ which factors through $\beta\colon S_2(\textbf{F}) \rightarrow \textbf{F}$ as desired. Finally, we summarize the constructed maps in one diagram: \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=3.8em, column sep=1.2em,text height=1.5ex, text depth=0.25ex]{ & & & R \otimes R& & (\textbf{F} \otimes \textbf{F})_0& & (\textbf{F} \otimes \textbf{F})_1 & & \cdots \\ & & & & S_2(\textbf{F})_0 & & S_2(\textbf{F})_1 & & \cdots &\\ 0& & R & & F_0 & & F_1 & & \cdots &\\}; \path[-stealth] (m-1-8) edge node[above] {$\delta_1$} (m-1-6) edge node[left] {$\pi_1$} (m-2-7) (m-1-6) edge (m-3-5) (m-1-6) edge (m-3-5) (m-2-7) edge [-,line width=8pt,draw=white] (m-2-5) edge node[above] {$\bar{\delta}_1$} (m-2-5) edge node[left] {$\beta_1$} (m-3-7) (m-2-5) edge node[left] {$\beta_0$} (m-3-5) (m-3-7) edge node[above] {$d_1$} (m-3-5) (m-1-6) edge node[left] {$\pi_0$} (m-2-5) (m-1-8) edge (m-3-7) (m-2-9) edge [-,line width=8pt,draw=white] (m-2-7) edge (m-2-7) (m-3-9) edge (m-3-7) (m-1-10)edge (m-1-8) (m-1-6) edge node[above] {$d_0 \otimes d_0 $} (m-1-4) (m-3-3) edge (m-3-1) (m-3-5) edge node[above] {$d_0$}(m-3-3) (m-2-5) edge node[above left] {$\gamma_0$} (m-3-3) (m-1-4) edge node[above left] {$\tilde{\mu}$} (m-3-3) ; \end{tikzpicture} \end{center} \end{proof} \section{Proof of Theorem ~\texorpdfstring{\ref{thm_mainthm}}{\ref*{thm_mainthm}}} As a next step towards proving the remaining part of Theorem \ref{thm_mainthm} we use the multiplication on $\textbf{F}$ from the last section and the Gorenstein property of $R$ to deduce an isomorphism between the given resolution of $R$ an its dual. 
So let us assume that $R$ is a Gorenstein algebra of odd codimension $c = 2m+1$. We know from the introductory part that $$ 0 \leftarrow \Ext^c_S(R,S) \leftarrow F_c^\ast \xleftarrow{d_c^\ast} F_{c-1}^\ast \leftarrow \cdots \leftarrow F_0^\ast \leftarrow 0$$ is a minimal free resolution of $\Ext^c_S(R,S)$. Furthermore, using the Gorenstein property, there exists an isomorphism $\tilde{\psi}\colon \Ext^c_S(R,S) \rightarrow R$ which lifts to an isomorphism $\psi_0\colon F_c^\ast \rightarrow F_0$. Let $\sigma \in F_c^\ast$ be the preimage of $e_0 \in F_0$ under this isomorphism. For any $0 \leq i \leq c$ we define a map $ h_i\colon \ F_i \otimes F_{c-i} \rightarrow S$ by sending $a \otimes b$ to $\sigma(ab)$. Moreover, for each $i$, this induces a map \begin{align*} t_i\colon & F_i \rightarrow \Hom_S(F_{c-i},S) = F_{c-i}^{\ast}\\ & a \mapsto t_i(a) \colon F_{c-i} \rightarrow S \\ & \quad \quad \quad \quad \quad b \mapsto h_i(a \otimes b) = \sigma(ab). \end{align*} \begin{lemma}\label{lem_mapsresdual} In the diagram \begin{center} \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { F_0 & F_1 & \cdots & F_m & F_{m+1} & \cdots \\ F_c^{\ast} & F_{c-1}^{\ast} & \cdots & F_{m+1}^\ast & F_{m}^\ast & \cdots \\}; \path[->,font=\scriptsize,>=angle 90] (m-1-2) edge node[above] {$d_1$}(m-1-1) edge node[left] {$t_1$} (m-2-2) (m-1-3) edge (m-1-2) (m-1-4) edge node[above] {$d_{m}$} (m-1-3) edge node[left] {$t_{m}$} (m-2-4) (m-2-4) edge (m-2-3) (m-2-3) edge (m-2-2) (m-2-2) edge node[above] {$d_c^{\ast}$} (m-2-1) (m-1-1) edge node[left] {$t_0$} (m-2-1) (m-1-5) edge node[above] {$d_{m+1}$} (m-1-4) edge node[left] {$t_{m+1}$} (m-2-5) (m-2-5) edge node[above] {$d_{m+1}^{\ast}$} (m-2-4) (m-1-6) edge (m-1-5) (m-2-6) edge node[above] {$d_{m}^\ast$} (m-2-5) ; \end{tikzpicture} \end{center} the rectangles with homomorphism $d_i \colon F_i \rightarrow F_{i-1}$ are commutative if $i$ is an odd number and anti-commutative if $i$ is even. For the diagram in the middle we have \[ t_m \circ d_{m+1} = (-1)^{m} d_{m+1}^\ast \circ t_{m+1}. \] Moreover, $t_{c-i} = t_i^\ast$. \end{lemma} \begin{proof} Let $i \in \{ 1,\ldots,c\}$, and let $a \in F_i$. We have to compare $t_{i-1}(d_i(a))$ and $d_{c+1-i}^\ast(t_i(a))$ in $F_{c+1-i}^\ast$. For any element $b \in F_{c+1-i}$ we have \begin{align*} t_{i-1}(d_i(a))(b) &= \sigma(d_i(a)b) \\ d_{c+1-i}^\ast(t_i(a))(b) & = \sigma(ad_{c+1-i}(b)) \end{align*} On the other hand, as $ab = 0$ (the product lies in homological degree $c+1$, where $\textbf{F}$ vanishes), $$0 = d_{i}(a)b + (-1)^{i}ad_{c+1-i}(b).$$ Hence, for an odd integer $i$ we have $d_i(a)b = ad_{c+1-i}(b)$, whereas for an even integer $i$ we have $d_i(a)b = -ad_{c+1-i}(b)$. This shows the first claim. Next let $i \in \{0,\ldots,m\}$, and let $a_i \in F_i$, $b_{c-i} \in F_{c-i}$. Then $t_i(a_i)(b_{c-i}) = \sigma(a_ib_{c-i})$ and $t_{c-i}(b_{c-i})(a_i) = \sigma(b_{c-i}a_i)$. Thus, the second claim follows from the fact that $a_ib_{c-i} = b_{c-i}a_i$ for every $i$, which holds by Proposition \ref{prop_multiplication}\eqref{item:commut} since $i(c-i)$ is even for odd $c$. \end{proof} As a final step, we show that the $t_i$ are indeed isomorphisms. Note that in the setting of \cite{BuchsbaumEisenbud77} we have $F_0 = S = F_c$, which implies directly that $t_0$ is the identity map (lifting the identity on the corresponding rings) and thus, that each $t_i$ is an isomorphism. To show this in our setting, we now make use of Proposition \ref{prop_iso}. \begin{proposition}\label{prop_chainmapiso} For each $i=0,\ldots,c$, the map $t_i$ is an isomorphism. 
\end{proposition} \begin{proof} Let $\tilde{t}\colon R \rightarrow \Ext^c_S(R,S)$ be the $S$-linear map induced by the chain maps in Lemma \ref{lem_mapsresdual}. Since both complexes are minimal free resolutions, it is enough to show that $\tilde{t}$ is an isomorphism. Now let $\tilde{\psi}\colon \Ext^c_S(R,S) \rightarrow R$ be the isomorphism introduced at the beginning of this section with induced isomorphism $\psi_0: F_c^\ast \rightarrow F_0$. Let us see what happens with the element $e_0 \in F_0$ in the following diagram: \begin{center} \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.4em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { 0 & R & F_0 & \cdots \\ 0 & \Ext^c_S(R,S) & F_c^\ast & \cdots \\ 0 & R & F_0 & \cdots \\}; \path[->,font=\scriptsize,>=angle 90] (m-1-2) edge (m-1-1) edge node[left] {$\tilde{t}$} (m-2-2) (m-1-3) edge node[above] {$d_0$} (m-1-2) edge node[left] {$t_0$} (m-2-3) (m-1-4) edge (m-1-3) (m-2-2) edge (m-2-1) edge node[left] {$\psi$} (m-3-2) (m-3-2) edge (m-3-1) (m-3-3) edge node[above] {$d_0$} (m-3-2) (m-2-3) edge (m-2-2) edge node[left] {$\psi_0$} (m-3-3) (m-2-4) edge (m-2-3) (m-3-4) edge (m-3-3) ; \end{tikzpicture} \end{center} Under the homomorphism $t_0$, the element $e_0 \in F_0$ is mapped to $\sigma$. Indeed for any $b \in F_c$ we have $t_0(e_0)(b) = \sigma(e_0b) = \sigma(b)$. Hence, by the choice of $\sigma$ we know that $\psi_0(t_0(e_0)) = e_0$ and, consequently, $\tilde{\psi}(\tilde{t}(1_R)) = 1_R$ as $d_0(e_0) = 1_R$. Now, using Proposition \ref{prop_iso}, we know that the natural homomorphism $R \rightarrow \Hom_{{\tilde{S}}}(R,R)$ is an isomorphism which implies that $\tilde{\psi} \circ \tilde{t}$ is the identity on $R$. Note that both $\tilde{\psi}$ and $\tilde{t}$ are also ${\tilde{S}}$-linear. Thus, since $\tilde{\psi}$ isomorphism, so is $\tilde{t}$. \end{proof} The proof of the remaining part of Theorem $\ref{thm_mainthm}$ is a direct consequence from the previous results: \hspace{-1pt} \begin{proof}[Proof of Theorem \ref{thm_mainthm}, "Gorenstein $\Rightarrow$ (skew-) symmetric resolution"] By Proposition \ref{prop_chainmapiso}, the homomorphism $t_{m+1}\colon F_{m+1} \rightarrow F_{m}^\ast$ is an isomorphism. Then $\widetilde{d_{m+1}} := d_{m+1} \circ t_{m+1}^{-1}$ is a homomorphism from $F_m^{\ast} $ to $F_m$ and, as $ t_{m+1}^{\ast} \circ d_{m+1} = t_m\circ d_{m+1} = (-1)^m d_{m+1}^{\ast} \circ t_{m+1} $, we have: \begin{align*} \widetilde{d_{m+1}}^{\ast} & = (d_{m+1} \circ t_{m+1}^{-1})^{\ast} = ({t_{m+1}^{-1}})^{\ast} \circ d_{m+1}^{\ast} \\ & = (-1)^m d_{m+1} \circ t_{m+1}^{-1} = (-1)^m \widetilde{d_{m+1}}. \end{align*} Consequently, $$ 0 \leftarrow R \leftarrow F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \leftarrow F_{m} \xleftarrow{\widetilde{d_{m+1}}} F_m^\ast \leftarrow \cdots \leftarrow F_1^{\ast} \xleftarrow{d_1^{\ast}} F_0^{\ast} \leftarrow 0$$ is a minimal free resolution of $R$ with symmetric or alternating middle map as desired. \end{proof} \section{The graded case} We reformulate our structure theorem in the graded case. The proofs are entirely along the same line as in the local case and are omitted here. \begin{definition} Let $S$ be a positively graded polynomial ring, and let $R$ be a finite graded $S$-algebra. By $c = \dim S - \dim_S R$ we denote the codimension of $R$. Then $R$ is called a \textit{Gorenstein algebra of codimension $c$ and twist $t \in {\mathbb Z}$} if $$ R \cong \Ext^c_S(R,S(t))$$ as $R$-modules. 
\end{definition} \begin{theorem}\label{thm_mainthmgraded} Let $S$ be a positively graded polynomial ring, and let $R$ be any graded ring. Let $\varphi \colon S \rightarrow R$ be a homogeneous ring homomorphism which induces on $R$ the structure of a finitely generated graded $S$-module. Furthermore, let $\tilde{S} = S/\ann_S R$ and $\tilde{\varphi}\colon \tilde{S} \rightarrow R$ be the induced injective homomorphism. Assume that the condition $$ (\diamondsuit) \text{\quad $\tilde{\varphi}_p \colon \tilde{S}_p \rightarrow R_p$ is an isomorphism for all minimal primes of $\tilde{S}$ }$$ holds. If the codimension $ c = \dim S - \dim_S R$ is an odd integer $2m+1$, then the following are equivalent: \begin{itemize} \item $R$ is a Gorenstein algebra of codimension $c$ and twist $t$. \item There exists a minimal free resolution of $R$ as an $S$-module of type \begin{equation} 0 \leftarrow R \xleftarrow{d_0} F_0 \xleftarrow{d_1} F_1 \leftarrow \cdots \xleftarrow{d_{m}} F_m \xleftarrow{d_{m+1}} F_m^\ast(t) \xleftarrow{d_{m}^\ast} \cdots \leftarrow F_1^\ast(t) \xleftarrow{d_1^\ast} F_0^\ast(t) \leftarrow 0, \end{equation} where $F_0 = S \oplus F_0'$ for some graded free module $F_0'$, $d_0 = ( \varphi \mid d_0')$ and $d_{m+1}^\ast = (-1)^m d_{m+1}$. \end{itemize} \end{theorem} \section{Examples} Finally, we present some applications of our structure result for the cases $c = 1$ and $c =3$ originating from algebraic geometry. \begin{example} As a first example we study the canonical ring of a smooth general curve $C$ of genus $6$. Canonically embedded, $C$ is a curve of degree 10 in $\P^5_{\langle x_0,\ldots,x_4,y \rangle}$ and a quadratic section of a del Pezzo surface $Y \subset \P^5$ of degree 5 uniquely determined by $C$. The anti-canonical ring of $Y$ is a Gorenstein ring of dimension 3 and $Y$ is defined by the submaximal Pfaffians of a skew-symmetric $5\times 5$ matrix $m$. Let $T = \Bbbk[x_0,\ldots,x_4,y]$. The canonical ring $R(C)$ is of the form $T/I$, where $I$ is a Gorenstein ideal of codimension 4, with a minimal free resolution of the form \[0 \leftarrow R(C) \leftarrow T \leftarrow T(-2)^6 \leftarrow T(-3)^5 \oplus T(-4)^5 \leftarrow T(-5)^6 \leftarrow T(-7)^1\leftarrow 0.\] Let us assume that \[m = \begin{pmatrix} 0 & y & l_1 & l_3 & x_0 \\ -y & 0 & l_2 & l_4 & x_1 \\ -l_1 & -l_2 & 0 & y & x_2 \\ -l_3 & -l_4 & -y & 0 & x_3 \\ -x_0 & -x_1 & -x_2 & -x_3 & 0 \\ \end{pmatrix}\] where $l_1,\ldots,l_4 \in \Bbbk[x_0,\ldots,x_4]$ are some non-zero linear forms, and that the quadratic section $f$ defining $C$ is of the form $x_4y + q$ with $q \in S:=\Bbbk[x_0,\ldots,x_4]$ being a quadratic form. Projecting from the point $p = (0:0:0:0:0:1)$ to $\P^4_{\langle x_0,\ldots,x_4 \rangle}$, the image of $Y$ is a surface with a unique singular point $q$, whereas $C$ is isomorphic to its image in $\P^4$. Now $R(C)$ is a finitely generated Cohen-Macaulay $S$-module and an $S$-algebra of codimension 3. Furthermore, the condition $(\diamondsuit)$ is satisfied since $\Proj(R(C))$ and $\Proj(S/\ann_S(R(C)))$ are isomorphic curves. Hence, we can apply Theorem \ref{thm_mainthmgraded} to obtain a minimal free resolution of $R(C)$ as an $S$-module of the form \[0 \leftarrow R(C) \leftarrow F_0 \xleftarrow{d_1} F_1 \xleftarrow{d_2} F_1^\ast(t) \xleftarrow{d_1^\ast} F_0^\ast(t) \leftarrow 0\] with an alternating map $d_2$. 
Moreover, a computation shows that the minimal free resolution is of the form \[0 \leftarrow R(C) \leftarrow \begin{array}{c} S \\ \bigoplus \\ S(-1) \end{array} \xleftarrow{d_1} \begin{array}{c} S(-2)^5 \\ \bigoplus \\ S(-3) \end{array} \xleftarrow{d_2} \begin{array}{c} S(-4)^5 \\ \bigoplus \\ S(-3) \end{array} \xleftarrow{d_1^\ast} \begin{array}{c} S(-5) \\ \bigoplus \\ S(-6) \end{array} \leftarrow 0\] with a possible choice of differentials \begin{small} \[ d_ 1 = \begin{pmatrix} {x}_{3}{l}_{1}-{x}_{2}{l}_{3}& {x}_{3}{l}_{2}-{x}_{2}{l}_{4} & -x_1l_1+x_0l_2 & -x_1l_3+x_0l_4 & q & x_4q(l) \\ {x}_{0}&{x}_{1}&{x}_{2}&{x}_{3}&{x}_{4}& q \end{pmatrix}\] \end{small} where $q(l) = l_1l_4-l_2l_3$, and \[ d_2 = \begin{pmatrix} 0& q&{-{x}_{4}{l}_{4}}&{x}_{4}{l}_{2}&-{x}_{3}{l}_{2}+{x}_{2}{l}_{4}&{x}_{1}\\ & 0&{x}_{4}{l}_{3}&{-{x}_{4}{l}_{1}}&{x}_{3}{l}_{1}-{x}_{2}{l}_{3}&{-{x}_{0}}\\ & & 0& q &{x}_{1}{l}_{3}-{x}_{0}{l}_{4}&{x}_{3}\\ & & & 0&-{x}_{1}{l}_{1}+{x}_{0}{l}_{2}&{-{x}_{2}}\\ & &-\text{skew} & & 0 & 0\\ & & & & & 0 \end{pmatrix}. \] \end{example} In the first example, we used the (known) general description of the canonical curve $C \subseteq \P^5$ to deduce a minimal free resolution of $R(C)$ as an $S$-module. However, the strength of our structure result lies in a reversed approach, that means constructing an (unknown) variety via some projection to a lower dimensional projective space. We will sketch this approach in the next examples. \begin{example} As an example for codimension $c = 1$ we consider a minimal surface $X$ of general type with $K^2 = 6$, $p_g(X) = h^0(X,K_X) = 4$ and $q(X) = h^1(X,{\cal O}_X) = 0$ whose canonical system $|K_X|$ has no base points and defines a birational morphism. These surfaces were completely described in \cite{CataneseRegular} by the following method. First, choose a minimal set of algebra generators of the canonical ring $R(X)$. Let $u_0,\ldots,u_3$ be a basis of $H^0(X,K_X)$. Now $h^0(X,2K_X) = K_X^2+ \chi = 11$ implies that we need one further generator in degree 2, denoted by $x_0$. Now let us consider $R(X)$ as an $S := \Bbbk[u_0,\ldots,u_3]$- module, and let $\varphi\colon S \rightarrow R(X)$ be the natural ring homomorphism. Since the morphism of projective schemes induced by $\varphi$ is birational onto its image, the morphism $\varphi$ satisfies condition $(\diamondsuit)$. Hence, there exists a minimal free resolution of the type $$ 0 \leftarrow R(X) \leftarrow F_0 \xleftarrow{d_1} F_0^\ast(t) \leftarrow 0,$$ where $t = -5$ and $F_0 = S \oplus S(-2)$. Thus, as a starting point for constructing a surface $X$ with the given invariants we choose a symmetric matrix $$ d_1 = \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix}$$ whose entries are homogeneous polynomials in $S$ with $\deg(a_{11}) = 5, \deg(a_{12}) = 3$ and $\deg(a_{22}) = 1$. Furthermore, we choose the entries so that $f = \det(d_1)$ defines an irreducible surface in $\P^3$. In \cite{CataneseRegular}, Catanese gives a necessary and sufficient condition, denoted as the rank condition, for $R := \coker d_1$ carrying the structure of a commutative ring: \par \begingroup \leftskip=6cm \noindent \emph{ (R.C.) Let $h$ be the number of rows of $d_1$, and let $d_1'$ be the matrix obtained by deleting the first row of $d_1$. Then the ideal generated by the entries of $\bigwedge^{h-1}d_1$ coincides with the ideal generated by the entries of $\bigwedge^{h-1}d_1'$}. 
\endgroup \par In this example, the rank condition is satisfied if $a_{11} = c_2a_{12} + c_4{a_{22}}$, where $c_2$ is a quadratic form in $S$ and $c_4$ a quartic form. Furthermore, if (R.C.) is satisfied, we can compute the remaining defining relations of $R(X)$ as a $\Bbbk$-algebra and see that the canonical model $\Proj(R(X))$ can be embedded in $\P(1^4,2)$ as a complete intersection defined by the relations \begin{align*} a_{12}+a_{22}x_0 &= 0, \\ x_0^2 - c_2x_0+c_4 & = 0. \end{align*} \end{example} \begin{example} Describing the canonical ring of a numerical Godeaux surface has served as a main motivation for the presented structure result. This description is a crucial tool in the construction of numerical Godeaux surfaces in \cite{Schreyer05} and \cite{Stenger18}. Recall that a numerical Godeaux surface $X$ is a minimal surface of general type with $K_X^2 =1$ and $p_g(X) = q(X) = 0$. The canonical ring $R(X)$ is a finitely generated positively graded $\Bbbk={\mathbb C}$-algebra. Studying its generators and relations, one can deduce that $R(X)$ is of the form \begin{equation}\label{eq_canringgod} \Bbbk[x_0,x_1,y_0,\ldots,y_3,z_0,\ldots,z_3,w_0,w_1,w_2]/I, \end{equation} where $\deg(x_i) =2$, $\deg(y_j) = 3$, $\deg(z_j) = 4$, $\deg(w_k) = 5$, and $I$ is a Gorenstein ideal of codimension 10 (for more details and proofs we refer to \cite{Stenger18}, Chapter 3). Thus, the minimal free resolution of $R(X)$ has length 10 and finding a general description seems hopeless. The idea is to study $R(X)$ as a graded $S := \Bbbk[x_0,x_1,y_0,\ldots,y_3]$-module, or geometrically, via the projection from the canonical model $\Proj(R(X))$ to $\P(2^2,3^4)$. \begin{proposition} Let $X$ be a numerical Godeaux surface, and let $S$, $R(X)$ and $\varphi$ be as above. Then $R(X)$ is a Gorenstein $S$-algebra of codimension 3 and twist $-17$ satisfying $(\diamondsuit)$. \end{proposition} \begin{proof} See \cite{Stenger18}, Chapter 4. \end{proof} \begin{remark} The canonical ring $R(X)$ is a finitely generated $S$-module since the bi- and tricanonical systems of $X$ have no common base points by classical results of Miyaoka (\cite{Miyaoka}). Moreover, $(\diamondsuit)$ is satisfied because the tricanonical system $|3K_X|$ induces a birational map onto its image in $\P^3$, and so does the morphism $\tilde{\varphi}\colon \Proj(R(X)) \rightarrow \Proj(S)$ with image $\Proj(S/\ann_S(R(X)))$. \end{remark} \begin{corollary} Let $X$ be a numerical Godeaux surface. Then there exists a minimal free resolution of $R(X)$ as an $S$-module of type \[ 0 \leftarrow R(X) \leftarrow F_0 \xleftarrow{d_1} F_1 \xleftarrow{d_2} F_1^\ast(-17) \xleftarrow{d_1^\ast} F_0^\ast(-17) \leftarrow 0,\] where $d_2$ is alternating. \end{corollary} \begin{remark} The construction method for numerical Godeaux surfaces from \cite{Schreyer05} and \cite{Stenger18} originally focuses on constructing the projected surface of codimension 3, or equivalently, an $S$-module $R$, by constructing maps $d_1$ and $d_2$. In \cite{Stenger18}, we introduce a sufficient condition on the entries of the matrix $d_1$ for $R$ to carry the structure of a commutative algebra. If this condition is satisfied, then we can recover the ring structure of $R(X)$, that is, all its defining relations in \eqref{eq_canringgod}, from its structure as an $S$-module using Diagram \eqref{diag_comm}. Thus, our structure result gives a powerful tool for constructing (the canonical model of) a numerical Godeaux surface from a suitable model in codimension 3. 
For more details, we refer to \cite{Stenger18}, Chapter 5. \end{remark} \end{example} \noindent \textit{Acknowledgments.} The author would like to thank Frank-Olaf Schreyer for the helpful discussions and comments. This work has been supported by the German Research Foundation (DFG), TRR 195 "Symbolic tools in mathematics and their applications".
\section{Introduction: Superfast accurate LRA -- the State of the Art}\label{sintro} {\bf 1. Introduction: Superfast accurate LRA -- the State of the Art and our study.} {\em Low-rank approximation (LRA)} of a matrix is one of the most fundamental problems of Numerical Linear Algebra and Data Mining and Analysis, with applications ranging from machine learning theory and neural networks to term document data and DNA SNP data (see surveys \cite{HMT11}, \cite{M11}, and \cite{KS16}). Matrices representing Big Data are so immense in size that realistically one can only access and process a tiny fraction of their entries. Quite typically, however, they admit LRA, that is, are close to low rank matrices or equivalently have {\em low numerical rank}.\footnote{We use such concepts as ``low", ``small", ``nearby", etc. defined in context.} This can be decisive because one can operate with low rank matrices {\em superfast} -- by using sublinear arithmetic time and memory space, that is, by using much fewer flops and memory cells than an input matrix has entries. Can one, however, compute accurate LRA superfast? An adversary argument shows that every superfast algorithm fails already on a small family of matrices filled with 0s except for a single entry filled with 1 (see Appendix \ref{shrdin}), but as we showed in \cite{PLSZ16} and \cite{PLSZ17} some specified superfast algorithms compute accurate LRA of all matrices admitting LRA except for a rather narrow subclass.\footnote{Likewise superfast correctness verification is possible under some additional input information or assumptions on the input, although not for the worst case input and not even for the small input family of Appendix \ref{shrdin}.} We are going to revisit and to extend that study. {\bf 2. Randomized accurate LRA.} In \cite{PLSZ16} and \cite{PLSZ17} we rely on an algorithm from \cite{TYUC17}, where \cite[Theorems 4.7 and 4.8]{CW09} are cited as the source.\footnote{The algorithms and estimates of \cite[Theorems 4.7 and 4.8]{CW09} use Rademacher (rather than Gaussian) matrices.} The algorithm reduces the computation of LRA of a fixed $m\times n$ matrix $M$ of numerical rank $\rho$ to the computation of LRA of a $k\times l$ auxiliary matrix $FMH$ for any pair of integers $k$ and $l$ such that $\rho\le k\le m~{\rm and}~ \rho\le l\le n$ and for random multipliers $F$ of size $k\times m$ and $H$ of size $n\times l$. Based on the known techniques of random subspace sampling (cf. \cite{HMT11}) Tropp et al. proved in \cite{TYUC17} that with a high probability (hereafter {\em whp}) the expected value of the Frobenius error norm of the computed LRA of $M$ stays within a factor of $\frac{kl}{(k-l)(l-\rho)}$ of its minimum provided that $F$ and $H$ are standard Gaussian (aka normal) random matrices; this factor is close to 1 for $k\gg l\gg \rho$.\footnote{Hereafter we call such matrices {\em Gaussian} and write ``$\gg$" for ``much greater than" and ``$\ll$" for ``much smaller than".} Similar results, with a little smaller upper bound on the error probability and under lower bounds of order $\rho\log(\rho)$ on $k$ and $l$, have been proved in the case where the multipliers $F$ and $H$ are {\em SRHT} or {\em SRFT} matrices (that is, the matrices of subsampled randomized Hadamard or Fourier transforms). For $k\ll m$ and/or $l\ll n$ the matrix $FMH$ has small size, so that the known algorithms compute its LRA superfast. The computation of this matrix, however, is not superfast in the cases of Gaussian, SRHT and SRFT matrices. 
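To make the scheme just described concrete, the following Python sketch implements the two-sided randomized sketching of \cite{TYUC17}: it forms the sketches $MH$ and $FM$ with Gaussian multipliers and returns the factors of the approximation $(MH)\,(FMH)^{+}(FM)$. The test sizes, the noise level, and the function names are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def sketch_lra(M, k, l, seed=0):
    # Two-sided sketching: Y = M*H (m x l), W = F*M (k x n), M_hat = Y * pinv(F*Y) * W.
    rng = np.random.default_rng(seed)
    m, n = M.shape
    F = rng.standard_normal((k, m))   # k x m Gaussian multiplier
    H = rng.standard_normal((n, l))   # n x l Gaussian multiplier
    Y = M @ H                         # range sketch,    m x l
    W = F @ M                         # co-range sketch, k x n
    X = Y @ np.linalg.pinv(F @ Y)     # m x k
    return X, W                       # M_hat = X @ W has rank at most l

# toy test: a 500 x 400 matrix of numerical rank rho = 10
rng = np.random.default_rng(1)
rho = 10
M = rng.standard_normal((500, rho)) @ rng.standard_normal((rho, 400))
M += 1e-6 * rng.standard_normal(M.shape)
X, W = sketch_lra(M, k=40, l=20)
print(np.linalg.norm(M - X @ W) / np.linalg.norm(M))  # a small multiple of the optimal error
\end{verbatim}
Note that forming $FM$ and $MH$ with dense Gaussian multipliers costs order $mn(k+l)$ flops, which is precisely the non-superfast step discussed above. {\bf 3. 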
{\bf 3. Dual superfast accurate LRA.} In \cite{PLSZ16} and \cite{PLSZ17} we called the latter approach primal and studied its {\em dual} variation where the multipliers $F$ and $H$ are fixed and an input matrix $M$ admitting LRA is random.\footnote{We assume that in a customary representation (\ref{eqrnkrho}) of a matrix $M$ admitting its LRA one or both of the matrices $U$ and $V$ are Gaussian and the other one or two factors are well-conditioned matrices of full rank; in that case with no loss of generality we can assume that the factor $T$ is a diagonal matrix.} Then we proved that the computed LRA is accurate whp for any pair of orthogonal or well-conditioned multipliers $F$ and $H$. In \cite{PLSZ16} and \cite{PLSZ17} we described some classes of sparse multipliers with which the LRA algorithms are {\em superfast} and proposed some policies of their successive application which increased the probability of computing accurate LRA of random inputs. We observed good agreement of this formal study with the results of our extensive tests on real world matrices, including some test matrices from \cite{HMT11}. {\bf 4. Generation of sparse multipliers.} Next we recall and extend a particular family of sparse multipliers from \cite{PLSZ16} and \cite{PLSZ17}, defined by means of abridging the classical recursive processes of the generation of $n\times n$ SRHT and SRFT matrices in $k=\log_2(n)$ recursive steps for $n=2^k$. The processes begin with the sparse matrix $\big (\begin{smallmatrix} I & ~~I \\ I & -I\end{smallmatrix}\big )$ for the $\frac{n}{2}\times \frac{n}{2}$ identity matrix $I=I_{n/2}$ and recursively fill it with nonzero entries. In $k$ steps it becomes dense, and after random unitary diagonal scaling its submatrix made up of a fixed smaller number of random columns or rows turns into an SRHT or SRFT multiplier. With such a multiplier accurate LRA of any matrix $M$ admitting LRA is output whp, but in all our extensive tests the output remained as accurate when we used instead {\em abridged SRHT or SRFT multipliers} computed in at most three recursive steps. Similarly, a square Gaussian matrix has been decomposed in \cite{PLSZ16}, \cite{PLSZ17} into the product of random bidiagonal and random permutation matrices, and then a rectangular multiplier has been obtained from it by means of row or column sampling. Alternatively, in the spirit of \cite[Remark 4.6]{HMT11}, we can apply QR orthogonalization to any $n\times n$ matrix $G$, thus decomposing it into the product $G=Q_1Q_2\cdots Q_s R$ where $Q_1,Q_2,\dots,Q_s$ are sparse orthogonal matrices of Givens rotations or Householder transforms and $R$ is a right (upper) triangular matrix. Then the input matrix $M$ can be multiplied by a matrix $F$ or $H$ obtained by abridging a partial product $P_i=Q_1Q_2\cdots Q_i$ for $i\le s$. Here the multipliers $F$ and $H$ are made of a fixed number of the first rows or columns of $P_i$, respectively. One can combine such preprocessing with diagonal scaling and random permutations. {\bf 5. Three-factor LRA.} Before proceeding any further we formalize the computation of LRA. Recall that an $m\times n$ matrix $\tilde M$ has rank at most $\rho$, $\rank(\tilde M)\le \rho$, if \begin{equation}\label{eqrnkr} \tilde M=XY,~X\in \mathbb C^{m\times \rho},~{\rm and}~Y\in \mathbb C^{\rho\times n}.
\end{equation} Likewise an $m\times n$ matrix $M$ has numerical rank at most $\rho$, $\nrank(M)\le \rho$, if it admits a close approximation by a matrix $\tilde M$ of rank at most $\rho$ or equivalently if there exist three matrices $X$, $Y$ and $E$ such that \begin{equation}\label{eqlra} M=XY+E~{\rm where}~E\approx O,~X\in \mathbb C^{m\times \rho},~{\rm and}~Y\in \mathbb C^{\rho\times n}. \end{equation} We naturally generalize such a 2-factor LRA $XY$ of $M$ to a 3-factor LRA \begin{equation}\label{eqrnkrho} M=UTV+E,~{\rm where}~E\approx O,~U\in \mathbb C^{m\times k},~T\in \mathbb C^{k\times l},~V\in \mathbb C^{l\times n}, \end{equation} $\rho\le k\le m$, $\rho\le l\le n$, and the matrices $U$, $T$, and $V$ may have full rank exceeding $\rho$. As a special case of a 3-factor LRA $UTV$ of (\ref{eqrnkrho}) for $k=l$ and $T=I_k$ we obtain a 2-factor LRA, which satisfies (\ref{eqlra}) for $k=\rho$. Moreover, the pairs of maps $UT\rightarrow X$ and $V\rightarrow Y$ as well as $U\rightarrow X$ and $TV\rightarrow Y$ turn a 3-factor LRA $UTV$ of (\ref{eqrnkrho}) into a 2-factor LRA $XY$. {\bf 6. SVD and its truncation.} $M=U_{M}\Sigma_{M}V^*_{M}$ is the {\em Compact Singular Value Decomposition (SVD)} of a matrix $M$ of rank $\rho$ for two matrices with orthonormal columns of its left and right singular vectors $U_{M}\in \mathbb C^{m\times \rho}$ and $V_{M}\in \mathbb C^{n\times\rho}$, respectively, and the diagonal matrix of its singular values $\Sigma_{M}=\diag(\sigma_j(M))_{j=1}^{\rho}$ such that $\sigma_1(M)\ge \sigma_2(M)\ge \dots\ge \sigma_{\rho}(M)>0$. The matrix $M^+=V_{M}\Sigma_{M}^{-1}U_{M}^*$ is the Moore--Penrose pseudo inverse of $M$; $M^+=M^{-1}$ for a nonsingular matrix $M$. An important 3-factor LRA of $M$ is given by the $\rho$-{\em truncation} of its SVD, obtained by means of keeping the $\rho$ top (that is, largest) singular values of $M$ and setting to 0 all the other ones, thus transforming the SVD of $M$ into its {\em top SVD} and minimizing both spectral and Frobenius norms $||E||$ and $||E||_F$ of the error matrix $E$ of rank-$\rho$ approximation of $M$. \begin{lemma}\label{letrnc} {\rm (The minimal error norm for an LRA: \cite[Theorem 2.4.8]{GL13}.)} For a matrix $M$ and a positive integer $\rho$, the $\rho$-truncation $M_{\rho}=UTV$ of the SVD of $M$, with $T=\diag(\sigma_j(M))_{j=1}^{\rho}$ denoting the diagonal matrix filled with the $\rho$ top singular values of $M$ in nonincreasing order, is a closest rank-$\rho$ approximation of $M$ under both spectral and Frobenius norms, $$||M_{\rho}-M||=\sigma_{\rho+1}(M)~ {\rm and}~\tau_{\rho+1}^2(M):= ||M_{\rho}-M||_F^2=\sum_{j> \rho}\sigma_j^2(M),$$ or, in a unified way (with $|\cdot|$ standing for either of these two norms), $$\tilde \sigma_{\rho+1}(M):=|M_{\rho}-M|= \min_{N:~\rank(N)=\rho} |M-N|.$$ \end{lemma} \begin{lemma}\label{lesngr} {\rm (The impact of a perturbation of a matrix on its singular values: \cite[Corollary 8.6.2]{GL13}.)} For $m\ge n$ and a pair of ${m\times n}$ matrices $M$ and $M+E$ it holds that $$|\sigma_j(M+E)-\sigma_j(M)|\le||E||~{\rm for}~j=1,\dots,n. $$ \end{lemma} The estimates of the following theorem imply that the top SVD of a matrix $M$ is stable under perturbations of $M$ within $0.2(\sigma_{\rho}(M)-\sigma_{\rho+1}(M))$; the estimates are more explicit than those by Davis-Kahan 1972 and Wedin 1973, which involve angles between singular spaces. \begin{theorem}\label{thsngspc} {\rm (The impact of a perturbation of a matrix on its top singular vectors.
\cite[Theorem 8.6.5]{GL13}.)} Suppose that $$g:=\sigma_{\rho}(M)-\sigma_{\rho+1}(M)>0~{\rm and}~||E||_F\le 0.2g.$$ Then, for the left and right singular spaces associated with the $\rho$ largest singular values of the matrices $M$ and $M+E$, there exist orthogonal matrix bases $ B_{\rho,\rm left}(M)$, $B_{\rho,\rm right}(M)$, $ B_{\rho,\rm left}(M+E)$, and $B_{\rho,\rm right}(M+E)$ such that $$\max\{||B_{\rho,\rm left}(M+E)- B_{\rho,\rm left}(M)||_F,||B_{\rho,\rm right}(M+E)-B_{\rho,\rm right}(M)||_F\}\le 4\frac{||E||_F}{g}.$$ \end{theorem} For example, if $\sigma_{\rho}(M)\ge 2\sigma_{\rho+1}(M)$, which implies that $g\ge 0.5~\sigma_{\rho}(M)$, and if $||E||_F\le 0.1~ \sigma_{\rho}(M)$, then the upper bound on the right-hand side is approximately $8||E||_F/\sigma_\rho(M)$. {\bf 7. CUR LRA.} CUR LRA is another important 3-factor LRA, highly popular and particularly memory efficient (see \cite{GTZ97}, \cite{GE96}, \cite{P00}, \cite{DMM08}, \cite{BW17}, \cite{OZ18}, and the references therein). In this case we seek an LRA $M'\approx M$ of the form $CUR$ where $R$ and $C$ are two submatrices of $M$ made up of its fixed or random sets of $k$ rows and $l$ columns, respectively, and $U$ is an $l\times k$ matrix. In a popular option (cf. \cite{DMM08}), which we call {\em canonical CUR LRA}, we define a nucleus $U$ as follows: first define a {\em CUR generator} $G$ made up of the $kl$ common entries of the factors $C$ and $R$ of CUR, then compute the $\rho$-truncation $G_{\rho}$ of its SVD, and finally let $U$ be the Moore--Penrose pseudo inverse $G_{\rho}^+$. In this case we only need to fix two index sets that define the submatrices $G$, $C$ and $R$, and then to compute the nucleus $U$ by using about $kl$ memory cells and either $O(kl\min\{k,l\})$ flops deterministically or $O(kl\log(\min\{k,l\}))$ flops whp. In particular $U=G^{-1}$ if $k=l=\rho$ and if $G$ is a nonsingular submatrix of $M$. \begin{theorem}\label{thncl} For a canonical CUR LRA $M'$ it holds that (i) $M=CG^+R$ if and only if $\rank(G)=\rank(M)=\rho>0$, and (ii) $M'\approx M$ if and only if $\nrank(G)=\nrank(M)$. \end{theorem} \begin{proof} Claim (i) is readily verified. See a proof of claim (ii) in \cite{PLSZ16}, \cite{PLSZ17}. \end{proof} Suppose that we are given the rank-$\rho$ truncation $M'$ of the SVD of a matrix $M$, defining its LRA. Then by virtue of claim (i) we can rewrite this LRA as a canonical CUR decomposition $M'=CUR$ for $U=G^{-1}$ as soon as we find a nonsingular $\rho\times \rho$ submatrix $G$ of the LRA $M'$. In \cite[Appendix D.2]{PLSZ17} such a submatrix, computed superfast, has the norm $||G^{-1}||$ within a small factor from its minimum value $1/\sigma_{\rho}(M)$. {\bf 8. The size of LRA versus its accuracy.} The algorithms of \cite{TYUC17}, \cite{PLSZ16} and \cite{PLSZ17} output an LRA $XY\approx M$ for $M\in \mathbb C^{m\times n}$, $X\in \mathbb C^{m\times l}$ and $Y\in \mathbb C^{l\times n}$, where high output accuracy is ensured for an appropriate choice of integers $k$ and/or $l$ greatly exceeding the numerical rank $\rho=\nrank (M)$. Such an increase of the size of LRA is undesirable, but the size can be decreased to its minimum at the price of a mild increase of the output error norm bound. Namely, according to \cite[Proposition 6.1]{TYUC17}, \begin{equation}\label{eqpr6.1} ||(XY)_{\rho}-M||_F\le \tau_{\rho+1}(M)+ 2 ||XY-M||_F \end{equation} for any LRA of (\ref{eqlra}) and $\tau_{\rho+1}(M)$ of Lemma \ref{letrnc}.
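As an illustration of the canonical CUR LRA of Section 7, here is a minimal Python sketch, assuming the row and column index sets are already fixed (below they are drawn at random purely for the sake of the example); the nucleus is the Moore--Penrose pseudo inverse of the $\rho$-truncation of the SVD of the generator $G$, as described above, and the routine is stated only to make the construction concrete.
\begin{verbatim}
import numpy as np

def truncate_svd(A, rho):
    """Rank-rho truncation of the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rho] @ np.diag(s[:rho]) @ Vt[:rho, :]

def canonical_cur(M, rows, cols, rho):
    """Canonical CUR LRA: C = M[:, cols], R = M[rows, :], generator G = M[rows, cols],
    and nucleus U = (G_rho)^+ for the rank-rho truncation G_rho of G."""
    C = M[:, cols]                              # m x l
    R = M[rows, :]                              # k x n
    G = M[np.ix_(rows, cols)]                   # k x l CUR generator
    U = np.linalg.pinv(truncate_svd(G, rho))    # l x k nucleus
    return C, U, R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, rho, k, l = 200, 150, 4, 12, 10
    M = rng.standard_normal((m, rho)) @ rng.standard_normal((rho, n))
    rows = rng.choice(m, size=k, replace=False)
    cols = rng.choice(n, size=l, replace=False)
    C, U, R = canonical_cur(M, rows, cols, rho)
    # tiny here, in line with claim (i) of the theorem above
    print(np.linalg.norm(C @ U @ R - M) / np.linalg.norm(M))
\end{verbatim}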
The computation of the SVD of $M$ involves at least order $mn\min\{m,n\}$ flops; one can extend the randomized LRA algorithms in \cite{HMT11} with SRHT and SRFT multipliers to the computation of the $\rho$-truncation $M_{\rho}$ whp by using $O(mn\log (\rho))$ flops. Furthermore, given any 3-factor LRA of (\ref{eqrnkrho}) for $\rho\le \min\{k,l\}$ and $k\ll m$ or $l\ll n$, Algorithm \ref{alglratpsvd} in Appendix \ref{slrasvd} computes the $\rho$-truncation of its SVD superfast, thus obtaining a rank-$\rho$ approximation with only a minor sacrifice in the error bound. If a computed LRA $\tilde M$ is too crude, one can try to refine it by computing an LRA anew with some alternative sparse multipliers, but here are some recipes for the refinement that use a computed crude LRA $\tilde M= UTV$. In the first two recipes we assume that this crude LRA is reasonably accurate so that the top SVD of the LRA is close to that of an input matrix (see Lemma \ref{lesngr} and Theorem \ref{thsngspc}). (i) Suppose that we have computed the SVD $\tilde M=U_{\tilde M}\Sigma_{\tilde M}V_{\tilde M}^*$ and have applied the algorithm of \cite{TYUC17} with no randomization, by choosing the multipliers $F=U_{\tilde M}^*$ and $H=\bar V_{\tilde M}$. Then by extending the analysis of \cite[Section 10]{HMT11} we can prove that both spectral and Frobenius norms of the output error matrix are bounded by some values close to $2\tilde \sigma_{\rho+1}(M)$. These orthogonal multipliers $F$ and $H$ are not sparse, and so the algorithm is not superfast, but we can replace them by sparse orthogonal ones according to a recipe of Section 4. (ii) The random subspace sampling algorithms of Drineas et al. \cite{DMM08} and of various subsequent papers (see \cite{KS16}, \cite{BW17}, and the references therein) compute accurate LRA of a matrix $M$ superfast whp provided that its top SVD is available.\footnote{In this case the algorithm can superfast compute so-called leverage scores, which then serve as sampling probabilities defining superfast random subspace sampling.} The top SVD of the LRA $\tilde M$ can serve instead of the top SVD of an input matrix $M$ as long as these top SVDs are close enough to one another. (iii) Even if the top SVD of an LRA $\tilde M$ is not close to that of $M$, we can apply iterative refinement of an LRA as follows: (a) compute the rank-$\rho$ truncation $\tilde M_{\rho}$ of the SVD of a given LRA $\tilde M$ of an input matrix $M$; (b) compute a rank-$2\rho$ approximation $\Delta$ of the matrix $M-\tilde M_{\rho}$ (notice that $\nrank(M-\tilde M_{\rho})\le \nrank(M)+\rank(\tilde M_{\rho})\le 2\rho$); (c) compute the rank-$\rho$ truncation $(\Delta+\tilde M)_{\rho}$ of the SVD of the matrix $\Delta+\tilde M$. Stages (a) and (c) can be performed superfast because their inputs have small size. At stage (b) we can apply any superfast algorithm for LRA and should extend the classical techniques of iterative refinement (cf. \cite{H02}, \cite{S98}). In particular, if we have computed $\tilde M$ by applying the superfast dual variant of the algorithm of \cite{TYUC17}, then we can reuse the multipliers $F$ and $H$ involved in it, but we should compute the product $F(M-\tilde M_{\rho})H$ with higher precision in order to decrease the output error of stage (a). We can do this because $\tilde M_{\rho}\approx \tilde M \approx M$ and so $||M-\tilde M_{\rho}||\ll ||M||$. One can recursively repeat stages (a)--(c) hoping that $(\Delta+\tilde M)_{\rho}\rightarrow M_{\rho}$.
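The following Python sketch spells out one pass of stages (a)--(c) of recipe (iii). To keep the example self-contained, the rank-$2\rho$ approximation at stage (b) is computed by a truncated SVD of the explicitly formed residual, which is of course not superfast; in the intended setting one would plug in a sketching-based LRA subroutine there (possibly reusing the multipliers $F$ and $H$), so the code should be read as a template rather than as the algorithm of \cite{PLSZ16} or \cite{PLSZ17}.
\begin{verbatim}
import numpy as np

def truncate_svd(A, rho):
    """Rank-rho truncation of the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rho] @ np.diag(s[:rho]) @ Vt[:rho, :]

def refine_once(M, M_tilde, rho, lra_2rho=None):
    """One pass of stages (a)-(c) of recipe (iii).

    lra_2rho is a placeholder for a (superfast) rank-2*rho LRA routine; by default
    a truncated SVD is used, only to keep this template self-contained."""
    if lra_2rho is None:
        lra_2rho = lambda A: truncate_svd(A, 2 * rho)
    M_rho = truncate_svd(M_tilde, rho)           # stage (a): rank-rho truncation of the given LRA
    Delta = lra_2rho(M - M_rho)                  # stage (b): rank-2*rho approximation of the residual
    return truncate_svd(Delta + M_tilde, rho)    # stage (c): rank-rho truncation of Delta + M_tilde

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, rho = 200, 150, 5
    M = rng.standard_normal((m, rho)) @ rng.standard_normal((rho, n))
    # a crude rank-rho LRA of M, used as the starting point of the refinement
    M_tilde = truncate_svd(M + 0.05 * rng.standard_normal((m, n)), rho)
    M_new = refine_once(M, M_tilde, rho)
    print(np.linalg.norm(M_tilde - M) / np.linalg.norm(M),   # error before one pass
          np.linalg.norm(M_new - M) / np.linalg.norm(M))     # error after one pass
\end{verbatim}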
Recall that if $(\Delta+\tilde M)_{\rho}$ becomes a random matrix of rank $\rho$, then LRA algorithms applied to it should output an accurate LRA of a matrix $M$ whp (cf. \cite{PLSZ16} and \cite{PLSZ17}). {\bf 9. Homotopy continuation technique.} In the proposed recipes (i) and (ii) for iterative refinement we assume that LRA $\tilde M$ is close to a matrix $M$, and clearly recipe (iii) is most effective where $\tilde M$ is close to $M$. Thus one can apply these recipes more efficiently by using homotopy continuation technique, that is, by applying the recipes recursively to the pairs of matrices $M_i$ and $\tilde M_i$ for $i=0,1,\dots$ where $M_0=\tilde M$, $M_{i+1}=M_i+s_i(M-M_i)$, $s_i$ are sufficiently small positive scalars, $i=0,1,\dots$, and $\tilde M_i$ for a positive $i$ is the LRA of the matrix $M_{i-1}$ computed according to the selected recipe (i), (ii) or (iii). In order to choose a scalar $s_i$ one may first estimate or guess the error norm $||M-M_i||$ and then choose the maximal $s_i$ for which Lemma \ref{lesngr} and Theorem \ref{thsngspc} enable us to control the errors of the current refinement iteration. \medskip \medskip \medskip {\bf \Large Appendix}
\section{Introduction} \label{s1} Einstein-Rosen waves are among the simplest non-stationary solutions to the vacuum Einstein equations (see, e.g., \cite{1}). Not surprisingly, therefore, they have been used in a number of different contexts: investigation of energy loss due to gravity waves \cite{2}, asymptotic structure of radiative space-times \cite{3}, quasi-local mass \cite{4}, the issue of time in canonical gravity \cite{5}, and quantum gravity in a simplified but field theoretically interesting context of midi-superspaces \cite{5,6}. These solutions admit two Killing fields, both hypersurface orthogonal, of which one is rotational, $\partial/\partial\phi$, and the other translational, $\partial/\partial z$, along the axis of symmetry. (In certain applications, the orbits of the Killing field $\partial/\partial z$ are compactified, i.e., are taken to be circles. Our analysis will allow this possibility.) When the hypersurface orthogonality condition is removed, we obtain the cylindrical gravitational waves with {\it two} polarization modes. These have also been used to explore a number of issues, ranging from the study of Hamiltonian densities \cite{7} and numerical analysis of interacting pulses \cite{8} to the issue of cosmic censorship \cite{9}. The presence of a translational Killing field, however, makes the analysis of the asymptotic structure of these space-times quite difficult: they fail to be asymptotically flat either at spatial or null infinity. Consequently, one cannot use the standard techniques to define asymptotic symmetries or construct the analogs of the ADM or Bondi energy-momenta. Therefore, until recently, conserved quantities for these space-times --such as the C-energy \cite{2,7}-- were constructed by exploiting the local field equations, without direct reference to asymptotics. It is not a priori clear, therefore, that these quantities have the physical interpretation that has been ascribed to them. What is of physical interest are the values of conserved quantities {\it per unit length} along the axis of symmetry, i.e. along the integral curves of $\partial/\partial z$; because of the translational symmetry, the total conserved quantities in such a space-time would clearly be infinite. A natural strategy then is to go to the manifold of orbits of the $\partial/\partial z$-Killing field. Since this 3-dimensional space-time does not have a translational symmetry, one would expect it to be asymptotically flat in an appropriate sense. Hence, it should be possible to analyze its asymptotic structure unambiguously. In this paper, we will adopt this approach to explore the symmetries and physical fields at null infinity. A similar analysis of spatial infinity was performed recently \cite{10} in the context of the phase space formulation of general relativity. Somewhat surprisingly, it turned out that the C-energy is {\it not} the generator of the time translation which is unit at infinity; it does not therefore represent the Hamiltonian, or the physical energy (per unit {\it z}-length) in the space-time. The physical Hamiltonian turns out to be a {\it non-polynomial} function of the C-energy. In the present paper, we will see that the same is true of the analog of the Bondi energy at null infinity. Thus, the purpose of this paper is to develop a framework to discuss the asymptotic structure at null infinity for 3-dimensional space-times. The underlying theory is general relativity coupled to matter fields satisfying appropriate fall-off conditions.
The conditions on matter are satisfied, in particular, by the fields that arise from a symmetry reduction of a large class of 4-dimensional vacuum space-times admitting a space translation. Therefore, we will, in particular, provide a framework for analyzing the behavior of the gravitational field near null infinity of such space-times. We call such space-times generalized cylindrical waves since they need not admit an axial Killing field $\partial/\partial\phi$. Our analysis is also useful in a completely different context; that of quantum gravity. For, this class of space-times also provides interesting midi-superspace for quantum gravity and our results set the stage for its asymptotic quantization and the corresponding S-matrix theory. The plan of the paper is as follows. In Sec.\ref{s2}, we will analyze the asymptotic structure of the Einstein-Rosen waves from a 3-dimensional perspective. This analysis will motivate our general definition of asymptotic flatness in Sec.\ref{s3} and also make the main results plausible. In Sec.\ref{s3}, we introduce the notion of asymptotic flatness at null infinity in 3 space-time dimensions and analyze the structure of asymptotic fields. In Sec.\ref{s4}, we discuss asymptotic symmetries and in Sec.\ref{s5}, conserved quantities. While the general methods adopted are suggested by the standard Bondi-Penrose treatment of null infinity in 4-dimensional general relativity, there are a number of surprises as well. First, in 3 dimensions, the physical metric $g_{ab}$ is {\it flat} outside sources. Consequently, there are physically interesting solutions to the constraints which lead to space-times which are flat near spatial infinity $i^o$; the energy-momentum at $i^o$ is coded, not in local fields such as the curvature, but in a globally defined deficit angle. This simplifies the task of specifying boundary conditions as one approaches $i^o$ along null infinity $\it I$. On the other hand, there are also a number of new complications. In 4 dimensions, the stationary and the radiative space-times satisfy the same boundary conditions at null infinity. This is not the case in 3 dimensions. Hence, while dealing with radiative solutions, we can not draw on our intuition from the stationary case. Secondly, in 4 dimensions, up to a super-translation freedom --which corresponds to terms $O(1/r)$-- there is a fixed Minkowskian metric at infinity. In 3 dimensions, this is not the case; the Minkowski metric $\eta_{ab}$ to which a physical metric approaches varies even in the leading order, depending on the radiative content of the physical space-time. Consequently, the symmetry group is larger than what one might expect from one's experience in 4 dimensions. Furthermore, while one can canonically single out the translational subgroup of the BMS group in 4 dimensions, now the task becomes subtle; in many ways it is analogous to the task of singling out a preferred Poincar\'e subgroup of the BMS group. This in turn makes the task of defining the analog of Bondi energy much more difficult. These differences make the analysis non-trivial and hence interesting. Some detailed calculations are relegated to appendices. Using Bondi-type coordinates, the asymptotic behavior of curvature tensors of Einstein-Rosen waves is analyzed in the 3-dimensional framework in Appendix A. Appendix B considers static cylindrical solutions whose asymptotics, as mentioned above, is quite different from that of the radiative space-times analyzed in the main body of the paper. 
It should be emphasized that while part of the motivation for our results comes from the symmetry reduction of 4-dimensional general relativity, the main analysis itself refers to 3-dimensional gravity coupled to {\it arbitrary} matter fields (satisfying suitable fall-off conditions) which need not arise from a symmetry reduction. Nonetheless, the framework has numerous applications to the 4-dimensional theory. For example, in the accompanying paper \cite{16}, we will use the results of this paper to study the behavior of Einstein-Rosen waves at null infinity of the {\it 4-dimensional} space-times. In this paper, the symbol $\it I$ will generally stand for $\it I^+$ or $\it I^-$. In the few cases where a specific choice has to be made, our discussion will refer to $\it I^+$. \goodbreak \section{Einstein-Rosen waves: Asymptotics in 3 dimensions} \label{s2} This section is divided into three parts. In the first, we recall the symmetry reduction procedure and apply it to obtain the 3-dimensional equations governing Einstein-Rosen waves. (See, e.g., \cite{1} for a similar reduction for stationary space-times.) This procedure reduces the task of finding a 4-dimensional Einstein-Rosen wave to that of finding a solution to the wave equation on 3-dimensional {\it Minkowski} space. In the second part, we analyze the asymptotic behavior (at null infinity) of these solutions to the wave equation. In the third part, we combine the results of the first two to analyze the asymptotic behavior of space-time metrics. We will find that there is a large class of Einstein-Rosen waves which admit a smooth null infinity, $\it I$, as well as a smooth time-like infinity $i^\pm$. (As one might expect, the space-like infinity, $i^o$, has a conical defect.) These waves provide an important class of examples of the more general framework presented in Sec.\ref{s3}. \goodbreak \subsection{Symmetry reduction} \label{s2.1} Let us begin with a slightly more general context, that of vacuum space-times which admit a space-like, hypersurface orthogonal Killing vector $\partial/\partial z$. These space-times can be described conveniently in coordinates adapted to the symmetry: \nopagebreak[3]\begin{equation} ds^2 = V^2(x)dz^2 +\bar g_{ab}(x)\ dx^adx^b\ ,\ \ \ a,b,\dots =0,1,2 \label{(2.1)} \end{equation} where $x \equiv x^a $ and $\bar g_{ab}$ is a 3-metric with Lorentz signature. As in the more familiar case of static space-times \cite{1}, the field equations are \nopagebreak[3]\begin{equation} \bar R_{ab}-V^{-1}\bar\nabla_a \bar\nabla_b V =0\ ,\ \ \label{(2.2)} \end{equation} \nopagebreak[3]\begin{equation} \bar g^{ab} \bar\nabla_a \bar\nabla_b V =0 \ , \label{(2.3)} \end{equation} where $\bar\nabla$ and $\bar R_{ab}$ are the derivative operator and the Ricci tensor of $\bar g_{ab}$. These equations can be simplified if one uses a metric in the 3-space which is rescaled by the norm of the Killing vector and writes the norm of the Killing vector as an exponential \cite{12,1}. Then (2.1)--(2.3) become \nopagebreak[3]\begin{equation} ds^2 = e^{2\psi(x)}dz^2 +e^{-2\psi(x)} g_{ab}(x)\ dx^adx^b\ , \label{(2.4)} \end{equation} \nopagebreak[3]\begin{equation} R_{ab}-2 \nabla_a \psi \nabla_b \psi =0\ , \label{(2.5)} \end{equation} \nopagebreak[3]\begin{equation} {} \quad g^{ab}\nabla_a\nabla_b \psi=0 \ , \label{(2.6)} \end{equation} where $\nabla$ denotes the derivative with respect to the metric $g_{ab}$. These equations can be re-interpreted purely in a 3-dimensional context.
To see this, consider Einstein's equations in 3 dimensions with a scalar field $\Phi$ as source: \nopagebreak[3]\begin{equation} R_{ab}-{1\over 2}R\, g_{ab}= 8\pi G T_{ab} =8\pi G (\nabla_a \Phi \nabla_b\Phi - \textstyle{1\over 2} (\nabla_c\Phi \nabla^c\Phi)\, g_{ab})\ , \label{(2.7)} \end{equation} \nopagebreak[3]\begin{equation} g^{ab}\nabla_a \nabla_b\Phi = 0\ . \label{(2.8)} \end{equation} Since the trace of equation (2.7) gives $R = 8\pi G\nabla^c\Phi \nabla_c\Phi$, (2.7) is equivalent to \nopagebreak[3]\begin{equation} R_{ab}= 8\pi G \, \nabla_a\Phi \nabla_b\Phi \ . \label{(2.9)} \end{equation} Now, with $\Phi = \psi/\sqrt{4\pi G} $, we obtain (2.5) and (2.6). Thus, 4-dimensional vacuum gravity with a hypersurface orthogonal space-like Killing vector is equivalent to 3-dimensional gravity coupled to a scalar field. Recall that in 3 dimensions, there is no gravitational radiation. Hence, the local degrees of freedom are all contained in the scalar field. One therefore expects that the Cauchy data for the scalar field will suffice to determine the solution. For data which fall off appropriately, we thus expect the 3-dimensional Lorentzian geometry to be asymptotically flat in the sense of Penrose \cite{13}, i.e. to admit a 2-dimensional boundary representing null infinity. Let us now turn to the Einstein-Rosen waves by assuming that there is a further space-like, hypersurface orthogonal Killing vector $\partial/\partial\phi$ which commutes with $\partial/\partial z$. Then, as is well known, the equations simplify drastically. Hence, a complete global analysis can be carried out easily. Recall first that the metric of a vacuum space-time with two commuting, hypersurface orthogonal space-like Killing vectors can always be written locally as \cite{14} \nopagebreak[3]\begin{equation} ds^2=e^{2\psi}dz^2+ e^{2(\gamma -\psi)}(-dt^2+d\rho^2)+\rho^2e^{-2\psi}d\phi^2\ , \label{(2.10)} \end{equation} where $\rho$ and $t$ (the ``Weyl canonical coordinates'') are defined invariantly and $\psi=\psi(t,\rho)$, $\gamma=\gamma (t,\rho)$. (Here, some of the field equations have been used.) Hence the 3-metric $g$ is given by \nopagebreak[3]\begin{equation} d\sigma^2=g_{ab}dx^adx^b=e^{2\gamma}(-dt^2+d\rho^2) +\rho^2d\phi^2\ . \label{(2.11)} \end{equation} Let us now assume that $\partial/\partial\phi$ is a rotational field in the 3-space which keeps a time-like axis fixed. Then the coordinates used in (2.10) are unique up to a translation $t\rightarrow t+a$. (Note, incidentally, that ``trapped circles'' are excluded by the field equations \cite{9}.) The field equations (\ref{(2.5)}) and (\ref{(2.6)}) now become \nopagebreak[3]\begin{eqnarray} R_{tt}&=&\gamma''-\ddot\gamma +\rho^{-1}\gamma'=2\dot\psi^2\ , \\ R_{\rho\rho}&=& -\gamma''+\ddot\gamma +\rho^{-1}\gamma'=2\psi'^2\ , \\ R_{t\rho}&=& \rho^{-1}\dot\gamma =2\dot\psi\psi'\ , \end{eqnarray} and \nopagebreak[3]\begin{equation} -\ddot\psi+\psi''+ \rho^{-1}\psi'=0 \ , \label{(2.15)} \end{equation} where the dot and the prime denote derivatives with respect to $t$ and $\rho$ respectively. The last equation is the wave equation for the non-flat 3-metric (2.11) {\it as well as for the flat metric obtained by setting} $\gamma=0$. This is a key simplification, for it implies that the equation satisfied by the matter source $\psi$ decouples from the equations (2.12)-(2.14) satisfied by the metric. These equations reduce simply to: \nopagebreak[3]\begin{equation} \gamma ' =\rho\,(\dot\psi^2+\psi'^2)\ , \label{(2.16)} \end{equation} \nopagebreak[3]\begin{equation} \dot\gamma = 2\rho\dot\psi\psi '\ .
\label{(2.17)} \end{equation} Thus, we can first solve the axi-symmetric wave equation (2.15) for $\psi$ on Minkowski space and then solve (2.16) and (2.17) for $\gamma$ --the only unknown metric coefficient-- by quadratures. (Note that (2.16) and (2.17) are compatible because their integrability condition is precisely (2.15).) \goodbreak \subsection{Asymptotic behavior of scalar waves} \label{s2.2} In this subsection we will focus on the axi-symmetric wave equation in 3-dimen\-sion\-al Minkowski space and analyze the asymptotic behavior of its solutions $\psi$. We begin with an observation. The ``method of descent'' from the Kirchhoff formula in 4 dimensions gives the following representation of the solution of the wave equation in 3 dimensions, in terms of Cauchy data $\Psi_0=\psi(t=0,x,y), \Psi_1=\psi_{,t}(t=0,x,y)$: \nopagebreak[3]\begin{eqnarray} \psi(t,x,y)&=& {1\over 2 \pi}\ {\partial\over \partial t} \int\!\!\!\int_{S(t)} {\Psi_0(x',y')dx'dy' \over [t^2-(x-x')^2-(y-y')^2]^{1/2}} \nonumber\\ & & +{1\over 2 \pi} \int\!\!\!\int_{S(t)} {\Psi_1(x',y')dx'dy' \over [t^2-(x-x')^2-(y-y')^2]^{1/2}}\ , \label{(2.18)} \end{eqnarray} where $S(t)$ is the disk \nopagebreak[3]\begin{equation} (x-x')^2+(y-y')^2\le t^2 \nonumber\end{equation} in the initial Cauchy surface (see, e.g., \cite{29}). We will assume that the Cauchy data are axially symmetric and of compact support. Let us first investigate the behavior of the solution at future null infinity $\it I$. Let $\rho,\phi$ be polar coordinates in the plane and introduce the retarded time coordinate \nopagebreak[3]\begin{equation} u=t-\rho \nonumber\end{equation} to explore the fall-off along the constant $u$ null hypersurfaces. Because of axi-symmetry, we may put $y=0$ without loss of generality. The integration region becomes \nopagebreak[3]\begin{equation} (\rho -x')^2+y'^2\le (u+\rho)^2 . \nonumber\end{equation} Let us rewrite the integrands of (\ref{(2.18)}) as \nopagebreak[3]\begin{equation} {\Psi(x',y')dx'dy' \over [2\rho(u+x')+u^2-x'^2-y'^2]^{1/2}} ={1\over{(2\rho)}^{1/2}}{\Psi(x',y')dx'dy'\over(u+x')^{1/2}} \left(1+{u^2-x'^2-y'^2\over2(u+x')}\ {1\over\rho}\right)^{-1/2}\ . \label{(2.22)} \end{equation} For large $\rho$, (\ref{(2.22)}) admits a power series expansion in $\rho^{-1}$ which converges absolutely and uniformly. Hence we can exchange the integration in (\ref{(2.18)}) with the summation and we can also perform the differentiation $\partial/ \partial u $ term by term. Therefore on each null hypersurface $u=const$ one can obtain an expansion of the form \nopagebreak[3]\begin{equation} \psi(u,\rho)={1\over\sqrt\rho}\left(f_0(u)+\sum_{k=1}^\infty{f_k(u) \over\rho^k}\right)\ . \label{(2.23)} \end{equation} The coefficients in this expansion are determined by integrals over the Cauchy data. These functions are particularly interesting for $u$ so large that the support of the data is completely in the interior of the past cone. One finds \nopagebreak[3]\begin{equation} f_0(u)={1\over 2\sqrt2\pi}\int_0^\infty\!\int_0^{2\pi}\rho'd\rho'd\phi' \left[-{1\over2}{\Psi_0\over(u+\rho'\cos\phi')^{3\over2}} +{\Psi_1\over(u+\rho'\cos\phi')^{1\over2}} \right]\ . \label{(2.24)} \end{equation} Note that the coefficient is analytic in $u^{-1/2}$, and at $u\gg \rho_0$, $\rho_0$ being the radius of the disk in which the data are non-zero, we obtain \nopagebreak[3]\begin{equation} f_0(u)={k_0\over u^{3/2}}+{k_1\over u^{1/2}}+\dots\ , \label{(2.25)} \end{equation} where $k_0,k_1$ are constants which are determined by the data.
If the solution happens to be time-symmetric, so that $\Psi_1$ vanishes, we find $f_0\sim u^{-3/2}$ for large $u$. This concludes our discussion of the asymptotic behavior along $u = const$ surfaces. Finally, we wish to point out that the main results obtained in this section continue to hold also for general data of compact support which are not necessarily axi-symmetric. In particular, one obtains an expansion like (\ref{(2.23)}) where the coefficients now depend on both $u$ and $\phi$, and asymptotic forms like (\ref{(2.25)}). The assumption of compact support can also be weakened to allow data which decay near spatial infinity sufficiently rapidly so that we still obtain solutions smooth at null infinity. (This is in particular the case for the Weber-Wheeler pulse considered in the accompanying paper \cite{16}.) \goodbreak \subsection{Asymptotic behavior of the metric} \label{s2.3} We now combine the results of the previous two subsections. Recall from Eq. (\ref{(2.11)}) that the 3-dimensional metric $g_{ab}$ has a single unknown coefficient, $\gamma(t, \rho)$, which is determined by the solution $\psi(t, \rho)$ to the wave equation in Minkowski space (obtained simply by setting $\gamma= 0$). The asymptotic behavior of $\psi(t,\rho)$ therefore determines that of the metric $g_{ab}$. Let us begin by expressing $g_{ab}$ in the Bondi-type coordinates $(u=t-\rho ,\rho,\phi)$. Then, Eq. (\ref{(2.11)}) yields \nopagebreak[3]\begin{equation} d\sigma^2=e^{2\gamma}(-du^2-2du d\rho) + \rho^2d\phi^2\ ; \label{(2.26)} \end{equation} the Einstein equations take the form \nopagebreak[3]\begin{equation} \gamma_{,u}=2\rho\psi_{,u}(\psi_{,\rho}-\psi_{,u})\ , \label{(2.27)} \end{equation} \nopagebreak[3]\begin{equation} \gamma_{,\rho}=\rho\psi^2_{,\rho}\ ; \label{(2.28)} \end{equation} and the wave equation on $\psi$ becomes \nopagebreak[3]\begin{equation} -2\psi_{,u\rho}+\psi_{,\rho\rho} +\rho^{-1}(\psi_{,\rho}-\psi_{,u})=0\ . \label{(2.29)} \end{equation} The asymptotic form of $\psi(t,\rho)$ is given by the expansion (\ref{(2.23)}). Since we can differentiate (\ref{(2.23)}) term by term, the field equations (\ref{(2.27)}) and (\ref{(2.28)}) imply \nopagebreak[3]\begin{equation} \gamma_{,u}= -2[\dot f_0(u)]^2+\sum_{k=1}^\infty{g_k(u)\over\rho^k}\ , \label{(2.30)} \end{equation} \nopagebreak[3]\begin{equation} \gamma_{,\rho}= {1\over 4}[f_0(u)]^2\ {1\over {\rho^2}} +\sum_{k=1}^\infty{h_k(u)\over\rho^{k+2}}\ , \label{(2.31)} \end{equation} where the functions $g_k, h_k$ are products of the functions $f_0, f_k, \dot f_0, \dot f_k$. Since for data of compact support $f_0, f_k$ vanish for all $u<u_0$ (for some $u_0$), we can integrate (\ref{(2.30)}) as follows: \nopagebreak[3]\begin{equation} \gamma = \gamma_0 + \int_{-\infty}^u\left( -2(\dot f_0(u))^2+ \sum_{k=1}^\infty{g_k(u)\over\rho^k}\right)du\ . \label{(2.32)} \end{equation} Thus, $\gamma$ also admits an expansion in $\rho^{-1}$ where the coefficients depend smoothly on $u$. It is now straightforward to show that the space-time admits a smooth future null infinity, $\it I$. Setting $\tilde\rho=\rho^{-1}, \tilde u=u, \tilde\phi =\phi$ and rescaling $g_{ab}$ by a conformal factor $\Omega=\tilde\rho $, we obtain \nopagebreak[3]\begin{equation} d\tilde \sigma^2=\Omega^2 d\sigma^2=e^{2\tilde\gamma} (-\tilde\rho^2d\tilde u^2+2d\tilde ud\tilde\rho)+d\tilde\phi^2\ , \label{(2.33)} \end{equation} where $\tilde\gamma(\tilde u, \tilde\rho)=\gamma(u, \tilde\rho^{-1})$. Because of (\ref{(2.32)}), $\tilde\gamma$ has a smooth extension through $\tilde\rho=0$.
Therefore, $\tilde{g}_{ab}$ is smooth across the surface $\tilde\rho= 0$. This surface is the future null infinity, $\it I$. Using the expansion (\ref{(2.23)}) of $\psi$ near null infinity, various curvature tensors can be expanded in powers of $\rho^{-1}$. More precisely, a suitable null triad can be chosen which is parallel propagated along $u=const$, $\phi=const$ curves. The resulting triad components of the Riemann tensor and the Bach tensor are given in Appendix A. The (conformally invariant) Bach tensor is finite {\it but non-vanishing} at null infinity. This is to be contrasted with the Bondi-Penrose description of null infinity in asymptotically flat, 4-dimensional space-times, where the (conformally invariant) Weyl tensor vanishes. In this sense, while in the standard 4-dimensional treatments the metric is conformally flat {\it at} null infinity, in a 3-dimensional treatment, it will not be so in general. This is one of the new complications that one encounters. To understand the meaning of the constant $\gamma_0$, let us consider the solution on the Cauchy surface $t=0$. Eq. (\ref{(2.16)}) implies that we can determine $\gamma$ by a $\rho$-integration from the center. If we insist on regularity at $\rho =0$ we have \nopagebreak[3]\begin{equation} \gamma(t=0,\rho)=\int_0^\rho \rho\,(\dot\psi^2+\psi'^2)\ d\rho\ . \label{(2.34)} \end{equation} Hence, for data of compact support, $\gamma_0$ is a positive constant whose value is determined by the initial data for $\psi$: \nopagebreak[3]\begin{equation} \gamma_0=\gamma(t=0,\rho=\infty)=\int_0^\infty \rho\,(\dot\psi^2+ \psi'^2)\ d\rho \ . \label{(2.35)} \end{equation} In this way, the constant $\gamma_0$ in (\ref{(2.32)}) is uniquely determined for solutions which are regular at $\rho=0$. Its value is given by the total energy of the scalar field $\psi$ computed using the Minkowski metric (obtained from $g_{ab}$ by setting $\gamma =0$). On a constant $t$ surface, for a point outside the support of the data, we have $\gamma =\gamma_0$, a constant. Hence, outside the support of the data, the 3-metric on the Cauchy surface is flat. For any non-trivial data, however, $\gamma_0$ is strictly positive, whence the metric has a ``conical singularity'' at spatial infinity: the metric there is given by \nopagebreak[3]\begin{equation} d\sigma^2=e^{2\gamma_0}(-dt^2+d\rho^2)+\rho^2d\phi^2\ . \label{(2.36)} \end{equation} Notice that a conical singularity can also be seen near null infinity in this physical metric because the rate of change of the proper circumference of a circle with proper radial distance differs from that in asymptotically Minkowskian space. Finally, using (\ref{(2.32)}), we find that, as one approaches $\it I$ (i.e. $\rho\to\infty$), we have: \nopagebreak[3]\begin{equation} \gamma(u,\infty)= \gamma_0 -2\int_{-\infty}^u \dot f_0(u)^2du . \label{(2.37)} \end{equation} Now, a detailed examination \cite{16} of the behavior of the scalar field $\psi$ near time-like infinity $i^+$ reveals that the space-time is smooth at ${i}^+$ and that $\gamma$ vanishes there. Hence, we obtain the simple result \nopagebreak[3]\begin{equation} \gamma_0 =2\int_{-\infty}^{+\infty} \dot f_0(u)^2du . \label{(2.38)} \end{equation} Thus, there is a precise sense in which the conical singularity, present at space-like infinity, is ``radiated out'' and a smooth (in fact analytic) time-like infinity ``remains''. We will see that, modulo some important subtleties, Eq. (\ref{(2.37)}) plays the role of the Bondi mass-loss formula \cite{15}.
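Although none of the analysis above requires numerics, the decoupled structure of the reduced field equations -- the flat-space wave equation (2.15) for $\psi$ followed by the quadrature (2.34) for $\gamma$ -- is easy to check numerically. The short Python sketch below does so for the regular standing-wave mode $\psi=J_0(\omega\rho)\cos(\omega t)$; this mode is not of compact support, so it serves only to illustrate the field equations themselves, not the asymptotic expansions discussed above.
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import cumulative_trapezoid

# Illustrative regular solution of the axisymmetric wave equation (2.15):
#   psi(t, rho) = J_0(omega*rho) cos(omega*t)
# (not compactly supported; it only illustrates the field equations, not the asymptotics).
omega = 2.0
rho = np.linspace(1e-6, 10.0, 4000)

def psi(t, r):      return j0(omega * r) * np.cos(omega * t)
def psi_rho(t, r):  return -omega * j1(omega * r) * np.cos(omega * t)   # d psi / d rho
def psi_t(t, r):    return -omega * j0(omega * r) * np.sin(omega * t)   # d psi / d t

# Residual of (2.15): -psi_tt + psi_rhorho + psi_rho/rho, with psi_tt known exactly
# and psi_rhorho obtained by finite differences in rho.
t0 = 0.3
psi_rhorho = np.gradient(psi_rho(t0, rho), rho)
psi_tt = -omega**2 * psi(t0, rho)
residual = -psi_tt + psi_rhorho + psi_rho(t0, rho) / rho
print("max |wave-equation residual|:", np.max(np.abs(residual[5:-5])))

# gamma on the t = 0 slice by the quadrature (2.34); here psi_t(0, rho) = 0.
integrand = rho * (psi_t(0.0, rho)**2 + psi_rho(0.0, rho)**2)
gamma = cumulative_trapezoid(integrand, rho, initial=0.0)
print("gamma(t=0, rho=10) =", gamma[-1])
\end{verbatim}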
\goodbreak \section{ Null infinity in 3 dimensions: general framework} \label{s3} In this section, we will develop a general framework to analyze the asymptotic structure of the gravitational and matter fields at null infinity in 3 dimensions along the lines introduced by Penrose in 4 dimensions. As a special case, when the matter field is chosen to be the massless Klein-Gordon field, we will recover a 3-dimensional description of null infinity of generalized cylindrical waves. It turns out that the choice of the fall-off conditions on matter fields is rather subtle in 3 dimensions. Fortunately, the analysis of the Einstein-Rosen waves presented in Sec.\ref{s2} provides guidelines that restrict the available choices quite effectively. In Sec.\ref{s3.1}, we specify the boundary conditions and discuss some of their immediate consequences. In \ref{s3.2}, we extract the important asymptotic fields and discuss the equations they satisfy at null infinity. Sec.\ref{s3.3} contains an example which, so to say, lies at the opposite extreme from the Einstein-Rosen waves: the simplest solution corresponding to a static point particle in 3 dimensions. This example is simple enough to bring out certain subtleties which in turn play an important role in the subsequent sections. \goodbreak \subsection{Boundary conditions} \label{s3.1} A 3-dimensional space-time $(M, g_{ab})$ will be said to be {\it asymptotically flat at null infinity} if there exists a manifold $\tilde{M}$ with boundary $\it I$ which is topologically $S^1 \times R$, equipped with a smooth metric $\tilde{g}_{ab}$ such that \begin{itemize} \item[i)]{there is a diffeomorphism between $\tilde{M} - \it I$ and $M$ (with which we will identify the interior of $\tilde{M}$ and $M$);} \item[ii)]{there exists a smooth function $\Omega$ on $\tilde{M}$ such that, at $\it I$, we have $\Omega = 0$, $\nabla_a \Omega \not= 0$, and on $M$, we have $\tilde{g}_{ab} =\Omega^2 g_{ab}$;} \item[iii)]{If $T_{ab}$ denotes the stress-energy of matter fields on the physical space-time $(M, g_{ab})$, then $\Omega T_{ab}$ admits a smooth limit to $\it I$ which is {\it trace-free}, and the limit to $\it I$ of $\Omega^{-1}T_{ab}\tilde{n}^a \tilde{V}^b$ vanishes, where $\tilde{V}^a$ is any smooth vector field on $\tilde{M}$ which is tangential to $\it I$ and $\tilde{n}^a = \tilde{g}^{ab}\tilde\nabla_b\Omega$;} \item[iv)]{if $\Omega$ is so chosen that $\tilde\nabla^a \tilde\nabla_a \Omega = 0$ on $\it I$, then the vector field $\tilde{n}^a$ is complete on $\it I$.} \end{itemize} Conditions i), ii) and iv) are the familiar ones from 4 dimensions and have the following implications. First, since $\Omega$ vanishes at $\it I$, points of $\it I$ can be thought of as lying at infinity with respect to the physical metric. Second, since the gradient of $\Omega$ is non-zero at $\it I$, $\Omega$ ``falls off as $1/\rho$''. Finally, we know that $\it I$ has the topology $S^1\times R$ and condition iv) ensures that it is as ``complete in the $R$-direction'' as it is in Minkowski space. The subtle part is the fall-off conditions on stress-energy; these are {\it substantially weaker} than those in the standard 4-dimensional treatment. For instance, in 4 dimensions, if we use Maxwell fields as sources, then because of conformal invariance, if $F_{ab}$ solves Maxwell's equations on the physical space-time $(M, g_{ab})$, then $\tilde{F}_{ab} := F_{ab}$ satisfies them on the completed space-time $(\tilde{M}, \tilde{g}_{ab})$. Hence $\tilde{F}_{ab}$ admits a smooth limit to $\it I$. 
This immediately implies that $\Omega^{-2} T_{ab}$ also admits a smooth limit, where $T_{ab}$ is the stress-energy tensor of $F_{ab}$ in the physical space-time. In the case of a scalar field source, the fall-off is effectively the same although the argument is more subtle (see page 41 in \cite{17}). In 3 dimensions, on the other hand, we are asking only that $\Omega T_{ab}$ admit a limit (although, as noted above, the asymptotic fall-off of $\Omega$ is the same in 3 and 4 dimensions). This is because a stronger condition would have ruled out the cylindrical waves discussed in Sec.\ref{s2}. To see this, consider smooth scalar fields $\psi$ with initial data of compact support. Then, if we set $\tilde\psi = \Omega^{-1/2}\psi$, we have the identity: $$ \tilde{g}^{ab} \tilde\nabla_a \tilde\nabla_b \tilde\psi - {\textstyle{1\over 8}}\, \tilde{R} \tilde\psi = \Omega^{-{\textstyle{5\over 2}}}(g^{ab}\nabla_a\nabla_b \psi -{\textstyle{1\over 8}}\, R \psi)\ ,$$ where $R$ and $\tilde{R}$ are the scalar curvatures of $g_{ab}$ and $\tilde{g}_{ab}$ respectively. Hence $\tilde\psi$ is well-behaved on $\it I$, which implies that \nopagebreak[3]\begin{eqnarray}\Omega T_{ab} &\equiv& 2 \Omega^2 (\tilde\nabla_a\tilde\psi) (\tilde\nabla_b \tilde\psi) +2 \Omega\tilde\psi \tilde{n}_{(a}\tilde\nabla_{b)}\tilde\psi + \textstyle{1\over 2} \tilde{\psi}^2\tilde{n}_a \tilde{n}_b \nonumber\\ &-& \textstyle{1\over 2}\tilde{g}_{ab} [ \Omega^2 \tilde{\nabla}^m {\tilde\nabla}_m \tilde\psi + \Omega\tilde\psi \tilde{n}^m\tilde\nabla_m \tilde\psi + \tilde{n}^m \tilde{n}_m \tilde{\psi}^2]\label{se} \end{eqnarray} admits a well-defined, non-zero limit at $\it I$ satisfying the conditions of our definition. Hence, stronger fall-off requirements on $T_{ab}$ would have made the framework uninteresting. We will see that this weak fall-off is responsible for a number of surprises in the 3-dimensional theory. Could we have imposed even weaker fall-off conditions? The requirement of smoothness on $\tilde{g}_{ab}$, $\Omega$ and $\Omega T_{ab}$ can be substantially weakened: all our analysis will go through if $\tilde{g}_{ab}$ and $\Omega$ are only $C^3$, and $\Omega T_{ab}$ only $C^1$ at $\it I$. On the other hand, we will see that the condition on the trace of $\Omega T_{ab}$ is necessary to endow $\it I$ with interesting structure. We will see that the vanishing of the limit of $\Omega^{-1} T_{ab} \tilde{n}^a \tilde{V}^b$ is necessary to ensure that the energy and super-momentum fluxes of matter across (finite patches of) $\it I$ are finite. Let us now examine the structure available at the boundary $\it I$. As in 4 dimensions, it is convenient to work entirely with the tilde fields which are smooth at $\it I$. Let us set $$\tilde{L}_{ab} = \Omega (R_{ab} - \textstyle{1\over 4}R\, g_{ab})=: \Omega S_{ab} $$ and lower and raise its indices with $\tilde{g}_{ab}$ and its inverse. $\tilde{L}_{ab}$ carries the same information as the stress-energy tensor $T_{ab}$ of matter and our conditions on $T_{ab}$ ensure that $\tilde{L}_{ab}$ is smooth at $\it I$. Set $$\bar{f} = \Omega^{-1} \tilde{n}^a \tilde{n}_a\, .
$$ Then, using the expression $R_{abcd} = 2 (S_{a[c}g_{d]b} - S_{b[c} g_{d]a})$ of the Riemann tensor in 3 dimensions, the formula expressing the relation between curvature tensors of $g_{ab}$ and $\tilde{g}_{ab}$ reduces to: \nopagebreak[3]\begin{equation}\Omega \tilde{S}_{ab} + \tilde\nabla_a \tilde{n}_b - \textstyle{1\over 2} \bar{f} \tilde{g}_{ab} = \tilde{L}_{ab}\ , \label{(3.1)}\end{equation} where $\tilde{S}_{ab} = (\tilde{R}_{ab} - \textstyle{1\over 4}\tilde{R}\tilde{g}_{ab} )$. This is the basic field equation in the tilde variables. Since all other fields which feature in it are known to be smooth at $\it I$, it follows that $\bar f$ is also smooth. This implies in particular that $\tilde{n}^a$ is null. Since $\tilde{n}_a = \tilde{\nabla}_a\Omega$ is the normal field to $\it I$, we conclude that ${\it I}$ {\it is a null surface}. Next, we note that there is a considerable freedom in the choice of the conformal factor $\Omega$. Indeed, if $(\tilde{M}, \tilde{g}_{ab} = \Omega^2 g_{ab})$ is an allowable completion, so is $(\tilde{M}, \Omega'^2 g_{ab})$ where $\Omega' = \omega\Omega$ for any smooth, nowhere vanishing function $\omega$ on $\tilde{M}$. Now, under the conformal transformation $\Omega\mapsto \Omega' = \omega\Omega$, we have: $$\tilde\nabla'_a \tilde{n}'^a \= \omega^{-1}\tilde\nabla_a \tilde{n}^a + 3 \omega^{-2} {\cal L}_{\tilde{n}} \omega \ ,$$ where, from now on, $\=$ will stand for {\it ``equals at the points of $\it I$ to''}. Hence, by using an appropriate $\omega$, we can always make $\tilde{n}'^a$ divergence-free. Such a choice will be referred to as a {\it divergence-free conformal frame}. This frame is, however, not unique. The restricted gauge freedom is given by: \nopagebreak[3]\begin{equation}\Omega \mapsto \omega\Omega, \quad{\rm where}\quad {\cal L}_{\tilde{n}} \omega\= 0\ . \label{(3.2)}\end{equation} Now, condition iv) in our definition requires that, in any divergence-free conformal frame, the vector field $\tilde{n}^a$ be complete on $\it I$. Suppose it is so in one divergence-free conformal frame $\Omega$. Let $\Omega'$ correspond to another divergence-free frame. Then, $\Omega' = \omega\Omega$, with $\omega$ smooth, nowhere vanishing and satisfying ${\cal L}_{\tilde{n}} \omega \= 0$. The last equation implies that $\tilde{n}'^a$ is complete on $\it I$ if and only if $\tilde{n}^a$ is complete there. Hence, we need to verify iv) in just one divergence-free conformal frame. {\it In what follows, we will work only in divergence-free conformal frames.} Next, taking the trace of (3.1) and using the fact that $\tilde{L}$ vanishes on $\it I$ we conclude that, in any divergence-free frame, $\bar{f}$ vanishes on $\it I$, whence $$ \tilde{f} := \Omega^{-1} \bar{f} $$ admits a smooth limit there. The field $\tilde{f}$ will play an important role. Finally, it is easy to check that in any divergence-free conformal frame, we have: \nopagebreak[3]\begin{equation}\tilde{n}^b \tilde\nabla_b \tilde{n}_a \= 0,\quad {\rm and} \quad \tilde{L}_{ab} \tilde{n}^b \= 0\ . \label{(3.3)}\end{equation} Thus, in particular, as in 4 dimensions, $\it I$ is ruled by null geodesics. The space ${{\cal B}}$ of orbits of $\tilde{n}^a$ --the ``base space'' of $\it I$-- is diffeomorphic to $S^1$. 
The second equation and the trace-free character of $\tilde{L}_{ab}$ imply that, {\it on} $\it I$, $\tilde{L}_{ab}$ has the form \nopagebreak[3]\begin{equation}\tilde{L}_{ab} \= \tilde{L}_{(a}\tilde{n}_{b)}\, , \quad{\rm with}\quad \tilde{L}_a\tilde{n}^a \= 0\, , \label{(3.4)}\end{equation} for some smooth co-vector field $\tilde{L}_a$. Hence, the pull-back to $\it I$ of $\tilde{L}_{ab}$ vanishes, which in turn implies, via (3.1), that the pull-back to $\it I$ of $\tilde{\nabla}_a\tilde{n}_b$ also vanishes. Hence, if we denote by $\tilde{q}_{ab}$ the pull-back of $\tilde{g}_{ab}$, we have: \nopagebreak[3]\begin{equation}{\cal L}_{\tilde{n}} \tilde{q}_{ab} \= 0\ .\label{(3.5)} \end{equation} Since $\it I$ is null, it follows that \nopagebreak[3]\begin{equation} \tilde{q}_{ab}\tilde{n}^b \= 0\, .\label{ (3.6)} \end{equation} Thus, $\tilde{q}_{ab}$ is the pull-back to $\it I$ of a positive definite metric on the manifold of orbits ${\cal B}$ of the vector field $\tilde{n}^a$. By construction, ${\cal B}$ is a 1-dimensional manifold with the topology of $S^1$. Hence, there exists a 1-form $\tilde{m}_a$ on $\it I$ such that \nopagebreak[3]\begin{equation} \tilde{q}_{ab} = \tilde{m}_a \tilde{m}_b\, . \label{(3.7)}\end{equation} (In cylindrical waves, $\tilde{m}_a$ is the pull-back to $\it I$ of $\tilde \nabla_a \phi$ and $\tilde{n}^a$ equals $\exp (-2\tilde{\gamma})\, (\partial/ \partial u)$ on $\it I$.) Under a conformal rescaling $\Omega \mapsto \omega\Omega$ (from one divergence-free frame to another), we have: \nopagebreak[3]\begin{equation} \tilde{m}_a \mapsto \omega \tilde{m}_a\, , \quad \tilde{n}^a \mapsto \omega^{-1} \tilde{n}^a\, .\label{(3.8)} \end{equation} The pairs $(\tilde{m}_{a}, \tilde{n}^a)$ (or, equivalently, $(\tilde{q}_{ab}, \tilde{n}^a))$ are the kinematical fields which are ``universal'' to $\it I$: in any asymptotically flat space-time, we obtain the same collection of pairs. This situation is analogous to that in 4 dimensions where pairs $(\tilde{q}_{ab}, \tilde{n}^a)$ constitute the universal kinematic structure. However, whereas the 4-metric evaluated {\it at} $\it I$ has no dynamical content, in the present case, the 3-metric {\it at} $\it I$ does carry dynamical content and varies from one space-time to another. \goodbreak \subsection{Asymptotic fields} \label{s3.2} The pairs $(\tilde{q}_{ab}, \tilde{n}^a)$ on $\it I$ represent the leading or the ``zeroth order'' structure at $\it I$. Next in the hierarchy is an intrinsic derivative operator. Let $\tilde{K}_b$ be a smooth co-vector field on $\tilde{M}$, and $\underline{\tilde{K}}_b$, its pull-back to $\it I$. Define: \nopagebreak[3]\begin{equation}\tilde{D}_a \underline{\tilde{K}}_b : = \underline{\tilde{\nabla}_a \tilde{K}_b} \ , \label{(3.9)} \end{equation} where the under-bar on the right side denotes the pull-back to $\it I$. (Since $\underline{\tilde{K}}_b = \underline{\tilde{K}}'_b$ if and only if $\tilde{K}'_b = \tilde{K}_b +\tilde{h}\tilde{n}_b + \Omega \tilde{W}_b$ for some smooth $\tilde{h}$ and $\tilde{W}_b$, $\tilde{D}$ is a well-defined operator if and only if the pull-back to $\it I$ of $\tilde\nabla_a (\tilde{h} \tilde{n}_b + \Omega \tilde{W}_b)$ vanishes. It is easy to check that it does.) In 4 dimensions, the two radiative degrees of freedom of the gravitational field are coded in this intrinsic derivative operator \cite{18}. In 3 dimensions, on the other hand, there is no ``pure'' gravitational radiation. Hence, one would expect that the derivative operator $\tilde{D}$ has no invariant physical content.
This is indeed the case. To see this, note first that given any vector field $\tilde{V}^a$ tangential to $\it I$ we have: $$ \tilde{V}^c\tilde{D}_c \tilde{q}_{ab} \= 0, \quad {\rm and} \quad \tilde{V}^a \tilde{D}_a \tilde{n}^b \= \tilde{V}^a\tilde{L}_a{}^b\, , $$ where, in the second equation, we have used Eq. (\ref{(3.5)}). Now, for a zero rest mass scalar field (i.e., for 4-dimensional Einstein-Rosen waves), $\tilde{L}_{ab} \= {\textstyle{1\over 2}}\tilde{\psi}^2 \tilde{n}_a \tilde{n}_b$, whence $\tilde{V}^a \tilde{D}_a \tilde{n}^b \= 0$. Hence, the difference between any two permissible derivative operators on $\it I$ is given by: $$ (\tilde{D}'_a - \tilde{D}_a) \tilde{K}_b \= \tilde{C}_{ab}^c \tilde{K}_c, \quad {\rm with} \quad \tilde{C}_{ab}^c = \tilde\Sigma_{ab}\tilde{n}^c\ ,\,\, $$ where $\tilde{K}_b$ is any co-vector field on $\it I$ and $\tilde\Sigma_{ab}$, a symmetric tensor field on $\it I$, transverse to $\tilde{n}^a$; $\tilde\Sigma_{ab}\tilde{n}^a \= 0$. Thus, $\tilde\Sigma_{ab} \= g\tilde{m}_a\tilde{m}_b$ for some function $g$ on $\it I$. Now, if we make a conformal transformation $\Omega \mapsto \Omega' = (1 + \omega_1 \Omega) \Omega$, the derivative operator $\tilde{D}$ changes through: $(\tilde{D}'_a - \tilde{D}_a) \tilde{K}_b = \omega_1 \tilde{m}_a\tilde{m}_b \tilde{n}^c \tilde{K}_c$, {\it even though the transformation leaves $\tilde{m}_a$ and $\tilde{n}^a$ invariant}. Thus, as in 4 dimensions, the ``trace-part'' of $\tilde\Sigma_{ab}$ is ``pure-gauge''. Now, in 4 dimensions, the degrees of freedom of the gravitational field reside in the trace-free part of $\tilde\Sigma_{ab}$ \cite{18}. For the 3-dimensional description of Einstein-Rosen waves, by contrast, since $\tilde\Sigma_{ab}$ is itself pure-trace, the trace-free part vanishes identically, reflecting the absence of pure gravitational degrees of freedom. In 4 dimensions, the Bondi news --which dictates the fluxes of energy-momentum carried away by gravity waves-- is coded in the curvature of $\tilde{D}$. By contrast, in the general 3-dimensional case (i.e. without restriction on the form of matter sources), we can always make the curvature vanish by going to an appropriate conformal frame. To see this, recall first that, since $\it I$ is 2-dimensional, the full curvature of any connection is determined by a scalar. For connections under consideration, we have: $2\tilde{D}_{[a}\tilde{D}_{b]} \tilde{K}_c = \tilde{R}\tilde\epsilon_{ab} \tilde{m}_c\tilde{n}^d \tilde{K}_d$, where $\tilde\epsilon_{ab}$ is the obvious alternating tensor on $\it I$. (Thus, $\tilde\epsilon_{ab} = 2\tilde{l}_{[a} \tilde{m}_{b]}$, where $\tilde{l}_a$ is a null co-vector field on $\it I$ satisfying $\tilde{l}_a\tilde{n}^a = 1$.) Under conformal re-scalings $\Omega \mapsto \Omega' = (1+\omega_1 \Omega)\Omega$, we have $\tilde{R} \mapsto \tilde{R}' = \tilde{R}+ {\cal L}_{\tilde{n}}\omega_1$. Thus, by choosing an appropriate $\omega_1$, we can always set $\tilde{R}' =0$. There is no invariant physical information in the curvature of the derivative operator $\tilde{D}$ intrinsic to $\it I$. Let us therefore examine the curvature of the full 3-dimensional connection $\tilde\nabla$. Using Eq. (\ref{(3.1)}) and the Bianchi identity of the rescaled metric $\tilde{g}_{ab}$ we have: \nopagebreak[3]\begin{equation} 2 \tilde{S}_{ab}\tilde{n}^a + \tilde\nabla_b(\Omega\tilde{f}) = \tilde\nabla^a\tilde{L}_{ab} - \tilde\nabla_b \tilde{L} \, , \label{(3.10)}\end{equation} where $\tilde{L} = \tilde{g}^{ab}\tilde{L}_{ab}$.
The Bianchi identity for the physical metric $g_{ab}$ implies that the right side of Eq. (\ref{(3.10)}) is given by $ 2\Omega^{-1} \tilde{L}_{ab}\tilde{n}^a$. Hence, combining the two, we have: \nopagebreak[3]\begin{equation} 2 \tilde{S}_{ab}\tilde{n}^a + \Omega \tilde\nabla_b \tilde{f} + \tilde{f}\tilde{n}_b = 2 \Omega^{-1}\tilde{L}_{ab}\tilde{n}^a \, .\label{(3.11)}\end{equation} These, together with (3.1), are the basic equations that govern the asymptotic dynamics. Our assumptions on the stress-energy tensor imply that $\Omega^{-1} \tilde{L}_{ab}\tilde{n}^a\tilde{V}^b$ vanishes on $\it I$ for any vector $\tilde{V}^a$ tangential to $\it I$. Eq. (\ref{(3.11)}) now implies: $\tilde{S}_{ab} \tilde{n}^a \tilde{V}^b \= 0$. Hence, the pull-back $\u{S}_{ab}$ to $\it I$ of $\tilde{S}_{ab}$ has the form $$ \u{S}_{ab} = \u{S} \tilde{m}_a\tilde{m}_b\ . $$ Similarly, since $\tilde{L}_{ab}$ is trace-free on $\it I$ and since $\tilde{L}_{ab}\tilde{n}^b$ vanishes there (cf. Eqs. (\ref{(3.3)}) and (\ref{(3.4)})), the pull-back $\u{L}_{ab}$ of $\Omega^{-1}L_{ab}$ to $\it I$ exists and has the form: $$ \u{L}_{ab} = \u{L} \tilde{m}_a\tilde{m}_b. $$ The field \nopagebreak[3]\begin{equation}\tilde{B}:= \u{S} - \u{L} \label{(3.12)}\end{equation} will play an important role in what follows. The Bach tensor $\tilde{B}_{abc}$ --vanishing of which is a necessary and sufficient condition for conformal flatness in 3 dimensions-- is given by: \nopagebreak[3]\begin{equation} \tilde{B}_{abc} = 2\tilde\nabla_{[b} \tilde{S}_{c]a} = 2\Omega^{-1} (\tilde\nabla_{[b} \tilde{L}_{c]a} - \Omega^{-1}\tilde{n}^m\tilde{g}_{a[b}\tilde{L}_{c]m}). \label{(3.13)}\end{equation} Thus, the Bach tensor is non-zero only in presence of matter. Note that, in general, it does not vanish even at $\it I$. This is in striking contrast with the situation in 4 dimensions where the Weyl tensor of the rescaled metric {\it does} vanish at $\it I$. We will see that the fact that in 3 dimensions we do not have conformal flatness even {\it at} $\it I$ makes the discussion of asymptotic symmetries much more difficult. Transvecting the Bach tensor with $\tilde{n}^a$ and pulling the result back to $\it I$, we obtain: \nopagebreak[3]\begin{equation}\tilde{n}^a \underline{\tilde{B}_{abc}} \= -{\cal L}_{\tilde{n}} \underline{S}_{bc} \= - {\cal L}_{\tilde{n}} \underline{L}_{bc} - (\lim_{\mapsto I} \Omega^{-2}\tilde{n}^m \tilde{n}^n \tilde{L}_{mn}) \tilde{q}_{bc}\, . \label{(3.14)}\end{equation} Since the last term in this equation has the form of the flux of ``matter-energy'' across $\it I$ (it equals $2({\cal L}_{\tilde{n}} \tilde\psi)^2$ in the case of Einstein-Rosen waves, cf. Eq. (\ref{se})), it is tempting to interpret this equation as the analog of the local Bondi conservation law on $\it I$ in 4 dimensions. Let us rewrite this equation in a more convenient form: \nopagebreak[3]\begin{equation} \tilde{D}_{[a}\,(\underline{S} - \underline{L}) \tilde{m}_{b]} = {\textstyle{1\over 2}} \lim_{\mapsto \it I}\, [\Omega^{-2} (\underline{L}_{mn} \tilde{n}^m \tilde{n}^n)\, \tilde\epsilon_{ab}]\, . \label{(3.15)}\end{equation} Then, it is tempting to regard the 1-form $\tilde{B}\tilde{m}_a \= (\underline{S} - \underline{L}) \tilde{m}_{a}$ as the analog of the 4-dimensional ``Bondi mass aspect''. Let us therefore study its conformal properties. 
Under a rescaling $\Omega \mapsto \Omega' = \omega\Omega$, we have: \nopagebreak[3]\begin{equation} \tilde{B}\tilde{m}_a \, \mapsto \, \tilde{B}'\tilde{m}'_a = [\omega^{-1} \tilde{B} - \omega^{-2}\tilde{m}^c\tilde{m}^d \tilde{D}_c\tilde{D}_d \omega +{\textstyle {3\over 2}}\omega^{-3} (\tilde{m}^c\tilde{D}_c \omega)^2]\tilde{m}_a \, ,\label{(3.16)}\end{equation} where $\tilde{m}^a$ is a vector field tangential to $\it I$ satisfying $\tilde{m}^a \tilde{m}_a = 1$. Note that the transformation law involves only the values of $\omega$ {\it on} $\it I$; unlike in the transformation law for $\tilde{R}$, discussed above, the field $\omega_1$ (which measures the first derivative of $\omega$ off $\it I$) never enters. This transformation law will play an important role in the next two sections. Finally, we note an identity which enables us to express, at $\it I$, the quantity $\tilde{B}$ constructed from the curvatures of $\tilde{g}_{ab}$ and ${g}_{ab}$ in terms of the metric coefficients. To see this, recall first that the derivative operator $\tilde{D}$ within $\it I$ is obtained by ``pulling back'' the space-time derivative operator $\tilde\nabla$ to $\it I$. Hence one can express the curvature $\tilde{R}$ of $\tilde{D}$ in terms of the curvature $\tilde{S}_{ab}$ of $\tilde\nabla$. Using the Bianchi identity (3.10) to express some of the components of $\tilde{S}_{ab}$ in terms of matter fields, we obtain: \nopagebreak[3]\begin{equation}\tilde{B} \, \= \, \u{S} - \u{L}\, \= \, {\textstyle{1\over 2}} \tilde{f} - \tilde{R}\, . \label{(3.17)}\end{equation} Thus, in a conformal frame in which $\tilde{R}$ is zero, the analog $\tilde{B}$ of the Bondi-mass aspect can be computed directly from the metric coefficient $\tilde{f} = \Omega^{-2} \tilde{g}_{ab}\tilde{n}^a \tilde{n}^b$. For the Einstein-Rosen waves, for example, it is straightforward to check that the completion given in Sec.\ref{s2} satisfies the condition $\tilde{R}= 0$ and by inspection $\tilde{f}$ is given by $\exp\, (-2\tilde{\gamma})$. Thus, in practice, Eq. (\ref{(3.17)}) often provides an easy way to calculate $\tilde{B}$. Finally, note that, under conformal rescalings $\Omega \mapsto (1 + \omega_1\Omega) \Omega$, both $\tilde{f}$ and $\tilde{R}$ transform non-trivially. However, the combination ${\textstyle{1\over 2}}\tilde{f} - \tilde{R}$ remains unchanged. \goodbreak \subsection{Point particle} \label{s3.3} In this sub-section, we will consider the simplest point-particle solution to 3-dimension\-al gravity and, using the results obtained in the last two sub-sections, study its behavior at null infinity. In an obvious coordinate system adapted to the world line of the point particle, the physical space-time metric $g_{ab}$ is given by \cite{19}: $$ d\sigma^2 = -dt^2 + r^{-8GM}(dr^2 + r^2 d\phi^2), $$ where $-\infty < t< \infty,\, 0 <r < \infty $ and $0 \le \phi < 2\pi$. The particle has mass $M$ and ``resides'' at the origin. Since the stress-energy tensor vanishes everywhere outside the $r=0$ world-line (which is excised from the space-time), the metric is flat outside the origin. We can transform it into a manifestly flat form by setting $$ \rho = {r^\alpha\over \alpha}, \quad \bar\phi = |\alpha|\phi, \quad {\rm where}\quad \alpha = 1 -4GM \, .$$ (Note that $\bar\phi$ now ranges in $[0, 2\pi |\alpha|)$.) In terms of these coordinates, the metric is given by: \nopagebreak[3]\begin{equation} d\sigma^2 = -dt^2 + d\rho^2 +\rho^2 d\bar\phi^2\, . 
\label{(3.18)}\end{equation} Although the metric is manifestly flat, it fails to be globally Minkowskian because of the range of $\bar\phi$; there is a conical singularity at the origin and the resulting deficit angle measures the mass. It is straightforward to conformally complete this space-time to satisfy our definition of asymptotic flatness. Setting $u = t-\rho$ and $\Omega = 1/\rho$, the rescaled metric $\tilde{g}_{ab}$ is given by: \nopagebreak[3]\begin{equation} d\tilde{\sigma}^2 := \Omega^2 d\sigma^2 = - \Omega^2 du^2 + 2 du d\Omega + d\bar\phi^2\ . \label{(3.19)}\end{equation} It is trivial to check that the completion satisfies all our conditions and that the conformal frame is divergence-free. The kinematic fields are given by $\tilde{n}^a \equiv \partial/\partial u$ and $\tilde{m}_a = \tilde{D}_a \bar\phi$. By inspection $\tilde{f} = 1$ and a simple calculation shows that $\tilde{R} = 0$. Thus, $\tilde{B} = 1/2$; it carries no information about mass. This information is hidden in the deficit angle: Integrating $\tilde{m}_a$ on the base space ${\cal B}$, we have: $$ \oint_{\cal B} \tilde{m}_a dS^a\, = \, 2\pi \alpha = 2\pi (1-4GM)\, . $$ In 4 dimensions, one often insists that the conformal frame be such that the metric on the base space be a unit 2-sphere metric. These are the Bondi conformal frames. The obvious analog in 3 dimensions is to ask that the frame be such that the length of the base space be equal to $2\pi$, the length of a unit circle. (Although this restriction is very weak, it seems to be the only viable analog of the Bondi restriction in 4 dimensions.) The completion we gave above does not satisfy this condition. However, it is trivial to rectify this situation through a (constant) conformal rescaling. Set $\Omega' = (1/\alpha)\Omega$. Then, \nopagebreak[3]\begin{equation} {d{\tilde{\sigma}}'}^2 = - {\Omega'}^2 du^2 + {2\over \alpha}du d\Omega'+ d \phi^2 \ , \label{(3.20)}\end{equation} where $\phi = (1/|\alpha|)\bar\phi$ ranges over $[0, 2\pi)$; the base space ${\cal B}$ is a circle of length $2\pi$ as required. Since we have performed a {\it constant} rescaling, we have $\tilde{R}' = 0$. However, $\tilde{f}$ does change: $\tilde{f}' = \alpha^2$. Thus, in the ``Bondi type'' frame, mass resides in $\tilde{B}$: Since $\tilde{B}= {\textstyle{1\over2}}\alpha^2$ in this frame, the mass is given by \nopagebreak[3]\begin{equation} M = {1\over 4G}(1 - \sqrt{2\tilde{B}})\, . \label{(3.21)}\end{equation} Thus, our expectation of the last sub-section that $\tilde{B}$ would be the analog of the Bondi mass aspect is correct. However, to arrive at this interpretation, we must use a properly normalized (``Bondi-like'') conformal frame. This point will be important in Sec.\ref{s5}. We will conclude this discussion with two remarks. The metric considered in this sub-section is stationary and so it is appropriate to compare the situation we encountered with that in 4-dimensional stationary space-times. In both cases, the stationary Killing field selects a preferred rest frame at $\it I$ (which, in our example, is given by the time translation $\partial /\partial{u}$). However, in 4 dimensions, one can find {\it asymptotic} Killing fields corresponding to space translations as well. In the present case, on the other hand, due to the conical singularity, globally defined space-translation vector fields fail to exist {\it even asymptotically} (unless $M= 0$ in which case the deficit angle vanishes). 
For example, we can introduce Cartesian coordinates $t, \bar{x}, \bar{y}$ corresponding to $t,\rho,\bar\phi$. Then, $\bar{X}^a \equiv \partial/\partial \bar{x}$ and $\bar{Y}^a \equiv \partial/\partial \bar{y}$ {\it are} local Killing fields. However, the chart itself fails to be globally defined and so do the vector fields. Another strategy is suggested by what happens in Minkowski space-time. In any of its standard completions space-translations are represented by the vector fields $(\cos\phi) \tilde{n}^a$ and $(\sin\phi ) \tilde{n}^a$. In the ``Bondi-like'' conformal frame introduced above these vector fields are globally defined at null infinity of our point particle space-time as well. However, now they fail to be Killing fields even asymptotically. The second remark is that the stationary space-time we considered here is a very special solution. Generic stationary solutions in 3-dimensional general relativity have a logarithmic behavior near infinity and therefore fail to satisfy our definition of asymptotic flatness at null infinity. (See Appendix B. Our point particle solution corresponds essentially to the special case $C= 0$ in Eqs. (\ref{B2},\ref{B3}).) This is another key difference between 3 and 4 dimensions. \goodbreak \section{Asymptotic symmetries} \label{s4} In 4 dimensions, the asymptotic symmetry group at null infinity is given by the BMS group \cite{13,15,17,30}. Its structure is the same as that of the Poincar\'e group in that it is a semi-direct product of an Abelian group with the Lorentz group. The Abelian group, however, is {\it infinite} dimensional; it is the additive group of functions on a 2-sphere (the base space of $\it I$) with conformal weight +1. It is called the group of super-translations. The four dimensional group of translations can be invariantly singled out. However, unless additional conditions are imposed (near $i^0$ or $i^+$), the BMS group does not admit a preferred Lorentz or Poincar\'e sub-group. This enlargement from the ten dimensional Poincar\'e group to the infinite dimensional BMS group is brought about because, in presence of gravitational radiation, one can not single out a preferred Minkowski metric even at infinity; one can only single out a family of Minkowskian metrics and they are related by super-translations. In this section, we will examine the asymptotic symmetry group in 3 dimensions. One's first impulse is to expect that the situation would be completely analogous to that in 4 dimensions since the ``universal structure'' available at $\it I$ in the two cases is essentially the same. It turns out however that because the space-time metric is dynamical even at infinity --i.e., because in general the physical metric does not approach a Minkowskian metric even to the leading order-- the group of asymptotic symmetries is now enlarged even further. Furthermore, now it is not possible to single out even the group of translations without additional conditions. This section is divided into two parts. The first discusses the asymptotic symmetry group and the second introduces additional conditions to single out translations. \goodbreak \subsection{Asymptotic symmetry group} \label{s4.1} Let us begin by recalling the universal structure, i.e., the structure at infinity that is common to all asymptotically flat space-times. As usual, the asymptotic symmetries will then be required to preserve this structure. 
Given {\it any} space-time satisfying our definition of asymptotic flatness and {\it any} conformal completion thereof, its null infinity, $\it I$, is a 2-manifold, topologically $S^1\times R$. It is ruled by a (divergence-free) null vector field $\tilde{n}^a$ and its intrinsic, degenerate metric $\tilde{q}_{ab}$ satisfies: \nopagebreak[3]\begin{equation}\tilde{q}_{ab} \tilde{V}^b \= 0\ \ {\hbox{\rm if and only if}}\ \ \tilde{V}^b \propto \tilde{n}^b\, , \label{(4.1)}\end{equation} where $\tilde{V}^b$ is an arbitrary vector field on $\it I$. The ``base space'' ${\cal B}$ of $\it I$, i.e., the space of integral curves of $\tilde{n}^a$ on $\it I$, has the topology of $S^1$. As in 4 dimensions, the intrinsic metric $\tilde{q}_{ab}$ on $\it I$ is the pull-back to $\it I$ of a metric $\bar{q}_{ab}$ on ${\cal B}$; that is, ${\cal L}_{\tilde{n}} \tilde{q}_{ab} = 0$. Next, we have the conformal freedom given in Eq. (\ref{(3.2)}). Thus, $\it I$ is equipped with an equivalence class of pairs $(\tilde{q}_{ab}, \tilde{n}^a)$ satisfying Eqs. (\ref{(4.1)}, \ref{(3.5)}), where two are considered as equivalent if they differ by a conformal rescaling: $(\tilde{q}_{ab}, \tilde{n}^a)\approx (\omega^2 \tilde{q}_{ab}, \omega^{-1} \tilde{n}^a)$, with ${\cal L}_{\tilde{n}} \omega = 0$. This structure is completely analogous to that at null infinity of 4-dimensional asymptotically flat space-times. As we already saw, in 3 dimensions, a further simplification occurs: in any conformal frame, $\it I$ admits a unique co-vector field $\tilde{m}_a$ such that: $\tilde{q}_{ab} = \tilde{m}_a \tilde{m}_b$. Hence, in the universal structure, we can replace $\tilde{q}_{ab}$ by $\tilde{m}_a$. Thus, $\it I$ is equipped with equivalence classes of pairs $(\tilde{m}_a, \tilde{n}^a)$ satisfying: \nopagebreak[3]\begin{equation} \tilde{m}_a \tilde{n}^a \= 0 \quad {\rm and} \quad {\cal L}_{\tilde{n}} \tilde{m}_a \= 0 \ , \label{(4.2)}\end{equation} where $(\tilde{m}_a, \tilde{n}^a ) \approx (\omega \tilde{m}_a, \omega^{-1} \tilde{n}^a)$ for any nowhere vanishing smooth function $\omega$ on $\it I$ satisfying ${\cal L}_{\tilde{n}} \omega = 0$. Note that the second equation in (4.2) implies that $\tilde{m}_a$ is the pull-back to $\it I$ of a co-vector field $\bar{m}_a$ on the base space ${\cal B}$. The asymptotic symmetry group ${\cal G}$ is the sub-group of the diffeomorphism group of $\it I$ which preserves this structure. An infinitesimal asymptotic symmetry is therefore a vector field $\tilde{\xi}^a$ on $\it I$ satisfying: \nopagebreak[3]\begin{equation}{\cal L}_{{\tilde\xi}} \tilde{m}_a \= \tilde\alpha \tilde{m}_a \quad{\rm and}\quad {\cal L}_{\tilde\xi} \tilde{n}^a \= - \tilde\alpha \tilde{n}^a\ , \label{(4.3)}\end{equation} for some smooth function $\tilde\alpha$ (which depends on ${\tilde\xi}^a$) satisfying ${\cal L}_{\tilde{n}} \tilde\alpha \= 0$. Eqs. (\ref{(4.3)}) ensure that the 1-parameter family of diffeomorphisms generated by ${\tilde\xi}^a$ preserves the ``ruling'' of $\it I$ by the integral curves of its null normal, its divergence-free character, and maps pair $(\tilde{m}_a, \tilde{n}^a)$ to an equivalent one, thereby preserving each equivalence class. It is easy to check that vector fields satisfying Eqs. (\ref{(4.3)}) form a Lie algebra which we will denote by ${\cal L}{\cal G}$. This is the Lie algebra of infinitesimal asymptotic symmetries. To unravel the structure of ${\cal L}{\cal G}$, we will proceed as in 4 dimensions. 
Let ${\cal L}{\cal S}$ denote the subspace of ${\cal L}{\cal G}$ spanned by vector fields of the type ${\tilde\xi}^a \= \tilde{h} \tilde{n}^a$. Elements of ${\cal L}{\cal S}$ will be called infinitesimal {\it super-translations}. Eqs. (\ref{(4.3)}) imply: \nopagebreak[3]\begin{equation}{\cal L}_{\tilde{n}} \tilde{h} \= 0, \quad {\cal L}_{\tilde{h}\tilde{n}}\tilde{m}_a = 0, \quad {\rm and} \quad {\cal L}_{\tilde{h}\tilde{n}} \tilde{n}^a = 0\, . \label{(4.4)}\end{equation} Thus, for any super-translation, $\tilde{h}$ is the pull-back to $\it I$ of a function $\bar{h}$ on the base space ${\cal B}$ and the action of the super-translation leaves each pair $(\tilde{m}_a, \tilde{n}^a)$ individually invariant. Furthermore, given any ${\tilde\xi}^a\in {\cal L}{\cal G}$ and any $\tilde{h}\tilde{n}^a \in {\cal L}{\cal S}$, we have: \nopagebreak[3]\begin{equation} [{\tilde\xi}, \tilde{h}\tilde{n} ]^a = ({\cal L}_{\tilde\xi} \tilde{h} - \tilde\alpha \tilde{h}) \tilde{n}^a\ .\label{(4.5)}\end{equation} Thus, ${\cal L}{\cal S}$ is a Lie ideal of ${\cal L}{\cal G}$. To unravel the structure of ${\cal L}{\cal G}$, let us examine the quotient ${\cal L}{\cal G}/{\cal L}{\cal S}$. Let $[{\tilde\xi}^a]$ denote the element of the quotient defined by ${\tilde\xi}^a$; $[{\tilde\xi}^a]$ is thus an equivalence class of vector fields on $\it I$ satisfying (4.3), where two are regarded as equivalent if they differ by a super-translation. The second equation in (4.3) implies that every ${\tilde\xi}^a$ in ${\cal L}{\cal G}$ admits an unambiguous projection $\bar{\xi}^a$ to the base space ${\cal B}$. The equivalence relation implies that all vector fields ${\tilde\xi}^a$ in $[{\tilde\xi}^a]$ project to the same field $\bar{\xi}^a$ on ${\cal B}$ and that $[{\tilde\xi}^a]$ is completely characterized by $\bar{\xi}^a$. What conditions does $\bar{\xi}^a$ have to satisfy? The only restriction comes from the first equation in (4.3): $\bar{\xi}^a$ must satisfy ${\cal L}_{\bar{\xi}}\bar{m}_a = \bar\alpha\bar{m}_a$ for some $\bar\alpha$ on ${\cal B}$. However, since ${\cal B}$ is {\it one} dimensional, this is no restriction at all! Thus, $\bar{\xi}^a$ can be {\it any} smooth vector field on the circle ${\cal B}$. ${\cal L}{\cal G}/{\cal L}{\cal S}$ is thus the Lie algebra of all smooth vector fields on $S^1$, i.e., the Lie algebra of ${\rm Diff}(S^1)$. (In 4 dimensions, by contrast, the first of equations (4.3) is very restrictive since the base space is a 2-sphere; $\bar{\xi}^a$ has to be a conformal Killing field on $(S^2,\bar{q}_{ab})$. The Lie algebra of these conformal Killing fields is just six dimensional and is isomorphic to the Lie algebra of the Lorentz group in 4 dimensions.) These results imply that the group ${\cal G}$ of asymptotic symmetries has the structure of a semi-direct product. The normal subgroup ${\cal S}$ is the Abelian group of super-translations. Given a conformal frame, each infinitesimal super-translation ${\tilde\xi}^a = \tilde{h}\tilde{n}^a$ is characterized by a function $\tilde{h}$. If we change the conformal frame, $\tilde{g}_{ab} \mapsto \tilde{g}'_{ab} = \omega^2 \tilde{g}_{ab}$, we have $\tilde{n}^a\mapsto \tilde{n}'^a = \omega^{-1}\tilde{n}^a$ and hence $\tilde{h}\mapsto \tilde{h}' = \omega \tilde{h}$. Thus, each super-translation is characterized by a conformally weighted function on the circle ${\cal B}$; the super-translation subgroup ${\cal S}$ is isomorphic with the additive group of smooth functions on a circle with unit conformal weight. 
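It is perhaps helpful to exhibit this structure in an explicitly adapted chart; the following short calculation is purely illustrative and uses one convenient (but always available) choice of coordinates. In a fixed conformal frame, introduce coordinates $(u, \phi)$ on $\it I$ such that $\tilde{n}^a\partial_a = \partial/\partial u$ and $\tilde{m}_a dx^a = d\phi$; this is possible because $\tilde{m}_a\tilde{n}^a \= 0$ and ${\cal L}_{\tilde{n}}\tilde{m}_a \= 0$. Writing $\tilde\xi^a\partial_a = A(u,\phi)\,\partial_u + X(u,\phi)\,\partial_\phi$, the second of Eqs. (\ref{(4.3)}) gives $\partial_u X = 0$ and $\partial_u A = \tilde\alpha$, while the first then gives $\tilde\alpha = \partial_\phi X$. Hence $$ \tilde\xi^a\partial_a = \big[\,\tilde{h}(\phi) + u\,\partial_\phi X(\phi)\,\big]\,\partial_u + X(\phi)\,\partial_\phi\ , $$ with $\tilde{h}$ and $X$ arbitrary smooth functions on ${\cal B}$. The first term is the general super-translation, while the projection $X(\phi)\partial_\phi$ is an arbitrary smooth vector field on the circle, in agreement with the identification of ${\cal L}{\cal G}/{\cal L}{\cal S}$ made above.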
The quotient ${\cal G}/{\cal S}$ of ${\cal G}$ by the super-translation subgroup ${\cal S}$ is the group Diff$(S^1)$ of diffeomorphisms on a circle. In the semi-direct product, Diff$(S^1)$ acts in the obvious way on the additive group of conformally weighted functions on $S^1$. We will conclude this sub-section with some remarks. 1. In the light of the above discussion, let us re-examine the conditions on the stress-energy tensor in our definition of asymptotic flatness. In Sec.\ref{s3.1} we pointed out that the conditions are considerably weaker than those normally imposed in 4 dimensions and argued that imposition of stronger conditions would deprive the framework of interesting examples. Could we have imposed even weaker conditions? Note that, if $\Omega T_{ab}$ fails to admit a well-defined limit to $\it I$, we could not even have concluded that $\it I$ is a null hypersurface (see Eq. (\ref{(3.1)})). What about the condition on the trace? In absence of this condition, the pull-back of $\tilde{L}_{ab}$ to $\it I$ would not have vanished. This then would have implied ${\cal L}_{\tilde{n}} \tilde{q}_{ab} \= (4/3)\tilde{L} \tilde{q}_{ab} \not= 0$. Consequently, the asymptotic symmetry group would have borne little resemblance to the BMS group \cite{13,15,17,30} that arises in 4 dimensions. Thus, the specific conditions we used in the definition strike a balance: they are weak enough to admit interesting examples and yet strong enough to yield interesting structure at $\it I$. 2. The semi-direct product structure of the asymptotic symmetry group is the same as that of the BMS group. The super-translation group is also the natural analog of the super-translation subgroup of the BMS group. The quotient, however, is quite different: while it is the Lorentz group in the 4-dimensional case, it is now an {\it infinite dimensional} group, Diff$(S^1)$. Recall, however, that in the corresponding analysis in 4 dimensions, the base space of $\it I$ is a 2-sphere. $S^2$ admits a unique conformal structure and the Lorentz group arises as its conformal group. In the present case, the base space ${\cal B}$ is topologically $S^1$ and the quotient of ${\cal G}$ by the super-translation subgroup is the conformal group of $S^1$. (Recall that $\bar{\xi^a}$ has to satisfy ${\cal L}_{\bar{\xi}} \bar{q}_{ab} = 2\bar\alpha \bar{q}_{ab}$ since $\bar{q}_{ab} = \bar{m}_a\bar{m}_b$.) It just happens that, since $S^1$ is 1-dimensional, {\it every} diffeomorphism of $S^1$ maps $\bar{q}_{ab}$ to a conformally related metric. This is the origin of the enlargement. 3. Can one understand this enlargement from a more intuitive standpoint? Recall that the symmetry group is enlarged when the boundary conditions are weakened. Thus, it is the weaker conditions on the fall-off of stress-energy --and hence on the curvature of the physical metric-- that is responsible for the enlargement of the group. This can be seen in the explicit asymptotic form of the metric of Einstein-Rosen waves that we encountered in Sec.\ref{s2.3}, \nopagebreak[3]\begin{equation} d\sigma^2 = e^{2\gamma} (-du^2 - 2dud\rho) + \rho^2 d\phi^2\ , \label{(4.7)}\end{equation} where $\gamma \= \gamma(u)$ is a dynamical field on $\it I$, sensitive to the radiation. If $\gamma =0$, we obtain Minkowski space. The radiative space-times that result when $\gamma \not= 0$ thus differ from the radiation-free Minkowski space already to the {\it leading} order at null infinity. 
In 4 dimensions, by contrast, the leading order behavior of the physical metric has no dynamical content; the components of the metric carrying physical information fall as $1/r$. It is this difference that is responsible for the tremendous enlargement of the asymptotic symmetry group. Let us analyze this point further. Suppose, in 4 dimensions, we consider metrics whose form is suggested by (\ref{(4.7)}): \nopagebreak[3]\begin{equation} ds^2 = e^{2\gamma}(-du^2 -2 dudr) + r^2 d\Sigma^2 \ , \label{(4.8)}\end{equation} where $\gamma = \gamma(u,r,\theta, \phi)$ has a well-defined limit as $r$ tends to infinity along constant $u, \theta,\phi$ curves, and $d\Sigma^2$ denotes the 2-sphere metric. Now, the situation is similar to that encountered in the Einstein-Rosen waves: metrics with different radiative content differ already to the leading order. Nonetheless, setting $\Omega = 1/r$, it is easy to carry out a conformal completion of this metric and verify that it admits a smooth $\it I$. However, the problem is that the {\it curvature of this metric fails to fall off sufficiently rapidly for the stress-energy tensor to have the fall-off normally required in 4 dimensions}. Hence, this metric fails to be asymptotically flat in the usual 4-dimensional sense. In 3 dimensions, on the other hand, to obtain an interesting framework, we are forced to admit the analogous metrics (\ref{(4.7)}). \goodbreak \subsection{Translations} \label{s4.2} In 4 dimensions, one can single out translations from the BMS group in a number of ways. Somewhat surprisingly, it turns out that every one of those techniques fails in 3 dimensions. We will first illustrate this point and then show that one can introduce additional conditions to single out translations. As one might expect from our discussion of Sec.\ref{s3.3}, the situation is subtle even after introduction of the stronger conditions. Among various characterizations of the translation sub-group of the BMS group, the one that is conceptually simplest and aesthetically most pleasing is given by group theory \cite{30}: Translations form the unique 4-dimensional {\it normal} subgroup of the BMS group. In three dimensions, however, the asymptotic symmetry group is much larger; the quotient of ${\cal G}$ by super-translations is now ${\rm Diff}(S^1)$ --the {\it full} diffeomorphism group of a circle-- rather than the (finite dimensional) Lorentz group. Consequently, ${\cal G}$ does not admit {\it any} finite dimensional normal subgroup. Thus, the most obvious 4-dimensional strategy is not applicable. In 4 dimensions, another method of singling out translations is to use the notion of ``conformal-Killing transport'' \cite{20}. The conformal Killing data at any point of $\it I$ corresponding to translations are integrable because the Weyl tensor (of the tilde metric) vanishes there identically. In 3 dimensions, the analogous condition would be vanishing of the Bach tensor. Unfortunately, as we saw in Sec.\ref{s3.2}, in the presence of matter fields the Bach tensor fails to vanish at $\it I$. (The explicit expression of the Bach tensor in the case of Einstein-Rosen waves is given in Appendix A.) This in turn makes the conformal-Killing transport of data that would have corresponded to translations non-integrable on $\it I$ and the strategy fails. 
Finally, a third method of selecting translations in 4 dimensions is to go to a Bondi conformal frame, i.e., one in which the metric $\bar{q}_{ab}$ on the base space is the unit 2-sphere metric and consider the 4-parameter family of super-translations ${\tilde\xi}^a = \tilde{h}\tilde{n}^a$, where $\tilde{h}$ is any linear combination of the $\ell= 0, 1$ spherical harmonics. There is only a 3-parameter family of Bondi frames and the conformal factor that relates them is highly constrained. As a result, if $\tilde{h}$ is a linear combination of the $\ell =0,1$ spherical harmonics in one Bondi frame, it is so in {\it all} Bondi frames \cite{30}. The construction thus selects precisely a 4-parameter sub-group of the super-translation group ${\cal S}$. This strategy fails in 3 dimensions because the base space is now $S^1$ and the notion of a ``unit $S^1$ metric'' fails to have the rigidity that the unit 2-sphere metrics enjoy. Indeed, as we already remarked in Sec.\ref{s3.3}, the only non-trivial analog of the Bondi frames condition is to require that the conformal frame be such that the length of the base space ${\cal B}$ be $2\pi$ and there is an {\it infinite} dimensional freedom in the choice of such frames. Consequently, we can not select a 3-dimensional space of translations in this manner. Thus, to select translations, we need to impose additional conditions. To be viable, they should select the standard, 3-dimensional translation group in Minkowski space-time. However, as we saw in the point particle space-time, asymptotic space-translations do not exist globally near $\it I$ if $M\not= 0$. (This is also the case for Einstein-Rosen waves.) Hence, one would expect that, when the total (ADM-type) mass is non-zero, the conditions should select only a time translation. Thus, the conditions have to be subtle enough to achieve both these goals at once. Fortunately, such conditions do exist and are, furthermore, satisfied by a large class of examples. A space-time $(M, g_{ab})$ will be said to be {\it strongly asymptotically flat at null infinity} if it satisfies the boundary conditions of Sec.\ref{s3.1} and admits a conformal completion in which \nopagebreak[3]\begin{equation}\tilde{B}\equiv {}\u{S} - \u{L} \equiv {1\over 2}\tilde{f} - {}\tilde{R} \,\, \rightarrow \textstyle{k\over 2} \ge 0 \quad \hbox{\rm as one approaches $i^o$ along $\it I$\ ,} \label{saf}\end{equation} where $k$ is a constant. Note that if the space-time is axi-symmetric, $\tilde{B}$ automatically approaches a constant: if $\Omega$ is chosen to be rotationally symmetric, $\tilde{B}$ would also be rotationally symmetric everywhere on $\it I$ and hence, in particular, its limit to $i^o$ along $\it I$ will be angle independent as required. (We will see in Sec.\ref{s5} that the positivity of $k$ ensures that the ADM-type energy is well-defined.) Thus, the additional condition is satisfied in a large class of examples, including the Einstein-Rosen waves and our point particle space-time. Note that if the last condition is satisfied in a given conformal frame, we can rescale the conformal factor by a {\it constant} and obtain another conformal frame in which it is also satisfied. We can eliminate this trivial freedom by a normalization condition. A conformal frame will be said to be of {\it Bondi-type} if $\tilde{B}$ satisfies (\ref{saf}) {\it and} if $\oint_{{\cal B}} \tilde{m}_a dS^a = 2\pi$. A natural question is: How many Bondi-type conformal frames does a strongly asymptotically flat space-time admit? 
We will show that Minkowski space admits precisely a 2-parameter family of them and the freedom corresponds precisely to that of choosing a unit time-like vector (i.e., a rest frame). This is completely analogous to the freedom in the choice of Bondi frames in 4 dimensions. If the ADM-type mass is non-zero, however, the Bondi-type frame will turn out to be generically unique (unlike Bondi frames in 4 dimensions). To establish these results, let us fix a strongly asymptotically flat space-time and two Bondi-type completions thereof in which $\tilde{B}$ tends, respectively, to $k/2$ and $k'/2$ for some constants $k$ and $k'$. (In Minkowski space-time, it turns out that $k = k' =1$.) Let us suppose that the two conformal frames are related by $\Omega = \alpha \Omega'$, i.e., $\tilde{g}_{ab} = \alpha^2 \tilde{g}'_{ab}$. Then, the transformation property (3.16) of $\tilde{B}$ implies: \nopagebreak[3]\begin{equation}{k'\over 2} = {k\over 2} \alpha^2 + \alpha\, \tilde\partial^2 \alpha - {\textstyle{1\over 2}}\, (\tilde\partial\alpha)^2 \, , \label{(4.9)}\end{equation} where $\tilde\partial \equiv \tilde{m}^a\tilde{D}_a \equiv \partial/\partial\phi$. The question now is: How many (smooth) solutions does Eq. (\ref{(4.9)}) admit? The equation is non-linear and rather complicated. However, if we take its $\tilde\partial$-derivative we are left with a linear equation: \nopagebreak[3]\begin{equation} \tilde\partial (\tilde\partial^2 \alpha + k \alpha) = 0\, . \label{(4.10)}\end{equation} This has regular solutions only if $k= n^2$ for an integer $n$ (recall that, in a Bondi-type frame, the range of $\phi$ on ${\cal B}$ is in $[0, 2\pi)$). Similarly, interchanging the role of primed and unprimed frames, we conclude that $k' = {n'}^2$ for some integer $n'$. Finally, the fact that the length of ${\cal B}$ in {\it both} conformal frames is $2\pi$ implies that $n' = n$. Thus, unless $k=k' = n^2$, Eq. (\ref{(4.9)}) does not admit a regular solution. Thus, unless $k= n^2$, the Bondi-type conformal frame is in fact unique. In this generic case, we have a preferred time translation sub-group of ${\cal G}$ generated by ${\tilde\xi}^a = \tilde{n}^a$. In the point particle example, this is precisely the time translation selected by the rest frame of the particle. In Einstein-Rosen waves, it turns out to be the one selected by the total Hamiltonian of the system \cite{10}. If $k = n^2$, the reduced equation (\ref{(4.10)}) clearly admits a 2-parameter family of solutions: In terms of the angular coordinate $\phi$ on ${\cal B}$ (with $\tilde{m}_a = \tilde{D}_a\phi$), these are given by \nopagebreak[3]\begin{equation} \alpha = A + B \cos n\phi + C \sin n\phi, \quad {\rm with} \quad - A^2 +B^2 +C^2 = -1\, . \label{(4.11)}\end{equation} It is straightforward to check that they also satisfy the full equation (\ref{(4.9)}) (see the computation displayed below). In the obvious completion of Minkowski space-time (obtained by setting $M = 0$ in the point particle example or $\psi =0$ in Einstein-Rosen waves), we have $\tilde{f}=1$ and $\tilde{R} = 0$, whence $\tilde{B} = 1/2$. This corresponds to the case $n =1$. Thus, Minkowski space-time does admit Bondi-type conformal frames and the constant $k$ is precisely $1$ (i.e., we can not obtain any other value by going from one Bondi-type frame to another). There is precisely a 2-parameter family of Bondi-type frames related by a conformal factor $\alpha$ of Eq. (\ref{(4.11)}) (with $n=1$). 
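For completeness, the statement below Eq. (\ref{(4.11)}), namely that these $\alpha$ also solve the full equation (\ref{(4.9)}), can be verified directly. With $\alpha = A + B\cos n\phi + C\sin n\phi$ and $k = n^2$ one has $\tilde\partial^2\alpha = -n^2(\alpha - A)$ and $(\tilde\partial\alpha)^2 = n^2\big[B^2 + C^2 - (\alpha - A)^2\big]$, so that $$ {k\over 2}\,\alpha^2 + \alpha\,\tilde\partial^2\alpha - {\textstyle{1\over 2}}(\tilde\partial\alpha)^2 \,=\, {n^2\over 2}\Big[\alpha^2 - 2\alpha(\alpha - A) + (\alpha - A)^2 - B^2 - C^2\Big] \,=\, {n^2\over 2}\,\big(A^2 - B^2 - C^2\big) \,=\, {n^2\over 2}\ , $$ where the constraint $-A^2 + B^2 + C^2 = -1$ was used in the last step. The right side is indeed $k'/2$ with $k' = n^2 = k$, as required of a map between two Bondi-type frames.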
Fix any one of these Bondi-type frames and consider the 3-parameter family of super-translations of the form $\tilde{h}\tilde{n}^a$ where \nopagebreak[3]\begin{equation} \tilde{h} = (a + b \cos\phi + c \sin\phi)\, . \label{(4.12)}\end{equation} Using Eq. (\ref{(4.11)}) (with $n = 1$), one can check that this 3-dimensional space of super-translations is left invariant if we replace one Bondi-type frame by another. Following the (third) strategy (mentioned above) used in 4 dimensions, one can call this the translation sub-group of the asymptotic symmetry group. This label is indeed appropriate: It is easy to check that the restrictions to $\it I$ of any translational Killing field of Minkowski space have precisely this form. Thus, if $n=1$, the procedure does select for us a 3-dimensional translation sub-group of ${\cal G}$. It turns out, however, that if $n =1$, the deficit angle at spatial infinity vanishes and we therefore have zero ADM-type energy. By the 3-dimensional positive energy theorem \cite{10}, the only physically interesting space-time in which this can occur is the Minkowski space-time. If $k >1$, we have a surplus angle at spatial infinity and the ADM-type energy is now negative. We will therefore ignore the $n>1$ cases from now on (although they do display interesting mathematical structures; see Appendix B). To summarize, strongly asymptotically flat space-times generically admit a preferred Bondi-type frame and a preferred time-translation. In the exceptional cases, where $k = n^2$, we obtain a 2-parameter family of Bondi-type frames. However, the only physically interesting exceptional case is Minkowski space-time where $n=1$. \goodbreak \section{Conserved quantities} \label{s5} This section is divided into two parts. In the first, we introduce the notion of energy at a retarded instant of time and of fluxes of energy and, in the second, we discuss super-momenta. Again, while the general ideas are similar to those introduced by Bondi, Sachs and Penrose in 4 dimensions, there are also some important differences. Perhaps the most striking difference is the following. Consider generic, strongly asymptotically flat space-times. As we saw, in this case, there is a preferred Bondi-type frame and a preferred time-translation subgroup of the asymptotic symmetry group. However, as the example of Einstein-Rosen waves illustrates, because the space-time metric is dynamical even at infinity, the vector field $\tilde{n}^a$ (or a constant multiple thereof) in the Bondi-type frame is {\it not} the extension to $\it I$ of a {\it unit} time translation in the space-time. If the initial data of the scalar field are of compact support, space-time is flat in a neighborhood of $i^o$ and a constant multiple of $\tilde{n}^a$ --namely, $(\exp \, \tilde\gamma_0) \tilde{n}^a$-- coincides with the extension to $\it I$ of the unit time translation near $i^o$. However, in the region of $\it I$ with non-trivial radiation, the restriction of the unit time translation is given by $(\exp\,\tilde\gamma (u)) \tilde{n}^a$; the rescaling involved is $u$-dependent whence the vector field is not even a super-translation! Energy, on the other hand, is associated with unit time translations. Hence, energy at null infinity is not directly associated with any component of super-momentum and a new strategy is needed to define it. \goodbreak \subsection{Energy} \label{s5.1} The strategy we will adopt is to capture the notion of energy through the appropriate deficit angle. 
We will first begin with motivation, then write down the general expression of energy and finally verify that it has the expected physical properties. Let us begin with an axi-symmetric, strongly asymptotically flat space-time, and consider its Bondi-type completion with an axi-symmetric conformal factor. (Thus, $\oint_{{\cal B}} \tilde{m}_a dS^a = 2\pi$.) Fix a cross-section $C_o$ of $\it I$ to which the rotational Killing field is tangential. Because of the axi-symmetry of the construction, the field $\tilde{B}$ is constant on $C_o$, say $\tilde{B}|_{C_o} = k_o/2$. If this were a cross-section of $\it I$ of the point particle space-time, it follows from our discussion of Sec.\ref{s3.3} (cf. Eq. (\ref{(3.21)})) that we would associate with it energy \nopagebreak[3]\begin{equation} E= {1\over 4G}(1 -\sqrt{k_o})\, .\label{(5.1)}\end{equation} (Thus, in particular, if $k_o= 1$ as in Minkowski space-time, we would have $E= 0$.) By inspection, we can generalize this expression to arbitrary cross-sections of null infinity of general --i.e., non-axi-symmetric-- space-times. Given any strongly asymptotically flat space-time, a Bondi-type conformal frame and a cross-section $C$ of $\it I$, we will set: \nopagebreak[3]\begin{equation} E[C] := {1\over 8\pi G}\oint_{C}\, (1 - \sqrt{2\tilde{B}})\,\tilde{m}_a dS^a \, .\label{(5.2)}\end{equation} The appearance of the square-root is rather unusual and seems at first alarming: the formula would not be meaningful if $\tilde{B}$ were to become negative. Note, however, that, by assumption of strong asymptotic flatness, the limit $k/2$ of $\tilde{B}$ to $i^o$ is positive. Furthermore, since ${\cal L}_{\tilde{n}} \tilde{B} = \lim_{\mapsto\it I} \,\Omega^{-2}\, \tilde{L}_{cd}\tilde{n}^c \tilde{n}^d$ and since the right side is positive definite if the matter sources satisfy local-energy conditions, $\tilde{B}$ remains positive on $\it I$. Thus, $E[C]$ is bounded above by $1/4G$ which is also the upper bound of the total Hamiltonian at spatial infinity \cite{10}. Let us now verify various properties of this quantity which provide strong support in favor of its interpretation as energy. {}$\bullet$ First, let us suppose that we are in Minkowski space-time. Then, in {\it any} Bondi-type frame, we have $\tilde{B} = 1/2$ everywhere on $\it I$. Hence, on any cross-section, the energy vanishes. {}$\bullet$ Next, let us consider the point-mass space-time with positive $M$. Then from Sec.\ref{s4.2} we know that there is a unique Bondi-type frame and in this frame, ${2\tilde{B}} = (1- 4GM)^2$ whence, on {\it any} cross-section $C$, we obtain $E[C] = M$. This is of course not surprising since our general definition was motivated by the point mass example. However, the result is not totally trivial because we are now allowing arbitrary cross-sections, not necessarily tangential to the rotational Killing field. {}$\bullet$ Consider Einstein-Rosen waves. In the non-trivial case when the scalar field $\psi$ is non-zero, the Bondi-type frame is unique. In this frame, $2\tilde{B} = \exp (-2\tilde\gamma (u))$. Hence, $$ E[C] = {1\over 8\pi G} \oint_C (1 - e^{-\tilde\gamma (u)})d\phi\, .$$ In the limit to $i^o$ (or, in the past of the support of $\tilde\psi (u)$ on $\it I$), we have $E \mapsto (1/4G)(1- \exp (-\tilde\gamma_0))$. This is {\it precisely} the value of the total Hamiltonian at spatial infinity --the generator of unit time translations near $i^o$. 
This result is non-trivial because the Hamiltonian is defined \cite{10} through {\it entirely} different techniques using the symplectic framework based on Cauchy slices. In the limit to $i^+$, we know from Sec.\ref{s2.3} that $\tilde\gamma (u)$ tends to zero. Hence $E[C]$ tends to zero. This behavior of $E[C]$ is also physically correct because $i^+$ is regular in these space-times. We wish to emphasize that these two constraints --agreement with the known expressions both at $i^o$ and $i^+$ of Einstein-Rosen waves-- on the viable expression of energy are strong. Hence, the fact that there exists a {\it general} expression for $E[C]$, involving only fields defined {\it locally} on the cross-section $C$, which reduces to the correct limits at both ends of $\it I$ of the Einstein-Rosen waves is quite non-trivial. {}$\bullet$ What about the flux of energy? If a cross-section $C_1$ is in the future of a cross-section $C_2$, from Eqs. (\ref{(3.15)}, \ref{(5.2)}) we have: \nopagebreak[3]\begin{eqnarray} E[C_1] - E[C_2] &=& {1\over 8\pi G} \int_{\Delta} {\tilde{D}}_{[a} (1 - \sqrt{2\tilde{B}})\, \tilde{m}_{b]} dS^{ab}\nonumber\\ &=& -{1\over 16\pi G} \int_{\Delta}\, (2\tilde{B})^{-{1\over 2}}\, \lim_{\mapsto I} (\Omega^{-2} \tilde{L}_{mn}\tilde{n}^m \tilde{n}^n)\, \tilde\epsilon_{ab} dS^{ab}\, ,\label{(5.3)}\end{eqnarray} where $\Delta$ is the portion of $\it I$ bounded by $C_1$ and $C_2$. (The limit in the integrand is well-defined because of our conditions on the stress-energy tensor. For the Einstein-Rosen waves, it is $2({\cal L}_{\tilde{n}}\tilde{\psi})^2$; see Eq. (\ref{se}).) If the matter sources satisfy local energy conditions, the integrand in the second expression is positive definite. Thus, $E[C_1] \le E[C_2]$, the equality holding if and only if there is no flux of matter through the region $\Delta$. As one would expect, radiation through $\it I$ carries positive energy. The appearance of $1/\sqrt{2\tilde{B}}$ in the integrand is not alarming because, as remarked above, for the class of space-times under consideration, $\tilde{B}$ is guaranteed to be positive on $\it I$ in Bondi-type frames. {}$\bullet$ In the case when the source is a zero rest-mass scalar field, we can make the energy flux more explicit: $\lim_{\mapsto I} (\Omega^{-2} \tilde{L}_{mn}\tilde{n}^m \tilde{n}^n) = 2 ({\cal L}_{\tilde{n}} \tilde\psi)^2$. Hence, for Einstein-Rosen waves, Eq. (5.3) reduces to: \nopagebreak[3]\begin{equation} E[C_1] - E[C_2] = -{1\over 8\pi G} \int_{\Delta}\, e^{\tilde\gamma(u)} ({\cal L}_{\tilde{n}}\tilde{\psi})^2\, \tilde{\epsilon}_{ab} dS^{ab}\, . \label{(5.4)}\end{equation} In the limit in which the cut $C_2$ tends to $i^o$, $E[C_2]$ reduces to the gravitational Hamiltonian \cite{10}. Hence, on any cut, $E[C]$ is given by the difference between the total Hamiltonian and the energy that is radiated out up until that cut. Finally, note that, because of the appearance of $\exp \tilde\gamma(u)$ in the integrand, this expression of energy-flux is more complicated than the flux formula (\ref{(2.37)}) for $\gamma(u)$, i.e., the flux formula for Thorne's C-energy \cite{2}. This is, however, to be expected: Even at spatial infinity, the total Hamiltonian is $(1/4G)(1-\exp (-\tilde\gamma_o))$ while the C-energy is just $(1/4G)\tilde\gamma_o$. In the weak field limit the two agree. But in strong fields, they are quite different. In particular, the total Hamiltonian and $E[C]$ are bounded above by $1/4G$ while the C-energy is unbounded above. 
{}$\bullet$ We saw that, in the case of Einstein-Rosen waves, our expression (5.2) of energy reduces to the total Hamiltonian in the limit as the cross-section approaches $i^o$. We expect that this result holds much more generally: It should hold in any space-time which is strongly asymptotically flat at null infinity and also satisfies the boundary conditions at spatial infinity needed in the Hamiltonian formulation \cite{10}. That is, broadly speaking, we expect the agreement to hold if the space-time is sufficiently well-behaved to have a well-defined total Hamiltonian {\it and} a well-defined limit of (5.2) to $i^o$. It is easy to provide strong plausibility arguments for this conjecture since both quantities measure the deficit angle at $i^o$. However, more detailed arguments are needed to establish this result conclusively. \goodbreak \subsection{Super-momentum} \label{s5.2} We will conclude the main paper by introducing a notion of super-momentum. For reasons indicated in the beginning of this section, however, these quantities are not related to the energy in a simple way. They are given primarily for completeness. As in 4 dimensions \cite{18}, in a suitable Hamiltonian formulation based on null infinity, they may be the generators of canonical transformations induced by super-translations. Recall first that, in 4 dimensions, super-momentum arises as a linear map from the space of super-translations to reals and is expressible in any conformal frame. The basic fields that enter are constructed from the asymptotic curvature of the rescaled metric (and matter sources). However, in order to ``remove irrelevant conformal factor terms'', one also has to introduce a kinematic field \cite{17} with appropriate conformal properties. The situation in 3 dimensions is rather similar. Let us begin by introducing the analog $\tilde\rho$ of the kinematical field. Set $\tilde\rho = 1/2$ in any Bondi-type conformal frame and transform it to any other frame via the following law: if $\Omega =\alpha \Omega'$, then \nopagebreak[3]\begin{equation}{\tilde{\rho}}' = \alpha^2 \tilde\rho + \alpha \, \tilde\partial^2\alpha - {\textstyle{1\over 2}} (\tilde\partial \alpha)^2\, , \label{(5.5)}\end{equation} where, as before $\tilde\partial \equiv \tilde{m}^a \tilde{D}_a$. Hence, the field $\tilde\rho -\tilde{B}$ transforms rather simply: $({\tilde\rho}' - {\tilde{B}}') = \alpha^2 (\tilde\rho - \tilde{B})$ (see Eq. (3.17)). As in 4 dimensions, the field $\rho$ serves two purposes: it removes the unwanted, inhomogeneous terms in the transformation properties of $\tilde{B}$ and it removes the ``purely kinematical'' part of $\tilde{B}$ in the Bondi-type frames. We can now define the super-momentum. Fix any conformal completion of the physical space-time (not necessarily of a Bondi-type). The value of the super-momentum on a super-translation $\tilde{T}\tilde{n}^a$, evaluated at a cross-section $C$ of $\it I$ will be: \nopagebreak[3]\begin{equation} P_{\tilde{T}}[C] = {1\over 8\pi G} \oint_{C}\,(\tilde\rho - \tilde{B})\, \tilde{T} \tilde{m}_a dS^a\ . \label{(5.6)}\end{equation} Under a conformal transformation, $\Omega \mapsto \Omega'=\alpha^{-1} \Omega$, we have ${\tilde{T}}' = \alpha^{-1} \tilde{T}$ and ${\tilde{m}_a}' = \alpha^{-1}\tilde{m}_a$. Hence, the 1-form integrand remains unchanged. Thus, as needed, the expression of super-momentum is conformally invariant; i.e., it is well-defined. Let us note its basic properties. 
First, by inspection, the map defined by the super-momentum $P$ from super-translations to reals is linear. Second, in Minkowski space-time, $\tilde\rho = \tilde{B}$ in any conformal frame. Hence, the value of super-momentum vanishes identically on {\it any} cross-section. Finally, since ${\cal L}_{\tilde{n}} \tilde\rho = 0$, we have \nopagebreak[3]\begin{equation}{\cal L}_{\tilde{n}} [(\tilde\rho - \tilde{B})\tilde{T} \tilde{m}_a] = - \lim_{\mapsto I} (\Omega^{-2}\,\tilde{L}_{mn}\tilde{n}^m \tilde{n}^n)\tilde{T} \tilde{m}_a\, .\label{(5.7)}\end{equation} Therefore, as in the case of energy, the flux of the component of the super-momentum along any time-like super-translation (i.e., one in which $\tilde{T} > 0$) is positive. \goodbreak \section{Discussion} \label{s6} In this paper, we developed the general framework to analyze the asymptotic structure of space-time at null infinity in 3 space-time dimensions. We did not have to restrict ourselves to any specific type of matter fields. However, if the matter sources are chosen to be a triplet of scalar fields constituting a non-linear ($SO(2,1)$) $\sigma$-model, the space-times under consideration can be thought of as arising from symmetry reduction of 4-dimensional generalized cylindrical waves, i.e., vacuum solutions to the 4-dimensional Einstein equations with one space-translation isometry. If the source consists of a single zero rest mass scalar field, the translation Killing field in four dimensions is hypersurface orthogonal. Finally, if there is, in addition, a rotational Killing field, the space-times are symmetry reductions of the 4-dimensional Einstein-Rosen waves. The general strategy we adopted was to follow the procedures developed by Bondi and Penrose in 4 dimensions. However, we found that due to several peculiarities associated with three dimensions, those procedures have to be modified significantly. A number of unexpected difficulties arise and the final framework has several surprising features. This is in contrast with the situation in higher dimensions where the framework is likely to be very similar to that in 4 dimensions. The new features can be summarized as follows. First, in 3 dimensions, the space-time metric is flat in any open region where stress-energy vanishes and thus we are forced to consider gravity coupled with matter. To accommodate physically interesting cases, we have to allow matter fields such that the fall-off of the stress-energy tensor at null infinity is significantly weaker than that in 4 dimensions. This in turn means that the metric is dynamical even at infinity; it does not approach a Minkowskian metric even in the leading order. In fact, physically interesting information, such as the energy and energy fluxes, is coded in these leading order, dynamical terms. As a result, the asymptotic symmetry group ${\cal G}$ is enlarged quite significantly. Like the BMS group in 4 dimensions, it admits an infinite dimensional normal subgroup ${\cal S}$ of super-translations. The structure of this sub-group is completely analogous to that of its counterpart in 4 dimensions. However, the quotient, ${\cal G}/{\cal S}$, is {\it significantly} larger. While in 4 dimensions the quotient is the six dimensional Lorentz group, now it is the infinite dimensional group ${\rm Diff}(S^1)$ of diffeomorphisms of a circle. Furthermore, whereas the BMS group admits a preferred (4-dimensional) group of translations, ${\cal G}$ does not. 
To select translations, one has to impose additional conditions, which in some ways are analogous to the conditions needed in 4 dimensions to extract a preferred Poincar\'e subgroup of the BMS group. We imposed these by demanding that there should exist a conformal frame in which the field $\tilde{B}$ tends to a constant as one approaches $i^o$ along $\it I$. This condition is automatically satisfied in axi-symmetric space-times. We saw that, in a generic situation, it selects a unique conformal frame (up to constant rescalings which can be removed by a normalization condition) and we can then select a preferred time translation in ${\cal S}$. If the past limit of the $\it I$-energy is zero, it selects a 2-parameter family of frames ---the analogs of Bondi frames in 4 dimensions. In this case, we can select a 3-dimensional sub-group of translations from ${\cal S}$. Finally, given any cross-section $C$ of $\it I$, we associated with it energy, $E[C]$, as well as a super-momentum $P_{\tilde{T}}[C]$. The former is a scalar and has several properties that one would expect energy to have. The latter is a linear map from the space of super-translations to reals and may arise, in an appropriate Hamiltonian formulation based on $\it I$, as the generator of canonical transformations corresponding to super-translations. These results refer to 3-dimensional general relativity coupled to arbitrary matter fields. However, as noted above, if the matter fields are chosen appropriately, we can regard the 3-dimensional system as arising from a symmetry reduction of 4-dimensional vacuum general relativity by a space-translation Killing field. (One can also consider 4-dimensional general relativity coupled to suitable matter. Then, one acquires additional matter fields in 3 dimensions.) In this case, the energy $E[C]$ (or the super-momentum $P_{\tilde{T}}[C]$) associated with a cross-section $C$ of 3-dimensional null infinity represents the energy (or super-momentum) per unit length (along the symmetry axis) in four dimensions. Thus, the 3-dimensional results have direct applications to 4-dimensional general relativity as well. In addition, as we will see in the companion paper \cite{16}, the analysis of the asymptotic behavior of fields in 3 dimensions can also be used to shed light on the structure of null infinity in 4 dimensions. There are a number of technical issues that remain open. First, as indicated in Sec.\ref{s5.1}, it is desirable to find the precise conditions under which the past limit of $E[C]$ yields the total Hamiltonian \cite{10}. A second important issue is that of positivity of $E[C]$. For the total Hamiltonian, this was established \cite{10} using a suitable modification of Witten's spinorial argument in 4 dimensions. Can this argument be further modified to show positivity of $E[C]$? If space-time admits a regular $i^+$, the limit of $E[C]$ as $C$ tends to $i^+$ vanishes. Since the flux is positive, this implies that $E[C]$ is positive on every cross-section. However, in the general case, it is not a priori clear that in the Bondi-type frame, $\tilde{B}$ will not exceed $1/2$, making $E[C]$ negative on some cross-section. Next, in the case when the matter fields admit initial data of compact support, space-time is flat near $i^o$. In this case, it should be possible to select a preferred 1-parameter sub-group of rotations in ${\cal G}$ and define angular momentum. 
Finally, in the case when $i^+$ is regular, one would expect that, as in Minkowski space, there exists a 2-parameter family of Bondi-type conformal frames in which $\tilde{B}$ tends to a constant at $i^+$. It is not a priori clear whether the Bondi-type frame selected by the behavior of $\tilde{B}$ at $i^o$ is included in the family selected at $i^+$. If the space-time is axi-symmetric, the answer is in the affirmative. It would be interesting to investigate what happens in the general case. The present framework provides a natural point of departure for constructing an $S$-matrix theory both classically and, especially, quantum mechanically. 3-dimensional quantum gravity without matter fields is fully solvable, but the solution is trivial in the asymptotically flat case. When we bring in matter, we have a genuine field theory which is diffeomorphism invariant. If the matter fields are suitably restricted, the theories are equivalent to the reduction of 4-dimensional general relativity (or of 10-dimensional string theories). Quantization of such theories should shed considerable light on the conceptual problems of non-perturbative quantum gravity. As a first step towards quantization, one might use ideas from the asymptotic quantization scheme introduced in 4 dimensions \cite{21}. Since the Lorentz sub-groups are now replaced by the ${\rm Diff}(S^1)$ sub-groups of ${\cal G}$ and since ${\rm Diff}(S^1)$ admits interesting representations (with non-zero central charges), the asymptotic quantum states would now have interesting, non-trivial sectors. Secondly, this quantization would also lead to ``fuzzing'' of space-time points along the lines of Ref. \cite{22}. To see this, recall first that the light cone of each space-time point gives rise to a ``cut'' of $\it I$ (which, in general, is quite complicated). Thus, given $\it I$ and these light cone cuts, one can ``recover'' space-time points in an operational way. Now, in a number of cases with scalar field sources --including of course the Einstein-Rosen waves-- one expects the initial-value problem based on $\it I$ to be well-posed and the classical $S$-matrix to be well-behaved. In such cases, it should be possible to express the light cone cuts on $\it I$ directly in terms of the data of the scalar field on $\it I$. Now, in the quantum theory, the scalar field on $\it I$ is promoted to an operator-valued distribution and, given any quantum state, one only has a probability distribution for the scalar field to assume various values. This immediately implies that one would also have only probability distributions for light cone cuts, i.e., for points of space-time. This approach may well lead one to a non-commutative picture of space-time geometry. \bigskip\goodbreak {\bf Acknowledgements:} AA and JB would like to thank the Max-Planck-Institute for Gravitational Physics for its kind hospitality. AA was supported in part by the NSF grants 93-96246 and 95-14240 and by the Eberly Research Fund of Penn State University. JB was supported in part by the grants GACR--202/96/0206 and GAUK--230/1996 of the Czech Republic and the Charles University, and by the US-Czech Science and Technology grant 92067. \bigskip\goodbreak
\section{Introduction} \label{sec:intro} We are interested in modelling polarized radiative transfer within the magnetized plasma around black holes. Evaluations of transfer coefficients date back as far as \citet{westfold_polarization_1959}. A summary of early work on emission and absorption coefficients is given by \citet{ginzburg_cosmic_1965} \textcolor{black}{(see also \citet{Fleishman_2010})}. Recently, \citet{dexter_public_2016} and \citet{moscibrodzka_ipole_2018} have used polarized transfer coefficients to model emission from the accretion flows of black holes in their respective ray-tracing codes, \texttt{GRTRANS} and \texttt{ipole}, and these have been used in modeling Event Horizon Telescope observations of M87 (\citetalias{PaperVII}, \citetalias{PaperVIII}). Polarized intensity is described by the Stokes vector $I_S = \{I, Q, U,V\}$, where Stokes I is the total intensity, Q and U describe linear polarization, and V describes circular polarization. Polarized radiative transfer can be described by a set of emission, absorption, and Faraday mixing coefficients in the Stokes basis. The radiative transfer equation in the Stokes basis is \begin{linenomath} \begin{equation} \frac{d}{ds}I_S = j_S-M_{ST}I_T, \end{equation} \end{linenomath} where $j_S$ is a vector containing the emission coefficients in the Stokes basis, \textcolor{black}{$ds$ is the path length, $S$ and $T$ are indices of the Stokes vector,} and $M_{ST}$ is the Mueller matrix: \begin{linenomath} \begin{equation} M_{ST} \equiv \begin{pmatrix} \alpha_I & \alpha_Q & \alpha_U & \alpha_V\\ \alpha_Q & \alpha_I & \rho_V & -\rho_U \\ \alpha_U & -\rho_V & \alpha_I & \rho_Q\\ \alpha_V & \rho_U & -\rho_Q & \alpha_I \end{pmatrix}. \end{equation} \end{linenomath} Here $\alpha_S$ and $\rho_S$ are absorption and Faraday mixing coefficients, respectively, in the Stokes basis. We use a Cartesian coordinate system in the calculation of the transfer coefficients. We set $\hat{z}$ parallel to the magnetic field $\mathbf{B}$. The observer angle $\theta$ is the angle between $\mathbf{B}$ and the photon wavevector $\mathbf{k}$, which we choose to lie in the $x-z$ plane. Note that we have chosen a coordinate system (in the plasma rest frame) such that all Stokes U coefficients are zero. Other recent works, such as \citet{huang_faraday_2011} and \citet{dexter_public_2016}, define $\mathbf{k}$ to lie in the $y-z$ plane. This difference in convention requires one to change the sign of all Stokes Q and U coefficients when converting between our coefficients and theirs. The signs of the Stokes I and V coefficients are the same in the two coordinate systems. We use cgs-Gaussian units throughout. The frequency of the photon is $\nu = c k/(2\pi)$, where $k$ is the magnitude of the wavevector. We are often interested in coefficients' dependence on the ratio of the frequency to the cyclotron frequency, $\nu/\nu_c$, where $\nu_c = eB/(2\pi m_ec) \approx 2.8\times 10^6 \, B$~Hz for $B$ in Gauss. Here $B$ is the magnitude of the magnetic field, $e$ is the elementary charge, and $m_e$ is the electron mass. For an isotropic electron distribution, the transfer coefficients depend solely on the distribution of electron Lorentz factors.\footnote{We do not consider anisotropic distribution functions. See \cite{Fleishman_2003} for a discussion.} \citet{Melrose_1991} provide a procedure for calculating $j_S$ and $\alpha_S$ from the distribution function. This procedure is implemented by \citet{leung_numerical_2011} in the \harmony{} code for a relativistic thermal distribution. 
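Before turning to how the coefficients themselves are computed, it may help to make their role concrete. The short script below integrates the transfer equation above along a single ray with the coefficients held constant. It is only a schematic sketch in Python, not an excerpt from \texttt{GRTRANS}, \texttt{ipole}, or any other code discussed here, and the numerical values assigned to $j_S$, $\alpha_S$, and $\rho_S$ are arbitrary placeholders rather than physical fits.
\begin{verbatim}
# Schematic sketch (not from GRTRANS, ipole, or symphony): integrate the
# polarized transfer equation dI_S/ds = j_S - M_ST I_T along one ray,
# holding the coefficients fixed.  All numerical values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

# Emission coefficients (j_I, j_Q, j_U, j_V); j_U = 0 in this Stokes basis.
j = np.array([1.0e-20, 3.0e-21, 0.0, 5.0e-22])

# Absorption and Faraday mixing coefficients (alpha_U = rho_U = 0 here).
aI, aQ, aU, aV = 2.0e-5, 6.0e-6, 0.0, 1.0e-6
rQ, rU, rV = 4.0e-5, 0.0, 8.0e-4

# Mueller matrix M_ST, arranged exactly as in the equation above.
M = np.array([[aI,  aQ,  aU,  aV],
              [aQ,  aI,  rV, -rU],
              [aU, -rV,  aI,  rQ],
              [aV,  rU, -rQ,  aI]])

def rhs(s, I):
    # Right-hand side of dI_S/ds = j_S - M_ST I_T.
    return j - M @ I

sol = solve_ivp(rhs, (0.0, 1.0e5), np.zeros(4), rtol=1e-8, atol=1e-30)
print("Stokes (I, Q, U, V) at the end of the ray:", sol.y[:, -1])
\end{verbatim}
For constant coefficients the Stokes vector relaxes toward the formal solution $I_S = (M^{-1})_{ST}\, j_T$ at large optical depth, which provides a convenient sanity check on the integration.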
\citet{pandya_polarized_2016}, henceforth P16, introduced a simplified code, \symphony{}, which improves the accuracy and speed of the numerical integration. In addition to the thermal distribution \symphony{} also provides coefficients for power-law and \kdf{} distributions. Absorption and Faraday mixing coefficients are linearly related to components of the susceptibility tensor $\chi_{ij}$ (eqs. \ref{eqn:alphaSusceptibility}-\ref{eqn:rhoSusceptibility}). \citet{huang_faraday_2011} describe a procedure and provide a Mathematica script to evaluate components of the susceptibility tensor for both a thermal and a power-law electron distribution function. \citet{pandya_numerical_2018}, henceforth P18, extend the \symphony{} code to compute mixing and absorption coefficients via the susceptibility tensor method for the thermal, power-law, and \kdf{} distributions. The first goal of this paper is to review, update, and correct prior work. For example, we modify the sign of some transfer coefficient fits presented in P16 and P18 so that they are consistent with IEEE/IAU convention. Second, we modify \symphony's method for evaluating Faraday conversion coefficients to permit efficient evaluation over a larger range of parameters. Third, we provide new fitting formulae for Faraday mixing coefficients for \kdf{} distributions. The plan of the paper is as follows. Section 2 summarizes the mathematical construction of the integrals used to evaluate transfer coefficients for both the radiative transfer and the susceptibility tensor methods. It also describes the distribution functions for which we compute transfer coefficients: thermal (Maxwell-Juettner) distributions; power-law distributions; and \kdf{} distributions. Section 3 describes the numerical integration methods implemented in \symphony. Section 4 presents updated fitting formulae for absorptivity and emissivity coefficients as well as new fitting formulae for Faraday mixing coefficients for \kdf{} distributions. \section{Mathematical Construction} \subsection{Radiative Transfer} P16 use the radiative transfer equation in the Stokes basis to solve for polarized emissivity and absorption coefficients. As in \citet{leung_numerical_2011} and P16, the polarized emissivity coefficients are given by \begin{equation} \label{eqn:emissivityVector} j_S = \begin{pmatrix} j_I \\ j_Q \\ j_U \\ j_V \end{pmatrix} = \frac{2\pi e^2\nu^2}{c}\int d^3p\,\,\,(f)\,\,\sum_{n = 1}^{\infty}\delta(y_n)K_S, \end{equation} and the absorptivity coefficients for an isotropic electron distribution function are given by \begin{equation} \label{eqn:absorptivityVector} \alpha_S = \begin{pmatrix} \alpha_I \\ \alpha_Q \\ \alpha_U \\ \alpha_V \end{pmatrix} = -\frac{\pi e^2}{m_ec}\int d^3p \,\,\left(\frac{\partial f}{\partial \gamma}\right)\,\, \sum_{n = 1}^{\infty}\delta(y_n)K_S, \end{equation} where $f$ is a normalized electron distribution function, $\gamma$ is the electron Lorentz factor, and $\delta$ is the Dirac delta function. Eqs. \ref{eqn:thermal_f}-\ref{eqn:kappa_f} give $f$ for the thermal, power-law, and kappa distributions. Here, $K_S$ depends on the Stokes parameter. Complete definitions of $K_S$ and $y_n$ are given in \textcolor{black}{Appendix \ref{Appendix_A}}. \symphony{} calculates emission and absorption coefficients by numerically evaluating the three-dimensional integral over momentum space and summing over harmonics, $n$. 
The numerical methods used in calculating these transfer coefficients are summarized in section \ref{Radiative_Transfer_Numerical} of this paper. P16 provide a full description of the method.
\subsection{The Susceptibility Tensor} We calculate the plasma susceptibility 3-tensor $\chi_{ij}$ in the $x$, $y$, $z$ basis, where $z$ is aligned with ${\mathbf B}$. \textcolor{black}{A full statement of the susceptibility tensor is given in Appendix \ref{Appendix_B}}. The definition of the susceptibility tensor and its relation to the transfer coefficients is derived in Section 2, Section 3, and Appendix A.1 of P18. Absorption and Faraday mixing coefficients in the Stokes basis are related to the susceptibility tensor components by:
\begin{equation} \label{eqn:alphaSusceptibility} \alpha_S = \frac{\pi\nu}{c} \begin{cases} \text{Im}(\cos^2(\theta)\chi_{xx}-2\cos(\theta)\sin(\theta)\chi_{xz}+\sin^2(\theta)\chi_{zz}+\chi_{yy}), & \text{(Stokes I)}\\ \text{Im}(\cos^2(\theta)\chi_{xx}-2\cos(\theta)\sin(\theta)\chi_{xz}+\sin^2(\theta)\chi_{zz}-\chi_{yy}), & \text{(Stokes Q)}\\ \text{Im}(\cos(\theta)\chi_{yx}-\sin(\theta)\chi_{yz}+\cos(\theta)\chi_{xy}-\sin(\theta)\chi_{zy}) = 0, & \text{(Stokes U)}\\ \text{Re}(\cos(\theta)\chi_{xy}-\sin(\theta)\chi_{zy}-\cos(\theta)\chi_{yx}+\sin(\theta)\chi_{yz}), & \text{(Stokes V)} \end{cases} \end{equation}
and
\begin{equation} \label{eqn:rhoSusceptibility} \rho_S = \frac{\pi\nu}{c} \begin{cases} \text{Re}(\chi_{yy} - \cos^2(\theta)\chi_{xx} + 2\cos(\theta)\sin(\theta)\chi_{xz} - \sin^2(\theta)\chi_{zz}), & \text{(Stokes Q)}\\ \text{Re}(\cos(\theta)\chi_{yx}-\sin(\theta)\chi_{yz}+\cos(\theta)\chi_{xy}-\sin(\theta)\chi_{zy}) = 0, & \text{(Stokes U)}\\ \text{Im}(\cos(\theta)\chi_{xy}-\sin(\theta)\chi_{zy}-\cos(\theta)\chi_{yx}+\sin(\theta)\chi_{yz}), & \text{(Stokes V)}. \end{cases} \end{equation}
Here $\theta$ is the angle between the magnetic field ($z$-axis) and the wavevector, $\mathbf{k}$, which lies in the $x$-$z$ plane. The Stokes basis in the plasma frame is constructed so that the projection of $\mathbf{B}$ in the polarization plane is along the $U>0$ axis, which sets $j_U$ to 0.
As mentioned in P18, applying the Onsager relations to the susceptibility tensor (by requiring a time-reversal invariance of the microscopic dynamics) implies that $\chi_{xy}=-\chi_{yx}$, $\chi_{zy}=-\chi_{yz}$, and $\chi_{xz}=\chi_{zx}$. These relations imply that $\alpha_U$ and $\rho_U$ are 0, as shown in Eqs. \ref{eqn:alphaSusceptibility} and \ref{eqn:rhoSusceptibility}. The Onsager relations may also be used to show that $\alpha_I$, $\alpha_Q$, and $\rho_Q$ are symmetric and $\alpha_V$ and $\rho_V$ are antisymmetric under a sign change of either the magnetic field or the particle's charge. Similarly, it can be shown from the definition of $K_S$, given in \textcolor{black}{equation \ref{eqn:K_S}}, that $j_I$ and $j_Q$ are symmetric and $j_V$ is antisymmetric under one of these sign changes.
The susceptibility tensor components are evaluated in terms of a four-dimensional integral: three momentum space coordinates and a time coordinate $\tau$ that describes the unperturbed history of the electron orbit. These integrals are performed over the derivative with respect to $\gamma$ of the scaled electron distribution function, $d\tilde{f}/d\gamma$. Eqs. \ref{eqn:df_dgamma_thermal}-\ref{eqn:df_dgamma_kappa} provide $d\tilde{f}/d\gamma$ for the thermal, power-law, and kappa distributions. P18 evaluate two of the momentum-space integrals analytically.
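As an illustration of these projections (and of the vanishing Stokes U coefficients), the following minimal R sketch maps a set of susceptibility components onto the Stokes-basis coefficients exactly as in equations \ref{eqn:alphaSusceptibility} and \ref{eqn:rhoSusceptibility}; the numerical values are placeholders and are not outputs of \symphony{}.
\begin{verbatim}
# chi is a named list of complex susceptibility components in the (x, y, z)
# basis; theta is the field--wavevector angle, nu the frequency in Hz,
# cc the speed of light in cgs units.
chi_to_stokes <- function(chi, theta, nu, cc = 2.99792458e10) {
  ct <- cos(theta); st <- sin(theta)
  pre <- pi * nu / cc
  sym <- ct^2 * chi$xx - 2 * ct * st * chi$xz + st^2 * chi$zz  # recurring combination
  rot <- ct * chi$xy - st * chi$zy - ct * chi$yx + st * chi$yz # recurring combination
  list(alpha_I = pre * Im(sym + chi$yy),
       alpha_Q = pre * Im(sym - chi$yy),
       alpha_U = 0,   # vanishes by the Onsager relations
       alpha_V = pre * Re(rot),
       rho_Q   = pre * Re(chi$yy - sym),
       rho_U   = 0,   # vanishes by the Onsager relations
       rho_V   = pre * Im(rot))
}

# Example with placeholder (unphysical) components:
chi <- list(xx = 1e-8 + 2e-9i,  yy = 9e-9 + 1.5e-9i, zz = 8e-9 + 1e-9i,
            xy = 3e-10 + 4e-10i, yx = -3e-10 - 4e-10i,
            xz = 2e-10 + 1e-10i, zy = 5e-11 + 2e-11i, yz = -5e-11 - 2e-11i)
coeffs <- chi_to_stokes(chi, theta = pi/3, nu = 230e9)
\end{verbatim}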
The remaining integrals are over $\gamma$ and $\tau$. Section \ref{Suscept_Tensor_Method} summarizes the methods from P18 that \symphony{} uses to evaluate these integrals and details a new method for performing this integration for $\rho_Q$. \subsection{Electron Distributions} In order to evaluate transfer coefficients it is necessary to integrate over the distribution of electrons in three-dimensional momentum space: \begin{equation} f \equiv \frac{dn_e}{d^3p} = \frac{1}{m_e^3c^3\gamma^2\beta}\frac{dn_e}{d\gamma d\cos\xi d\phi}. \end{equation} Here $\gamma$ is the electron Lorentz factor, $\xi$ is the pitch angle, $\beta = v/c$, where $v$ is the electron velocity, and $\phi$ is the gyrophase. As in P16 and P18, we consider the relativistic thermal, the isotropic power-law, and the isotropic \kdf{} electron distributions. The thermal distribution is \begin{equation} \label{eqn:thermal_f} \frac{dn_e}{d\gamma d\cos\xi d\phi} = \frac{n_e}{4\pi \Theta_e}\frac{\gamma(\gamma^2-1)^{1/2}}{K_2(1/\Theta_e)}\exp\left({-\frac{\gamma}{\Theta_e}}\right), \indent \text{(thermal)} \end{equation} where $\Theta_e \equiv k_B T/m_e c^2$ is the dimensionless electron temperature, $n_e$ is the number density of electrons, and $K_2$ is a modified Bessel function of the second kind. The power-law distribution is \begin{equation} \frac{dn_e}{d\gamma d\cos\xi d\phi} = \frac{n_e(p-1)}{4\pi (\gamma_{min}^{1-p} - \gamma_{max}^{1-p})}\gamma^{-p} \indent \text{for } \gamma_{min}\leq\gamma\leq\gamma_{max}, \indent \text{(power-law)} \end{equation} where $p$ is the power-law index and $\gamma_{max}$ and $\gamma_{min}$ are the maximum and minimum Lorentz factors. The \kdf{} distribution is \begin{equation} \label{eqn:kappa_f} \frac{dn_e}{d\gamma d\cos\xi d\phi} = \frac{n_eN_\kappa}{4\pi}\gamma (\gamma^2-1)^{1/2}\left(1+\frac{\gamma - 1}{\kappa w} \right)^{-(\kappa+1)}, \indent \text{(kappa)} \end{equation} where $\kappa$ is related to the high-energy power-law index, $w$ describes the width of the distribution, and $N_\kappa$ is a normalization factor that is evaluated numerically in \symphony{} to ensure that the integral of the distribution function over gamma is unity. The absorptivities and rotativities depend on the derivatives $d\tilde{f}/d\gamma$, where $\tilde{f} \equiv m^3c^3f/n_e$. These derivatives are: \begin{equation} \label{eqn:df_dgamma_thermal} \frac{d\tilde{f}}{d\gamma} = -\frac{\exp{(-\gamma/\Theta_e)}}{4\pi \Theta_e^2K_2(1/\Theta_e)}, \indent \text{(thermal)} \end{equation} \begin{equation} \frac{d\tilde{f}}{d\gamma} = -\frac{(p-1)(-1+2\gamma^2+p(\gamma^2-1))}{4\pi (\gamma_{min}^{1-p}- \gamma_{max}^{1-p})\beta(\gamma^2-1)}\gamma^{-3-p}, \indent \text{(power-law)} \end{equation} and \begin{equation} \label{eqn:df_dgamma_kappa} \frac{d\tilde{f}}{d\gamma} = -\frac{N_\kappa(1+\kappa)}{4\pi\kappa w}\left(1+\frac{\gamma - 1}{\kappa w}\right)^{-2-\kappa}. \indent \text{(kappa)} \end{equation} Notice that this corrects a typographical error in P18's $d\tilde{f}/d\gamma$ for the power-law distribution. \section{Numerical Methods} \subsection{Integration of Radiative Transfer Equations} \label{Radiative_Transfer_Numerical} Here we briefly summarize the numerical scheme used by \symphony{} to compute emission and absorption coefficients from equations \ref{eqn:emissivityVector} and \ref{eqn:absorptivityVector}. More detail is provided in P16. 
\symphony{} is based on \harmony{} (\citet{leung_numerical_2011}), with a simpler code organization and integration technique improvements that permit accurate evaluation of coefficients for Stokes V, and extension to larger $\nu/\nu_c$. As in \citet{leung_numerical_2011}, P16 integrate equations \ref{eqn:emissivityVector} and \ref{eqn:absorptivityVector} over $\cos\xi$ and make the substitution $\cos\xi = (\nu-n\nu_c/\gamma)/(\nu\beta\cos\theta)$. This reduces the integral and sum needed to compute emissivities and absorptivities to the form \begin{equation} \int_{\gamma_-}^{\gamma_+}d\gamma\sum^{\infty}_{n=n_-}I(n,\gamma), \end{equation} where $I$ is the integrand, which depends on the distribution function and the Stokes parameter, $\gamma_\pm = (r \pm |\cos\theta|(r^2-\sin^2\theta)^{1/2})/\sin^2\theta$, $r = n\nu_c/\nu$, and $n_- = (\nu|\sin\theta|)/\nu_c$. Here $n_-$ is rounded up to become an integer.
\symphony{} first calculates the $\gamma$ integral using the adaptive Gaussian quadrature routines \texttt{QAG} and \texttt{QAGIU} (from the GNU Scientific Library, GSL). The $\gamma$ integration range is chosen based on accurate estimates for the location and width of the integrand's peak in $\gamma$ space. This ensures that the quadrature captures the peak at large $\nu/\nu_c$, where the integrand is sharply peaked. \symphony{} then directly computes the first 30 terms of the $n$ summation and approximates the remaining terms $(n \geq n_-+30)$ as an integral. For Stokes V, the $\gamma$ integral is split into a positive part and a negative part with slightly different absolute areas on either side of the zero at $\gamma_0 = n\nu_c/(\nu\sin^2\theta)$. The positive and negative contributions are summed over $n$ separately and then combined to avoid numerical error due to cancellation.
\subsection{Susceptibility Tensor Method} \label{Suscept_Tensor_Method} To calculate absorption and rotation coefficients from components of the susceptibility tensor ($\chi_{ij}$) we numerically evaluate the integrals over $\tau$ and $\gamma$. These integrals have the form \begin{equation} \chi_{ij} = \int_0^{\infty}d\tau\int_1^\infty d\gamma \frac{d\tilde{f}}{d\gamma} I_{ij}(\gamma, \tau, \nu/\nu_c, \mathbf{k}), \end{equation} where $I_{ij}$ is the integrand which depends on the component of the susceptibility tensor being calculated. \textcolor{black}{Appendix \ref{Appendix_B}} provides a complete statement of these two-dimensional integrals \textcolor{black}{as presented in P18}.
Following P18 and integrating over $\tau$ first, the $\gamma$ integrand reduces to a smooth, well-behaved function. For $\alpha_I$, $\alpha_V$, $\alpha_Q$, and $\rho_V$ we find the relevant components of the susceptibility tensor individually before combining them to obtain the transfer coefficients as in equations \ref{eqn:alphaSusceptibility} and \ref{eqn:rhoSusceptibility}. For $\rho_Q$, however, the combined susceptibility tensor components nearly cancel at large values of $\nu/\nu_c$, so small errors in individual components lead to large fractional errors in $\rho_Q$. We minimize the fractional error by combining the integrands of the susceptibility tensor terms {\em before} integrating. The tau integrand for $\rho_Q$ ($\equiv dK_{\rho_Q}$) oscillates rapidly and the integral is slow to converge. We are able to numerically evaluate the integral, however, by using an approximate form at large $\tau$ and the fact that the dominant contribution to the integral is at small $\tau$.
Following \textcolor{black}{Appendix \ref{Appendix_B}}, the tau integrand oscillates at three distinct frequencies : $\omega_+(\tau)$, $\omega_-(\tau)$, and $\omega_{env} = \omega_c/\omega$. Here, \begin{equation} \omega_{\pm}(\tau) = \gamma \pm \frac{A(\tau)}{\tau}, \end{equation} where \begin{equation} A(\tau) = \sqrt{\alpha^2+\delta^2}, \end{equation} \begin{equation} \alpha = \gamma\beta\cos(\theta)\tau, \end{equation} and \begin{equation} \delta = \frac{2\gamma\beta\omega\sin(\theta)}{\omega_c}\sin\left(\frac{\omega_c}{2\omega}\tau\right). \end{equation} The integral's dependence on $\omega_+$ and $\omega_-$ comes from the multiplication of sinusoidal factors with phases $A(\tau)$ and $\gamma\tau$ present in \textcolor{black}{equations \ref{eqn:Analytic_I_1_0}-\ref{eqn:Analytic_I_3_0} and \ref{eqn:Kernel_Suscept} of Appendix \ref{Appendix_B}}, respectively. The integrand is multiplied by an envelope function that has frequency $\omega_{env}$ and decays like $\tau^{-1}$. Integrating in steps of approximately $2\pi / \omega_+$ eliminates the dependence on $\omega_+$. When $\tau \gtrsim \gamma\omega/\omega_c$, the integral can then be modelled as \begin{equation} K = \int_0^\tau dK_{\rho_Q} \approx C + D\frac{\sin(\omega_{-}(\tau)\tau)}{\tau}, \end{equation} where $C$ is the asymptotic value of the integral as $\tau \to \infty$ that we wish to evaluate and $D$ is an unimportant constant related to the amplitude of the envelope function. When $\tau \gtrsim \gamma\omega/\omega_c$ and $\tau \gg 1$, the relative decay in the integral and the relative change in $\omega_-$ are small over a short interval in $\tau$. We can therefore approximate the integral's form over a short interval in $\tau$ as, \begin{equation} K = C + E\sin(\omega_{-}\tau), \end{equation} where $E$ is another unimportant constant which has absorbed the approximately constant factor of $1/\tau$. We then may calculate the asymptotic value of the integral through numerical evaluations of the integral and the second derivative of the integral using \begin{equation} K'' = -\omega_{-}^2E\sin(\omega_{-}\tau), \end{equation} where each prime denotes a derivative with respect to $\tau$. Then, \begin{equation} C = K + \frac{K''}{\omega_{-}^2}. \end{equation} We evaluate $K$, $K''$, and $\omega_-$ numerically using five equally spaced (by steps of $\Delta\tau = 2\pi / \omega_+$) evaluations of $K$ ($K_{-2}$-$K_{2}$): \begin{equation} K'' \approx \frac{K_{1}-2K_{0}+K_{-1}}{\Delta\tau^2}, \end{equation} and \begin{equation} \omega_-^2 = -\frac{K'''}{K'} \approx \frac{K''_{-1}-K''_{1}}{K_{1}-K_{-1}} \approx \frac{K_{-2}-2K_{-1}+2K_{1}-K_{2}}{\Delta\tau^2(K_{1}-K_{-1})}. \end{equation} Then \begin{equation} C \approx \frac{K_{1}^2-K_{-1}^2+(K_{0}(K_{-2}-K_{2}))}{K_{-2}-2K_{-1}+2K_{1}-K_{2}}. \end{equation} Fig. \ref{fig:1} compares Faraday conversion coefficients calculated using this method to the thermal fit for $\rho_Q$ presented in \citet{shcherbakov_propagation_2008}. Throughout this paper we define relative error as \begin{equation} \text{Relative Error} = \frac{\text{Fitted Value}}{\text{Numerically Calculated Value}} - 1. \end{equation} \begin{figure} \centering \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoQThermal.pdf}% \label{fig:1a}% }\qquad \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoQThermalError.pdf}% \label{fig:1b}% } \caption{Comparison of \symphony{}'s numerically evaluated $\rho_Q$ to the thermal $\rho_Q$ fit presented in \citet{shcherbakov_propagation_2008}. 
Panel (a) is a log-log scale plot of $|\rho_Q|$ versus $\nu/\nu_c$ for a thermal ($\Theta_e = 10$, $\theta = 60^{\circ}$) distribution. The Shcherbakov fit is represented by the solid line and \symphony{}'s numerical evaluations are represented by the marks. The sign change in $\rho_Q$ is represented in panel (a) by the change from dashed to dotted linestyles. Relative error between the two versus $\nu/\nu_c$ on a log scale is shown in panel (b).} \label{fig:1} \end{figure} \section{Updated Fits for Transfer Coefficients} Here we provide fitting formulae for the complete set of absorptivity and emissivity coefficients in the Stokes basis. Unless stated otherwise the fitting formulae are identical to those in P16. The emissivity has the general form \begin{equation} j_S = \frac{n_e e^2 \nu_c}{c}J_S\left(\frac{\nu}{\nu_c},\theta\right), \end{equation} and the absorptivity has the general form \begin{equation} \alpha_S = \frac{n_e e^2 }{\nu m_e c}A_S\left(\frac{\nu}{\nu_c},\theta\right). \end{equation} Here $J_S$ and $A_S$ are dimensionless functions of the distribution specific parameters: $\Theta_e$ for thermal, $p$, $\gamma_{min}$, and $\gamma_{max}$ for power-law, and $w$ and $\kappa$ for \kdf{} distributions. \subsection{Thermal Distribution} We have replaced our $J_V$ fit for a thermal distribution with the fit originally presented in \citet{dexter_public_2016} (eqs. A14 and A20 in Appendix A). We approximate this fit with rational coefficients and recast it in our notation. The dimensionless emissivity is \begin{equation} \label{eqn:J_S_thermal} J_S = \exp{(-X^{1/3})} \times \begin{cases} \frac{\sqrt{2}\pi}{27}\sin{\theta}(X^{1/2}+2^{11/12}X^{1/6})^2, & \text{(Stokes I)}\\ -\frac{\sqrt{2}\pi}{27}\sin{\theta} \left(X^{1/2}+\frac{7\Theta_e^{24/25}+35}{10\Theta_e^{24/25}+75}2^{11/12}X^{1/6}\right)^2, & \text{(Stokes Q)}\\ 0, & \text{(Stokes U)}\\ \frac{1}{\Theta_e}\cos\theta\left(\frac{\pi}{3}+\frac{\pi}{3}X^{1/3}+(\frac{2}{300})X^{1/2}+(\frac{2\pi}{19})X^{2/3}\right). & \text{(Stokes V)} \end{cases} \end{equation} Here $X = \nu/\nu_s$, where $\nu_s \equiv (2/9)\nu_c\sin{\theta}\Theta_e^2$. For a thermal distribution we can use Kirchoff's law to obtain the absorptivity: \begin{equation} \label{eqn:Kirchoffs} j_S -\alpha_S B_\nu = 0, \end{equation} where $B_\nu \equiv (2h\nu^3/c^2)[\exp(h\nu/kT)-1]^{-1}$ is the Planck function. Equation \ref{eqn:Kirchoffs} corrects an error in equation 25 of P16. The dimensionless absorptivity is then \begin{equation} \label{eqn:KirchoffsDimensionless} A_S = J_S\frac{m_ec^2\nu_c}{2h\nu^2}(e^{h\nu / (kT)} - 1). \end{equation} Equations \ref{eqn:Kirchoffs} and \ref{eqn:KirchoffsDimensionless} (eqs. 25 and 32 in P16) were incorrectly combined in equation 32 of P16. Here it should be clear that equation \ref{eqn:Kirchoffs} applies to $j_S$ and $\alpha_S$, while equation \ref{eqn:KirchoffsDimensionless} applies to the dimensionless $J_S$ and $A_S$. \citet{shcherbakov_propagation_2008} provides fitting formulae for thermal rotativities that maintain accuracy across high frequencies ($X \gg 1$) and high temperatures ($\Theta_e \gtrsim 1$). \citet{dexter_public_2016} modifies these expressions to maintain accuracy for smaller $\nu$. 
These modified fits, in our notation \textcolor{black}{and sign convention}, are \begin{equation} \rho_Q = \textcolor{black}{-}\frac{n_ee^2\nu_c^2\sin^2\theta}{mc\nu^3} f_m(X) \left[\frac{K_1(\Theta_e^{-1})}{K_2(\Theta_e^{-1})}+6\Theta_e\right], \end{equation} where \begin{equation} f_m(X) = f_0(X) + \left[0.011\exp\left(-1.69X^{-1/2}\right) - 0.003135X^{4/3} \right]\left(\frac{1}{2}[1+\tanh(10\ln(0.6648X^{-1/2}))]\right), \end{equation} with \begin{equation} f_0(X) = 2.011\exp\left(-19.78X^{-0.5175}\right) - \cos\left(39.89X^{-1/2}\right) \exp\left(-70.16X^{-0.6}\right) - 0.011\exp\left(-1.69X^{-1/2}\right), \end{equation} and \begin{equation} \rho_V = \frac{2n_ee^2\nu_c}{mc\nu^2} \frac{K_0(\Theta_e^{-1}) - \Delta J_5(X)}{K_2(\Theta_e^{-1})} \cos\theta, \end{equation} where \begin{equation} \Delta J_5(X) = 0.4379\ln(1+1.3414X^{-0.7515}), \end{equation} and $K_n$ is the modified Bessel function of the second kind and order n. These expressions maintain accuracy for all $X$ where $\nu/\nu_c \gg 1$. \subsection{Power-Law Distribution Fits} The dimensionless emissivities are \begin{linenomath} \begin{align} J_S &= \frac{3^{p/2}(p-1)\sin{\theta}}{2(p+1)(\gamma_{min}^{1-p}-\gamma_{max}^{1-p})} \nonumber \\ &\times \Gamma\left(\frac{3p-1}{12}\right)\Gamma\left(\frac{3p+19}{12}\right)\left(\frac{\nu}{\nu_c\sin\theta}\right)^{-(p-1)/2} \nonumber \\ &\times \begin{cases} 1 & \text{(Stokes I)}\\ -\frac{p+1}{p+7/3} & \text{(Stokes Q)}\\ 0 & \text{(Stokes U)}\\ \frac{171}{250}\frac{p^{49/100}}{\tan\theta}\left(\frac{\nu}{3\nu_c \sin\theta}\right)^{-1/2}& \text{(Stokes V)}. \end{cases} \end{align} \end{linenomath} Note that $J_V$ has changed sign compared to P16 so that it is now consistent with IEEE/IAU conventions. Kirchoff's law cannot be used for non-thermal distributions. P16 fit the dimensionless absorptivities with \begin{linenomath} \begin{align} A_S &= \frac{3^{(p+1)/2}(p-1)}{4(\gamma_{min}^{1-p}-\gamma_{max}^{1-p})} \nonumber \\ &\times \Gamma\left(\frac{3p+2}{12}\right)\Gamma\left(\frac{3p+22}{12}\right)\left(\frac{\nu}{\nu_c\sin\theta}\right)^{-(p+2)/2} \nonumber \\ &\times \begin{cases} 1 & \text{(Stokes I)}\\ -(\frac{17}{500}p-\frac{43}{1250})^{43/500} & \text{(Stokes Q)}\\ 0 & \text{(Stokes U)}\\ \left(\frac{71}{100}p+\frac{22}{625}\right)^{197/500}(\frac{31}{10}(\sin \theta)^{-48/25}-\frac{31}{10})^{64/125}\left(\frac{\nu}{\nu_c \sin \theta}\right)^{-1/2}\text{sgn}({\cos\theta}), & \text{(Stokes V)} \end{cases} \end{align} \end{linenomath} \textcolor{black}{where $\text{sgn}(x)$ is the sign function which extracts the sign of its argument.} This expression corrects a typographical error in the argument of the first gamma function in P16, and we have changed the sign of $A_V$ to be consistent with the IEEE/IAU convention. We have also introduced a factor of $\text{sgn}({\cos\theta})$ to make $J_V$ antisymmetric about $\theta = \pi/2$. \citet{Jones_1977} provide approximate rotativities for the power-law distribution. Their fits, written in our notation, are \begin{equation} \rho_Q = -\rho_\bot \left(\frac{\nu_c\sin\theta}{\nu}\right)^3 \frac{\gamma_{min}^{2-p}}{(p/2)-1} \left[1 - \left(\frac{2\nu_c\gamma_{min}^2\sin\theta}{3\nu}\right)^{p/2-1}\right], \label{eqn:JOrhoQ} \end{equation} \begin{equation} \rho_V = 2\rho_{\bot}\frac{p+2}{p+1} \left(\frac{\nu_c\sin\theta}{\nu}\right)^2 \gamma_{min}^{-(p+1)} \ln{(\gamma_{min})} \cot\theta, \end{equation} where \begin{equation} \rho_\bot = \frac{n_ee^2}{mc\nu_c\sin\theta}(p-1) \left[\gamma_{min}^{1-p} - \gamma_{max}^{1-p}\right]^{-1}. 
\end{equation} These fits are relatively accurate for $\gamma_{min} \lesssim 10^2$ where $\nu/\nu_c \gg 1$. A comparison of the $\rho_Q$ fit in Eq. \ref{eqn:JOrhoQ} to \symphony{}'s numerically evaluated $\rho_Q$ is shown in Fig. \ref{fig:PLCompare}. \begin{figure} \centering \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoQPL.pdf}% \label{fig:3a}% }\qquad \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoQPLError.pdf}% \label{fig:3b}% } \caption{Comparison of \symphony{}'s numerically evaluated $\rho_Q$ to the power-law $\rho_Q$ fit presented in \citet{Jones_1977}. For this range of parameters, $\rho_Q < 0$. Panel (a) is a log-log scale plot of $|\rho_Q|$ versus $\nu/\nu_c$ for a power-law ($p = 3$, $\gamma_{min} = 2$, $\gamma_{max} = 1000$, $\theta = 60^{\circ}$) distribution. The Jones $\&$ Odell fit is represented by the solid line and \symphony{}'s numerical evaluations are represented by the $x$ marks. Relative error between the two versus $\nu/\nu_c$ on a log scale is shown in panel (b).} \label{fig:PLCompare} \end{figure} \subsection{Kappa Distribution Fits} P16 fit absorptivities and emissivities for kappa distributions by separately fitting the high-frequency and low-frequency limits and providing a bridging function between these limits. In terms of $\nu_{\kappa} \equiv \nu_c(w\kappa)^2\sin\theta$ and $X_\kappa \equiv \nu/\nu_\kappa$, the dimensionless emissivities in the low-frequency limit are \begin{linenomath} \begin{align} J_{S,lo} &= X_\kappa^{1/3}\sin(\theta)\frac{4\pi\Gamma(\kappa-4/3)}{3^{7/3}\Gamma(\kappa-2)} \nonumber \\ &\times \begin{cases} 1, & \text{(Stokes I)}\\ \frac{1}{2}, & \text{(Stokes Q)}\\ 0, & \text{(Stokes U)}\\ \left(\frac{3}{4}\right)^2[(\sin \theta)^{-12/5}-1]^{12/25}\frac{\kappa^{-66/125}}{w}X_\kappa^{-7/20}. & \text{(Stokes V)} \end{cases} \end{align} \end{linenomath} The dimensionless emissivities in the high-frequency limit are \begin{linenomath} \begin{align} J_{S,hi} &= X_\kappa^{-(\kappa-2)/2}\sin(\theta)3^{(\kappa-1)/2}\frac{(\kappa-2)(\kappa-1)}{4}\Gamma\left(\frac{\kappa}{4}-\frac{1}{3}\right)\Gamma\left(\frac{\kappa}{4}+\frac{4}{3}\right) \nonumber \\ &\times \begin{cases} 1, & \text{(Stokes I)}\\ \left[\left(\frac{4}{5}\right)^2+\frac{\kappa}{50}\right], & \text{(Stokes Q)}\\ 0, & \text{(Stokes U)}\\ \left(\frac{7}{8}\right)^2[(\sin \theta)^{-5/2}-1]^{11/25}\frac{\kappa^{-11/25}}{w}X_\kappa^{-1/2}. & \text{(Stokes V)} \end{cases} \end{align} \end{linenomath} The emissivity bridging function is \begin{equation} J_S = \begin{cases} (J_{S,lo}^{-x}+J_{S,hi}^{-x})^{-1/x}, & \text{(Stokes I)} \\ -(J_{S,lo}^{-x}+J_{S,hi}^{-x})^{-1/x}, & \text{(Stokes Q)} \\ (J_{S,lo}^{-x}+J_{S,hi}^{-x})^{-1/x}\text{sgn}({\cos\theta}), & \text{(Stokes V)} \\ \end{cases} \end{equation} where \begin{linenomath} \begin{align} \label{eqn:x_A_S} x = \begin{cases} 3\kappa^{-3/2}, & \text{(Stokes I)}\\ \frac{37}{10}\kappa^{-8/5}, & \text{(Stokes Q)}\\ 3\kappa^{-3/2}. & \text{(Stokes V)} \end{cases} \end{align} \end{linenomath} Notice that we have made sign corrections to $J_Q$ and $J_V$ compared to equations 35-37 of P16. We also multiply $J_V$ by an overall factor of $\text{sgn}({\cos\theta})$ to make it antisymmetric about $\theta = \pi/2$. The expression for $x$ for Stokes V has also been updated. 
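For reference, the bridging step itself is compact. The following R sketch assumes the limiting values $J_{S,lo}$ and $J_{S,hi}$ have already been evaluated from the expressions above, and applies the exponents and sign conventions just described; it is an illustration of the bridging formula, not the \symphony{} implementation.
\begin{verbatim}
# Bridge the low- and high-frequency kappa emissivity limits, following the
# bridging function and Stokes-dependent exponents x given above.
bridge_J_kappa <- function(J_lo, J_hi, kappa, stokes = c("I", "Q", "V"),
                           theta = pi/3) {
  stokes <- match.arg(stokes)
  x <- switch(stokes,
              I = 3 * kappa^(-3/2),
              Q = (37/10) * kappa^(-8/5),
              V = 3 * kappa^(-3/2))
  J <- (J_lo^(-x) + J_hi^(-x))^(-1/x)
  switch(stokes,
         I = J,
         Q = -J,                      # sign convention for Stokes Q
         V = J * sign(cos(theta)))    # antisymmetric about theta = pi/2
}
\end{verbatim}
The absorptivity bridging below follows the same pattern with its own exponents $x$.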
The dimensionless absorptivities in the low-frequency limit are \begin{linenomath} \begin{align} A_{S,lo} &= X_\kappa^{-2/3}3^{1/6}\frac{10}{41}\frac{(2\pi)}{(w\kappa)^{10/3-\kappa}}\frac{(\kappa-2)(\kappa-1)\kappa}{3\kappa-1} \nonumber \\ &\times \Gamma\left(\frac{5}{3}\right) {}_{2}F_{1}\left(\kappa-\frac{1}{3}, \kappa+1, \kappa +\frac{2}{3}, -\kappa w\right) \nonumber \\ &\times \begin{cases} 1, & \text{(Stokes I)}\\ \frac{25}{48}, & \text{(Stokes Q)}\\ 0, & \text{(Stokes U)}\\ \frac{77}{100w}[(\sin \theta)^{-114/50}-1]^{223/500}X_\kappa^{-7/20} \kappa^{-7/10}. & \text{(Stokes V)} \end{cases} \end{align} \end{linenomath} The dimensionless absorptivities in the high-frequency limit are \begin{linenomath} \begin{align} A_{S,hi} &= X_\kappa^{-(1+\kappa)/2}\frac{\pi^{3/2}}{3}\frac{(\kappa-2)(\kappa-1)\kappa}{(w\kappa)^3} \nonumber \\ &\times \left(\frac{2\Gamma(2+\kappa/2)}{2+\kappa}-1\right) \nonumber \\ &\times \begin{cases} \left(\left(\frac{3}{\kappa}\right)^{19/4}+\frac{3}{5}\right), & \text{(Stokes I)}\\ \left(21^2\kappa^{-(12/5)^2}+\frac{11}{20}\right), & \text{(Stokes Q)}\\ 0, & \text{(Stokes U)}\\ \frac{143}{10}w^{-116/125}[(\sin \theta)^{-41/20}-1]^{1/2}\left\{13^2\kappa^{-8}+\frac{13}{2500}\kappa-\frac{263}{5000}+\frac{47}{200\kappa}\right\}X_\kappa^{-1/2}. & \text{(Stokes V)} \end{cases} \end{align} \end{linenomath} The absorptivity bridging function is \begin{equation} A_S = \begin{cases} (A_{S,lo}^{-x}+A_{S,hi}^{-x})^{-1/x}, & \text{(Stokes I)} \\ -(A_{S,lo}^{-x}+A_{S,hi}^{-x})^{-1/x}, & \text{(Stokes Q)} \\ (A_{S,lo}^{-x}+A_{S,hi}^{-x})^{-1/x}\text{sgn}({\cos\theta}), & \text{(Stokes V)} \\ \end{cases} \end{equation} where \begin{linenomath} \begin{align} x = \begin{cases} \left(-\frac{7}{4}+\frac{8}{5}\kappa\right)^{-43/50}, & \text{(Stokes I)}\\ \frac{7}{5}\kappa^{-23/20}, & \text{(Stokes Q)}\\ \frac{61}{50}\kappa^{-142/125}+\frac{7}{1000}. & \text{(Stokes V)} \end{cases} \end{align} \end{linenomath} We have again made sign corrections to the absorptivities for Stokes Q and Stokes V compared to equations 38-40 of P16. We also multiply $A_V$ by a factor of $\text{sgn}({\cos\theta})$ to make it antisymmetric about $\theta = \pi/2$. The third term in curly braces for $A_{V,hi}$ has been changed from its original presentation to improve the fit's accuracy. We provide fitting formulae for $\rho_V$ and $\rho_Q$ for four \kdf{} distributions between $\kappa = 3.5$ and $\kappa = 5$. The structure of these fits is based on Faraday mixing coefficient fits for a thermal distribution provided in \citet{shcherbakov_propagation_2008}. 
\begin{linenomath} \begin{align} \rho_Q &= -\frac{n_ee^2\nu_c^2\sin^2\theta}{mc\nu^3}f(X_\kappa) \nonumber \label{eqn:rho_Q_kappa}\\ & \times \begin{cases} 17w-3\sqrt{w}+7\sqrt{w}\exp({-5w}), & (\kappa = 3.5)\\ \frac{46}{3}w-\frac{5}{3}\sqrt{w}+\frac{17}{3}\sqrt{w}\exp({-5w}), & (\kappa = 4)\\ 14w-\frac{13}{8}\sqrt{w}+\frac{9}{2}\sqrt{w}\exp({-5w}), & (\kappa = 4.5)\\ \frac{25}{2}w-\sqrt{w}+5\sqrt{w}\exp({-5w}), & (\kappa = 5) \end{cases} \end{align} \end{linenomath} where \begin{equation} f(X_\kappa) = \begin{cases} 1 - \exp\left(-\frac{X^{0.84}}{30}\right) - \sin\left(\frac{X}{10}\right)\exp\left(-\frac{3X^{0.471}}{2}\right), & (\kappa = 3.5)\\ 1 - \exp\left(-\frac{X^{0.84}}{18}\right) - \sin\left(\frac{X}{6}\right)\exp\left(-\frac{7X^{0.5}}{4}\right), & (\kappa = 4)\\ 1 - \exp\left(-\frac{X^{0.84}}{12}\right) - \sin\left(\frac{X}{4}\right)\exp\left(-2X^{0.525}\right), & (\kappa = 4.5)\\ 1 - \exp\left(-\frac{X^{0.84}}{8}\right) - \sin\left(\frac{3X}{8}\right)\exp\left(-\frac{9X^{0.541}}{4}\right). & (\kappa = 5) \end{cases} \end{equation} \begin{figure} \centering \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoQScatter.pdf}% \label{fig:2a}% }\qquad \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoQRelError.pdf}% \label{fig:2b}% }\qquad \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoVScatter.pdf}% \label{fig:2c}% }\qquad \subfloat[]{% \includegraphics[width=0.48\columnwidth]{rhoVRelError.pdf}% \label{fig:2d}% } \caption{Comparison of our fits to \symphony{}'s numerically evaluated Faraday mixing coefficients. Panels (a) and (c) are log-log scale plots of $|\rho_Q|$ and $\rho_V$, respectively, versus $X_\kappa$ for various \kdf{} distributions. The fitted values are shown by the solid line and \symphony{}'s numerical evaluations are represented by the marks. The sign change in $\rho_Q$ is represented in panel (a) by the change from "$+$" marks to "-" marks. Relative error of the fits versus $X_\kappa$ on a log scale is shown in panels (b) and (d). For these plots we set $w = 4$, $\theta = 60^{\circ}$, $B = 10$ Gauss, and varied $\nu$ to capture a range of $X_\kappa$.} \label{fig:2} \end{figure} Also, \begin{linenomath} \begin{align} \rho_V &= \frac{2n_ee^2\nu_c\cos{\theta}}{mc\nu^2}\frac{K_0(w^{-1})}{K_2(w^{-1})}g(X_\kappa) \nonumber \\ &\times \begin{cases} \frac{w^2+2w+1}{(25/8)w^2 + 4w+1}, & (\kappa = 3.5)\\ \frac{w^2+54w+50}{(30/11)w^2 + 134w+50}, & (\kappa = 4)\\ \frac{w^2+43w+38}{(7/3)w^2 + (185/2)w+38}, & (\kappa = 4.5)\\ \frac{w+(13/14)}{2w+(13/14)}, & (\kappa = 5) \end{cases} \end{align} \end{linenomath} where \begin{equation} \label{eqn:g_X} g(X_\kappa) = \begin{cases} 1 - 0.17\ln{\left(1+0.447X_\kappa^{-1/2}\right)}, & (\kappa = 3.5)\\ 1 - 0.17\ln{\left(1+0.391X_\kappa^{-1/2}\right)}, & (\kappa = 4)\\ 1 - 0.17\ln{\left(1+0.348X_\kappa^{-1/2}\right)}, & (\kappa = 4.5)\\ 1 - 0.17\ln{\left(1+0.313X_\kappa^{-1/2}\right)}. & (\kappa = 5) \end{cases} \end{equation} These fits become inaccurate when $X_\kappa \lesssim 10^{-1}$ or when $\nu/\nu_c \lesssim 1$. \textcolor{black}{Fig. \ref{fig:2} compares these fitted rotativities to \symphony{}'s numerical evaluations.} \section{Conclusion} We have corrected and extended earlier work on polarized radiative transfer coefficients. In particular we have made sign corrections and corrected typographical errors in the emissivity and absorptivity fits (eqs. \ref{eqn:J_S_thermal}-\ref{eqn:x_A_S}) originally presented in P16 to be consistent with IEEE/IAU conventions and our own coordinate system. 
In subsection \ref{Suscept_Tensor_Method} we present a new numerical integration method to calculate $\rho_Q$ from the components of the susceptibility tensor. We find that combining the relevant components of the susceptibility tensor prior to integration dramatically reduces cancellation error in evaluation of $\rho_Q$. Finally, we provide new fitting formulae for rotativities for various \kdf{} distributions in equations \ref{eqn:rho_Q_kappa}-\ref{eqn:g_X}. \textcolor{black}{The updated fits for all coefficients are now available in \symphony{}\footnote{The current version is available at \url{https://github.com/AFD-Illinois/symphony}} and are implemented in the ray-tracing code \texttt{ipole}\footnote{The current version is available at \url{https://github.com/AFD-Illinois/ipole}}.} It is important to note that the corrections here do not affect the simulated images used in \citetalias{PaperVII} and \citetalias{PaperVIII}. All images for these papers were run using the set of coefficients outlined in Appendix A of \citet{dexter_public_2016}.
1,314,259,992,733
arxiv
\section{Introduction} \label{SEC:Intro} In biomechanics the application of electromyography is a field especially rife with various signal processing methods. One reason for this is the complexity of the EMG signal, which requires processing if one wants to proceed beyond the simple level of on-off interpretation. A number of standard processing tools can be found in EMG software packages by commercial vendors. Researchers are usually also interested in new or modified processing methods, which may take a long time to be implemented in commercial software, if that happens at all. In this case one can choose ''prototyping'' tools like \textsc{LabView, Mathcad, Matlab, Octave, SciLab}, and so on. Over the years we have for instance frequently used \textsc{Labwindows CVI} (a C-based graphical programming tool) and \textsc{Mathcad} for data acquisition, visualization and analysis.
One common problem in data intensive projects is that the data and analysis documents get scattered among computers and data disks. Our solution is to transfer all data to a single database (in our case based on \textsc{MySql}). Subsets of data can then be selected by \texttt{sql} searches and exported as \texttt{csv}-tables or similar. The salient point is that all the analyses can be gathered in one single script file using the R-tool \citep{R2010}. Running this script generates all the graphs, figures and statistical reports that one may need for the project. This script is easily modified when needed and can be shared among collaborators and other researchers. We believe that R provides a useful platform for defining and using various signal analyzing algorithms in EMG studies, which may stimulate further refinements and developments through collaborative efforts. As noted in \citep[Preface]{Everitt2010}: \begin{quote} (...) R has started to become the main computing engine for statistical research (...) For reproducible piece of research, the original observations, all data processing steps, the statistical analysis as well as the scientific report form a unit and all need to be available for inspection, reproduction and modification by the readers. \end{quote} With this paper and the supplementary material we hope to give a demonstration of the potential uses of R in biosignal processing.
\section{What is R?} R is an interpreted functional programming language for statistical computing and data visualization created by Ross Ihaka and Robert Gentleman (University of Auckland, New Zealand). R is part of the GNU project and since 1997 it has been developed by the \emph{R Development Core Team}. The version R 1.0.0 was released on 29 February 2000. The official project website is \url{www.r-project.org} where one can naturally download the software (binaries, or source code) and manuals \citep{Venables2009}. R is an implementation of the S language developed at the AT\&T Bell Laboratories around 1975 (by Rick Becker, John Chambers and Allan Wilks), and is influenced by the \textsc{Lisp} dialect \textsc{Scheme} created at the MIT AI Lab, also around 1975 (by Guy L Steele and Gerald Jay Sussman). A large part of the software is written in the R language itself, but core functions are written in C and \textsc{Fortran}. As an interpreted language R may be slower than, say, pure C programs. Klemens, who advocates C as a general scientific programming tool, gives an example of running the Fisher test 5 million times: the speed relation came out as about 1:30 in favour of C \citep{Klemens2009}.
Thus in speed-critical cases, such as where we have large simulation samples, it may be necessary to revert to compiled C programs. However, in many other cases the versatility of R will outweigh the possible gains from optimizations for speed. The R tool is continually extended by packages contributed by an active community; there are presently more than 2000 packages available, see \url{http://cran.r-project.org/web/packages/}. Downloading and installing the R software on your computer should require nothing more than basic computer skills. A useful companion is the R scripting tool \textsc{NppToR} (\url{http://sourceforge.net/projects/npptor/}) which is an add-on to the editing tool \textsc{Notepad++}, which can be downloaded from \url{http://notepad-plus.sourceforge.net/uk/site.htm}. Since documentation about R is easily found on the web we will move on to describe what we can do with the software.
\section{Importing data} The first thing we want to do is to get our data into the workspace. R can access various databases directly but the most typical situation is where we have, say, EMG data exported in a text format. We use R as a console program, so the command for reading a simple ASCII table of data from a file into variable \verb|M| using the \verb|read.table()| function is:
\begin{verbatim}
M <- read.table("C:/.../myData.asc", header = FALSE)
\end{verbatim}
where the first argument contains the path to your data file. If the columns are separated for instance by \verb|";"| (instead of space) we have to add to the arguments \verb|sep = ";"|. Note also that the assignment operator in R is an arrow \verb|<-| and not \verb|=|. We have assumed that the data table had no header. In this case column names can be added by the command,
\begin{verbatim}
names(M) <- c("colName1", ..., "colNameN")
\end{verbatim}
if there are $N$ columns. You can then refer, for instance, to the first column as \verb|M$colName1|. Another way to extract the first column is as \verb|M[,1]|. One can inspect the content of the table read into \verb|M| by the command \verb|edit(M)|. Finally you can quit R by the command \verb|q()|. Instead of typing and running the commands at the console in a sequence, you can collect them in a script file (an ordinary text file) with the extension R, named say \verb|myScript.R|, and run it by the command \verb|source("C:/.../myScript.R")| where the argument contains the path to your script file.
\section{Analysing data} \subsection{Plotting data} The next thing you want to do is to inspect your data. For a quick plot of the data in the example above use the command,
\begin{verbatim}
plot(M$colName1, type = "l")
lines(M$colName2, type = "l")
\end{verbatim}
The second line adds the graph of the second column to the same plot. The \verb|plot| function has various arguments for controlling colours, titles, scales, and so on. Information about the function \verb|plot| is obtained by the command \verb|?plot|. An interesting feature is that with a few lines of code it is for example possible to plot histograms/graphs of hundreds of variables and print them to a pdf document for quick browsing on the computer. After these preliminaries we can move on to the processing of data. In this connection we must explain how we define new functions in R.
\subsection{Basic signal processing -- some examples} \subsubsection{User defined functions} In most programming tasks it is convenient to be able to define your own functions. We give a simple example of how this works in R.
Let us say that we want to sum the elements of vectors using a function \verb|sumVec(V)| which returns the sum and mean of a numeric vector \verb|V|. This can be defined in R by:
\begin{lstlisting}
sumVec <- function(V, start = 1){
  n <- length(V)
  sm <- 0
  for(i in start:n){
    sm <- sm + V[i]
  }
  mn <- sm/(n - start + 1)
  return(list(sum = sm, mean = mn))
}
\end{lstlisting}
We can call this as \verb|result <- sumVec(V)| which puts the sum and mean of the elements of the vector \verb|V| into a variable \verb|result|. Here we have also demonstrated the very useful \verb|list| structure in R. The sum (\verb|sm|) and mean (\verb|mn|) are put into a list with respective (arbitrary) names \verb|sum| and \verb|mean|. The variables are then referenced as \verb|result$sum| and \verb|result$mean| after the call
\begin{lstlisting}
result <- sumVec(V)
\end{lstlisting}
It is an advantage to have a function return a \verb|list| since it is then easy to modify the function by adding new variables to the output \verb|list| without affecting previous uses of the function. The function example also illustrates another aspect of R functions; that is, we may have arguments with default values, like \verb|start| as in the example. If the argument is not listed, as in
\begin{lstlisting}
result <- sumVec(V)
\end{lstlisting}
it will use the default value (\verb|start = 1|). In many functions we have set as default \verb|plot = FALSE| for a variable \verb|plot|, which means that the results will not be plotted unless one adds \verb|plot = TRUE| to the arguments. Functions can also have other functions as arguments, as for example in the case of \verb|EMG_spec| below. This example also illustrates the similarity with the common C and \textsc{Java} syntax. User defined functions can be collected in a separate file \verb|myFunctions.R| which can be made active (loaded into the workspace) by calling it using
\begin{verbatim}
source("C:/.../myFunctions.R")
\end{verbatim}
This corresponds to the \verb|#include| statement for header files and source files in C programming. The EMG processing functions to be described below are collected in the file \verb|EMGfuns.R| in the supplementary material to this paper. The hash-symbol \verb|#| is used for comment lines in the R scripts.
\subsubsection{Simulated data} It is useful to have access to various test data, and as an example we have implemented a standard algorithm \citep[pp.70-71]{Hermens1999} in \verb|EMG_sim| which generates simulated EMG data of desired length. This function returns a \verb|list| where the data is contained in the component \verb|sim|. For instance one can probe the spectrum function using the test data as follows:
\begin{lstlisting}
mysim <- EMG_sim(3000)
myspec <- EMG_spec(mysim$sim, plot = TRUE)
\end{lstlisting}
\subsubsection{Rectification, RMS and turns} Rectifying raw data means simply taking the absolute value of the elements. For EMG data it may be useful to first subtract any offset (bias) from the raw data. Thus the rectification of \verb|V| could be written as \verb|abs(V - mean(V))| and the \verb|EMG_rect| function in \verb|EMGfuns.R| becomes very simple. It is noteworthy that the effect of the rectification of EMG is similar to the rectification of AM radio waves whose purpose is to enhance the low frequency components which encode the voice signals. For EMG the low frequency ''voice'' part corresponds to the encoded force \citep{Borg2007}.
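As a minimal illustration (not the exact \verb|EMG_rect| code from \verb|EMGfuns.R|), offset removal and rectification can be written as:
\begin{lstlisting}
# Remove the DC offset (bias) and take the absolute value
rectify <- function(V) {
  abs(V - mean(V))
}
# Example with the simulated EMG data from above:
rectified <- rectify(EMG_sim(3000)$sim)
\end{lstlisting}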
A bit less trivial from the programming point of view is the \verb|EMG_rms| function which computes the Root Mean Square (RMS) of the EMG and can basically be represented as \begin{equation} \sqrt{ \frac{1}{\Delta T} \int_{t - \Delta T/2}^{t + \Delta T/2} EMG(u)^2 du}. \end{equation} To write these sorts of filtering or enveloping functions it is convenient to use the built-in \verb|filter(V, filtc, ...)| function. This takes input data \verb|V| and outputs a vector with elements \begin{equation} y[i] = \mathrm{filtc}[1] \cdot V[i+o] + \cdots + \mathrm{filtc}[p] \cdot V[i+o-(p-1)], \end{equation} where \verb|filtc[i]| are the filter coefficients. The offset $o$ depends on the argument \verb|sides| such that \verb|sides = 2| corresponds to zero lag with $o = (p-1)/2$ if $p$ is odd. For the moving average one uses \verb|method = "convolution"|. The function \verb|EMG_rms| has an argument \verb|DT| which determines the size in milliseconds of the moving window over which one calculates the RMS. This function also illustrates a special feature of R: the use of the \verb|'...'| argument.
\begin{lstlisting}
EMG_rms <- function(V, sampFreq = 1000, DT = 250, plot = FALSE, ...) {
  #part of the function declaration
  rectV <- sqrt(filter(rectV^2, filter1, sides = 2,
                       method = "convolution", ...))
  #rest of the function declaration
}
\end{lstlisting}
In this case it means that we can pass arguments to the \verb|filter| function employed by \verb|EMG_rms|. For instance, \verb|filter| has an argument called \verb|circular|, and via \verb|EMG_rms(..., circular = TRUE)|, we can pass a value (\verb|TRUE| in this case) to this argument of the function \verb|filter|.
A version of the classical method of counting turns \citep{Willison1963,Willison1964} is implemented by the function \verb|EMG_turns| and it also uses a moving window \verb|DT| over which one sums the number of turns. The implementation \verb|EMG_wturns| is perhaps closer to the original idea by Willison but the practical difference between the two seems small. The turns functions return a structure
\begin{verbatim}
list(turns.ps = turns_per_sec, turns.where = turns).
\end{verbatim}
The variable \verb|turns.where| contains the time indices where the turns are counted. (This is also shown in the plot of the function.) This data may be of interest when, for instance, one wants to calculate an entropy metric for the signal.
\subsubsection{Time-frequency domain} Part of the inspection of the EMG signal is to study its frequency properties. This is typically performed by calculating the power spectrum of the data. For this purpose one subdivides the original time series into blocks of some time length $\Delta T$, then calculates the power spectrum for each block and takes their mean as the final power spectrum. For the subdivision one normally uses a 50\,\% overlap which further suppresses the variance of the final spectrum estimate \citep{Press2002}. This method is implemented in the function \verb|EMG_spec|. It uses a default windowing of the data by the filter \verb|filtWelch|. The window size $\Delta T$ is given by the argument \verb|DT| in milliseconds. The nominal frequency resolution is then given by $\Delta f = 1/ \Delta T $. The function outputs a \verb|list| with the power density estimate in \verb|psd|; the list also contains the mean (MNF) and median (MDF) frequencies in \verb|meanf| and \verb|medianf|. The time-frequency methods naturally rely on the \emph{Fast Fourier Transformation} (FFT) which in R is called by \verb|fft|.
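The following is a simplified base-R sketch of such an averaged spectrum estimate; it is not the \verb|EMG_spec| implementation (and it omits a proper density normalization), but it illustrates the block/overlap/window pattern and the mean frequency (MNF) calculation described above.
\begin{lstlisting}
# Simplified Welch-style power spectrum: 50% overlapping blocks of length
# DT (ms), each tapered with a Welch (parabolic) window, periodograms averaged.
# Assumes length(V) is at least one block long.
welch_psd <- function(V, sampFreq = 1000, DT = 250) {
  L    <- round(DT * sampFreq / 1000)               # block length in samples
  step <- floor(L / 2)                              # 50% overlap
  idx  <- 0:(L - 1)
  win  <- 1 - ((idx - (L - 1)/2) / ((L - 1)/2))^2   # Welch window
  starts <- seq(1, length(V) - L + 1, by = step)
  nf   <- floor(L / 2) + 1
  psd  <- rep(0, nf)
  for (s in starts) {
    block <- (V[s:(s + L - 1)] - mean(V)) * win
    psd   <- psd + abs(fft(block))[1:nf]^2
  }
  psd  <- psd / length(starts)
  freq <- (0:(nf - 1)) * sampFreq / L               # frequency axis in Hz
  list(freq = freq, psd = psd,
       meanf = sum(freq * psd) / sum(psd))          # mean frequency (MNF)
}
# e.g. sp <- welch_psd(EMG_sim(3000)$sim); plot(sp$freq, sp$psd, type = "l")
\end{lstlisting}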
Using the \emph{Short Time Fourier Transformation} (STFT), which applies the FFT to subintervals of the time series, we can, for instance, calculate how the mean frequency varies with time. This is implemented by the function \verb|EMG_stft_f|. One use of mean/median frequency is for the study of muscle fatigue as a function of time, which is often associated with a decrease in mean frequency \citep{Lindstrom1977}.
Using the \verb|fft| transform we can filter EMG signals by suppressing the higher frequency components. The method employed in \citep{Borg2007} is here implemented by the function \verb|EMG_bw0|. It first calculates the Average Rectified Value, then applies \verb|fft| which gives the Fourier coefficients $c(f_k)$. These are multiplied by a filter factor, \[ c(f_k) \mapsto {\tilde c}(f_k) = \frac{c(f_k)}{1 + \left(\frac{f_k}{f_c}\right)^n}, \] where $f_c$ is the low pass cut-off frequency of the filter and $n$ is the order of the filter. Finally we obtain the filtered signal by applying the inverse \verb|fft| to ${\tilde c}(f_k)$. This is basically a zero-lag version of the Butterworth filter. With $n = 4$ and $f_c = 1$ Hz we obtained quite a good correspondence between gastrocnemius EMG and the muscle force as expressed by the anterior-posterior COP (center of pressure) during quiet standing. We have also implemented the filter corresponding to a second order critically damped system, \[ c(f_k) \mapsto {\tilde c}(f_k) = \frac{c(f_k)}{\left( 1 + i\frac{f_k}{f_c}\right)^2}, \] which is one of the basic models for the EMG-to-force transfer function \citep{Soechting1975}. Note that the function \verb|EMG_crit2| also rectifies the EMG before filtering.
For comparisons of EMG vs EMG, or (filtered) EMG vs force, etc., correlation methods are essential. Given two time series $x$ and $y$ we may define a correlation function $c_{xy}(t)$ by, \[ c_{xy}(t) = \frac{\int_0^T {\tilde x}(u) {\tilde y}(u + t) du}{\sqrt{\int_0^T {\tilde x}(u)^2 du} \sqrt{\int_0^T {\tilde y}(u)^2 du}}, \] where ${\tilde x}$ is $x$ with the mean value $\bar x$ subtracted, ${\tilde x}(t) = x(t) - \bar x$, etc. In the discrete version this is implemented by \verb|EMG_corr| employing \verb|fft| methods. In the frequency domain a coherence function is defined by, \[ \label{EQ:Coherency} \text{coh}_{xy}(f) = \frac{\langle \hat{x}^\star(f) \hat{y}(f)\rangle}{\sqrt{\langle |\hat{x}(f)|^2\rangle} \sqrt{\langle |\hat{y}(f)|^2\rangle}}, \] where $\langle \cdots \rangle$ denotes statistical averaging. This is estimated by \verb|EMG_coh| by dividing the time series into time slices of size \verb|DT| and calculating the Fourier coefficients for these slices, and finally taking the averages over the blocks. \verb|EMG_corr| and \verb|EMG_coh| can be used to investigate time lags and phase shifts between signals. In the case that we have a linear relationship, $y = H \star x + n$, with a transfer function $H$ (and uncorrelated ''noise'' $n$), we would get \[ \mathrm{coh}_{xy}(f) = \frac{\hat H(f)}{|\hat H(f)|} = e^{i \phi(f)}, \] where $\phi(f)$ is the phase function of the transfer function. A related time shift $\tau$ can then be obtained from $2 \pi \tau = d\phi(f)/df$.
\subsubsection{Frequency band analysis} As is well known from musical transcription, it is convenient to study sound by how the power (intensity) is distributed over the frequency bands (pitch) as a function of time. This is useful also in basic signal analysis. The idea is to decompose signals using filter banks.
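Before turning to filter banks, here is a small base-R sketch of the zero-lag frequency-domain low-pass described in the previous subsection. It is a simplified stand-in for \verb|EMG_bw0| (the rectification step and the default parameters are assumptions for the example), not the implementation in \verb|EMGfuns.R|.
\begin{lstlisting}
# Zero-lag low-pass in the frequency domain: rectify, transform, damp the
# coefficients above f_c with the filter factor 1/(1 + (f/f_c)^n), invert.
lowpass_fft <- function(V, sampFreq = 1000, fc = 1, n = 4) {
  x  <- abs(V - mean(V))                 # rectified signal
  N  <- length(x)
  ck <- fft(x)
  f  <- (0:(N - 1)) * sampFreq / N
  f  <- pmin(f, sampFreq - f)            # fold to two-sided (symmetric) frequencies
  ck <- ck / (1 + (f / fc)^n)            # real, even filter factor -> zero lag
  Re(fft(ck, inverse = TRUE)) / N        # filtered envelope
}
# e.g. env <- lowpass_fft(EMG_sim(5000)$sim, fc = 1, n = 4)
\end{lstlisting}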
Wavelets can be considered as a special realization of the idea of filter banks \citep{Vetterli1995}. An interesting hybrid method for ''intensity analysis'' of EMG has been proposed in \citep{Tscharner2000}. The idea is to divide the frequency band of interest, say one from 10 Hz to 200 Hz, into subbands centered on frequencies $f_c^{(j)}$ ($j = 1, \cdots , J$) such that the relative bandwidth $BW = \Delta f(j)/f_c^{(j)}$ scales as $1/\sqrt{f_c^{(j)}}$ over the frequency band. Here $\Delta f(j)$ is the frequency resolution of the ''mother wavelet'' $\psi$ at the center frequency $f_c^{(j)}$. In this way one can provide a distribution of signal power among the frequency bands. The power in the frequency band $j$ at time $t$ is given by $\left|c_j\right|^2$ where, \[ c_{j}(t) = \int \bar{\psi}_j (u - t) x(u) du, \] and ${\psi}_j$ is the wavelet centered at $f_c^{(j)}$. This differs from the recipe in \citep{Tscharner2000} but that is mainly because we use here the full complex coefficient. We present here a modification \citep{Borg2003} which is based on the Morlet function, which in frequency space is given as \[ \hat{\psi}(f_c, \alpha, f) = \exp\left( - \frac{2 \pi^2}{\alpha f_c} (f - f_c)^2 \right). \] For the center frequencies we select, following von Tscharner, \[ f_c^{(j)} = \frac{1}{s}(q+j)^2 \quad (j = 0, \cdots, J-1), \] determined by the parameters $s$ (''scale'') and $q$. The implementation is given by \verb|EMG_morvt| which is again based on using the \verb|fft| transformation. This function returns a \verb|list| where \verb|powc| refers to the matrix of the $c^{(j)}[i]$ coefficients, \verb|freqc| to the array with the center frequencies, and \verb|freqm| contains an estimate of the instantaneous mean frequency calculated as the average of the center frequencies weighted with the power coefficients $\left|c_j\right|^2$.
\subsubsection{Multi resolution analysis, MRA} In the above examples we have relied on the basic libraries that belong to the default setup of the R system. In the next example we will take advantage of a library that provides functions for discrete wavelet analysis. There is for instance a package appropriately named \verb|wavelets| (by Eric Aldrich). In order to install it one enters the command
\begin{verbatim}
install.packages("wavelets")
\end{verbatim}
which will look up a repository and ask you to download the package. When successfully installed it can be loaded by the command
\begin{verbatim}
library(wavelets)
\end{verbatim}
The command \verb|library()| with an empty argument will show the packages installed on your system. Information about the package \verb|wavelets| can be obtained by the command
\begin{verbatim}
help(package = "wavelets")
\end{verbatim}
or \verb|??wavelets|. In multi resolution analysis (MRA) we repeatedly apply low- and high-pass filters to a discrete time series which thus can be decomposed into fine and coarse grained parts. The simplest example is the Haar filter. If $x = (x_1, x_2, \cdots)$ then Haar low pass and high pass filters produce the series $a = (a_1, a_2, \cdots)$ and $b = (b_1, b_2, \cdots)$, with \begin{align*} &a_i = \frac{x_{2i} + x_{2i-1}}{\sqrt{2}}, \\ &b_i = \frac{x_{2i} - x_{2i-1}}{\sqrt{2}}. \end{align*} The averaging procedure produces a coarse grained sum version $a$, while $b$ contains the detail. Symbolically the decomposition can be written $x = (a|b)$. This procedure can be repeated taking $a$ as an input for the decomposition procedure.
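A single Haar analysis step as defined above can be written in a few lines of base R (a minimal sketch; the \verb|wavelets| package of course performs the full pyramid of decompositions):
\begin{lstlisting}
# One-level Haar analysis step: split x into a coarse (sum) part a and
# a detail part b, following the equations above.
haar_step <- function(x) {
  n    <- length(x) %/% 2
  even <- x[2 * (1:n)]          # x_{2i}
  odd  <- x[2 * (1:n) - 1]      # x_{2i-1}
  list(a = (even + odd) / sqrt(2),
       b = (even - odd) / sqrt(2))
}
# Repeating the step on the coarse part gives the next level:
l1 <- haar_step(EMG_sim(1024)$sim)
l2 <- haar_step(l1$a)
\end{lstlisting}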
In this fashion we obtain \[ x = (a^J|b^J|b^{J-1}| \cdots |b^1), \] for a decomposition of order $J$. The $k$-th level detail coefficients $b^k$ represent information about the changes in the time series on a time scale proportional to $2^k$. The function \verb|EMGx_mra()| is a wrapper for \verb|mra| in the \emph{wavelets} package. A new feature here is that the function \verb|mra| returns a \verb|class| object with \verb|slot|s whose names can be accessed by the function \verb|slotNames|. For instance the detail coefficients have the name \verb|D| and the sum coefficients the name \verb|S|. If \begin{verbatim} res <- mra(X) \end{verbatim} then the vectors with the coefficients are accessed as \begin{verbatim} res@D[[j]], and res@S[[j]], \end{verbatim} for the level $j$. The original data can be obtained as a sum of the decomposition, \begin{verbatim} X = res@D[[1]] + res@D[[2]] + ... + res@D[[J]] + res@S[[J]]. \end{verbatim} Thus \verb|res@D[[j]]| reflects the signal content on a time scale of the order $2^j \cdot f_s^{-1}$ where $f_s$ is the sampling rate. \subsubsection{Batch processing} As the number of data files grows it is important to be able to process them in one go. This kind of batch processing can be simply implemented in R. We will assume that we have a set of data files \verb|name1.asc|, ... , \verb|nameN.asc|. One can collect the paths of these files into \verb|filelist.asc| and write an R-script which opens each of these files for processing. One thing to remember is that the file paths must be in the Unix format using \verb|/| (or \verb|\\| ) instead of \verb|\|. The files can also be selected interactively by using the \verb|tk_choose.files()| function,
\begin{lstlisting}
library(tcltk)   # load the tcltk package
Filters <- matrix(c("EMG data", ".asc", "All files", "*"),
                  2, 2, byrow = TRUE)
if(interactive())
    filelist <- tk_choose.files(filter = Filters)
\end{lstlisting}
This will open the ''Select files'' dialogue and put the selected files into the \verb|filelist| variable (with file paths in the Unix format). The following snippet is a simple example which opens the files in the \verb|filelist| and plots the first column to a pdf-file, and writes the standard deviation to a text-file.
\begin{lstlisting}
# filelist -- contains paths to ASCII files with EMG data
outputpdf <- "C:/EMGanalysis.pdf"   # output graphs to this file
outputtxt <- "C:/EMGanalysis.txt"   # output text/numbers to this file
pdf(outputpdf)               # starts the pdf driver and opens the output pdf-file
fp <- file(outputtxt, "w")   # opens text-file for writing
n <- length(filelist)
for(i in 1:n){
  EMG <- read.table(filelist[i], header = FALSE)
  title <- paste("Data from ", filelist[i])
  # this one goes to the text file -->
  cat("Standard deviation = ", sd(EMG$V1), " for data EMG$V1 in file ",
      filelist[i], "\n", file = fp)
  # this one goes to the pdf file -->
  plot(EMG$V1, main = title, type = "l")
}
close(fp)    # closes the text file
dev.off()    # closes the pdf driver
\end{lstlisting}
It illustrates how one reads the data and opens a file for writing, where the writing to the file is performed using the function \verb|cat|. (It computes the standard deviation of the time series using the function \verb|sd| and writes it to the file.) This example is easily generalized to more complicated processing tasks. \subsection{Statistics} R is by definition a statistics software package whence all the well-known, and many less well-known, statistical procedures are implemented.
Important sub\-topics are descriptive statistics, statistical testing, and modeling data. Since our emphasis here is on signal processing we will not go into the statistical methods. At the very basic level we have, for instance, \verb|hist(X)| which computes and plots a histogram of numerical data \verb|X|, while \verb|plot(ecdf(X))| first calculates the empirical cumulative distribution function (ecdf) and then plots the result. The \emph{Student test} is performed by \verb|t.test| and, for instance, \verb|t.test(X, mu = 2)| computes the $p$-value for testing whether the mean differs from 2, and the 95\% confidence interval for the mean of \verb|X|. For an introduction to statistical analysis using R we recommend \cite{Everitt2010} which is provided with an R-package \verb|HSAUR2| that contains the codes and the data sets. \section{Conclusions} We have given a brief introduction to some basic features of the R software used as a tool for analyzing and displaying EMG data, and biosignal data in general. Using R it is easy to document the exact procedures employed in analyzing the data so that it can be replicated by other researchers. A next level would be to develop a dedicated \textsc{Remg} package with tools covering various aspects of EMG and related kinesiological data and signals (MMG, ECG, etc). Such a package could be supplied with a representative set of data for testing and demonstrating the analysis methods. Finally we would also like to emphasize the usefulness of R in teaching basic data processing and visualization methods to biomechanics students. \begin{comment} \section*{Conflict of interest statement} The author acknowledges that he does not have any financial or personal relationships with other people or organizations that would inappropriately influence the results of this study. \end{comment} \section*{Acknowledgments} The author thanks Maria Finell for gathering the EMG- and balance data used in the examples (supplementary material). He is also indebted to W Jeffrey Armstrong for exchanges on the ''intensity analysis'' which has resulted in an update of the \verb|EMGfuns.R| file. \section*{Supplementary material} The file \verb|data.bal| contains quiet standing balance COP-data in ASCII format. First column contains COP X, second column contains COP Y, both in millimeters. The third column contains total vertical force (Newton). The columns are \verb|tab|-separated (\verb|\t|). Sampling rate is 100 Hz. The file \verb|data.emg| contains the EMG-data in ASCII format. The columns contain data from the muscles Tibialis anterior (right), Lateral Gastrocnemius (right), Medial Gastrocnemius (right), Tibialis anterior (left), Lateral Gastrocnemius (left), Medial Gastrocnemius (left), all sampled at 1000 Hz. The file \verb|emg_analysis.R| is the R-script which demonstrates a few basic analyzing methods with the EMG and the balance data. The script either produces a \verb|pdf| report (\verb|result.pdf|), or shows the results on the console, depending on the setting of the variable \verb|report| (\verb|TRUE| or \verb|FALSE|). The file \verb|EMGfiles.R| contains a script which demonstrates how to set up batch processing. The file \verb|EMGfuns.R| contains the basic R-scripts (functions) for analyzing EMG which are employed by \verb|emg_analysis.R|. The files are by default assumed to reside in the directory \verb|C:/EMGR|. The files can be downloaded from \url{http://terra.chydenius.fi/~frborg/emg/EMGR.zip}, and also from the arXiv site as supplementary material.
\section{The SUPERBLINK survey} We have been conducting an all-sky survey for stars with large proper motions using data from the Digitized Sky Surveys (DSS). The scanned images in the DSS cover the entire sky in multiple bands and at various epochs, and the temporal baseline between the earliest and latest epoch is between 15 and 45 years for most areas on the sky. The large motion displayed by high proper motion stars between the two epochs is detected directly from the scans by means of an image subtraction algorithm, described in detail in \cite[L\'epine \etal (2002)]{LSR02}. Two-epoch finder charts are generated, which can be blinked on the computer screen. All objects detected in the survey are thus verified by eye, and spurious detections are excluded. At the bright end, stars tend to become saturated on the DSS images, and are no longer properly detected by the code. The TYCHO-2 catalog is used to complete the census at the bright end. Also, all stars detected by SUPERBLINK are searched for a counterpart in the TYCHO-2 catalog. The positional and proper motion information from the TYCHO-2 catalog is used for all matching counterparts. The two-epoch charts are however examined to verify consistency; in some cases, it is found that the proper motion from TYCHO-2 must be in error, and the SUPERBLINK proper motion is used instead. Counterparts in the 2MASS All-Sky Catalog of Point Sources \cite[Cutri \etal (2003)]{C03} are also identified for all SUPERBLINK detections. At the faint end, the positions are those extrapolated from the 2MASS catalog, and are thus realized in the ICRS system and accurate to about 0.1''. Counterparts are also identified in the USNO-B1 catalog \cite[Monet \etal (2003)]{M03}, and, together with 2MASS, provide optical and infrared magnitudes for almost all the stars. The systematic comparison with the 2MASS catalog allows us to systematically identify all common proper motion doubles which are resolved in the 2MASS images. The CCD observations from 2MASS have significantly higher resolution than the photographic images from the DSS. Numerous common proper motion doubles and multiple systems have been identified and their components will be listed as separate entries in the LSPM catalog. Many faint companions of Hipparcos stars are also being identified this way \cite[e.g. L\'epine \& Bongiorno (2007)]{LB07}. \begin{figure}[t] \begin{center} \includegraphics[width=5in]{2109_Lepine_1.eps} \caption{Comparison between the distribution of faint ($V>16$) stars from the NLTT catalog (top) and the new LSPM catalog (bottom). The new catalog fills in all the gaps of the NLTT, particularly in the low Galactic latitude fields, and provides the most complete all-sky census of high proper motion stars to date.} \label{fig1} \end{center} \end{figure} \section{Replacing the NLTT catalog} The NLTT catalog \cite[Luyten (1979)]{L79} was notoriously incomplete in two main regions: the sky south of Decl.=-30$^{\circ}$, and areas of high stellar density along the plane of the Milky Way. The incompleteness was most severe for stars fainter than magnitude V=16 (see Fig.1). The northern part of the SUPERBLINK survey was completed first, and the results have already been published in \cite[L\'epine \& Shara (2005)]{LSR05} as the LSPM-north catalog. The full LSPM catalog now fills in most of the remaining gaps in the south, and at last provides a true, all-sky census of faint stars with large proper motions (Fig.1).
While there remains some level of incompleteness at low Galactic latitudes, especially toward the Galactic center, most of the variations in surface density observed in Fig.1 are due to selection effects from the high proper motion cutoff ($\mu>0.15''$ yr$^{-1}$) of the survey. A combination of the Sun's motion through the local standard of rest and the asymmetric drift of the Galactic thick disk and halo stars results in more stars having large transverse motions at high Galactic latitudes, hence the large density of stars detected there. Compared with the 58,845 stars with proper motions $\mu>0.18''$ yr$^{-1}$ listed in the NLTT catalog, the LSPM catalog will list over 122,000 stars with proper motions $\mu>0.18''$ yr$^{-1}$. With the increased sky coverage and completeness, the LSPM catalog makes the Luyten catalog obsolete, and from now on should be used as a replacement for the NLTT. For convenience, all NLTT stars will also be identified in the LSPM both by their LHS designation and NLTT catalog number. \section{Stellar contents and kinematics} A reduced proper motion diagram shows the stars in the LSPM to be of three main classes. Low-mass K and M red dwarfs from the disk population dominate, but a significant fraction are low-mass subdwarfs from the halo (sdK, sdM), and the catalog also contains thousands of white dwarfs. While we currently lack parallax distances for most of the stars in the LSPM catalog, photometric distances can be calculated for specific classes of objects. The red dwarfs, in particular, have a reasonably well calibrated $[M_v,V-J]$ color-magnitude relationship \cite[L\'epine (2005)]{L05}. With photometric distances and proper motions, it is possible to investigate the local kinematics of the red dwarfs in the vicinity of the Sun ($d<100$pc). By selecting stars in specific parts of the sky, one can obtain velocity-space projections in the UV, UW, and VW plane (Figure 2). Because of the high proper motion cutoff of the LSPM catalog ($\mu>0.15''$ yr$^{-1}$), stars with low projected velocities are not represented in the census, which leaves a low-velocity ``hole'' in the maps of projected velocities. The hole increases for stars at larger distances. Despite this artifact, one can see that the velocity space projections of the nearby red dwarfs are not isotropic and show considerable structure. A comparison with the velocity space distribution of Hipparcos stars calculated by \cite[Nordstr\"om \etal (2004)]{N04} shows very good agreement with our data. This shows how future astrometry, providing accurate parallaxes for all the LSPM stars, may have a major impact in uncovering fine structure in the kinematics of stars in the Solar vicinity. \begin{figure}[t] \begin{center} \includegraphics[width=6in]{2109_Lepine_2.eps} \caption{Projected motions in the UVW planes of red dwarf stars in the Solar neighborhood, based on LSPM catalog proper motions and photometric distances. The UW projection is from stars found at low Galactic latitude in the direction of the apex and antapex of the Sun's orbital motion around the Galaxy. The VW projection is obtained from stars in the direction of the Galactic center and anti-center.} \label{fig2} \end{center} \end{figure} \section{Conclusions} The LSPM catalog now has all-sky coverage. The LSPM-south catalog will complement the already released LSPM-north, and yield a highly complete catalog of stars with proper motion $\mu>0.15 ''$ yr$^{-1}$.
The catalog is estimated to be $>98\%$ complete for all H-burning stars and white dwarfs with proper motions in the range above, covering virtually all objects down to visual magnitude 19. The catalog is realized at the bright end by the Tycho-2 catalog, down to magnitude $\approx10-12$. At the faint end, which encompasses the vast majority of the stars, the proper motions are obtained from the SUPERBLINK software, while the positions are determined by the counterparts in the 2MASS catalog. Overall, the positional accuracy of the catalog is thus better than $0.12''$, while proper motions at the faint end have typical errors $\approx10$ mas yr$^{-1}$ north of Decl.=-30$^{\circ}$, and $\approx20$ mas yr$^{-1}$ south of this. All double stars which are resolved in the 2MASS survey have been identified and are listed individually. The SUPERBLINK survey is now being expanded to lower proper motion regimes, and future releases will expand the catalog to proper motions $\mu>40$ mas yr$^{-1}$.
\section{Dual and conjugate variables for $q$-Araki-Woods von Neumann algebras in finite dimensions} \label{sec:dc} In this section we will consider only the case of finite-dimensional $\mathsf{H}_\Bbb R$. Fix then $d \in \Bbb N$ and write $\mathsf{H}$ for the complexification of $\mathsf{H}_\Bbb R=\Bbb R^d$. We assume that we are also given $(U_t)_{t \in \Bbb R}$, a group of orthogonal transformations of $\Bbb R^d$, whose generator (both on $\mathsf{H}_\Bbb R$ and on $\mathsf{H}$) will be denoted by $A$. The space $\mathsf{H}$ is thus equipped both with the standard scalar product and with the deformed scalar product $\langle \xi, \eta \rangle_U:= \langle \xi, \frac{2A}{1+A} \eta \rangle$; if we want to stress the difference we will sometimes use the notation $\mathsf{H}_U$. Further we write $\mathcal{F}_q(\Hil_U)$ for the associated $q$-Fock space, with $\FockqH_{\textup{alg}}$ as the subspace spanned by finite tensors, and $e_0=\Omega$ the vacuum vector. We denote the associated $q$-Araki-Woods von Neumann algebra $\Gamma_q(\mathsf{H}_\Bbb R, U_t)$ simply by $\mathsf{M}$ and the canonical $q$-quasi free state $\langle\Omega,\cdot\Omega\rangle_{\mathcal{F}_q(H_U)}$ by $\varphi$. Finally for each $\xi\in \FockqH_{\textup{alg}}$, we denote by $W(\xi)$ the unique element in $\mathsf{M}$ which satisfies $W(\xi)\Omega=\xi$. Let $\{e_1,\ldots, e_d\}$ be a linearly independent set of vectors in $\mathsf{H}$, and for $i \in \{1,\ldots,d\}$ let $A_i=W(e_i)$, i.e.\ $A_i \in \mathsf{M}$ and $A_i\Omega=e_i$. We say that a tuple $(D_1,\ldots, D_d)$ of unbounded operators on $\mathcal{F}_q(H_U)$ with $\FockqH_{\textup{alg}}$ contained in their domains and $\mathds{1}$ contained in the domains of their adjoints is a {\em (normalized) dual system} for $(A_1,\ldots, A_d)$ if for all $i, j \in \{1,\ldots,d\}$ \[{[D_i,A_j]}=\langle\bar{e}_j, e_i\rangle_U P_{\bc \Omega}=\varphi(A_jA_i)P_{\bc \Omega}\;\;\mbox{ and } D_i\Omega=0.\] Here $\bar{\xi}$ denotes the usual conjugate of a vector $\xi$ in $\mathbb{C}^d$ and $P_{\mathbb{C}\Omega}$ denotes the projection onto the one-dimensional subspace $\mathbb{C}\Omega$. Before we proceed any further, let us note that existence of dual variables implies existence of conjugate variables. We will actually show directly that the existence of dual variables implies existence of the conjugate variables with respect to the quasi-free difference quotients (see \cite[Definition 3.11]{Brent}), which in turn implies existence of the usual conjugate variables (see \cite[Remark 3.13]{Brent}). Recall that the \emph{quasi-free difference quotients} $\partial_{i}$ are defined as unique derivations from $\mathbb{C}[A_i,\dots, A_d]$ into $\mathsf{M} \overline{\otimes} \mathsf{M}^{op}$ such that $\partial_i(A_j) := \varphi(A_j A_i)\mathds{1}\otimes \mathds{1}$ for all $i,j\in \{1,\ldots,d\}$. The \emph{conjugate variable} for $\partial_i$ will be a vector $\xi_i \in L^{2}(\mathsf{M},\varphi)$ such that \[ \langle \xi, x\mathds{1}\rangle = \langle \mathds{1}\otimes \mathds{1}, \partial_i(x)(\mathds{1}\otimes \mathds{1})\rangle \] for all $x \in \mathrm{dom}(\partial_i)$. \begin{prop}[{See \cite[Theorem 2.5]{MS}}] Suppose that $(D_1,\dots, D_d)$ is a normalized dual system for $(A_1,\dots, A_d)$. Then $(D_{1}^{\ast}\mathds{1},\dots, D_{d}^{\ast}\mathds{1})$ are conjugate variables for $(A_1,\dots, A_d)$. 
\end{prop} \begin{proof} It suffices to check that for all $i \in \{1,\ldots,d\}$, $n \in \Bbb N$ and $j_1, \ldots,j_n \in \{1,\ldots,d\}$ we have $\langle D_{i}^{\ast}\mathds{1}, A_{j_{1}}\dots A_{j_{n}}\mathds{1}\rangle = \langle \mathds{1}\otimes \mathds{1}, \partial_i(A_{j_{1}}\dots A_{j_{n}})(\mathds{1}\otimes \mathds{1})\rangle$. The left-hand side is equal to $\langle \mathds{1}, D_{i} A_{j_{1}}\dots A_{j_{n}}\mathds{1}\rangle$. The defining property of $D_i$ says that $D_i A_{j_1} = A_{j_1} D_i + \varphi(A_{j_{1}} A_{i}) P_{\Omega}$. It follows that \begin{align*} \langle \mathds{1}, D_{i} A_{j_{1}}\dots A_{j_{n}}\mathds{1}\rangle &= \langle \mathds{1}, A_{j_{1}} D_{i}\dots A_{j_{n}}\mathds{1}\rangle + \varphi(A_{j_{1}} A_{i}) \langle \mathds{1}, A_{j_{2}}\dots A_{j_{n}}\mathds{1}\rangle \\ &= \langle \mathds{1}, A_{j_{1}} D_{i}\dots A_{j_{n}}\mathds{1}\rangle + \varphi(A_{j_{1}} A_{i}) \varphi( A_{j_{2}}\dots A_{j_{n}}) \end{align*} Continuing in this way we will obtain the final formula: \[ \langle \mathds{1}, D_{i} A_{j_{1}}\dots A_{j_{n}}\mathds{1}\rangle = \sum_{k=1}^{n}\varphi(A_{j_{k}} A_{i}) \varphi(A_{j_{1}}\dots A_{j_{k-1}})\varphi(A_{j_{k+1}}\dots A_{j_{n}}) + \langle \mathds{1}, A_{j_{1}}\dots A_{j_{n}}D_i \mathds{1}\rangle, \] where the last term vanishes as $D_i \mathds{1}=0$. This is equal to $\langle \mathds{1}\otimes \mathds{1}, \partial_i(A_{j_1}\dots A_{j_{n}})(\mathds{1}\otimes \mathds{1})\rangle$, because the value $\partial_i(A_{j_1}\dots A_{j_{n}})$ can be computed exactly as for free difference quotients, merely replacing Kronecker deltas $\delta_{i j_k}$ with the covariance $\varphi(A_{j_{k}} A_i)$. \end{proof} \begin{lem}\label{lem:basechange} Let $\{e_i\}_{1\leqslant i\leqslant d}$ and $\{f_i\}_{1\leqslant i\leqslant d}$ be two linearly independent sets in $\mathsf{H}$ such that for every $j \in \{1,\ldots,d\}$ we have $f_j=\sum_{k=1}^dx_{jk}e_k$ for some $x_{jk}\in \mathbb{C}$. If $A_i=W(e_i)$ and $C_i=W(f_i)$, $i \in \{1,\ldots,d\}$, then a dual system for $\{A_i\}_{1\leqslant i\leqslant d}$ exists if and only if one for $\{C_{i}\}_{1\leqslant i\leqslant d}$ does. \end{lem} \begin{proof} Note that the definition of Wick operators assures that $C_j=\sum_{k=1}^d{x_{jk}}A_k$, $j \in \{1,\ldots,d\}$. If $\{D_i\}_{1\leqslant i\leqslant d}$ denotes the dual system for $\{A_i\}_{1\leqslant i\leqslant d}$, then $\{E_i\}_{1\leqslant i\leqslant d}$ is the dual system for $\{C_i\}_{1\leqslant i\leqslant d}$, where for each $i \in \{1,\ldots,d\}$ we set $E_i=\sum_{k=1}^dx_{ik}D_k.$ Indeed, let us check: \begin{align*} [E_i, C_j] &= \sum_{k=1}^d \sum_{l=1}^d x_{ik} x_{jl} [D_k, A_l] = \sum_{k=1}^d \sum_{l=1}^d x_{ik} x_{jl} \langle\bar{e}_l, e_k\rangle_U P_{\bc \Omega}\\&= \langle\overline{ \sum_{l=1}^d x_{jl}e_l}, \sum_{k=1}^dx_{ik} e_k\rangle_U P_{\bc \Omega} = \langle\bar{f}_j, f_i\rangle_U P_{\bc \Omega}. \end{align*} \end{proof} \subsection*{Dual variables} Fix then $\{e_1,\ldots, e_d\}$, an orthonormal set in $\mathsf{H}$ with respect to the undeformed scalar product and as before let $A_i=W(e_i)$. For $i, j \in \{1,\ldots,d\}$ set $B_{ij}=\langle \overline{e}_i,e_j\rangle_U.$ Denote by $[d]^*$ the set of words in letters from the alphabet $\{1,\ldots,d\}$ and for any word $w=j_n\ldots j_1\in [d]^*$, define $e_{j_n\ldots j_1}=e_{j_n}\otimes\cdots\otimes e_{j_1}$. 
One notes that $W(\xi)=l_{\bar{\xi}}+l_{{\xi}}^*$ for any $\xi\in \mathsf{H}$, where $l_\xi^*$ is the creation operator; this is a very easy instance of the general Wick product formula (see for example \cite[Proposition 2.12]{ABW}). Hence we have $A_i(e_{j_n\ldots j_1})=e_{ij_n\ldots j_1}+\sum_{k=1}^nq^{n-k}B_{ij_k} e_{j_{n}\ldots \hat{j}_k\ldots j_1}$, where $\hat{j}_k$ means to omit $j_k$. The aim is to exploit the results of \cite{Brent}; to that end we want to first define for each $i \in \{1,\ldots,d\}$ operators $D_i: \FockqH_{\textup{alg}} \to \FockqH_{\textup{alg}}$ such that \[ D_i \Omega = 0, \;\;\; [D_i, A_j] = B_{ji} P_{\bc \Omega}, \;\;\; j \in \{1,\ldots,d\}.\] We use below the notation of \cite[Section 4]{MS}, both in the formulation and in the proofs; in particular $B(n+1)$ appearing in the following lemma denotes a collection of partitions introduced after \cite[Example 4.3]{MS}. We will just note the places where the arguments need to be extended or modified. \begin{lem} The algebraic formula for the dual variables is given as follows ($i \in \{1,\ldots,d\}$, $n \in \Bbb N$, $j_1, \ldots,j_n \in \{1,\ldots,d\}$): \[D_i (e_{j_n} \ldots e_{j_1}) = \sum_{\pi \in B(n+1)} (-1)^{{\pi(0)-1}} q^{\textup{cross}(\pi)} \delta_{p(\pi)}^B e_{s(\pi)}, \] where $\delta_{p(\pi)}^B:= \prod_{\underset{{l>m}}{(l,m)}\in \pi} B_{j_l,j_m}$. \end{lem} \begin{proof} Check first that \[ [D_i, A_j] \Omega =D_i e_j = \sum_{\pi \in B(2)} (-1)^{\pi(0)-1} q^{\textup{cross}(\pi)} \delta_{p(\pi)}^B e_{s(\pi)} = B_{ji} \Omega.\] Then we compute \begin{align*} D_i A_{j_{n+1}} (e_{j_n} \ldots e_{j_1}) =& D_i \left(e_{j_{n+1} \ldots j_1} + \sum_{l=1}^n q^{n-l} B_{j_{n+1}, j_l} e_{j_{n} \ldots \hat{j_l} \ldots j_1} \right) \\&= D_i e_{j_{n+1} \ldots j_1} + \sum_{l=1}^n \sum_{\sigma \in B(n)} (-1)^{\sigma(0)-1} q^{\textup{cross}(\sigma)+n-l} \delta_{p(\sigma)}^B B_{j_{n+1},j_l} e_{s(\sigma)} \end{align*} and \begin{align*} A_{j_{n+1}} D_i(e_{j_n} \ldots e_{j_1}) =& \sum_{\pi \in B(n+1)} (-1)^{\pi(0)-1} q^{\textup{cross}(\pi)} \delta_{p(\pi)}^B e_{j_{n+1}s(\pi)} \\ &+ \sum_{\pi \in B(n+1)} \sum_{k=1}^{|s(\pi)|} (-1)^{\pi(0)-1} q^{\textup{cross}(\pi)+ |s(\pi)|-k} \delta_{p(\pi)}^B B_{j_{n+1}, j_{s(\pi)_k}} e_{s(\pi)\setminus s(\pi)_k}. \end{align*} In the first step of the proof of \cite[Proposition 4.5]{MS} all terms in the last factor of the second sum are identified with some terms in the last factor of the first sum, by taking a pair $(\pi,k)$ and setting $\sigma \in B(n)$ by removing the singleton $s(\pi)_k$ and putting $l =s(\pi)_k$. Then we just have to observe that \[ \delta_{p(\sigma)}^B B_{j_{n+1},j_l} e_{s(\sigma)} = \delta_{p(\pi)}^B B_{j_{n+1}, j_{s(\pi)_k}} e_{s(\pi)\setminus s(\pi)_k} \] (and compute the crossings exactly as in \cite{MS}). After the subtraction one is left in the first sum with the following terms: \[D_i e_{j_{n+1} \ldots j_1} + \sum_{l=1}^n \sum_{\sigma \in B(n): \sigma(0)\geqslant l} (-1)^{\sigma(0)-1} q^{\textup{cross}(\sigma)+n-l} \delta_{p(\sigma)}^B B_{j_{n+1},j_l} e_{s(\sigma)}\] Now to each pair $(\sigma,l)$ as above we associate $\sigma' \in B(n+2)$ by inserting a `new point' at $l$ and pairing it with $n+1$.
Thus the last expression simplifies to (after counting the crossings as in \cite{MS}) \[D_i e_{j_{n+1} \ldots j_1} - \sum_{\sigma' \in B(n+2): \sigma'(n+1) \textup{ not a singleton}} (-1)^{\sigma'(0)-1} q^{\textup{cross}(\sigma')} \delta_{p(\sigma')}^B e_{s(\sigma')};\] note that singletons do not change under this procedure, and $\delta_{p(\sigma)}^B B_{j_{n+1},j_l} = \delta_{p(\sigma')}^B$. The rest of the argument is just collecting the terms. \end{proof} The following is the main result of this Section. \begin{prop} \label{prop:conjugate} For each $i \in \{1,\ldots,d\}$ we have $e_0:=\Omega \in \textup{Dom } D_i^*$. Thus $(D_{1}^{\ast}e_0,\dots, D_{d}^{\ast}e_0)$ forms a set of conjugate variables for $(A_1,\dots, A_d)$. \end{prop} \begin{proof} As in \cite[Theorem 4.6]{MS} the proof amounts to studying the expression of the form \[ \langle e_0, D_i (\sum_{w \in [d]^*} \alpha_w e_w) \rangle_{\mathcal{F}_q(\mathsf{H}_U)}, \] where the sum is finite (but arbitrary). It is easy to see that it coincides with \[(*):= \sum_{m=1}^\infty \sum_{\;\;\pi \in B(2m), \pi(0)=m\;\;} \sum_{|w|=2m-1} \alpha_w (-1)^{m-1} q^{\textup{cross}(\pi)} \delta_{p(\pi), w}^B \] where $\delta_{p(\pi),w}^B$ is written for $\delta _{p(\pi)}^B$ in order to make the dependency on $w$ explicit. Fix for the moment $m\geqslant 1$ and do two things at once: first rewrite each word $w$ of length $2m-1$ as $vjw'$ with $v,w'$ words of length $m-1$ and $j \in \{1,\ldots,d\}$, and second identify each $\pi \in B(2m)$, $\pi(0)=m$ with a permutation $\pi' \in S_{m-1}$ (exactly as in \cite[Theorem 4.6]{MS}). We then have \begin{align*} \sum_{\pi \in B(2m), \pi(0)=m\;\;} & \sum_{|w|=2m-1} \alpha_w (-1)^{m-1} q^{\textup{cross}(\pi)} \delta_{p(\pi), w}^B \\&= (-1)^{m-1} q^{\frac{m(m-1)}{2}} \sum_{\pi' \in S_{m-1}\;\;} \sum_{|v|=m-1} \sum_{j=1}^d \sum_{|w'|=m-1} \alpha_{vjw'} q^{\textup{inv}(\pi')} B_{ji} \delta_{\pi'(v), w}^B, \end{align*} where $\delta_{\rho(v), w}^B = \prod_{l=1}^{m-1} B_{v_{\rho(l)} w_l}=\langle \overline{e}_{\rho(v)}, e_w\rangle_{\mathsf{H}_U^{\otimes |w|}}$ and $\textup{inv}(\pi')$ denotes the number of inversions of the permutation $\pi'\in S_{m-1}$. Further \begin{align*} (-1)^{m-1} q^{\frac{m(m-1)}{2}} \sum_{\pi' \in S_{m-1}\;\;}& \sum_{|v|=m-1} \sum_{j=1}^d \sum_{|w'|=m-1} \alpha_{vjw'} q^{\textup{inv}(\pi')} B_{ji} \delta_{\pi'(v), w}^B \\&= (-1)^{m-1} q^{\frac{m(m-1)}{2}} \sum_{|w'|=m-1} \sum_{|v|=m-1} \sum_{j=1}^d B_{ji} \alpha_{vjw'} \langle \overline{e}_v, e_{w'}\rangle_{\mathcal{F}_q(\Hil_U)} \\&= (-1)^{m-1} q^{\frac{m(m-1)}{2}} \sum_{|v|=m-1} \langle \overline{e}_v, \sum_{|w'|=m-1} \sum_{j=1}^d B_{ji} \alpha_{vjw'} e_{w'} \rangle_{\mathcal{F}_q(\Hil_U)}. \end{align*} So now we fix $v$ of length $m-1$ and look at the vector $\sum_{|w'|=m-1} \sum_{j=1}^d B_{ij} \alpha_{vjw'} e_{w'}$. We note that this is nothing but $\sum_{j=1}^d B_{ij} \tilde{L}_{vj} \left(\sum_{|w'|=m-1} \alpha_{vjw'} e_{vjw'}\right)$, where $\tilde{L}_{vj}$ denotes the composition of the $m$ relevant undeformed free annihilation operators (acting on $\FockqH_{\textup{alg}}$) of the form $\tilde{L}_{e_k}$ for $k \in \{1,\ldots,d\}$, whose action on $\FockqH_{\textup{alg}}$ is given simply by \[\tilde{L}_{e_k} (\xi_1 \otimes \cdots \otimes\xi_n) = \langle e_k, \xi_1 \rangle \xi_2 \otimes \cdots \otimes\xi_n. 
\] Naturally we also have \[ \sum_{j=1}^d B_{ij} {\tilde L}_{vj} \left( \sum_{|w'|=m-1} \alpha_{vjw'} e_{vjw'} \right)= \sum_{j=1}^d B_{ij} \tilde{L}_{vj} \left(\sum_{|w''|=2m-1} \alpha_{w''} e_{w''}\right)\] where we have used the fact that the set $\{e_1,\ldots,e_d\}$ in $\mathsf{H}$ is orthonormal in the undeformed scalar product. Set $T_{i,v}: \FockqH_{\textup{alg}} \to \FockqH_{\textup{alg}}$, $T_{i,v}:= \sum_{j=1}^d B_{ij} \tilde{L}_{vj}$. We need to argue that $T_{i,v}$ is bounded (and estimate its norm). Consider then a free left `undeformed' annihilation operator $\tilde{L}(\xi)$ for $\xi \in \mathsf{H}$. Then for any $\eta \in \mathsf{H}$ we have \[ \langle \xi, \eta \rangle = \left\langle \frac{2A}{1+A} \left(\frac{2A}{1+A}\right)^{-1} \xi, \eta \right\rangle = \left \langle \left(\frac{2A}{1+A}\right)^{-1} \xi, \eta \right\rangle_U ,\] so that setting $\tilde{\xi} = (\frac{2A}{1+A})^{-1} \xi$ we see that $\tilde{L}_\xi = L_{\tilde{\xi}}$, where $L_{\tilde{\xi}}$ denotes the free left annihilation operator on $\FockqH_{\textup{alg}}$ i.e. $L_{\tilde{\xi}}(\xi_1\otimes\cdots\otimes\xi_n)=\langle \tilde{\xi},\xi_1\rangle_{U}\xi_2\otimes\cdots\otimes\xi_n$. By \cite[Lemma 2.2]{MS} (and linearity) we have $\|L_{\tilde{\xi}}\|_{B(\mathcal{F}_q(\Hil_U))} \leqslant C\|\tilde{\xi}\|_{\mathsf{H}_U}$ (where $C>0$ depends only on $q$). But \[ \|\tilde{\xi}\|_{\mathsf{H}_U}^2 = \langle \tilde{\xi}, \tilde{\xi} \rangle_U = \langle \xi, (\frac{2A}{1+A})^{-1} \xi\rangle \leqslant D^2 \|\xi\|_{\mathsf{H}}^2,\] where $D:= \|(\frac{2A}{1+A})^{-1} \|^{\frac{1}{2}}$. Thus finally for each $j\in \{1,\ldots,d\}$ we have $\|\tilde{L}_{e_j}\|_{B(\mathcal{F}_q(\Hil_U))} \leqslant CD$, and setting $B:=\max_{i,j \in \{1,\ldots,d\}} |B_{ij}|$, we obtain for each $v \in [d]^*$, $|v|=m-1$, \[\|T_{i,v} \|_{B(\mathcal{F}_q(\Hil_U))} \leqslant d B (CD)^m.\] Then we obtain the following: \[(*) \leqslant \sum_{m=1}^\infty q^{\frac{m(m-1)}{2}} \sum_{|v|=m-1} \|\overline{e}_v\|_{\mathcal{F}_q(\Hil_U)} \|T_{i,v}\| \|\sum_{w \in [d]^*} \alpha_{w} e_{w}\|_{\mathcal{F}_q(\Hil_U)}. \] It is easy to check (as in \cite{MS}) that if we set $E = \max_{i,j \in \{1,\ldots,d\}} |\langle \overline{e}_i, \overline{e}_j \rangle_U|$ then we have for each $v\in [d]^*$ of length $k$ the estimate \[ \|\overline{e}_v\|^2_{\mathcal{F}_q(\Hil_U)} \leqslant E^k [k]_{|q|}!. \] The rest is just gathering the estimates: \[ (*) \leqslant \left( \sum_{m=1}^\infty q^{\frac{m(m-1)}{2}} d^{m-1} E^{\frac{m-1}{2}} \sqrt{[m-1]_{|q|}!} d B (CD)^m \right) \|\sum_{w \in [d]^*} \alpha_{w} e_{w}\|_{\mathcal{F}_q(\Hil_U)}, \] and noting that the series inside the brackets converges. \end{proof} In terminology of \cite{Brent} the last proposition can be rephrased as saying that the set $\{A_1,\dots, A_d\}$, which generates $\mathsf{M}$, has \emph{finite free Fisher information}. Together with Lemma \ref{lem:basechange} this yields the following corollary. \begin{cor}\label{cor:freeFisher} Let $\mathsf{H}_\Bbb R$ be a finite-dimensional real Hilbert space equipped with an orthogonal group $(U_t)_{t \in \Bbb R}$. The algebra $\Gamma_q(\mathsf{H}_\Bbb R, U_t)$ equipped with the canonical state $\varphi$ is generated by a finite set $G=G^*$ of eigenoperators of the modular group of $\varphi$ with finite free Fisher information. 
\end{cor} \begin{proof} By \cite[Proof of Theorem 2.2]{Hiai} we can choose a set of linearly independent vectors $(\xi_1, \ldots, \xi_d)$ in $\mathsf{H}$ such that $W(\xi_1), \ldots, W(\xi_d)$ form a self-adjoint set of eigenoperators of the modular group of $\varphi$. Lemma \ref{lem:basechange} and Proposition \ref{prop:conjugate} imply that $\{W(\xi_1), \ldots, W(\xi_d)\}$ has finite free Fisher information (see \cite[Remark 3.13]{Brent}). \end{proof} \section{Consequences for structure of $q$-Araki-Woods von Neumann algebras} \label{sec:main} We begin by quoting the main results of \cite{Brent} and some facts established in \cite{SW} (see also \cite{BM}). \begin{tw}[\cite{Brent}, Theorem A]\label{Thm:Nelson} Let $\mathsf{M}$ be a von Neumann algebra with a faithful normal state $\varphi$. Suppose $\mathsf{M}$ is generated by a finite set $G=G^{\ast}$, $|G|\geqslant 2$ of eigenoperators of the modular group $\sigma^{\varphi}$ with finite free Fisher information. Then $(\mathsf{M}^{\varphi})^{\prime} \cap \mathsf{M} = \mathbb{C}$. In particular, $\mathsf{M}^{\varphi}$ is a $\mathrm{II}_1$ factor and if $H < \mathbb{R}^{\times}_{\ast}$ is the closed subgroup generated by the eigenvalues of $G$ then $\mathsf{M}$ is a factor of type \[ \left\{\begin{array}{ll} \mathrm{III}_1 & \text{ if } H=\mathbb{R}^{\times}_{\ast} \\ \mathrm{III}_{\lambda} & \text{ if } H= \lambda^{\mathbb{Z}}, 0<\lambda<1 \\ \mathrm{II}_1 & \text{ if } H=\{1\}. \end{array} \right. \] \end{tw} \begin{tw}[\cite{Brent}, Theorem B]\label{Thm:Nelson2} Let $\mathsf{M}$ be a von Neumann algebra with a faithful normal state $\varphi$. Suppose $\mathsf{M}$ is generated by a finite set $G=G^{\ast}$, $|G|\geqslant 2$ of eigenoperators of the modular group $\sigma^{\varphi}$ with finite free Fisher information. Then $\mathsf{M}^{\varphi}$ does not have property $\Gamma$. Furthermore, if $\mathsf{M}$ is a type $\mathrm{III}_{\lambda}$ factor, $0<\lambda<1$, then $\mathsf{M}$ is full. \end{tw} \begin{tw}[\cite{SW}, Lemma 5(2) with its proof, and Theorem 7(1)] \label{Thm:SW} Let $(\mathsf{H}_{\mathbb{R}}, U_t) = (\mathsf{K}_{\mathbb{R}}, U_t^{\prime}) \oplus (\mathsf{L}_{\mathbb{R}}, U_{t}^{\prime\prime})$ be the decomposition into, respectively, the almost periodic and the weakly mixing part. Denote $\mathsf{M}:= \Gamma_q(\mathsf{H}_{\mathbb{R}}, U_t)$ and write $\mathsf{M}_1$ and $\mathsf{M}_2$ for the expected subalgebras corresponding to, respectively, the almost periodic and the weakly mixing parts. Then \begin{enumerate}[{\normalfont (i) }] \item $\mathsf{M}^{\varphi} \subset \mathsf{M}_1$, hence if $x\in \mathsf{M}\cap \mathsf{M}^{\prime}$ then $x \in \mathsf{M}_1$; \item if $(U_t)_{t \in \Bbb R}$ admits a non-zero fixed vector then $\mathsf{M}$ is a factor. \end{enumerate} \end{tw} With Corollary \ref{cor:freeFisher} and these tools in hand we can completely characterize factoriality of $q$-Araki-Woods algebras and establish all the other results listed in the introduction. \begin{tw} \label{thm:factor} Let $(\mathsf{H}_{\mathbb{R}}, U_t)$ be given, with $\dim(\mathsf{H}_{\Bbb R})\geqslant 2$. Then $\mathsf{M}:=\Gamma_q(\mathsf{H}_{\mathbb{R}}, U_t)$ is a factor. Moreover, if $G < \mathbb{R}^{\times}_{\ast}$ is the closed subgroup generated by the spectrum of $A$ then $\mathsf{M}$ is a factor of type \[ \left\{\begin{array}{ll} \mathrm{III}_1 & \text{ if } G=\mathbb{R}^{\times}_{\ast} \\ \mathrm{III}_{\lambda} & \text{ if } G= \lambda^{\mathbb{Z}}, 0<\lambda<1 \\ \mathrm{II}_1 & \text{ if } G=\{1\}. \end{array} \right. 
\] \end{tw} \begin{proof} By Theorem \ref{Thm:SW} the center of $\mathsf{M}$ is contained in the almost periodic part, so we may assume it is nontrivial. If it is one dimensional, then it necessarily contains a non-zero $U_t$-invariant vector, so this case is covered by Theorem \ref{Thm:SW} as well. We can therefore assume that we are in the almost periodic case with $\dim(\mathsf{H}_{\Bbb R})\geqslant 2$. If $\mathsf{H}_{\Bbb R}$ is infinite dimensional then factoriality has been obtained by Hiai (\cite[Theorem 3.2]{Hiai}, which in fact omits certain cases; see \cite[Theorem 4.3]{BMRW} for a complete result). In the finite dimensional case we can use Corollary \ref{cor:freeFisher} and Theorem \ref{Thm:Nelson}. Note that the infinite-dimensional almost periodic case can be also deduced from the finite-dimensional one via the inductive limit argument. If $(U_t)_{t \in \Bbb R}$ is almost periodic then the centralizer $\mathsf{M}^{\varphi}$ is irreducible in $\mathsf{M}$ (as follows from Corollary \ref{cor:freeFisher} and Theorem \ref{Thm:Nelson} in finite dimensions and \cite[Theorem 3.2]{Hiai}, \cite[Theorem 5.1]{BMRW} in the infinite dimensional case). Therefore in this case the type classification can be simply obtained from the spectral data of $A$ as in the statement (see \cite[Section 1]{Hiai}). On the other hand, if there is a non-trivial weakly mixing part, \cite[Theorem 8.1]{BM} implies that $\mathsf{M}$ is a $\mathrm{III}_1$ factor (see also \cite[Theorem 3.4]{Hiai} for an earlier result in the purely weakly mixing case). \end{proof} \begin{tw} \label{thm:noninj} The factor $\Gamma_q(\mathsf{H}_{\Bbb R}, U_t)$ is not injective as soon as $\dim(\mathsf{H}_{\Bbb R}) \geqslant 2$. \end{tw} \begin{proof} To prove non-injectivity, we will find an expected non-injective subalgebra; note that the case where the weakly mixing part is non-trivial has already been covered in \cite[Theorem 3.4]{Hiai}, where it was proved that in the purely weakly mixing case $\Gamma_q(\mathsf{H}_{\Bbb R}, U_t)$ is a non-injective factor. We therefore assume that we are in the almost periodic case. It means that either we will find a two dimensional subspace on which $(U_t)_{t \in \Bbb R}$ is trivial, or a two dimensional subspace on which $(U_t)_{t \in \Bbb R}$ is ergodic. In both cases the corresponding $q$-Araki-Woods algebra will be non-injective and with expectation, from which we will be able to conclude. In the former we are just dealing with a $q$-Gaussian algebra, which was covered in \cite[Theorem 2]{Nou}. In the latter we can conclude from Corollary \ref{cor:freeFisher} and Theorem \ref{Thm:Nelson2} that we have a type $\mathrm{III}_{\lambda}$ full subfactor, and fullness implies non-injectivity. \end{proof} We saw above that the $q$-Araki Woods factor is full when it is of type III$_\lambda$, $0<\lambda<1$ and dimension of $\mathsf{H}_\Bbb R$ is finite. We now establish fullness in the remaining type III$_1$ finite-dimensional case as well. We also establish solidity of such factors (see \cite{Oza} for the original definition for finite von Neumann algebras and \cite{HR} for the modification needed in the general case). \begin{tw} \label{thm:full} Let $(\mathsf{H}_\Bbb R, U_t)$ be given with $2\leqslant\dim \mathsf{H}_\Bbb R<\infty$. Then $\mathsf{M}:=\Gamma_q(\mathsf{H}_\Bbb R, U_t)$ is solid and full. \end{tw} \begin{proof} If $(U_t)_{t \in \Bbb R}$ is trivial, then $\mathsf{M}$ is a $q$-Gaussian algebra and its fullness is proved in \cite{MS}. 
If $\mathsf{M}$ is of type III$_\lambda$, $0<\lambda<1$, the statement about fullness follows from Corollary \ref{cor:freeFisher} and Theorem \ref{Thm:Nelson2}. It follows from \cite[Theorem 1.2]{Kuz} that the Cuntz-Toeplitz algebra $T_q(\mathsf{H}) \subset B(\mathcal{F}_q(\Hil_U))$, i.e.\ the $C^*$-algebra generated by the left creation operators $\{l^{\ast}_{\xi}: \xi \in \mathsf{H}\}$, is nuclear, as an extension of the Cuntz algebra by the compacts. Arguing exactly as in \cite[Section 4]{Dima2} we can thus deduce that $\mathsf{M}$ satisfies the Akemann-Ostrand (AO) property. This further implies by \cite[Theorem A]{HR} that $\mathsf{M}$ is $\omega$-solid, where $\omega$ denotes a fixed non-principal ultrafilter (see the definition of $\omega$-solidity in \cite[Section 1]{HR}), and hence solid. Theorems \ref{Thm:Nelson} and \ref{Thm:Nelson2} together with Corollary \ref{cor:freeFisher} imply that the centralizer of $\mathsf{M}$ with respect to the canonical state is a non-injective II$_1$ factor. We can thus invoke \cite[Proposition 3.10]{HR} to conclude fullness when $\mathsf{M}$ is a type III$_1$ factor. \end{proof} \begin{rem} By \cite[Theorem 6.2]{HI} $q$-Araki-Woods factors are full if $(U_t)_{t \in \Bbb R}$ has a weakly mixing part. The same theorem also covers some almost periodic examples, but if the eigenvalues of the generator of $(U_t)_{t \in \Bbb R}$ grow sufficiently fast then fullness remains an open problem. \end{rem} \smallskip \noindent {\bf Acknowledgments. } A.S.\ was partially supported by the National Science Center (NCN) grant no. 2020/39/I/ST1/01566. M.W.\ was partially supported by the National Science Center (NCN) grant no. 2021/43/D/ST1/01446. The project is co-financed by the Polish National Agency for Academic Exchange within the Polish Returns Programme. We are grateful to Cyril Houdayer, Kunal Mukherjee and Simeng Wang for their comments on the first draft of this note. \vspace{5 pt} \includegraphics[scale=0.5]{logoNAWA.png}
\section{Introduction}\label{S:intro} Understanding massive star formation remains one of the most challenging and important problems of contemporary astrophysics (Beuther et al. 2007; Zinnecker \& Yorke 2007). The complexity of the process means that massive star formation theories, such as the turbulent core model (McKee \& Tan 2003), the competitive accretion model (Bonnell \& Bate 2006) and stellar coalescence model (Bonnell et al. 1998; Clarke \& Bonnell 2008) require close testing against observed systems. The closest forming (i.e. accreting) massive star is thought to be radio source I (Menten \& Reid 1995) within the Orion Nebula Cluster (ONC), at a distance of $414\pm7$~pc (Menten et al. 2007, adopted throughout), in the Kleinmann-Low (KL) region. As reviewed by Tan (2008), this source has been used as observational evidence in support of all three of the above theories. Part of this confusion is due to the Becklin-Neugebauer (BN) object, 9.9\arcsec to the NW (Fig.~1), which is a fast moving (radio-ONC-frame proper motion of $\mu_{\rm BN}=13.2\pm 1.1\:{\rm mas\:yr^{-1}}$, i.e. $v_{\rm 2D,BN}=25.9\pm2.2\:{\rm km\:s^{-1}}$ towards P.A.$_{\rm BN}=-27^\circ.5\pm4^\circ$, Plambeck et al. 1995; G\'omez et al. 2008) embedded B star ($L_{\rm BN}=(2.1 - 8.5)\times 10^3L_\odot$, Gezari, Backman \& Werner 1998, equivalent to a zero age main sequence mass $m_{\rm BN,zams} = 9.3\pm2.0{M_\odot}$). This proper motion implies that BN has been moving through the KL region and made a close, possibly coincident, passage with source {\it I} about 500 years ago. Thus to understand the nearest example of massive star formation, we need to understand the origin of BN's motion. Including the $(+21) - (+8) = +13\:{\rm km\:s^{-1}}$ radial velocity of BN with respect to the ONC mean (Scoville et al. 1983; Walker 1983), the 3D ONC-frame velocity of BN is $v_{\rm 3D,BN}=29\pm3\:{\rm km\:s^{-1}}$, and its kinetic energy is $E_{\rm BN} = (8.3\pm2.3)\times 10^{46} (m_{\rm BN}/10{M_\odot})\:{\rm ergs}$. BN is very likely to have formed somewhere in the ONC and then attained its high speed by a close interaction with a massive multiple stellar system followed by dynamical ejection (Poveda, Ruiz \& Allen 1967). Tan (2004) proposed BN was launched from the $\theta^1$~Ori~C binary (also shown in Fig.~\ref{fig:bn}), since this is the only stellar system in the ONC known to have all the physical properties required by this scenario: (1) a location along BN's past trajectory (\S\ref{S:ast}); (2) an (optical)-ONC-frame proper motion ($\mu_{\theta^1C}=2.3\pm0.2\:{\rm mas\:yr^{-1}}$, van Altena et al. 1988, i.e. $v_{\rm 2D,\theta^1C} = 4.5\pm0.4\:{\rm km\:s^{-1}}$, towards $\rm P.A._{\theta^1C}=142^\circ.4\pm4^\circ$) that is in the opposite direction to BN (the direction to BN from $\theta^1$~Ori~C is a P.A.$=-30^\circ.949$) and is of the appropriate magnitude (the dynamical mass of BN implied by this motion agrees with the estimate of $m_{\rm BN,zams}$ and is $m_{\rm BN,dyn}=8.6\pm1.0{M_\odot}$ assuming negligible error in $m_{\theta^1C}=49.5{M_\odot}$ and negligible motion of the pre-ejection triple system in this direction; a pre-ejection motion of 0.35~mas/yr along this axis (\S\ref{S:high}) would contribute an additional $1.5{M_\odot}$ uncertainty); (3) primary ($m_{\theta^1C-1}=34{M_\odot}$) and secondary ($m_{\theta^1C-2}=15.5{M_\odot}$) masses greater than $m_{\rm BN}$ (Kraus et al. 2007); (4) a semi-major axis of $a=17.0\pm5.8$~AU (Patience et al. 
2008) and thus a total orbital energy ($E_{\rm tot}=Gm_{\theta^1C-1}m_{\theta^1C-2}/(2a)= (2.7\pm0.9)\times 10^{47}\:{\rm ergs}$) greater than the sum of BN's kinetic energy and $\theta^1$~Ori~C's kinetic energy ($1.00\times 10^{46}\:{\rm ergs}$) (see Tan 2008 for a review). Note, $\theta^1$~Ori~C's recoil in this scenario is large enough to remove it from the Trapezium region (see Pflamm-Altenburg \& Kroupa 2006 for theoretical studies of the dynamical decay of Trapezium-like systems) and may be enough to eject it from the ONC completely, with implications for the effectiveness of its ionizing feedback on disrupting the star cluster formation process. Rodr\'iguez et al. (2005) and Bally \& Zinnecker (2005) proposed that BN was launched from an interaction with radio source {\it I}, which would require this system to be a massive binary, recoiling away from any large scale ($\gtrsim 100$~AU) gas that it was originally accreting. G\'omez et al. (2008) used the relative motion of BN with respect to source {\it I} to claim that BN could not have made a close passage with $\theta^1$~Ori~C, excluding this possibility at the 5-10~$\sigma$ level. We show in \S\ref{S:ast} that if BN's motion is considered in the reference frame of the ONC, then a close (coincident) passage with $\theta^1$~Ori~C is allowed by the data, which permits the scenario of dynamical ejection of BN from $\theta^1$~Ori~C. In \S\ref{S:high} we discuss the potential of future high precision astrometric measurements to constrain the properties of BN's dynamical ejection, which then constrain BN's interaction distance with source {\it I}, the mass of source {\it I}, and thus the strength of tidal perturbations on the massive protostar during this encounter. \vspace{0.2in} \section{Astrometry of BN in the Orion Nebula Cluster}\label{S:ast} \begin{figure}[h] \begin{center} \epsfig{ file=f1.eps, angle=0, width=\figwidth } \end{center} \caption{ \label{fig:bn} This diagram shows the positions of the Trapezium stars $\theta^1$~Ori~A, $\theta^1$~Ori~B, $\theta^1$~Ori~C and $\theta^1$~Ori~D that make up the core of the ONC. The positions of radio sources I and BN are also shown. The coordinates are relative to the present position of source I ($\alpha$(J2000)=05 35 14.5141, $\delta$(J2000)=-05 22 30.556) (Gomez et al. 2008). The proper motions relative to the cluster of BN (Gomez et al. 2008) and $\theta^1$~Ori~C (van Altena et al. 1988) are indicated with the arrows. Past trajectories (dashed line) and $1\sigma$ uncertainties (dotted lines) are drawn. } \end{figure} To determine BN's past trajectory through the ONC we use the absolute proper motion of BN ($\mu_\alpha {\rm cos}\delta = -5.3\pm0.9 {\rm mas\:yr^{-1}}$, $\mu_\delta=9.4\pm1.1 {\rm mas\:yr^{-1}}$ ($1\sigma$ errors); G\'omez et al. 2008) and then correct for the motion of the ONC (mean of 35 radio sources within central 0.1~pc of ONC: $\mu_\alpha {\rm cos}\delta = +0.8\pm0.2 {\rm mas\:yr^{-1}}$, $\mu_\delta=-2.3\pm0.2 {\rm mas\:yr^{-1}}$; G\'omez et al. 2005). The ONC-frame proper motions are shown in Fig.~\ref{fig:bn}. One sees that the past trajectory of BN through the ONC overlaps within the $1\sigma$ errors with the present position of $\theta^1$~Ori~C. Given the motions of BN and $\theta^1$~Ori~C, the time of coincidence (i.e. when the dynamical ejection took place) was 4530 years ago, i.e. about 174 orbital periods of $\theta^1$~Ori~C (although the orbital period is only poorly constrained at present to $26\pm13$~years, Patience et al. 2008). G\'omez et al.
(2008) excluded a coincidence between BN and $\theta^1$~Ori~C because they used the motion of BN with respect to source {\it I} (which is measured using relative astrometry to greater accuracy so has smaller error bars), but did not allow for the fact that their data indicate that source {\it I} is moving. In the ONC frame this motion is claimed to be $\mu_\alpha {\rm cos}\delta = -3.7\pm1.2 {\rm mas\:yr^{-1}}$, $\mu_\delta=-3.4\pm1.3 {\rm mas\:yr^{-1}}$, corresponding to $\mu_{\rm I}=5.0\pm1.3\:{\rm mas\:yr^{-1}}$ (i.e. $9.9\pm2.6\:{\rm km\:s^{-1}}$) towards a P.A.$=+133^\circ\pm16^\circ$. We note, as a separate point, that source {\it I} is elongated along the NW-SE axis, i.e. towards P.A.$\simeq+135^\circ$ (Reid et al. 2007). If the source exhibits variability affecting the location of the centroid of its emission, then this could lead to an apparent, but false, proper motion. This effect is a potential source of additional uncertainty in the motion reported for source {\it I} (and for source {\it n}) by G\'omez et al. (2008). Source {\it I} is thought to be a massive protostar and a large proper motion would be interesting for theories of massive star formation. F\~ur\'esz et al. (2008) measured the distribution of radial velocities in the ONC, finding it could be well fit by a Gaussian with $\sigma_{1D}=3.1\:{\rm km\:s^{-1}}$, for both the entire cluster and for stars within a 15\arcmin\ radius of the Trapezium. Assuming an isotropic velocity distribution, the proper motions should exhibit a Gaussian distribution of motions with $\sigma_{2D}=4.4\kms$. In comparison, source {\it I}'s claimed motion of $9.9\pm2.6\:{\rm km\:s^{-1}}$ is $(2.3\pm 0.6)\sigma_{2D}$, i.e. not significantly larger than expected for a typical cluster member. Note, Jones \& Walker (1988) found $\sigma_{2D}=2.9\kms$ from direct observation of proper motions (adjusted to $d_{\rm ONC}=414$~pc), for which source {\it I}'s motion would then be $(3.4\pm 0.9)\sigma_{2D}$. G\'omez et al. (2005) found $\sigma_{2D}=7.6\kms$ based on proper motions of 35 radio sources, for which source {\it I}'s motion would then be $(1.3\pm 0.3)\sigma_{2D}$. We conclude, in contrast to G\'omez et al. (2008), that it is premature to claim that source {\it I} has an anomalously large motion compared to other ONC stars. \section{Potential of High Precision Astrometry with SIM}\label{S:high} For wide angle absolute astrometry, SIM should be able to achieve a parallax accuracy of about 5~$\mu$as. Assuming a distance of about 400~pc, this will allow a parallax distance measurement accurate to 0.2\%, i.e. 0.9~pc. Once the motions of the primary and secondary components of $\theta^1$~Ori~C due to their binary orbit are accounted for, then the absolute proper motion of the system should be known to an accuracy of a few $\mu$as/yr. By averaging over many stars, an even greater accuracy should be achievable for the absolute proper motion of the ONC with GAIA. Since $\theta^1$~Ori~C is moving at a few mas/yr in the ONC frame (van Altena et al. 1988), the accuracy of the position angle of the direction of motion would be $\sim0.06^\circ$. Presently it is only known to about 4$^\circ$. If, as seems very likely, BN was ejected from $\theta^1$~Ori~C, it should have been ejected in exactly the opposite direction to $\theta^1$~Ori~C's motion as measured in the center of mass frame of the pre-ejection triple system.
Comparison of the ONC-frame motion of $\theta^1$~Ori~C with the present position and ONC-frame motion of BN will yield information on the motion of the pre-ejection triple system and any accelerations experienced by the stars since ejection. The expected size of the pre-ejection triple system's proper motion is uncertain. If the system (with total mass $\simeq 60{M_\odot}$) was in kinetic energy equilibrium with the other ONC stars (with, say, typical mass $1.0{M_\odot}$ and $\sigma_{2D}=4.0\kms$), then we would expect it to have a plane-of-sky motion of $\sim 0.52\kms$, equivalent to a proper motion of 0.26~mas/yr. The observed proper motion dispersion of bright ($V\lesssim12.5$), i.e. massive, stars is $0.70\pm0.06$~mas/yr (van Altena et al. 1988). Assuming a 0.5~mas/yr proper motion for the pre-ejection triple system, of which 0.35~mas/yr would be expected to be tangential to the ejection axis, implies that the ONC-frame proper motion vectors of $\theta^1$~Ori~C and BN would be misaligned by $10^\circ$ from direct opposition. The current observed misalignment is $10^\circ\pm6^\circ$. Thus, in the limit that subsequent accelerations are negligible, high precision ONC-frame proper motions of $\theta^1$~Ori~C and BN (the latter expected from continued radio observations) can constrain the motion of the pre-ejection triple system. The expected gravitational accelerations of $\theta^1$~Ori~C and BN depend on the distribution of mass in their surroundings. Their trajectories are taking them away from the ONC center, so they will be experiencing a deceleration associated with climbing out of the cluster potential. This effect is largest for BN, but it is still small. BN has moved 0.12~pc (projected) from the ejection site, and if the enclosed mass is 500~${M_\odot}$ (likely to be a conservative upper limit, e.g. Hillenbrand \& Hartmann 1998), then for a starting velocity of $30\kms$, it would have decelerated by only 0.6~$\kms$. Close passages with individual stars can also cause more significant accelerations. $\theta^1$~Ori~C's trajectory may have brought it into relatively close proximity with $\theta^1$~Ori~A (a B0 star, i.e. $16{M_\odot}$, 13\arcsec\ to the NW, with a visual companion at 100~AU of $4{M_\odot}$ and a spectroscopic companion at $\sim 1$~AU of $\sim 3{M_\odot}$, Schertl et al. 2003). However, the relative motion of these stars is only about 1.2~mas~$\rm yr^{-1}$ (van Altena et al. 1988) so that the time of closest approach would have been about $10^4$~yr ago, long before the proposed interaction of $\theta^1$~Ori~C with BN. More importantly, BN made a close passage to source {\it I} about 500 years ago. From the bolometric luminosity of the KL region, source {\it I} is estimated to have a protostellar mass of about $20\:M_\odot$. As an example of the magnitude of the deflections that can be expected, treating BN as a massless test particle, its deflection angle due to source {\it I} is $2.25^\circ (m_{I,*}/20M_\odot)(b/1000{\rm AU})^{-1}(v_{\rm BN}/30{\rm km\:s^{-1}})^{-2}$, where $b$ is the initial impact parameter and $v_{\rm BN}$ is the velocity of BN relative to source {\it I}. A direct trajectory from $\theta^1$~Ori~C's present position (ideally this would be measured from $\theta^1$~Ori~C's position at the time of ejection) to BN's position has a closest projected separation from source {\it I}'s present position of 1.5\arcsec (about 600~AU).
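For orientation only, if one takes the nominal values above at face value ($m_{I,*} \simeq 20\:M_\odot$, $v_{\rm BN} \simeq 30\:{\rm km\:s^{-1}}$) and an impact parameter comparable to this projected separation, $b \simeq 600$~AU, the above scaling gives a deflection of $\simeq 2.25^\circ \times (1000/600) \approx 3.8^\circ$; this is a rough illustration rather than a prediction, since the true impact parameter and relative velocity are not yet known.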
Thus an accurate astrometric solution of this system presents us with the unique opportunity of constraining the dynamical mass of source {\it I}, the nearest massive protostar, in combination with the true (unprojected) distance of closest approach. The true distance of closest approach is important for evaluating the tidal effects of BN on source {\it I}'s accretion disk, which are likely to have enhanced accretion to the star (Ostriker 1995; Moeckel \& Bally 2006). Such enhanced accretion is likely to have led to enhanced protostellar outflow activity, thus explaining the $\sim 1000$~yr timescale of the ``explosive'' outflow from this region (Allen \& Burton 1993; Tan 2004). \section{Conclusions} We have reviewed the latest evidence that BN was dynamically ejected from the $\theta^1$~Ori~C binary, finding that $\theta^1$~Ori~C has all the physical properties expected in this scenario. We showed that the trajectory of BN is also consistent with this scenario, in contrast to recent claims by G\'omez et al. (2008). We discussed how high precision astrometry of $\theta^1$~Ori~C with SIM can yield information on the pre-ejection velocity of the system and the size of any subsequent deflections, in particular that of BN caused by close passage with source {\it I}, the nearest massive protostar. \acknowledgements JCT acknowledges support from NSF CAREER grant AST-0645412 and a grant from NASA for SIM Science Studies.
\section{Introduction} In this paper, we study spectral properties of the Schr\"odinger operator $$ P(h)=-h^2 \partial_x^2+V(x) $$ defined for $x$ in the half-line $(-\infty, B]$. Here $h>0$ is the semiclassical parameter and $V(x)$ is a piecewise continuous real-valued potential supported in $[0,B]$. The operator $P(h)$ with the Neumann boundary condition at $B$ is self-adjoint on $L^2(-\infty, B)$; therefore, its resolvent $$ R_V(\lambda)=(P(h)-\lambda^2)^{-1},\ \Imag\lambda>0, $$ is a bounded operator from $L^2$ to $H^2$ for $\lambda^2$ not in the spectrum of $P(h)$. This resolvent can be extended meromorphically as an operator $L^2_{\textrm{comp}}\to H^2_{\textrm{loc}}$ to $\lambda\in \mathbb C$ with isolated poles of finite rank; these poles are called \textbf{resonances}. (The reader is referred to \cite{TZ} for details.) To each resonance $\lambda$ corresponds a \textbf{resonant state}; that is, a nonzero $u\in H^2_{\textrm{loc}}(-\infty, B)$ solving the equation $(P(h)-\lambda^2)u=0$ with the Neumann boundary condition at the right endpoint and with the following \textbf{outgoing condition} at $-\infty$: $$ u(x)=Ae^{-{i\lambda x/h}}\text{ for all }x<0\text{ and some constant }A. $$ (Note that for $x<0$, $u$ solves the free equation $(-h^2 \partial_x^2-\lambda^2)u=0$, so it must be a linear combination of $e^{\pm {i\lambda x/h}}$.) For $\Imag\lambda>0$, the outgoing condition implies that $u$ is exponentially decreasing on the negative half-line and thus $u\in L^2$; therefore, $\lambda$ is a pole (of the resolvent) lying in the upper half-plane if and only if $\lambda^2$ is an eigenvalue of $P(h)$ on $L^2$. Since $P(h)$ is self-adjoint, all poles in the upper half-plane have to lie on the imaginary axis. There may be poles $\lambda$ with $\Imag\lambda<0$ and $\Real\lambda\neq 0$; however, we will restrict our attention to purely imaginary resonances: \begin{defi} A positive number $k$ is called a \textbf{bound state} if $ik$ is a pole of the resolvent $R_V(\lambda)$, and an \textbf{antibound state} if $-ik$ is a pole. \end{defi} We see from above that $k$ is an (anti)bound state if and only if there exists a nonzero solution $u$ of the problem \begin{gather} \label{e:equation} (P(h)+k^2)u=0,\\ \label{e:rightcond} u_x|_{x=B}=0,\\ \label{e:leftcondo} hu_x\pm ku|_{x=0}=0. \end{gather} The plus sign in (\ref{e:leftcondo}) corresponds to an antibound state and the minus sign corresponds to a bound state. We will also study Neumann eigenvalues of $P(h)$ on $[0,B]$; i.e., those $k$ for which there exists a nonzero solution to (\ref{e:equation}) with boundary conditions (\ref{e:rightcond}) and \begin{equation}\label{e:leftcondn} u_x|_{x=0}=0. \end{equation} Since the space of solutions to~(\ref{e:equation}) and~(\ref{e:rightcond}) is always one dimensional, \textbf{bound states, antibound states, and Neumann eigenvalues never coincide}. However, Bindel and Zworski proved in \cite{BZ} that bound and antibound states located away from zero coincide modulo errors of order $e^{-\delta/h}$ for some $\delta>0$, if the potential satisfies the following conditions: $$ \begin{gathered} \exists A>0,V_0>0:V(x)=V_0\text{ for all }x\in [0,A],\\ \exists \varepsilon>0:V(x)=0\text{ for all }x\in (A,A+\varepsilon). 
\end{gathered} $$ In this paper, we prove a similar result with more general assumptions on the potential: \begin{theo}\label{l:main} Suppose that $V$ is a piecewise continuous real-valued potential supported in $[0,B]$ and satisfying the following \textbf{bump condition}: \begin{equation}\label{e:bump} \exists A>0: V(x)>0\text{ for all }x\in (0,A]. \end{equation} Fix two constants $0<c_k<C_k<\infty$. Then there exist constants $C,\delta>0,h_0>0$ such that for $h<h_0$ and any $k\in [c_k,C_k]$: 1. If $k$ is a Neumann eigenvalue, then there exist a bound state $k_+$ and an antibound state $k_-$ such that $|k-k_\pm|\leq Ce^{-\delta/h}$. 2. If $k$ is a bound or an antibound state, then there exists a Neumann eigenvalue $k_0$ such that $|k-k_0|\leq Ce^{-\delta/h}$. \end{theo} \begin{figure} \includegraphics{plots.1} \caption{Bound and antibound states for two spline potentials (\texttt{splinepot([0, -0.4, -1, -0.2, -1, -0.4, 0], [-2, -1.5, -1, 0, 1, 1.5, 2])} and \texttt{splinepot([0, 0.2, -1, -0.2, -1, 0.2, 0], [-2, 1.5, -1, 0, 1, 1.5, 2])})} \end{figure} The bump condition (\ref{e:bump}) cannot be disposed of completely, as illustrated by the numerical experiments performed using \cite{B}. Figure~1 shows two potentials on the whole line, each supported in $[-2,2]$, and the corresponding bound states (denoted by squares) and antibound states (denoted by circles). The vertical coordinate of each (anti)bound state on the picture corresponds to its value $k$; the horizontal coordinate corresponds to the value of $h^{-1}$ used. We see that the conclusion of the theorem does not appear to hold for the potential on the left, which does not satisfy the bump condition; at the same time, it does hold for the potential on the right. Theorem~1, formulated for the half-line case, applies to these numerical experiments on the whole line since, for even potentials, the set of (anti)bound states is composed of the corresponding states for the positive half-line problem with the Dirichlet condition together with those for the Neumann condition; the theorem above can be applied with the Dirichlet condition in place of (\ref{e:rightcond}). (However, condition (\ref{e:leftcondn}) cannot be replaced by the Dirichlet condition in the theorem.) The study of resonances in one dimension has a long tradition going back to the origins of quantum mechanics; see for instance~\cite{LL}. One of the first studies of their distribution was conducted by Regge~\cite{R}; since then, there have been many mathematical results on the topic, including~\cite{AA}, \cite{BC}, \cite{F}, \cite{H}, \cite{K}, \cite{N}, \cite{S}, and~\cite{Z}. Concerning antibound states, Hitrik has shown in~\cite{H} that for a positive compactly supported potential, there are no antibound states in the semiclassical limit. This agrees with our result since there are no bound states in this case. Simon proved in~\cite{S} that between any two bound states, there must be an odd number of antibound states; the following corollary of this result follows almost immediately using the methods we develop to prove Theorem~1: \begin{theo}\label{l:simon} Consider the half-line problem with a bounded compactly supported potential $V$ (which does not need to satisfy any positivity condition). Then for any two bound states $0<k_1<k_2$, the interval $(k_1,k_2)$ contains at least one antibound state. In particular, if there are $n$ bound states in some subinterval of $(0,\infty)$, then there are at least $n-1$ antibound states in the same subinterval.
\end{theo} The proof of Theorem~\ref{l:main} works as follows: we study the evolution (in $x$) of the vectors $(u,hu_x)$ for the three solutions of (\ref{e:equation}) with initial data at $x=0$ satisfying the conditions (\ref{e:leftcondo}) and (\ref{e:leftcondn}). The idea is to look at these three vectors at $x=A$. Since $V(x)+k^2\geq 0$ on the interval $(0,A)$, the transition matrix for the considered vectors from $x=0$ to $x=A$ will have an expanding and a contracting direction. (In fact, if we introduce the rescaling $\tilde x=x/h$, then the behavior of the original system for small $h$ is similar to the behavior of the rescaled system for large $\tilde x$, and the latter will be similar to the behavior of the geodesic flow on a two-dimensional manifold of negative curvature.) It turns out that our three vectors lie in a certain angle between the expanding and the contracting directions, from which it follows that they will stay in this angle for later times (Lemma~\ref{l:estimate}); what is more, their polar angles will get exponentially close to each other (Lemma~\ref{l:close}). Finally, we can study how the polar angles of the considered vectors change with $k$ (Lemma~\ref{l:angle}): it follows (Lemma~\ref{l:nondegenerate}) that the polar angle for the solution with Neumann initial data at $x=0$ will strictly increase in $k$ and the polar angle for the solution with the same data at $x=B$ will decrease in $k$. The proof is then completed by a perturbation argument (Lemma~\ref{l:perturb}). The detailed proofs of Theorems~\ref{l:main} and~\ref{l:simon} are given in Section~3. Both are elementary and use certain properties of ordinary differential equations presented in Section~2. The authors would like to thank Maciej Zworski for suggesting the problem and for many illuminating discussions. \section{Preliminaries} Throughout this section, $I$ is an interval in $\mathbb R$, $V(x)\in L^\infty(I;\mathbb R)$, $u(x), v(x)\in H^2(I;\mathbb R)$, $h>0$, and $P(h)=-h^2\partial_x^2+V(x)$. Any solution to the equation $P(h)u=0$ is determined by the vector $(u,hu_x)$ at any $x$; we will sometimes view this vector in polar coordinates: \begin{defi}\label{d:langle} Define the \textbf{length} $L(u)$ and the \textbf{polar angle} $\theta(u)$ by the equations $$ \begin{gathered} u=L(u)\cos\theta(u),\\ hu_x=L(u)\sin\theta(u). \end{gathered} $$ Here $\theta(u)$ lies in the circle $\mathbb S^1=\mathbb R/2\pi\mathbb Z$. \end{defi} \begin{lemm}\label{l:wronskian} Define the \textbf{Wronskian} $W(u,v)$ by $$ W(u,v)=h(uv_x-vu_x). $$ Then \begin{gather} \label{e:wrondef}W(u,v)=L(u)L(v)\sin(\theta(v)-\theta(u)),\\ \label{e:wronskian}h \partial_x W(u,v)=v\cdot P(h)u-u\cdot P(h)v. \end{gather} \end{lemm} Note that $W(u,v)$ is just the oriented area of the parallelogram spanned by the vectors $(u,hu_x)$ and $(v,hv_x)$. The next lemma tells us that if the vector $(u,hu_x)$ falls inside a certain angle in the plane at the initial time, then it will stay inside that angle for all later times: \begin{lemm}\label{l:estimate} Suppose that $a^2\leq V(x)\leq b^2$ for all $x\in I$ and some constants $a,b>0$. Let $u$ be a solution to $P(h)u=0$ and define $$ W_+(u)=W(u,e^{bx/ h}),\ W_-(u)=W(e^{-{ax/h}},u). $$ Let $x_0$ be a point in $I$ and assume that $W_+(u),W_-(u)\geq 0$ at $x_0$. Then for $x\geq x_0$, the functions $W_\pm(u)$ are increasing in $x$ and \begin{equation}\label{e:ugeq} u\geq \frac{L(u)}{\sqrt{1+b^2}}. \end{equation} \end{lemm} \begin{proof} We have $$ e^{-{bx/ h}}W_+(u)=bu-hu_x,\ e^{ax/h}W_-(u)=au+hu_x.
$$ Therefore, $W_+(u),W_-(u)\geq 0$ yields $|hu_x|\leq bu$ and thus (\ref{e:ugeq}). Next, $$ \begin{gathered} P(h)e^{bx/ h}=e^{bx/ h}(V(x)-b^2)\leq 0,\\ P(h)e^{-{ax/ h}}=e^{-{ax/ h}}(V(x)-a^2)\geq 0. \end{gathered} $$ Using (\ref{e:wronskian}), we see that $\partial_xW_\pm\geq 0$ as long as $u\geq 0$. It remains to prove that $u(x)\geq 0$ for $x\geq x_0$. Suppose this is false and let $x_1=\inf\{x\geq x_0\mid u(x)<0\}$. Then $u$ is not identically zero; since it solves a second order linear ODE, $L(u)>0$ everywhere. But $u\geq 0$ on $[x_0,x_1]$, so $W_\pm$ are increasing on this interval. In particular, $W_\pm\geq 0$ at $x_1$ and thus (\ref{e:ugeq}) holds at this point. However, by the choice of $x_1$ we have $u(x_1)=0$, which contradicts $L(u)>0$. \end{proof} In the next section, we will use the following crude estimate on how fast the solutions of an ODE can grow: \begin{lemm}\label{l:crude} Assume that $|V(x)|\leq M$ for $x\in I$ and that $u$ is a solution to $P(h)u=0$. Take $x_0,x_1\in I$; then $$ L(u)|_{x=x_1}\leq e^{(1+M)|x_0-x_1|/(2h)}\cdot L(u)|_{x=x_0}. $$ \end{lemm} \begin{proof} Without loss of generality we may assume that $x_1>x_0$. We have $L(u)^2=u^2+(hu_x)^2$; thus $$ h \partial_x(L(u)^2)=2huu_x(1+V(x))\leq (1+M)L(u)^2 $$ and the lemma is proven by Gronwall's inequality. \end{proof} \begin{lemm}\label{l:angle} Assume that $u(x,k)$ is a family of solutions to $(P(h)+k^2)u=0$, $x_0,x_1\in I$, and $u(x_0,k)$ and $u_x(x_0,k)$ are independent of $k$. Let $\Theta_1(k)=\theta(u(x,k))|_{x=x_1}$, $L_1(k)=L(u(x,k))|_{x=x_1}$. Then $$ \Theta_1'(k)=\frac{2k}{hL_1(k)^2}\int_{x_0}^{x_1}u(x,k)^2\,dx. $$ \end{lemm} \begin{proof} We have $W(u,u_k)|_{x=x_1}=L_1(k)^2\Theta'_1$. (To see that, differentiate the formulas in Definition~\ref{d:langle} in $k$ and use the definition of the Wronskian.) Now, we differentiate the equation $(P(h)+k^2)u=0$ in $k$ to get $(P(h)+k^2)u_k=-2ku$. It remains to apply (\ref{e:wronskian}) together with $W(u,u_k)|_{x=x_0}=0$. \end{proof} \begin{lemm}\label{l:perturb} Assume that $\Phi$ is a $C^1$ map from the interval $I=[K_0,K_1]$ to the circle $S^1=\mathbb R/2\pi \mathbb Z$ and $\Phi'(k)\geq\delta>0$ for all $k\in I$. Suppose that $\Psi:I\to \mathbb S^1$ is a continuous map such that $|\Psi(k)|\leq\varepsilon<\pi$ for all $k$. Put $\nu=\varepsilon/\delta$ and $I_\nu=[K_0+\nu,K_1-\nu]$. Then: 1. If $k_0\in I_\nu$ has $\Phi(k_0)=0$, then there exists $k_1\in I$ with $\Phi(k_1)=\Psi(k_1)$ and $|k_0-k_1|\leq\nu$. 2. If $k_1\in I_\nu$ has $\Phi(k_1)=\Psi(k_1)$, then there exists $k_0\in I$ with $\Phi(k_0)=0$ and $|k_0-k_1|\leq\nu$. \end{lemm} \begin{proof} We can lift $\Phi$ and $\Psi$ to continuous maps $\bar\Phi,\bar\Psi:I\to \mathbb R$; then $|\bar\Psi|\leq\varepsilon$ and $\bar\Phi(k')-\bar\Phi(k)\geq \delta(k'-k)$ for $k'\geq k$. 1. We have $\bar\Phi(k_0)=2\pi m$ for some $m\in \mathbb Z$. Then $\bar\Phi(k_0+\nu)\geq 2\pi m+\delta\nu\geq 2\pi m+\bar\Psi(k_0+\nu)$ and $\bar\Phi(k_0-\nu)\leq 2\pi m+\bar\Psi(k_0-\nu)$; it remains to apply the intermediate value theorem. 2. Similar to the previous statement, we have $\bar\Phi(k_1)=2\pi m+\bar\Psi(k_1)$ for some $m\in \mathbb Z$ and $\bar\Phi(k_1+\nu)\geq 2\pi m\geq \bar\Phi(k_1-\nu)$. \end{proof} \begin{lemm}\label{l:calc} Assume that $\Phi$ is a $C^1$ map from some interval $I$ to the circle $\mathbb S^1=\mathbb R/(2\pi \mathbb Z)$ with $\Phi'(k)>0$ for all $k\in I$. Let $\Psi:I\to \mathbb S^1$ be a continuous map such that $\Psi(k)\neq 0$ for all $k\in I$. 
If $k_1<k_2$ are two roots of the equation $\Phi=0$, then the interval $(k_1,k_2)$ contains at least one root of the equation $\Phi=\Psi$. \end{lemm} \begin{proof} As in the previous lemma, lift $\Phi$ and $\Psi$ to maps $\bar\Phi,\bar\Psi:I\to \mathbb R$; we can make $0<\bar\Psi(k)<2\pi$ for all $k\in I$. Since $\bar\Phi'>0$ everywhere, we have $\bar\Phi(k_j)=2\pi m_j$, where $m_1<m_2$ are some integers. Therefore, $\bar\Phi(k_1)<2\pi m_1+\bar\Psi(k_1)$ and $\bar\Phi(k_2)>2\pi m_1+\bar\Psi(k_2)$; it remains to apply the intermediate value theorem. \end{proof} \section{Proofs of the theorems} We assume in this section that $0<c'_k\leq k\leq C'_k$ for some constants $c'_k<c_k$ and $C'_k>C_k$; the constants in our estimates will depend on $c'_k$ and $C'_k$. (We need to expand the interval $[c_k,C_k]$ a little bit to be able to apply Lemma~\ref{l:perturb}.) Consider the solutions $u_\pm,u_0,u_1(x,k)$ to the equation (\ref{e:equation}) in $[0,B]$ with the initial data $$ \begin{gathered} u_{\pm 0}(0,k)=1,\ \partial_x u_0(0,k)=0,\ h\partial_xu_\pm(0,k)=\pm k,\\ u_1(B,k)=1,\ \partial_xu_1(B,k)=0. \end{gathered} $$ Define $\Theta_0(k)$, $\Theta_\pm(k)$, and $\Theta_1(k)$ to be the polar angles of vectors $(u,hu_x)$ at $x=A$ for $u=u_0,u_\pm,u_1$. Then $k>0$ is \begin{itemize} \item a Neumann eigenvalue if $u_0$ and $u_1$ are linearly dependent; that is, (recalling that they solve the same second order ODE) if $2(\Theta_0(k)-\Theta_1(k))=0$; \item a bound state if $2(\Theta_+(k)-\Theta_1(k))=0$; \item an antibound state if $2(\Theta_-(k)-\Theta_1(k))=0$. \end{itemize} Here we count angles modulo $2\pi$. To prove Theorem~\ref{l:main}, it suffices to use Lemma~\ref{l:perturb} (for $\Phi=2(\Theta_0-\Theta_1)$ and $\Psi=2(\Theta_0-\Theta_\pm)$) together with the following two facts: \begin{lemm}\label{l:close} For some constants $C_1$ and $\delta_1>0$ independent of $h$ and $k$, $$ |2(\Theta_0(k)-\Theta_\pm(k))|\leq C_1e^{-{\delta_1/ h}}\text{ for all }k\in[c'_k,C'_k]. $$ \end{lemm} \begin{lemm}\label{l:nondegenerate} We have $\Theta_0'(k)-\Theta_1'(k)\geq {1/ C_2}> 0$ for all $k\in [c'_k,C'_k]$ and some constant $C_2$ independent of $h$ and $k$. \end{lemm} We first prove Lemma~\ref{l:close}. Put $b=\max_{[0,A]}V(x)$, $k_b=\sqrt{k^2+b}$, $\psi_0(x)=e^{-{kx/ h}}$, $\psi_+(x)=e^{k_bx/ h}$, and consider the Wronskians $$ W_+(u)=W(u,\psi_+),\ W_0(u)=W(\psi_0,u). $$ These are nonnegative for $u=u_0,u_\pm$ at $x=0$. Then by Lemma~\ref{l:estimate}, all these six functions are nonnegative and increasing in $x$ for $0\leq x\leq A$. Our first goal is to get an exponential lower bound on the length $L(u)$ for $u=u_0,u_\pm$ at $x=A$. For $u_0$, note that by (\ref{e:wrondef}) $$ L(u_0)\geq \frac{W(\psi_0,u_0)}{L(\psi_0)} \geq \frac {W_0(u_0)|_{x=0}}{L(\psi_0)} \geq\frac 1Ce^{kx/h}. $$ Same applies to $u_+$. However, $u_-$ needs more careful analysis since $W_0(u_-)=0$ at $x=0$. For that, take $0<t<1$ and put $a=\min_{[t A,A]}V(x)>0$, $k_a=\sqrt{k^2+a}$, $\psi_-(x)=e^{-{k_ax/ h}}$, and $W_-(u)=W(\psi_-,u)$. First, we have by Lemma~\ref{l:crude} $$ L(u_-)\geq e^{-(1+k^2+b)x/(2h)}\cdot L(u_-)|_{x=0}. $$ Next, $W_0(u_-)\geq 0$ and $W_+(u_-)\geq 0$, so by~(\ref{e:ugeq}) $$ W_-(u_-)\geq (k_a-k)u_-\psi_-\geq \frac 1C L(u_-)\psi_-. $$ Finally, we apply Lemma~\ref{l:estimate} on the interval $[t A,A]$ to get $$ L(u_-)|_{x=A}\geq \frac{W_-(u_-)|_{x=tA}}{L(\psi_-)|_{x=A}} \geq \frac 1Ce^{(k(1-t)-(1+k^2+b)t)A/ h}. $$ For $t$ small enough and all $k$, $k(1-t)-(1+k^2+b)t\geq 0$, so we have $$ L(u_-)|_{x=A}\geq \frac 1C>0. 
$$ The next step is to use that $u_0$ and $u_\pm$ solve the same equation (\ref{e:equation}) and thus $W(u_0,u_\pm)$ is constant in $x$. Therefore, at $x=A$ we have by~(\ref{e:wrondef}) $$ |\sin(\theta(u_\pm)-\theta(u_0))|=\frac{|W(u_0,u_\pm)|}{L(u_0)L(u_\pm)}\leq Ce^{-{kA/ h}}. $$ That finishes the proof of Lemma~\ref{l:close}. To prove Lemma~\ref{l:nondegenerate}, first note that by Lemma~\ref{l:angle}, $\Theta_1'(k)\leq 0$ and $$ \Theta_0'(k)\geq \frac 1{ChL(u_0)^2|_{x=A}}\int_0^A |u_0(x,k)|^2\,dx $$ By (\ref{e:ugeq}), $u_0\geq L(u_0)/C$. Also, by Lemma~\ref{l:crude}, $L(u_0)\geq e^{C(x-A)/ h}L(u_0)|_{x=A}$ for $0\leq x\leq A$; thus $$ \int_0^A |u_0(x,k)|^2\,dx\geq \frac 1C\int_0^A e^{C(x-A)/ h}(L(u_0)^2|_{x=A})\,dx \geq \frac hCL(u_0)^2|_{x=A} $$ and Lemma~\ref{l:nondegenerate} is proven, which finishes the proof of Theorem~\ref{l:main}. \smallskip To prove Theorem~\ref{l:simon}, let $\Phi_\pm(k)=\theta(u_\pm)|_{x=B}$; a bound state corresponds to $2\Phi_+=0$ and an antibound state corresponds to $2\Phi_-=0$. Since $\theta(u_+)|_{x=0}$ is increasing with $k$, by an argument similar to the proof of Lemma~\ref{l:angle} we get $\Phi'_+(k)>0$ for all $k$. Moreover, $2(\Phi_+(k)-\Phi_-(k))$ is never zero, as this would correspond to $u_+$ and $u_-$ being linearly dependent. We may now apply Lemma~\ref{l:calc} with $\Phi=2\Phi_+$ and $\Psi=2(\Phi_+-\Phi_-)$.
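For the reader's convenience, here is a sketch of the computation behind the inequality $\Phi_+'(k)>0$; it only uses Definition~\ref{d:langle} and (\ref{e:wronskian}), exactly as in the proof of Lemma~\ref{l:angle}. Since $u_+(0,k)=1$ and $h\partial_xu_+(0,k)=k$, we have $W(u_+,\partial_k u_+)|_{x=0}=h\big(u_+\,\partial_x\partial_k u_+-\partial_k u_+\,\partial_x u_+\big)\big|_{x=0}=1$, while $h\partial_x W(u_+,\partial_k u_+)=2ku_+^2$; therefore $$ \Phi_+'(k)=\frac{W(u_+,\partial_k u_+)|_{x=B}}{L(u_+)^2|_{x=B}} =\frac{1}{L(u_+)^2|_{x=B}}\Big(1+\frac{2k}h\int_0^B u_+(x,k)^2\,dx\Big)>0. $$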
1,314,259,992,738
arxiv
\section{Introduction}\label{sec:introduction} \IEEEPARstart{M}{odern} software applications deal with large quantities of data, including critical and sensitive data. The high demand for services has driven developers to create software at a fast pace, thus introducing flaws and bugs in the code. The number of software vulnerabilities discovered each year has steadily increased over the last decades, with a record of 16,665 reported discoveries in 2018 \cite{vulnYear}. Reducing the number of vulnerabilities and potential security breaches is a crucial challenge. Exploiting flaws in code has become a multi-billion-dollar-a-year business. Malware developed to exploit code vulnerabilities is an ever-increasing problem for users, corporations, and governments worldwide. When the Shadow Brokers \cite{shadowBroke} leaked the EternalBlue exploit \cite{etertalBlue}, which gave rise to the WannaCry ransomware \cite{wannaCry}, the ensuing damages were estimated to be up to 8 billion USD globally. An approach to reduce the number of vulnerabilities is based on the use of automatic tools for the identification and the patching of vulnerabilities. In this paper we explore the use of machine learning techniques for vulnerability detection. More precisely, we focus on the use of neural networks to identify the presence of potential stack-based buffer overflow vulnerabilities in assembly code. We make the assumption that code can be treated as a form of language, and we process it using recurrent neural networks (RNNs) based on long short-term memory (LSTM) cells \cite{hochreiter1997long}. RNNs are a class of neural networks designed to handle input sequences of arbitrary length and to extrapolate context from them. Such models have proven effective in tasks related to natural language processing (NLP) and they can be easily adapted to process code. To evaluate the effectiveness of RNNs in processing assembly files in order to spot buffer overflow vulnerabilities, we carried out a set of empirical simulations. Our results, although preliminary, confirm the hypothesis that code may be treated as a language, and they corroborate the intuition that neural network models may be successfully adapted to carry out vulnerability detection, in particular of stack-based buffer overflows. The rest of the paper is structured as follows. Section \ref{sec:Background} briefly introduces relevant concepts from information security and from machine learning. Section \ref{sec:RelatedWork} offers an overview of the state of the art in the application of machine learning to vulnerability detection. Section \ref{sec:ProblemStatement} precisely defines the problem we aim at tackling, as well as the assumptions we make in our study. Section \ref{sec:Methodology} provides details on our methodology, including the process of data generation, data pre-processing and model design. Section \ref{sec:Experiments} reports a set of experiments aimed at validating our hypothesis, and Section \ref{sec:Discussion} offers an overall evaluation of our results. Finally, Section \ref{sec:Conclusion} summarizes our results and suggests possible future directions of research. \section{Background} \label{sec:Background} In this section we provide a brief review of the main ideas from computer security (buffer overflow) and machine learning (RNNs) that are relevant to our work. \subsection{Buffer overflow} Vulnerabilities in software can have serious consequences from the point of view of computer security.
For instance, without proper input data validation or accurate memory management, malicious actors can modify the intended program flow of the software, thus leading to arbitrary code execution. The exploitation of a software vulnerability is frequently the starting point of complex attacks; for instance, the vulnerable software can execute a malicious payload to open a communication channel back to the attacker's computer, then download and execute a script from the attacker's location. Perfect software would be an ideal solution to protect systems against similar attacks, but such a possibility is not realistic; based on existing surveys, typical software vulnerabilities such as \emph{buffer overflows}, \emph{use after free}, \emph{double free vulnerabilities}, \emph{access control problems}, \emph{race conditions} or \emph{authentication weaknesses} appear frequently in software products. Although many defensive measures have been adopted by compilers and operating systems, new types of modern exploitation still appear from time to time, e.g., \cite{wang2019layered}. A more feasible option to mitigate such risks is to discover software vulnerabilities as early as possible, for instance using fuzz testing or code analysis. In this work we focus on detecting stack-based buffer overflows. Finding abnormal program behavior such as buffer overflow errors is crucial on all computers and devices \cite{zhai2015method}. An operating system executes binary code in the process virtual address space. In this space, the operating system keeps not only the executable code of the binary itself, but also all the binary code that was linked (e.g., operating system APIs). In addition to the executable binary code, the virtual address space contains data sections such as the stack segments for each running thread, heap blocks and global variables. The partitioning of the memory space allows the operating system to keep processes separated, and it guarantees that different running binaries have no direct access to each other. The weak point of this data structuring is the fact that the code and the data are together in the same virtual address space; if there is no appropriate protection in place, this can lead to different types of memory corruption, such as executing data as code, or overwriting the code using data. Buffer overflows compromise the virtual address space by overflowing the storage place that was allocated for the data. This action can lead to the modification of the virtual address space, for instance, by overwriting the heap chunks or the stack frames of the running methods inside the binary. In the case of stack-based buffer overflows, the return pointer of a method that is under execution is modified by overrunning a local variable inside the method. In some simple cases, stack-based buffer overflow may be carried out by exploiting methods that are vulnerable by default, such as the C methods \emph{gets}, \emph{strcpy}, and \emph{sprintf}; these methods notoriously lack input validation, and, if the programmer does not perform proper size checking, this weakness can be easily used to successfully overflow the buffer. More complex and refined examples of stack-based buffer overflow may happen by exploiting a chain of several minor vulnerabilities, such as a sequence of malformed inputs that the attacker intentionally crafts to overrun the vulnerable buffer.
Our focus is mainly on the easiest case where the binary vulnerability is determined by the use of the aforementioned vulnerable methods, and where the solution corresponds to detecting the presence of such method calls without proper input checking. \subsection{Machine learning} The problem of detecting code that is potentially vulnerable to stack-based buffer overflow can be cast as a machine learning problem. In other words, we consider the possibility of training a model which receives samples of code as input, and which produces as output a signal denoting whether a potential buffer overflow vulnerability is present in the code. Neural networks have proven to be a powerful and flexible family of models to learn detection systems; for instance, convolutional neural networks (CNNs) can be very effectively applied to image detection and recognition \cite{krizhevsky2012imagenet}. Since our problem requires processing inputs of arbitrary length (i.e., code made up of a variable number of lines) and evaluating chunks of input in context (i.e., deciding about the vulnerability of certain methods in relation to the presence of proper input checking), we decided to rely on a specific family of neural networks: \emph{recurrent neural networks} \cite{goodfellow2016deep}. An RNN is a model that implements the following function: \[ \mathbf{s_n} = f (\mathbf{x_n}, \mathbf{s_{n-1}}), \] where $\mathbf{x_n}$ represents the $n^{th}$ input (or part of input), and $\mathbf{s_n}$ denotes the internal state of the RNN after processing the $n^{th}$ input. In our case, each input will represent a line of code in a program, and we will evaluate the last state $\mathbf{s_n}$ produced by the RNN to decide if the whole program contains a buffer overflow vulnerability. A long short-term memory (LSTM) \cite{hochreiter1997long} is a specific variant of a recurrent neural network which relies on gates to process and filter the information. This design choice allows LSTMs to better model long-range dependencies in the data. Such a feature is particularly important in our application as it allows the model to capture a wider context for potentially vulnerable instructions. \section{Related work} \label{sec:RelatedWork} In this section, we describe state-of-the-art work done on the problem of vulnerability detection, paying particular attention to the types of data representation and model architecture adopted by other researchers. Although work on vulnerability detection has close connections to malware detection (see, for reference, recent work such as \cite{anderson2018ember} or \cite{raff2017malware}) and intrusion detection (refer, for instance, to \cite{bontemps2016collective}), we will focus mainly on vulnerability detection. In \cite{hovsepyan2012software}, Hovsepyan et al. process Java source code treating it as plain text, and they train an SVM classifier to decide on the presence of vulnerabilities in the code. A similar approach is adopted in \cite{pang2015predicting}, where Java source code is represented via n-grams and synthetic features before being processed by an SVM trained to detect vulnerable programs. Representation learning via principal component analysis has been applied to C source code in order to generate informative representations for vulnerability detection in \cite{yamaguchi2011vulnerability}.
All these approaches closely resemble our work in that they subscribe, albeit implicitly or partially, to the hypothesis that code can be treated as natural language; however, our work relies on assembly code, making the reasonable assumption that source code is often unavailable, and it applies more versatile models, such as RNNs. The use of CNN and RNN models has been considered in \cite{VDDR}, where Russell et al. trained a vulnerability detection model on large data sets of real-world C/C++ source code. Similarly, in \cite{VDP}, an RNN relying on bi-directional LSTMs is shown to be more effective at vulnerability detection than other standard pattern-based systems such as Flawfinder, RATS, or Checkmarx. Our approach resembles these studies in the choice of the family of models we consider (RNNs); however, once again, we decided not to consider source code, but, more realistically, assembly code. A different direction of research has considered analyzing programs using not only static features, but also dynamic features, that is, features generated by a program at runtime. Grieco et al. profiled code for vulnerabilities by using a collection of both static and dynamic features (such as program events), and by training different machine learning algorithms to learn useful patterns \cite{vdml}. Wu et al. developed a model relying only on dynamic features (such as kernel function calls) and implemented deep neural networks to carry out vulnerability detection \cite{vddl}. Our work does not take into consideration dynamic features, although the architecture we have chosen would be versatile enough to be extended in the future to process such features. \section{Problem definition} \label{sec:ProblemStatement} In this work, we formulate and address the problem of discovering stack-based buffer overflow vulnerabilities in a program by processing its assembly code representation via RNNs. At its foundation, our work is grounded on the assumption that code constitutes a form of language \cite{allamanis2018survey}. Code has a tightly structured syntax, and it can convey meaning in a similar fashion to written and spoken languages. Through our experiments, we assess the hypothesis that code may be processed as a form of language, and that this representation may be successfully employed to perform vulnerability detection. To do so, we aim at processing code in the form of text, with minimal pre-processing based on human prior knowledge, and using models that have proved successful in dealing with natural language processing. Concerning the specific form of language considered, we decided not to focus on one specific high-level programming language. Instead, we considered code written in assembly language. Assembly language constitutes a formal language that provides a middle ground between human-friendly programming languages and machine instructions. On one side, programming languages are very succinct and interpretable, but they come with a wide variety of vocabularies and grammars; models for a given language can hardly be extended to other languages; moreover, in the real world, the source code of a program is often unavailable. On the other hand, machine code is heavily hardware dependent and verbose, as it specifies in fine-grained detail the elementary operations to be carried out. Assembly language thus strikes a reasonable balance between conciseness and availability.
Conciseness is important to reduce the cost of storing and processing the data; availability is relevant in order to have models that may be trained on a substantial amount of data and that can be deployed in the real world. It is worth underlining that, so far, to the best of our knowledge, limited work has been done to exploit this specific level of representation for a program. \section{Methodology and Implementation} \label{sec:Methodology} In this section we discuss our methodological approach and our practical implementation: how we generated data for our problem, what sort of processing we performed on it, and finally the details of the model we implemented. Overall, our method is based on the following steps: (i) generation of a library of safe and vulnerable functions, (ii) sampling and aggregation of functions into programs, (iii) compilation of programs into assembly files, (iv) compression of the assembly files by removing redundant information, (v) tokenization of the assembly files, (vi) partitioning of the samples into training and test data, (vii) training of the model. This approach provides us with complete control over every aspect of the simulations, from the data (size, complexity, and variability) to the model (architecture, depth, and other training hyper-parameters). \subsection{Data generation} Given the absence of standardized public data sets for stack-based buffer overflows, we first considered the challenge of assembling a suitable data set of vulnerabilities. Given the availability of code online, a common approach to collect a data set of vulnerable code is to design a \emph{webscraper} to download C source code from open source projects, like the Linux kernel\footnote{\url{https://www.kernel.org/}}, or public repositories, such as Github\footnote{\url{https://github.com/}}. This approach would return large amounts of realistic data, although a good subset of the retrieved code may be unmaintained and of low quality (e.g., failing to compile). The main drawback is that all the data would be unlabelled. Labeling the data, so that it could be used for supervised learning in our RNN models, would be a non-trivial challenge; manual labelling would be extremely time-consuming, requiring several experts to analyze the code to understand whether it has potential buffer overflow vulnerabilities or not; automatic labelling using existing tools does, in a way, defeat the purpose of our research, as it would lead the RNN model to simply learn the function already encoded in the existing automatic tools. Even more critically, only a small percentage of the collected data would contain samples of stack-based buffer overflow vulnerabilities that are relevant to our problem. Therefore, we opted to create our own dataset. This approach has some clear advantages: (i) since we define each function and program, and we know exactly whether it is vulnerable to stack-based buffer overflow or not, labelling is automatic and unequivocal; (ii) we have full control over the code style used, allowing us to produce more or less heterogeneous samples; (iii) since the generation is automated, we can easily scale up the size of the data, thus making the fitting of complex functions feasible.
On the other hand, we acknowledge the fact that custom-made datasets are often criticized as lacking in realism; while this point is true, we hold that our samples are representative enough to validate or to disprove our hypothesis that RNNs may be used for stack-based buffer overflow vulnerability detection. If we were to prove that our models can successfully detect unsafe code, more realistic data and deeper networks may be employed to tackle more realistic challenges. A \emph{sample} in our synthetic dataset is a C program file. A program file is made up of a variable number of \emph{functions} that are called in the main body of the program file. In the next section, we explain how we generate functions and how we assemble them into programs. \subsubsection{Generating safe and vulnerable functions} We view a function as the atomic component of a C program. Each function is built around a \emph{system call}. Although there are many potentially vulnerable system calls in the C library, we restrict our attention to $8$ system calls that are particularly relevant in the context of buffer overflows: \emph{strcpy, strncpy, strcat, scanf, sprintf, gets, fgets, memcpy}. Some of these system calls are deemed intrinsically unsafe and are deprecated, while other C methods are considered safe and are recommended. However, our labelling does not depend on this static distinction. We define a function non-vulnerable if it correctly uses a safe system call or if it employs an unsafe system call together with the right buffer checks; vice versa, we define a function vulnerable if it uses an unsafe system call with wrong checks or if it misuses a safe system call. For instance, although \emph{strcpy} is normally considered unsafe, we recognize that it may be used safely when accompanied by proper buffer checks. By taking into account these more subtle differences, we aim at training a model that does not just trivially spot the use of deprecated system calls, but evaluates their vulnerability in context. For each of the $8$ system calls above, we created $15$ different functions with a single vulnerable system call. In total, this leads to $120$ different vulnerable functions. These functions have one and only one vulnerability and use one and only one weak system call, thus defining simple functions with a single vulnerability, while preserving the realism of our data. For the benign functions, we decided to create an equal number of instances. Among these instances, we included functions properly using safe system calls as well as repaired versions of the unsafe functions used to create vulnerable functions. Listing \ref{notSafeMemcpy} presents a vulnerable function along with its safe counterpart in Listing \ref{safeMemcpy}. The two listings show how easy it is to unintentionally introduce a vulnerability by using the wrong parameter in a C system call, and how subtle the difference between a safe and an unsafe function can be. These minimal differences allow us to verify whether a network is able to discriminate safe and unsafe programs given the context in which C function calls happen, and not just by the name of the function called.
\begin{lstlisting}[backgroundcolor=\color{backgroundColour}, label=notSafeMemcpy, caption= A vulnerable function] void memcpySmallIntoLarge(char* s){ char dest[256]; memcpy(dest,s,strlen(s)); printf("%s\n", dest); } \end{lstlisting} \begin{lstlisting}[backgroundcolor=\color{backgroundColour}, label=safeMemcpy, caption= Repaired version of Listing \ref{notSafeMemcpy}] void memcpySmallIntoLarge(char* s){ char dest[256]; memcpy(dest,s,sizeof(dest)); printf("%s\n", dest); } \end{lstlisting} \subsubsection{Generating C programs} From the set of functions we have defined, we generate programs. We define as a \emph{positive sample} a C program that contains no vulnerable function; conversely, we define as a \emph{negative sample} a C program that contains at least one vulnerable function. We acknowledge that functions normally are vulnerable through their calling parameters and the state of the virtual address space. However, we limit ourselves to defining vulnerable and benign samples without such additional considerations. Given a number $N_f$ of functions to be included in a program, we construct positive samples by randomly sampling $N_f$ safe functions, inserting their definitions into a C program, and adding function calls in the \emph{main()} method; for negative samples, we randomly select a single unsafe function along with $N_f-1$ random safe functions, and we aggregate them in a C program as done for positive samples. The resulting programs are carefully checked to verify whether they are actually safe or unsafe. We followed a unit testing approach, validating each unit of code through a range of inputs. \subsection{Data processing} The output of our data generating process is a set of programs in the form of C source code. Since we do not want to work at this level of representation, we compile our data into assembly. Moreover, before feeding the data to the model, we further process it by removing redundant information, converting it into token strings, and finally partitioning it into training and test data. \subsubsection{Compiling C programs into assembly} First, we translate our source code into assembly code, using a compiler with Intel syntax for the 64-bit architecture. Compilation happens using the GCC compiler \cite{GCCWiki}, a versatile and cross-platform compiler widely adopted on many systems. Our C programs are compiled running the following command: \begin{lstlisting}[style=argvListing] gcc -S -fno-asynchronous-unwind-tables -masm=intel ./fileName.c -o fileName \end{lstlisting} For the sake of generalization, we rely on a minimal number of flags aimed at producing as short an assembly as possible; for details about the flags, please refer to \cite{GCCWiki}. A snippet of generated assembly is shown in Listing \ref{assembly}. \begin{lstlisting}[style=customasm, label=assembly, caption= Example of part of a C program compiled in assembly] .file "test_file_3_0.c" .intel_syntax noprefix .text .globl main .type main, @function .LC0: .string "Enter the size of input:" .LC1: .string mov rbx, rax lea rdi, .LC0[rip] mov eax, 0 call printf@PLT push rbx sub rsp, 72 mov QWORD PTR -104[rbp], rdi \end{lstlisting} This processing step makes our model independent of the availability of C source code. Real-world executables, for which the source code has not been released, can still be disassembled into assembly language. Several tools, such as IDA Pro by Hex Rays \cite{idaPro}, Binary Ninja \cite{binaryNinja}, and NSA's own Ghidra \cite{ghidra}, allow for disassembling binaries into assembly code.
On the other hand, successfully reverse engineering machine code up to C is a more challenging, and still open, problem. Thus, processing assembly code instead of C code makes our model more flexible and generic, potentially able to deal with a larger class of real-world samples. \subsubsection{Compressing the assembly code} Next, in order to simplify the model and make the learning process faster, we compress the assembly code by discarding redundant information. Several lines of the assembly code generated by the GCC compiler have limited value, containing either redundant or useless information for our aims. We start by removing the entire prefix of each assembly file (see lines 1-5 in Listing \ref{assembly} for an example). The prefix is identical for each assembly file except for the \emph{.file} field, which just reports the name of the original file; as such, no significant information is carried in these lines of code. Next, we consider other assembler directives and instructions that can be dropped \cite{assDirectives}: \begin{itemize} \item \emph{.LC} directives are used for storing declared strings in the C program (see lines 7-10 in Listing \ref{assembly} for an example). The actual content of a string is not relevant in our modelling. However, in order to maintain as much as possible of the code context, notice that we keep references to the strings (see line 13 in Listing \ref{assembly} for an example). \item \emph{.size} directives are generated by compilers to include auxiliary debugging information in the symbol table. Compiler information is not relevant in our modelling. \item \emph{.ident} directives are used by some assemblers to place tags in object files. Object generation is not relevant in our modelling. \item \emph{.section} directives are used to manage sections in objects. Object management is not relevant in our modelling. \item \emph{endbr} instructions \cite{intelSpec} in the last part of each assembly are removed. \end{itemize} The end result is shorter assembly samples that still preserve the semantics of the code. Notice that in this processing step we minimally rely on prior knowledge. An expert evaluation is indeed needed to decide what parts of the assembly code to discard. However, we argue that such a choice boils down to a non-case-specific, procedural rule that consistently drops pre-determined lines. This pre-processing does not introduce any form of high-level knowledge, nor does it instantiate any sort of feature carrying human-injected knowledge. The point of this pre-processing is simply to reduce the size of the samples in order to limit the computational burden. It is our conjecture that, if our model were to be successful, scaled-up models would be able to successfully process non-compressed versions of the same assembly. \subsubsection{Tokenization of the assembly code} Following the standard in natural language processing, we proceed with a tokenization of the assembly code. Although the main component of an assembly may be identified by a line of code, lines present too much variety to be reduced to tokens. Instead, we split lines of code into atomic codewords, and we use regular expressions to meaningfully extract tokens. Regular expressions are designed to preserve relevant commands and their context, while at the same time leading to a restricted dictionary of meaningful symbols.
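To make this tokenization step concrete, a minimal sketch of such a tokenizer is given below in Python; the token pattern shown is only an illustrative simplification and does not reproduce the exact set of regular expressions used in our pipeline. \begin{lstlisting}[language=Python]
import re

# Illustrative token pattern: mnemonics and identifiers (including
# references such as .LC0 or printf@PLT), signed integers, and
# structural punctuation; everything else is discarded.
TOKEN_RE = re.compile(r"[A-Za-z_.@][\w.@]*|-?\d+|[\[\],:]")

def tokenize(asm_lines):
    """Split each assembly line into atomic tokens, appending an
    explicit end-of-line marker so that line boundaries stay
    visible to the model."""
    tokens = []
    for line in asm_lines:
        tokens.extend(TOKEN_RE.findall(line))
        tokens.append("\n")
    return tokens
\end{lstlisting}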
As an example, consider line 19 in Listing \ref{assembly}; its tokenization returns: \begin{lstlisting}[style=argvListing] "mov", "QWORD", "PTR", "-104", "[", "]", ",", "rbp", "rdi", "\n" \end{lstlisting} Notice that in our tokenization we always keep the newline character '\n' to denote the end of lines of code, as we consider information on the end and the beginning of a line relevant in our analysis. \subsubsection{Partitioning of the data} At the end of this process, we are left with a data set where each sample, positive or negative, is a sequence of assembly tokens. According to the good practices of machine learning, we split this data set into a training data set (80\% of the data) and a test data set (20\% of the data). Training data are further partitioned into a set used for training our model and a development (or validation) set for hyperparameter optimization. Test data are used only for assessing the accuracy of our model. \subsection{Model definition} In order to assess our hypothesis, we adopted a standard neural network model widely used in NLP and we applied it to the processing and classification of assembly code. Given our computational limitations, we implemented an effective, yet lightweight, RNN architecture designed to discriminate between vulnerable and non-vulnerable samples, taking into account both the local and the global context in each program. With reference to Figure \ref{fig:architecture}, our architecture is designed as follows. The model receives as input a tokenized assembly file. Each token is mapped to an integer value via a pre-computed table, thus creating an integer array. This vector is forwarded to a 512-dimensional embedding layer which maps the discrete representations into dense representations. Two hidden LSTM layers follow, with the task of processing the data and tracking the inner state of the network; temporal dropout is applied to the last of these two layers in order to improve the generalization performance. Finally, the output of the LSTM layers is forwarded to a fully-connected sigmoidal layer that produces the decision of the model. The entire model is trained using a binary cross-entropy loss \cite{goodfellow2016deep}. \begin{figure}[H] \includegraphics[scale=0.3]{/architecture.png} \caption{Architecture of our model} \label{fig:architecture} \end{figure} \section{Experiments} \label{sec:Experiments} In this section we present and discuss the simulations we carried out in order to prove or disprove the language hypothesis applied to assembly code and to verify the feasibility of detecting stack-based buffer overflows using RNNs. Our RNN models have been implemented using PyTorch \cite{NIPS2019_9015}, an open-source machine learning library for Python. The source code for our model is publicly available online at \url{https://github.com/williamadahl/RNN-for-Vulnerability-Detection}. In all the simulations, we use a training set to train the model, a development (or validation) set for model selection, and a test set to evaluate the final performance. We track the loss on the training and development sets during training, and we measure the final performance in terms of correct classification rate (CCR, or accuracy). \subsection{Simulation 1: Detecting vulnerabilities} In this first simulation, we evaluate the network's ability to discriminate between programs composed of safe functions and programs containing different vulnerable functions.
Positive and negative samples use clearly different functions, thus presenting to the network a relatively easy discrimination task. \subsubsection{Protocol.} In our first simulation we sampled $2000$ benign and $2000$ vulnerable binaries. Each binary contains $3$ functions: the benign functions are sampled from the set of safe functions, excluding repaired versions of vulnerable functions, while the vulnerable functions are sampled from the whole set of vulnerable functions. Training and development were conducted considering the batch sizes $\{20,40,80, 100\}$, while the learning rate was set relatively aggressively, considering values in the set $\{0.00025, 0.0005, 0.001, 0.002\}$. \subsubsection{Results.} \begin{figure}[H] \includegraphics[scale=0.42]{/ex1_plt0.png} \caption{Performance for Simulation 1 using learning rate $10^{-3}$ and a batch size of $80$.} \label{fig:ex1_plt0} \end{figure} We selected the best configuration of hyperparameters (batch size $80$, and learning rate $10^{-3}$) in terms of performance on the development data set; other configurations achieved lower results, and often required more epochs to achieve convergence. Figure \ref{fig:ex1_plt0} shows the result of training. Notice the two scales for the $y$-axis: on the left we report the correct classification rate (CCR) for the training (blue) and the development (red) dataset; on the right we report the loss function for the training dataset (green). At the end of training, the final performance on the test set successfully converges to a CCR of $1.0$. Even with a modest dataset of 4000 samples we can observe in Figure \ref{fig:ex1_plt0} that we were able to achieve a very low loss of 0.001 and a CCR of 1.0 over the training \emph{and} development set. We can deduce from this experiment that the neural network exhibits the capacity to classify with very low error over the subset of functions we defined. \subsubsection{Discussion.} In this simulation, the model we designed proved able to discriminate with high accuracy between safe programs and vulnerable programs. However, what we considered is just a simplified problem, where positive and negative samples are clearly different. It may be hypothesized, then, that the RNN modeled generic patterns in our samples, which may not be tightly related to buffer overflows. \subsection{Simulation 2: Differentiation between vulnerable and repaired counterparts} To test whether the RNN is meaningfully capturing buffer overflow vulnerabilities rather than generic patterns, in this simulation we consider a more challenging scenario, in which vulnerable functions appear in negative samples, while the fixed versions of the same vulnerable functions appear in positive samples. Vulnerable functions and their patched counterparts are very similar in code, often with only one line differentiating a positive from a negative sample. This setup allows us to probe thoroughly the discriminative power of the network. \subsubsection{Protocol.} Negative samples are built from functions containing a vulnerability based on a single type of system call, \emph{fgets}. Positive samples are built by including the patched counterparts of the vulnerable functions; these patched functions still include the call to \emph{fgets}, but in a safe and controlled way. In total, we train the network with $200$ samples of both classes. We experimented with low batch sizes in the set $\{2, 5, 10\}$, due to the relatively small sampled dataset, and learning rates in the set $\{0.000125, 0.00025, 0.0005, 0.001, 0.002\}$.
\subsubsection{Results.} \begin{figure}[H] \includegraphics[scale=0.42]{/ex2_plt0.png} \caption{Performance for Simulation 2 using learning rate $1.25 \cdot 10^{-4}$ and a batch size of $10$.} \label{fig:ex2_plt0} \end{figure} The best hyperparameter configuration we found uses a batch size of $10$ and a learning rate of $1.25\cdot{10}^{-4}$. Figure \ref{fig:ex2_plt0} shows the dynamics of training, with the model achieving again a CCR close to $1.00$ on the training \emph{and} development set. CCR on the test set was similarly close to 1.0. Different choices for the hyperparameters, such as a learning rate above $2.5\cdot{10}^{-3}$, often got stuck and failed at learning. We hypothesize that the high similarity among positive and negative samples, combined with a relatively high learning rate, may lead to an oscillatory behaviour, where the network tends to alternately overshoot and undershoot with respect to the thin margin separating the classes. Models trained with lower learning rates achieved overall better performance, although in some cases, even models with a learning rate of $5.0\cdot{10}^{-3}$ could achieve a satisfactory CCR, despite showing high instability on average. \subsubsection{Discussion.} Our model was able to successfully learn to discriminate between subtly different samples. This confirms that the RNN model is able to capture subtle and contextual features of the assembly code that actually relate to buffer overflow vulnerabilities. This result is particularly relevant because, in the real world, the difference between a safe program and a vulnerable executable may be as small as a single line of code; performing a length check on an input, or a size check for the allocation of a large buffer, may distinguish a safe program from an unsafe one. The high accuracy achieved by our network suggests that machine learning models may be successfully deployed to capture subtle flaws in code. \subsection{Simulation 3: Larger datasets and samples} In this last experiment, we test our model on a larger dataset drawn from an even more extensive range of functions. We wish to push our model further and simulate more realistic code samples. \subsubsection{Protocol.} In this final experiment we consider a collection of $4000$ benign and $4000$ vulnerable samples. Each sample consists of 20 functions, where the benign functions in both positive and negative samples are drawn from the same distribution. This distribution does not contain repaired counterparts to the vulnerable functions in the vulnerability library. The vulnerable function in each negative sample is drawn at random from the whole library of vulnerable functions. We trained a model made up of a two-layer LSTM architecture with a fixed batch size of $80$, and a learning rate in the set $\{0.0001, 0.00025, 0.0005, 0.001\}$. We chose these hyperparameter ranges as they have proven successful in previous experiments on larger datasets.
At termination our model achieved a CCR of 99.00\% and a 0.057 loss over the test set. The neural network was capable of processing and learning features for samples of substantial size, and converge within a reasonable time frame. These observations are encouraging as the "real world" code encompass a wide range of sizes and structure. For this particular experiment we also set up a shallower architecture with only one layer of LSTM. This simplified architecture did not perform as well as the two layer LSTM within a comparable time frame. \subsubsection{Discussion.} Although quite sensitive to hyperparameter tuning, the RNN model with the right setting (a batch size of 80 samples was by far the most successful configuration) yielded a model that achieved a satisfactory result within the training time constraint. It is interesting to observe that we had to apply a relatively low learning rates throughout our experiment to achieve satisfactory results. In conclusion, we can state that our neural network proved able to learn from complex samples made up of multiple functions. \section{Discussion} \label{sec:Discussion} Our simulations allows us to conclude that not very deep RNNs are able to successfully perform binary classification on the problem of detecting the presence of stack-based buffer overflow vulnerabilities. Even when presented with samples where discriminating between a positive and negative instance relied heavily on the context, the trained model was able to differentiate satisfactorily. The results provide a proof-of-concept validation of the language hypothesis we presented in Section \ref{sec:ProblemStatement}. However, our conclusions are limited by the relative simplicity of our data sets and our models. Our self-generated data set can hardly be used to generalize over the large variety of real-world data. Overcoming this limitation would require either the collection and the labeling of large quantities of code samples, or the refinement of generators used to create realistic code samples. In this last case, our own generator may be improved by introducing additional functions, more complex code structure, and recursive calls. Scaling up the complexity of the data has to be accompanied by a similar scaling in the depth and complexity of the RNN model. This would put more pressure on computational resources, as longer training and more hyperparameter tuning may be in order to produce a satisfactory model. \section{Conclusion} \label{sec:Conclusion} Our research aimed at exploring the application of the language hypothesis to assembly code, and more specifically, at understanding the possibilities and limitations of stack-based buffer overflow vulnerability detection using RNNs. In order to prove or disprove the hypothesis, we conducted a set of experiments on binary classification. Our results showed that RNNs are able to extract useful features from the assembly code language and evaluate the context of the instructions. Such conclusions lend support to the hypothesis on the similarity between natural language and programming languages, and provide ground for treating code as language in future research. Several possible extensions for further development lie open ahead. The easiest direction would be to scale up our model in terms of network depth, dataset size and realism; this would require a more significant computational effort, but it may further confirm (or, possibly, highlight the limitations of) the language hypothesis. 
Another direction would be to focus on alternative model architectures: we only considered LSTM layers, but layers with attention mechanisms \cite{attmek} may provide ground-breaking results, as they have done in standard NLP tasks \cite{attNeed}. Our model could also be enriched to perform multiclass prediction (thus distinguishing different types of buffer overflows) or to output not only the presence or absence of a vulnerability, but also its location (thus making it easier to identify and fix buffer overflows). Finally, our approach may be applied to other types of vulnerabilities; buffer errors have been the most ubiquitous type of vulnerability for the last 25 years \cite{topIncident25year}, but other vulnerabilities, such as race conditions or failures to validate input, can be equally harmful and may be predicted with similar RNN models. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Tabular Benchmark Results} \section{Dataset Descriptions} \paragraph{CIFAR-100: Standard image classification} As a starting point of comparison to existing benchmarks, we include the {\bf CIFAR-100} task \citep{krizhevsky2009cifar}, which contains RGB images from natural settings to be classified into 100 fine-grained categories. CIFAR-100 is preferred over CIFAR-10 because it is more challenging and has suffered less from over-fitting in previous research. \paragraph{Spherical: Classifying spherically projected CIFAR-100 images} To test NAS methods applied to natural-image-like data, we consider the task of classifying spherical projections of the CIFAR-100 images, which we call {\bf Spherical}. In addition to scientific interest, spherical image data is also present in various applications, such as omnidirectional vision in robotics and weather modeling in meteorology, as sensors usually produce distorted image signals in real-life settings. To create Spherical CIFAR, we project the planar signals of the CIFAR images to the northern hemisphere and add a random rotation to produce spherical signals for each channel, following the procedure specified in \cite{cohen2018spherical}. The resulting images are 60$\times$60 pixels with RGB channels. \paragraph{NinaPro: Classifying electromyography signals} {\bf NinaPro} moves away from the image domain to classify hand gestures indicated by electromyography signals. For this, we use a subset of the NinaPro DB5 dataset \citep{atzori2012building} in which two Myo armbands collect EMG signals from 10 test individuals who hold 18 different hand gestures to be classified. These armbands collect muscle-movement data using electrodes, in the form of wave signals. Each wave signal is then sampled using a wavelength and frequency prescribed in \cite{cote2019deep} to produce 2D signals. \paragraph{FSD50K: Labeling sound events} {\bf FSD50K} \citep{fonseca2020fsd50k} is derived from the larger Freesound dataset \citep{fonseca2017freesound} of YouTube videos with 51,000 clips totaling more than 100 hours of sound. These clips are manually labeled and equally distributed across 200 classes from the AudioSet ontology \citep{gemmeke2017audio}. Each clip can receive multiple labels. Unlike TIMIT~\citep{garofolo1993timit}, FSD50K does not focus exclusively on sounds of spoken language but includes sound events from physical sources and production mechanisms. The mean average precision (mAP) is used to evaluate classification results. \paragraph{Darcy Flow: Solving partial differential equations (PDEs)} \label{sssec:pde} Our first regression task, {\bf Darcy Flow}, focuses on learning a map from the initial conditions of a PDE to the solution at a later timestep. This application aims to replace traditional solvers with learned neural networks, which can output a result in a single forward pass. The input is a 2D grid specifying the initial conditions of a fluid, and the output is a 2D grid specifying the fluid state at a later time, with the ground truth being the result computed by a traditional solver. Here we use the same Darcy Flow dataset that was used in \cite{li2021fno}. We report the mean square error (MSE or $\ell_2$). \paragraph{PSICOV: Protein distance prediction} {\bf PSICOV} studies the use of neural networks in the protein folding prediction pipeline, which has recently received significant attention due to the success of methods like AlphaFold \citep{alphafold2}.
While the dataset and method they use are too large-scale for our purposes, we consider a smaller set of protein structures to tackle the specific problem of inter-residual distance predictions outlined in \cite{adhikari2020fully}. Large-scale 2D features are extracted from protein sequences, resulting in input feature maps with a massive number of channels. Correspondingly, the labels are pairwise-distance matrices with the same spatial dimension. The evaluation metric is mean absolute error (MAE or $\ell_1$) computed on distances below 8 \text{\normalfont\AA}, referred to as MAE$_8$. \paragraph{Cosmic: Identifying cosmic ray contamination} Images from space-based facilities are prone to corruption by charged particles collectively referred to as ``cosmic rays.'' Cosmic rays on images should be identified and masked before the images are used for further analysis \citep{zhang2020deepcr}. The \textbf{Cosmic} task uses imaging data of local resolved galaxies collected from the Hubble Space Telescope. Inputs and outputs are same-size 2D matrices, with the output predicting whether each pixel in the input is an artifact of cosmic rays. We report the false-negative rate (FNR) of identification results. \paragraph{ECG: Detecting heart disease} Electrocardiograms are frequently used in medicine to diagnose sinus rhythm irregularities. The \textbf{ECG} task is based on the 2017 PhysioNet Challenge \citep{clifford2017af}, with 9- to 60-second ECG recordings sampled at 300 Hz and labeled using four classes: normal, disease, other, or noisy rhythms. Recordings are processed using a fixed sliding window of 1,000 ms and stride of 500 ms. We report the F1-score according to the challenge's guidelines. \paragraph{Satellite: Satellite image time series analysis} Satellite image time series (SITS) are becoming more widely available in earth monitoring applications. Our dataset comes from Formosat-2 satellite images acquired over Toulouse, France \citep{petitjean2012satellite}. Available in multiple channels, SITS track land cover changes over several years as each pixel in the image represents a geographical region. The goal of the \textbf{Satellite} task is to generate land cover maps for geo-surveying. Specifically, a series of pixels in a given color channel constitutes a time series to be classified into 24 land cover types. \paragraph{DeepSEA: Predicting functional effects from genetic sequences} Predicting the chromatin effects of genetic sequence alterations is a significant challenge in understanding genetic diseases. \textbf{DeepSEA} \citep{zhou2015predicting} provides a compendium of genomic profiles from the Encyclopedia of DNA Elements (ENCODE) project \citep{encode2004encode} to train a predictive model estimating the behavior of chromatin proteins, divided into 919 categories. Due to computation constraints, we subsample 36 of these categories as per \cite{zhang2021ambient} and further take 5\% of the training data for prediction. We report the area under the receiver operating characteristic (AUROC) following the previous work. \section{Baselines} \paragraph{Wide ResNet with Hyperparameter Tuning} Architectures based on ResNet \cite{he2016resnet} are a common first choice for practitioners faced with a new domain \citep{fonseca2020fsd50k,adhikari2020fully}; it is thus a natural source of fixed-architecture baselines for our study.
We use the Wide ResNet variant \citep{zagoruyko2016wideresnet} with 16 layers and a widening factor of 4, and apply its original training routine directly for the constrained practitioner. For the other practitioner, we wrap the training procedure with a hyperparameter tuner, ASHA \citep{li2018system}, an asynchronous version of Hyperband \citep{li2017hyperband}. Given a range for each hyperparameter, ASHA uniformly samples configurations and uses brackets of elimination: at each round, each configuration is trained for some epochs, before the algorithm selects the best-performing portion based on validation metrics. This procedure is useful for finding suitable hyperparameters in an easy-to-use, automated fashion. \paragraph{Cell-based search using DARTS} The first NAS paradigm we consider is cell-based NAS. These methods first search for a genotype, a cell containing neural operations. During evaluation, an architecture is constructed by replicating the searched cell and stacking the copies together. The most popular search space for this approach is DARTS \citep{liu2019darts}, which assigns one of eight operations to six edges in two types of cells: ``normal'' cells preserve the shape of the input while ``reduction'' cells downsample it. For dense tasks, we do not use the reduction cell to prevent introducing a bottleneck. For 1D tasks, all convolutions in the cell are converted from 2D to 1D. Finally, to adhere to standard ML practices, we do {\em not} adapt the standard DARTS pipeline from the original paper. As this search space has been heavily studied, we use as a search routine a recent approach, GAEA PC-DARTS (GAEA), which achieves some of the best-known results on CIFAR-10 and ImageNet \citep{li2021gaea}. This NAS approach, due to its heavy retraining routine, is compared to the tuned WRN baseline of the less-resource-constrained practitioner. \paragraph{Macro NAS using DenseNAS} The second NAS paradigm we consider is macro NAS. Instead of building from a fixed cell, it requires the specification of a supernet with different inter-connected blocks. These blocks and connections are then pruned to construct an architecture. For our benchmark, we choose a recent search space, DenseNAS \citep{fang2020densenas}, which has near state-of-the-art results on ImageNet. DenseNAS searches for architectures with densely-connected, customizable routing blocks to emulate DenseNet \citep{huang2017densely}. In our experiments, we use the ResNet-based search space, DenseNAS-R1, which contains all neural operations of the WRN backbone. For 2D tasks, we adapt two super networks from the one used for ImageNet as inputs to the search algorithm. The super network for dense tasks maintains the same spatial dimensions without downsampling to avoid bottlenecks, and we lower the learning rate for evaluating architectures to prevent divergence. For transferring to 1D tasks, all network operations are switched from 2D to 1D. Other training and evaluation procedures are identical to those in the original paper and uniform across all tasks. DenseNAS is quick to search and evaluate, making it comparable to the fixed WRN baseline. We apply another search method to the fixed DenseNAS space to study the relative importance of search algorithms. Random search is implemented by randomly sampling architectures from the DenseNAS space and validating them after a brief training period of 10 epochs before evaluating the best performer.
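To make this baseline concrete, the following is a minimal sketch of such a random-search loop; the architecture-sampling and proxy-scoring callables are hypothetical stand-ins for the corresponding DenseNAS code rather than the actual implementation.

\begin{verbatim}
import random

def random_search(sample_architecture, proxy_score, num_candidates, seed=0):
    """Uniformly sample candidate architectures, score each with a short
    proxy training run (e.g. validation error after 10 epochs), and return
    the best-scoring candidate for full training and evaluation.

    `sample_architecture` and `proxy_score` are hypothetical callables that
    stand in for the search-space sampler and the brief training routine."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("inf")
    for _ in range(num_candidates):
        arch = sample_architecture(rng)
        score = proxy_score(arch)  # lower is better
        if score < best_score:
            best_arch, best_score = arch, score
    return best_arch
\end{verbatim}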
To ensure fairness of comparison, we allot equal GPU hours to random search and regular DenseNAS search, additionally applying a soft constraint that the sizes of randomly sampled architectures should not significantly surpass those of DenseNAS-searched architectures. \paragraph{Domain-specific NAS Baselines: Auto-DL and AMBER} To study the importance of search spaces, we further handpick two domain-specific NAS approaches applicable only to a subset of the tasks. Using an encoder-decoder architecture, Auto-DeepLab (Auto-DL) \citep{liu2019auto} is designed for dense prediction, e.g., semantic segmentation. While the decoder is fixed, Auto-DL searches for an encoder via first-order DARTS. We evaluate Auto-DL on the Darcy Flow, PSICOV, and Cosmic tasks. AMBER \citep{zhang2021automated} aims to automate neural network design for 1D genomic data. This framework establishes a 12-layer network backbone and parametrizes a long short-term memory network (LSTM) as a controller to search for suitable 1D operations and residual connections, following the ENAS \citep{pham2018enas} optimization protocol. At each step, the controller samples architectures to compute a reward based on the area under the receiver operating characteristic (AUROC) before outputting the highest-reward architecture. We evaluate AMBER on the ECG, Satellite, and DeepSEA tasks. \paragraph{General-purpose baselines: Perceiver IO and XGBoost} As the overarching theme of NAS-Bench-360 is to evaluate NAS methods on a wide variety of diverse tasks and to determine when to even use NAS methods over fixed baselines, general-purpose non-NAS methods are obvious points of comparison. We evaluate two such baselines on NAS-Bench-360: the recent transformer-based Perceiver IO \citep{jaegle2022perceiver}, and the popular non-deep learning baseline, XGBoost \citep{xgboost}. Perceiver IO is a general-purpose transformer architecture that is designed to handle arbitrary input and output dimensionalities with minimal changes to its encoder and decoder networks---as such, we evaluate Perceiver IO on all 10 NAS-Bench-360 tasks. Similarly, the popular gradient-boosting method, XGBoost, is applicable to a wide variety of tasks and learning objectives, including single-output and multi-output classification and regression problems, which covers all 10 tasks in NAS-Bench-360. For efficiency and comparison to deep learning methods, we employ the GPU-based implementation of histogram gradient-boosting in XGBoost. \section{Comparison of NAS with Expert Architectures} We create a more challenging baseline for NAS by evaluating hand-designed architectures for each specific task. Hand-crafted networks are selected according to a best-effort search. The full evaluation results of NAS methods vs. non-NAS baselines can be found in Table \ref{table-5}. Figure \ref{fig:nasvnonnas} illustrates a comparison between the best-performing NAS methods and the best non-NAS methods. Surprisingly, GAEA PC-DARTS beats all the baselines on a portion of the tasks. Here is a brief summary of these expert models and their citations: \begin{enumerate} \item DenseNet-BC (CIFAR-100): a more sophisticated version of ResNet, achieving state-of-the-art performance on vision classification \citep{huang2017densely}. \item S2CNN (Spherical): a spherical CNN containing special operations designed for spherical signals, state-of-the-art on spherically-projected MNIST \citep{cohen2018spherical}.
\item Fourier Neural Operator (FNO) Network (Darcy Flow): via parametrization in Fourier space, FNO can efficiently learn a family of partial differential equations and their mapping to solutions \citep{li2021fno}. \item DEEPCON (PSICOV): a dilated-convolution neural network combined with dropout to optimize for protein distance prediction \citep{adhikari2020deepcon}. \item deepCR-mask (Cosmic): a modified version of UNet that retains the data dimensions to keep border pixels, suiting astronomy applications, state-of-the-art on this task \citep{zhang2020deepcr}. \item Attention-based model (NinaPro): a lightweight feed-forward neural network adopting attention modules in place of convolutions \citep{josephs2020semg}. \item VGG-like (FSD50K): a smaller VGG network with output features combining both global max pooling and average pooling for audio \citep{fonseca2020fsd50k}. \item ResNet-1D (ECG): ResNet with 1D convolution, using a larger kernel size of 16 and a stride of 2 for all convolutions. The architecture is state-of-the-art on several time-series prediction tasks in medicine \citep{hong2020holmes}. \item ROCKET (Satellite): a simple linear classifier with random convolution kernels as a feature extractor, achieving state-of-the-art performance on UCR time-series prediction tasks \citep{dempster2020rocket}. \item DeepSEA model (DeepSEA): the original 1D convolution model accompanying the dataset, state-of-the-art on DeepSEA itself \citep{zhou2015predicting}. \end{enumerate} \section{Experiment Details} \vspace{1mm} \subsection{Hyperparameter Tuning and Backbone} We use a wide residual network with 16 layers and a widening factor of 4 (WRN-16-4) for all tasks. For tuning hyperparameters, we use ASHA's default elimination schedule and search over 7 to 256 randomly sampled hyperparameter configurations matching the runtime of GAEA PC-DARTS. The maximum number of epochs for which a single configuration can be trained equals the Wide ResNet default of 200. We have selected the following hyperparameter ranges for tuning the Wide ResNet backbone: \begin{itemize} \item $\log_{10}$(learning rate): Unif[-4, -1] \item momentum: Unif\{0.0, 0.3, 0.6, 0.9\} \item $\log_{10}$(weight decay): Unif[-5, -2] \item dropout: Unif\{0.0, 0.3, 0.6\} \item batch size: 128 (all point tasks except FSD50K), 4 (Darcy Flow), 8 (PSICOV, Cosmic), 256 (FSD50K, ECG, DeepSEA), 4096 (Satellite) \end{itemize} \subsection{Reference Runtimes} Using an NVIDIA V100 GPU, we have recorded the following runtimes for each experiment in this benchmark in Table \ref{table-4}. Overall, GAEA PC-DARTS is more costly than the backbone with hyperparameter optimization, which in turn is more costly than DenseNAS. The protein task requires heavy computation since the data is not static but generated during training.
\begin{table} \caption{Experiment training runtimes of NAS-Bench-360 (GPU hours)} \label{table-4} \centering \begin{tabular}{lllll} \toprule Task & GAEA & DenseNAS & WRN & AMBER / Auto-DeepLab\\ \midrule CIFAR-100 & 9.5 & 2.5& 2& n/a\\ \midrule Spherical & 16.5 & 2.5& 2& n/a\\ \midrule Darcy Flow & 6.5 & 0.5& 0.5 &5.5\\ \midrule PSICOV & 18 &24& 18.5& 19\\ \midrule Cosmic & 21.5& 2.5& 4& 17.5\\ \midrule NinaPro & 0.5 & 0.2& 0.2& n/a\\ \midrule FSD50K &37&4.5&4& n/a \\ \midrule ECG &140&6.5&5&27 \\ \midrule Satellite &28&3&4.5&26 \\ \midrule DeepSEA &39.5&2&1.5&28 \\ \bottomrule \end{tabular} \end{table} \subsection{Model sizes and FLOPS statistics} Full information of model parameter counts and FLOPs can be found in Table~\ref{table-6} and Table~\ref{table-7}. \subsection{Adjustments for Dense Prediction Tasks} On the wide ResNet backbone, we add an adaptive averaging pooling operation to upsample the features back to their original dimensions before output. On the DARTS space, we prevent downsampling and keep spatial dimensions unchanged by disabling reduction cells and replacing them with normal cells. On DenseNAS, we configure the super-network to contain only blocks with the original spatial dimensions. \subsection{Adjustments for 1D Prediction Tasks} The WRN-1D does not have a convolution stem and uses larger kernel sizes of 8, 5, 3 in each convolution block. We substitute 2D operations with 1D operations within the DARTS and DenseNAS search spaces. \subsection{Random Seeds} For main experiments, we fix the random seed to be 0, 1, 2 for each of the 3 trials respectively. For AMBER experiments, we completed three trials as the package did not offer the option of setting random seeds. \subsection{Correlation between Performance and Model Size} We plot performances of 30 random architectures from the DenseNAS search space across three tasks in Figure \ref{fig:random}. From our random search experiment, larger models searched by NAS are not always better-performing. We study the Pearson correlation coefficient between test performance vs. model size in number of parameters for three tasks: FSD50K, Cosmic, and ECG. On Cosmic and ECG, the correlation is very weak ($r=0.01$ and $r=0.19$ respectively). On FSD50K, a stronger correlation ($r=0.79$) is observed but performance varies significantly even for architectures of the same size. \begin{figure}[!t] \begin{minipage}{0.30\linewidth} \centering \includegraphics[width=\linewidth]{figures/FSD_random} \end{minipage} \begin{minipage}{0.30\linewidth} \centering \includegraphics[width=\linewidth]{figures/Cosmic_random} \end{minipage} \begin{minipage}{0.30\linewidth} \centering \includegraphics[width=\linewidth]{figures/ECG_random} \end{minipage} \caption{Performances v. Model sizes for three sample tasks.} \label{fig:random} \end{figure} \section{Supplementary Materials} \vspace{1mm} \subsection{Data License} \begin{itemize} \item CIFAR-100: CC BY 4.0 (on \url{https://www.tensorflow.org/datasets/catalog/cifar100}) \item Spherical CIFAR-100: CC BY-SA \item NinaPro: CC BY-ND \item FSD50k: CC BY 4.0 \item Darcy Flow: MIT \item DeepCov, PSICOV: GPL \item Cosmic: Open License (https://registry.opendata.aws/hst/) \item ECG: ODC-BY 1.0 \item Satellite: GPL 3.0 \item Deepsea: CC BY 4.0 \end{itemize} \subsection{Data Preprocessing Details} \paragraph{CIFAR-100:} while the 10,000 testing images are kept aside only for evaluating architectures, the 50,000 training images are randomly partitioned into 40,000 for architecture search and 10,000 for validation. 
On all of the 50,000 training images, we apply standard CIFAR augmentations including random crops and horizontal flipping, and finally normalize them using a pre-calculated mean and standard deviation of this set. On the 10,000 testing images, we only apply normalization with the same constants. \paragraph{Spherical:} with the same split ratios as CIFAR-100, the generated spherical image data is directly used for training and evaluation without data augmentation or pre-processing. \paragraph{NinaPro:} Containing fewer than 4,000 samples, the data is composed of single-channel signals with an irregular shape of 16*52 pixels. This task also differs from CIFAR in its class imbalance, as over $65\%$ of all gestures are the neutral position. We split the data using the same ratio as CIFAR, resulting in 2,638 samples for training and 659 samples each for validation and testing. No additional pre-processing is performed. \paragraph{FSD50K:} The raw sound files are first resampled at a frequency of 22,050 Hz and transformed into 96-band, log-mel spectrograms, which is a representation of the sound's power spectrum. Small overlapping audio chunks of 1 second are obtained from these larger clips, resulting in an input size of 101*96 (101 frames of 96-band spectrograms). As data augmentation, background noise of the same frequency is also mixed into 75\% of the training data. We split 4,170 clips into the validation set and 10,231 clips into the test set following the original paper. During training, we train on one randomly-sampled chunk, instead of all chunks, from each clip. \paragraph{Darcy Flow:} we use scripts provided by \citep{li2021fno} to generate the PDEs and their solutions, for a total of 900 data points for training, 100 for validation, and 100 for testing. All input data is normalized with constants calculated on the training set before being fed into the neural network and de-normalized following an encode-decode scheme. The solutions, or labels, for the training set are also encoded and decoded this way. The test labels are not processed. \paragraph{PSICOV:} we adopt the chosen subset of DeepCov proteins in \citep{adhikari2020fully}, consisting of 3,456 proteins each with 128*128 feature maps across 57 channels. 100 proteins from this set are used for validation and the rest for training. Test data for final evaluation is gathered from another set of 150 proteins, PSICOV. Since these produce feature maps that are larger (512*512), we run the prediction network over all of their non-overlapping 128*128 patches. \paragraph{Cosmic:} we use data from a specific filter, F435W, of the space telescope, representing the 3605–4882 \text{\normalfont\AA} spectral range. Image stamps of 256*256 pixels are taken from large images. The dataset contains 4,347 stamps for training, 420 for testing, and 483 for validation to match the test set size. \paragraph{ECG:} from the sliding window approach, 12,186 single-lead recordings are converted into more than 330,000 recording segments comprising 261,740 for training, 33,281 for validation, and 33,494 for test. Each segment is of the shape 1*1,000, representing one channel of a 1,000-step temporal sequence. \paragraph{Satellite:} each satellite time series is a single-channel sequence of length 46 (1*46). After applying standard normalization, we divide the one million entries into 800,000 for training, 100,000 for validation, and 100,000 for test. Zero-padding to 48-length sequences is required for DenseNAS' downsampling network.
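As an illustration of this step, the following is a minimal NumPy sketch of the Satellite preprocessing described above; the array shapes and the use of per-dataset normalization constants are assumptions for illustration, not the exact pipeline.

\begin{verbatim}
import numpy as np

def preprocess_satellite(series, mean=None, std=None, pad_to=48):
    """Standard-normalize 1x46 satellite time series and zero-pad to length 48.

    `series` is assumed to have shape (num_samples, 46); the normalization
    constants should be computed on the training split and reused for the
    validation and test splits."""
    series = series.astype(np.float32)
    if mean is None or std is None:
        mean, std = series.mean(), series.std()
    normalized = (series - mean) / (std + 1e-8)
    pad_width = pad_to - normalized.shape[1]          # 48 - 46 = 2 trailing zeros
    padded = np.pad(normalized, ((0, 0), (0, pad_width)), mode="constant")
    return padded[:, None, :], mean, std              # shape (num_samples, 1, 48)
\end{verbatim}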
\paragraph{DeepSEA:} the genome sequences are 1,000-base pair (bp) long and represented as a 1000$\times$4 binary matrix, as each bp is represented as an one-hot encoding corresponding to either A,C,T,G at that location. Total training set size is 71,753. Validation and test sizes that are not subsampled are 2,490 and 149,400 respectively. \begin{table} \centering \begin{threeparttable} \caption{ Performance of NAS and baselines across NAS-Bench-360 compared to expert architectures. All results are averages of three random seeds, and lower is better for all metrics.\looseness-1 } \label{table-5} \footnotesize \begin{tabular}{llccccc} \toprule Search & Search & \multirow{2}{*}{CIFAR-100} & \multirow{2}{*}{Spherical} & \multirow{2}{*}{Darcy Flow} & \multirow{2}{*}{PSICOV} & \multirow{2}{*}{Cosmic} \\ space & algorithm & & & & & \\ \midrule WRN & default &{\res{23.35}{0.05}}& \res{85.77}{0.71}& \res{0.073}{0.001}& \res{3.84}{0.05}& \res{51.76}{2.09} \\ DenseNAS & random & \res{25.49}{0.41}& \res{71.23}{1.65}& \res{0.071}{0.006}& \res{3.70}{0.06}& \res{70.42}{6.07} \\ DenseNAS & original & \res{25.98}{0.38} &\res{72.99}{0.95} & \res{0.100}{0.010}& \res{3.84}{0.15}& \res{79.52}{2.20} \\ Perceiver IO & default & \res{70.04}{0.44} & \res{82.57}{0.19} & \res{0.240}{0.010} & \res{8.06}{0.06} & \res{100.0}{0.00} \\ XGBoost & default & \res{84.83}{4.15} & \res{96.92}{0.02} & \res{0.085}{0.000} & n/a$^*$ & \res{46.26}{0.09} \\ \midrule WRN & ASHA & \res{23.39}{0.01} & \res{75.46}{0.40} & \res{0.066}{0.00} &\res{3.84}{0.05} & \res{37.53}{10.2} \\ DARTS & GAEA &\res{24.02}{1.92} & {\bf\res{48.23}{2.87}} & {\res{0.026}{0.001}} & {\bf\res{2.94}{0.13}} & {\res{31.15}{3.48}} \\ \midrule Auto-DL & DARTS &n/a& n/a&\res{0.049}{0.005}&\res{6.73}{0.73}& \res{99.79}{0.02}\\ \midrule Expert & default & \textbf{\res{19.39}{0.20}} & \res{67.41}{0.76} &\textbf{ \res{0.008}{0.001}} & \res{3.35}{0.14} & \textbf{\res{25.29}{1.44}} \\ \toprule \toprule Search & Search & \multirow{2}{*}{NinaPro} & \multirow{2}{*}{FSD50K} & \multirow{2}{*}{ECG} & \multirow{2}{*}{Satellite} & \multirow{2}{*}{DeepSEA} \\ space & algorithm & & & & & \\ \midrule WRN & default & {\bf\res{\,\,6.78}{0.26}} & \res{0.92}{0.001}&\res{0.43}{0.01}&\res{15.49}{0.03}& \res{0.40}{0.001} \\ DenseNAS & random & \res{\,\,8.45}{0.56}&{\bf\res{0.60}{0.001}}&\res{0.42}{0.01}&\res{13.91}{0.13}&\res{0.40}{0.001} \\ DenseNAS & original & \res{10.17}{1.31}&\res{0.64}{0.002}&\res{0.40}{0.01}&\res{13.81}{0.69}&\res{0.40}{0.001} \\ Perceiver IO & default & \res{22.22}{1.80} & \res{0.72}{0.002} & \res{0.66}{0.01} & \res{15.93}{0.08} & \res{0.38}{0.004} \\ XGBoost & default & \res{21.90}{0.70} & \res{0.98}{0.002} & \res{0.56}{0.00} & \res{36.36}{0.02} & \res{0.50}{0.000} \\ \midrule WRN & ASHA & \res{\,\,7.34}{0.76} & \res{0.91}{0.030} &\res{0.43}{0.01} & \res{15.84}{0.52}& \res{0.41}{0.002} \\ DARTS & GAEA & \res{17.67}{1.39} & \res{0.94}{0.020} & \res{0.34}{0.01}& {\bf\res{12.51}{0.24}} & \res{0.36}{0.020}\\ \midrule AMBER & ENAS&n/a& n/a& {\res{0.33}{0.02}}&\res{12.97}{0.07}& {\res{0.32}{0.010}} \\ \midrule Expert & default & \res{8.73}{0.90} & \res{0.62}{0.004} & \textbf{\res{0.28}{0.00}} & \res{19.80}{0.00} &\textbf{\res{0.30}{0.024}} \\ \bottomrule \end{tabular} \begin{tablenotes} \item[$*$] did not fit on a single V100 GPU. \end{tablenotes} \end{threeparttable} \end{table} \begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{figures/pp_nas} \caption{\label{fig:nasvnonnas} Performance profiles on all tasks for best-performing NAS vs. Non-NAS. 
The y-value indicates the fraction of tasks on which a plotted method’s error is within a multiplicative factor $\tau$ of the lowest error achieved by all plotted methods. } \end{figure} \begin{table} \centering \begin{threeparttable} \caption{ Parameter counts of searched and baseline models for all tasks of NAS-Bench-360. Searched model sizes are reported as \res{mean}{standard deviation} of three random seeds. Results are reported in millions (M). Architectures with the best performance are bolded. } \label{table-6} \footnotesize \begin{tabular}{llccccc} \toprule Search space & Search algorithm& CIFAR-100 & Spherical & Darcy Flow & PSICOV & Cosmic \\ \midrule DenseNAS & random & \res{1.74}{0.12}& \res{2.23}{0.47}& \res{1.00}{0.18}& \res{1.21}{0.16}& \res{0.25}{0.06} \\ DenseNAS & original & \res{2.03}{0.53} &\res{1.84}{0.15} & \res{0.38}{0.13}& \res{0.93}{0.36}& \res{0.15}{0.16} \\ DARTS & GAEA &\res{4.92}{0.28} & \textbf{\res{1.67}{0.14}} & \res{0.63}{0.08} & \textbf{\res{0.53}{0.05}} & \res{0.43}{0.15} \\ Auto-DL & DARTS &n/a& n/a&\res{22.98}{3.49}&\res{6.50}{1.84}& \res{7.61}{2.14}\\ \midrule WRN & default &2.77& 2.77& 2.75& 2.76& 2.75 \\ Expert & default & \textbf{3.08} & 0.16 &\textbf{1.19} & 0.60 & \textbf{0.10} \\ \toprule \toprule Search space & Search algorithm & NinaPro & FSD50K & ECG & Satellite & DeepSEA \\ \midrule DenseNAS & random & \res{6.80}{0.46}&\textbf{\res{2.40}{0.00}}&\res{0.18}{0.05}&\res{0.79}{0.16}&\res{0.25}{0.04} \\ DenseNAS & original & \res{6.69}{0.53}&\res{1.45}{0.00}&\res{0.11}{0.05}&\res{1.08}{0.63}&\res{0.19}{0.00} \\ DARTS & GAEA & \res{3.35}{0.48} & \res{0.81}{0.11} & \res{3.31}{0.07}& \textbf{\res{3.35}{0.35}} & \res{2.91}{0.47}\\ AMBER & ENAS&n/a& n/a& \res{6.61}{0.33}&\res{6.22}{1.36}& \res{8.44}{1.47}\\ \midrule WRN & default & \textbf{2.75}&2.80&0.50&0.51& 0.51 \\ Expert & default & 1.36 & 0.35& \textbf{16.5} & 0.48 &\textbf{60.9} \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} \begin{table} \centering \begin{threeparttable} \caption{ FLOPS of searched and baseline models for all tasks of NAS-Bench-360. Searched model FLOPS are reported as \res{mean}{standard deviation} of three random seeds. Results are reported in GFLOPS. Architectures with the best performance are bolded.
} \label{table-7} \footnotesize \begin{tabular}{llccccc} \toprule Search space & Search algorithm& CIFAR-100 & Spherical & Darcy Flow & PSICOV & Cosmic \\ \midrule DenseNAS & random & \res{0.46}{0.07}& \res{0.91}{0.07}& \res{14.42}{2.58}& \res{39.80}{5.09}& \res{8.42}{2.11} \\ DenseNAS & original & \res{0.44}{0.53} &\res{1.84}{0.15} & \res{5.43}{1.82}& \res{30.51}{11.90}& \res{5.00}{5.30} \\ DARTS & GAEA &\res{1.42}{0.09} & \textbf{\res{1.91}{0.65}} & \res{9.33}{1.13} & \textbf{\res{17.74}{1.68}} & \res{14.27}{4.90} \\ Auto-DL & DARTS &n/a& n/a&\res{2.54}{1.20}&\res{3.43}{1.27}& \res{2.44}{0.26}\\ \midrule WRN & default &0.78& 2.78& 39.72& 90.58& 90.06 \\ Expert & default & \textbf{1.18} & n/a &\textbf{n/a} & 0.01 & \textbf{1.96} \\ \toprule \toprule Search space & Search algorithm & NinaPro & FSD50K & ECG & Satellite & DeepSEA \\ \midrule DenseNAS & random & \res{1.02}{0.06}&\textbf{\res{ 0.40}{0.00}}&\res{0.11}{0.02}&\res{0.02}{0.01}&\res{0.15}{0.02} \\ DenseNAS & original & \res{0.97}{0.14}&\res{0.80}{0.00}&\res{0.16}{0.03}&\res{0.02}{0.01}&\res{0.10}{0.00} \\ DARTS & GAEA & \res{0.89}{0.12} & \res{2.57}{0.47} & \res{2.28}{0.05}& \textbf{\res{0.11}{0.07}} & \res{2.01}{0.33}\\ AMBER & ENAS&n/a& n/a& \res{0.03}{0.01}&\res{0.03}{0.01}& \res{0.04}{0.01}\\ \midrule WRN & default & \textbf{0.64}&7.56&1.02&0.04& 1.02 \\ Expert & default & 0.02 & 0.66& \textbf{0.70} & 0.01 &\textbf{0.12} \\ \bottomrule \end{tabular} \begin{tablenotes} \item *some expert models contain non-standard modules without FLOPS count. \end{tablenotes} \end{threeparttable} \end{table} \section{Ethics and Responsible Use}\label{app:ethics} Within our array of tasks, only NinaPro, ECG, and DeepSEA contain human-derived data. Our chosen subset of NinaPro contains only muscle movement data without any exposure of personal information. The original experiments to acquire NinaPro data are approved by the ethics commission of the canton of Valais, Switzerland \citep{atzori2012building}. The ECG data derives from an open challenge and is provided by the medical device company AliveCor, under the GPL license allowing it for public use. The DeepSEA data derived from ENCODE is part of an international collaborative effort, which is overseen and funded by the National Human Genome Research Institute (NHGRI). For other datasets, we have listed the data licenses in the appendix. While we do not view the specific datasets in NAS-Bench-360 as potential candidates for misuse, the broader goal of applying NAS to new domains comes with inherent risks that may require mitigation on an application-by-application basis. \section{Conclusion}\label{sec:conclusion} NAS-Bench-360 is a new performance benchmark consisting of ten diverse tasks derived from various fields of research and practice. It is designed for reproducible research on an academic budget that will guide the development of NAS methods and other automated approaches towards more robust performance across different domains. In initial results, we have demonstrated both the need for such a benchmark and the utility of NAS-Bench-360 specifically for developing new search spaces and algorithms. We also provide precompute architectures from the NAS-Bench-201 search space on two of the ten tasks. While the precomputed architectures on these two tasks are useful for analysis on their own, adding more precomputed search spaces and tasks is an area of further improvement. We welcome researchers to use the NAS-Bench-360 tasks to develop new procedures for automating ML. 
\section{Analysis} \label{analysis} We conclude our presentation of NAS-Bench-360 with three sets of analyses. The first, a performance analysis of NAS methods and fixed baselines across diverse tasks, reveals new insights about the capabilities and robustness of current NAS methods and demonstrates how our benchmark can enable critical next steps in NAS research. In our second analysis, we evaluate claims from the NAS literature originally made using computer vision tasks, and show that they do not generalize to diverse tasks; this demonstrates how NAS research can benefit from our contribution in the future. Finally, we extend an existing analysis of zero-cost proxy methods on diverse tasks that already uses NAS-Bench-360 \cite{colin2022adeeperlook}.\looseness-1 \subsection{Performance across diverse tasks using NAS-Bench-360} As discussed in Section~\ref{sec:experiment}, we start by considering two practitioners faced with the choice of spending their limited compute on a (possibly tuned) fixed-architecture CNN or trying to find a better architecture using NAS. With this study, we investigate whether modern NAS methods perform well beyond the tasks for which they were designed. \begin{enumerate}[leftmargin=*,topsep=-1pt,noitemsep]\setlength\itemsep{2pt} \item A surface-level analysis suggests that under light resource constraints, modern NAS in the form of DARTS (GAEA) is quite robust to a wide variety of tasks: Table~\ref{table-3} shows it is the highest-ranked domain-independent method and attains the most significant improvement over the fixed WRN baseline. The performance profile in Figure~\ref{fig:performance} (left) also seems favorable, although it is overtaken by tuned WRN at a higher $\tau$-suboptimality. However, a closer look at 2D point tasks in Figure~\ref{fig:performance} (right) reveals that DARTS is quite poor there, despite its design domain being image classification; in particular, it performs very poorly on NinaPro and FSD50K. Furthermore, on tasks where it performs well, it can still lag behind expert architectures; for example, on Darcy Flow, networks that use FNO \citep{li2021fno} or XD-operations \citep{roberts2021xd} do much better. Overall, our results suggest that this practitioner can apply NAS and expect to see some improvement, but also risks catastrophically poor performance (e.g. FSD50K) or not getting truly state-of-the-art results (e.g. Darcy Flow). \item Under stronger budget constraints, our experiments strongly suggest that a practitioner should simply apply the default WRN to their problem rather than undergo the additional complexity of using DenseNAS, as the latter attains little-to-no improvement over the former in Table~\ref{table-3} and has usually-worse performance profiles (Figure~\ref{fig:performance}). On the other hand, DenseNAS performs well on FSD50K---it outperforms all methods even while DARTS (GAEA) fails. \end{enumerate} These first experiments suggest that modern NAS methods are not always robust to diverse tasks, especially under resource-constrained settings. We believe that NAS-Bench-360's main roles as a future benchmark include developing an understanding of the multi-domain performance of existing approaches and guiding research into better NAS methods. While the latter is beyond the scope of this paper, our additional experiments demonstrate how NAS-Bench-360 facilitates the former. Notably, several of our results address the question of the relative importance of search space vs. search algorithm.
For example, Table~\ref{table-3} shows that on DenseNAS, random search is nearly identical to the more sophisticated weight-sharing scheme of the original paper; the two algorithms' performance profiles are also difficult to distinguish in Figure~\ref{fig:performance}. Furthermore, AMBER---a 1D NAS method whose search space includes larger-kernel convolutions for handling such tasks---does better than GAEA even though it uses an older search algorithm (ENAS). These both suggest that search space design, including the use of a wider variety of operations, may be at least as crucial for success as the search algorithm. This point is reinforced by example tasks such as Darcy Flow, where architectures with more exotic operations substantially outperform our best results, as discussed earlier. NAS-Bench-360 also reveals failure points of several methods, not just of general ones that usually perform quite well such as DARTS (GAEA) but also the objective-specific approach Auto-DL, which despite being designed for dense prediction tasks does poorly on all those considered here. Understanding when and why these performance drops happen is critical to developing a more robust NAS that is useful not just on average but in more challenging settings. \subsection{Do past NAS-Bench-201 analyses generalize to NAS-Bench-360?} Existing NAS-benches have been widely used for analyses such as (1) comparing performances of different architectures across tasks, (2) quickly evaluating search methods, and (3) investigating design choices that impact performance. In this section we show via the NAS-Bench-201 search space that the conclusions of past analyses cannot be assumed to hold on tasks beyond computer vision.\looseness-1 \subsubsection{Architecture transferability} We start by using the precomputed results outlined in Section~\ref{sec:precompute} to show in Figure~\ref{fig:ranking} the rank of each architecture across different datasets, indexed on the x-axis by its rank on CIFAR-100. This reveals that while architecture rankings are highly correlated on image classification datasets---as pointed out by the authors of the original benchmark~\cite{dong2020nasbench201}---the rankings become uncorrelated when evaluated on a more diversified set of tasks. Therefore, NAS evaluations should be done across domains to verify true generalizability, and NAS-Bench-360 is especially useful for this purpose.\looseness-1 \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{figures/rank.png} \caption{\label{fig:ranking} Architecture rankings between computer vision tasks correlate on NAS-Bench-201 \citep{dong2020nasbench201} (left, sorted by performance on CIFAR-100) but are uncorrelated between CIFAR-100 and two NAS-Bench-360 tasks, NinaPro and Darcy Flow (right). } \end{figure} \subsubsection{Operation redundancy} Our final analysis using the NAS-Bench-201 search space is to investigate the conclusions of a more recent study on the redundancy of operations \cite{wan2022on}. We find that the operation redundancy phenomenon they outline is task-dependent and does not generalize to tasks beyond the three vision tasks---CIFAR-10, CIFAR-100, and ImageNet16-120---that they study. To conduct our study we follow their procedure to obtain ``operation importance'' distributions for each operation in the NAS-Bench-201 search space for NinaPro and Darcy Flow; additionally, we reproduce their results on CIFAR-10, CIFAR-100, and ImageNet16-120. 
{\em Operation importance} measures the incremental effect of each operation choice in the NAS-Bench-201 search space---1x1 convolutions (c1), 3x3 convolutions (c3), skip connections (skip), and 3x3 average pooling (ap3)---on performance~\cite{wan2022on}. The original analysis found that the operation importance distributions are roughly similar across the original NAS-Bench-201 computer vision datasets, which we confirm and show in Figure~\ref{fig:redundancy}. However, we found that the operation importance distributions were drastically different for NinaPro and Darcy Flow, which we also show in Figure~\ref{fig:redundancy}. Not only are their distributions different from those of the computer vision tasks in the original analysis, but the operation importance distribution for NinaPro differs significantly from that of Darcy Flow. This tells us that \textit{different operations are more useful for different tasks}, and using NAS-Bench-360, we find that we cannot conclude that any of these operations are universally redundant or useful in a given search space across tasks. In other words, using NAS-Bench-360, we find that the original claim that ``existing search spaces contain a high degree of redundancy'' \cite{wan2022on} does not hold when considering diverse tasks beyond computer vision.\looseness-1 \begin{figure}[!t] \centering \includegraphics[width=0.19\linewidth]{figures/cifar10_ops.pdf} \includegraphics[width=0.19\linewidth]{figures/cifar100_ops.pdf} \includegraphics[width=0.19\linewidth]{figures/imagenet_ops.pdf} \includegraphics[width=0.19\linewidth]{figures/ninapro_ops.pdf} \includegraphics[width=0.19\linewidth]{figures/darcy_ops.pdf} \caption{\label{fig:redundancy} Different operations are important for different tasks. While prior work \citep{wan2022on} shows that the operation importance distributions are stable across computer vision tasks---as shown by the high similarity of the three plots on the left---we find that they differ significantly for NinaPro and Darcy Flow.\looseness-1 } \end{figure} \subsection{Zero-cost proxies on diverse tasks} \begin{table}[!t] \centering \begin{threeparttable} \caption{Performance comparison of TE-NAS and GAEA using the DARTS search space on CIFAR-100, Spherical, NinaPro, and Darcy Flow. Lower is better for all metrics.} \label{table_4} \begin{tabular}{lcccccccc} \toprule & CIFAR-100 & Spherical & NinaPro & Darcy Flow \\ TE-NAS & 24.32 & 56.87 & {\bf 9.71} & {\bf 0.012} \\ GAEA & {\bf 24.02} & {\bf 48.23} & 17.67 & 0.026 \\ \bottomrule \end{tabular} \end{threeparttable} \end{table} We conclude with an analysis of TE-NAS \cite{chen2021neural}, a zero-cost proxy inspired by neural tangent kernel (NTK) analysis, on four NAS-Bench-360 tasks. Zero-cost proxies~\cite{mellor2021naswot,abdelfattah2021zcp} are the subject of a recent direction in NAS research that aims to construct quick-to-evaluate measures of architecture performance without doing any training. Recently, \cite{colin2022adeeperlook} evaluated several zero-cost proxies on tasks from NAS-Bench-360 (Spherical, NinaPro, and Darcy Flow), as well as on TransNAS-Bench-101 \cite{duan2021transnas}. One major weakness of zero-cost proxies that they point out is that zero-cost proxies are not much more computationally efficient than weight-sharing methods, as the total compute cost is still dominated by the evaluation of the searched architecture~\cite{colin2022adeeperlook}. 
For example, this renders TE-NAS in the DARTS search space comparable to GAEA DARTS in terms of computational efficiency. The authors of \cite{colin2022adeeperlook} also point out that the performance of different zero-cost proxies varies considerably across diverse datasets, even subject to the same search space. Performance may be strong on some tasks, but weak on others.\looseness-1 To extend this study of zero-cost proxies, we look at one that \cite{colin2022adeeperlook} do not consider---TE-NAS---and evaluate its performance on the DARTS space using four NAS-Bench-360 tasks: CIFAR-100, Spherical, NinaPro, and Darcy Flow. The results of this evaluation are shown in Table~\ref{table_4}. Unlike many other zero-cost proxies~\citep{mellor2021naswot}, TE-NAS is constructed from a domain-agnostic NTK analysis rather than from experiments, which makes it a potential candidate for good performance on diverse tasks. However, Table~\ref{table_4} shows that performance does vary considerably across tasks, as observed for other proxies by \cite{colin2022adeeperlook}. In particular, TE-NAS performs reasonably well on NinaPro and beats all methods in Table~\ref{table-2} on Darcy Flow---where its performance approaches that of the expert-designed FNO~\cite{li2021fno}---but does poorly on Spherical. This evaluation corroborates existing scientific findings already enabled by NAS-Bench-360~\cite{colin2022adeeperlook} and provides additional evidence for the need to evaluate all NAS methods, including zero-cost proxies, on diverse tasks.\looseness-1 \subsection{Experimental Setup}\label{sec:experiment} Below we discuss the main reporting details of our empirical evaluation. \begin{itemize}[leftmargin=*,topsep=-1pt,noitemsep] \item {\bf Hyperparameter tuning:} As detailed in the Appendix, we use the same hyperparameter ranges across all tasks to tune WRN. We use ASHA~\citep{li2018system} to search over these hyperparameters and give it a budget on each task that matches the total search and retraining budget of DARTS (GAEA). \item {\bf Aggregation metrics:} To aggregate results across tasks, we use the median rank of each method and its performance improvement over WRN for direct comparison via a single number, as demonstrated in Table~\ref{table-3}. However, since these metrics can be sensitive to small differences in performance, we also employ performance profiles \citep{dolan2002benchmarking} to mitigate that effect while still accounting for outliers. As described in Figure~\ref{fig:comparison}, these curves denote for each $\tau$ the fraction of tasks on which a method is no worse than a $\tau$-factor from the optimal. Concretely, we plot $ \rho_s(\tau) = \frac{1}{|\mathcal{P}|} \left| \left\{ p \in \mathcal{P}: \frac{\text{error}_{p, s}}{ \min_{s' \in \mathcal{S}} \text{error}_{p, s'} } \leq \tau \right\} \right|$ given some method $s \in \mathcal{S}$ on tasks $\mathcal{P}$. \item {\bf Software and hardware:} We adopt the free, open-source software \textit{Determined}\footnote{\url{https://github.com/determined-ai/determined}} for experiment management, hyperparameter tuning, and cloud deployment. All experiments are performed on a single p3.2xlarge instance with an NVIDIA V100 GPU. Costs in GPU-hours are in the appendix.
\end{itemize} \subsection{Precomputing NAS-Bench-201 on NinaPro and Darcy Flow}\label{sec:precompute} The intended goal of NAS-Bench-360 is to evaluate the performance of \textit{NAS search method and search space pairs} on diverse tasks, which precludes the precomputation of all architectures in general due to the lack of a single fixed search space. A complete lack of precomputed architectures would perhaps be limiting for many NAS researchers, who rely on precomputed NAS benchmarks when developing new search methods. In an effort to address this potential limitation, we precompute all architectures in the NAS-Bench-201 \citep{dong2020nasbench201} search space on two representative tasks in NAS-Bench-360: NinaPro and Darcy Flow. We follow the same experimental procedure as in the original NAS-Bench-201 benchmark \citep{dong2020nasbench201} to generate the precomputed results, except that, where they vary the number of models trained for each architecture between one and three, we fix the number of trials per architecture to one. Note that NAS-Bench-201 already includes precomputed results for CIFAR-100, a dataset we also include in NAS-Bench-360.\looseness-1 \section{Introduction}\label{sec:intro} Neural architecture search (NAS) aims to automate the design of deep neural networks, ensuring performance on par with hand-crafted architectures while reducing human labor devoted to tedious architecture tuning \citep{elsken2019nas}. With the growing number of application areas of ML, and thus of use-cases for automating it, NAS has experienced an intense amount of study in well-established machine learning domains, with significant progress in search space design~\citep{zoph2018nas,liu2019darts,cai2019proxyless}, search efficiency~\citep{pham2018enas}, and search algorithms~\citep{xu2020pcdarts,li2021gaea,white2021bananas}. Notably, the field has largely been dominated by methods designed for and evaluated on benchmarks in computer vision \citep{liu2019darts,ying2019nasbench101,dong2020nasbench201}, yet the use of NAS techniques may be especially impactful in under-explored or under-resourced domains where less is known about useful architecture design patterns.
Thus NAS-Bench-360 is a task-oriented NAS benchmark with the intended use-case of evaluating NAS method and search space pairs on a wide variety of domains. However, to aid research, three of our tasks---for two of which we contribute the precomputed results---do come with trained architectures from the NAS-Bench-201 search space \citep{dong2020nasbench201}. Experimentally, we demonstrate the usefulness of NAS-Bench-360 by performing a set of analyses evaluating whether the success of NAS in computer vision is indicative of strong performance on the much broader set of problems to which NAS can be applied. Specifically, we report performance comparisons between NAS methods, investigate the validity of existing NAS hypotheses made solely on computer vision tasks, and extend an existing analysis of zero-cost proxies already enabled by our benchmark~\cite{colin2022adeeperlook}. From these analyses, we arrive at the following conclusions: \begin{itemize}[leftmargin=*,topsep=-1pt,noitemsep]\setlength\itemsep{2pt} \item Resource-constrained practitioners may be better off using a fixed CNN rather than NAS (Figure~\ref{fig:comparison}). \item NAS-Bench-201 analyses on computer vision tasks do not generalize to diverse tasks. \item Zero-cost proxies perform inconsistently on diverse tasks, corroborating earlier findings \cite{colin2022adeeperlook}. \end{itemize} We have released all datasets, experiment code, precomputed models, seeds, and environments used in our experiments.\footnote{\url{https://github.com/rtu715/NAS-Bench-360}} Releasing our code, random seeds, and environments in the form of Docker containers assures reproducibility of all experimental results presented in this work and encourages the same level of reproducibility for future research performed using NAS-Bench-360. \begin{figure}[!t] \centering \includegraphics[width=0.47\linewidth]{figures/pp_fast.pdf} \hfill \includegraphics[width=0.47\linewidth]{figures/pp_slow.pdf} \vspace{-3mm} \caption{ Performance profiles on NAS-Bench-360 comparing NAS methods (blue) to a fixed CNN (orange), specifically a Wide ResNet (WRN) \citep{zagoruyko2016wideresnet}. Resource-constrained practitioners might be better off not using NAS (left), while less constrained practitioners can still benefit (right). The y-axis is the fraction of tasks on which error is within a factor $\tau$ of the optimal method, i.e. higher is better. } \label{fig:comparison} \end{figure} \section{Introduction}\label{sec:intro} Neural architecture search (NAS) aims to automate the design of deep neural networks, ensuring performance on par with hand-crafted architectures while reducing human labor devoted to tedious architecture tuning \citep{elsken2019nas}. With the growing number of application areas of ML, and thus of use-cases for automating it, NAS has experienced an intense amount of study in well-established machine learning domains, with significant progress in search space design~\citep{zoph2018nas,liu2019darts,cai2019proxyless}, search efficiency~\citep{pham2018enas}, and search algorithms~\citep{xu2020pcdarts,li2021gaea,white2021bananas}. Notably, the field has largely been dominated by methods designed for and evaluated on benchmarks in computer vision \citep{liu2019darts,ying2019nasbench101,dong2020nasbench201}, yet the use of NAS techniques may be especially impactful in under-explored or under-resourced domains where less is known about useful architecture design patterns.
There have been a few recent efforts to diversify these benchmarks to settings such as vision-based transfer learning \citep{duan2021transnas} and speech and language processing \cite{mehrotra2021asr,klyuchnikov2020nlp}; however, evaluating NAS methods on such well-studied tasks using traditional CNN search spaces does not give a good indication of their utility on more far-afield applications, which have often necessitated the design of custom neural operations \citep{cohen2018spherical,li2021fno}. We make progress towards studying NAS on more diverse tasks by introducing a suite of benchmark datasets drawn from various data domains that we collectively call {\bf NAS-Bench-360}. This benchmark consists of an organized setup of ten suitable datasets that (a) can be evaluated in a unified way using existing NAS approaches and (b) represent diverse application domains, dataset sizes, problem dimensionalities, and learning objectives. We also include standard image classification evaluations as a baseline point of comparison, as many new methods continue to be designed for such tasks. NAS benchmarks typically involve precomputing all architectures in some fixed search space. In contrast, NAS-Bench-360 is explicitly intended to be agnostic of the search space being used---as different search spaces may work well for different tasks.\footnote{While NAS-Bench-360 is generally designed to be search space agnostic, we do provide precomputed results using the NAS-Bench-201 search space for specific analyses.} In this sense, NAS-Bench-360 is a task-oriented NAS benchmark with the intended use-case of evaluating NAS method and search space pairs on a wide variety of domains. We perform a thorough analysis of NAS methods and baselines on NAS-Bench-360, followed by a set of scientific analyses of published NAS findings originally demonstrated on computer vision tasks, and demonstrate that these analyses do not necessarily hold for diverse application domains. We have released all datasets, experiment code, seeds, and environments used in our experiments.\footnote{\url{https://github.com/rtu715/NAS-Bench-360}} Releasing our code, random seeds, and environments in the form of Docker containers assures reproducibility of all experimental results presented in this work and encourages the same level of reproducibility for future research performed using NAS-Bench-360. \begin{figure}[!t] \centering \includegraphics[width=0.47\linewidth]{figures/pp_fast.pdf} \includegraphics[width=0.47\linewidth]{figures/pp_slow.pdf} \caption{ Performance profiles for two settings on all ten tasks in NAS-Bench-360. The y-value indicates the fraction of tasks on which a plotted method’s error is within a multiplicative factor $\tau$ of the lowest error achieved by all plotted methods, thus higher is better. } \label{fig:comparison} \end{figure} \section*{Acknowledgments} We thank Maria-Florina Balcan for providing useful feedback. We also thank Hewlett Packard Enterprise for compute resources and the Determined AI open-source community for its support. This work was supported in part by DARPA FA875017C0141, the National Science Foundation grants IIS1705121, IIS1838017, IIS2046613 and IIS-2112471, an Amazon Web Services Award, a Facebook Faculty Research Award, funding from Booz Allen Hamilton Inc., a Block Center Grant, a Two Sigma Fellowship Award, and a Facebook PhD Fellowship Award.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies. \section{Experimental design} \begin{table} \centering \begin{threeparttable} \caption{ Performance of NAS and baselines across NAS-Bench-360. Methods are divided into efficient methods (e.g. DenseNAS and fixed WRN) that take 1-10 GPU-hours, more expensive methods (e.g. DARTS and tuned WRN) that take 10-100+ GPU-hours, and specialized methods (Auto-DL and AMBER). All results are averages of three random seeds, and lower is better for all metrics.\looseness-1 } \label{table-2} \footnotesize \begin{tabular}{llccccc} \toprule Search & Search & \multirow{2}{*}{CIFAR-100} & \multirow{2}{*}{Spherical} & \multirow{2}{*}{Darcy Flow} & \multirow{2}{*}{PSICOV} & \multirow{2}{*}{Cosmic} \\ space & algorithm & & & & & \\ \midrule WRN & default &{\bf\res{23.35}{0.05}}& \res{85.77}{0.71}& \res{0.073}{0.001}& \res{3.84}{0.05}& \res{51.76}{2.09} \\ DenseNAS & random & \res{25.49}{0.41}& \res{71.23}{1.65}& \res{0.071}{0.006}& \res{3.70}{0.06}& \res{70.42}{6.07} \\ DenseNAS & original & \res{25.98}{0.38} &\res{72.99}{0.95} & \res{0.100}{0.010}& \res{3.84}{0.15}& \res{79.52}{2.20} \\ Perceiver IO & default & \res{70.04}{0.44} & \res{82.57}{0.19} & \res{0.240}{0.010} & \res{8.06}{0.06} & \res{100.0}{0.00} \\ XGBoost & default & \res{84.83}{4.15} & \res{96.92}{0.02} & \res{0.085}{0.000} & n/a$^*$ & \res{46.26}{0.09} \\ \midrule WRN & ASHA & \res{23.39}{0.01} & \res{75.46}{0.40} & \res{0.066}{0.00} &\res{3.84}{0.05} & \res{37.53}{10.2} \\ DARTS & GAEA &\res{24.02}{1.92} & {\bf\res{48.23}{2.87}} & {\bf\res{0.026}{0.001}} & {\bf\res{2.94}{0.13}} & {\bf\res{31.15}{3.48}} \\ \midrule Auto-DL & DARTS &n/a& n/a&\res{0.049}{0.005}&\res{6.73}{0.73}& \res{99.79}{0.02}\\ \toprule \toprule Search & Search & \multirow{2}{*}{NinaPro} & \multirow{2}{*}{FSD50K} & \multirow{2}{*}{ECG} & \multirow{2}{*}{Satellite} & \multirow{2}{*}{DeepSEA} \\ space & algorithm & & & & & \\ \midrule WRN & default & {\bf\res{\,\,6.78}{0.26}} & \res{0.92}{0.001}&\res{0.43}{0.01}&\res{15.49}{0.03}& \res{0.40}{0.001} \\ DenseNAS & random & \res{\,\,8.45}{0.56}&{\bf\res{0.60}{0.001}}&\res{0.42}{0.01}&\res{13.91}{0.13}&\res{0.40}{0.001} \\ DenseNAS & original & \res{10.17}{1.31}&\res{0.64}{0.002}&\res{0.40}{0.01}&\res{13.81}{0.69}&\res{0.40}{0.001} \\ Perceiver IO & default & \res{22.22}{1.80} & \res{0.72}{0.002} & \res{0.66}{0.01} & \res{15.93}{0.08} & \res{0.38}{0.004} \\ XGBoost & default & \res{21.90}{0.70} & \res{0.98}{0.002} & \res{0.56}{0.00} & \res{36.36}{0.02} & \res{0.50}{0.000} \\ \midrule WRN & ASHA & \res{\,\,7.34}{0.76} & \res{0.91}{0.030} &\res{0.43}{0.01} & \res{15.84}{0.52}& \res{0.41}{0.002} \\ DARTS & GAEA & \res{17.67}{1.39} & \res{0.94}{0.020} & \res{0.34}{0.01}& {\bf\res{12.51}{0.24}} & \res{0.36}{0.020}\\ \midrule AMBER & ENAS&n/a& n/a& {\bf\res{0.33}{0.02}}&\res{12.97}{0.07}& {\bf\res{0.32}{0.010}} \\ \bottomrule \end{tabular} \begin{tablenotes} \item[$*$] did not fit on a single V100 GPU. \end{tablenotes} \end{threeparttable} \end{table} Having detailed our construction of NAS-Bench-360, in this section we will establish the experimental setup for our analyses in the following section, which demonstrates the usefulness of NAS-Bench-360 for evaluating NAS methods on diverse tasks. We first specify the NAS methods and baselines we compare, followed by the details of the experimental setup and intended use of the benchmark. 
Finally, we provide details of the precomputed NAS-Bench-201 search space for two representative diverse tasks from NAS-Bench-360: NinaPro and Darcy Flow.\looseness-1 \subsection{Baselines and Search Procedures}\label{sec:method} Our initial experiments follow two practitioners with different resource settings: one with enough compute to tune a WRN (less-constrained) and another who can only train it once with the default hyperparameters (constrained). Given these two scenarios, we compare against NAS methods that each practitioner would be able to run. In both cases, we focus on two well-known search paradigms: cell-based NAS (using DARTS~\citep{liu2019darts}) and macro NAS (using DenseNAS~\citep{fang2020densenas}). We further compare these approaches to two customized NAS methods: Auto-DeepLab \citep{liu2019auto} for 2D dense prediction and AMBER \citep{zhang2021automated} for 1D prediction, as well as general-purpose baselines: Perceiver IO \cite{jaegle2022perceiver} and XGBoost \cite{xgboost}. Additional details are provided in the Appendix. \section{Related Work}\label{sec:related} Benchmarks have been critical to the development of NAS in recent years. This includes standard evaluation datasets and protocols, of which the most popular are the CIFAR-10 and ImageNet routines used by DARTS~\citep{liu2019darts}. Another important type of benchmark has been tabular benchmarks such as NAS-Bench-101~\citep{ying2019nasbench101}, NAS-Bench-201~\citep{dong2020nasbench201}, NAS-Bench-1Shot1~\citep{zela2020nasbench1shot1}, and TransNAS-Bench-101~\citep{Duan2021TransNASBench101IT}; these benchmarks exhaustively evaluate all architectures in their search spaces, which is made computationally feasible by defining simple searched cells. Consequently, they are less expressive than the DARTS cell \citep{liu2019darts}, often regarded as the most powerful search space in the cell-based regime. Notably, the full NAS-Bench-360 benchmark is {\em not} intended to be a tabular benchmark, i.e. we do {\em not} evaluate every architecture from a fixed search space on all ten of our tasks; instead, the focus is on the organization of a suite of tasks for assessing both NAS algorithms and search spaces, which would necessarily be restricted by fixing a search space for a tabular benchmark. Pre-computing on an expansive search space such as DARTS, with $10^{18}$ possible architectures, is computationally intractable. Architectures found on lesser search spaces are most likely suboptimal: a vanilla Wide ResNet (WRN) outperforms all networks in the NAS-Bench-201 search space on CIFAR-100. Nonetheless, we find that including precompute results for all of NAS-Bench-201 on two of our tasks is useful in evaluating various claims in the NAS literature centered on computer vision tasks.\looseness-1 While NAS methods and benchmarks have generally been focused on computer vision, recent work such as AutoML-Zero \citep{real2020automlzero} and XD-operations \citep{roberts2021xd} has started moving towards a more generically applicable set of tools for AutoML. However, even more recent benchmarks that do go beyond the most popular vision datasets have continued to focus on well-studied tasks, including vision-based transfer learning~\citep{duan2021transnas}, speech recognition~\citep{mehrotra2021asr}, and natural language processing~\citep{klyuchnikov2020nlp}. We aim to go beyond such areas to evaluate the potential of NAS to automate the application of ML in truly under-explored domains. 
One analogous work to ours in the field of meta-learning is the Meta-Dataset benchmark of few-shot tasks \citep{triantafillou2020metadataset}, which similarly aimed to establish a wide-ranging set of evaluations for that field. For our inclusion of diverse tasks, we title our benchmark NAS-Bench-360 to resemble the idea of a 360-degree camera that covers all possible directions. \section{NAS-Bench-360: A Suite of Diverse and Practical Tasks}\label{sec:tasks} In this section, we introduce the NAS setting targeted by our benchmark, our motivation for organizing a new set of diverse tasks as a NAS evaluation suite, and our task-selection methodology. We report evaluations of specific algorithms on this new benchmark in the next section. \subsection{Neural Architecture Search: Problem Formulation and Baselines} For completeness and clarity, we first formally discuss the architecture search problem itself, starting with the extended hypothesis class formulation \cite{li2021gaea}. Here the goal is to use a dataset of points $x\in\mathcal X$ to find parameters $\*w\in\mathcal W$ and $a\in\mathcal A$ of a parameterized function $f_{\*w,a}:\mathcal X\mapsto\mathbb{R}_{\ge0}$ that minimize the expectation $\mathbb{E}_{x\sim\mathcal D}f_{\*w,a}(x)$ for some test distribution $\mathcal D$ over $\mathcal X$; here $\mathcal X$ is the input space, $\mathcal W$ is the space of model weights, and $\mathcal A$ is the set of architectures. For generality, we do not require the training points to be drawn from $\mathcal D$ to allow for domain adaptation, as is the case for one of our tasks, and we do not require the loss to be supervised. Note also that the goal here does not depend on computational or memory efficiency, which we do not focus on in our evaluations; our restriction is only that the entire pipeline can be run on an NVIDIA V100 GPU. Notably, this formulation makes no distinction between the model weights $\*w$ and architectures $a$, treating both as parameters of a larger model. Indeed, the goal of NAS may be seen as similar to model design, except now we include the design of an (often discrete) {\em architecture space} $\mathcal A$ such that it is easy to find an architecture $a\in\mathcal A$ and model weights $\*w\in\mathcal W$ whose test loss $\mathbb{E}_\mathcal D f_{\*w,a}$ is low using a search algorithm. This can be done in a one-shot manner---simultaneously optimizing $a$ and $\*w$---or using the standard approach of first finding an architecture $a$ and then keeping it fixed while training model weights $\*w$ using a pre-specified algorithm such as stochastic gradient descent (SGD). This formulation divides NAS algorithms into two camps: one-shot, weight-sharing methods and non-weight-sharing ones such as random search, which operate by repeatedly sampling architectures and evaluating them. The formulation also includes non-NAS methods by allowing the architecture search space to be a singleton. When the sole architecture is a standard and common network such as WRN~\citep{zagoruyko2016wideresnet}, this yields a natural baseline with an algorithm searching for training hyperparameters, not architectures. For our empirical investigation, we compare the performance of state-of-the-art NAS approaches against that of the three baselines: WRN, PerceiverIO~\cite{jaegle2022perceiver}, and XGBoost~\cite{xgboost}. \begin{table}[h!] \footnotesize \caption{Task metadata for NAS-Bench-360. 
Metrics are standardized such that lower is better.} \label{table-1} \centering \begin{tabular}{l@{\hspace{10pt}}lllll@{\hspace{4pt}}c} \toprule Task name & Size & Dim. & Type & Learning objective & Metric & New to NAS?\\ \midrule CIFAR-100 & \multirow{2}{*}{60K} & \multirow{2}{*}{2D} & \multirow{2}{*}{Point} & Classify natural images & \multirow{2}{*}{0-1 error} & no, widely \\ \cite{krizhevsky2009cifar}&&&& into 100 classes && used \\ \midrule Spherical & \multirow{2}{*}{60K} & \multirow{2}{*}{2D} & \multirow{2}{*}{Point} & Classify spherically projected & \multirow{2}{*}{0-1 error} & \multirow{2}{*}{\checkmark} \\ \cite{cohen2018spherical}&&&& images into 100 classes & \\ \midrule NinaPro & \multirow{2}{*}{3956} & \multirow{2}{*}{2D}& \multirow{2}{*}{Point} & Classify sEMG signals into & \multirow{2}{*}{0-1 error} & \multirow{2}{*}{\checkmark} \\ \cite{atzori2012building}&&&& 18 classes of hand gestures & \\ \midrule \multirow{2}{*}{FSD50K} & \multirow{3}{*}{51K} & \multirow{3}{*}{2D} & Point & Classify sound events & \multirow{3}{*}{$1\hspace{-0.5mm}-\hspace{-0.5mm}\text{mAP}$} & \multirow{3}{*}{\checkmark} \\ \multirow{2}{*}{\cite{fonseca2017freesound}}&&&(multi-& in log-mel spectrograms & \\ &&&~label)& with 200 labels & \\ \midrule Darcy Flow & \multirow{2}{*}{1100} & \multirow{2}{*} {2D} & \multirow{2}{*}{Dense} & Predict the final state of a fluid & \multirow{2}{*}{relative $\ell_2$} & no, used \\ \cite{li2021fno}&&&& from its initial conditions && in~\cite{roberts2021xd} \\ \midrule \multirow{2}{*}{PSICOV} & \multirow{3}{*}{3606} & \multirow{3}{*}{2D} & \multirow{3}{*}{Dense} & Predict pairwise distances & \multirow{3}{*}{$\text{MAE}_8$} & \multirow{2}{*}{no, used} \\ \multirow{2}{*}{\cite{adhikari2020fully}}&&&& between residuals from & & \multirow{2}{*}{in~\cite{roberts2021xd}}\\ &&&& pairwise sequence features & \\ \midrule \multirow{2}{*}{Cosmic} & \multirow{3}{*}{5250} & \multirow{3}{*}{2D} & \multirow{3}{*}{Dense} & Predict probabilistic maps & \multirow{3}{*}{FNR} & \multirow{3}{*}{\checkmark} \\ \multirow{2}{*}{\cite{zhang2020deepcr}}&&&& to identify cosmic rays & \\ &&&& in telescope images & \\ \midrule ECG & \multirow{2}{*}{330K} & \multirow{2}{*}{1D} & \multirow{2}{*}{Point} & Detecting atrial cardiac disease & \multirow{2}{*}{$1\hspace{-0.5mm}-\hspace{-0.5mm}\text{F1}$} & \multirow{2}{*}{\checkmark} \\ \cite{clifford2017af}&&&& from ECG recordings & \\ \midrule Satellite & \multirow{2}{*}{1M} & \multirow{2}{*}{1D} & \multirow{2}{*}{Point} & Classify satellite image pixel time & \multirow{2}{*}{0-1 error} & \multirow{2}{*}{\checkmark} \\ \cite{petitjean2012satellite}&&&& series into 24 land cover types & \\ \midrule \multirow{2}{*}{DeepSEA} & \multirow{3}{*}{250K} & \multirow{3}{*}{1D} & Point & Predicting chromatin & \multirow{3}{*}{$1\hspace{-0.5mm}-\hspace{-0.5mm}\text{AUROC}$} & no, used \\ \multirow{2}{*}{\cite{encode2004encode}}&&&(multi-& and binding states & & in \\ &&&~label)& of RNA sequences && \cite{zhang2021ambient,zhang2021automated} \\ \bottomrule \end{tabular} \end{table} \subsection{Task Selection: Motivation and Methodology} Curating a diverse, practical set of tasks for the study of NAS is our primary motivation behind this work. We observe that past NAS benchmarks focused on creating larger search spaces and more sophisticated search methods for neural networks. However, the utility of these search spaces and methods are only evaluated on canonical computer vision datasets. 
On a broader range of problems, whether these new methods can improve upon simple baselines remains an open question. This calls for the introduction of new datasets lest NAS research overfit to the biases of CIFAR-10 and ImageNet. By identifying these possible biases, future directions in NAS research can be better primed to suit the needs of practitioners and to increase the deployment of NAS. Summarized in Table \ref{table-1}, NAS-Bench-360 consists of problems that are conducive to processing by convolutional neural networks, a class that includes a trove of applications associated with spatial and temporal data, spanning single and multiple dimensions. Most current NAS methods are not implemented to search for other types of architectures to process tabular data and graph data. Therefore, we have set this scope for our investigation. During the selection of tasks, diversity is our primary consideration. We define the following axes of diversity to govern our task-filtering process: the first is problem dimensionality, including both 2D tasks with matrix inputs and 1D tasks with sequence inputs; the second is dataset size, for which our selection spans the scale from 1,000 to 1,000,000 examples; the third is problem type, divisible into tasks requiring a single prediction (point prediction) and multiple predictions (dense prediction); fourth and finally, diversity is achieved by selecting tasks with a variety of learning objectives, drawn from applications of deep learning where introducing NAS could improve upon the performance of handcrafted neural networks. In lieu of providing raw data, we perform data pre-processing locally and store the processed data on a public Amazon Web Services S3 data bucket with download links available on our website. Our data treatment largely follows the procedure defined by the researchers who provided the data. This enhances reproducibility by ensuring the uniformity of input data for different pipelines. Additional information about the datasets, pre-processing, and augmentation steps is described in the Appendix.
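Finally, a remark on how results across such heterogeneous tasks are aggregated: the performance profiles of Figure~\ref{fig:comparison} report, for each method, the fraction of tasks on which its error is within a multiplicative factor $\tau$ of the best error achieved by any method. The short sketch below (in Python) is a minimal illustration of this computation; the method names and error values in it are placeholders rather than results from the benchmark.
\begin{verbatim}
import numpy as np

# errors[method] = per-task errors (lower is better); placeholder values only
errors = {
    "fixed CNN":  [0.233, 0.858, 0.073, 3.84],
    "NAS method": [0.240, 0.482, 0.026, 2.94],
}
methods = list(errors)
err = np.array([errors[m] for m in methods])   # shape: (n_methods, n_tasks)
ratios = err / err.min(axis=0)                 # ratio to the best method per task

def profile(i, taus):
    # fraction of tasks on which method i is within a factor tau of the best
    return [(ratios[i] <= tau).mean() for tau in taus]

taus = np.linspace(1.0, 4.0, 31)
for i, m in enumerate(methods):
    print(m, profile(i, taus))
\end{verbatim}
Reporting the whole profile as a function of $\tau$, rather than a single averaged score, avoids giving undue weight to any individual task.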
\section{ Introduction. } Euler's dilogarithm appeared very early, even if under a different name, in the evaluation of radiative corrections in QED. The first occurrence is perhaps in the 1934 paper by G. Racah on the radiation by fast particles \cita{Racah}, whose function $F(x)$ is equal to $-\hbox{Li}_2(-x)$ in Euler's notation. Two loop calculations \cita{KMR} required the polylogarithms, Nielsen's generalization \cita{Nielsen} of Euler's dilogarithm. More bibliographical indications as well as many relevant results are contained in the popular book by Lewin \cita{Lewin} (note the change in the titles of the two editions of the book). \\ While the polylogarithms are the natural analytical tool to use when dealing with the (relatively) simple integrals appearing in calculations with a few loops, it is known that they will not be sufficient when the number of loops is larger than has been considered thus far or when several different scales are present. In a recent publication the set of polylogarithms has been extended into something called `multidimensional polylogarithms'~\cita{BBBL}. These functions seem to be very useful when more than one dimensionful parameter is involved. In principle they are a direct generalization of the definition of the power-series expansion of the polylogarithms to a multiparameter space. Besides the dilogarithm, Euler also studied harmonic sums. A recent publication by one of us~\cita{HS} investigated harmonic sums and their applicability, in particular to formulas in Mellin space. These harmonic sums seem to be the natural functions for the results of moment calculations of deep inelastic structure functions when only massless quarks are involved\footnote{This can be shown for all two loop calculations to any order in the expansion parameter $\epsilon$. For three loop calculations such results do not exist yet, but a recent result by Broadhurst and Kreimer~\cite{BK} shows that only at the 7-loop level do the counter terms in the QCD beta function contain non-zeta like constants.}. If indeed all these moments can be expressed in terms of harmonic sums, the class of functions that will represent the results in the regular $x$-space will be formed by the inverse Mellin transforms of these harmonic sums. In ref~\cita{HS} it was indicated how one could obtain at least numerical representations of these functions by means of numerical integration. In the current paper we study these functions in a more systematic way. We start with a recursive integral definition of a class of functions, which we will call the harmonic polylogarithms (hpl's), which are by construction a generalization of Nielsen's polylogarithms; it turns out, further, that an important subset of the hpl's is also a subset of the multidimensional polylogarithms of ref~\cita{BBBL}. Then we will study a number of their properties, including expressions for products of harmonic polylogarithms with the same argument, the behaviour at $x=0,1$, the relevant expansions around those points, the algebra of the hpl's and the identities between hpl's of related arguments. Then we study special values and numerical evaluation. Finally we study the Mellin transforms of the harmonic polylogarithms and find that indeed they give the harmonic sums and that there is a one to one correspondence between them.
As a consequence the investigation also leads to a rather simple algorithm for the inverse Mellin transform, even though in general the length of the resulting formulae requires a computer implementation for dealing with the great number of terms which are generated. All algorithms that we present have been programmed in the language of FORM~\cite{FORM}. The resulting procedures can be obtained from the second author. \section{ Definitions. } The harmonic polylogarithms of weight $w$ and argument $x$ are identified by a set of $w$ indices, grouped into a $w$-dimensional vector $\vec{m}_w$ and are indicated by $\hbox{H}(\vec{m}_w;x)$. \\ More explicitly, for $w=1$ one defines \begin{eqnarray} \hbox{H}(0;x) &=& \ln{x} \ , \nonumber\\ \hbox{H}(1;x) &=& \int_0^x \frac{dx'}{1-x'} = - \ln(1-x) \ , \nonumber\\ \hbox{H}(-1;x) &=& \int_0^x \frac{dx'}{1+x'} = \ln(1+x) \ . \labbel{eq:defineh1} \end{eqnarray} For their derivatives, one has \begin{equation} \frac{d}{dx} \hbox{H}(a;x) = f(a;x) \ , \labbel{eq:derive1} \end{equation} where the index $a$ can take the 3 values $0, +1, -1$ and the 3 rational fractions $f(a;x)$ are given by \begin{eqnarray} f(0;x) &=& \frac{1}{x} \ , \nonumber\\ f(1;x) &=& \frac{1}{1-x} \ , \nonumber\\ f(-1;x) &=& \frac{1}{1+x} \ . \labbel{eq:definef} \end{eqnarray} Note the (minor) asymmetry of \Eq{eq:defineh1}, in contrast with the higher symmetry of \Eq{eq:derive1}. \noindent For $w > 1$, let us elaborate slightly the notation for the $w$-dimensional vectors $\vec{m}_w$. Quite in general, let us write \begin{equation} \vec{m}_w = ( a, \vec{m}_{w-1} ) \ , \end{equation} where $a=m_w$ is the leftmost index (taking of course one of the three values $0, 1, -1 $), and $\vec{m}_{w-1}$ stands for the vector of the remaining $(w-1)$ components. Further, $\vec{0}_w$ will be the vector whose $w$ components are all equal to the index $0$. The harmonic polylogarithms of weight $w$ are then defined as follows: \begin{equation} \hbox{H}(\vec{0}_w;x) = \frac{1}{w!} \ln^w{x} \ , \labbel{eq:defh0} \end{equation} while, if $\vec{m}_w \neq \vec{0}_w$ \begin{equation} \hbox{H}(\vec{m}_w;x) = \int_0^x dx' \ f(a;x') \ \hbox{H}(\vec{m}_{w-1};x') \ . \labbel{eq:defn0} \end{equation} Quite in general the derivatives can be written in the compact form \begin{equation} \frac{d}{dx} \hbox{H}(\vec{m}_w;x) = f(a;x) \hbox{H}(\vec{m}_{w-1};x) \ , \labbel{derive} \end{equation} where, again, \( a=m_w \) is the leftmost component of \( \vec{m_w} \). \par In analogy with \Eq{eq:defh0}, if $\vec{1}_w, \vec{(-1)}_w$ are the vectors whose components are all equal to $1$ or $-1$, we have by applying recursively the definitions \begin{eqnarray} \hbox{H}(\vec{1}_w;x) = \frac{1}{w!} ( - \ln{(1-x)} )^w \ , \nonumber\\ \hbox{H}(\vec{(-1)}_w;x) = \frac{1}{w!} \ln^w{(1+x)} \ . \labbel{eq:defh1-1} \end{eqnarray} \par Let us now have a look at the first few values of the indices. 
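Although all the algebraic manipulations presented below have been programmed in FORM, the recursive definition \Eq{eq:defn0} can also be turned directly into a small (and admittedly slow) numerical evaluator, which is convenient for checking the explicit formulae that follow. The sketch below, written in Python with the {\tt scipy} quadrature routine, is only such an illustration for $0<x<1$; it is not the evaluation strategy discussed later in the paper.
\begin{verbatim}
import math
import numpy as np
from scipy.integrate import quad

def f(a, x):
    # the rational fractions f(a;x) of Eq. (definef)
    return {0: 1.0/x, 1: 1.0/(1.0 - x), -1: 1.0/(1.0 + x)}[a]

def H(m, x):
    # H(m_w;x) for 0 < x < 1, directly from the recursive definition
    m = tuple(m)
    if all(a == 0 for a in m):                  # H(0,...,0;x) = ln^w(x)/w!
        return np.log(x)**len(m) / math.factorial(len(m))
    if len(m) == 1:                             # weight-1 functions
        return {0: np.log(x), 1: -np.log1p(-x), -1: np.log1p(x)}[m[0]]
    a, rest = m[0], m[1:]
    val, _ = quad(lambda t: f(a, t) * H(rest, t), 0.0, x)
    return val

x = 0.3
li2 = sum(x**k / k**2 for k in range(1, 500))   # dilogarithm by its series
print(H((0, 1), x), li2)                        # H(0,1;x) = Li2(x)
print(H((1, 1), x), 0.5*np.log1p(-x)**2)        # H(1,1;x) = ln^2(1-x)/2!
\end{verbatim}
The two printed pairs agree to the quadrature accuracy, in accordance with the explicit weight-2 expressions given next.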
For $w = 2$ one has the 9 functions
\begin{eqnarray} \hbox{H}(0,0;x) &=& \frac{1}{2!} \ln^2{x} \ , \nonumber\\ \hbox{H}(0,1;x) &=& \int_0^x \frac{dx'}{x'} \hbox{H}(1;x') = - \int_0^x \frac{dx'}{x'} \ln(1-x') \ , \nonumber\\ \hbox{H}(0,-1;x) &=& \int_0^x \frac{dx'}{x'} \hbox{H}(-1;x') = \int_0^x \frac{dx'}{x'} \ln(1+x') \ , \nonumber\\ \hbox{H}(1,0;x) &=& \int_0^x \frac{dx'}{1-x'} \hbox{H}(0;x') = \int_0^x \frac{dx'}{1-x'} \ln{x'} \ , \nonumber\\ \hbox{H}(1,1;x) &=& \int_0^x \frac{dx'}{1-x'} \hbox{H}(1;x') = - \int_0^x \frac{dx'}{1-x'} \ln(1-x') \ , \nonumber\\ \hbox{H}(1,-1;x) &=& \int_0^x \frac{dx'}{1-x'} \hbox{H}(-1;x') = \int_0^x \frac{dx'}{1-x'} \ln(1+x') \ , \nonumber\\ \hbox{H}(-1,0;x) &=& \int_0^x \frac{dx'}{1+x'} \hbox{H}(0;x') = \int_0^x \frac{dx'}{1+x'} \ln{x'} \ , \nonumber\\ \hbox{H}(-1,1;x) &=& \int_0^x \frac{dx'}{1+x'} \hbox{H}(1;x') = - \int_0^x \frac{dx'}{1+x'} \ln(1-x') \ , \nonumber\\ \hbox{H}(-1,-1;x) &=& \int_0^x \frac{dx'}{1+x'} \hbox{H}(-1;x') = \int_0^x \frac{dx'}{1+x'} \ln(1+x') \ . \labbel{eq:weight2int} \end{eqnarray}
Those 9 functions can all be expressed in terms of logarithmic and dilogarithmic functions; indeed, if
\begin{equation} \hbox{Li}_2(x) = - \int_0^x \frac{dx'}{x'}\ln(1-x') \labbel{Li2} \end{equation}
is the usual Euler's dilogarithm, one finds
\begin{eqnarray} \hbox{H}(0,1;x) &=& \hbox{Li}_2(x) \ , \nonumber\\ \hbox{H}(0,-1;x) &=& - \hbox{Li}_2(-x) \ , \nonumber\\ \hbox{H}(1,0;x) &=& - \ln{x} \ln(1-x) - \hbox{Li}_2(x) \ , \nonumber\\ \hbox{H}(1,1;x) &=& \frac{1}{2!} \ln^2(1-x) \ , \nonumber\\ \hbox{H}(1,-1;x) &=& \hbox{Li}_2\left(\frac{1-x}{2}\right) - \ln2 \ln(1-x) -\hbox{Li}_2\left(\frac{1}{2}\right) \ , \nonumber\\ \hbox{H}(-1,0;x) &=& \ln{x} \ln(1+x) + \hbox{Li}_2(-x) \ , \nonumber\\ \hbox{H}(-1,1;x) &=& \hbox{Li}_2\left(\frac{1+x}{2}\right) - \ln2 \ln(1+x) -\hbox{Li}_2\left(\frac{1}{2}\right) \ , \nonumber\\ \hbox{H}(-1,-1;x) &=& \frac{1}{2!} \ln^2(1+x) \ . \labbel{eq:weight2} \end{eqnarray}
Something similar happens for harmonic polylogarithms and Nielsen's polylogarithms of weight 3; that is no longer true however from weight 4 on. To give an example,
\begin{equation} \hbox{H}(-1,0,0,1;x) = \int_0^x \frac{dx'}{1+x'} \hbox{Li}_3(x') \labbel{eq:weight4ex} \end{equation}
cannot be expressed in terms of Nielsen's polylogarithms of the same weight, even allowing for slightly more general arguments ({\it i.e.} when considering, besides $x$, also $-x$, $(1+x)/2, (1-x)/2$ etc.). In other words, the set of the $3^w$ harmonic polylogarithms of weight $w$ is in general a much wider set of functions than the set of the Nielsen's polylogarithms.
\par It follows from the definition that if $\vec{m}_w \neq \vec{0}_w$ the hpl's vanish at $x=0$:
\begin{equation} \hbox{H}(\vec{m}_w;0) = 0, \hskip 2truecm \vec{m}_w \neq \vec{0}_w \ . \labbel{eq:defh00} \end{equation}
Likewise, if the leftmost index $m_w$ is not equal to 1, ($m_w \neq 1$), $\hbox{H}(\vec{m}_w;1)$ is finite; it is also finite when $m_w = 1$, but all the remaining indices $\vec{m}_{w-1}$ are zero, ($\vec{m}_{w-1} = \vec{0}_{w-1}$). In the remaining cases, {\it i.e.
} $m_w = 1$ and $\vec{m}_{w-1} \ne \vec{0}_{w-1}$, $\hbox{H}(\vec{m}_w;x)$ has a logarithmic behaviour at $x =1$: more exactly, if the $p$ leftmost indices are all equal to $1$, $\hbox{H}(\vec{m}_w;x)$ behaves for $x\to 1$ as a combination of powers of $\ln(1-x)$ ranging from the maximum value $p$ down to $0$ (the maximum power is decreased to \( p-1 \) if the remaining $w-p$ indices are all equal to zero; the study of the detailed logarithmic behaviours at $x=0,1$ will be carried out in Section 3). \par In dealing with specific cases and except for the smallest values of $w$, specifying explicitly all the components of $\vec{m}$ becomes quite cumbersome, so that a more compact notation is welcome. In the case that we ignore the functions of which the last index is zero we can use the same compactified notation as in ref~\cita{HS}. This is to say that, proceeding from right to left, all zeroes are simply eliminated by adding, for each of them, one to the absolute value of the first non-zero index to its right, as in
\begin{equation} \labbel{eq:notchange} \hbox{H}(0,0,1,0,-1;x) = \hbox{H}_{3,-2}(x) . \end{equation}
In terms of this notation and excluding, as already stated, the cases in which the rightmost index is zero, one can formulate the following: \noindent {\sl theorem}: If $m_1 \ne 0$ one has
\begin{equation} \labbel{eq:minsign} \hbox{H}_{m_p,\cdots,m_1}(-x) = \sign(p) \hbox{H}_{-m_p,\cdots,-m_1}(x)\ . \end{equation}
The proof goes by induction and follows rather trivially from the definition of $\hbox{H}$. In the case that we use the notation in which the $m_i$ only have the values $0,1,-1$, the power of $-1$ is the number of indices that are not zero. \\ In general we will write the indices of the $\hbox{H}$-functions as subscripts when we use the notation of the {\sl r.h.s.} of \Eq{eq:notchange}, while we will use the notation of the {\sl l.h.s.} when the indices are supposed to have the values $0,1,-1$ only. In that last notation, to see the relation with the polylogarithms of Nielsen $S_{n,p}(x)$, defined in \cita{Nielsen}, let us indicate with $\vec{0}_n, \vec{1}_p$ as usual, two $n$-dimensional and $p$-dimensional vectors whose components are all equal to $0$ and $1$ respectively; one then has
\begin{equation} S_{n,p}(x) = \hbox{H}( \vec{0}_n, \vec{1}_p ; x) \ . \labbel{Nielsen} \end{equation}
As an obvious extension of the terminology, the product of two $\hbox{H}$-functions of weight $w_1$ and $w_2$ will be said to have total weight $w=w_1+w_2$. In the following we will often encounter homogeneous ``identities of weight $w$", {\it i.e.} relations (or identities) involving the sum of several terms, where each term is equal to the product of an integer or rational fraction times an $\hbox{H}$-function of weight \( w \) or a product of several $\hbox{H}$-functions separately of lower weight but with total weight \( w \). \par While the $\hbox{H}$-functions of weight $w$ are linearly independent, the same is not true for the wider set of all the homogeneous expressions of weight $w$. This redundancy can be used for establishing a number of (homogeneous) identities expressing an $\hbox{H}$-function of some argument and weight $w$ as a homogeneous expression of the same weight involving $\hbox{H}$-functions of the same or of related arguments (including constant arguments, such as for instance \( +1 \) or \( -1 \)).
The identities can be useful, typically, for exhibiting explicitly the behaviour at particular points (such as the logarithmic behaviour at $0$ or $\pm 1$) or for obtaining relations between $\hbox{H}$-functions of special arguments. Quite in general, while establishing such identities can be more or less wearisome, there is almost always a straightforward ``standard method" for checking a given identity: one first verifies that the identity holds for a particularly convenient choice of the variable (or variables) and then differentiates it with respect to one of the arguments. In so doing one obtains another relation, albeit of lower weight, according to \Eq{derive}; the procedure can be iterated until a relation of weight $1$ is eventually obtained, whose check is trivial (because the $\hbox{H}$-functions of weight $1$ are just logarithms). \par Likewise, the mathematical constants corresponding to the particular values of the $\hbox{H}$-function of weight $w$ (such as the values at $x=1$ when finite) can be given the same weight $w$. Those values, at $x=\pm1$ or other simple arguments, are of particular interest by themselves, as it turns out that they can be expressed in terms of a very small number of mathematical constants, such as Riemann \( \zeta \)-functions, \( \ln2 \) etc. We will see that they are connected to the sums to infinity of ref~\cite{HS} which have been systematically evaluated and tabulated\footnote{In ref~\cite{HS} this was done only to weight $=7$.} by one of us (J.V.) to weight $=9$ and can be evaluated basically to any weight, given enough computer resources\footnote{An alternative method to obtain the finite constants consists of their numerical evaluation to high precision and then fitting them to a presumed basis. Using this method Broadhurst~\cite{DBprivate} has evaluated all finite objects at weight $=9$ and some objects at the weights $10$ and $11$.}. In similar ways these sums have been evaluated under the name of Euler/Zagier sums by the authors of ref~\cite{BBB}. Hence, whenever $\hbox{H}$-functions at $x=1$ appear in this paper they can be regarded as known from ref~\cite{HS} or ref~\cite{BBB}, provided their weight is not too large. It will also be shown that one may alternatively consider them as unknown constants, to be expressed in terms of that much smaller number of mathematical constants by systematically exploiting the many identities among $\hbox{H}$'s of various arguments established in the rest of this paper.
\section{ Identities between functions of the same argument. } Let us start with the integration by parts (ibp) identities. From the very definition,
\begin{eqnarray} \hbox{H}(m_1\cdots m_q;x) &\!=\! & \int_0^x dx'\ f(m_1;x')\hbox{H}(m_2\cdots m_q;x') \nonumber \\ &\!=\!& \hbox{H}(m_1;x)\hbox{H}(m_2\cdots m_q;x) -\!\int_0^x dx'\ \hbox{H}(m_1;x')f(m_2;x')\hbox{H}(m_3\cdots m_q;x') \nonumber \\ &\!=\!& \hbox{H}(m_1;x)\hbox{H}(m_2\cdots m_q;x) -\hbox{H}(m_2m_1;x)\hbox{H}(m_3\cdots m_q;x) \nonumber \\ &\!+\!& \hbox{H}(m_3m_2m_1;x)\hbox{H}(m_4\cdots m_q;x) -\cdots -\sign(q)\hbox{H}(m_q\cdots m_1;x) \ . \labbel{eq:ibp} \end{eqnarray}
The above identity can be immediately verified, independently of its derivation, by the `standard methods': it holds at $x = 0$; when differentiating with respect to $x$, one obtains a number of terms which are immediately seen to cancel out pairwise; therefore, the relation is true.
This relation shows that in the case that $\vec{m}_q$ is symmetric and $q$ is even the $\hbox{H}$-function reduces to products of lower weight functions. In general the relation can be used when it is important to reduce the number of $\hbox{H}$-functions with the highest weight as much as possible. \par Another important set of identities expresses the product of any two $\hbox{H}$-functions of weight \( w_1 \) and \( w_2 \) as a linear combination of $\hbox{H}$-functions of weight \( w=w_1+w_2 \). Let us start from the case \( w_1 = 1 \); the identity reads
\begin{eqnarray} \hbox{H}(a;x) \hbox{H}(m_p,\cdots,m_1;x) &=& \hbox{H}(a,m_p,\cdots,m_1;x) \nonumber\\ &+& \hbox{H}(m_p,a,m_{p-1},\cdots,m_1;x) \nonumber \\ &+& \hbox{H}(m_p,m_{p-1},a,m_{p-2},\cdots,m_1;x) \nonumber \\ &+& \cdots \nonumber \\ &+& \hbox{H}(m_p,\cdots,m_1,a;x) \ . \labbel{eq:single} \end{eqnarray}
It can be established by induction on $p$. For $p =1$ it is almost trivial, corresponding to \Eq{eq:ibp} for $q=2$. Assume then that it holds for $p-1$; take the identity for $p-1$, multiply by $f(m_p;x)$ and integrate over $x$. In the {\sl r.h.s.} we can do the integral and obtain all necessary terms except for the one starting with $a$. The {\sl l.h.s.} can be integrated by parts to give the proper {\sl l.h.s.} term plus another term that can be integrated and gives indeed the missing term. This completes the proof. \par Again, once established the identity can also be verified by the `standard method': it holds at $x=0$; the $x$-derivative consists of two groups of terms, a first group with the coefficient $f(a;x)$ contains just two terms which cancel out immediately, plus a second group proportional to $f(m_p;x)$, which is nothing but the same relation at level $p-1$, so that the procedure can be repeated $p$ times until everything cancels out. \par There is only one complication with \Eq{eq:single}. This concerns points in which one of the objects involved is divergent. Hence one cannot apply this equation for $x=1$ in the case that either $a=1$ or $m_p=1$. This is explained better in the section on the algebraic properties. \par \Eq{eq:single} can be generalized to the product of two $\hbox{H}$-functions $\hbox{H}(\vec{p};x) \hbox{H}(\vec{q};x)$; if $p,q$ are the dimensions of $\vec{p},\vec{q}$ (or, which is the same, the weights of the two $\hbox{H}$-functions), the product is equal to the sum of $(p+q)!/(p!\,q!)$ terms, each term being an $\hbox{H}$-function of weight $(p+q)$ with coefficient $+1$, obtained by choosing $p$ indices in all possible ways (hence the binomial coefficient) and filling them from left to right with the components of \( \vec{p} \) without changing their order, while the remaining $q$ places contain the components of $\vec{q}$, again without altering their order. This can be expressed with the formula
\begin{eqnarray} \hbox{H}(\vec{p};x)\hbox{H}(\vec{q};x) & = & \sum_{\vec{r} = \vec{p}\uplus \vec{q}} \hbox{H}(\vec{r};x) \labbel{eq:halgebra} \end{eqnarray}
in which $\vec{p}\uplus \vec{q}$ represents all mergers of $\vec{p}$ and $\vec{q}$ in which the relative orders of the elements of $\vec{p}$ and $\vec{q}$ are preserved.
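Before giving an explicit example it may be useful to see how the mergers $\vec{p}\uplus\vec{q}$ entering \Eq{eq:halgebra} can be enumerated in practice; the following little sketch (in Python, purely illustrative and unrelated to the FORM procedures mentioned above) generates them recursively.
\begin{verbatim}
def mergers(p, q):
    # all interleavings of p and q preserving the internal order of each vector
    if not p:
        return [tuple(q)]
    if not q:
        return [tuple(p)]
    return [(p[0],) + r for r in mergers(p[1:], q)] + \
           [(q[0],) + r for r in mergers(p, q[1:])]

# for p = (a,b) and q = (r,s,t) this yields 5!/(2!3!) = 10 index vectors,
# reproducing the ten terms of the example that follows in the text
for r in mergers(("a", "b"), ("r", "s", "t")):
    print(r)
\end{verbatim}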
\par As an example, for $p=2, \vec{p}=(a,b)$ and $q=3, \vec{q}=(r,s,t)$ one has \begin{eqnarray} \hbox{H}(a,b;x) \hbox{H}(r,s,t;x) &=& \hbox{H}(a,b,r,s,t;x) + \hbox{H}(a,r,b,s,t;x) \nonumber\\ &+& \hbox{H}(a,r,s,b,t;x) + \hbox{H}(a,r,s,t,b;x) \nonumber\\ &+& \hbox{H}(r,a,b,s,t;x) + \hbox{H}(r,a,s,b,t;x) \nonumber\\ &+& \hbox{H}(r,s,a,b,t;x) + \hbox{H}(r,a,s,t,b;x) \nonumber\\ &+& \hbox{H}(r,s,a,t,b;x) + \hbox{H}(r,s,t,a,b;x) \ , \labbel{eq:prod2-3} \end{eqnarray} as can be easily checked, again, by the `standard method'. \par The product identities \Eq{eq:halgebra} can be used to single out the terms in $\ln(x)$ from $\hbox{H}$-functions whose indices have trailing (or rightmost) indices equal to zero (as we will see in the next section $\hbox{H}$-functions with no trailing zeroes can be expanded in series of $x$ around $x=0$, while $\hbox{H}$-functions with trailing zeroes develop logarithmic singularities at that point). For $a=0$ in \Eq{eq:single}, recalling \( \hbox{H}(0;x) = \ln(x) \), \Eq{eq:defh0} and \Eq{eq:defineh1} one obtains \begin{eqnarray} \hbox{H}(m_1,\cdots,m_p,0;x) & = & \ln(x) \hbox{H}(m_1,\cdots,m_p;x) -\hbox{H}(0,m_1,\cdots,m_p;x) \nonumber \\ && -\hbox{H}(m_1,0,m_2,\cdots,m_p;x) -\cdots -\hbox{H}(m_1,\cdots,m_{p-1},0,m_p;x)\ . \labbel{eq:lnx} \end{eqnarray} In the case that $m_p$ is also zero we can move the last term to the left, divide by two and then use again \Eq{eq:single} for all the other terms, thus obtaining an identity which extracts the logarithmic singularities due to $2$ trailing zeroes. By suitably repeating the procedure as many times as needed, we can extract in general all the powers of $\ln(x)$ from the generic $\hbox{H}$-function. A couple of examples, if $a,b$ are any non-zero indices, are \begin{eqnarray} \hbox{H}(a,b,0,0;x) &=& \hbox{H}(0,0;x) \hbox{H}(a,b;x) \nonumber\\ &-& \hbox{H}(0;x)\biggl( \hbox{H}(a,0,b;x) + \hbox{H}(0,a,b;x) \biggr) \nonumber\\ &+& \hbox{H}(a,0,0,b;x) + \hbox{H}(0,a,0,b;x) + \hbox{H}(0,0,a,b;x) \ , \nonumber\\ \hbox{H}(a,b,0,0,0;x) &=& \hbox{H}(0,0,0;x) \hbox{H}(a,b;x) \nonumber\\ &-& \hbox{H}(0,0;x)\biggl( \hbox{H}(a,0,b;x) + \hbox{H}(0,a,b;x) \biggr) \nonumber\\ &+& \hbox{H}(0;x) \biggl( \hbox{H}(a,0,0,b;x) + \hbox{H}(0,a,0,b;x) + \hbox{H}(0,0,a,b;x) \biggr) \nonumber\\ &-& \biggl( \hbox{H}(a,0,0,0,b;x) + \hbox{H}(0,a,0,0,b;x) \nonumber \\ && + \hbox{H}(0,0,a,0,b;x) + \hbox{H}(0,0,0,a,b;x) \biggr) \end{eqnarray} \par In the same way one can use the product identities, \Eq{eq:halgebra} for extracting the terms singular as powers of \( \ln(1-x) \), or equivalently of $H(1;x)$ according to \Eq{eq:defineh1}, around \( x=1 \) from the $\hbox{H}$-functions whose leading (or leftmost) indices are equal to 1. If \( a=1 \) \Eq{eq:single} can be rewritten as \begin{eqnarray} \hbox{H}(1,m_1,\cdots,m_p;x) & = & \hbox{H}(1;x) \hbox{H}(m_1,\cdots,m_p;x) -\hbox{H}(m_1,1,m_2\cdots,m_p;x) \nonumber \\ & - & \hbox{H}(m_1,m_2,1,\cdots m_p;x) -\cdots -\hbox{H}(m_1,\cdots,m_{p-1}m_p,1;x)\ . \labbel{eq:ln(1-x)} \end{eqnarray} If \( m_1 \) has also the value \( 1 \) we can take the second term of the {\it r.h.s.} to the left, divide by two and obtain an identity to be used when the first \( 2 \) indices are both equal to \( 1 \) and so on. 
Let us show a couple of examples in the case of two indices \( a,b \) not equal to \( 1 \): \begin{eqnarray} \hbox{H}(1,1,a,b;x) &=& \hbox{H}(1,1;x) \hbox{H}(a,b;x) \nonumber\\ &-& \hbox{H}(1;x)\biggl( \hbox{H}(a,1,b;x) + \hbox{H}(a,b,1;x) \biggr) \nonumber\\ &+& \hbox{H}(a,1,1,b;x) + \hbox{H}(a,1,b,1;x) + \hbox{H}(a,b,1,1;x) \ , \nonumber\\ \hbox{H}(1,1,1,a,b;x) &=& \hbox{H}(1,1,1;x) \hbox{H}(a,b;x) \nonumber\\ &-& \hbox{H}(1,1;x)\biggl( \hbox{H}(a,1,b;x) + \hbox{H}(a,b,1;x) \biggr) \nonumber\\ &+& \hbox{H}(1;x) \biggl( \hbox{H}(a,1,1,b;x) + \hbox{H}(a,1,b,1;x) + \hbox{H}(a,b,1,1;x) \biggr) \nonumber\\ &-& \biggl( \hbox{H}(a,1,1,1,b;x) + \hbox{H}(a,1,1,b,1;x) \nonumber \\ && + \hbox{H}(a,1,b,1,1;x) + \hbox{H}(a,b,1,1,1;x) \biggr) \ ; \end{eqnarray} the structure is very much the same as in the equations for extracting the \( \ln(x) \) singularities related to the trailing zeroes. It is to be noted that the two procedures -- the ``extraction" of leading \( 1 \)'s and trailing \( 0 \)'s -- can be combined, to give, for instance \begin{eqnarray} \hbox{H}(1,1,-1,0;x) &=& \frac{1}{2}\hbox{H}(-1;x)\hbox{H}(0;x)\hbox{H}^2(1;x) - \hbox{H}(-1,1;x)\hbox{H}(0;x)\hbox{H}(1;x) \nonumber \\ &+& \hbox{H}(-1,1,1;x)\hbox{H}(0;x) - \frac{1}{2}\hbox{H}(0,-1;x)\hbox{H}(1;x)\hbox{H}(1;x) \nonumber\\ &+& \hbox{H}(0,-1,1;x)\hbox{H}(1;x) - \hbox{H}(0,-1,1,1;x) \ , \nonumber\\ \hbox{H}(1,1,0,0,0;x) &=& \frac{1}{12} \hbox{H}^3(0;x) \hbox{H}^2(1;x) \nonumber\\ &-& \hbox{H}(0,0,0,1;x)\hbox{H}(1;x) + \hbox{H}(0,0,0,1,1;x) \nonumber\\ &+& \hbox{H}(0,0,1;x)\hbox{H}(0;x)\hbox{H}(1;x) - \hbox{H}(0,0,1,1;x)\hbox{H}(0;x) \nonumber\\ &-& \frac{1}{2} \hbox{H}(0,1;x)\hbox{H}^2(0;x)\hbox{H}(1;x) \nonumber\\ &+& \frac{1}{2} \hbox{H}(0,1,1;x)\hbox{H}^2(0;x) \ . \labbel{ex11-10} \end{eqnarray} Therefore, one can always express a $\hbox{H}$-function with leading \( 1 \)'s and trailing \( 0 \)'s in terms of products of powers of \( H(0;x) \) and \( H(1;x) \), which exhibit the logarithmic singularities in those points, and of other ``irreducible" $\hbox{H}$'s, {\it i.e.} $\hbox{H}$'s whose first index is not \( 1 \) and the last index is not \( 0 \) and therefore is finite at both \( x=1 \) and \( x=0 \). We can push further this kind of reduction, by writing all the possible product identities \Eq{eq:halgebra} and the integration by part identities \Eq{eq:ibp} and using them for expressing as many as possible $\hbox{H}$'s of weight \( w \) and ``unwanted" indices in terms of products of a ``minimal" set of $\hbox{H}$'s of lower weight and ``accepted" indices. It is to be noted that the number of the $\hbox{H}$'s in the ``minimal" set is fixed, but their choice is not unique, even if the condition of the extraction of the leading \( 1 \)'s and trailing \( 0 \)'s is imposed. It is easily seen that at weight \( w \) the number of relations is nothing but the total number of the different products of $\hbox{H}$'s of lower weight and with total weight \( w \). These relations are independent when all $\hbox{H}$-functions of lower weight belong to their respective ``minimal sets". It is to be observed, in any case, that the above ``reduction" involves only different rearrangements, without any modification, of the set of indices which appear in the original $\hbox{H}$, \par An explicit calculation gives the set sizes of table~\ref{tab:basis}. 
\begin{table}[htb] \centering \begin{tabular}{r|rrr} Weight & Full basis & Irreducible set & Minimal set \\ \hline 2 & 9 & 4 & 3 \\ 3 & 27 & 12 & 8 \\ 4 & 81 & 36 & 18 \\ 5 & 243 & 108 & 48 \\ 6 & 729 & 324 & 116 \\ 7 & 2187 & 972 & 312 \\ 8 & 6561 & 2916 & 810 \end{tabular} \caption{\label{tab:basis}\sl Sizes of the various bases} \end{table}
The use of the full basis in which each term has only a single $\hbox{H}$-function gives a unique expression in a rather simple way. This is also the preferred representation when higher weights have to be built up by successive integration. Expressions can also be given in terms of the irreducible set in a relatively easy way. This form is preferred when one has to avoid problems with divergences. It can also be convenient when establishing identities for related arguments. The use of the minimal set is particularly convenient for the numerical evaluation of the $\hbox{H}$-functions, when a large number of them has to be evaluated at the same point. It should also be noted that the use of a minimal set is relatively easy for the lower weights (at weight 3 it requires only 4 substitutions) while for higher weights it will be much less straightforward.
\section{Power series expansions} In general the function $\hbox{H}_{\vec{m}}(x)$ does not have a regular Taylor series expansion. This is due to the fact that trailing zeroes in the index field may cause powers of $\ln(x)$. Hence the proper expansion is one in terms of both $x$ and $\ln(x)$. Let us first have a look at what happens when there are no logarithms. We will now use the other notation for the indices. In that case we have:
\begin{eqnarray} \hbox{H}_1(x) & = & \sum_{i=1}^\infty \frac{x^i}{i} \nonumber \\ \hbox{H}_{-1}(x) & = & -\sum_{i=1}^\infty \frac{\sign(i)x^i}{i} \end{eqnarray}
and assuming\footnote{Because of the linearity of the problem the presence of more than one term, each with a different $S_{\vec{n}}$, would not make much of a difference in the following considerations.} that
\begin{eqnarray} \hbox{H}_{\vec{m}}(x) & = & \sum_{i=1}^\infty \frac{\sigma^i x^i}{i^a} S_{\vec{n}}(i) \end{eqnarray}
in which $\sigma = \pm 1$ one can write the relations
\begin{eqnarray} \label{eq:recsum} \hbox{H}_{0,\vec{m}}(x) & = & \sum_{i=1}^\infty \frac{\sigma^i x^i}{i^{a+1}} S_{\vec{n}}(i) \nonumber \\ \hbox{H}_{1,\vec{m}}(x) & = & \sum_{i=1}^\infty \frac{x^i}{i} S_{\sigma a,\vec{n}}(i\!-\! 1) \nonumber \\ & = & \sum_{i=1}^\infty \frac{x^i}{i} S_{\sigma a,\vec{n}}(i) -\sum_{i=1}^\infty \frac{\sigma^i x^i}{i^{a+1}} S_{\vec{n}}(i) \nonumber \\ \hbox{H}_{-1,\vec{m}}(x) & = & -\sum_{i=1}^\infty \frac{\sign(i)x^i}{i} S_{-\sigma a,\vec{n}}(i\!-\! 1) \nonumber \\ & = & -\sum_{i=1}^\infty \frac{\sign(i)x^i}{i} S_{-\sigma a,\vec{n}}(i) +\sum_{i=1}^\infty \frac{\sigma^i x^i}{i^{a+1}} S_{\vec{n}}(i) \end{eqnarray}
At this point one could ask which is the better definition of the nested sums. A definition of the type
\begin{eqnarray} Z_{a,\vec{m}}(n) & = & \sum_{i=1}^n \frac{Z_{\vec{m}}(i\!-\! 1)}{i^a} \end{eqnarray}
will give only a single term in the expansion and is favored in the mathematical literature, because there one is mainly concerned with sums to infinity. For finite values of $n$ however this definition has the inelegant aspect that when $\vec{m}$ has $k$ components that are not zero, the value of $Z_{a,\vec{m}}(n)$ is zero for $n \le k$.
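Purely as an illustration of this last point, the following sketch (in Python; the convention that a negative index carries the alternating sign $(-1)^i$ is an assumption made here for definiteness) implements the $Z$-type sums just defined and shows the vanishing for small $n$.
\begin{verbatim}
def Z(indices, n):
    # nested Z-type sum; indices are non-zero integers, a negative index
    # contributing a factor (-1)^i (assumed convention)
    if not indices:
        return 1.0
    a, rest = indices[0], indices[1:]
    total = 0.0
    for i in range(1, n + 1):
        sign = -1.0 if (a < 0 and i % 2 == 1) else 1.0
        total += sign * Z(rest, i - 1) / i**abs(a)
    return total

print([Z((1, 1), n) for n in range(1, 5)])       # vanishes for n <= 1
print([Z((2, 1, 1), n) for n in range(1, 5)])    # vanishes for n <= 2
\end{verbatim}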
We will mostly follow the conventions of ref~\cita{HS} in which we use the definition: \begin{eqnarray} S_{a,\vec{m}}(n) & = & \sum_{i=1}^n \frac{S_{\vec{m}}(i)}{i^a} \end{eqnarray} In this notation one has the property $S_{\vec{m}_k}(1) = \prod_{i=1}^k \sigma_i$ with $\sigma_i$ being the sign of $m_i$. These two notations will be referred to as $Z$-notation and $S$-notation respectively. The conversion from one notation to the other is not really very complicated if one realizes that $\sum_{j=1}^{i-1} = \sum_{j=1}^i - \delta_{ij}$. Hence the `leading' term has the same index field and the correction terms have fewer indices in which some adjacent indices may have been combined. For $k$ nonzero indices there are in total $2^{k-1}-1$ correction terms. The fact that trailing zeroes in the index field are responsible for powers of $\ln(x)$ can be seen easily now. Because \begin{eqnarray} \frac{1}{k!}\int^xdx\ x^m\ln^k(x) & = & x^{m+1}\sum_{\kappa=0}^k \frac{\sign(k-\kappa)}{\kappa!} \frac{\ln^\kappa(x)}{(m+1)^{k-\kappa+1}} \end{eqnarray} we see that once we start with $\hbox{H}(\vec{0}_k;x)$, the subsequent integrations due to other indices (the first of them not being zero of course, and factors $1/(1\pm x)$ being expanded in $x$) that come to the left of the $\vec{0}_k$ will always leave terms with at most $k$ powers of $\ln(x)$ and there will be a term with $k$ of those powers. Hence the trailing zeroes are responsible for powers of $\ln(x)$. Of course the exact dependence of $\ln(x)$ can be derived much easier by applying \Eq{eq:lnx} repeatedly till all trailing zeroes have been removed. This gives an expansion in terms of powers of $\ln(x)$ and $\hbox{H}$-functions that are of the type we have just studied and hence can be expanded in $x$. It is however also possible to work one's way through the integrals and the various expansions. This is much more work and leads eventually to the same result. Hence we have omitted this derivation. If we compare the $\hbox{H}$-function with the multidimensional polylogarithm in ref~\cite{BBBL} we may notice that this function can be rewritten into the following expansion: \begin{eqnarray} \lambda(^{z_1\cdots z_k}_{b_1\cdots b_k}) & = & \sum_{\nu_1 > \nu_2 >\cdots > \nu_k > 0}^\infty \prod_{j=1}^k \frac{b_{j-1}^{\nu_j}}{\nu_j^{z_j}b_j^{\nu_j}} \end{eqnarray} with $b_0 = 1$. These functions do not contain powers of $\ln(b_i)$ and hence they cannot represent all $\hbox{H}$-functions. If we restrict ourselves to $\hbox{H}$-functions without trailing zeroes one can write the terms in the expansion of these $\hbox{H}$-functions as \begin{eqnarray} \sum_{\nu_1 > \nu_2 >\cdots > \nu_k > 0}^\infty x^{\nu_1}\prod_{j=1}^k \frac{\sigma_j^{\nu_j}}{\nu_j^{z_j}} \end{eqnarray} if we use $Z$-sums and \begin{eqnarray} \sum_{\nu_1 \ge \nu_2 \ge\cdots \ge \nu_k \ge 1}^\infty x^{\nu_1}\prod_{j=1}^k \frac{\sigma_j^{\nu_j}}{\nu_j^{s_j}} \end{eqnarray} if we use $S$-sums. Hence it is clear that the $\hbox{H}$-functions without trailing zeroes are special cases of the multidimensional polylogarithms with $b_i = \pm 1/x$. For the computation of Feynman diagrams we do however need the $\hbox{H}$-functions with trailing zeroes because of the presence of the logarithms (see for instance ref~\cita{Zijlstra}). There is another interesting observation in the expansion. 
Considering that the expansion of an $\hbox{H}$-function with no trailing zeroes gives terms of the type
\begin{eqnarray} \sum_{i=1}^\infty x^i \frac{\sigma^i S_{\vec{m}}(i)}{i^a} \nonumber \end{eqnarray}
one can introduce another sum by dividing by either $1\!+\! x$ or $1\!-\! x$ and obtain:
\begin{eqnarray} \sum_{i=1}^\infty x^i \frac{\sigma^i S_{\vec{m}}(i)}{i^a} & = & (1\!-\! x) \sum_{i=1}^\infty x^i S_{\sigma a,\vec{m}}(i) \nonumber \\ & = & (1\!+\! x) \sum_{i=1}^\infty x^i \sign(i) S_{-\sigma a,\vec{m}}(i) \end{eqnarray}
At times this notation is more convenient. One should however remember that this notation breaks down at either $x=1$ or at $x=-1$, depending on the particular form used. Finally we notice that for $x=1$ we have that
\begin{eqnarray} \sum_{i=1}^\infty x^i \frac{\sigma^i S_{\vec{m}}(i)}{i^a} & \rightarrow & S_{\sigma a,\vec{m}}(\infty) \end{eqnarray}
and hence the values of the $\hbox{H}$-functions in $x=1$ are related to the values of the $S$-sums in infinity. The trailing zeroes do not cause essential problems because when those functions are first written in terms of powers of $\ln(x)$ these logarithms vanish in $x=1$ and we keep only the terms with $\hbox{H}$-functions without trailing zeroes. For the numerical evaluation of these objects one can use the algorithms of ref~\cita{BBBL} that relate them effectively to combinations of $\hbox{H}$-functions in $x=1/2$ after the appropriate conversions. This is particularly interesting for the higher weights because up to weight 7, 8 or 9 it is still possible to obtain expressions in terms of a very small number of constants (see ref~\cita{BBB} and \cita{HS}), but beyond these weights this becomes too time consuming\footnote{Thus far the only known exact method to do this involves solving simultaneously for all $2\ 3^{w-1}$ $\hbox{H}$-functions in $x=1$. See also a previous footnote.} while an expansion in $x=1/2$ is sufficiently fast for nearly all numerical applications, provided that only a limited number of them is needed.
\section{ The algebra} \label{sec:algebra} The harmonic sums form an algebra~\cita{HS} in which the product of two sums with the same argument and having weights $w_1$ and $w_2$ respectively can be written as a sum of terms, each with a single sum of weight $w_1+w_2$. There are two sets of algebraic relations: the relations based on the shuffle algebra which hold for all values of the argument, and the relations based on the triangle theorem of ref~\cita{HS} which hold only for values in infinity, provided that not both harmonic sums are divergent. For the $\hbox{H}$-functions we have the general product formula based on \Eq{eq:halgebra}. This formula is related to the algebra of the harmonic sums, because the harmonic polylogarithms can be expressed in terms of series expansions in which the coefficients are harmonic sums: assume for the moment that neither $\vec{m}$ nor $\vec{n}$ have trailing zeroes. In that case we derive:
\begin{eqnarray} \hbox{H}_{a,\vec{m}_p}(x)\hbox{H}_{\vec{n}_q}(x) & = & \frac{1}{1-x} \sum_{i=1}^\infty \frac{S_{\vec{m}_p}(i)x^i}{i^a} \sum_{j=1}^\infty S_{\vec{n}_q}(j)x^j \end{eqnarray}
in which one of the two powers of $1/(1-x)$ has been absorbed in the sum over $i$. By combining the powers of $x$ this formula can be rewritten as
\begin{eqnarray} \hbox{H}_{a,\vec{m}_p}(x)\hbox{H}_{\vec{n}_q}(x) & = & \frac{1}{1-x} \sum_{i=1}^\infty x^i \sum_{j=1}^i \frac{S_{\vec{m}}(j)S_{\vec{n}}(i-j)}{j^a}\ .
\end{eqnarray}
Note that the inner sum can be done and gives a set of terms that are all single $S$ functions, even though the expression may not be very compact. It is called a triangle sum and an algorithm for it is given in one of the appendices of ref~\cita{HS}. It is also available as a procedure in the language of FORM~\cite{FORM}. As a result one obtains an expression which can be resummed and gives terms with single $\hbox{H}$-functions. For $\hbox{H}$-functions in $x=1$ we have seen that they can be directly expressed in terms of harmonic sums in infinity. Therefore the general algebraic rules for those sums that are based on the shuffle algebra for harmonic sums can be applied. Hence we see a duality here: the general rules for the $\hbox{H}$-functions correspond to the special triangle rules for the harmonic sums, and the special rules for the $\hbox{H}$-functions in $x=1$ correspond to the general shuffle rules for the harmonic sums. There is one complicating factor when values in $x=1$ are considered. Let us start by assuming that the basic divergence $\hbox{H}(1;1)$ can be used as a symbol. In the case of a `proper' limit procedure such things can be done and after the divergences cancel the finite result should be correct. This is called regularization. The general algebraic relations are based on the triangle sums, rather than on the shuffle algebra, and the triangle sums are not correct when both objects are divergent. The subleading terms will be incorrect. This can be illustrated easily:
\begin{eqnarray} \hbox{H}_1(x) & = & \sum_{i=1}^\infty \frac{x^i}{i} \nonumber \\ (\hbox{H}_1(x))^2 & = & 2 \hbox{H}_{1,1}(x) \nonumber \\ & = & 2 \sum_{i=1}^\infty x^i(\frac{S_1(i)}{i}-\frac{1}{i^2}) \nonumber \\ \hbox{H}_1(1) & = & \lim_{x\rightarrow 1} \sum_{i=1}^\infty \frac{x^i}{i} \nonumber \\ & = & S_1(\infty) \nonumber \\ \lim_{x\rightarrow 1}(\hbox{H}_1(x))^2 \;=\; 2\,\hbox{H}_{1,1}(1) & = & \lim_{x\rightarrow 1} 2 \sum_{i=1}^\infty x^i(\frac{S_1(i)}{i}-\frac{1}{i^2}) \nonumber \\ & = & 2 S_{1,1}(\infty) - 2 S_2(\infty) \nonumber \\ & = & (S_1(\infty))^2 - S_2(\infty) \end{eqnarray}
and we see that
\begin{eqnarray} (\lim_{x\rightarrow 1} \hbox{H}_1(x))^2 & \ne & \lim_{x\rightarrow 1} (\hbox{H}_1(x))^2\ . \end{eqnarray}
The solution to this problem is to be found in $S$-space. There it is possible to regularize the infinite sums in a consistent way by replacing the sum to infinity by a sum to $M$ with $M$ very large but finite, then one can have the divergences cancel and finally take the limit $M\rightarrow \infty$. This does not correspond to anything one can do in $x$-space. Because the triangle theorem does not hold for two $S$-sums that are divergent, one cannot apply the regular algebraic relation for $\hbox{H}$-functions that are both divergent in $x=1$. Hence the proper algebraic relations at $x=1$ have to be derived by means of the shuffle algebra which holds for all $S$-sums:
\begin{eqnarray} (\lim_{x\rightarrow 1} \hbox{H}_1(x))^2 & = & (S_1(\infty))^2 \nonumber \\ &=& 2 S_{1,1}(\infty) - S_2(\infty) \nonumber \\ &=& 2\lim_{x\rightarrow 1} \hbox{H}_{1,1}(x) + \lim_{x\rightarrow 1}\hbox{H}_2(x) \end{eqnarray}
This way is consistent and will allow us to define the Mellin transform properly in one of the next sections. It involves the use of values in $x=1$. Because of the use of different algebraic relations for $x\neq 1$ and $x=1$, it may happen that expressions look rather complicated, but the various algebraic relations between $\hbox{H}$-functions in $x=1$ could simplify the expressions considerably.
However at the moment there is no known systematic method to apply these relations in such a way that one does not have to solve for all values in $x=1$ first. This way all such objects can be expressed in a minimal independent set of objects. Unfortunately there are very many of these objects for a given weight $w$ ($2\ 3^{w-1}$) and even more relations and hence it is a formidable task to determine all values at $x=1$ in terms of a minimal set of constants when the weight is large. If the final answer is supposed to be finite one can however extract the powers of the basic divergences (they correspond to leading indices that are $1$) and hence still obtain a finite answer that can be evaluated numerically. The coefficients of the divergences can be checked to be zero numerically as well. \section{ Identities between $\hbox{H}$-functions of related arguments.} In this section we will look at the identities which can be established for suitable changes of the argument. The common feature is that any $\hbox{H}$-function of weight $w$ and argument $x$ can be expressed as an homogeneous expression of the same weight $w$, involving either $\hbox{H}$-functions depending on a same argument, say \( t\), related to \( x \) by the considered change, or constants corresponding to $\hbox{H}$-functions of special constant values of the arguments (typically \( 1 \)). \par The simplest change of the argument is the change $x\rightarrow -x$. We have seen its effect already in \Eq{eq:minsign}. \par Next is the relation between $\hbox{H}$-functions of $x^2$ and of $x$. Because $1+x^2$ is not a particularly interesting object we will have to exclude indices equal to -1 in the $\hbox{H}$-functions of $x^2$. Restricting the indices to only 1 and 0, we can proceed recursively on the weight. For weight 1 we have from \Eq{eq:defineh1}: \begin{eqnarray} \hbox{H}(0;x^2) &=& 2\hbox{H}(0;x) \nonumber \\ \hbox{H}(1;x^2) &=& \hbox{H}(1;x) - \hbox{H}(-1;x) \ , \labbel{eq:rel1}\end{eqnarray} so that the $\hbox{H}$'s of argument \( x^2 \) are expressed in terms of $\hbox{H}$'s of argument \( x \), as required. \par For $w > 1$, if $\vec{m}_w = \vec{0}_w$, \begin{equation} \hbox{H}(\vec{0}_w;x^2) = 2^w \hbox{H}(\vec{0}_w;x) \ ; \end{equation} otherwise, if $\vec{m}_w = (a,\vec{m}_{w-1})$ for the two cases $a = 0$ and $a = 1$ we have, by using the change of variable $x' = t'^2$ \begin{eqnarray} \hbox{H}(0,\vec{m}_{w-1};x^2) & = & \int_0^{x^2} \frac{dx'}{x'} \hbox{H}(\vec{m}_{w-1};x') \nonumber \\ & = & 2 \int_0^x \frac{dt'}{t'} \hbox{H}(\vec{m}_{w-1};t'^2) \\ \hbox{H}(1,\vec{m}_{w-1};x^2) & = & \int_0^{x^2} \frac{dx'}{1-x'} \hbox{H}(\vec{m}_{w-1};x') \nonumber \\ & = & \int_0^x dt' \left( \frac{1}{1-t'} - \frac{1}{1+t'} \right) \hbox{H}(\vec{m}_{w-1};t'^2) \ . \end{eqnarray} The expression of the \( \hbox{H}(\vec{m}_{w-1};t'^2) \) in terms of $\hbox{H}$'s of the same weight and argument \( t' \) is supposedly known (as we proceed recursively on the weight \( w \)); by substituting such expression and then using the very definition \Eq{eq:defn0} all the required \( x^2 \to x \) identities are obtained. An example of weight $w = 2$ is \begin{eqnarray} \hbox{H}(0,1;x^2) & = & 2 \int_0^x \frac{dt'}{t'} \hbox{H}(1;t'^2) \nonumber \\ & = & 2 \int_0^x \frac{dt'}{t'} \left( \hbox{H}(1;t')-\hbox{H}(-1;t') \right) \nonumber \\ & = & 2 \hbox{H}(0,1;x) - 2 \hbox{H}(0,-1;x) \end{eqnarray} and \Eq{eq:minsign} leads to the well known relation $\hbox{Li}_2(x^2) = 2\hbox{Li}_2(x)+2\hbox{Li}_2(-x)$. 
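The last relation is easily checked numerically; the following sketch (Python, series summation only, adequate for $|x|<1$) verifies $\hbox{H}(0,1;x^2) = 2\,\hbox{H}(0,1;x) - 2\,\hbox{H}(0,-1;x)$ using $\hbox{H}(0,1;x)=\hbox{Li}_2(x)$ and $\hbox{H}(0,-1;x)=-\hbox{Li}_2(-x)$.
\begin{verbatim}
import numpy as np

def li2(x, terms=2000):
    # dilogarithm by its defining series, adequate for |x| < 1
    k = np.arange(1, terms + 1)
    return float(np.sum(x**k / k**2))

x = 0.37
lhs = li2(x**2)                      # H(0,1;x^2)
rhs = 2*li2(x) - 2*(-li2(-x))        # 2 H(0,1;x) - 2 H(0,-1;x)
print(lhs, rhs)                      # the two values agree
\end{verbatim}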
We can observe here that a limited set of \( x^2 \to x \) identities could be written only for the Nielsen's polylogarithms corresponding to the \( \hbox{H}_n(x) \) in the notation of \Eq{eq:notchange}, while for the hpl's the set is wider; as an example, one can derive for $w=3$: \begin{eqnarray} \hbox{H}(1,0,1;x^2) & = & 2\biggl(\hbox{H}(1,0,1;x)\!-\!\hbox{H}(-1,0,1;x) \!-\!\hbox{H}(1,0,-1;x)\!+\!\hbox{H}(-1,0,-1;x)\biggr) \labbel{x^2wider}\end{eqnarray} \par The next transformation of the argument we consider is $x \rightarrow 1-x$ which applies again to a smaller set of Nielsen's polylogarithms. Like the previous transformation it is of interest only when there are no negative indices ($1+x \rightarrow 2-x$ is not something we can work with). Proceeding recursively on \( w \), as before, for \( w = 1 \) we have \begin{eqnarray} \hbox{H}(0;1-x) &=& -\hbox{H}(1;x) \nonumber \\ \hbox{H}(1;1-x) &=& -\hbox{H}(0;x). \labbel{eq:rel1a} \end{eqnarray} The extension to higher weights requires a minimum of care. $\hbox{H}(a, \vec{m}_{w-1};1-x) $ of weight \( w > 1 \), with the first index \( a \) equal to \( 0 \) or to \( 1 \) is the generic function. As discussed in Section 3, if \( a=1 \) the function can be expressed in terms of a reduced set of functions, where the leading index \( 1 \) is carried only by \( \hbox{H}(1;1-x) \), for which \Eq{eq:rel1a} holds; therefore, only the case in which the first index \( a \) is 0 is to be considered. In that case, the change of variable \( x'=1-t' \) gives \begin{eqnarray} \hbox{H}(0,\vec{m}_{w-1};1-x) &=& \int_0^{1-x} \frac{dx'}{x'} \hbox{H}(\vec{m}_{w-1};x') \nonumber\\ &=& \int_0^1 \frac{dx'}{x'}\hbox{H}(\vec{m}_{w-1};x') - \int_{1-x}^1 \frac{dx'}{x'} \hbox{H}(\vec{m}_{w-1};x') \nonumber\\ &=& \hbox{H}(0,\vec{m}_{w-1};1) -\int_0^x \frac{dt'}{1-t'} \hbox{H}(\vec{m}_{w-1};1-t')\ , \end{eqnarray} where the constant \( \hbox{H}(0,\vec{m}_{w-1};1) \) is finite (it can be observed here that if the first index is $1$ one runs into the problem that $\hbox{H}(1,\vec{m}_{w-1};1)$ could be divergent). In the general case \( \hbox{H}(\vec{m}_{w-1};1-t') \) will not be irreducible. We can express it in terms of the $\hbox{H}$'s of an irreducible set of weight \( w-1 \), use the supposedly known \( x = 1-t \) identities of weight \( w-1 \) and finally obtain the required weight \( w \) identity by using the definition \Eq{eq:defn0}. As an example we have at weight 4 \begin{eqnarray} \hbox{H}(0,0,1,1;1-x) & = & \hbox{H}(0,0,1,1;1)-\hbox{H}(1;x)\hbox{H}(0,1,1;1)+\hbox{H}(1,1,0,0;x) \nonumber \\ &=& \hbox{H}(0,0,1,1;1) - \hbox{H}(0,1,1;1)\hbox{H}(1;x) - \hbox{H}(0,0,1,1;x) \nonumber \\ & + &\frac{1}{4} \hbox{H}^2(0;x)\hbox{H}^2(1;x) - \hbox{H}(0,1;x)\hbox{H}(0;x)\hbox{H}(1;x) \nonumber \\ & + & \hbox{H}(0,0,1;x)\hbox{H}(1;x) + \hbox{H}(0,1,1;x)\hbox{H}(0;x)\ . \labbel{1mxw4}\end{eqnarray} \par A transformation which applies to all the Nielsen's polylogarithms, \Eq{Nielsen} is \begin{equation} x = 1/y \ ; \hskip 2 truecm y = 1/x \ ; \labbel{eq:xinv} \end{equation} it will be shown that it applies as well to all the $\hbox{H}$-functions. Before continuing, let us recall that the Nielsen's polylogarithms have a (logarithmic) branch point at \( x = 1 \), but are otherwise analytic for smaller values of \( x \), including all the negative real axis; for studying the transformation \Eq{eq:xinv} it can be therefore convenient to establish the identities for negative values of \( x \), and then continue analytically to positive values. 
The analytic properties of the $\hbox{H}$-functions are more complicated. First of all, if the rightmost index is equal to \( 0 \), they have a branch point at \( x=0 \); that is not a problem, as we have already seen that we can express any $\hbox{H}$-function in terms of the functions of a reduced set where the trailing index \( 0 \) is carried only by powers of \( H(0;x)=\ln{x} \), whose analytic properties are well known. If the rightmost index is not \( 0 \) and all the indices are in general equal to \( 1 \) or \( 0 \), the $\hbox{H}$-functions have the same analytic properties as the Nielsen's polylogarithms; but if some of indices are equal to \( -1 \), a branch cut at \( x=-1 \) appears. Therefore, in the general case when indices equal to \( -1 \) are also present (and that is the case even of the reduced and minimal sets, see Section 3), there is no advantage in considering negative values of \( x \), so that we will start from the beginning with an argument equal to \( x + \ieps \), where \( x \) is real and satisfies the constraints \( 0 \le x \le 1 \), while \( \epsilon \) is positive and infinitesimally small; correspondingly, \begin{equation} y = 1/x - \ieps \ , \labbel{y}\end{equation} {\it i.e.} the real part of \( y \) is also positive, but \( y \ge 1 \), while its infinitesimal imaginary part is negative. \par As in the previous cases, we will proceed by induction on the weight \( w \) of the $\hbox{H}$-functions. At \( w=1 \) we have \begin{eqnarray} \hbox{H}(0;y) &=& - \hbox{H}(0;x) \ , \nonumber \\ \hbox{H}(1;y) &=& \hbox{H}(1;x) + \hbox{H}(0;x) - i\pi \ , \nonumber\\ \hbox{H}(-1;y) &=& \hbox{H}(-1;x) - \hbox{H}(0;x) \ ; \labbel{eq:rel10} \end{eqnarray} the constant $\pi$ has appeared; it must be given weight $1$, so that all the formulas will remain homogeneous of the same weight. When continuing the above equations to negative values of \( x \), in the interval \( -1 \ge x \ge 0 \), \( H(0;x) = \ln(x+\ieps) \) will develop a positive imaginary part; in particular, one has \begin{equation} \hbox{H}(0;-1) = i \pi \ , \labbel{-1+ieps} \end{equation} so that \( \hbox{H}(1;-1) \) takes the real value \( - \ln2 \), as expected. \par For $w > 1$, $\vec{m}_w = (a,\vec{m}_{w-1}) $, we can proceed by induction, along the following lines \begin{eqnarray} \hbox{H}(\vec{m}_w;y) &=& \int_0^y dy'f(a;y') \hbox{H}(\vec{m}_{w-1};y') \nonumber\\ &=& \int_0^1 dy'f(a;y') \hbox{H}(\vec{m}_{w-1};y') + \int_1^y dy'f(a;y') \hbox{H}(\vec{m}_{w-1};y') \nonumber\\ &=& \hbox{H}(\vec{m}_w;1) + \int_x^1 \frac{dx'}{x'^2} f\left(a,\frac{1}{x'}\right) \hbox{H}\left(\vec{m}_{w-1};\frac{1}{x'}\right) \ . \labbel{eq:rel11} \end{eqnarray} It is to be noted that one can assume that the first index \( a \) is different from \( 1 \); indeed, as seen in Section 3 any $\hbox{H}$-function of the form \( \hbox{H}(1,\vec{m}_{w-1};y) \) can be expressed in terms of a reduced set of functions, where the leading index \( 1 \) is carried only by powers of \( \hbox{H}(1;y) \), whose transformation is given by \Eq{eq:rel10}. For \( a \) different from \( 1 \), \( \hbox{H}(\vec{m}_w;1) \) is a finite constant and the above formulae are meaningful. 
One further finds \begin{eqnarray} \int \frac{dx'}{x'^2}f\left(0;\frac{1}{x'}\right) &=& + \int \ dx'\ \frac{1}{x'} \ , \nonumber\\ \int \frac{dx'}{x'^2} f\left(-1;\frac{1}{x'}\right) &=& + \int \ dx'\ \left(\frac{1}{x'} - \frac{1}{1+x'} \right) \ ; \nonumber \end{eqnarray} substituting in the {\it r.h.s.} of \Eq{eq:rel11} the identities (of weight \( w-1 \), and therefore known in an approach by induction) which express $\hbox{H}(\vec{m}_{w-1};y'=1/x')$ in terms of $\hbox{H}(\vec{m'}_{w-1};x')$, one obtains a combination of terms of the kind \[ \int_x^1 \ dx'\ f(a;x') \hbox{H}_{\vec{m'}_{w-1}}(x') = \hbox{H}(a,\vec{m'}_{w-1};1) - \hbox{H}(a,\vec{m'}_{w-1};x) \ , \] and the identities of weight \( w \) are established. As an example, we give the \( w=3 \) identity \begin{eqnarray} \hbox{H}\left(0,-1,1; \frac{1}{x}-\ieps\right) &=& - \hbox{H}(0,-1,1;x) + 2\hbox{H}(0,-1,1;1) \nonumber\\ &&{\kern-50pt} + 2\hbox{H}(0,0,-1;x) - 2\hbox{H}(0,0,-1;1) + \hbox{H}(0,0,1;x) - \hbox{H}(0,0,1;1) \nonumber\\ &&{\kern-50pt} - \biggl( \hbox{H}(0,-1;x) + \hbox{H}(0,-1;1) + \hbox{H}(0,1;1) \biggr) \hbox{H}(0;x) + \frac{1}{6}\hbox{H}^3(0;x) \nonumber\\ &&{\kern-50pt} - i\pi\left( \hbox{H}(-1;1)\hbox{H}(0;x) + \frac{1}{2}\hbox{H}^2(0;x) - \hbox{H}(0,-1;x) + \hbox{H}(0,-1;1) \right) \ . \labbel{ex:1/x} \end{eqnarray} Another important set of identities, which is however valid for any set of indices and has no counterpart within the Nielsen's polylogarithms, applies to arguments $x$ and $t$ related by the transformation \begin{equation} x = \frac{1-t}{1+t} \labbel{eq:rel2} \ , \end{equation} whose inverse is again \begin{equation} t = \frac{1-x}{1+x} \labbel{eq:rel3} \ . \end{equation} Even in that case, it turns out that any $\hbox{H}$-function of weight $w$ and argument $x$ can be expressed as a homogeneous expression of weight $w$, involving $\hbox{H}$-functions of argument $t$, related to $x$ by \Eq{eq:rel2}, as well as constants corresponding to $\hbox{H}$-functions of argument $1$. The proof is, again, by induction on the weight. If $w = 1$, from the very definition \Eq{eq:defineh1} one immediately finds \begin{eqnarray} \hbox{H}(0;x) &=& - \hbox{H}(1;t) - \hbox{H}(-1;t) \ , \nonumber\\ \hbox{H}(1;x) &=& - \hbox{H}(0;t) - \hbox{H}(-1;1) + \hbox{H}(-1;t) \ , \nonumber\\ \hbox{H}(-1;x) &=& \hbox{H}(-1;1) - \hbox{H}(-1;t) \ . \labbel{eq:rel4} \end{eqnarray} For $w > 1$ and $\vec{m}_w = \vec{0}_w$ the result is trivially true, as can be verified by inspection; the same is true also for $\vec{m}_w = \vec{1}_w$ and $\vec{m}_w = \vec{-1}_w$. In the more general case, write $\vec{m}_w = (a,\vec{m}_{w-1})$, where the index $a$ takes the values $0,1,-1$. As discussed in Section 3, and already recalled for the \( x \to 1-x \) identities, if \( a=1 \) the function can be expressed in terms of a reduced set of functions, where the leading index \( 1 \) is carried only by \( \hbox{H}(1;x) \), for which \Eq{eq:rel4} holds.
In the other two cases \( a=0,-1 \) the change of variable \begin{eqnarray} x' & = & \frac{1-t'}{1+t'} \nonumber \end{eqnarray} gives \begin{eqnarray} \hbox{H}(0,\vec{m}_{w-1};x) &=& \int_0^x \frac{dx'}{x'} \hbox{H}(\vec{m}_{w-1};x') \nonumber \\ &=& \hbox{H}(0,\vec{m}_{w-1};1) -\int_x^1 \frac{dx'}{x'} \hbox{H}(\vec{m}_{w-1};x') \nonumber \\ &=& \hbox{H}(0,\vec{m}_{w-1};1) -\int_0^t dt' \left( \frac{1}{1-t'} + \frac{1}{1+t'} \right) \hbox{H}\left(\vec{m}_{w-1};\frac{1-t'}{1+t'} \right) \ , \nonumber \\ \hbox{H}(-1,\vec{m}_{w-1};x) &=& \int_0^x \frac{dx'}{1+x'} \hbox{H}(\vec{m}_{w-1};x') \nonumber\\ &=& \hbox{H}(-1,\vec{m}_{w-1};1) -\int_0^t dt' \frac{1}{1+t'} \hbox{H}\left(\vec{m}_{w-1};\frac{1-t'}{1+t'} \right) \ . \labbel{(1-t)/(1+t)} \end{eqnarray} At this point, one can substitute the relations already found to be valid at weight $w-1$, for expressing the functions \( \hbox{H}\left(\vec{m}_{w-1};(1-t')/(1+t')\right) \) in terms of \( \hbox{H} \)'s of weight \( w-1 \) and argument \( t' \), and then perform the last integration in \( t' \) according to the definition \Eq{eq:defn0}. \par As an example, we give the following \( w=3 \) identity \begin{eqnarray} \hbox{H}(-1,-1,1;x) &=& - \hbox{H}(0,-1,-1;t) + \hbox{H}(-1,-1,1;1) \nonumber\\ &&{\kern-50pt} + \hbox{H}(0,-1;t)\hbox{H}(-1;t) + \frac{1}{6}\hbox{H}^3(-1;t) - \frac{1}{2}\hbox{H}^2(-1;t)\hbox{H}(0;t) \nonumber\\ &&{\kern-50pt} - \frac{1}{2}\hbox{H}(-1;1)\hbox{H}^2(-1;t) - \hbox{H}(-1,1;1)\hbox{H}(-1;t) \ . \labbel{tx3}\end{eqnarray} \section{ Identities between $\hbox{H}$'s and related functions.} Let us introduce a related set of functions \( {\mathrm{G}}(\vec{m}_w;x) \), where \( \vec{m}_w \) has almost the same meaning as for the $\hbox{H}$'s, but the first index \( m_w \) is always equal to \( 1 \), {\it i.e.} \( \vec{m} = (1,\vec{m}_{w-1}) \), through the definitions \begin{equation} {\mathrm{G}}(1;x) = \intG \labbel{defG1}\end{equation} for \( w=1 \) and \begin{equation} {\mathrm{G}}(1,\vec{m}_{w-1};x) = \intG \hbox{H}(\vec{m}_{w-1};t) \labbel{defGw}\end{equation} for \( w>1 \). \par The \( {\mathrm{G}}(\vec{m}_w;x) \) are nothing but homogeneous combination of $\hbox{H}$-functions of weight \( w \). As by now usual, we will show it proceeding by induction on \( w \). For \( w=1 \), by performing explicitly the elementary integration we obtain from \Eq{defG1} \begin{equation} {\mathrm{G}}(1;x) = \hbox{H}(1;x) \ . \labbel{defGH1}\end{equation} Next, assume that the identities are established for \( w \); put \( \vec{m} = (a,\vec{m}_{w-1}) \), and consider the functions of weight \( w+1 \) given by \begin{equation} {\mathrm{G}}(1,a,\vec{m}_{w-1};x) = \intG \hbox{H}(a,\vec{m}_{w-1};t) \ . \labbel{defGHw+1}\end{equation} One can differentiate with respect to \( x \), then integrate by parts in \( t \), using of course \Eq{defGw} when relevant; considering for instance the case \( a=-1 \) one obtains \begin{eqnarray} \frac{\partial}{\partial x} {\mathrm{G}}(1,-1,\vec{m}_{w-1};x) &=& \left[ f(-1,x) - f(0,x) \right] {\mathrm{G}}(1,\vec{m}_{w-1};x) \nonumber\\ &+& \left[ f(-1,x) + f(1,x) \right] \hbox{H}(-1,\vec{m}_{w-1};1) \ . \labbel{derG-1}\end{eqnarray} Similarly, one has \begin{eqnarray} \frac{\partial}{\partial x} {\mathrm{G}}(1,0,\vec{m}_{w-1};x) &=& - f(0,x) {\mathrm{G}}(1,\vec{m}_{w-1};x) \nonumber\\ &+& f(-1,x) \hbox{H}(0,\vec{m}_{w-1};1) \nonumber\\ \frac{\partial}{\partial x} {\mathrm{G}}(1,1,\vec{m}_{w-1};x) &=& \left[ f(0,x) + f(1,x) \right] {\mathrm{G}}(1,\vec{m}_{w-1};x) \ . 
\labbel{derG01}\end{eqnarray} One can substitute the already obtained identities expressing \( {\mathrm{G}}(1,\vec{m}_{w-1};x) \) in terms of $\hbox{H}$'s of weight \( w \) and then integrate in \( x \) between \( 0 \) and \( x \) by using the very definition \Eq{eq:defn0} (according to \Eq{defGw} the \( {\mathrm{G}} \)-functions vanish at \( x=0 \)). The required identities of weight \( w+1 \) are then established. As an example, we give one of the identities of weight \( w=4 \) \begin{eqnarray} {\mathrm{G}}(1,0,-1,1;x) &=& - \hbox{H}(0,-1,0,1;x) - \hbox{H}(0,-1,1,1;x) \nonumber\\ &+& \hbox{H}(0,0,0,1;x) + \hbox{H}(0,0,1,1;x) \nonumber\\ &-& \hbox{H}(-1,1;1)\hbox{H}(0,-1;x) - \hbox{H}(-1,1;1)\hbox{H}(0,1;x) \nonumber\\ &+& \hbox{H}(0,-1,1;1)\hbox{H}(1;x) \ . \labbel{Gw4ex}\end{eqnarray} \par In the same way one can work out the similar identities existing for several related classes of functions such as, for instance, \[ \intG \hbox{H}(\vec{a};t) \hbox{H}(\vec{b};xt) \] or \begin{equation} \int_0^1 dt f(a,t) \hbox{H}(\vec{a};t) \hbox{H}(\vec{b};xt) \ . \labbel{moreG}\end{equation} \section{ Special values of the $\hbox{H}$'s and their numerical evaluation.} It is known that the Nielsen's polylogarithms for the special values of the arguments equal to \( +1, -1 \) and \( 1/2 \) can be expressed in terms of a few mathematical constants, typically Riemann \( \zeta \)-functions of integer arguments; the representations which they provide for those constants as definite integrals can be manipulated by means of integration by parts, changes of variables and the like providing the analytic values of a number of definite integrals of special interest. The same applies, and in much more systematic way, to the $\hbox{H}$-functions, thanks to the greater and wider sets of identities which they satisfy. \par In the case of the $\hbox{H}$'s, it is not necessary to consider as independent the values corresponding to the argument equal to \( -1 \); indeed, one can always express any $\hbox{H}$-function in terms of the reduced set of functions in which trailing indices equal to \( 0 \) are missing, so that by using \Eq{eq:minsign} one can replace a value at \( x=-1 \) with the value at \( x=1 \) of a related function. In analogy with the Nielsen's polylogarithms case, it is convenient to consider also the values at \( x=1/2 \) of the functions whose indices are equal to \( 0 \) or \( 1 \) ({\it i.e.} when the index \( -1 \) is missing). \par More specifically, one can consider: \begin{itemize} \item the \( x^2 \to x \) identities, Eq.s(\ref{eq:rel1}-\ref{x^2wider}), for \( x=1 \) ; \item the \( 1-x \to x \) identities, Eq.s(\ref{eq:rel1a}-\ref{1mxw4}). 
They can be used at \( x=1/2 \), providing a first set of identities for the values at \( x=1/2 \), but also at \( x=-1 \); in the second case, one gets values at \( x=2 \), which are converted into values at \( x=1/2 \) by using the \( x \to 1/x \) identities, Eq.s(\ref{eq:xinv}-\ref{ex:1/x}), as well as values at \( x=-1 \), which are converted into values at \( x=1 \) by \Eq{eq:minsign}; \item the just recalled \( x \to 1/x \) identities, Eq.s(\ref{eq:xinv}-\ref{ex:1/x}), at \( x=1 \) and \( x=-1 \), followed by the usual conversion to \( x=1 \) through \Eq{eq:minsign}; \item the \( x \to (1-t)/(1+t) \) identities, Eq.s(\ref{eq:rel2}- \ref{tx3}), at \( x=0 \) corresponding to \( t=1 \) (they are automatically satisfied, by construction, at \( x=1, t=0 \)); \item one more set of identities is obtained by writing the identities between ${\mathrm{G}}$-functions and $\hbox{H}$-functions,discussed in Section 8, at the special value \( x=-1 \), by using the relation, which follows from the definition \Eq{defGw} \[ {\mathrm{G}}(1,\vec{m};-1) = \hbox{H}(-1,\vec{m};1) \] and converting once more the values at \( x=-1 \) of the $\hbox{H}$'s into values at \( x=1 \) by means of \Eq{eq:minsign}. \end{itemize} \par The set of relations obtained in that way is highly redundant; it has been checked explicitly that they generate the table of the \( w=4 \) definite integrals given in Appendix B of the second reference of \cite{KMR}. It has not yet been investigated whether they are sufficient, by themselves, to generate also the tables of higher weights obtained in \cite{HS}. \par Another powerful method to obtain the values at $x = 1/2$ when there are no negative indices is by considering the transformation $x \rightarrow z/(1+z)$, which corresponds to a suitable combination of the transformations $x \to 1/x$ and $x \to (1-x)$. Using the same techniques as in the section on related arguments, all these objects are directly expressed in terms of $\hbox{H}$-functions in $x=1$. Such expressions can then be used in reverse to obtain the numerical values of the `independent constants' that occur in the expressions for the $\hbox{H}$-functions at $x=1$. As an example we have \begin{eqnarray} \hbox{Li}_3\biggl(\frac{1}{2}\biggr) & = & \frac{7}{8}\zeta_3 - \frac{1}{2}\zeta_2\ln(2) + \frac{1}{6} \ln^3(2) \end{eqnarray} which is of course well known. We have also \begin{eqnarray} \hbox{H}_{2,1}\biggl(\frac{1}{2}\biggr) & = & \frac{1}{8}\zeta_3 - \frac{1}{6} \ln^3(2) \end{eqnarray} Both relations provide a power series for the evaluation of $\zeta_3$. The method gives also an expression of $s_6 = S_{-5,-1}(\infty)$ in terms of $\hbox{H}_{5,1}(1/2)$, $\hbox{H}_6(1/2)$ and combinations of constants of a lower weight. Similar dependencies can be derived for the higher weight constants. \par Let us finish with a few remarks on the numerical evaluation of the $\hbox{H}$'s for arbitrary values of \( x \). According to the discussion of Section 3, it is sufficient to restrict ourselves to the $\hbox{H}$'s either of the reduced set or of a minimal set, as all the others can be obtained from them as suitable combinations. The $\hbox{H}$'s of such a set have no trailing indices equal to \( 0 \), so that they can be expanded in series of \( x \) around \( x=0 \). For small values of \( x \) the series will be rapidly convergent, but the convergence will slow down approaching the cuts at \( x=\pm1 \). 
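The slowdown is easy to quantify; as a rough illustration (ours), the number of terms of the series $\sum_i x^i/i^2$ for $\hbox{H}(0,1;x)$ needed before the next term drops below $10^{-15}$ grows quickly as $x$ approaches the cut:
\begin{verbatim}
# Rough count of the series terms of H(0,1;x) = sum_i x^i/i^2 needed before
# the next term falls below 1e-15; convergence degrades towards the cut at x=1.
def nterms(x, eps=1.0e-15):
    i, term = 1, x
    while abs(term) > eps:
        i += 1
        term = x**i / i**2
    return i

for x in (0.2, 0.5, 0.9, 0.99):
    print(x, nterms(x))
\end{verbatim}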
But for \( x \) approaching \( 1 \) we can use the transformation \Eq{eq:rel2}, so that the corresponding \( t = (1-x)/(1+x) \) will fall in the region near \( 0 \) and the expansion in \( t \) will be rapidly converging. \par More exactly, the equation \begin{equation} r = \frac{1-r}{1+r} \end{equation} has the two solutions \( r = - 1 - \sqrt{2} \) and \( r = - 1 + \sqrt{2} \). Therefore, we can use the expansion around $x= 0$ in the interval $ -(\sqrt{2}-1) < x < \sqrt{2}-1 $, where $|x| < \sqrt{2}-1 < 1/2$, switching for $ \sqrt{2}-1 < x < \sqrt{2}+1 $ to $t = (1-x)/(1+x)$, which corresponds to $|t| < \sqrt{2}-1 <1/2$. For greater values of \( x \), one can use the \( x \to 1/x \) identities. For large negative values of \( x \), {\it i.e.} \( x < 1-\sqrt{2} \), one can flip the sign of \( x \) with \Eq{eq:minsign} and then proceed as above. \par In practice the transformation of \Eq{eq:rel2} can lead to a large number of functions to be evaluated and hence it may be more profitable to apply this transformation only for values of $x$ that are much closer to one. If, on the other hand, nearly all $\hbox{H}$-functions of a given weight have to be evaluated for some value of $x$ one can use the turnover value of $\sqrt{2}-1$ in a rather profitable way. \par The values in $x=1$ require some extra attention. These are actually needed rather frequently and hence there exists some literature on them. From \Eq{eq:recsum} it should be clear that an $\hbox{H}$-function in $x=1$ can be expressed in terms of either $S$-sums or $Z$-sums in infinity. Hence much information can be found in \cite{BBB}, \cite{BBBL} and the papers they refer to. Ref.~\cite{HS} gives a different method to evaluate these sums. Recently this method has been used by one of us (J.V.) to obtain all such sums up to weight 9 (see also footnote 2). For only nonnegative indices results have been obtained up to weight 11~\cite{DBprivate}. When the first index of the $\hbox{H}$-function (or the $S$-sum) is one, the value in $x=1$ (or the sum in infinity) will be divergent. Yet we have to consider these objects. As mentioned in the section on the algebra this can be done consistently only in terms of the sums. Hence the safest method is to rewrite the $\hbox{H}$-functions in $x=1$ immediately in terms of either $S$-sums or $Z$-sums. In the case that the weights are low enough, these can then be rewritten in terms of a limited set of `fundamental constants'. \section{Mellin transforms} At times one may need the Mellin transform of the Harmonic polylogarithms. In ref~\cita{HS} a method is given to evaluate such transforms for a class of functions which is more or less the class of $\hbox{H}$-functions. There is however one complication with Mellin transforms. Divergencies at $x=1$ must be extracted. This is because the Mellin transform is defined by \begin{eqnarray} \labbel{eq:mellin} M(f(x),N) & = & \int_0^1dx\ x^N f(x) \nonumber \\ M(\frac{f(x)}{(1\!-\! x)_+},N) & = & \int_0^1dx\ \frac{x^N f(x)-f(1)}{1\!-\! x} \nonumber \\ M(\frac{f(x)\ln^p(1\!-\! x)}{(1\!-\! x)_+},N) & = & \int_0^1dx\ \frac{\left(x^N f(x)-f(1)\right)\ln^p(1\!-\! x)}{1\!-\! x} \end{eqnarray} in which the function $f$ is supposed to be finite for $x=1$ when the factor $1/(1\!-\! x)_+$ is present. Hence we have to pay attention to the powers of $\ln(1\!-\! x)$. They can be isolated with \Eq{eq:ln(1-x)}. After this extraction the remaining $\hbox{H}$-functions are finite in $x=1$. At this point we can attack the Mellin transforms. 
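Before doing so, a quick numerical sanity check (ours, purely illustrative) of the simplest case of the second definition in \Eq{eq:mellin}, with $f(x)=1$, may be useful: it gives $\int_0^1 dx\,(x^N-1)/(1-x) = -S_1(N)$.
\begin{verbatim}
# Illustrative check of the simplest "+"-prescription transform:
# int_0^1 dx (x^N - 1)/(1 - x) = -S_1(N).
from mpmath import mp, quad, mpf

mp.dps = 30
def S1(N):
    return sum(mpf(1)/k for k in range(1, N+1))

for N in (1, 3, 10):
    val = quad(lambda x: (x**N - 1)/(1 - x), [0, 1])
    print(N, val, -S1(N))
\end{verbatim}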
It is easy to obtain the lowest weight results: \begin{eqnarray} \int_0^1dx x^n \hbox{H}(0;x) & = & -\frac{1}{(n+1)^2} \nonumber \\ \int_0^1dx x^n \hbox{H}(1;x) & = & \frac{S_1(n+1)}{n+1} \nonumber \\ \int_0^1dx x^n \hbox{H}(-1;x) & = & \sign(n)\frac{S_{-1}(n+1)}{n+1} +\frac{\ln(2)}{n+1}\left(1+\sign(n)\right) \end{eqnarray} in which we have used that $\hbox{H}(-1;1) = -S_{-1}(\infty) = \ln(2)$. The higher weight results can be obtained by recursion. Like in ref~\cita{HS} this is done by partial integration. We also exchange the sums immediately after each step so that we may do one of them immediately. The result is: \begin{eqnarray} \labbel{eq:recmellin} \int_0^1 dx\sum_{i=n}^\infty \sigma^i x^i \hbox{H}_{0,\vec{m}}(x) \frac{S_{\vec{p}}(i\!+\! 1)}{(i\!+\! 1)^k} & = & \sigma \hbox{H}_{0,\vec{m}}(1) \left( S_{\sigma(k\!+\! 1),\vec{p}}(\infty) -S_{\sigma(k\!+\! 1),\vec{p}}(n)\right) \nonumber \\ && -\int_0^1 dx\sum_{i=n}^\infty \sigma^i x^i \hbox{H}_{\vec{m}}(x) \frac{S_{\vec{p}}(i\!+\! 1)}{(i\!+\! 1)^{k\!+\! 1}} \\ \int_0^1 dx\sum_{i=n}^\infty \sigma^i x^i \hbox{H}_{1,\vec{m}}(x) \frac{S_{\vec{p}}(i\!+\! 1)}{(i\!+\! 1)^k} & = & \sigma \hbox{H}_{1,\vec{m}}(1) \left( S_{\sigma(k\!+\! 1),\vec{p}}(\infty) -S_{\sigma(k\!+\! 1),\vec{p}}(n)\right) \nonumber \\ && -\int_0^1 dx\sum_{i=n}^\infty x^i \hbox{H}_{\vec{m}}(x)\Bigl( \sigma S_{\sigma(k\!+\! 1),\vec{p}}(i\!+\! 1) \nonumber \\ && -\sigma^i \frac{S_{\vec{p}}(i\!+\! 1)}{(i\!+\! 1)^{k\!+\! 1}} -\sigma S_{\sigma(k\!+\! 1),\vec{p}}(n) \Bigr) \\ \int_0^1 dx\sum_{i=n}^\infty \sigma^i x^i \hbox{H}_{\!-\! 1,\vec{m}}(x) \frac{S_{\vec{p}}(i\!+\! 1)}{(i\!+\! 1)^k} & = & \sigma \hbox{H}_{\!-\! 1,\vec{m}}(1) \left( S_{\sigma(k\!+\! 1),\vec{p}}(\infty) -S_{\sigma(k\!+\! 1),\vec{p}}(n)\right) \nonumber \\ && -\int_0^1 dx\sum_{i=n}^\infty x^i \hbox{H}_{\vec{m}}(x)\Bigl( \sigma\sign(i) S_{\!-\!\sigma(k\!+\! 1),\vec{p}}(i\!+\! 1) \nonumber \\ && +\sigma^i \frac{S_{\vec{p}}(i\!+\! 1)}{(i\!+\! 1)^{k\!+\! 1}} -\sigma\sign(i) S_{\!-\!\sigma(k\!+\! 1),\vec{p}}(n) \Bigr) \end{eqnarray} The variable $\sigma$ is either $1$ or $-1$. This leaves only the evaluation of the $\hbox{H}$-functions in $x=1$. These values do not have to be finite. Only the $\hbox{H}$-functions that are used in the subtraction in \Eq{eq:mellin} are finite. This causes no problems provided the divergencies are regularized in the representation in terms of $S$-sums as explained before. As an example we show here a nontrivial Mellin transform: \begin{eqnarray}M\Biggl(\frac{H_{1,\!-\! 2,1,0}(x)}{1-x},N\Biggr) & = & S_{1,\!-\! 2,\!-\! 1,2}(N) -2S_{1,\!-\! 2,\!-\! 3}(N) +2S_{1,5}(N) \nonumber \\ && -\frac{1}{2}S_{1,\!-\! 2}(N)\Biggl(\zeta_2\ln(2)+\zeta_3\Biggr) +S_{1,2}(N)\Biggl(\frac{1}{2}\zeta_2\ln(2)-\zeta_3\Biggr) \nonumber \\ && +S_{1,1}(N)\Biggl(4\hbox{Li}_4\biggl(\frac{1}{2}\biggr)+\frac{1}{6}\ln^4(2) -\zeta_2\ln^2(2)-\frac{13}{40}\zeta_2^2\Biggr) \nonumber \\ && +S_1(N)\Biggl(\frac{9}{2}\zeta_2\zeta_3-\frac{83}{8}\zeta_5\Biggr) -\frac{1}{24}\zeta_2\ln^4(2) +\frac{7}{16}\zeta_2\zeta_3\ln(2) \nonumber \\ && -\zeta_2\hbox{Li}_4\biggl(\frac{1}{2}\biggr) +\frac{1}{4}\zeta_2^2\ln^2(2) +\frac{447}{840}\zeta_2^3 -\frac{157}{32}\zeta_3^2 +\frac{7}{2}S_{\!-\! 5,\!-\! 1}(\infty) \end{eqnarray} The sum in the last term is irreducible. In the case that the weight of the terms too large (currently larger than 9) it becomes rather hard to obtain the values for the $\hbox{H}$-functions in $x=1$ or alternatively for the $S$-sums in infinity. 
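The lowest weight results given at the beginning of this section, by contrast, are easily verified by direct numerical quadrature; a minimal sketch (ours, using the {\tt mpmath} library) for the first two of them is
\begin{verbatim}
# Check int_0^1 x^n H(0;x) dx = -1/(n+1)^2  and
#       int_0^1 x^n H(1;x) dx = S_1(n+1)/(n+1),
# with H(0;x) = log(x) and H(1;x) = -log(1-x).
from mpmath import mp, quad, log, mpf

mp.dps = 25
def S1(N):
    return sum(mpf(1)/k for k in range(1, N+1))

for n in (0, 2, 5):
    m0 = quad(lambda x: x**n * log(x), [0, 1])
    m1 = quad(lambda x: -x**n * log(1 - x), [0, 1])
    print(n, m0 + mpf(1)/(n+1)**2, m1 - S1(n+1)/(n+1))   # both differences ~ 0
\end{verbatim}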
Because the algebra for the $\hbox{H}$-functions in $x=1$ is different from the algebra for the $\hbox{H}$-functions for general values of $x$ there may be large numbers of $\hbox{H}$-functions left that each are divergent at $x=1$. The reason is that some algebraic work is done first with the general algebraic rules and has to be `undone' with the rules for $x=1$. The relations that make the divergences cancel may not be easy to find. One can still obtain numerical results however. If one is faced with higher weights one may proceed as follows. The $\hbox{H}$-functions in $x=1$ are first expressed in terms of $S$-sums in infinity. Then the shuffle algebra for the $S$-sums is used to extract the divergencies in a way that is similar to how this is done for the powers of $\ln(1-x)$ for the $\hbox{H}$-functions. Because the divergences have to cancel each other, all divergent terms should disappear, even though we may not have the algebraic methods to prove this for the case at hand. The remaining finite expression can in principle be evaluated numerically. Inverse Mellin transforms are now relatively easy. As pointed out in ref~\cita{HS} each $S$-sum has a single most complicated original function in terms of $\hbox{H}$-functions in which we can define `most complicated' by function with largest weight or in the case of identical weights the largest number of nonzero indices. And actually one can obtain the relation between the $S$-sum of which one needs the inverse Mellin transform and this most complicated $\hbox{H}$-function from the recursion relations in \Eq{eq:recmellin}. Hence the algorithm is clear: \begin{itemize} \item Locate the most complicated $S$-sum(s). \item Construct the corresponding $\hbox{H}$-function(s) in $x$-space. \item Add it and subtract it. \item Make the Mellin transform of the subtracted version. This will cancel the original $S$-sum. \item Repeat the above steps until there are no more $S$-sums remaining. \item Multiply the remaining constant terms by $\delta(1\!-\! x)$. \end{itemize} This algorithm will properly terminate. It has only one problem: Some Mellin transforms have a factor $\sign(N)$ and some don't. What if we take an $S$-sum which should have a factor $\sign(N)$ but we omit it? Here we have to realize that the inverse Mellin transform is to be constructed from either all even or from all odd moments only. Hence we have to specify whether $N$ is even or odd. This will give a value to $\sign(N)$. Hence the only thing that remains is to give the relation between an $S$-sum and the most complicated $\hbox{H}$-function that contributes to it. \begin{itemize} \item If the number of negative indices is odd, there will be a factor $1/(1\!+\! x)$, otherwise there will be a factor $1/(1\!-\! x)_+$. \item Next copy the index field to the $\hbox{H}$-function. \item Working from the rightmost index to the left, each index will get a sign that is the combination of its old sign and the signs of all indices to the left of it. \item There will be an additional overall sign on the term which is the sign of the last index. \item There will be an additional overall sign on the term which is $\sign(w-1-d)$ in which $w$ is the weight of the $S$-sum and $d$ its depth (which is the number of nonzero indices). \item Each negative index in the current configuration will give a minus sign to the term. \end{itemize} We will give two examples of weight 7 functions. First and example that involves subtractions with $\ln^2(1\!-\! 
x)$ in the Mellin transform: \begin{eqnarray} S_{1,1,2,1,2}(N) & \rightarrow & \frac{1}{1\!-\! x}\Biggl( -\hbox{H}_{1,1,2,1,0}(x) -\frac{1}{2} \hbox{H}_{1,1}(x) \zeta_2^2 \Biggr) \nonumber \\ && +\delta(1\!-\! x)\Biggl( - \frac{3}{2}\zeta_2\zeta_5 - \frac{7}{5}\zeta_2^2\zeta_3 + 17\zeta_7 \Biggr) \end{eqnarray} In this case there is no difference for even values of $N$ and for odd values of $N$. However the next example is different. For even values of $N$ we have \begin{eqnarray} S_{\!-\! 1,1,\!-\! 2,1,2}(N) & \rightarrow & \frac{1}{1\!-\! x}\Biggl( - \hbox{H}_{-1,-1,2,1,0}(x) + 2\hbox{H}_{-1,-1,0,0,0,0}(x) - 2\hbox{H}_{0,0,0,0,0,0}(x) \nonumber \\ && \ \ \ + \frac{1}{2}\hbox{H}_{-1,-1}(x) \zeta_2^2 - \hbox{H}_{0,0}(x) \zeta_2^2 \nonumber \\ && \ \ \ + \hbox{H}_{-1}(x) ( \frac{1}{16}\zeta_2\zeta_3 + \zeta_2^2\ln(2) + \frac{67}{64}\zeta_5 ) \nonumber \\ && \ \ \ + \hbox{H}_{0}(x) ( - \frac{1}{8}\zeta_2\zeta_3 - 2\zeta_2^2\ln(2) - \frac{67}{32}\zeta_5 ) \nonumber \\ && \ \ \ - \frac{1}{8}\zeta_2\zeta_3\ln(2) - \frac{3}{2}\zeta_2^2\ln^2(2) - \frac{307}{560}\zeta_2^3 + \frac{157}{128}\zeta_3^2 + \frac{21}{64}\zeta_5\ln(2) - \frac{5}{4}\sigma_6 \Biggr) \nonumber \\ && +\frac{1}{1\!+\! x}\Biggl( (\hbox{H}_{1}(x)+2\hbox{H}_{0}(x)) ( \frac{1}{16}\zeta_2\zeta_3 - \frac{53}{64}\zeta_5 ) \nonumber \\ && \ \ \ - \frac{61}{560}\zeta_2^3 + \frac{35}{128}\zeta_3^2 + \frac{93}{64}\zeta_5\ln(2) - \frac{3}{4}\sigma_6 \ \Biggr) \nonumber \\ && +\delta(1\!-\! x)\Biggl( - \frac{1}{16}\zeta_2\zeta_3\ln^2(2) - \frac{957}{224}\zeta_2\zeta_5 + \frac{1}{120}\zeta_2\ln^5(2) - \zeta_2\hbox{Li}_5(\frac{1}{2}) \nonumber \\ && \ \ \ - \frac{93}{140}\zeta_2^2\zeta_3 - \frac{1}{12}\zeta_2^2\ln^3(2) - \frac{29}{280}\zeta_2^3\ln(2) - \frac{1355}{896}\zeta_3^2\ln(2) \nonumber \\ && \ \ \ - \frac{197}{64}\zeta_5\ln^2(2) + \frac{37215}{3584}\zeta_7 + \frac{19}{28}\ln(2)\sigma_6 - \frac{10}{7}\sigma_{7,a} + \frac{29}{14}\sigma_{7,b}\ \Biggr) \end{eqnarray} in which \begin{eqnarray} \sigma_6 & = & S_{-5,-1}(\infty) \nonumber \\ \sigma_{7,a} & = & S_{-5,1,1}(\infty) \nonumber \\ \sigma_{7,b} & = & S_{5,-1,-1}(\infty) \end{eqnarray} In the case of odd $N$ the terms with $1/(1\!+\! x)$ change sign. As one can see these formulae can become rather involved, even though the number of terms is rather small compared to the number of functions that exist in $x$-space for this weight. In the case that sums of a higher weight are considered one may not be able to substitute the values of the $\hbox{H}$-functions at $x=1$. The same considerations as for the Mellin transforms can be used to obtain an answer that can at least be evaluated numerically. In general the formulae will of course be much lengthier. \vskip 2 truecm {\large\bf Acknowledgements.} One of the authors (E.R.) wants to thank the Alexander von Humboldt Stiftung for the generous support of his stay at Karlsruhe. The other author (J.V.) would like to thank the Programa "Catedra" of the Fundacion BBV for support during the part of this work which was done at the Universidad Aut\'onoma of Madrid. \noindent We like to thank S. Moch for discussions. \vskip 2 truecm \def\NC{{\sl Nuovo Cimento }\ } \def\NP{{\sl Nuc. Phys. }\ } \def\PL{{\sl Phys. Lett .}\ } \def\PR{{\sl Phys. Rev. }\ } \def\PRL{{\sl Phys. Rev. Lett. }\ }
\section{INTRODUCTION} The symmetry of the superconducting order parameter or energy gap $2\Delta(\vec{k})$ gives an important clue to the mechanism of high T$_c$ superconductivity. Experiments which probe a possible phase shift in the order parameter with Josphson tunneling\cite{Schrieffer} are in favor of a d$_{x^2-y^2}$-wave pairing. Moreover, various experiments\cite{Schrieffer}, including electronic Raman scattering\cite{Devereaux}, are indicating the existence of nodes in the energy gap. A d$_{x^2-y^2}$-symmetry of the order parameter is predicted for a pairing mechanism based on spin fluctuations\cite{Schrieffer}. Furthermore, there are theoretical predictions for the magnitude of the energy gap and its temperature dependence\cite{Pao}. Electronic Raman scattering of free carriers occurs due to mass fluctuations about the Fermi surface. A continuous scattering background up to high frequencies is observed in all investigated high T$_c$ superconductors. At temperatures below T$_c$ this scattering background becomes renormalized for different frequencies below the energy gap $2\Delta(\vec{k})$, depending on the scattering geometry. \section{RESULTS AND DISCUSSION} We investigated the temperature dependence of the electronic Raman scat\-ter\-ing in single crystals of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ and its variation with the oxygen content $\delta$. The crystals were characterized by magnetic measurements in a SQUID magnetometer, by X-ray diffraction, c-axis resistivity, and by Raman scattering. In order to change the oxygen content $\delta$, we annealed the same single crystal subsequently in Ar and in O$_2$. After each annealing step $\delta$ was determined by comparing T$_c$ and the c-axis parameter to T$_c$($\delta$) and c($\delta$) known from iodometric titration of polycrystalline samples. For the Raman measurements we used the 488nm excitation line of an Ar$^+$-laser in quasi-backscattering geometry and power levels below 15 W/cm$^2$. \begin{figure} \framebox[5in]{\rule[.9in]{0in}{.9in}} \caption{Normalized intensities of the A$_{1g}$, B$_{1g}$, and B$_{2g}$ symmetry component of the electronic Raman scattering for (a) $\delta$=0.17 and (b) $\delta$=0.29.} \end{figure} \begin{table} \begin{tabular}{|c|c||c|c|c||c|}\hline oxygen & T$_c$ & \multicolumn{3}{c||}{maximum (cm$^{-1}$) for} & maximum energy gap \\ content $\delta$ & (K) & A$_{1g}$ & B$_{2g}$ & B$_{1g}$ & $2\Delta_{max}/k_BT_c$ \\ \hline 0.17 & 86 & 330$\pm$20 & 370$\pm$30 & 520$\pm$20 & 8.7$\pm$0.3 \\ \hline 0.29 & 81 & 280$\pm$20 & 340$\pm$40 & 460$\pm$20 & 8.2$\pm$0.4 \\ \hline \end {tabular} table 1. Positions of the maximum for different symmetry components. The maximum energy gap is determined by the B$_{1g}$ maximum. \end{table} In order to suppress phonon peaks, and to emphasize the redistribution of the electronic Raman scattering intensity below T$_c$, spectra at T=10K are divided by spectra at T=90K, see fig.\ 1. In the underdoped ($\delta$=0.17, $\partial T_c/\partial\delta>0$) and in the overdoped regime ($\delta$=0.29, $\partial T_c/\partial\delta<0$) the frequency behaviors for the different symmetry components are consistent with a d$_{x^2-y^2}$-symmetry of the energy gap according to ref.\ 2, i.e.\ the low frequency behaviors and the maximum positions are different for the A$_{1g}$, B$_{2g}$, and B$_{1g}$ symmetry, see tab.\ 1. 
In the case of an energy gap with d$_{x^2-y^2}$-wave symmetry, $2\Delta(\vec{k})$ has a different weight in a particular direction of $\vec{k}$ for each symmetry component of the electronic Raman scattering\cite{Devereaux}. This leads to the different frequency behaviors. Regardless of the symmetry of the energy gap\cite{Devereaux}, the structure in B$_{1g}$-symmetry can be identified with the maximum energy gap $2\Delta_{max}$, see tab.\ 1. With doping the change of $2\Delta_{max}$ is stronger than the change in T$_c$, indicating non-BCS behavior ($2\Delta_{max}/k_BT_c\neq const.$). In fig.\ 2a we show the temperature dependence of the A$_{1g}$- and B$_{1g}$-peak of an overdoped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ single crystal with $\delta$=0.27, T$_c$=83K. Both peaks show the same T dependence up to T/T$_c$=0.73. This indicates that both peaks are due to the opening of the superconducting energy gap. In fig.\ 2b the temperature dependence of the B$_{1g}$-peak is shown for different oxygen contents $\delta$. Upon cooling below T$_c$ the energy gap opens more rapidly in the underdoped crystal ($\delta$=0.17). \begin{figure} \framebox[5in]{\rule[.9in]{0in}{.9in}} \caption{Temperature dependence of (a) the A$_{1g}$ and B$_{1g}$ peak for $\delta$=0.27, T$_c$=83K and (b) the B$_{1g}$ peak for $\delta$=0.17 and $\delta$=0.29 in Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$.} \end{figure} The results shown here are in good agreement with earlier measurements on Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ single crystals with different oxygen concentrations\cite{Martin,Staufer}, explained by a change in anisotropy or dimensionality, i.e coupling between CuO$_2$ planes with doping. However, similar behavior is seen in the less anisotropic YBa$_2$Cu$_3$O$_7$\cite{Hackl}. For this reason we emphasize the similarity of this behavior with predictions based on paramagnon coupling. Within this model spin fluctuations have a pair-breaking and pair-binding effect. The opening of the superconducting energy gap leads to a decrease of low frequency spin fluctuations and thus to less pair-breaking. This feedback effect leads in underdoped samples to a more abrupt temperature dependence of $\Delta(T)/\Delta_{max}$ and to a higher magnitude of $2\Delta_{max}$ compared to BCS theory\cite{Pao} and is in good agreement with our results. Since with increased oxygen content $\delta$ the Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ single crystals are farther away from the antiferromagnetic insulator, this feedback effect should become reduced. This explains the smaller $2\Delta_{max}/k_BT_c$ and its weaker temperature dependence for the overdoped crystals compared with the underdoped ones. In conclusion, the frequency dependencies of the different symmetry components of the electronic Raman scattering are consistent with a d$_{x^2-y^2}$-wave energy gap. Furthermore the measured magnitude of the energy gap, its temperature dependence and its variation with the oxygen content $\delta$ is consistent with a pairing mechanism due to antiferromagnetic spin fluctuations, i.e. paramagnon coupling. We would like to thank D.\ Einzel, T.P.\ Devereaux, A.\ Kampf, M.\ Krantz, M.\nolinebreak[4]\ Cardona, and K.\ Maki for stimulating discussions. This work was supported by DFG through SFB 341.
\section{Introduction} In a paraconsistent context where formulas have three admissible assignments, and assuming the standard properties with respect to the ``classical'' assignments, that is \begin{center} \begin{tabular}{c|c} $A$ & $N A$ \\ \hline $\{1\}$ & $\{0\}$ \\ $\{1, 0\}$ & \\ $\{0\}$ & $\{1\}$ \\ \end{tabular} \end{center} \noindent there are only two possibilities for paraconsistent negation $N$, namely the de Morgan negation found in González-Asenjo/Priest's \textbf{LP} and the negation of Sette's \textbf{P$^1$}, respectively: \begin{center} \begin{tabular}{c|c} $A$ & $\sim \! A$ \\ \hline $\{1\}$ & $\{0\}$ \\ $\{1, 0\}$ & $\{1, 0\}$\\ $\{0\}$ & $\{1\}$ \\ \end{tabular} \hfil \begin{tabular}{c|c} $A$ & $\neg A$ \\ \hline $\{1\}$ & $\{0\}$ \\ $\{1, 0\}$ & $\{1\}$\\ $\{0\}$ & $\{1\}$ \\ \end{tabular} \end{center} Among his many contributions in logic and philosophy, Chris Mortensen introduced a connexive logic commonly known as `\textbf{M3V}'. \textbf{M3V} is obtained by adding a special conditional to González-Asenjo/Priest's \textbf{LP}. Such conditional is structurally the same as the one used by Anderson and Belnap in \cite{AndersonBelnap1975} to show the consistency of the logic \textbf{E} and, in particular, to show how to block the paradox of necessity, i.e. to avoid validating formulas of the form $X>(Y>Z)$, with $>$ an entailment connective, $X$ a contingent truth and $(Y>Z)$ a logical truth.\footnote{A logic containing \textbf{M3V} was developed around the same time by Peña to cope with comparatives, gradables and vagueness. See \cite{Pena1995} for a summary of his results and \cite{Paoli2006} for a more friendly exposition of them.} Among its most notable features, besides its being connexive, \textbf{M3V} is negation-inconsistent and it validates the negation of every conditional. But Mortensen has also studied and applied extensively other non-connexive logics. On the one hand there is \emph{closed set logic}, \textbf{CSL}, a paraconsistent logic motivated by dualizing open set logic, i.e. intuitionistic logic. \textbf{CSL} has notoriously been found defective in lacking a conditional connective because in it there is no connective $\copyright$ such that $A\copyright B$ is untrue if $A$ is true and $B$ untrue, as one would expect from a conditional. The two most obvious candidates, $\neg A\vee B$ and $\neg (A\wedge\neg B)$ are true when $A$ is true and $B$ is untrue, delivering thus countermodels to Detachment.\footnote{Mortensen has always argued that this is not a serious defect, especially when it comes to doing mathematics with \textbf{CSL}. We will not address this issue here. The fact is that there is no such connective in the logic; how bad is that is a different discussion.} On the other hand, in \cite{Mortensen1989a} he proposed another logic, which later Marcos \cite{Marcos2006} modified to obtain a variant of Sette's logic, identified and called \textbf{P$^2$} by Marcos. In this paper, we analyze and compare systematically the connexive variants of \textbf{CSL} and \textbf{P$^2$}, obtained by adding the \textbf{M3V} conditional to them. Our main observations are two. First, that the inconsistency of \textbf{M3V} is exacerbated in the connexive variant of closed set logic, while it is attenuated in the connexive variant of the Sette-like \textbf{P$^2$}. Second, that the \textbf{M3V} conditional is, unlike other conditionals, \emph{connexively stable}, meaning that it validates the core connexive schemas when combined with the main paraconsistent negations. 
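Since there are only three admissible assignments, both observations can be checked mechanically by brute force. The following minimal Python sketch (ours, purely illustrative; it encodes the truth tables recalled in the sections below) verifies Aristotle's and Boethius' theses for the \textbf{E}-conditional combined with each of the two negations:
\begin{verbatim}
# Brute-force check over the three admissible values {1}, {1,0}, {0}.
T, B, F = frozenset({1}), frozenset({1, 0}), frozenset({0})
VALS = (T, B, F)

def imp(a, b):
    # E-conditional: always false; true iff 1 not in a, or 0 not in b,
    # or both 0 in a and 1 in b.
    v = {0}
    if (1 not in a) or (0 not in b) or (0 in a and 1 in b):
        v.add(1)
    return frozenset(v)

def neg_lp(a):            # de Morgan (LP) negation: swaps 1 and 0
    return frozenset({1 - x for x in a})

def neg_cs(a):            # negation of CSL3 / Sette's P^1
    return F if a == T else T

def valid(formula, arity):
    args = [(a,) for a in VALS] if arity == 1 else [(a, b) for a in VALS for b in VALS]
    return all(1 in formula(*t) for t in args)

for name, n in (("LP negation", neg_lp), ("closed-set negation", neg_cs)):
    aristotle = valid(lambda a: n(imp(a, n(a))), 1)
    boethius  = valid(lambda a, b: imp(imp(a, b), n(imp(a, n(b)))), 2)
    print(name, "Aristotle:", aristotle, "Boethius:", boethius)
\end{verbatim}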
The plan of the paper is as follows. In Section 2 we present some preliminary, general notions that will be useful for the remainder of the paper. In Section 3 we present \textbf{M3V} and mention some of its properties; some of them are already well-known, but others are noticed here for the first time. In Section 4 we introduce \textbf{cCSL3}, closed set logic restricted to three admissible interpretations, like \textbf{M3V}, enriched with the \textbf{E}-conditional. We give some of its most notable features, including likenesses and differences with \textbf{M3V}. There we show that, unlike other conditionals, the \textbf{E}-conditional is connexively stable with respect to both $\sim$ and $\neg$. Finally, in Section 5 we present \textbf{cP$^2$}. It shares the $\{\sim, \rightarrow_{\tiny{\textbf{E}}}\}$-fragment with \textbf{M3V}, but still they differ in ways that are significant for connexive logicians. \section{Preliminary notions} Let $A$ and $B$ arbitrary formulas of a given formal language, and $\Gamma$ a set of formulas of that language. In this paper, logical consequence is understood as truth-preservation from premises to conclusions in all interpretations, that is: \begin{itemize} \item[] $\Gamma\models_{\tiny{\textbf{L}}} A$ if and only if, for all $\sigma$, if $1\in\sigma(B)$ for all $B\in\Gamma$ then $1\in\sigma(A)$ \end{itemize} \noindent Now, let $N$ and $>$ be a negation and a conditional, respectively. \emph{Unrestricted Detachment} is logically valid in \textbf{L} iff $$A, A> B\models_{\tiny{\textbf{L}}} B$$ \bigskip \noindent A logic \textbf{L} is \emph{connexive} iff the following hold: \noindent $\models_{\tiny{\textbf{L}}} N \! (A>N \! A)$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Aristotle's Thesis \noindent $\models_{\tiny{\textbf{L}}} N \! (N \! A> A)$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Variant of Aristotle's Thesis \noindent $\models_{\tiny{\textbf{L}}} (A > B) > N \! (A > N \! B)$\ \ \ \ \ \ Boethius' Thesis \noindent $\models_{\tiny{\textbf{L}}} (A > N \! B) > N \! (A > B)$\ \ \ \ \ \ Variant of Boethius' Thesis \noindent and \noindent $\not\models_{\tiny{\textbf{L}}}(A> B)>(B> A)$\ \ \ \ \ \ \ \ \ \ \ Non-symmetry of implication \bigskip \noindent A logic \textbf{L} is \emph{hyper-connexive} iff it is connexive and at least one of the following holds: \noindent $\models_{\tiny{\textbf{L}}} N \! (A > N \! B) > (A > B)$\ \ \ \ \ \ Converse of Boethius' Thesis \noindent $\models_{\tiny{\textbf{L}}} N \! (A > B) > (A > N \! B)$\ \ \ \ \ \ Converse of Variant of Boethius' Thesis \bigskip \noindent A logic \textbf{L} is \emph{nexive} iff the following hold: \noindent $\models_{\tiny{\textbf{L}}} N \! (A>N \! A)$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Aristotle's Thesis \noindent $\models_{\tiny{\textbf{L}}} N \! (N \! A> A)$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Variant of Aristotle's Thesis \noindent $\models_{\tiny{\textbf{L}}} (N \! A > B) > N \! (A > B)$\ \ \ \ \ \ Francez's Thesis \noindent $\models_{\tiny{\textbf{L}}} (A > B) > N \! (N \! A > B)$\ \ \ \ \ \ Variant of Francez's Thesis \noindent and \noindent $\not\models_{\tiny{\textbf{L}}}(A> B)>(B> A)$\ \ \ \ \ \ \ \ \ \ \ Non-symmetry of implication \bigskip \noindent A logic \textbf{L} is \emph{hyper-nexive} iff it is nexive and at least one of the following holds: \noindent $\models_{\tiny{\textbf{L}}} N \! (A > B) > (N \! A > B)$\ \ \ \ \ \ Converse of Francez's Thesis \noindent $\models_{\tiny{\textbf{L}}} N \! (N \! 
A > B) > (A > B)$\ \ \ \ \ \ Converse of Boethius' Thesis \bigskip \noindent A logic \textbf{L} is \emph{contradictory} or \emph{negation-inconsistent} iff there is an $A$ such that $\models_{\tiny{\textbf{L}}} A$ and $\models_{\tiny{\textbf{L}}} N \! A$. \section{Mortensen's three-valued connexive logic} The logic \textbf{M3V} was introduced, although not with that name, in \cite{Mortensen1984} (the name was given in \cite{McCall2012}, presumably to mean ``Mortensen's 3-valued connexive logic''). The following truth tables, with $V_{\tiny{\textbf{M3V}}}=\{2, 1, 0\}$ and $D^{+}=\{2, 1\}$, characterize \textbf{M3V}: \begin{center} \begin{tabular}{cc|c|c|c|c} $A$ & $B$ & $\sim \! A$ & $A\wedge B$ & $A\vee B$ & $A\rightarrow_{\tiny{\textbf{E}}} B$\\ \hline $2$ & $2$ & $0$ & $2$ & $2$ & $1$\\ $2$ & $1$ & $0$ & $1$ & $2$ & $0$\\ $2$ & $0$ & $0$ & $0$ & $2$ & $0$\\ $1$ & $2$ & $1$ & $1$ & $2$ & $1$\\ $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $1$ & $0$ & $1$ & $0$ & $1$ & $0$\\ $0$ & $2$ & $2$ & $0$ & $2$ & $1$\\ $0$ & $1$ & $2$ & $0$ & $1$ & $1$\\ $0$ & $0$ & $2$ & $0$ & $0$ & $1$ \end{tabular} \end{center} \noindent A biconditional can be defined as usual, that is, as $(A\rightarrow_{\tiny{\textbf{E}}} B)\wedge(B\rightarrow_{\tiny{\textbf{E}}} A)$. It must be noted that Mortensen's satisfiability conditions for the conditional are structurally the same as the ones used by Anderson and Belnap in \cite{AndersonBelnap1975} to show the consistency of the logic \textbf{E}, hence the subscript. In particular, they showed how to block the paradox of necessity, i.e. to avoid validating formulas of the form $X>(Y>Z)$, where $X$ is a contingent truth and $(Y>Z)$ is a logical truth.\footnote{A logic containing \textbf{M3V} was developed around the same time by Peña to cope with comparatives, gradables and vagueness. See \cite{Pena1995} for a summary of his results and \cite{Paoli2006} for a more friendly exposition of them.} The three-valued nature of Mortensen's logic, along with the number of elements in $D^{+}$ and the evaluation conditions for negation motivate the representation of Mortensen's 2, 1, 0 as three subsets of the set of classical values $\{1, 0\}$, namely $\{1\}$, $\{1, 0\}$ and $\{0\}$, respectively, leaving the remaining subset $\{ \ \}$ aside as in the two-valued relational semantics for \textbf{LP}: \begin{center} \begin{tabular}{cc|c|c|c|c} $A$ & $B$ & $\sim \! A$ & $A\wedge B$ & $A\vee B$ & $A\rightarrow_{\tiny{\textbf{E}}}B$\\ \hline $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{1\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{0\}$\\ $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{1, 0\}$ & $\{1\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$\\ $\{1, 0\}$ & $\{0\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1, 0\}$ & $\{0\}$\\ $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{0\}$ & $\{1, 0\}$ & $\{1, 0\}$\\ $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{1, 0\}$ \end{tabular} \end{center} Applying the mechanical procedure described in \cite{OmoriandSano2015} for turning truth tables employing three of the four truth values of \textbf{FDE} into Dunn conditions (i.e., pairs of positive and negative conditions in terms of containing or not containing the classical values 0 or 1), we define a relation $\sigma$, which takes formulas as its domain and the set of truth values $\{1, 0\}$ as its codomain. 
Then, the positive condition describes the cases in which $1\in\sigma(X)$, and the negative condition describes the cases in which $0\in\sigma(X)$. From the truth tables above we can infer that the conditions for the implication-free fragment of the language are standard, and that the clauses for $\rightarrow_{\tiny{\textbf{E}}}$ are as follows: \begin{itemize} \item $1\in\sigma(A\rightarrow_{\tiny{\textbf{E}}} B)$ if a and only if $1\notin A$, or $0\notin B$, or both $0\in A$ and $1\in B$ \item $0\in\sigma(A\rightarrow_{\tiny{\textbf{E}}} B)$ if and only if $1\in\sigma(A)$ or $0\in\sigma(A)$ and either $1\in\sigma(B)$ or $0\in\sigma(B)$ \end{itemize} \noindent We are now in a position to point out some of \textbf{M3V}'s main features. \begin{itemize} \item Unlike \textbf{LP}, \textbf{M3V} validates unrestricted Detachment. \item It is connexive. \item It is contradictory. As witnesses, consider $(A\wedge\sim \! A)\rightarrow_{\tiny{\textbf{E}}} A$ and $\sim \! ((A\wedge\sim \! A)\rightarrow_{\tiny{\textbf{E}}} A)$. \item All conditionals are false in \textbf{M3V}. The falsity condition for the conditional is but a sophisticated way of expressing $0\in\sigma(A\rightarrow_{\tiny{\textbf{E}}} B)$, which implies that $\models_{\tiny{\textbf{M3V}}} \sim \! (A \rightarrow_{\tiny{\textbf{E}}} B)$, for any $A$ and $B$. \item Though all conditionals are false in \textbf{M3V}, some of them are true as well. Simply consider a conditional where both antecedent and consequent are just true. The conditional is false, yet true as well. \item $\models_{\tiny{\textbf{M3V}}} \sim \! (A \rightarrow_{\tiny{\textbf{E}}} B)$ implies $\models_{\tiny{\textbf{M3V}}}\sim \! (A\rightarrow_{\tiny{\textbf{E}}} \sim \! B)$, by a simple substitution in the consequent. Due to the validity of the latter, we say that \textbf{M3V} is \emph{ultra-Abelardian}.\footnote{Claudio Pizzi has urged the connexive logic community not to multiply the principles with names of ancient philosophers. However, that plays a role in keeping a healthy logical memory. Peter Abelard held that conditionals express natures and that natures are characterized positively. For example, he believed that it would not be part of a human’s nature to not be a stone, although being an animal would be. (For details see \cite{Martin2004}.) Thus, for him, no conditional of the form $A\rightarrow\sim \! B$, where $A$ is necessarily positive ---that is, its main connective is not a negation--- and $\sim \! B$ is not a subformula of $A$, is true on pain of contradiction. Omitting the constraints on $A$ and $\sim \! B$ would lead to ultra-Abelardianism.} \item Almost obvious given the validity of $\sim \! (A\rightarrow_{\tiny{\textbf{E}}} B)$, but even more overlooked, is the fact that \textbf{M3V} validates some schemas from Abelian logic, namely the \emph{Centering} principles\footnote{Nonetheless, it does not validate the \emph{Meyer-Slaney relativity axiom (schema)}, characteristic of purely implicative Abelian logics: \noindent $\not\models_{\textbf{\tiny{M3V}}}((A\rightarrow_{\tiny{\textbf{E}}} B)\rightarrow_{\tiny{\textbf{E}}} B)\rightarrow_{\tiny{\textbf{E}}} A$ \noindent (For a countermodel, let $\sigma(A) = \{0\}$ and $\sigma(B) = \{1\}$.) The validity of $\sim \! (A\rightarrow_{\tiny{\textbf{E}}} A)$ demands moreover a comparison with Meyer and Martin's \textbf{SI$\sim$I} ---see \cite{MeyerMartin2019}---, where such schema is valid too. 
In that logic, $(C\rightarrow D)\rightarrow((A\rightarrow C)\rightarrow (A\rightarrow D))$ and $(A\rightarrow C)\rightarrow((C\rightarrow D)\rightarrow (A\rightarrow D))$, both object-language expressions of transitivity, are valid, but their negations are not. Nevertheless, since all conditionals are false in \textbf{M3V}, the negation of these forms of transitivity is valid as well.}: \noindent $\models_{\tiny{\textbf{M3V}}} \sim \! (A\rightarrow_{\tiny{\textbf{E}}} A)$ \noindent $\models_{\tiny{\textbf{M3V}}} \sim \! (A\rightarrow_{\tiny{\textbf{E}}} A)\leftrightarrow_{\tiny{\textbf{E}}}(A\rightarrow_{\tiny{\textbf{E}}} A)$ (This provides other witnesses of negation-inconsistency, namely $A\rightarrow_{\tiny{\textbf{E}}} A$ and $\sim \! (A\rightarrow_{\tiny{\textbf{E}}} A)$. \item \textbf{M3V} is not hyper-connexive. Suppose it were, and that the Converse of Boethius hold. By ultra-Abelardianism and Detachment, $A\rightarrow B$ would be valid, but it is not. (A similar argument can be run using the Converse of the Variant of Boethius and the falsity of all conditionals.) \item Francez's logics (see \cite{Francez2016}; see also \cite{Francez2019} and \cite{Francez2021}) have been the only recognized nexive logics so far. But \textbf{M3V} is nexive too, as a consequence of all negated conditionals being true. It is not hyper-nexive, though. (The proof is similar to the proof that it is not hyper-connexive.) \end{itemize} From the above, perhaps the most surprising feature is the fact that all conditionals are false in \textbf{M3V}. Indeed, one could argue that \textbf{M3V} is an interesting logic in so far as having arbitrary false conditionals, among many otherwise familiar properties, is an interesting feature for a logic to have. Nonetheless, this may require some philosophical elucidation. The first thing to be said is that Cantwell's logic for conditional negation \textbf{CN} and \textbf{M3V} are inter-definable. In particular, the \textbf{E}-conditional is the contraposable conditional defined with the conditional in \textbf{CN}; see \cite{OmoriWansing2020}. Thus, one could attempt to build upon the intuitive features of \textbf{CN} to obtain some extra-logical support for \textbf{M3V}. True, the intuitiveness of the basic notions do not transfer immediately to the derived notions, but it could be a start. We do not follow that route, though. In our view, it is not unreasonable to have a logic in which all conditionals are false. On the one hand, tradition has it that certain syllogisms that are deemed valid often lack some tacit premise. For example, from ``Every human is mortal'' infer ``I am mortal'', where premise ``I am a human'' is tacit, i.e. it is a suppressed or unstated truth or piece of information not mentioned explicitly yet being part of the argument so that the conclusion indeed follows. This kind of argument is called \emph{enthymeme} by Aristotle (\emph{Rethoric}, 1357a16-21) and the implication relation between its premises and its conclusions is called \emph{enthymematic implication} by Sylvan \cite[p. 142]{Sylvan1989BG6}. Following this line of thought, \textbf{M3V} might be considered as a logic of enthymematic implication, i.e. as a logic about conditional arguments that strictly speaking are invalid, since they always lack some antecedent, premise or background information in order to hold (i.e. in order to entail the conclusion or consequent), but which may also be accepted as valid \emph{sotto voce}, \emph{prima facie} or \emph{ceteris paribus}. 
On the other hand, connexive logic has been intimately attached to counterfactual notions from its very (contemporary) beginnings. (See \cite{Angell1962}.) This is relevant because, for example, Alan Hájek has long argued, in still unpublished but much-read work, for the idea that most counterfactuals are false. (See \cite{Hajek20XX}.) According to him, the indeterminism and indeterminacy associated with most counterfactuals entail their falsehood. Yet, counterfactual reasoning seems to play an important role in science, and ordinary speakers judge many counterfactuals that they utter to be true. Thus, \textbf{M3V} could be regarded as a (zero-order) formalization of a radical version of Hájek's ideas on the falsity of counterfactual conditionals, while also capturing the idea that some of them need to be true.\footnote{There are of course many ways to address Hájek's challenge, and many of them do not require a contradictory logic. Here we simply suggest that \textbf{M3V} can be taken as a formalization of a certain form of that debate. For another proposal in the connexive vicinity to address Hájek's challenge, see \cite{KapsnerOmori2017}.} Finally, Meyer and Martin wanted to provide a logic for Aristotle's syllogistic, which was irreflexive. In their logic \textbf{SI$\sim$I}\footnote{They do not call it by that name, though; the label simply indicates which further axiom schemas are added to the basis \textbf{S}, with `I' standing for $A\rightarrow A$, and `$\sim$I' for $\sim \! (A\rightarrow A)$.}, $A\rightarrow A$ was treated as a borderline case, both a fallacy with no valid instances (due to the irreflexivity of entailment) and a validity (because of the truth-preservation account of entailment), hence the validity of both $A\rightarrow A$ and $\sim \! (A\rightarrow A)$. One could explore the idea that implication or entailment are relations so demanding that no sentences can ever be in that relation, hence the validity of $\sim \! (A\rightarrow B)$. However, as in the case of \textbf{SI$\sim$I}, one could argue that, for theoretical simplicity (in this case, the functional approach), the truth of some instances of $A\rightarrow B$ is required as well. We know that all we have said is far from convincing. However, making a full case for the conceptual usefulness of \textbf{M3V} is beyond our aims. We have merely expressed some reasons to take this logic as more than a mathematical curiosity. \section{Connexive closed set logic} The logic that we call `closed set logic' was introduced algebraically in \cite{Mortensen1995} and subsequently studied in \cite{Mortensen2000}, \cite{Mortensen2003} and \cite{Mortensen2007}.\footnote{Although the ideas underlying it are older, going back at least to \cite{McKinseyTarski1948}. 
The first systematic treatment of that logic on its own was the proof-theoretical analysis in \cite{Goodman1981}.} We focus here on the restriction to three interpretations, \textbf{CSL3}, defined by the following tables: \begin{center} \begin{tabular}{cc|c|c|c} $A$ & $B$ & $\neg A$ & $A\wedge B$ & $A\vee B$\\ \hline $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$\\ $\{1\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1, 0\}$ & $\{1\}$\\ $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$\\ $\{1, 0\}$ & $\{1\}$ & $\{1\}$ & $\{1, 0\}$ & $\{1\}$\\ $\{1, 0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{1, 0\}$ & $\{1, 0\}$\\ $\{1, 0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{1, 0\}$\\ $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{0\}$ & $\{1, 0\}$\\ $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{0\}$ \end{tabular} \end{center} It is common wisdom that there is no conditional in \textbf{CSL3}. Consider a connective defined as follows: $$A\rightarrow B\coloneqq \neg A\vee B \ (= \neg(A\wedge \neg B))$$ \noindent This connective does not validate Detachment.\footnote{Although, in all fairness, it validates a restricted version, due to Beall \cite{Beall2011}, \cite{Beall2015} in the context of \textbf{LP}, namely, $A,~ A~\rightarrow~ B~\models_{\tiny{\textbf{CSL3}}}~ B\vee(A\wedge\neg A)$.} There are several ways to expand \textbf{CSL3} with a conditional connective that validates Detachment. In fact, 2$^4$ non-connexive conditionals could do the job; see \cite[p. 72]{CarnielliMarcos2002}. But consider the expansion \textbf{cCSL3}, which adds the \textbf{E}-conditional, one of the 2$^4$ mentioned above, to \textbf{CSL3}. Let us point out some of \textbf{cCSL3}'s main features, starting with those involving just its $\{\neg, \rightarrow_{\tiny{\textbf{E}}}\}$-fragment. \begin{itemize} \item \textbf{cCSL3} validates unrestricted Detachment. \item All conditionals are false in \textbf{cCSL3} and so $\models_{\tiny{\textbf{cCSL3}}} \neg(A\rightarrow_{\tiny{\textbf{E}}}B)$, for any $A$ and $B$. \item It is connexive. \item Since \textbf{cCSL3} is connexive and all conditionals are false in it, it follows that \textbf{cCSL3} is contradictory, just like \textbf{M3V}. As witnesses, consider $A\rightarrow_{\tiny{\textbf{E}}}A$ and $\neg(A\rightarrow_{\tiny{\textbf{E}}}A)$. \item \textbf{cCSL3} is ultra-Abelardian. \item \textbf{cCSL3} does not validate exactly the same centering principles as \textbf{M3V}. One has \noindent $\models_{\tiny{\textbf{cCSL3}}} \neg(A\rightarrow_{\tiny{\textbf{E}}}A)$ \noindent $\models_{\tiny{\textbf{cCSL3}}} (A\rightarrow_{\tiny{\textbf{E}}}A)\rightarrow_{\tiny{\textbf{E}}}\neg(A\rightarrow_{\tiny{\textbf{E}}}A)$ \noindent but also \noindent $\not\models_{\tiny{\textbf{cCSL3}}} \neg(A\rightarrow_{\tiny{\textbf{E}}}A)\rightarrow_{\tiny{\textbf{E}}}(A\rightarrow_{\tiny{\textbf{E}}}A)$ \item The above implies that \textbf{cCSL3} also lacks the Deduction Property. In fact, every logical truth in \textbf{cCSL3} entails any other logical truth, in particular, $\neg(A\rightarrow_{\tiny{\textbf{E}}}A)\models_{\tiny{\textbf{cCSL3}}}A\rightarrow_{\tiny{\textbf{E}}}A$, yet $\not\models_{\tiny{\textbf{cCSL3}}} \neg(A\rightarrow_{\tiny{\textbf{E}}}A)\rightarrow_{\tiny{\textbf{E}}}(A\rightarrow_{\tiny{\textbf{E}}}A)$. \item The invalidity of $\neg(A\rightarrow_{\tiny{\textbf{E}}}A)\rightarrow_{\tiny{\textbf{E}}}(A\rightarrow_{\tiny{\textbf{E}}}A)$ generalizes. 
Since any conditional of the form $X\rightarrow_{\tiny{\textbf{E}}}Y$ is false and any negated conditional of the form $\neg(W\rightarrow_{\tiny{\textbf{E}}}Z)$ is just true, it follows that no conditional of the form $\neg(W\rightarrow_{\tiny{\textbf{E}}}Z)\rightarrow_{\tiny{\textbf{E}}}(X\rightarrow_{\tiny{\textbf{E}}}Y)$ is valid.\footnote{And the validity of $(A\rightarrow_{\tiny{\textbf{E}}}A)\rightarrow_{\tiny{\textbf{E}}}\neg(A\rightarrow_{\tiny{\textbf{E}}}A)$ also generalizes: every conditional of the form $(X\rightarrow_{\tiny{\textbf{E}}}Y)\rightarrow_{\tiny{\textbf{E}}}\neg(W\rightarrow_{\tiny{\textbf{E}}}Z)$ is valid.} \item It is clear now that \textbf{cCSL3} and \textbf{M3V} validate different arguments. As another witness, consider $A\models_{\textbf{M3V}} \sim\sim \! A$ but $A\not\models_{\textbf{cCSL3}}\neg\neg A$. \item \textbf{cCSL3} is not hyper-connexive. The argument is as for \textbf{M3V}. \item \textbf{cCSL3} is nexive, just as \textbf{M3V}. And like \textbf{M3V}, it is not hyper-nexive. Again, the proof is an adaptation of the proof that \textbf{M3V} is not hyper-connexive. \end{itemize} A natural question at this point is whether $\neg$ is definable in \textbf{M3V}. It is not. It could be defined as $\sim \! \circ(A\rightarrow_{\tiny{\textbf{E}}}\sim \! \circ\circ \! A)$, with $\circ$ a consistency connective: \begin{center} \begin{tabular}{c|c} $A$ & $\circ A$ \\ \hline $\{1\}$ & $\{1\}$ \\ $\{1, 0\}$ & $\{0\}$\\ $\{0\}$ & $\{1\}$ \\ \end{tabular} \end{center} \noindent But such a connective is not definable in \textbf{M3V}: The connective is not definable in \textbf{CN} as per \cite{Omori2016}, and a connective is definable in \textbf{M3V} iff it is definable in \textbf{CN}, as per \cite{OmoriWansing2020}.\footnote{What about defining the \textbf{LP} negation in \textbf{cCSL3}? We do not know, but our guess is that it cannot be defined.} The list of properties above does not sufficiently highlight some features of \textbf{cCSL3}, especially around connexive principles: \begin{itemize} \item $\neg(A\rightarrow_{\tiny{\textbf{E}}}B)$ is just true in all interpretations in \textbf{cCSL3}; $\sim \! (A\rightarrow_{\tiny{\textbf{E}}}B)$ is true in all interpretations in \textbf{M3V}, but it is also false under some of them. This has consequences for the connexive principles, as we will see. \item Recall that, in \textbf{M3V}, Aristotle's Theses are true under all interpretations, although there are some interpretations under which they are also false. That is not the case in \textbf{cCSL3}: Aristotle's Theses are just true. \item Boethius' Theses are true under all interpretations in \textbf{M3V}, although they are also false under all interpretations. That is the case as well in \textbf{cCSL3}, with the difference that in this logic, the negations of Boethius' Theses are just true. \item Both $(A\wedge\sim \! A)\rightarrow_{\tiny{\textbf{E}}} A$ and $\sim \! ((A\wedge\sim \! A)\rightarrow_{\tiny{\textbf{E}}} A)$ are valid in \textbf{M3V}; they are both true and false in all interpretations. But although both $(A\wedge\neg A)\rightarrow_{\tiny{\textbf{E}}} A$ and $\neg((A\wedge\neg A)\rightarrow_{\tiny{\textbf{E}}} A)$ are valid in \textbf{cCSL3}, the latter is just true in all interpretations. \item More generally: If both $X$ and $\sim \! X$ are valid in \textbf{M3V}, then $\neg X$ is just true in \textbf{cCSL3}, unlike $\sim \! X$ in \textbf{M3V}, even if $X$ fails to be valid in \textbf{cCSL3}. (The proof is straightforward. 
For schemas exemplifying this, recall the ones for the failure of the Deduction Property.) \end{itemize} Finally, an attractive feature of the \textbf{E}-conditional should be mentioned: unlike some well-known connexive conditionals in the literature, it is stable under changes of negation. Let us make that more precise. Let a \emph{standard negation} be a unary connective $N$ satisfying that $\sigma(N A) = \{1\}$ if $\sigma(A) = \{0\}$, and $\sigma(N A) = \{0\}$ if $\sigma(A) = \{1\}$. If a standard negation $N$ is such that, in a logic \textbf{L}, $A, N A\not\models_{\tiny{\textbf{L}}} B$, we will call it a \emph{standard paraconsistent negation}. Let us define the \emph{type of standard paraconsistent negations} (TSPN) as the set of all such connectives definable according to a set of admissible evaluations. In the present context, TSPN only has two connectives: $\sim$ and $\neg$. Now, let us say that a conditional $A>B$ is \emph{connexively stable} with respect to TSPN iff \noindent $\models_{\tiny{\textbf{L}}} N_i \! (A> N_i \! A)$ \noindent $\models_{\tiny{\textbf{L}}} N_i \! (N_i \! A>A)$ \noindent $\models_{\tiny{\textbf{L}}} (A>B)>N_i \! (A> N_i \! B)$ \noindent $\models_{\tiny{\textbf{L}}} (A> N_i \! B)> N_i \! (A>B)$ \noindent and \noindent $\not\models_{\tiny{\textbf{L}}}(A>B)>(B>A)$ \noindent for each $N_i$ in TSPN. From the previous discussion, $\rightarrow_{\tiny{\textbf{E}}}$ is connexively stable with respect to TSPN. However, the main connexive conditionals in the literature are not connexively stable. The conditionals defined by the following tables validate the connexive principles only with $\sim$, but not with $\neg$: \begin{center} \begin{tabular}{c|ccc} $A\rightarrow_{W} B$ & $\{1\}$ & $\{1, 0\}$ & $\{0\}$\\ \hline $\{1\}$ & $\{1\}$ & $\{1, 0\}$ & $\{0\}$\\ $\{1, 0\}$ & $\{1\}$ & $\{1, 0\}$ & $\{0\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$\\ \end{tabular} \end{center} \begin{center} \begin{tabular}{c|ccc} $A\rightarrow_{BL} B$ & $\{1\}$ & $\{1, 0\}$ & $\{0\}$\\ \hline $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{0\}$\\ $\{1, 0\}$ & $\{1\}$ & $\{1, 0\}$ & $\{0\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$\\ \end{tabular} \end{center} \noindent They are, respectively, Wansing's conditional from \cite{Wansing2005} restricted to three admissible interpretations ---found explicitly for three interpretations in \cite{Olkhovikov2001}, \cite{Cantwell2008}, \cite{Omori2016}---, and Belikov and Loginov's conditional from \cite{BelikovandLoginov20XX}. Although Aristotle's Thesis becomes just true under all interpretations with the first conditional and $\neg$, Boethius' Thesis fails: it is just false when $A$ is just true and $B$ is both true and false. The problem with the second conditional is a sort of dual: Boethius' Thesis is valid, but Aristotle's Thesis fails when $A$ is both true and false. Note that Francez's conditional, from \cite{Francez2019}, restricted to three admissible interpretations, i.e. \begin{center} \begin{tabular}{c|ccc} $A\rightarrow_{F} B$ & $\{1\}$ & $\{1, 0\}$ & $\{0\}$\\ \hline $\{1\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{0\}$\\ $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{0\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$\\ \end{tabular} \end{center} \noindent is also stable with respect to standard paraconsistent negations. 
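The stability claims can again be checked by brute force. The sketch below (Python; the table encodings and helper names are ours) encodes the three values, the two standard paraconsistent negations and the tables for $\rightarrow_{\tiny{\textbf{E}}}$, $\rightarrow_{W}$ and $\rightarrow_{BL}$, verifies that Aristotle's and Boethius' Theses hold for $\rightarrow_{\tiny{\textbf{E}}}$ with both $\sim$ and $\neg$, and exhibits the two countermodels just mentioned.

\begin{verbatim}
# Values, designation, and the two standard paraconsistent negations.
T, B, F = frozenset({1}), frozenset({1, 0}), frozenset({0})
V = [T, B, F]
designated = lambda v: 1 in v
neg_lp = lambda a: {T: F, B: B, F: T}[a]   # ~   (LP negation)
neg_cs = lambda a: {T: F, B: T, F: T}[a]   # neg (closed set negation)

def table(rows):
    # rows[i][j] is the value of (row value -> column value), ordered T, B, F.
    return lambda a, b: rows[V.index(a)][V.index(b)]

cond_e  = table([[B, F, F], [B, B, F], [B, B, B]])   # E-conditional
cond_w  = table([[T, B, F], [T, B, F], [B, B, B]])   # Wansing-style conditional
cond_bl = table([[T, F, F], [T, B, F], [B, B, B]])   # Belikov-Loginov conditional

def aristotle(cond, neg):     # ~(A -> ~A) valid for all A?
    return all(designated(neg(cond(a, neg(a)))) for a in V)

def boethius(cond, neg):      # (A -> B) -> ~(A -> ~B) valid for all A, B?
    return all(designated(cond(cond(a, b), neg(cond(a, neg(b)))))
               for a in V for b in V)

# ->_E passes both theses with either negation ...
assert all(aristotle(cond_e, n) and boethius(cond_e, n)
           for n in (neg_lp, neg_cs))
# ... while the countermodels from the text single out the other two:
# Boethius fails for ->_W with neg when A is just true and B is both;
assert not designated(cond_w(cond_w(T, B), neg_cs(cond_w(T, neg_cs(B)))))
# Aristotle fails for ->_BL with neg when A is both true and false.
assert not designated(neg_cs(cond_bl(B, neg_cs(B))))
\end{verbatim}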
It can be easily verified that Francez's conditional does not allow for the countermodels present in the previous conditionals.\footnote{Angell-McCall's conditional, found in \cite{Angell1962} and \cite{McCall1966}, is connexive with respect to Boolean negation, but not with respect to other standard explosive negations. The definition of this notion, and the verification of the claim about the Angell-McCall conditional are left as an exercise.} \section{Connexive P$^2$} There is one more logic due partly to Mortensen, but also partly to Marcos. In \cite{Mortensen1989a}, Mortensen introduced a logic called `\textbf{C$_{0.2}$}' characterized by the following tables: \begin{center} \begin{tabular}{cc|c|c|c|c} $A$ & $B$ & $\sim \! A$ & $A\wedge_{\tiny{\textbf{P}}} B$ & $A\vee_{\tiny{\textbf{P}}} B$ & $A\rightarrow_{\tiny{\textbf{P}}}B$\\ \hline $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1\}$ & $\{ \ \}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{ \ \}$ & $\{1\}$ & $\{ \ \}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{ \ \}$ & $\{ \ \}$ & $\{ \ \}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{ \ \}$ & $\{0\}$ & $\{ \ \}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$\\ $\{0\}$ & $\{ \ \}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$\\ $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ \end{tabular} \end{center} \noindent (Mortensen originally used three values, 1, 2, and 3, with 1 being the only designated value. We are taking advantage here of the Dunn semantics, as mentioned in Section 3.) Marcos \cite{Marcos2006} suggested replacing the interpretation $\{ \ \}$ by the interpretation $\{1, 0\}$ ---or, in his original terms, making the value 2 designated along with 1---, and putting $\sim$ instead of $\neg$, to get the logic \textbf{P$^2$}, whose tables look like these: \begin{center} \begin{tabular}{cc|c|c|c|c} $A$ & $B$ & $\sim \! A$ & $A\wedge_{\tiny{\textbf{P}}} B$ & $A\vee_{\tiny{\textbf{P}}} B$ & $A\rightarrow_{\tiny{\textbf{P}}}B$\\ \hline $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{1, 0\}$ & $\{1\}$ & $\{1, 0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1, 0\}$ & $\{0\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$\\ $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ \end{tabular} \end{center} Now, to get a connexive variant of this, \textbf{cP$^2$}, replace the \textbf{P}-conditional with the \textbf{E}-conditional, i.e. obtain a logic characterized by the following tables: \begin{center} \begin{tabular}{cc|c|c|c|c} $A$ & $B$ & $\sim \! 
A$ & $A\wedge_{\tiny{\textbf{P}}} B$ & $A\vee_{\tiny{\textbf{P}}} B$ & $A\rightarrow_{\tiny{\textbf{E}}}B$\\ \hline $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{1\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{0\}$\\ $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{1, 0\}$ & $\{1\}$ & $\{1, 0\}$ & $\{1\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{1, 0\}$ & $\{1, 0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{1, 0\}$ & $\{0\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1, 0\}$\\ $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{1, 0\}$ \end{tabular} \end{center} \textbf{M3V} and \textbf{cP$^2$} coincide in the $\{\sim, \rightarrow_{\tiny{\textbf{E}}}\}$-fragment, but they differ in ways that are significant for connexive logicians. Consider the following (non-core) connexive principles: \noindent $\sim \! ((A \rightarrow_{\tiny{\textbf{E}}} B) \wedge_{\tiny{\textbf{P}}} (\sim \! A \rightarrow_{\tiny{\textbf{E}}} B))$ \ \ \ \ \ \ Aristotle's Second Thesis \noindent $\sim \! ((A \rightarrow_{\tiny{\textbf{E}}} B) \wedge_{\tiny{\textbf{P}}} (A \rightarrow_{\tiny{\textbf{E}}} \sim \! B))$ \ \ \ \ \ \ \ Abelard's Principle \noindent These are valid in \textbf{M3V}, as originally reported in \cite{EstradaRamirez2016}, but they are not in \textbf{cP$^2$}. For a countermodel, consider the case when both $A$ and $B$ are both true and false. (This will do for both principles.) For the record, these are countermodels for the principles written in the language of \textbf{cCSL3}. \paragraph{Short digression.} \noindent There is a different, more direct way of presenting \textbf{cP$^2$}, starting directly with Sette's \textbf{P$^1$} without the detour through Mortensen's \textbf{C$_{0.2}$}. Consider Sette's logic \textbf{P$^1$}, characterized by the following truth tables: \begin{center} \begin{tabular}{cc|c|c|c|c} $A$ & $B$ & $\neg A$ & $A\wedge_{\tiny{\textbf{P}}} B$ & $A\vee_{\tiny{\textbf{P}}} B$ & $A\rightarrow_{\tiny{\textbf{P}}}B$\\ \hline $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1\}$ & $\{1, 0\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{1, 0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1, 0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$\\ $\{1, 0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$\\ $\{0\}$ & $\{1\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$\\ $\{0\}$ & $\{1, 0\}$ & $\{1\}$ & $\{0\}$ & $\{1\}$ & $\{1\}$\\ $\{0\}$ & $\{0\}$ & $\{1\}$ & $\{0\}$ & $\{0\}$ & $\{1\}$ \end{tabular} \end{center} To obtain \textbf{P$^2$}, simply replace $\neg$ by $\sim$, as has already been noticed in \cite{Karpenko1999}.\footnote{He thus anticipated Marcos' formulation of \textbf{P$^2$}. However, Karpenko wrongly claims that Mortensen's original logic \textbf{C$_{0.2}$} is paraconsistent. Karpenko assumed that the value 2, in Mortensen's presentation, is designated, which it is not. Marcos correctly noticed that \textbf{P$^2$} requires a certain amount of dualization in Mortensen's \textbf{C$_{0.2}$}.} Then, to get \textbf{cP$^2$}, replace the \textbf{P}-conditional with the \textbf{E}-conditional. \section{Conclusions} In this paper, we took two interests of Mortensen, connexivity and certain brands of paraconsistency, and combined them into single logics. 
Although connexivity is at least a matter of two connectives, negation and the conditional, the \textbf{E}-conditional of Mortensen's \textbf{M3V} excels among other conditionals in validating the connexive schemas even when combined with other (paraconsistent) negations. Also, some features of \textbf{M3V} are exacerbated when a different negation is used. For example, in \textbf{M3V} all negated conditionals are true, but also sometimes false, whereas changing the negation leads to the result that negated conditionals are just true, never false. There are at least five ways in which this work can be continued. First, when presented in slightly different ways, a logic might exhibit more interesting features. As we mentioned, the logics \textbf{CN} and \textbf{M3V} are inter-definable; it would be worthwhile to take a look at the logics defined here with other conditionals definable in them. Second, one could enrich the languages here with consistency connectives to make a comparison with the \textbf{LFIs}. Third, one could try to get both negations in a single language and study the effect of that on connexive principles. Fourth, at least in the case of the \textbf{E}-conditional, Mortensen suggested coupling closed set logic with different notions of logical consequence. This would make it possible, among other things, to discriminate between schemas that are true under all interpretations and those that are just true under all interpretations. This would give rise to ``exactly true'' or ``non-falsity'' versions of the logics above, which have been studied in the vicinity of \textbf{FDE}. (See for example \cite{PietzRivieccio2013} and \cite{ShramkoZaitsevBelikov2019}, but also \cite{EstradaGonzalez2020} for a discussion closer to the present context.) Speaking of that, and finally, one can move the entire discussion on top of \textbf{FDE} to work with more admissible interpretations. That would augment the number of logical and conceptual distinctions available to work with. \bibliographystyle{eptcs}
\section*{Modulating anchoring at liquid crystal interfaces} Employing microfluidics, we fabricate water-CLC-water double emulsions, where an aqueous inner droplet is coated by a CLC shell that is, in turn, dispersed in an aqueous phase (see SI, Materials and Methods). The CLC phase is composed of a mixture of 4-cyano-4'-pentylbiphenyl (5CB), nematic at room temperature, and a chiral dopant (S)-4-cyano-4'-(2-methylbutyl)biphenyl (CB15). The pitch $p$ is determined by the amount of dopant present in the solution \cite{cb15pitch}. The thickness of the shell can be kinetically tuned via osmotic annealing \cite{alex-waltz, osswell}: increasing the molar salt concentration in one of the aqueous phases creates an osmotic pressure difference through the CLC shell, resulting in a flux of water across the permeable shell. Due to the density mismatch between the inner and middle phases, the inner droplet either floats or sinks inside the outer one, leading to a non-uniform shell thickness. This thickness heterogeneity can be reduced using heavy water (D$_{2}$O) to approximately match densities. A 1\% wt polyvinyl alcohol (PVA) in water solution is added to the inner and outer phases to stabilize both CLC-water interfaces \cite{tll-sds}. Water and PVA are also known to induce degenerate planar anchoring on the 5CB-water interface \cite{abbsurf}. The pitch axis of a CLC orients perpendicular to a boundary with degenerate \textit{planar} anchoring so as to satisfy, without frustration, both the boundary condition and the tendency of the CLC to twist. However, at a \textit{homeotropic} boundary, there is no direction in which the pitch axis can orient such that the director is perpendicular to the entire boundary --- a homeotropic boundary competes with the tendency to twist. To study these two cases and their relationship, we induce an anchoring transition at the CLC-water interfaces using two methods: by introducing high concentrations of surfactant into the water phases around the shell or by slowly heating the shell towards the CLC-isotropic transition. Our results do not depend on our method of surface modification, leading us to conclude that the observed textures and transitions are robust. In the first modality, we add sodium dodecyl sulfate (SDS) to induce an anchoring transition from planar to homeotropic (Fig.~\ref{AnchChange}A and B). This method has proven to be effective at nematic LC-water interfaces on flat surfaces \cite{abbsurf} and on shells \cite{tll-sds}. We find stripe patterns that are stable over weeks (Fig.~\ref{AnchChange}A) at concentrations of SDS well above the critical micelle concentration (cmc) ($>\!\!5$ mM SDS). We add sodium chloride (NaCl) to the surfactant solution in order to provide excess electrolytes that both lower the cmc and screen the electrostatic interactions between the surfactant head groups for denser packing of surfactants on the CLC-water interface \cite{abbsurf}. Osmotically swelling the emulsions additionally appears to change the amount of surfactant on the inner shell surface [evident from the pattern changes on the shell (SI Fig.~\ref{innerFCDs})], allowing for tuning of the surfactant density on the inner shell surface. Sample containers are sealed to prevent the evaporation of water so that surfactant concentrations can be kept constant over long time periods. 
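To give a sense of the scale set by the dopant (an illustrative aside, not part of the original protocol), the pitch of a dilute chiral mixture varies inversely with the dopant concentration, $p \approx 1/(\mathrm{HTP}\cdot c)$, where HTP is the helical twisting power of the dopant and $c$ its weight fraction. The HTP value used in the sketch below is simply back-calculated from the $2.8$\% CB15, $\sim$5 $\mu$m pitch mixture described in the Materials and Methods, so it should be read as an assumption rather than a measured quantity.

\begin{verbatim}
def pitch_um(c_dopant, htp_per_um=7.1):
    """Approximate cholesteric pitch (in microns) from the dopant weight
    fraction, assuming p ~ 1/(HTP * c). The default HTP (~7.1 per micron)
    is inferred from the 2.8% CB15 -> ~5 micron pitch mixture used here."""
    return 1.0 / (htp_per_um * c_dopant)

print(pitch_um(0.028))   # ~5.0 microns, the mixture used throughout
print(pitch_um(0.056))   # doubling the dopant roughly halves the pitch
\end{verbatim}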
Alternatively, raising the temperature of a shell with planar boundary conditions induces patterns (Fig.~\ref{AnchChange}C) similar to those seen with surfactants when the temperature reaches a few tenths of degrees Celsius below the phase transition temperature to the isotropic state. This temperature-induced anchoring transition is likely linked to the nucleation of a new interface between the PVA and the CLC phase: an isotropic-CLC interface that is known to induce weak out-of-plane molecular anchoring \cite{nem-iso}. The coexistence of a thin isotropic film with a bulk liquid crystal phase and its growth as a function of temperature has been previously reported \cite{nature-interface, yang-interface, kleman-interface, lagerwall-interface}. At the water-CLC interface, the CLC molecules interact with the amorphous, randomly-conformed PVA, creating a greater degree of disorder in the CLC near the interface than in the rest of the bulk. If the system is heated with a slow temperature ramp (0.01$^{\circ}$C/min), isotropic domains nucleate at the interfaces before the phase transition is triggered in the bulk. These domains grow and coalesce at the interfaces, replacing the water-CLC interface (planar anchoring) by an isotropic-CLC interface (out-of-plane anchoring), as schematically represented in Fig.~\ref{AnchChange}D. As the temperature increases further, the isotropic layers expand into the shell, thinning the CLC further until the entire thickness is in the isotropic phase (Fig.~\ref{AnchChange}C and SI Videos 2 \& 3). Using two independent mechanisms to change the boundary conditions allows us to cross-corroborate our results. Indeed, the coincidence of the observed patterns establishes that these structures are truly an anchoring effect and do not result from variations in elastic constants (since the surfactant studies are at constant temperature) or molecular chemistry (since the temperature modality does not require concentration changes). Under planar anchoring conditions, the CLC shell has topologically required defects (Fig.~\ref{AnchChange}A-iv and Fig.~\ref{AnchChange}C-i), as stated by the Poincar\'e-Hopf theorem \cite{poincare, hopf}. Namely, the molecules cannot be aligned over the entire surface of the sphere, yielding defects whose topological charges or winding numbers --- a measure of the rotational distortion that they impose on the director field around them --- add up to $+2$. Due to the periodicity imposed by the cholesteric pitch, the CLC can be seen as a layered system, forcing the defects in the shell to extend into the bulk as defect lines \cite{depablo-cholshells, alex-waltz, alex-defects}. Alternatively, under strong homeotropic anchoring conditions, the CLC shell has disclination lines in the bulk, away from the interface (Fig.~\ref{AnchChange}A-i, SI Fig.~\ref{StrHomeoAnc}). The CLC is forced to twist rapidly at these defect lines to satisfy the energetic tendency of the system to twist while keeping the system from violating the homeotropic boundary condition \cite{homeotdrop, zumerknot}. Between these limiting cases, we find new states with \textit{moderate} homeotropic anchoring strengths. When a shell having initially strong homeotropic anchoring (10 mM SDS, 0.1 M NaCl, and 1\% wt PVA) is placed in a solution lacking surfactant, the surfactant surface density on the interface decreases and the shell pattern changes in a sequence shown in Fig.~\ref{AnchChange}A and SI Video 1. 
We find first a half-pitch-wide, thin stripe pattern (Fig.~\ref{AnchChange}A-ii). In this state, the shell surface is tiled with double spiral domains. Then, we observe a pattern with much thicker stripes (Fig.~\ref{AnchChange}A-iii), where long stripes are modulated by shorter, perpendicular stripes. Similar patterns are seen when a shell having initially planar anchoring (Fig.~\ref{AnchChange}C-i) is subjected to a linear increase in temperature, starting a few tenths of degrees below the isotropic-CLC phase transition temperature ($\sim35.3^{\circ}$C for 5CB doped with $2.8$\% CB15), at a rate of 0.01$^{\circ}$C/min (SI Video 2). As the water-CLC interface is replaced by a water-isotropic LC interface, we find a thick stripe pattern (Fig.~\ref{AnchChange}C-ii), followed by a half-pitch-wide, thin stripe pattern (Fig.~\ref{AnchChange}C-iii). As the shell is heated further, isotropic regions confine the shell to such an extent that the CLC cannot twist and is essentially nematic (Fig.~\ref{AnchChange}C-iv, inset shows nematic-CLC boundary). Further heating induces a transition to the isotropic phase. We observe a clear dependence of the stripe patterns on the shell confinement. Being density-matched, the shell in Fig.~\ref{AnchChange}A is thick ($c\gg 1$) everywhere, while the shell in Fig.~\ref{AnchChange}C is very heterogeneous in thickness. In the latter case, the density of the CLC is greater than that of the inner aqueous droplet, causing the inner droplet to rise and thin the top region of the shell ($c \lesssim 1$). This has an effect on the observed patterns: thin stripes wrap into double-spiral regions (Fig.~\ref{AnchChange}C-iii, inset) in the thick part of the shell, as in Fig.~\ref{AnchChange}A-ii, but not in its thinnest part. Also, thicker stripes have secondary, perpendicular stripes (Fig.~\ref{AnchChange}C-ii, inset) in the thick part of the shell, as in Fig.~\ref{AnchChange}A-iii, but not in the thin part. How do the topology, the anchoring strength, and the shell thickness govern the formation of stripes and defect structures? To gain further insight, we use a $\mathbf{Q}$-tensor based, Landau-de Gennes model, where in the uniaxial limit, the components $Q_{ij}$ correspond to the director components $n_i$ by $Q_{ij} = 3S(n_i n_j - \delta _{ij} /3) / 2$, and $S$ is the Maier-Saupe order parameter \cite{lasso, z-rav, ms} (see SI, Materials and Methods). We vary the homeotropic and planar anchoring strength at the shell boundaries, along with the shell thickness, to study the effect of changing the boundary conditions on the system's metastable states. \begin{figure*}[ht!] \centerline{\includegraphics[width=1\textwidth]{SDS-PVAHeat-Comparison-GD.eps}} \caption{\label{AnchChange} \textbf{Stripe patterns emerge on a CLC shell as the anchoring is changed from homeotropic to planar.} Throughout, the pitch is 5 $\mu$m (2.8\% CB15). A) Time series of a CLC shell losing surfactant from its interface from i) to iv) (See SI Video 1). As the surfactant surface concentration is diluted from 10~mM SDS with time, the homeotropic anchoring strength weakens. Four distinct patterns, from strongest to weakest homeotropic anchoring, are identifiable: subsurface defect lines (i), double spiral domains (ii), thick planar stripes (iii), and planar anchoring (iv). B) Schematic of surfactants on the CLC-water interface. The surfactant tails cause the CLC molecules to prefer perpendicular alignment to the interface. 
C) Time series of a heated CLC shell, with temperature increasing from i) to iv) (See SI Video 2). A few tenths of degrees below the clearing point, an isotropic layer nucleates at the CLC-water interface, inducing weak out-of-plane anchoring of the remaining CLC. As temperature is increased, the isotropic layer grows inwards, confining the remaining CLC until the entire thickness turns isotropic. Four patterns are also identified: planar anchoring (i), thick planar stripes (ii), thin stripes (iii), and (iv) complete untwisting. D) Schematic of the isotropic layer growing between the bulk CLC and the PVA-stabilized interface between water and liquid crystal. The radius (R) and thickness (h) of the shells are R $\sim$ 150 $\mu$m and h $\sim$ 50 $\mu$m in A) and R $\sim$ 75 $\mu$m and h $\sim$ 15 $\mu$m in C).} \end{figure*} \section*{Thick stripes - the ``bent'' state} Thick stripes, stripes larger than the half-pitch, occur when the cholesteric layers are concentric in the bulk (with the pitch axis oriented in the radial direction) and distort only close to the interfaces to accommodate the boundary conditions: out-of-plane anchoring at the outer shell interface relaxes via a cholesteric texture seen in Fig.~\ref{UState}A-ii and Fig.~\ref{UState}A-iii, over an anchoring penetration depth governed by the pitch \cite{pdepth}. The layers beyond the reach of this anchoring length have planar alignment, with a pitch axis oriented perpendicular to the interface. Since the outermost layer has tilted anchoring cues from the interface, the pitch must reorient near the surface, producing a bending in the cholesteric layer closest to the boundary. Simulations of a $0.81 \times0.81\times0.54$ $\mu$m CLC slab with a 0.28 $\mu$m pitch, with moderate homeotropic anchoring on the top and bottom boundary ($W_0 \approx 8 \times 10^{-5}~ \mathrm{J}/\mathrm{m}^2$) and periodic boundary conditions in the horizontal directions reveal the director field structure of the thick stripes. This structure is depicted in Fig.~\ref{UState}D, which shows a color map of the vertical component $\mathbf{n} \cdot \hat{\mathbf{z}}$ of the director. The CLC adapts to the anchoring conditions and the rigidity of the boundary by bending the outermost cholesteric layer, as predicted by Saupe \cite{saupe}, satisfying the homeotropic anchoring near the interface and the planar anchoring from the bulk layers. These bent layers form pitch-wide stripes, consistent with experiments. In bending the layers, the system introduces pairs of defect lines that are singular in the pitch axis but smooth in the nematic director, the so-called $\mathbf{\lambda}^{+}$ and $\mathbf{\lambda}^{-}$ defect lines \cite{pdepth, saupe, Sec2012}, as illustrated in Fig.~\ref{UState}D. A free interface makes the stripe periodicity more pronounced (see Fig.~\ref{Coexistence}). The stripe pattern evolves with the length ratio $c$, as shown in Fig.~\ref{UState}A, where a uniformly thin CLC shell with $c \sim 1$ undergoes a temperature-induced anchoring transition. A video of this process, showing the precise temperatures of each stage, is in SI Video 3. The shell has planar anchoring defects (Fig.~\ref{UState}A-i) initially, until it exhibits stripes a few times wider than the pitch (Fig.~\ref{UState}A-ii). As the shell is heated further, the isotropic domains at the interface grow into the shell, thinning the CLC and continuously decreasing the stripe width (Fig.~\ref{UState}B) until it is equal to the pitch (Fig.~\ref{UState}A-iii). 
At that point, as the system temperature is increased slightly more, the stripe width discontinuously jumps to a half-pitch (Fig.~\ref{UState}A-iv, Fig.~\ref{UState}B). A plot of the grayscale values of the stripes across the front of the periodicity change further corroborates the sudden ``doubling'' of stripes (Fig.~\ref{UState}C). Although $p$ slightly increases with temperature (on the order of 0.01 $\mu$m/$^{\circ}$C), this effect is too small to contribute to the evolution of the patterns \cite{pakula2015}. The abrupt ``doubling'' of stripes is a clear manifestation of the unbending of the cholesteric layers. As the CLC is confined with increasing temperature, the CLC loses the bulk, concentric layers (blue layers in Fig.~\ref{UState}D for the slab geometry) that intervene between the bent layers at the interface. As the thickness decreases, the surface anchoring dominates the energetics, rearranging the director field to eliminate the $\mathbf{\lambda}^{+}$ and $\mathbf{\lambda}^{-}$ defect lines. When the slab thickness is about the size of the bent layer (about a half-pitch), the slab takes on a uniform cholesteric texture with the pitch axis parallel to the bounding surfaces, shown in Fig.~\ref{UState}E. Varying the confinement length scale can also trigger the growth of secondary patterns. In particular, $c$ governs the presence of perpendicular sub-stripes on the shells. This effect is apparent in Fig.~\ref{AnchChange}C-ii, on the sides of the shell, where the thickness of the shell increases, and is evident in more homogeneously thick shells (Fig.~\ref{AnchChange}A-iii). The stripes may form as a result of an instability in thick shells, with a periodicity possibly set by a similar mechanism seen in hybrid nematic pancakes \cite{oksana}. The anchoring conditions between the shells and the nematic pancakes are similar: perpendicular stripes only appear when there are enough cholesteric layers to reinforce planar anchoring, providing overall hybrid anchoring for the outermost layer. \begin{figure*}[ht!] \centerline{\includegraphics[width=1\textwidth]{BentUnbent.eps}} \caption{\label{UState} \textbf{Unbending transition under increasing confinement.} A) Micrographs show a CLC shell with initially planar anchoring (i) heated towards the isotropic phase transition temperature with concomitant thinning of the cholesteric shell and introduction of weak out-of-plane anchoring. As the isotropic layer expands, thick planar stripes form initially (ii) and continuously decrease to the size of the pitch (iii), at which point the stripe narrows discontinuously to the size of a half-pitch (iv), until the entire shell transitions to the isotropic phase (see SI Video 3). The pitch is 5 $\mu$m. The radius (R) and thickness (h) of the shell are R $\sim$ 200 $\mu$m and h $\sim$ 1 $\mu$m. B) Stripe periodicity over pitch versus temperature difference to the phase transition temperature. A continuous decrease followed by an abrupt halving is clearly seen. C) Grayscale values of stripes in (A-iv) before (red line, 1) and after (red line, 2) the front of the unbending transition show the halving of the stripe periodicity. D) Simulation of a $0.81 \times0.81\times0.54$~$\mu$m CLC slab with a 0.28 $\mu$m pitch, moderate homeotropic anchoring ($W_0 \approx 8 \times 10^{-5}~ \mathrm{J}/\mathrm{m}^2$) on the top and bottom, and periodic boundary conditions in the horizontal directions. The outermost cholesteric layers are bent, yielding a stripe periodicity greater than the pitch. 
Alternating $\lambda^{+}$ and $\lambda^{-}$ pitch defect lines are labeled. E) When the slab thickness decreases from 0.54 to 0.13 $\mu$m ($\sim$half-pitch), the CLC unbends and runs parallel to the interface, creating half-pitch-wide stripes. } \end{figure*} \section*{Double-spiraled thin stripes - the cholesteric focal conic domain} Thin stripes occur when the pitch axis is in the tangent plane of the interface, producing half-pitch-wide stripes. If the shells are thick and the homeotropic anchoring energy is sufficiently high, the thin stripes wrap into double spiral domains, as shown in Fig.~\ref{ThinStripes}. Double spirals are signatures of focal conic domains (FCDs), which are regions where the cholesteric layers bend around two focal lines \cite{pieranski, boulig-spherulites}. In this state, all the cholesteric layers across the shell thickness bend to become orthogonal to the boundaries, not merely the boundary layer (see schematics in SI Fig.~\ref{FCDDiagram} and Fig.~\ref{ThinStripes}D). The handedness of the double spirals appearing in the shells coincides with the handedness of the CLC. The double spiral structure has been shown to arise from a locally curved surface cutting out the bent layers of the CLC \cite{boulig-spherulites, pieranski, pdepth}. In order for double spirals to be energetically favorable, the CLC shell must be thick enough such that the bulk elastic energy can overcome surface tension and \textit{deform} the shape of the interface into hills and valleys, with each hill accommodating a double spiral \cite{pieranski, pdepth, mitov}. This is evident in SI Fig.~\ref{BentStateFCD}, which shows a CLC shell with a top, thin region exhibiting the ``bent''-state discussed above (SI Fig.~\ref{BentStateFCD}, left) and a bottom, thicker region with double spiral domains (SI Fig.~\ref{BentStateFCD}, right). Moderation of the anchoring strength and lowering the surface tension of the CLC interface by the introduction of either surfactant or isotropic phase facilitate the controllable formation of these patterns that are commonly seen in nature, such as on the iridescent shell of jeweled beetles \cite{beetle-science, beetle-mitov}. The surface corrugations at moderate anchoring strengths are related to the variability of the molecular orientation at the interface that induces gradients in the surface tension, leading to interface undulations, as seen in our simulations of the CLC-isotropic interface in Fig.~\ref{Coexistence} and in previous work \cite{cholisointerface}. The strong confinement and spherical curvature imposed by the shell impact the organization and shape of the FCDs, leading to interesting effects. Shells with hybrid anchoring (7 mM of SDS only in the outer phase) have double spirals that pack in hexagons and pentagons, as shown in Fig.~\ref{ThinStripes}A. Typically, cholesteric FCDs are hexagonal in order to regularly tile planar surfaces. However, hexagons cannot pack efficiently on spheres, forcing the formation of pentagonal FCDs: the combination of hexagons and pentagons provides a full tiling of the shell surface, as seen also in fullerenes and soccer balls. Confinement also entails some anchoring violation on the inner surface. The CLC shells, with a thickness of approximately 20-30 $\mu$m and a pitch size of 5 $\mu$m, have double spiral domains with a radius of around 33 half-pitch-wide stripes, giving a total radius of around 16 cholesteric layers (about 80 $\mu$m). 
These layers cannot fit into the shell thickness without creating energetically costly layer distortions -- the system compromises by violating the inner shell planar anchoring, as we can see from the stripe pattern on the inner shell surface (Fig.~\ref{ThinStripes}A, right). On the other hand, in emulsions with matching homeotropic anchoring (7 mM of SDS in the inner and outer phases), the FCDs form from both the inner and outer shell surfaces, as shown in Fig.~\ref{ThinStripes}B. The FCDs formed on the inner shell surface are mirror images of those appearing on the outer one (they show opposite handedness when observed under the microscope), but their centers are shifted. As a result, the FCDs have more varied polygonal shapes since the focal-line intersections on one surface correspond to the centers of polygons on the opposite surface, as demonstrated in Fig.~\ref{ThinStripes}C, leading to the classic staggered packing of polygonal domain textures \cite{boulig-pt2-polyg}. Blue lines represent the edges of FCDs on the outer surface (Fig.~\ref{ThinStripes}B, left), while purple lines represent the edges of FCDs on the inner surface (Fig.~\ref{ThinStripes}B, right). We observe that greater homeotropic anchoring strength is needed to induce FCDs from the inside of CLC shells than from the outside for shells with $c\gg 1$ because the creation of hills is less costly at the outer surface. This is also an effect of curvature: because the FCDs formed on the outer and inner shell surfaces are mirror images, the hills formed on the outer surface have the same type of curvature as that of the surface that they deform, while those formed on the inner surface have the opposite curvature, and are therefore less energetically favorable. Adding SDS to the inner water phase alone, no matter the concentration, fails to produce double spiral domains on the inner shell surface. To induce FCDs to form \textit{only} on the inner shell surface, an extreme salt concentration (1 M NaCl) must also be added to the inner water phase, as shown in SI Fig.~\ref{innerFCDs}. The high salt concentration allows the surfactants to pack more densely on the surface, effectively increasing the homeotropic anchoring energy of the system. Similar patterns are observed in a thick CLC shell with the temperature anchoring transition. When the thin-stripe pattern emerges, hexagonal FCD packing is observed before the polygonal texture appears (SI Video 2). At the beginning of the temperature ramp, the isotropic layer nucleating at the shell interfaces is very thin, and the FCDs form just on the outer shell surface where it is easier to deform the interface. As the temperature increases, the isotropic region increases in thickness, so that the CLC bulk elastic energy can overcome the now weaker isotropic-CLC interfacial tension on both sides of the shell, allowing for a polygonal texture to form. We probe the director structure of FCDs via simulation. In Fig.~\ref{ThinStripes}D, the director structure of a system with a 2.2~$\mu$m diameter, a 0.7~$\mu$m thickness, and a $0.42$~$\mu$m pitch ($c\approx 1.7$) is plotted with a color map of the radial component $\mathbf{n} \cdot \hat{\mathbf{r}}$ of the director. Note that the length scales probed by the simulations are limited by the mesh spacing of the simulation grid, although we do not expect significant differences in larger shell simulations, especially in regions away from the core of the pitch defects (see SI, Materials and Methods for more information). 
For moderate homeotropic anchoring conditions ($W_0 \approx 1.5 \times 10^{-5}~ \mathrm{J}/\mathrm{m}^2$), the CLC twists along the surface of the sphere, producing two FCDs at the poles of the sphere. The spherical topology of the system dictates a minimum of two focal conic domains, a state found in experiments when the CLC shells are left to anneal for about one month. The surface stripes of the FCDs end at double spirals (Fig.~\ref{ThinStripes}D, top view), as they do in experiments. The cross section reveals that the cholesteric layers underneath the double spirals bend upwards. The system does \textit{not} have any director defects - only \textit{pitch} defects, areas with an ill-defined twist direction. The pitch defect of the FCD is evident from the cross section in Fig.~\ref{ThinStripes}D, where alternating red and blue values of $\mathbf{n} \cdot \hat{\mathbf{r}}$ along the spherical surface (indicating a twist axis along the sphere surface) collide with a region at the poles where the twist axis is pointing radially (the up-down direction in the figure), indicating a discontinuity in the pitch axis. The pitch defect appears to consist of two intertwined $\lambda$-lines, similarly to the defect core discussed in Ref.~\cite{pieranski}. Note that unlike a typical focal conic domain at a single free interface where the twisted $\lambda$ lines connect and terminate in the bulk \cite{pieranski}, the lines in the shell in Fig.~\ref{ThinStripes}D appear to span the entire shell thickness, generating a double spiral at the inner shell surface, as well. \begin{figure*}[ht!] \centerline{\includegraphics[width=0.6\textwidth]{FocalConicDomains-Trimmed.eps}} \caption{\label{ThinStripes} \textbf{Ordering focal conic domains on a sphere.} CLC shells with 7 mM SDS in the outer phase, without (A) and with (B) 7 mM SDS in the inner phase have thin stripes that form double spiral domains. Left micrographs focus on the outer surface of the shell, right micrographs focus on the inner surface. The pitch is 5 $\mu$m. C) Blue lines represent the edges of FCDs on the outer surface of (B), while purple lines represent the edges of FCDs on the inner surface. The cholesteric polygonal texture comes from the staggered packing of FCDs. D) A simulated CLC shell (2.2 $\mu$m diameter, 0.70 $\mu$m thickness, and 0.42~$\mu$m pitch) with matching homeotropic anchoring conditions ($W_0 \approx 1.5 \times 10^{-5}~ \mathrm{J}/\mathrm{m}^2$) on the inner and outer surfaces. The top view shows the double spiral pattern of the outer shell surface, while the cross section additionally shows the patterning of the inner shell surface. The shell has no defect in $\mathbf{n}$, only two defects in the pitch axis, located underneath the double spirals.} \end{figure*} \section*{Transition back to planar from homeotropic equilibrium} A CLC shell with multiple double spiral domains is in a metastable state: after about one month, the FCDs coalesce with one another and only the two, topologically required FCDs remain at opposite poles on the sphere, similar to simulation results in Fig.~\ref{ThinStripes}D. However, unlike simulation results, the majority of shells with only two FCDs have double spirals of \textit{opposite} handedness. This is apparent in Fig.~\ref{ThinStripes}A when comparing the handedness of double spirals of the right and left panels, where the right panel is focused \textit{through} the shell. 
Over long time periods, FCDs are formed on both sides of the shell because of the solubility of hydrocarbon surfactants in 5CB. Since FCDs of a given cholesteric all have double spirals of the same handedness, one double spiral must have formed from the inner surface in order to have the opposite handedness on the outer surface. The ubiquity of shells with oppositely winding FCDs suggests that it is energetically more favorable for FCDs formed on the same side of the shell to coalesce than it is for FCDs on opposite sides. The schematic in Fig.~\ref{EqThinStripes}B depicts the formation of \textit{single} spirals, for the sake of simplicity, on a sphere by taking a flexible line connecting two poles and moving the mid-point of the line around the equator until spirals are formed at the poles. The spirals that result are of opposite handedness (Fig.~\ref{EqThinStripes}B, right) and there must be a defect where the stripe wrapping direction switches handedness. In experiments, this defect is a stripe dislocation near the shell equator, an area in which extra stripes are nucleated, disturbing the periodic stripe ordering. The stripe dislocation is circled in red in the right panel of Fig.~\ref{EqThinStripes}A. In another shell, the stripe dislocation is visible from a side-view (Fig.~\ref{EqThinStripes}C, inset). We probe the topological defects in CLCs by \textit{transitioning} this shell's boundary conditions from homeotropic to planar to change the topology of the CLC bulk. Cholesteric topological defects are profoundly different from those in the nematic phase. Though an analogy is commonly drawn between cholesteric ``layers'' and smectics, this is known to be, at best, a crude approximation to the actual topology of the cholesteric phase \cite{geomchol, pieranski}. To illustrate this difference between defects in nematic shells and cholesteric shells, let us consider a nematic shell with planar anchoring on both boundaries. The Poincar\'e-Hopf theorem requires that on each boundary the nematic has a total defect charge of $+2$. If the shell is thin enough that we can approximate it as a single surface, then energetic considerations would suggest that the charge breaks up into four $+1/2$ disclinations \cite{tll-nature}. However, as the shell thickens, point disclinations on the inner and outer surface will be connected by a bulk defect line (recall that in the nematic, bulk defect lines are characterized by a $\mathbb{Z}_2 = \pi_1(\mathbb{R}P^2)$ charge only). At some point, the energy will be lowered by combining the bulk disclination lines in pairs, allowing them to ``escape into the third dimension,'' leaving behind charge $+1$ disclinations on the surfaces \cite{vv-nemshell}. We can alternatively consider homeotropic surface anchoring: in the case of a homeotropic nematic shell, the final low energy state has no defects {\sl whatsoever}. If we were considering a spherical droplet, there would be a radial defect or hedgehog in the center but, because we are considering a shell, the defect is virtual. Now let us consider our experiments where we \textit{transition} from the original planar state of a nematic shell to the final homeotropic state. In this case, defects in the bulk coalesce and \textit{escape} as the topology of the system changes \cite{tll-sds}. The cholesteric, however, {\sl does not allow} these waltzes. In addition to the director, a cholesteric must have a pitch axis perpendicular to the director, together forming a triad at each point in space. 
Triads do not escape. For instance, if the director deforms to remove the winding of a $+1$ disclination line, the pitch axis will now wind --- the nematic line defect becomes a pitch line defect. We observe the reverse, a pitch defect becoming a nematic defect, when we transition the experimentally equilibrated CLC shell from moderate homeotropic to planar anchoring. \begin{figure*}[ht!] \centerline{\includegraphics[width=1\textwidth]{EquilibriumThinStripes-Edited.eps}} \caption{\label{EqThinStripes} \textbf{Equilibrium thin stripes.} A) A CLC shell with 7 mM SDS in the water phases equilibrates after one month, leaving only two, topologically required FCDs. Inset shows a side view of a FCD. B) Schematic showing how two spirals can form on a sphere from a line connecting the two poles. For two spirals of opposite handedness, a point (black) must exist where the handedness is not defined, playing the role of the stripe dislocation in the right of A) and in the red inset of C). C) The two FCDs in an equilibrated CLC shell have double spirals of opposite handedness, requiring a stripe dislocation (red inset). When the surfactant is washed away, the stripes rotate as they become wider and unwind, evident from the opposite rotation of FCD poles (blue arrows) with respect to the stripe dislocation (red arrows). The stripes widen first at the poles (i-ii) until an instability occurs (iii-iv). The stripe dislocation becomes the planar defect. D) Schematic of cholesteric layers in a shell for planar, hybrid, and homeotropic anchoring. Blue boxes highlight that layers do not rearrange much to accommodate the anchoring change near a FCD. The opposite is true away from a FCD, highlighted by red boxes.} \end{figure*} In Fig.~\ref{EqThinStripes}C, this anchoring transition reveals stripe dynamics that are associated with the pitch axis reorienting on a sphere --- from parallel to the shell surface to radial. When the surfactant is washed away from equilibrated CLC shells, the stripe pattern rotates as a result of the stripes becoming thicker and unwinding. In Fig.~\ref{EqThinStripes}C, FCD poles rotate with the opposite handedness (blue arrows) compared to the stripe dislocation (red arrows), matching the unwinding dynamics of the simplified schematic in Fig.~\ref{EqThinStripes}B --- as the stripes become wider, the single spiral will rotate with a handedness opposite that of the equatorial defect. The stripes widen sharply at the poles first, indicative of the pitch axis tilting away from parallel, causing the outermost cholesteric layers to bend as they do in the ``bent'' state (i-ii). Evidently, the pitch axis reorientation is energetically more favorable near the pitch defects of the FCDs. While the stripes unwind, more layers bend and widen at the poles until the cholesteric layers are planar and concentric, apparent from the lack of stripes. During this process, an instability occurs on the thin, top portion of the shell (iii-iv) and stripes at a set distance away from the pole develop perpendicular sub-stripes. Near the poles where the stripes initially widen the most, the cholesteric layers are strained and start to undulate perpendicular to the stripe direction. This way of relieving the twist energy is reminiscent of the Helfrich-Hurault instability of a cholesteric frustrated by a magnetic field \cite{helfhurault}. The undulated areas widen with the rest of the stripes as the anchoring transition continues, and a defect comes into view (v). 
As the anchoring evolves from homeotropic to planar, new defects must form: concentric layers of cholesteric topologically require a total defect charge of +2. We observe that these defects form precisely where the dislocation was located. The planar defect moves closer to the thinner region of the shell to reduce the elastic distortion. Videos of this process are in SI Videos 4-6. The pitch defects of the shell adapt to the anchoring transition by becoming sites where the boundary conditions change the most. The pitch defects of FCDs accommodate stripe width changes first, while other pitch defects, such as the stripe dislocation, become the topologically required planar defects. Fig.~\ref{EqThinStripes}D illustrates that if a defect were to form on the shell with one or both surfaces having homeotropic anchoring, the defect is more energetically likely to form near the equator, away from the double spiral poles. The textures in the blue boxes near a double spiral demonstrate that the layers need not drastically change their orientation to accommodate the anchoring change. More rearrangements are needed away from a double spiral, as highlighted by red boxes. Indeed, in the hybrid schematic in Fig.~\ref{EqThinStripes}D, the red box encompasses a pitch defect, while in the homeotropic schematic, the red box features extreme changes in director orientation compared to the planar schematic. The great degree of rearrangement needed to adapt to the anchoring transition at the equator makes a defect at the equator more probable. Energetic defects not required by the topology can become topologically required by dynamically changing the system's boundary conditions. This process further demonstrates that the direction of the pitch axis depends upon the nematic director itself. The topology of the cholesteric is not just a recapitulation of the biaxial nematic. Indeed, the theory of cholesteric defects is still not completely formulated. This transition of the equilibrated CLC shell from moderate homeotropic to planar anchoring further sheds light on the geometry and topology of CLCs. We have demonstrated the ability to reproducibly control defect textures and patterns in CLC shells by tuning confinement and the out-of-plane anchoring strength via two methods: by varying the chemistry of the water phases and the shell interfaces and by slowly increasing the temperature of the system. We control the anchoring strength to be \textit{moderate}, allowing the CLC to twist \textit{at} the surface, better mimicking structures seen in biological systems. This corrugated surface additionally lends itself to complex assemblies at the CLC-water interface. Numerical work further corroborates our description of the nuanced complexion of stripes, defects, and topology. This comprehensive study of cholesteric textures through anchoring transitions and geometric confinement and the topologically-constrained pathways from one defect configuration to another lays the groundwork for designing LC defects as templates for nanomaterials and deepens our understanding of the cholesteric phase. We thank D.A. Beller, M. Benzaquen, C. Blanc, S. \v{C}opar, O. Dauchot, E. Lacaze, F. Livolant, and S. \v{Z}umer, for fruitful discussions and support. This work was supported by NSF MRSEC Grant DMR1120901, by ANR Grant 13-JS08-0006-01, and by Institut Pierre-Gilles de Gennes, Program ANR-10-IDEX 0001-02 PSL and ANR-10-EQPX-31. M.O.L. 
acknowledges support from the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Grant No. DE-FG02-05ER46199, and from the Simons Foundation for the collaboration ``Cracking the Glass Problem'' (Grant No. 454945). R.D.K. was partially supported by a Simons Investigator grant from the Simons Foundation. \newpage \renewcommand\thefigure{S\arabic{figure}} \setcounter{figure}{0} \section*{Supplementary Information} \section*{Materials and Methods} \paragraph*{Reagents, Microfluidics and Optical Characterization} For the cholesteric liquid crystal (CLC), we use 5CB (4-cyano-4'-pentylbiphenyl, Kingston Chemicals Limited and Sython Chemicals) doped with CB15 ((S)-4-cyano-4-(2-methylbutyl)biphenyl, EMD Performance Materials and Sython Chemicals) for a right-handed cholesteric pitch. The pitch of the CLC is determined with a Grandjean-Cano wedge cell \cite{GJwedge, Cwedge}. 2.8\% CB15 gives a pitch of $\sim$ 5$\mu$m. CLC shells are produced using glass capillary microfluidic devices, similar to those of previous works \cite{ufluidics}. Two different geometries for devices were used, both yielding similar shells. For surfactant experiments, we utilized three nested, coaxial capillaries, as shown in SI Fig.~\ref{ExpSetup}A. The water phases have 1\% wt PVA (polyvinyl alcohol, Sigma-Aldrich, 87-89\% hydrolyzed, average $M_{w} = 13-23$ kg/mol) to stabilize the emulsions. For temperature experiments, the geometry of the device consists in two tapered cylindrical glass capillaries facing opposite directions, fitted in a square capillary, as shown in SI Fig.~\ref{ExpSetup}B. The inner water phase has 1\% wt PVA (Sigma-Aldrich, 87-89\% hydrolyzed, average $M_{w} = 13-23$ kg/mol), while the outer water phase has 1\% wt PVA and 10\% wt glycerol (VWR Chemicals), in order to increase its viscosity. Glycerol cannot be used for surfactant transition experiments, as glycerol affects the surface activity of the hydrocarbon surfactant. For the surfactant anchoring transition, after the double emulsions are collected, they are left to stand for a few hours to allow them to equilibrate and settle to the bottom of the vial. To induce homeotropic anchoring on the outer surface of the CLC shell, the emulsions are then pipetted into vials containing aqueous solutions of 1\% wt PVA, at least 0.1 M NaCl (sodium chloride, Fisher Scientific), and varying concentrations of SDS (sodium dodecyl sulfate, Sigma-Aldrich), in the range 0-10 mM (SI Fig.~\ref{ExpSetup}C). The vial is gently shaken and left for 10 minutes before pipetting and sealing the drops into a viewing chamber (SI Fig.~\ref{ExpSetup}D). An upright microscope in transmission mode fitted with crossed polarizers (Zeiss AxioImager M1m) and a high-resolution color camera (Zeiss AxioCam HRc) is used to take polarized micrographs. Samples are viewed over many days and weeks because the CLC relaxation time is long --- the CLC shells need many weeks to reach their equilibrium state. For emulsions observed over long time periods, heavy water (D$_{2}$O, Sigma-Aldrich) is used for approximate density matching with the CLC to prevent one side of the shell from becoming too thin. The role of NaCl is to increase the interfacial density of the surfactant \cite{abbsurf} and to decrease the critical micelle concentration (cmc). The cmc of SDS at 25 $^{\circ}$C in a 0.1 M NaCl solution is $\sim$ 1.47 mM \cite{sds-cmc}. The cmc of SDS decreases with increasing salt concentrations. 
No effect of the hydrolysis of SDS is seen on the shell pattern when comparing newly prepared SDS solutions and older solutions. A higher molar concentration of NaCl in the inner phase can additionally be used to cause the inner drop to osmotically swell with time \cite{osswell}, facilitating the study of CLC shells of varying thicknesses. The amount of water added to the inner phase after osmotically swelling the shell (determined from the volume and shell thickness) is not enough to dilute the inner salt concentration to match that of the outer water phase. Thus, we hypothesize that salt must move through the shell during the swelling process. Typically, 1 M NaCl is added to the inner water phase in experiments for osmotic swelling. For the temperature anchoring transition, temperature control is achieved using a mK1000 temperature controller and a TS62 thermal stage (Instec). In all experiments, the shells undergo a temperature ramp of 0.01$^{\circ}$C/min spanning several hours. Observation is conducted under an upright polarizing microscope (Nikon Ni-U) equipped with a DSLR camera (Nikon D300s). \begin{figure*} \centering \includegraphics[width=1\textwidth]{ExperimentSetupGD.eps} \caption{ \textbf{Fabrication of CLC shells.} Cholesteric liquid crystal (CLC) double emulsions are produced using either a microfluidic device with two nested coaxial capillaries, in the case of surfactant experiments (A), or a microfluidic device with two tapered capillaries facing opposite directions, for temperature experiments (B). The inner and outer phases are water with 1\% wt PVA. The middle phase is CLC. The shells are collected into a vial containing water and 1\% wt PVA (C). For surfactant experiments, the shells are then pipetted into a vial containing water, 1\% wt PVA, 0.1 M NaCl, and varying concentrations of SDS. After about 10 minutes, the CLC shells are pipetted into a viewing chamber that is then sealed to prevent evaporation (D).} \label{ExpSetup} \end{figure*} \paragraph*{Numerical Methods} To simulate the cholesteric droplets, a Landau-de Gennes free energy \cite{z-rav} was minimized to find the symmetric, traceless rank-2 tensor field $\mathbf{Q}(\mathbf{x})$ describing the orientation of the liquid crystal on a cubic lattice of sites $\mathbf{x}$. In the uniaxial limit, the director direction $\mathbf{n}$ (expressed as a vector) is related to $\mathbf{Q}$ via $\mathbf{Q}=(3S/2)\left[\mathbf{n} \otimes \mathbf{n}-\mathbf{I}/3\right]$, $\otimes$ is a dyadic product (i.e., $[\mathbf{n} \otimes \mathbf{n}]_{ij} \equiv n_i n_j$) and $\mathbf{I}$ is the identity tensor. The bulk free energy is given by \begin{equation} f_{\mathrm{bulk}}= \frac{A}{2} \Tr \mathbf{Q}^2 + \frac{B}{3} \Tr \mathbf{Q}^3+ \frac{C}{4}( \Tr \mathbf{Q}^2)^2, \end{equation} where $\Tr$ is the trace. It can be shown that a uniaxial $\mathbf{Q}$-tensor minimizes this free energy with a value of $S=S_0\equiv(-B+\sqrt{B^2-24AC})/6C$. The gradient contribution to the free energy reads \begin{equation} f_{\mathrm{grad}}= \frac{L_1}{2} (\nabla \times \mathbf{Q}+2q_0 \mathbf{Q})^2 + \frac{L_2}{2} (\nabla \cdot \mathbf{Q})^2+f_{24}, \end{equation} where $q_0$ is $2 \pi/p$ for a cholesteric pitch $p$. We also have a saddle-splay free energy density $f_{24}$ which is written in terms of the $\bm{Q}$ components $Q_{ij}$ as $f_{24} \equiv L_{24}(\partial_i Q_{jk}\partial_k Q_{ij}-\partial_i Q_{ij} \partial_k Q_{jk})/2$, where we sum over all indices $i$,$j$, and $k$. 
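For readers who wish to check these definitions numerically, the uniaxial $\mathbf{Q}$-tensor construction and the bulk free energy above can be reproduced in a few lines of Python. The sketch below is only an illustration of the formulas (it is not the ALGLIB-based minimization code used for the results in this work); it adopts the one-constant parameter values $A=-1$, $B=-12.33$, $C=10.06$ quoted in the next paragraph and verifies that the closed-form $S_0$ indeed minimizes $f_{\mathrm{bulk}}$.
\begin{verbatim}
import numpy as np

# Uniaxial Q-tensor from a unit director n and scalar order parameter S:
#   Q = (3S/2) * (outer(n, n) - I/3)
def uniaxial_Q(n, S):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return 1.5 * S * (np.outer(n, n) - np.eye(3) / 3.0)

# Bulk free energy density:
#   f_bulk = (A/2) Tr Q^2 + (B/3) Tr Q^3 + (C/4) (Tr Q^2)^2
def f_bulk(Q, A, B, C):
    Q2 = Q @ Q
    trQ2 = np.trace(Q2)
    trQ3 = np.trace(Q2 @ Q)
    return 0.5 * A * trQ2 + (B / 3.0) * trQ3 + 0.25 * C * trQ2**2

A, B, C = -1.0, -12.33, 10.06                       # one-constant parameter set (A rescaled)
S0 = (-B + np.sqrt(B**2 - 24 * A * C)) / (6 * C)    # closed-form minimizer

# Numerical check: scan S and confirm the minimum of f_bulk sits at S0 ~ 0.533.
S_grid = np.linspace(0.0, 1.0, 2001)
f_grid = [f_bulk(uniaxial_Q([0, 0, 1], S), A, B, C) for S in S_grid]
print("S0 (closed form):", round(S0, 3))
print("S at numerical minimum:", round(float(S_grid[np.argmin(f_grid)]), 3))
\end{verbatim}
The gradient and anchoring contributions require the full lattice discretization and are therefore omitted from this illustration.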
The Landau-de Gennes free energy is minimized using the conjugate gradient method from the ALGLIB package (\url{http://www.alglib.net/}). We can divide by an overall energy scale to set $A=-1$. Then, to check the robustness of the simulated patterns, two sets of constants were used. In the one-constant approximation, used for the CLC slab simulations in the main text, we chose standard values $B=-12.33$, $C=10.06$, $L_1= L_2\equiv L=2.32$, (and no saddle-splay term: $L_{24}=0$) corresponding to values in Ref.~\cite{z-rav}. To check that the one-constant approximation, lack of a saddle-splay term, and particular values of the elastic parameters did not significantly influence our results, we also performed simulations in the two-constant approximation with $L_1 \neq L_2$. In the spherical shell simulations in the main text we used $B=-1.091$, $C=0.6016$, $L_1=0.00761$, $L_2=0.02282$, and $L_{24}=L_2/2$. The anchoring was modeled using a Rapini-Papoular surface potential for the perpendicular anchoring and a degenerate planar anchoring potential \begin{equation} f_{\mathrm{hom.}} = W_0 \int \mathrm{d} A \, \Tr [(\mathbf{Q}-\mathbf{Q}^{\parallel})^2], \end{equation} where $\mathbf{Q}^{\parallel}= (3 S_0/2)( \bm{\nu} \otimes \bm{\nu}-\mathbf{I}/3)$ is the uniaxial $\mathbf{Q}$-tensor constructed to orient parallel to the surface normal vector $\bm{\nu}$. That is, we penalize deviations of the $\mathbf{Q}$ tensor away from a uniaxial one oriented along the boundary surface normals. The planar anchoring condition is similar, except we penalize whenever the $\mathbf{Q}$ tensor deviates away from the plane of the surface \begin{equation} f_{\mathrm{plan.}} = W_1 \int \mathrm{d}A \left[ \Tr[(\bar{\mathbf{Q}}-\bar{\mathbf{Q}}^{\perp})^2] +(\Tr \mathbf{Q}^2-3 S_0^2/2)^2 \right], \end{equation} where $\bar{\mathbf{Q}} \equiv \mathbf{Q}+S_0 \mathbf{I}/2$ and $\bar{\mathbf{Q}}^{\perp}=(\mathbf{I}- \bm{\nu} \otimes \bm{\nu}) \bar{\bm{Q}} (\mathbf{I}- \bm{\nu} \otimes \bm{\nu})$ is a projection of the $\bm{Q}$ tensor on the plane perpendicular to the surface normal $\bm{\nu}$. The length scales we can simulate are limited by the mesh spacing $\Delta x$ of the cubic grid used to compute the derivatives in the elastic terms $f_{\mathrm{grad}}$. This mesh spacing is typically set by the nematic correlation length, which describes the characteristic spatial variation of the Maier-Saupe order parameter $S$ and, therefore, the size of nematic defects \cite{z-rav}. In the one-constant approximation we use, this spacing is given by $\Delta x =\sqrt{2K/[9 S_0^2L \tilde{A}]}\approx4.4~\mathrm{nm}$, where $K= 10^{-11}~\mathrm{N}$ is the Frank elastic constant for 5CB, $S_0 \approx 0.533$, and $\tilde{A}=0.172 \times 10^6~\mathrm{J}/\mathrm{m}^3$ is the unscaled constant $A$ in the potential. Choosing this lengthscale allows us to capture any nematic defects, which is especially important for strong anchoring conditions as shown in Fig.~\ref{StrHomeoAnc}. However, in shells with weak homeotropic anchoring, we do not find any nematic defects, and we expect that the potential $f_{\mathrm{bulk}}$ is minimized at all points in the simulation. In this case, it should be possible to use a larger spacing $\Delta x$, and we do not expect our results to change much if we simulated larger shells. For example, in the two-constant approximation, we tried a larger mesh spacing of $\Delta x \approx 10$~nm. 
This choice did not influence the major features of our results, such as the focal conic domains in the spherical shells and the orientation of the pitch axis along the sphere surface. We do not expect to find differences for even larger mesh spacings; the only issue is that it becomes more difficult for the simulation to find a minimum energy state because the energy becomes dominated by contributions from $f_{\mathrm{bulk}}$. \section*{SI Section 1: Bulk defect lines with strong homeotropic anchoring} \begin{figure*} \centering \includegraphics[width=1\textwidth]{ThickHomeotropicStripes.eps} \caption{\textbf{Subsurface defect lines with strong homeotropic anchoring.} CLC shells with 0.1 M NaCl in the water phases and 10 mM SDS in both the inner and outer phases (A) and only in the outer phase (B) have subsurface defect lines. Pictures are focused either on the inner surface (i) or on the outer surface (ii). Scale bars are all 50 $\mu$m. A and C) Because the defect lines (black lines in the simulations) form from both surfaces, the defect lines align to reduce the distortion. B and D) Because the pitch axis must distort away from being parallel to the surface with hybrid anchoring, double spirals delineated by the defect line are less tightly wound. In the simulations in C and D), the one-constant elastic constant approximation is used. The shells have a radius of 0.42 $\mu$m and a thickness of 0.31 $\mu$m. In this case, the smallness of the simulated shell is necessitated by the presence of the defect lines at the surface, which have a size governed by the nematic correlation length, which is on the order of a few nanometers \cite{z-rav}. The anchoring is strong, with $W_0 \approx 4 \times 10^{-3}$~J/m${}^2$ for the homeotropic surfaces and $W_1=W_2 \approx 4 \times 10^{-3}$~J/m${}^2$ for the planar ones (see SI). The pitch is 0.3 $\mu$m. } \label{StrHomeoAnc} \end{figure*} When the homeotropic anchoring strength is increased beyond that of the thin stripe pattern, the ground-state twist of the cholesteric is further frustrated. To fit in more LC molecules at the interface, the twist of the cholesteric is pushed away from the surface and into the LC bulk. Subsurface defect lines are formed instead of the alternating, thin homeotropic and planar stripes that have the width of a half-pitch. Strong homeotropic anchoring forces the cholesteric to be perpendicular to the entire boundary and, in order to satisfy the energetic twist of the system, the cholesteric must twist rapidly in the bulk in discrete areas, creating defects. Subsurface defect lines are shown in shells with matching homeotropic anchoring and hybrid anchoring in SI Fig.~\ref{StrHomeoAnc}A and C and Fig.~\ref{StrHomeoAnc}B and D, respectively. In SI Fig.~\ref{StrHomeoAnc}A and B, the left column is focused on the inner surface, while the right is focused on the outer surface. The SDS concentration is 10 mM. Similar defects are seen in previous work on single emulsions of CLCs \cite{homeotdrop, zumerknot}. Although defect lines are pushed away from the surface, they still remain relatively near the surface, as shown in simulations of shells with matching strong homeotropic anchoring conditions (SI Fig.~\ref{StrHomeoAnc}C, black). The defect lines formed near the inner and outer surfaces reduce the distortions from their defects further by lining up with one another, evident in both simulations and experiments (SI Fig.~\ref{StrHomeoAnc}A). 
Subsurface defect lines in hybrid shells, with strong homeotropic anchoring on the outer surface, have stripes with a wider width as a consequence of the cholesteric layers being further frustrated (SI Fig.~\ref{StrHomeoAnc}B and D). Simulations reveal that the inner, planar surface has defects typically associated with planar, concentric layers (SI Fig.~\ref{StrHomeoAnc}D), distorting the cholesteric layers towards the outer surface, resulting in spirals that are not as tightly wound as those in shells with matching homeotropic anchoring conditions. \section*{SI Figures and Videos} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{InnerFCDs.eps} \caption{\label{innerFCDs} \textbf{Focal conic domains (FCDs) form on inner shell surface.} A CLC shell has 5 mM SDS, 1 M NaCl, and 1\% PVA in the inner phase water phase and 5 mM SDS, 0.1 M NaCl, and 1\% PVA in the outer phase. FCDs can only be created on the inner shell surface alone when an extreme concentration of salt (1 M NaCl) is added to the inner water phase. A) Before the shell completes osmotically swelling, FCDs, formed from the inner shell, are seen (zoomed on right). The high salt concentration screens the SDS surfactant head group, allowing the surfactant to pack more densely on the interface. B) After one day, the shell completes its swelling. The surfactant and salt concentrations are decreased, and the FCDs disappear as the inner salt concentration matches the outer salt concentration. The shell has mostly planar anchoring on its surface, apparent from the defect required from concentric cholesteric layers (zoomed on right).} \end{figure*} \begin{figure*}[ht!] \centerline{\includegraphics[width=0.5\textwidth]{FCDDiagram.eps}} \caption{\label{FCDDiagram} \textbf{Double spirals are signatures of focal conic domains (FCDs).} Layers of concentric circles (dashed lines), representative of cholesteric layers, collide on a focal line (green). In red, we depict the surfaces of the shell confining the cholesteric, separating physical areas of the circular sections (black dashed lines) from virtual areas (gray dashed lines). If we rotate the two-dimensional texture around the vertical axis (blue) we get a texture between two red spheres with a focal plane dividing the top and bottom concentric CLC spheres. Topology, however, forces in two charge +1 nematic defects along the rotation axis. Though a +1 disclination in a uniaxial nematic can ``escape into the third dimension'' \cite{cladiskleman}, the biaxial-like cholesteric texture cannot \cite{geomchol}. As a result, the nematic disclinations become pitch defects, located along the rotation axis. Alternatively, were we to rotate the two-dimensional texture around the horizontal axis (green), we generate a toroidal focal conic \cite{pieranski}, with one focal line connecting the centers of the concentric CLC circles and another focal line along the axis of rotation. Pitch defects, at the center of double spirals, are located along the green axis of rotation. The director configuration within the shell is similar for either of these rotation axes. The structure of the FCDs becomes more complex as the number of double spiral domains increases.} \end{figure*} \begin{figure*}[ht!] 
\centerline{\includegraphics[width=0.9\textwidth]{CoexistenceSlab.eps}} \caption{\label{Coexistence} \textbf{Interface undulations at an isotropic-CLC interface.} We simulate an isotropic-CLC interface in a slab geometry ($24 \times 24$ nm in the $x$ and $y$ directions and $18$ nm wide in the vertical direction) by tuning our Landau-de Gennes potential to the coexistence point, which corresponds to constants $A= 5.2311$, $B=-11.2612$, and $C=0.8889$. In this case, the CLC and isotropic phases are equally favored energetically. We plot just the CLC\ region, showing the local orientation of the molecules according to the $\bm{Q}$ tensor. By starting with an initial condition that is isotropic on the top of the slab and a uniform CLC on the bottom, we make an interface where the two regions initially meet in the middle of the slab. The anchoring conditions are strongly planar on the bottom of the slab (to force the pitch to align along the $z$-axis in the bulk of the slab) and free on the top. There are periodic boundary conditions in the $x$ and $y$ directions. After minimizing the Landau-de Gennes free energy, we see that the isotropic-CLC interface undulates (with hills and valleys indicated) and forms stripes with a periodicity equal to one pitch. This illustrates that a free CLC interface naturally undulates in response to the variation of the molecular orientation at the interface. We use the two-constant approximation with $L_1=L_2=2.32$ and $L_{24}=L_2/2$ (see SI, Materials and Methods). The pitch is 6 nm in the CLC region. } \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{BentState-FCD.eps} \caption{\label{BentStateFCD} \textbf{``Bent'' state can coexist with FCD state in CLC shell with heterogeneous thickness.} A CLC shell with 0 mM SDS, 1 M NaCl, and 1\% PVA in the inner water phase begins to swell in an outer water solution with 7 mM SDS, 0.1 M NaCl, and 1\% PVA. Thickness variation in the CLC shell shows that thinner regions have stripes from bent cholesteric layers (left), expressing the ``bent'' state, while thicker regions in the back of the drop have FCDs (right).} \end{figure*} \indent \indent \textbf{SI Video 1: Surfactant anchoring transition - strong homeotropic to planar.} A cholesteric liquid crystal shell is first equilibrated with no SDS in the inner water phase and 10 mM SDS in the outer phase. Both water phases have 1\% PVA and 0.1 M NaCl. The shell is then transferred to a solution with no SDS, but still with 1\% PVA and 0.1 M NaCl. As the SDS diffuses from the outer surface of the shell and into the outer water phase, the anchoring on the shell weakens, causing the shell patterns to change. Subsurface defect lines become focal conic domains, then thick planar stripes, until the planar anchoring state is reached. The pitch is 5 $\mu$m. \textbf{SI Video 2: Temperature anchoring transition - 3\% CB15, thick shell.} A cholesteric liquid crystal shell with only 1\% PVA in the water phases undergoes a temperature ramp towards the cholesteric-isotropic transition temperature at a rate of 0.01 $^{\circ}$C/minute. The absolute value of temperature is given on every frame of the video. The pitch is 5 $\mu$m. Wide, planar stripes form with the ``bent''-state, then the stripe width halves discontinuously to the thin stripe state, until the shell fully transitions to isotropic. The shell is 32 $\mu$m thick, $c>1$ initially. Since many pitches can fit into the shell thickness, the cholesteric layers can easily bend to form focal conic domains.
Hexagonal packing of the FCDs occurs first, then the polygonal texture forms, until the shell thins, becoming a 2D nematic before fully transitioning to the isotropic phase. \textbf{SI Video 3: Temperature anchoring transition - 3\% CB15, thin shell.} A cholesteric liquid crystal shell with only 1\% PVA in the water phases undergoes a temperature ramp towards the cholesteric-isotropic transition temperature at a rate of 0.01 $^{\circ}$C/minute. The relative value of temperature to the phase transition temperature is given on every frame of the video. The pitch is 5 $\mu$m. The shell thickness is comparable to or less than the pitch, $c<1$. Wide, planar stripes form with the ``bent''-state, then the stripe width halves discontinuously to the thin stripe state. The shell is then so thin that it is essentially a 2D nematic, until the shell fully transitions to isotropic. The accompanying graph shows how the stripe periodicity changes with time, as measured from intensity differences. The discontinuity in the periodicity when the CLC becomes very thin is apparent in the center plot of periodicity/pitch vs. $\Delta T(^{\circ} C)$ from the clearing point. \textbf{SI Video 4: Equilibrium to planar transition, 1.} Video of Fig.~\ref{EqThinStripes}C. The stripe dislocation becomes the planar defect. \textbf{SI Video 5: Equilibrium to planar transition, 2.} SDS is removed from the outer cholesteric shell surface, decreasing the homeotropic anchoring strength and unwinding the stripes. The stripe patterns match those of Fig.~\ref{EqThinStripes}C and SI Video 4. The stripe dislocation becomes the planar defect. \textbf{SI Video 6: Equilibrium to planar transition, 3.} As SDS diffuses from the outer cholesteric shell surface to the surrounding water phase, the homeotropic anchoring strength decreases and the stripes unwind. The top of the shell is thinner than in the previous two surfactant anchoring transitions from equilibrium thin stripes, evident from the Skyrmions at the top of the shell. As the stripes unwind, a Skyrmion becomes a planar defect. A Skyrmion is a pitch defect, just as a stripe dislocation is a pitch defect, making it an energetically favorable place for the planar defect to form. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} Vision Transformer (ViT)\cite{dosovitskiy2020vit} and its variants have achieved great success in a variety of computer vision tasks, such as image classification\cite{dosovitskiy2020vit,liu2021swin,graham2021levit}, object detection\cite{li2022exploring,fang2021yolos,carion2020detr}, semantic segmentation\cite{zheng2021rethinking,strudel2021segmenter,chen2021transunet}, etc. However, the massive parameters and computations of Transformer models hinder their application on portable devices such as mobile phones. To address this, various model compression algorithms have been widely studied, such as distillation\cite{touvron2021training,Touvron2022DeiTIR,jia2021efficient}, pruning\cite{pan2021reduct,zhu2021vision,yu2021unified} and quantization\cite{li2022qvit,lin2022fq,li2022ivit}. Among them, binary Transformers aggressively compress weights and activations to a single bit, which gives a $32\times$ saving in memory consumption. Meanwhile, efficient bit-wise operations can greatly accelerate model inference and reduce energy consumption. However, performance degradation, mainly caused by the limited representation ability and the difficulty of optimization, restricts the wide application of binary networks. To tackle these bottlenecks, the literature on binarized convolutional neural networks (CNNs) has proposed to minimize the binarization error\cite{rastegari2016xnor,zhou2016dorefa,bulat2019xnor+}, enhance the representation ability\cite{liu2018bi,liu2020reactnet,zhuang2022structured} and relieve the gradient approximation error in optimization\cite{qin2020forward,hou2016loss,bai2018proxquant}. Also, many attempts have been made in previous studies to binarize BERT\cite{devlin2018bert} for natural language processing (NLP) tasks, such as correcting the attention value range mismatch\cite{qin2022bibert}, applying stronger distillation, and migrating methods from binary CNNs to Transformers\cite{liu2022bit}. However, there are still few studies on the binarization of vision Transformers. In addition to the common challenges mentioned above, binarizing Transformers presents two new technical challenges. \textbf{Firstly, effective methods for accurately binarizing softmax attention are lacking.} The self-attention module aims to find pairwise similarities between all the tokens\cite{vaswani2017attention}, which is very different from convolutional or fully-connected layers: the attention scores all lie in (0, 1), while ordinary weights take both positive and negative values, as shown in Figure~\ref{fig:attentiondetails}. Therefore, its functionality and data distribution are quite different from those of ordinary weights. The recent study BiBERT\cite{qin2022bibert} simply binarizes the attention score before $\mathrm{Softmax}$ to \{0, 1\} to maximize the information entropy. However, it ignores the impact of $\mathrm{Softmax}$ on the distribution and will lead to huge quantization errors or even damage the attention mechanism. \textbf{Secondly, how to preserve the information in the pretrained model during binarization is underexplored.} Unlike binary CNNs, which perform well when trained from scratch\cite{qin2020forward,tu2022adabin,liu2020reactnet}, we observe that BiViTs heavily rely on the pretrained model and are sensitive to quantization, as shown in Figure~\ref{fig:pretrainimpact}.
\begin{figure} \centering \includegraphics[width=\columnwidth]{figures/pretrain_impact_crop.pdf} \caption{\textbf{Impact of pretrained model when binarizing Transformers.} The experiment is conducted on the TinyImageNet dataset. Initializing Transformers from pretrained models greatly boosts the accuracy.} \label{fig:pretrainimpact} \end{figure} Even if the initial weights are derived from a pretrained model, directly binarizing all parameters still causes a huge loss of pretrained information, which then leads to a severe performance drop. Also, the loss of pretrained information is difficult for Transformers to recover through quantization-aware training (QAT). In particular, MLP modules account for nearly half of the computations and parameters within a Transformer\cite{liu2022ecoformer}. An MLP is composed of an activation layer and two fully-connected layers, which are equivalent to $1 \times 1$ convolutions and are widely known to be difficult to optimize due to the limited representational capability\cite{zhuang2022structured,garg2021confounding,liu2018bi}. How to binarize the attention more effectively and how to retain the information of the pretrained model remain open questions. To reduce the quantization error in binarizing attentions, we first analyze the long-tailed distribution of attention scores and observe how it varies across different attention vectors. To adaptively search for the optimal threshold for binarization, we propose an optimization algorithm based on sparse coding and block coordinate descent, and further propose an efficient approximation called Softmax-aware Binarization to avoid conducting the optimization on each forward pass. To retain pretrained information and further enhance model representation ability, we then propose Cross-layer Binarization to decouple the quantization of self-attention and MLPs to avoid mutual interference, and adopt parameterized scaling factors for weight binarization. To the best of our knowledge, we are the first to successfully binarize Transformers for vision tasks. In summary, our contributions are as follows: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item We design Softmax-aware Binarization for the self-attention module, which adapts to the long-tailed data distribution and greatly reduces the quantization error. \item We propose Cross-layer Binarization and Learnable Weight Binarization to retain pretrained information and further enhance the representation ability of binary Transformers, which helps BiViTs converge and improves the accuracy. \item Combining the above contributions, we propose the first applicable BiViT. Experiments on the TinyImageNet and ImageNet datasets show that it consistently outperforms current state-of-the-art methods by large margins. \end{itemize} \section{Related Work} \subsection{Vision Transformers} The Transformer\cite{vaswani2017attention} was originally designed to process long sequences in NLP tasks. ViT\cite{dosovitskiy2020vit} first adapts Transformers to vision tasks by splitting images into grids and constructing vision token sequences. DeiT\cite{pmlr-deit-touvron21a} further improves the data efficiency of vision Transformers. Benefiting from the global receptive field and the powerful long-range modeling capabilities of Transformer models, ViTs demonstrate promising performance against their CNN counterparts.
Many follow-up works have been proposed to explore hierarchical structures\cite{liu2021swin,wang2021pyramid,zhang2022nested}, inject convolutional layers\cite{li2021localvit,guo2022cmt,dai2021coatnet} and apply ViTs to different vision tasks\cite{zhang2022topformer,zeng2021improving,fang2021yolos}. However, the inference speed of ViTs is usually slower than that of CNNs in practical applications\cite{li2022efficientformer}. The reasons mainly include the lack of specialized optimizations (such as Winograd\cite{liu2018winograd} for convolutional layers) and the quadratic computational complexity of the self-attention module. To reduce the computational complexity of ViTs, many methods have been proposed, including linear attention\cite{shen2021linear,cai2022efficientvit,wang2022pvt}, redundancy reduction\cite{pan2021reduct,zhang2022minivit,yang2021nvit} and quantization\cite{li2022qvit, lin2022fq,liu2021post}. However, current Transformer quantization work mainly focuses on fixed-point quantization, whether through Quantization-Aware Training (QAT)\cite{li2022q,li2022qvit} or Post-Training Quantization (PTQ)\cite{liu2021post,lin2022fq,yuan2021ptq4vit}. Ternary and binary quantization remain to be studied. \subsection{Binary Neural Networks} A binary neural network (BNN) quantizes both weights and activations to 1 bit, which greatly reduces the complexity of the model. The binarization of models usually requires QAT to restore accuracy. To overcome the non-differentiability of the quantizer during training and the limited representation capacity, many methods have been proposed to help binarize CNNs, such as binary-friendly model structures\cite{liu2018bi,liu2019circulant,zhu2019benn,mishra2017wrpn, bulat2020high,bethge2021meliusnet}, knowledge distillation\cite{liu2020reactnet,mishra2017apprentice,martinez2020training}, soft functions\cite{he2022binarizing,qin2020forward,zhang2022root,ding2022ie}, optimizer selection\cite{liu2021adam,alizadeh2019systematic,courbariaux2016binarized}, etc. Although some of them are also effective for Transformer models, as analyzed in BiT\cite{liu2022bit}, methods focusing on binary Transformers still need to be developed to relieve accuracy degradation. The literature closely related to our work includes BinaryBERT\cite{bai2021binarybert}, BiBERT\cite{qin2022bibert} and BiT\cite{liu2022bit}. They put forward improvements for binarizing the BERT\cite{devlin2018bert} model, including binarization functions and model distillation, and evaluated them on NLP tasks. However, none of these methods are evaluated on computer vision tasks. In the following sections, we will migrate these methods to Swin\cite{liu2021swin} and NesT\cite{zhang2022nested} as our baselines to test their performance and analyze their drawbacks. Then we propose to improve BiViT's performance by accurate attention binarization and pretrained information preservation. To the best of our knowledge, ours is the pioneering work in the field of BiViTs. \section{Method} \subsection{Preliminaries} \label{sec:preliminaries} Generally, BNNs follow \cite{rastegari2016xnor} to use the $\mathrm{Sign}$ function to binarize weights and activations to \{-1, +1\}, and use the Straight-Through Estimator (STE)\cite{bengio2013estimating} to overcome the non-differentiability of the $\mathrm{Sign}$ function, as follows: \begin{gather} \hat{x} = \mathrm{Sign}(x)=\left\{ \begin{aligned} +1&, \mathrm{if}\ x>0 \\ -1&, \mathrm{otherwise}, \end{aligned} \right.
\end{gather} \begin{gather} \frac{\partial \mathcal{L}}{\partial x}\approx\left\{ \begin{aligned} \frac{\partial \mathcal{L}}{\partial \hat{x}}&, \mathrm{if}\ \lvert x \rvert <1 \\ 0&, \mathrm{otherwise}. \end{aligned} \right. \end{gather} To estimate the full-precision $\mathbf{x}\in\mathbb R^{n}$, BNNs further use a scaling factor $\alpha \in\mathbb R^{+}$ to reduce the quantization error: \begin{equation} \label{eq:scale} \alpha = \frac{{\lVert \mathbf{x} \rVert}_{\ell_1}}{n},\quad \mathbf{x} \approx \alpha \hat{\mathbf{x}}. \end{equation} With both weights and activations binarized, Binary GEneral Matrix Multiplication (BGEMM) can be used to accelerate the inference, which can be efficiently implemented by $\mathrm{XNOR}$ and $\mathrm{bitcount}$ operations\cite{rastegari2016xnor}. In special cases, if only one operand is binarized, the multiplication can still be replaced by addition to accelerate the calculation, akin to BinaryConnect~\cite{courbariaux2015binaryconnect}. However, binarization using $\mathrm{Sign}$ can be problematic in Transformers. In the self-attention mechanism\cite{vaswani2017attention}, the attention is defined by: \begin{gather} \text{Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \mathrm{Softmax}\left(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d_k}}\right) \mathbf{V}, \end{gather} where $\mathbf{Q}$, $\mathbf{K}$, $\mathbf{V}$ are respectively the query, key and value matrices and $d_k$ is the dimension of the key. We can see that the $\mathrm{Softmax}$ operation is applied along the last dimension to obtain the attention scores. According to the definition of $\mathrm{Softmax}$, the results are non-negative, so they will all become $+1$ after the $\mathrm{Sign}$ function. In order to solve this value range mismatch, BiBERT\cite{qin2022bibert} proposes to use the $\mathrm{Bool}$ function to binarize the attention scores without the $\mathrm{Softmax}$ operation: \begin{gather} \mathrm{Bool}(x)=\left\{ \begin{aligned} 1&, \mathrm{if}\ x>0 \\ 0&, \mathrm{otherwise}. \end{aligned} \right. \end{gather} However, this will lead to huge quantization errors since $\mathrm{Softmax}$ is totally discarded, as we will analyze in Section~\ref{sec:sib}. \subsection{Softmax-aware Binarization} \label{sec:sib} \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{figures/attention_zoom_croped.pdf} \caption{\textbf{The long-tailed distribution of attention scores.} Most attention scores are around zero but the maximum values can reach $0.99$.} \label{fig:attentiondistribution} \end{figure} The self-attention mechanism\cite{vaswani2017attention} is designed to model global relationships among different patches (tokens) and to focus on important token pairs. Figure~\ref{fig:attentiondistribution} presents the distribution of attention scores in the pretrained NesT-T\cite{zhang2022nested} model. It is obvious that after the $\mathrm{Softmax}$ operation, attention scores follow a long-tailed distribution and more than $99.5\%$ of them are less than $0.05$, i.e., they are highly sparse. For a more detailed explanation, we take a closer look at an actual attention vector (\ie, one row of the attention matrix) from the NesT-T pretrained model, as shown in Figure~\ref{fig:attentiondetails}(a).
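As a reference point for the attention-specific analysis that follows, the baseline binarizer of Section~\ref{sec:preliminaries} (the $\mathrm{Sign}$ function with STE and the scaling factor of Eq.~(\ref{eq:scale})) can be summarized in a short PyTorch-style sketch. The class and helper function below are our own illustration, with names of our choosing rather than those of any released implementation; the learnable variant introduced later for weight binarization simply replaces the computed $\alpha$ with a trainable parameter.
\begin{verbatim}
import torch

class SignSTE(torch.autograd.Function):
    # Sign binarizer in the forward pass, straight-through estimator in the backward pass.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x > 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient through only where |x| < 1.
        return grad_output * (x.abs() < 1).to(grad_output.dtype)

def binarize_weight(w):
    # w is a fully-connected weight of shape (out_features, in_features);
    # alpha = ||w||_1 / n per output row, so that w is approximated by alpha * Sign(w).
    alpha = w.abs().mean(dim=1, keepdim=True)
    return alpha * SignSTE.apply(w)
\end{verbatim}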
For the attention vector in Figure~\ref{fig:attentiondetails}(a), if we directly use the $\mathrm{Bool}$ function to binarize, nearly half of the attention scores are set to 1 and they all contribute equally after binarization (Figure~\ref{fig:attentiondetails}(c)), which is inconsistent with the actual distribution of softmax attention scores, where a few values dominate (see Figure~\ref{fig:attentiondetails}(b)). \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/softmax-aware-crop.pdf} \caption{\textbf{Details of attention binarization.} (a) Original attention from the NesT-T pretrained model. (b) Attention processed by the $\mathrm{Softmax}$ operation. (c) Attention binarized with BiBERT's Bi-Attention techniques. (d) Attention binarized by our method. } \label{fig:attentiondetails} \end{figure} In order to reduce the quantization error while binarizing attention scores, the ideal binarization method must satisfy the following two properties: 1) Compared with using the $\mathrm{Bool}$ function, the proportion of activated attention scores (set to 1) should be smaller. As shown in Figure~\ref{fig:attentiondistribution}, most values are around 0, which can be ignored during calculation, and only a few significant values are considered. 2) The activation threshold should not be a fixed value. $\mathrm{Softmax}$ operates on each row-wise attention vector (\ie, each row of the attention matrix), while different attention vectors follow different distributions. For example, the maximum value of some of them can reach $0.99$, while for others it is only about $0.05$. Empirically, even though most attention vectors are dominated by only a few elements, the activation threshold should differ across attention vectors. To achieve this, the key is to find the optimal threshold $T$ for the binarization of each attention vector (\ie, $T$ is different for each row). Inspired by sparse coding\cite{lee2006sparse} and LQ-Nets\cite{zhang2018lq}, we express the quantized vector $\mathbf {x}_q \in\mathbb R^{n}$ as the product of a basis vector $\mathbf v \in\mathbb R^{k}$ and a binary encoding $\mathbf b \in \{0,1\}^{k \times n}$: \begin{gather} \label{eq:formulatex} \mathbf {x}_q = \mathbf v^{T} \mathbf b, \end{gather} where $k$ is the target bit-width. Then the optimization problem can be formulated as: \begin{equation} \mathbf{v^*},\mathbf{b^*}=\mathop{\arg\!\min}\limits_{\mathbf{v}, \mathbf{b}} {\lVert{\mathbf v^{T} \mathbf b -\mathbf{x}}\rVert}_{2}^2 ,\quad s.t.\ \mathbf{b}\in \{0,1\}^{k \times n}. \end{equation} In this paper, the bit-width $k$ is set to 1, so the basis vector $\mathbf v$ reduces to a scalar $v$. However, with both $v$ and $\mathbf b$ to be solved, a brute-force search can be computationally expensive. Instead, the optimization problem can be efficiently solved with a block coordinate descent approach. Specifically, we alternately optimize the basis $v$ and the binary encoding $\mathbf b$ while keeping the other fixed: \\ \hspace*{\fill} \\ \noindent\textbf{Update $v$:} With a fixed binary encoding $\mathbf b$, the optimization problem degenerates to a special case of linear regression. Therefore, the optimal $v$ can be derived as: \begin{equation} \label{optimalv} v^* = \frac{\mathbf{x} \cdot \mathbf{b}}{{\lVert \mathbf{b} \rVert}_{2}^2} , \end{equation} where $\cdot$ represents the dot product between two vectors. \noindent\textbf{Update $\mathbf b$:} With the optimal $v$ from Eq.~(\ref{optimalv}), the two values for binarization become $\{0, v^*\}$.
The optimal transition point (threshold) can be simply calculated as: \begin{equation} \label{eq:calculateT} T = \frac{0+v^*}{2} . \end{equation} Then we binarize the real-valued attention vector with the threshold to update the binary encoding $\mathbf b$: \begin{equation} \label{eq:binarizeattention} \mathbf{b^*} = \mathrm{Bool}(\mathbf{x}-T). \end{equation} After iterating for $N$ times, the quantization error between binary attention and the full-precision counterpart decreases significantly, as shown in line 2 of Table~\ref{table:quanterror}. \begin{table}[h] \caption{Quantization error under different methods. We set $N=5$ and $\beta=0.25$ in practice.} \label{table:quanterror} \centering \begin{tabular}{cc} \hline Method & Quantization Error \\ \hline BiBERT & 0.683 \\ Optimal $T$ & 2.58e-05 \\ Approximate $T$ & 2.72e-05 \\ Approximate $T$ w/o scales & 0.141 \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/relation_va.pdf} \caption{\textbf{Relation between the maximum value of attention (X-axis) and optimal $T$ (Y-axis).} Blue dots are maximum value of each attention vector sampled from the pretrained NesT-T model and red dashed line represents the result of linear regression on these attention scores.} \label{fig:relationva} \end{figure} Although this optimization strategy minimizes the quantization error, it is impractical to optimize each generated attention vector during inference. Besides, we calculate an optimal $v$ for each attention vector, which introduces an extra computational burden. To simplify optimization and maintain similar computational complexity as the previous methods\cite{qin2022bibert,bai2021binarybert,liu2022bit}, we try to seek a relationship between the optimal $T$ (calculated by Eq.~(\ref{eq:calculateT})) and the distribution of the attention scores. As shown in Figure~\ref{fig:relationva}, the optimal $T$ is highly related to the maximum value of the attention scores. Therefore, we approximate $T$ using a fixed coefficient $\beta$ (shared in the model) and the maximum value of the attention vector $\mathbf{x}$ to greatly accelerate the inference: \begin{equation} \label{eq:alpha} T = \beta \mathrm{Max}(\mathbf{x}). \end{equation} Experimental result demonstrates that this approximation barely increases the quantization error, as shown in line 3 of Table~\ref{table:quanterror}. However, compared with the previous methods\cite{qin2022bibert,bai2021binarybert,liu2022bit}, multiplying the basis scalar $v$ by the binary encoding vector $\mathbf{b}$ to get $\mathbf{x}_q$ in Eq.~(\ref{eq:formulatex}) still introduces extra computation. To keep the same computational complexity as previous methods, we make a second approximation and further discard the basis scalar $v$ in Eq.~(\ref{eq:formulatex}) since the obtained binary attention scores $\mathbf{b}$ already satisfy the two properties mentioned in Section \ref{sec:sib}. In this case, the quantization error is shown in line 4 of Table~\ref{table:quanterror}. It should be noted that this step is simply a trade-off between accuracy and complexity. By default, we use the algorithm with two approximations for experiments. Since our binary strategy mimics the $\mathrm{Softmax}$ operation, we use $\mathrm{Softmax}$ to approximate the gradient instead of STE in the backward pass. Specifically, the gradients are backpropagated as if the elements are processed by $\mathrm{Softmax}$ during the forward pass. 
\begin{equation} \label{eq:softmaxgradient} \frac{\partial \mathcal L}{\partial\mathbf x} = \frac{\partial \mathcal L}{\partial\mathbf b} \frac{\partial \mathbf b}{\partial\mathbf x} \approx\frac{\partial \mathcal L}{\partial \mathbf{b}}\ \frac{\partial \mathrm{Softmax}(\mathbf{x})}{\partial \mathbf{x}} . \end{equation} This can effectively address the mismatch between the forward quantizer and its backward approximator. Overall, the training process of our Softmax-aware Binarization is summarized in Algorithm~\ref{algor:sab}. \begin{algorithm}[htbp] \caption{Softmax-aware Binarization for self-attention modules.} \label{algor:sab} \small \begin{algorithmic}[1] \STATE \textbf{Input}: the softmax attention scores $\mathbf x \in\mathbb R^{n}$ \STATE {\textbf{Forward propagation}} \STATE \quad Approximate the transition point $T$ by the maximum value of attentions with Eq.~(\ref{eq:alpha}):\\ \quad \quad $T = \beta \mathrm{Max}(\mathbf{x})$;\\ \STATE \quad Binarize attentions with transition point $T$ by Eq.~(\ref{eq:binarizeattention}):\\ \quad \quad $\hat{\mathbf{x}} = \mathrm{Bool}(\mathbf{x}-T)$;\\ \STATE {\textbf{Backward propagation}} \STATE \quad Calculate the gradients \wrt $\mathbf x$ with Eq.~(\ref{eq:softmaxgradient}):\\ \quad \quad $\frac{\partial \mathcal L}{\partial\mathbf x}\approx\frac{\partial \mathcal L}{\partial \hat{\mathbf{x}}}\ \frac{\partial \mathrm{Softmax}(\mathbf{x})}{\partial \mathbf{x}}$. \end{algorithmic} \end{algorithm} \subsection{Information Preservation} \subsubsection{Cross-layer Binarization} \label{sec:cr} The pretrained model is crucial for BiViTs, as empirically justified in Figure~\ref{fig:pretrainimpact}. However, compared with binary BERT\cite{qin2022bibert,liu2022bit,bai2021binarybert}, BiViTs are rather difficult to optimize. To prove this, we directly migrate BiBERT\cite{qin2022bibert} to Swin-T\cite{liu2021swin} and NesT-T\cite{zhang2022nested} to verify its performance on vision tasks. The results are shown in Table~\ref{tinyimgnetresult}. We observe that its accuracy degradation in image classification tasks can reach $40\%$, which indicates that vanilla BiViTs cannot make good use of the pretrained information and is difficult to optimize on vision tasks. To explore the reasons, we present the architecture and parameters of Swin-T in Figure~\ref{fig:cr}. The MLP, which is equivalent to $1\times1$ convolutions, is hard to quantize (as analyzed in Section~\ref{sec:intro}) and has more parameters than the self-attention module. To tackle this problem, we propose Cross-layer Binarization (CB), which is analogous to the previous two-stage training scheme\cite{zhuang2018towards,martinez2020training}, to decouple the quantization of self-attention and MLP module to reduce mutual interference. In the first stage, we keep MLP to full precision and binarize all the self-attention modules with Softmax-aware Binarization. Then in the second stage, we binarize MLPs to get the final model. 
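To make the two stages concrete, the attention binarizer used in the first stage is exactly the Softmax-aware Binarization of Algorithm~\ref{algor:sab}. A minimal PyTorch-style sketch of that operator is given below; it is our own illustration (the class and argument names are ours, not those of a released implementation), with the shared coefficient $\beta$ defaulting to the value $0.25$ used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

class SoftmaxAwareBinarize(torch.autograd.Function):
    # Input x holds the softmax attention scores; one threshold is computed per row.
    @staticmethod
    def forward(ctx, x, beta=0.25):
        ctx.save_for_backward(x)
        # Adaptive threshold T = beta * max(x) along the last dimension.
        T = beta * x.max(dim=-1, keepdim=True).values
        # Bool(x - T): scores above the threshold are set to 1, the rest to 0.
        return (x > T).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Backpropagate as if the forward pass were Softmax:
        # vector-Jacobian product s * (g - <g, s>) with s = Softmax(x).
        s = F.softmax(x, dim=-1)
        grad_x = s * (grad_out - (grad_out * s).sum(dim=-1, keepdim=True))
        return grad_x, None
\end{verbatim}
In use, the binarized scores take the place of the real-valued attention map before it is multiplied with the (likewise binarized) $\mathbf{V}$; the MLP weights are only binarized in the second stage of Cross-layer Binarization.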
\begin{figure}[htb] \setlength{\parskip}{0pt} \centering \includegraphics[width=0.8\columnwidth]{figures/Transformer_structure_newsmall_crop.pdf} \caption{\textbf{The architecture and parameters of Swin-T model.} MLP is difficult to binarize due to $1 \times 1$ convolutions and has far more parameters than other modules.} \label{fig:cr} \end{figure} Compared with previous two-stage training schemes that first binarize activations and then weights\cite{martinez2020training,liu2020reactnet,bahri2021binary}, CB is designed for Transformers to relieve the mutual interference and mitigate information loss caused by binarizing MLPs. The experimental results demonstrate that using CB brings more accuracy improvement than the traditional two-step training scheme. Also, the rich reserved information of the pretrained model makes it possible to perform fine-tuning on the binary Transformer. As we will show in Section~\ref{sec:ablacr}, training with CB provides more competitive performance under the same number of training iterations. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{figures/channel_scale_crop.pdf} \caption{\textbf{Parameterized and calculated scaling factors in NesT-T model.} The dashed line represents the mean value of calculated scaling factors.} \label{fig:scale} \end{figure} \subsubsection{Learnable Weight Binarization} To further narrow the performance gap between the binarized model and the full-precision counterpart, an intuitive idea is to increase the representation ability of the binarized model. However, this usually results in increased number of model parameters. To this end, we propose to parameterize scaling factors (as defined in Eq.~(\ref{eq:scale})) to enhance the representation ability of the binarized model. Motivated by SE-Net\cite{hu2018squeeze}, channel-wise scaling factors can be regarded as the importance of each channel, rather than an approximation of its distribution. In order to preserve the model structure and complexity, we directly replace the scaling factor by a learnable parameter. The parameterized scaling factors could be optimized in conjunction with other network parameters via backward propagation during training. As shown in Figure~\ref{fig:scale}, the deviation of the scaling factors obtained by Eq.~(\ref{eq:scale}) across channels is small, indicating that weight distribution of each channel is similar. On the contrary, the parameterized scaling factors vary greatly from channel to channel, showing that it learns to pay more attention to specific channels and thus enhancing the representation capacity of the model. \section{Experiment} \subsection{Implementation Details} \noindent\textbf{Dataset and architecture.} We conduct experiments on two standard benchmarks: TinyImageNet\cite{wu2017tiny} and ImageNet (ILSVRC12)\cite{deng2009imagenet}. The input resolution is $224 \times 224$. For data augmentation, we follow the settings in DeiT\cite{pmlr-deit-touvron21a}, which are common practices in ViTs. To demonstrate the versatility of our method, we adopt two widely-used efficient architectures: Swin-T\cite{liu2021swin} and NesT-T\cite{zhang2022nested}. We do not conduct experiments on full-attention models (like ViT\cite{dosovitskiy2020vit}) because of their low data efficiency in visual tasks. All the blocks in Transformer models are binarized without exception. For binary attention modules, all weights and intermediate results including $\mathbf{Q}$, $\mathbf{K}$, $\mathbf{V}$ and projection layers, are binarized. 
For binary MLP modules, weights are binarized in all experiments. We leave the input embedding layer and the output layer unbinarized, as is the common practice for BNNs\cite{rastegari2016xnor}. \noindent\textbf{Training setup.} All experiments are implemented with PyTorch\cite{paszke2019pytorch} and the timm\cite{rw2019timm} library. For both datasets, we employ the Adam\cite{kingma2014adam} optimizer without weight decay and train models for 300 epochs using a cosine annealing schedule with 5 epochs of warm-up. The initial learning rate is set to 5e-4. When training is split into two stages, we train for 150 epochs at each stage to keep the total number of iterations the same. Knowledge distillation (KD)\cite{hinton2015distilling} is used in all experiments. Specifically, we use the distribution loss proposed in \cite{liu2020reactnet} for optimization. Before training, all parameters are initialized with the pretrained model. \subsection{Ablation Studies} \subsubsection{Effect of Softmax-aware Binarization} First, we conduct ablation experiments on the Swin-T and NesT-T models on TinyImageNet to prove the effectiveness of the proposed Softmax-aware Binarization. To eliminate the impact of MLPs, we keep them at full precision and only binarize the attention modules. As shown in Table~\ref{abla:attn}, Softmax-aware Binarization consistently narrows the accuracy gap between the teacher and the binary Transformer model, indicating that the quantization error in the self-attention module is effectively suppressed. \begin{table}[h] \caption{Ablation study on Softmax-aware Binarization. } \label{abla:attn} \centering \resizebox{\columnwidth }{!}{ \begin{tabular}{cccc} \hline Arch & Method & \begin{tabular}[c]{@{}c@{}}ATTN\\ BitWidth (W/A)\end{tabular} & Top-1 (\%) \\ \hline \multirow{3}{*}{Swin-T} & FP & 32/32 & 80.57 \\ & BiBERT & 1/1 & 73.39 \\ & + Softmax-aware Binarization & 1/1 & \textbf{74.62} \\ \hline \multirow{3}{*}{NesT-T} & FP & 32/32 & 80.31 \\ & BiBERT & 1/1 & 68.51 \\ & + Softmax-aware Binarization & 1/1 & \textbf{70.73} \\ \hline \end{tabular}} \end{table} Also, we conduct experiments to verify the impact of the $\beta$ estimate (see Eq.~(\ref{eq:alpha})) on the accuracy of the model. The results in Table~\ref{tab:thresh} show that the model is not sensitive to $\beta$ within a reasonable interval (about $0.25$ to $0.45$). In the following experiments, $\beta$ is set to $0.25$ to enable an efficient bit-shift implementation of Eq.~(\ref{eq:alpha}). However, the accuracy of the model decreases when $\beta$ is too small (less than $0.2$). This indicates that as the threshold becomes too small, Softmax-aware Binarization is less effective and too many attention scores are activated. \begin{table}[h] \caption{Comparisons of binary attention's performance under different thresholds. The experiment is conducted on the NesT-T model on TinyImageNet.} \label{tab:thresh} \centering \begin{tabular}{cc} \hline Method & Top-1 Acc.(\%) \\ \hline FP & 80.31 \\ BiBERT & 68.51 \\ Ours ($\beta$=0.20) & 69.18 \\ Ours ($\beta$=0.25) & 70.73 \\ Ours ($\beta$=0.35) & 70.81 \\ Ours ($\beta$=0.45) & 70.68 \\ \hline \end{tabular} \end{table} \subsubsection{Effect of Information Preservation} \label{sec:ablacr} To demonstrate the effectiveness of Cross-layer Binarization (CB), we compare the accuracy of training with the CB scheme against that of directly training fully-binarized networks and of training with the traditional two-step scheme\cite{martinez2020training}.
The experiments are conducted with the activations in MLP modules remaining at full precision. As shown in Table~\ref{abla:cr}, using CB can improve the accuracy by $11.6\%$ compared with one-step training, making it more applicable. Moreover, CB outperforms the traditional two-step training scheme by 4.1\%, demonstrating its strong ability to retain pretrained information. We expect that the two methods can be combined to further improve the accuracy since they are orthogonal, but we leave this for future work. \begin{table}[h] \caption{Ablation study on Cross-layer Binarization. Here, ``CB'' denotes Cross-layer Binarization and ``TS'' denotes the traditional two-stage training scheme. } \label{abla:cr} \centering \resizebox{\columnwidth }{!}{ \begin{tabular}{cccc} \hline Method & \begin{tabular}[c]{@{}c@{}}ATTN\\ BitWidth (W/A)\end{tabular} & \begin{tabular}[c]{@{}c@{}}MLP\\ BitWidth (W/A)\end{tabular} & Top-1 (\%) \\ \hline FP & 32/32 & 32/32 & 80.31 \\ Ours (w/o CB) & 1/1 & 1/32 & 58.20 \\ Ours (w/ TS) & 1/1 & 1/32 & \textbf{65.64} \\ Ours (w/ CB) & 1/1 & 1/32 & \textbf{69.83} \\ \hline \end{tabular} } \end{table} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{figures/CR_helps_training_crop.pdf} \caption{\textbf{Training accuracy curve with fewer training epochs.} The experiment is conducted on the NesT-T model on the TinyImageNet dataset.} \label{fig:cr_training} \end{figure} With the pretrained information well preserved, CB also accelerates the convergence of the binary model. As shown in Figure~\ref{fig:cr_training}, training binary models on TinyImageNet with CB requires fewer iterations to achieve ideal results. For instance, when CB is not used for training, $70$ epochs are required to reach a Top-1 accuracy of $50\%$. However, it only takes $30$ epochs with CB to reach the same accuracy, which significantly improves the training efficiency. Parameterized scaling factors can be applied to both self-attention and MLP modules. To show the improvement brought by parameterized scaling factors, we conduct experiments by starting with a BiBERT-based baseline and then adding parameterized scaling factors to its self-attention and MLP modules separately. As shown in Table~\ref{abla:adap}, parameterized scaling factors are effective in both modules, yielding 4.2\% and 1.3\% accuracy improvements, respectively. Therefore, we use parameterized scaling factors in both modules by default if they are binarized. \begin{table}[h] \caption{Ablation study on parameterized scaling factors.} \label{abla:adap} \centering \resizebox{\columnwidth }{!}{ \begin{tabular}{cccc} \hline Method & \begin{tabular}[c]{@{}c@{}}ATTN\\ BitWidth (W/A)\end{tabular} & \begin{tabular}[c]{@{}c@{}}MLP\\ BitWidth (W/A)\end{tabular} & Top-1 (\%) \\ \hline FP & 32/32 & 32/32 & 80.31 \\ BiBERT & 1/1 & 32/32 & 68.51 \\ +Parameterized Scales & 1/1 & 32/32 & \textbf{72.75} \\ \hline FP & 32/32 & 32/32 & 80.31 \\ BiBERT & 32/32 & 1/1 & 70.02 \\ +Parameterized Scales & 32/32 & 1/1 & \textbf{71.35} \\ \hline \end{tabular} } \end{table} \subsection{Comparison with SOTA methods} \subsubsection{Evaluation On TinyImageNet} \label{sec:tinyimgnet} \begin{table}[ht] \caption{Comparisons of different network architectures on TinyImageNet.
Here, ``FP'' means full-precision pretrained model, ``ATTN'' denotes attention module and (W/A) represents the number of bits used in weights or activations.} \label{tinyimgnetresult} \centering \resizebox{\columnwidth }{!}{ \begin{tabular}{ccccc} \hline Arch & Method & \begin{tabular}[c]{@{}c@{}}ATTN\\ BitWidth (W/A)\end{tabular} & \begin{tabular}[c]{@{}c@{}}MLP\\ BitWidth (W/A)\end{tabular} & Top-1 (\%) \\ \hline \multirow{8}{*}{Swin-T} & FP & 32/32 & 32/32 & 80.57 \\ & BiBERT & 1/1 & 1/1 & 41.89 \\ & BiT & 1/1 & 1/1 & 40.52 \\ & Ours & 1/1 & 1/1 & \textbf{58.66} \\ \cline{2-5} & FP & 32/32 & 32/32 & 80.57 \\ & BiBERT & 1/1 & 1/32 & 65.93 \\ & BiT & 1/1 & 1/32 & 61.82 \\ & Ours & 1/1 & 1/32 & \textbf{71.20} \\ \hline \multirow{8}{*}{NesT-T} & FP & 32/32 & 32/32 & 80.31 \\ & BiBERT & 1/1 & 1/1 & 32.39 \\ & BiT & 1/1 & 1/1 & 34.72 \\ & Ours & 1/1 & 1/1 & \textbf{52.21} \\ \cline{2-5} & FP & 32/32 & 32/32 & 80.31 \\ & BiBERT & 1/1 & 1/32 & 49.53 \\ & BiT & 1/1 & 1/32 & 46.43 \\ & Ours & 1/1 & 1/32 & \textbf{69.83} \\ \hline \end{tabular} } \end{table} We first evaluate our performance on the TinyImageNet dataset. As shown in Table~\ref{tinyimgnetresult}, previous Transformer binarization methods have severe accuracy degradation (more than $40\%$) on image classification tasks, which makes it barely usable in real applications. With our proposed method, the performance gap between the binary and full-precision model is greatly narrowed. For models with all weights and activations binarized, our method can improve the accuracy by nearly $20\%$ ($52.21\%$ vs. $32.39\%$ for NesT-T). However, this result is still not ideal. One reason is that the MLP module accounts for a large part of parameters and is hard to compress, as analyzed in Section~\ref{sec:intro}. Therefore, we reserve the activations in MLP modules as full-precision to make it more applicable. The experimental results show that leaving activations unbinarized can greatly reduce the performance degradation caused by binary MLPs ($69.83\%$ vs. $52.21\%$), achieving a better trade-off between accuracy and complexity. In this case, the model size is also reduced and we can still use addition to replace multiplication for more efficient inference. For models with $1W32A$ in MLP modules, the accuracy of our method is also much better than current SOTA methods. Another finding is that the accuracy of binary Swin-T is clearly better than binary NesT-T, while the performance of their full-precision models is similar. This may be attributed to the mask mechanism in the local window attention of Swin, which has the effect of restraining the number of original attentions greater than zero and thus helps the model to binarize. However, for most Transformers with standard self-attention (like NesT), our method brings huge accuracy gains. \subsubsection{Evaluation On ImageNet} We further evaluate the effectiveness of our method on the ImageNet dataset, and the results are shown in Table~\ref{imgnetresult}. To preserve the accuracy, we binarize all weights while leaving the activations in MLP modules full-precision. For Swin-T, our method outperforms previous SOTA by a margin of $2.5\%$, achieving a competitive $70.8\%$ Top-1 accuracy. For NesT-T, where the previous method even fails to converge, our method obtains a $68.7\%$ Top-1 accuracy. The improvement can be attributed to the effective binarization for self-attention modules and the information well preserved from full-precision model. 
The results also demonstrate the feasibility of binary Transformers in real-world visual tasks for the first time. \begin{table}[h] \caption{Comparisons of different network architectures on ImageNet; ``*'' denotes that the model fails to converge. We also list the number of parameters of each model for comparison between different architectures. It is worth noting that \textbf{the attention module} of the Transformer models is under the $1W1A$ configuration.} \label{imgnetresult} \centering \resizebox{\columnwidth }{!}{ \begin{tabular}{cccc} \hline Arch & Method & \begin{tabular}[c]{@{}c@{}}MLP\\ BitWidth (W/A)\end{tabular} & Top-1 (\%) \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}ResNet-18\\ (Params: 11.7M) \end{tabular}} & FP & / & 69.6 \\ & AdaBin\footnotemark[1] & / & 63.1 \\ & IR-Net\footnotemark[2] & / & 66.5 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}ResNet-34\\ (Params: 21.8M) \end{tabular}} & FP & / & 73.3 \\ & AdaBin\footnotemark[1] & / & 66.4 \\ & IR-Net\footnotemark[2] & / & 70.4 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Swin-T\\ (Params: 28.3M)\end{tabular}} & FP & 32/32 & 81.2 \\ & BiBERT & 1/32 & 68.3 \\ & Ours & 1/32 & \textbf{70.8} \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}NesT-T\\ (Params: 17.1M)\end{tabular}} & FP & 32/32 & 81.1 \\ & BiBERT & 1/32 & 0.27* \\ & Ours & 1/32 & \textbf{68.7} \\ \hline \end{tabular}} \end{table} \footnotetext[1]{CNNs with binary weights and activations.} \footnotetext[2]{CNNs with binary weights and full-precision activations.} \subsection{Comparison with Binary CNNs} We list the number of parameters and the accuracy of binary ResNets\cite{he2016deep} in Table~\ref{imgnetresult} so that the two model families can be compared. A CNN is mainly composed of convolutional layers and contains only a small number of $1 \times 1$ convolution layers, so most of the model can be compressed efficiently. The accuracy degradation of ResNet-34 in the $1W32A$ configuration is less than $3\%$ compared with the full-precision model. On the other hand, Transformers also have their advantages. The global receptive field and the attention mechanism provide strong representation capability, which makes the accuracy of the full-precision ViT models much higher than that of ResNets with a similar number of parameters. Since we inherit the information of the pretrained model, a stronger teacher model will undoubtedly help the training of BiViT. For example, even when the attention modules are kept at $1W1A$ (while the MLP modules remain at $1W32A$), BiViT still provides better accuracy than ResNet-34 in the $1W32A$ configuration ($70.8\%$ vs. $70.4\%$). Moreover, BiViT provides a strong baseline for future research on binarizing ViTs, whether for image classification or other vision tasks. \section{Conclusion and Discussion} In this paper, we have proposed to tackle two fundamental challenges with customized solutions for BiViT, and have successfully applied binary Transformers to visual tasks for the first time. Inspired by the long-tailed distribution of softmax attention, we have proposed Softmax-aware Binarization for self-attention, the core module of Transformers, which greatly reduces the quantization error. To preserve information from the pretrained model and enhance the representation ability, we have proposed the Cross-layer Binarization scheme, which decouples the quantization of self-attention and MLPs, and parameterized scaling factors for weight binarization.
Combining the above techniques, our BiViT achieves a significant accuracy improvement on the image classification task, with up to 70.8\% Top-1 accuracy on ImageNet. In the future, we will extend BiViT to more downstream vision tasks such as detection and segmentation. \noindent\textbf{Limitations and societal impact.} The performance gap between BiViT and its full-precision counterparts needs to be further narrowed. Also, many factors that affect the accuracy of binary Transformers have not been studied in this paper, such as binary-friendly Transformer structure design and the impact of soft functions. Our BiViT does not have any potential negative societal impacts. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} PF1B is a user facility at the Institut Max von Laue - Paul Langevin (ILL) in Grenoble, France, for experiments in elementary particle and nuclear physics using polarized or non-polarized cold neutrons. PF1B is located at the end position of the ``ballistic'' $m=2$ super-mirror cold neutron guide H113 with an exit cross section of $60\times 200~mm^2$~\cite{Abe2006nima} (note that the capture flux at the guide exit of $1.35\times 10^{10}~n/cm^2/s$ reported in~\cite{Abe2006nima} has been improved to $2.2\times 10^{10}~n/cm^2/s$ at nominal reactor power by replacing guide sections that had suffered radiation damage and by upgrading the in-pile part). An important component of PF1B is a cold neutron polarizer that has to produce a large-area, $\sim 80\times 80~mm^2$, well-polarized neutron beam over an extended range of neutron wavelengths, $0.3-2.0~nm$. To serve different types of experiments, PF1B has to provide several optimization options, including a maximum total flux of polarized neutrons over a large beam cross section, a maximum flux density of polarized neutrons over a relatively small beam cross section, and ultra-high precision in the knowledge of the polarized-beam properties. While in the first two cases the average polarization can be moderate (typically $P_n > 0.98$), the latter option requires ultra-high polarization ($P_n \geq 0.997$) to minimize systematic uncertainties~\cite{Ves2008prc,Gle2017plb,Goe2007plb,Gag2016prc,Mar2019prl,Kre2005plb}. In order to achieve high polarization levels with reasonable transmission over the full wavelength range, the preferred technology is typically super-mirror (SM) benders~\cite{Mez1977cp,Dra1977jtp,Sch1989pb,Maj1995pb,Mar2007tsf,Kri2008,Mez1989spie}. Ultra-high polarization is difficult to achieve with a single reflection from, or a single transmission through, a polarizing SM, although considerable effort has been invested in that direction \cite{Ple2010nima}. Therefore, the design of the polarizer geometry aims at making most neutrons reflect at least twice on polarizing SMs, with a sufficiently large angle of incidence, when going through the device \cite{Kre2005nima,Pet2016nima}. The previous PF1B polarizer was built using the traditional technology of air-gapped reflection-type polarizing benders. It consisted of $30$ channels of $80~cm$ length and $2~mm$ width. The thickness of the borofloat glass substrates was $0.7~mm$. The Co/Ti/Gd SM coatings~\cite{Sch1989pb,Els1994tsf,Cou2013ILL} had an effective critical velocity of $m=2.8$. The polarizer cross section was $80\times 80~mm^2$, the radius $300~m$, and the applied magnetic field $120~mT$~\cite{Sol2002ILL}. This polarizer was produced in a collaboration between the ILL (SM coatings) and the TU M{\"u}nchen (glass and assembly). It was installed in 2002 downstream of the H113 guide in an effective neutron capture flux of $\sim 2\times 10^{10}~n/cm^2/s$. The measured transmission was $\sim 0.49$ for the ``good'' polarization component. The capture-flux-averaged polarization was $P_n\sim 0.985$. When ultra-high polarization was required, we installed a second polarizer of the same type in the ``X-SM geometry'', thus providing a mean polarization of $P_n =0.997$ and a transmission of $\sim 0.25$ for the ``good'' polarization component~\cite{Kre2005nima}. In this geometry, the reflecting planes in the second polarizer are orthogonal to the reflecting planes in the first one.
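For later comparison with the new device, these numbers can be condensed into the figure of merit $\Lambda = P_n^2 T$ used for statistically limited experiments (see Section~\ref{sec:Adaptability}): combining the rounded transmission and polarization values quoted above gives $\Lambda \approx 0.985^2\times 0.49 \approx 0.48$ for the single previous bender and $\Lambda \approx 0.997^2\times 0.25 \approx 0.25$ for the X-SM configuration, which quantifies the intensity price paid for ultra-high polarization. These are rough illustrative estimates only; the values actually used for comparison are given in Section~\ref{sec:Adaptability}.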
However, during more than 15 years of successful exploitation, this polarizer was irradiated with a very high neutron fluence, which resulted in significant radiation damage to the mirrors' borofloat glass substrates, caused mainly by the charged particles from the reaction $^{10}B(n,\alpha)$ in the glass. It is also strongly activated, mainly due to the presence of Co in the SM coatings, which seriously complicates its handling. In this paper, we present the new, advanced polarizer built for the PF1B instrument at the ILL, with improved polarization and free from the radiation-damage and activation issues. The polarizer design, the choice of substrate and SM coating, and the magnetic housing are described in great detail in our previous publications~\cite{Pet2016nima,Pet2019rsi}. Here, we focus on the full-scale polarizer production and on the results of measurements of its characteristics. \section{\label{sec:Design}Polarizer design} To avoid some drawbacks of the previous polarizer, namely the high activation of Co in the Co/Ti SM coatings and the neutron-induced degradation associated with the $^{10}B(n,\alpha)$ reaction in the borofloat glass substrates, we decided to build a solid-state polarizer with Fe/Si coatings~\cite{Maj1995pb,Kri1998pb,Hog1999pb,Stu2006pb,Big2009ILL,Wil2011itr}. An immediate advantage of a solid-state polarizer is its compactness. It has a more favorable ratio of channel to inter-channel width and allows the design of a magnetic system with better performance. In the traditional C-bender geometry, a solid-state polarizer is built of a bent stack of thin ($150-200~\mu m$) single-crystal Si wafers, coated on both sides with Fe/Si SM coatings terminated by Gd absorbing layers. Each Si plate coated with two reflecting SMs is a spin-dependent guide for neutrons which enter the plate bulk through the entrance edge of the plate. To avoid the direct view (i.e.\ neutron trajectories which do not touch the polarizing coatings), the bending angle $\gamma_C$ should meet the following condition: \begin{equation} \gamma_C\geq 8d/L, \label{eq:gammac} \end{equation} where $d$ and $L$ are the thickness and the full length of the channel. A double-bent polarizer of this type is known as the S-bender~\cite{Stu2006pb,Big2009ILL,Wil2011itr}. In our design of the new PF1B polarizer, we follow the concept proposed and described in detail in our previous publications~\cite{Pet2016nima,Pet2019rsi}. According to this ``advanced'' concept, we replace the single-crystal Si substrates by single-crystal sapphire. Since the neutron-optical potential for sapphire is higher than that for spin-down neutrons in the magnetized Fe of the SM coatings, this choice allows us to avoid the total reflection regime for neutrons of the unwanted spin direction propagating through the substrates. This modification expands the effective polarizer bandwidth into the low-$Q$ region. The polarizer is built of two independent stacks of flat substrate plates of $80\times 25\times 0.18~mm^3$ (i.e.\ $L/2=25~mm$, $d=0.18~mm$), each coated with SMs on both sides. Each plate in the stack is mounted on top of the previous one. The total number of plates in each stack is 440, thus providing a total polarizer cross section of $80\times 80~mm^2$. The two stacks of mirrors are tilted with respect to each other by an angle $\gamma_V$: \begin{equation} \gamma_V\geq 4d/L. \label{eq:gammanu} \end{equation} We denote this type of polarizer the V-bender. Fig.~\ref{fig:Vbender} illustrates the V-bender geometry in comparison with the traditional C-bender geometry.
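As a worked illustration of these conditions for the plate geometry adopted here ($d=0.18~mm$ and $L=50~mm$, i.e.\ two cassettes of length $L/2=25~mm$), the V-bender requires only $\gamma_V \geq 4d/L = 4\times 0.18~mm/50~mm = 14.4~mrad$, whereas a C-bender of the same total length would require $\gamma_C \geq 8d/L = 28.8~mrad$; the tilt angles of $16-18~mrad$ used for the characterization measurements below (Section~\ref{sec:Characterization}) thus satisfy the V-bender condition with a comfortable margin.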
\begin{figure} \centering \includegraphics[width=\columnwidth]{Fig1.png} \caption{\label{fig:Vbender} The V-bender geometry: two stacks of plane-parallel substrate plates (with thickness $d$ and length $L/2$), SM-coated on both sides, are tilted by an angle $\gamma_V$ with respect to each other. To avoid the ``direct view'', the angle has to meet the condition of Eq.~(\ref{eq:gammanu}). Note that the two halves of the V-bender do not need to be in contact; also, the slits do not need to be lined up. The traditional C-bender geometry is indicated by dashed lines for comparison (the internal dashed lines are omitted for better readability).} \end{figure} An important feature of the V-bender is the absence of pronounced Bragg dips in the transmission at particular wavelength values. Such dips were observed in experiments with solid-state S- and C-benders when the angular divergence of the incident neutron beam is comparable to the bending angle~\cite{Stu2006pb,Sha2014nima}. For the reflection of neutrons from a flat perfect crystal, the acceptance angle of Bragg reflection is very small (typically $1-10~\mu rad$), and the corresponding dip would be completely washed out by the angular divergence of the incident beam (typically a few tens of $mrad$) even in the case of an unfortunately chosen crystal orientation. Indeed, these expectations were confirmed experimentally: we observed no dips in the transmission. As mentioned in~\cite{Kat2018pcgcm,Pet2019rsi}, sapphire substrates with a polishing quality sufficient for SM coating are readily available~\cite{Siegert}, and they minimize the substrate bending due to the residual stress present in the coating. The latter is expected to be relevant when considering the geometrical imperfections of the final assembled mirror stack with respect to the ideal one. Together with the neutron-optical properties, these features led us to choose sapphire as the substrate material. \section{Polarizing coating} To exploit the full angular divergence of the H113 ballistic $m=2$ SM guide and to cover the broad wavelength band of $0.3-2.0~nm$, we use the same ``inverse''-scheme $m=3.2$ Fe/SiNx SM coating, consisting of 603 individual layers, which was previously used for the production of a solid-state S-bender~\cite{Stu2006pb,Big2009ILL}. The term ``inverse'' refers to the deposition order of the SM layers, which starts with the thicker layers, as opposed to the sequence used in SMs designed for neutrons incident from the air side. Note that with such a sequence in a solid-state polarizer, the first layer visible to the neutrons is the thickest one, as in the case of air-gapped devices. An absorbing Gd layer also has to be coated on top of the SM so that neutrons which are transmitted through the SM do not get out of the polarizer or into the next plate. In order to guarantee that these neutrons are absorbed rather than reflected, even at low $Q$ values, an anti-reflecting and absorbing Si/Gd multilayer consisting of 41 individual layers and based on the same principle as in Ref.~\cite{Sch1994pb} was designed. The total thickness of Gd is larger than $500~nm$, ensuring that the transmission of non-reflected neutrons through the interface between the plates stacked inside the polarizer is always well below $10^{-3}$ under the operating conditions. The SM coatings were produced in-house by reactive magnetron sputtering~\cite{Hog1999pb,Big2014jpcs}.
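A rough order-of-magnitude check (our own estimate, assuming a $1/v$ scaling of the natural-Gd absorption cross section, $\sigma_a\approx 4.9\times 10^{4}~b$ at $\lambda = 0.18~nm$, and a Gd number density $n\approx 3\times 10^{22}~cm^{-3}$) illustrates why such a thin absorber is sufficient: the non-reflected neutrons cross the Gd layer at glancing angles of order $15~mrad$, so their path length in Gd is $t/\sin\theta\approx 0.5~\mu m/0.015\approx 30~\mu m$, and the corresponding attenuation at $\lambda = 0.5~nm$ is $\exp(-n\sigma_a t/\sin\theta)\sim e^{-12}\approx 10^{-5}$, comfortably below the $10^{-3}$ level quoted above.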
Since the S-bender production~\cite{Stu2006pb,Big2009ILL}, some investigations were made about the magnetic properties of the coatings~\cite{Mar2016nima,Mar2019cry} and the neutron beam depolarization at reflection~\cite{Kla2013pp,Kla2016nima}. The coating process had been optimized further, resulting in magnetically softer multi-layers, i.e.\ requiring a weaker applied magnetic field to be magnetized close to saturation. \begin{figure} \centering \includegraphics[width=\columnwidth]{FigurePlateau.png} \caption{\label{fig:Plateau} Scheme of the deposition tray. The production version contains 48 sapphire plates of dimension $80\times25\times 0.18~mm^3$ each, a witness float glass sample and a witness thick sapphire plate with dimensions $80\times40\times 3~mm^3$ for neutron reflectometry measurements. These 2 additional substrates were placed in the free space (top-right corner). One thin sapphire substrate is highlighted in green, and the inset shows a detail of the triangular-shaped separator, used for masking the mirror edges without reducing the thickness deposited on their faces.} \end{figure} With our in-line sputtering machines, we used the tray shown in Fig.~\ref{fig:Plateau} to coat one face on a set of 48 sapphire substrates at once. In order to prevent coating the edges of the plates (where in particular Gd would reduce the neutron transmission significantly), special care was taken in designing the deposition tray, so that each mirror face is maximally coated without depositing material on the edges. In practice, an area of about $0.3~mm$ wide along the mirror edges was masked for this purpose. A witness float glass sample and a witness sapphire sample of $80\times40\times 3~mm^3$ were coated together with each set. All single-crystal sapphire substrates ($0.18~mm$ plates and $3~mm$ witness plates), with the c-plane parallel to the surface, were from ~\cite{Siegert}. For technical reasons, each coating was made in two steps with two different machines: one for the SM and one for the anti-reflecting absorbing multi-layer. The typical production cycle, for coating a set of 48 plates on both sides, was about one week. Twenty-five such cycles were achieved, spanning about six months, resulting in a total SM coated area of about $4~m^2$. For each coating run, the witness float glass sample was removed from the tray after the SM coating, so that it can be measured by neutron reflectometry in the standard way, with neutrons coming from the ``air'' side. The witness sapphire samples of $3~mm$ thickness underwent both steps and were measured with neutrons entering by the substrate edge, reflecting at the SM from the substrate side. This allowed a systematic control of each coating performance in the same conditions as for the thin plates, which could not be measured by neutron reflectometry. Fig.~\ref{fig:Reflectivity} shows typical spin-dependent reflectivity $R_{+},R_{-}$ measured with our test reflectometer T3 for one of the $3~mm$-thick sapphire witness samples. Most of the features, in particular the low-$Q$ part of the ``$R_{-}$'' curve, are consistent with the simulations presented in ~\cite{Pet2019rsi}. Through the whole production, the measured polarization after single reflection was $>0.985$ in the range $1<m<3$. We also performed more accurate spin-dependent reflectometry measurements at SuperADAM~\cite{Dev2013rsi} equipped with an opaque polarized $^3$He analyzer. 
Fig.~\ref{fig:RvsQSAdam} shows the results of these measurements for all four spin components $R_{++}, R_{+-}, R_{-+}, R_{--}$. \begin{figure} \centering \includegraphics[width=\columnwidth]{T3ReflectometryFleches2.png} \caption{\label{fig:Reflectivity} Spin-dependent reflectivity ($R_{+}=R_{++}+R_{-+}$: red, $R_{-}=R_{--}+R_{+-}$: blue; left axis) measured with the T3 instrument on a $3~mm$ thick sapphire witness sample coated with the Fe/SiNx/Gd SM and the anti-reflecting absorbing layer in a production run. Neutrons with wavelength $0.75~nm$ entered by the substrate edge and were reflected at the SM from the sapphire side. The polarization (P: green; right axis) was calculated from $R_{+}$ and $R_{-}$, applying only a background correction.} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{RvsQSAdam.png} \caption{\label{fig:RvsQSAdam} Spin-dependent reflectivities $R_{++},R_{+-},R_{-+},R_{--}$ measured at the SuperADAM reflectometer with an opaque polarized $^3$He analyzer (analyzing power $>0.999$) and an applied magnetic field of $0.7~T$.} \end{figure} Fig.~\ref{fig:RvsQSAdam} shows that the beam polarization after a single reflection~\cite{Pet2019rsi} is $0.995<P<0.999$ in the same range $1<m<3$, i.e.\ significantly higher than the value $0.985$ measured at T3. We explain this difference by a lower polarization of the incident neutron beam and by imperfections of the spin flipper at the T3 instrument. For a polarizer based on multiple reflections, the resulting polarization is defined by the depolarization at the last reflection~\cite{Pet2019rsi}. Therefore, we also measured at SuperADAM the spin-dependent reflectivity as a function of the applied magnetic field strength, see Fig.~\ref{fig:RvsBSAdam}. In contrast to $R_{++}$ and $R_{--}$, which are field-independent for $B>10~mT$, the reflectivities $R_{+-}$ and $R_{-+}$ decay rapidly for field strengths up to $100~mT$ and continue to decrease slowly at larger fields. This confirms the importance of a high magnetizing field and justifies our magnetic housing with a $0.38~T$ permanent-magnet field~\cite{Pet2019rsi} for the PF1B polarizer. \begin{figure} \centering \includegraphics[width=\columnwidth]{RvsBSAdam.png} \caption{\label{fig:RvsBSAdam} Spin-dependent reflectivities $R_{++},R_{+-},R_{-+},R_{--}$ versus the applied magnetic field, measured at SuperADAM with an opaque polarized $^3$He analyzer (analyzing power $>0.999$); $m=1.92$.} \end{figure} \section{Assembling} \subsection{\label{sec:Control precision}In-situ procedure to control the assembling precision} In solid-state neutron polarizers, the plates with the polarizing mirrors are mounted on top of each other, thus forming a cassette, which is needed in order to cover a significant cross section of the neutron beam. It is usually assumed that all plates are nearly identical and have a plane-parallel geometry. Real plates can differ from plane-parallel as a result of the polishing process and due to residual stress in the coating. Imperfections of the geometry of individual plates, as well as dust particles between them, would result in a scatter of the individual angles between the reflecting surfaces and the incident beam direction. The errors in setting the inclination angle would result in neutron reflection losses and an increased angular divergence of the reflected beam. The width of such a dispersion increases with the number of plates in the cassette according to the law of a ``Gaussian random walk'', see Appendix~\ref{app:RestrictionOnMirrorNumber}.
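The scale of this effect can be visualized with a minimal numerical sketch (our own illustration, which assumes an r.m.s.\ angular error of $1~mrad$ introduced at each stacking step; the actual per-plate errors depend on dust contamination and plate shape):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)
n_plates, sigma_step = 440, 1.0e-3   # plates per cassette, rad per step

# Uncontrolled stacking: each plate inherits the tilt of the plate below,
# so the absolute inclination performs a random walk along the stack.
walks = np.cumsum(rng.normal(0.0, sigma_step, (20000, n_plates)), axis=1)
print("uncontrolled spread of the top plate: %.0f mrad rms"
      % (1e3 * walks[:, -1].std()))      # ~ sqrt(440) * 1 mrad ~ 21 mrad

# Actively controlled stacking: every plate is referenced to the first one
# and is cleaned or replaced whenever it deviates by more than 1 mrad, so
# the absolute inclinations stay bounded at the single-plate level.
controlled = np.clip(rng.normal(0.0, sigma_step, (20000, n_plates)),
                     -1.0e-3, 1.0e-3)
print("controlled spread: %.1f mrad rms" % (1e3 * controlled.std()))
\end{verbatim}
In the uncontrolled case the spread reaches $\sim 20~mrad$ r.m.s., comparable to the critical angle of the coating, whereas the per-plate control described below keeps the spread at the $1~mrad$ level, which is the regime actually measured for the assembled cassettes (Section~\ref{sec:Neutron test}).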
The primary effect of such an angular dispersion is the degradation of the polarizer transmission for the ``good'' spin component, once the width of the dispersion becomes comparable to the critical angle of the polarizer (typically $10-20~mrad$). This mechanism may explain the significant discrepancy between the expected and measured transmission often observed in experiments with solid-state neutron-optical devices~\cite{Sha2014nima}. Some polarization loss may also be attributed to this effect, since such an angular spread may open up neutron trajectories without any reflection. The most straightforward way to minimize the losses of efficiency due to this mechanism would be to improve the precision in setting individual plates and to reduce the number of plates in the cassette. However, the number of plates in the cassette is fixed by the size of the required beam, and setting the inclination angle of individual plates with a precision much better than $\pm1~mrad$ is challenging. An advantage of the rather simple V-bender geometry is the possibility to develop a procedure to actively control the inclination angle of each plate in the cassette, see Fig.~\ref{fig:Scheme}. The idea is to limit the cumulative effect of successive random plate misalignments by making them ``less'' random through deterministic control of, and intervention on, each plate. The stack of assembled plates is illuminated with a narrow laser beam (diameter $1~mm$), and the reflected beam is projected onto the full-frame ($24\times 36~mm$) sensor of a digital photo-camera. Assuming $\boldsymbol{n}$ is the normal to the top mirror of the reference plate and $\boldsymbol{n'}$ is the normal to the top mirror of the inspected plate, which is inclined relative to $\boldsymbol{n}$ by a small angle $\delta$, the shift $\Delta$ of the spot position on the camera sensor is: \begin{equation} \Delta \approx 2\delta~h, \label{eq:Delta} \end{equation} where $h\approx1000~mm$ is the distance between the plates and the sensor. We take the position of the reflected spot from the very first plate as the reference. If we observe that the corresponding angle $\delta$ is outside the accepted tolerance (typically $\pm 1~mrad$), we carefully inspect this plate and usually find a large dust particle. After removing it, the plate is assembled back into the cassette and we continue the procedure. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig3.png} \caption{\label{fig:Scheme} Scheme of the in-house-built optical inclinometer: 1 - laser, 2 - plates in the stack, 3 - full-frame sensor of a digital photo-camera.} \end{figure} We found that, under normal laboratory conditions, about $25\%$ of the plates show an angular deviation of $>1~mrad$. Therefore, we decided to perform all manipulations with the sapphire plates (before and after SM deposition) inside our class-100 laminar-flow box. This solution dramatically improved the yield of ``good'' plates: practically all of them show a small spread of the inclination angle ($<1~mrad$). During the stacking procedure, when a misalignment of $>1~mrad$ with respect to the underlying plate occurs, there is still the possibility to flip the last plate upside down and check whether the misalignment is reduced. In the rare cases where this did not solve the problem, the last plate was replaced by another one. To simplify the polarizer assembling, we decided to use intermediate stacks composed of 25-30 plates.
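In terms of Eq.~(\ref{eq:Delta}), the adopted angular tolerance translates into an easily measurable displacement: for $h\approx 1000~mm$, an inclination error of $\delta = 1~mrad$ shifts the laser spot by $\Delta\approx 2\times 10^{-3}\times 1000~mm = 2~mm$, which is comfortably resolved on the $24\times 36~mm$ camera sensor (this worked example assumes only that the spot centroid can be located to a fraction of a millimetre).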
During assembling, the orientation of each plate is laser-controlled according to the scheme shown in Fig.~\ref{fig:Scheme} and, finally, the positions of all plates in the intermediate stack are fixed with the UV-cured optical glue NOA65~\cite{Tho} applied to the two opposite short sides, see Fig.~\ref{fig:InterStack}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig4.png} \caption{\label{fig:InterStack} An intermediate stack of 25 sapphire plates. The neutron beam is incident on the long side ($80~mm$). The position of each individual plate is laser-controlled according to the scheme shown in Fig.~\ref{fig:Scheme}. UV-cured glue is applied to the two opposite short sides ($25~mm$).} \end{figure} Further assembling of the polarizing cassettes composed of intermediate stacks was also performed in the same ``clean room'', with optical control and under an applied load, see Fig.~\ref{fig:Assembling}. The intermediate stacks were inserted between two optically polished flats made of Borofloat glass and mounted on top of each other. The Borofloat flats and the cassette body, made from a B-Al compound, also serve as a neutron diaphragm absorbing neutrons outside the polarizer aperture. Two pneumatic actuators apply a homogeneous load to the cassette ($\sim 1~bar$) in order to minimize possible gaps between the plates. The fully assembled cassette of plates is fixed with non-magnetic screws. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig5.png} \caption{\label{fig:Assembling} Final assembling of the polarizer cassette. The intermediate stacks are confined between two thick ($25~mm$) Borofloat glass flats coated with Al. A green-line laser beam incident on the upper flat and the reflected beam are registered with a photo-camera. Two pneumatic actuators apply a homogeneous pressure ($\sim 1~bar$) to the cassette.} \end{figure} \subsection{\label{sec:Neutron test}Neutron test of the assembly accuracy} One may suspect that the optical inspection of the assembling described above is valid only locally, within the laser spot size of $\sim 1~mm$; the global slope (averaged over the full plate surface) may be different. Therefore, we also performed neutron tests of the assembly quality at the SuperADAM reflectometer at the ILL~\cite{Dev2013rsi}. A fully assembled cassette of 440 double-side coated plates was installed in the sample position of the instrument. A very narrow neutron beam (width $0.1~mm$, horizontal angular divergence $0.05~mrad$ FWHM, height $60~mm$) with a wavelength of $\lambda =0.5~nm$ was incident on the cassette. After a preliminary alignment, the cassette was tilted by an angle of $\sim 7~mrad$ from the incident beam direction. This angle is close to the value $\theta_d=2d/L=7.2~mrad$ needed to avoid direct transmission through a single cassette of length $L/2$, although a small admixture of neutrons having experienced double and zero reflections is possible due to the geometrical imperfections of the plates. The angular distribution of neutrons having passed through the cassette is projected onto a position-sensitive detector (PSD) with a position resolution of $2.8~mm$ FWHM, installed at a distance $r$ from the cassette (Cassette \#1: $r=2500~mm$, Cassette \#2: $r=3250~mm$). During the experiment, we keep the incident beam position and the PSD position fixed, while the cassette is scanned across the beam with a step of $0.05~mm$. The full range of the cassette position scan is $440\times 0.18~mm \approx 80~mm$.
The data were accumulated in two-dimensional matrices: cassette position versus PSD pixel number, see Fig.~\ref{fig:CassetteScan}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Fig6.png} \caption{\label{fig:CassetteScan} Two-dimensional matrix representing the angular distribution of neutrons reflected by the mirrors of the 440 individual plates in Cassette \#1 (Left) and Cassette \#2 (Right). The solid white line shows the projection on the horizontal axis. See text for details.} \end{figure*} Data for Cassette \#1 were re-binned to the same sample-detector distance $r\approx 3250~mm$ as for Cassette \#2. The bright spots near PSD pixel $\sim 700$ are from neutrons having experienced a single reflection by the mirrors. Much weaker spots near PSD pixel $\sim 870$ are from neutrons having experienced zero reflections. As is common in reflectometry, the distance $\Delta$ between the zero-reflection position and the single-reflection spot is related to the mirror slope $\theta$ as follows: \begin{equation} \Delta \approx 2\theta r. \label{eq:Delta2} \end{equation} Note that for the parameters of our experiment, the positions of the zero-reflection spots are independent of the mirror slope, and the width of the spot is dominated by the PSD's finite resolution, $\sigma_{\rm PSD}\approx 4.25$ pixels, or $\sim 0.37~mrad$. In contrast, the positions of the single-reflection spots are defined by the angles of the individual mirrors. Since the incident beam width is $0.1~mm$, the cassette position step $0.05~mm$, and the plate thickness $\sim 180~\mu m$, the beam often illuminates two adjacent plates simultaneously, resulting in an additional broadening of the spot, or even in a splitting of the spot due to a possible difference of slopes. At the same time, the width of the narrowest spots is driven by the PSD resolution and the mirror waviness. We observed that, due to the mechanical polishing, our sapphire substrates are typically thinner near the edges than in the central region. This difference in thickness is $\sim 6~\mu m$. Reflection of neutrons from the concave surface seen during their propagation through the plate would result in a focusing in space and a defocusing (additional broadening) in angle. From the data shown in Fig.~\ref{fig:CassetteScan}, we found that the typical width of the narrowest spots (where the beam illuminates a single mirror) is very close to $\sigma_{\rm PSD}$, which allows us to conclude that the waviness of the individual mirrors on our sapphire substrates plays a minor role in the formation of the angular distribution of the reflected beam. By projecting the matrix data on the horizontal axis, we obtain the effective angular distribution of neutrons reflected by the mirrors of all 440 plates. These angular distributions, measured for both cassettes, are shown in Fig.~\ref{fig:CassetteScan} with solid white lines. By fitting the single-reflection peak near pixel $\sim 700$ with a Gaussian, we obtain estimates of the standard deviations $\sigma_\theta^1$ and $\sigma_\theta^2$ of the distribution of individual slopes in Cassettes \#1 and \#2: \begin{equation} \sigma_\theta^1=0.62 ~ mrad,\quad \sigma_\theta^2=1.02 ~ mrad. \label{eq:sigmas} \end{equation} As follows from Eq.~(\ref{eq:sigmas}), Cassette \#2 shows a significantly broader distribution of mirror slopes. This fact may be explained by the difference in the assembling procedure. Indeed, Cassette \#1 was assembled from intermediate stacks in a single run.
In contrast, while mounting Cassette \#2, we first assembled and glued two larger stacks, each composed of 4 small stacks of 25 mirrors (i.e.\ 100 mirrors each). The remaining 240 mirrors were assembled in one run from small intermediate stacks. In Fig.~\ref{fig:CassetteScan}, positions 1-400 correspond to the first stack of 100 mirrors, positions 401-800 to the second stack of 100 mirrors, and positions 801-1600 to the rest, composed of small stacks. In this two-step procedure, an error in the relative positioning of the second big stack would apply to all 100 mirrors in this stack. This fact may explain the asymmetric form of the slope distribution for Cassette \#2, or even the tendency toward splitting visible in Fig.~\ref{fig:CassetteScan}. In spite of this difference, the width of the slope distribution is in good agreement with the tolerance window of $\pm 1~mrad$ adopted during the assembling of both cassettes. The angular distribution of individual mirror slopes in the cassette, $\sigma_\theta$, may be translated into a broadening of the angular divergence of the incident neutron beam (given here to second order): \begin{equation} {\rm FWHM}_{\rm eff} \approx {\rm FWHM}_{\rm in} \left( 1+\frac{1}{2}\left( \frac{2.35\sigma_\theta}{{\rm FWHM}_{\rm in}}\right) ^2 \right). \label{eq:FWHA} \end{equation} Here, ${\rm FWHM}_{\rm in}$ is the angular divergence of the incident beam. For the PF1B guide, the incident beam divergence depends on the neutron wavelength, ${\rm FWHM}_{\rm in}\propto \lambda$, while $\sigma_\theta$ is, as a purely geometrical effect, independent of $\lambda$. Therefore, we expect the effective beam broadening from the polarizer to be stronger for short wavelengths. The relevant wavelength band of the PF1B polarizer is $0.3-2.0~nm$. For neutrons with $\lambda =0.3~nm$, ${\rm FWHM}_{\rm PF1B}\approx 2 m \theta_{Ni} \lambda \approx 20~mrad$ (with $m = 2$ of the H113 SM coating and the critical angle of Ni per unit wavelength, $\theta_{Ni} =17.3~mrad/nm$~\cite{Hay1978jpe}). The additional broadening due to the dispersion of mirror slopes, $\sigma_\theta\approx 1~mrad$, is then expected to be about $0.7\%$ for Cassette \#2, smaller for Cassette \#1, and even smaller for neutrons with longer wavelengths. Comparing our results (Eq.~(\ref{eq:sigmas})) to the value $\sigma_{\rm coll}\approx 11~mrad$ measured for a stack of 200 Si plates of $0.2~mm$ thickness each, assembled without any control of the angular orientation of the individual plates~\cite{Pet2019rsi}, demonstrates the importance and effectiveness of controlling the alignment of individual plates. In that device, the plates are coated on both sides with Ti/Gd multi-layers and serve as a collimator in a solid-state Fermi chopper. Since the corresponding angular spread is comparable to the divergence after a neutron guide, such a collimator causes both an imperfect collimation, because of the transmission of neutrons outside the design divergence of the collimator, and a significant reduction of the on-axis beam intensity, because of the absorption of low-divergence neutrons in the coating of misaligned plates. \subsection{\label{sec:Final assembling}Final assembling of the polarizer} The two stacks of polarizing mirrors, Cassettes \#1 and \#2, are installed into a mechanical driver one after the other, see Fig.~\ref{fig:FinalAssembly}. The driver allows remote control of each stack's slope angle with respect to the direction of the neutron beam.
The driver rotates both cassettes together around axis \#1 (centered at Cassette \#1) and separately Cassette \#2 around axis \#2 (centered at Cassette \#2). The angle between the two cassettes corresponds to the tilt angle $\gamma$, see Fig.~\ref{fig:Vbender}. Here and below we denote this angle $\gamma$, not $\gamma_V$, as we discuss only the V-bender below. Then this assembly was inserted into the opening of the magnetic housing shown in Fig.~\ref{fig:Magnet}, Left. It is made from permanent magnets (for the details, see Ref. ~\cite{Pet2019rsi}) and provides a very homogeneous vertical field magnetizing the polarizer, with the transversal component $B_x/B_z<0.005$ in most of the volume occupied by the polarizing mirrors and $B_z\approx 0.38~T$. Finally, neutron shielding composed from boron nitride ceramics was mounted on both sides of the cassettes in order to absorb neutrons beyond the aperture of the polarizing mirrors, see Fig.~\ref{fig:Magnet}, right. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig7.png} \caption{\label{fig:FinalAssembly} The two polarizing cassettes mounted on the motorized mechanical driver. The motors and encoders are not visible, as they are placed outside the lead shielding where the polarizer is installed. From the motor axes, two ribbed shafts go through holes in the shielding and connect to the mechanics via the two brass hubs visible on the photo.} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{Fig8.png} \caption{\label{fig:Magnet} Left: Sketch of the magnetic housing for the SM polarizer of cold neutrons. Red color indicates the South poles and dense blue color the North poles of the NdFeB magnets~ \cite{Pet2019rsi}. Right: The fully assembled new PF1B polarizer composed of the magnet, the motorized mechanical insert with two polarizing cassettes, and the neutron shield built from boron nitride ceramics. The stacks were illuminated from behind and the light going through the sapphire plates can be seen.} \end{figure*} \section{\label{sec:Characterization}Characterization of the new polarizer} \subsection{\label{sec:Setup}Experimental setup} The fully assembled polarizer with its magnetic housing was installed into the lead shielding in the PF1B casemate, downstream the H113 neutron guide, see Fig.~\ref{fig:Setup}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Fig9.png} \caption{\label{fig:Setup} Scheme of the experimental setup for the characterization of the new polarizer at PF1B.} \end{figure*} An aperture of $70\times 70~mm^2$ was installed at the exit of the H113 neutron guide (where only $60~mm$ width are illuminated with neutrons). In order to measure time-of-flight (ToF) spectra, a neutron beam chopper with a horizontal slit of $60~mm$ width and $5~mm$ height was installed at a distance of $565~mm$ in front of the polarizer. An adiabatic fast passage (AFP) neutron spin flipper with a flipping efficiency of $f>0.999$ ~\cite{Kre2005nima} was inserted in the space between the lead housing and the casemate wall. The stray field from the polarizer magnetic housing, the static magnetic field from the AFP flipper, a vertical magnetic field installed in the casemate window, and the magnetic field of the ``Magic box'' ~\cite{Pet2006nima} constitute the guiding field needed to transport adiabatically the neutron polarization. The ``Magic box'' also serves to conserve or flip the $^3$He polarization of the spin filter cell used for polarization analysis. 
Both the ``Magic box'' and the neutron detector at its exit were installed on a motorized table in order to allow a horizontal scan of the beam. Neutron ToF measurements without the spin filter cell were performed with a low-efficiency ($\sim 5\cdot 10^{-5}$) $^3$He detector. At such a low efficiency, the detection probability is weighted with the $^3$He capture cross section, which grows linearly with the neutron wavelength $\lambda$. Measurements with the spin filter cell installed were performed with a detector of $\sim 3$\% efficiency. Where necessary, the height of the aperture in front of the detector was adapted to keep the dead-time correction small. In this experiment, we did not flip the neutron beam polarization and used the AFP flipper only to provide a static guiding field. Instead, we performed the polarization analysis by flipping the polarization of the gas in the $^3$He spin filter cell~\cite{Bab2007pb} (polarization losses per flip $\delta <10^{-5}$). To align the polarizer relative to the beam direction, we first set both cassettes to be parallel to each other. Then we installed a $5~mm$ wide diaphragm behind the chopper, thus reducing the beam cross section to $5\times 5~mm^2$. A large-area neutron monitor was mounted just behind the polarizer exit to capture the full angular divergence of the incoming beam. In this configuration, we rotate both cassettes around axis \#1 using the motorized driver \#1 (from $-1.5^\circ$ to $+1.5^\circ$, in steps of $0.1^\circ$). The angular position with the maximum count rate corresponds to both cassettes being parallel to the incoming neutron beam. Then, we scan the angular position of Cassette \#2, keeping the position of Cassette \#1 unchanged, in order to correct for a potential initial misalignment between the cassettes. Again, the maximum count rate corresponds to the second cassette being parallel to the first one and both being parallel to the incident beam. In the last step, Cassette \#2 was tilted by the angle $\gamma\approx 5d/L\approx 18~mrad$ to prevent the ``direct view'', see Fig.~\ref{fig:Vbender}. \subsection{\label{sec:AngularDistribution}Angular distribution} Since the polarizer reflection plane is horizontal, the polarizer preserves the vertical angular distribution in the incident neutron beam and modifies only the horizontal one. The latter was measured by installing a small pinhole, $5\times 5~mm^2$, at a distance of $50~mm$ behind the chopper and measuring the count rate at different detector positions across the beam, far away from the pinhole, using a small-aperture detector ($5~mm$ wide). In this measurement, the neutron detector was installed behind the ``Magic box'', at a distance of $3115~mm$ from the chopper. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig10.png} \caption{\label{fig:ThreeHills}Angular distribution of neutrons in the horizontal plane behind the polarizer, as measured with small apertures. The labels in the format ``a-b'' denote the numbers of reflections in Cassettes \#1 and \#2, respectively. Negative angles correspond to a detector displacement in the direction of bending with respect to the direct beam.} \end{figure} Fig.~\ref{fig:ThreeHills} shows the result of the angular scan for the polarizer tilt angle $\gamma\approx 5d/L$. Contrary to the commonly used C-bender, which shows a continuous angular distribution of reflected neutrons, the V-bender shows a discrete angular distribution for sufficiently large tilt angles.
Each peak contains contributions from classes of neutron trajectories with a certain number of reflections in the first and in the second cassette. The labels in Fig.~\ref{fig:ThreeHills} denote the corresponding reflection numbers. For example, the label \mbox{1-1} corresponds to Garland trajectories with a single reflection in Cassette \#1 and a single reflection in Cassette \#2. The label \mbox{0-2} represents neutrons with Zig-Zag trajectories in Cassette \#2. Note that the position of the peak representing Zig-Zag \mbox{0-2} trajectories is independent of the tilt angle $\gamma$ and is close to the direction of the incident beam, while the positions of the other peaks strongly depend on $\gamma$. The solid line shows the result of multi-peak fitting with a Gaussian form. Trajectories with multiple reflections are suppressed by reflection losses, $R<100\%$, beyond the total reflection regime. The angular distribution measured with a point-like aperture on the source (in front of the polarizer as in the experiment or equivalently at the polarizer exit) has to be distinguished from the flux distribution $\Phi(x)$ measured for a large source: \begin{equation} \Phi(x)=\int\int \frac{\lambda}{\lambda_{th}}B{\rm d}\Omega {\rm d} \lambda, \label{eq:Phi} \end{equation} where $B$ is the brightness of the neutron source ~\cite{Abe2006nima} and $\lambda_{\rm th}=0.18~nm$ is, by convention, the wavelength at the most probable velocity $v_0=2200~m/s$ in a thermal Maxwellian spectrum at the neutron temperature of $300~K$. Only in the ``near zone'' where the position splitting due to different angles in the beam is much smaller than the source size, a position scan provides a true flux density distribution. In contrast, a position scan in the ``far zone'' reproduces an angular distribution similar to Fig.~\ref{fig:ThreeHills}. This fact allows to profit from the full intensity of the beam in the ``near zone'' and to split the beam in well-collimated beams ($\sigma_\theta\approx 2.5~mrad$) in the ``far zone'' without any additional collimator. \subsection{\label{sec:Transmission}Polarizer transmission} We used two independent methods to measure the integral transmission of the polarizer (integrated over all neutron wavelengths and angles). In the ToF method, we keep the chopper with its horizontal slit of $60\times 5~mm^2$ in place and compare intensities measured with the polarizer in and out of the beam (by translating the lead-shielding table holding the polarizer). The chopper slit is smaller than the polarizer of $80\times 80~mm^2$. Therefore, most of the neutrons passing through the chopper also hit the polarizer entrance (no angular collimation other than that from the width of $60~mm$ of the H113 guide was imposed in the horizontal plane). The $^3$He monitor detector (with an aperture of $20~mm$ width and $30~mm$ height) was installed behind the ``Magic box'', see Fig.~\ref{fig:Setup}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig11.png} \caption{\label{fig:HorizontalProfiles}Horizontal profiles of neutron capture flux measured at the distance of $3115~mm$ downstream the chopper: without polarizer (triangles) and downstream the polarizer for two different tilt angles, $\gamma\approx 4.5d/L$ (rectangles) and $\gamma\approx 5d/L$ (circles). 
Data with the polarizer in place have been scaled by a factor of 5.} \end{figure} We performed a horizontal scan of the beam by moving the detector with a step size of $20~mm$, corresponding to the width of the aperture, thus precisely mapping the horizontal axis, and measured ToF spectra at each detector position. Summing-up all channels in the ToF spectra we obtain the neutron count rate profiles shown in Fig.~\ref{fig:HorizontalProfiles}. Note that the data with polarizer in place were multiplied by a factor of 5. Integrating these data over all positions of the detector we found the polarizer integral transmission for the ``good'' spin component (i.e.\ relative to $1/2$ of the intensity of the unpolarized beam): \begin{equation} T_1=0.35~\text{for}~\gamma=4.5d/L~\text{, and}~ T_2=0.31~\text{for}~\gamma=5d/L, \label{eq:Trans} \end{equation} where $\gamma$ is the polarizer tilt angle defined in Fig.~\ref{fig:Vbender}. Using the same data without integration over all neutron wavelengths, we calculate the wavelength spectra of transmitted neutrons for the two polarizer tilt angles, see the experimental points with error bars in Fig.~\ref{fig:Comparison}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig12.png} \caption{\label{fig:Comparison} Comparison of the polarizer transmission for the ``good'' spin component measured at PF1B (points with error bars) and that simulated using a MC code (curves). The dashed line is obtained assuming a perfect alignment of the polarizer and reflectivity curve \#1 of Fig.~\ref{fig:Reflectivities}. The solid line corresponds to a small offset in $\gamma$, $\gamma=4.85d/L$, and reflectivity curve \#2 of Fig.~\ref{fig:Reflectivities}.} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig13.png} \caption{\label{fig:Reflectivities} Reflectivity curves of the SM coating for the two neutron spin components, ($R^+$, $R^-$), propagating through a Sapphire substrate. The blue curve \#1 is from simulations performed in ~\cite{Pet2016nima,Pet2019rsi}. The black curve \#2 represents a parametrization ~\cite{Cla1997pb} with adjusted parameters to better match the experimental data in Fig.~\ref{fig:Performance}. See text for details.} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig14.png} \caption{\label{fig:Performance} Comparison of the polarizer performance for selected peaks in the output angular distribution measured at PF1B. Red color corresponds to neutrons with Zig-Zag trajectories, \mbox{0-2}, while blue color corresponds to Garland ones, \mbox{1-1}. Solid lines show a prediction of MC simulations. The decrease of measured polarization for short wavelengths, $\lambda <0.3~nm$, is due to the decrease of the $^3$He analyser opacity (which is shifted to longer wavelengths by the finite ToF resolution).} \end{figure} First of all, we underline the excellent agreement between the experimental results and the results expected from simulation in the vicinity of the short wavelength cut-off. However, for longer wavelengths, the experimental transmission is systematically lower than the MC prediction; the relative difference is $\sim 10\%$. We do not know exactly the reason for this disagreement. 
Most probably, it is due to uncertainties in our knowledge of the SM reflectivity curves of all 900 double-side coated plates, to a dispersion of the mirrors' $m$-values, and to an uncertainty in the angular alignment of the polarizer cassettes (the zero position, found by searching for the maximum of the transmitted intensity, has a finite precision). For example, a simulation performed with the slightly modified reflectivity curve shown by line \#2 in Fig.~\ref{fig:Reflectivities} and with a small offset in the tilt angle, $\sim 0.5~mrad$, gives a much better agreement, see the black solid line in Fig.~\ref{fig:Comparison}. We also confirm that, contrary to solid-state C-benders and S-benders, our V-bender does not show any Bragg dips in the transmitted wavelength spectra. In the second method of evaluating the polarizer transmission, we measured the neutron capture flux in front of and behind the polarizer, without any collimation other than the aperture at the exit of the H113 guide, by activating thin gold foils~\cite{Als1967nima} mounted on the polarizer entrance and exit windows, respectively. The results for the 5 foil positions arranged in a cross are given in Table~\ref{tab:Fluxes}. \begin{table*} \caption{\label{tab:Fluxes} Neutron capture fluxes in front of and behind the polarizer, in units of $10^9 n/cm^2/s$. We performed all flux measurements at a reactor power of $56~MW$. The Mean values have been scaled to the nominal power of $58.3~MW$. The transmission is calculated for the ``good'' spin component ($1/2$ of the capture flux in front of the polarizer). The flux measured at the end of a collimation system frequently used for neutron decay experiments at PF1B is also given. This collimation system consists of a series of apertures of $6\times 6~cm^2$ installed in a vacuum tube, the first one just after the lead shield at $1.47~m$ and the last one at $4.87~m$ behind the guide exit, compare Fig.~\ref{fig:Setup}.} \begin{ruledtabular} \begin{tabular}{cccccccc} Position:& Top & Centre & Bottom & Left & Right & Mean & Position, distance $x$ from guide exit\\ Flux:& 20.1 & 20.4 & 21.5 & 19.1 & 20.1 & 21.1 & Polarizer entrance, $x=0.87~m$ \\ Flux:& 3.65 & 3.53 & 3.63 & 3.21 & 3.12 & 3.57 & Polarizer exit, $x=1.07~m$ \\ Transmission [\%]: & 36.4 & 34.6 & 33.8 & 33.6 & 31.0 & 33.8 & \\\hline\hline Flux:& 0.59 & 0.64 & 0.61 & 0.63 & 0.66 & 0.65 & End of collimation, $x=4.87~m$ \\ \end{tabular} \end{ruledtabular} \end{table*} The data in Table~\ref{tab:Fluxes} show good transmission homogeneity both in the vertical and the horizontal direction, as well as reasonable agreement with the results of the ToF method. The small difference between the results of the two methods may be explained by the difference in the angular acceptance: the result of Eq.~(\ref{eq:Trans}) was obtained using nearly the total horizontal angular divergence of the beam, which is not exactly the case for the gold-foil activation results shown in Table~\ref{tab:Fluxes}. From the measurements of the integral transmission of the new polarizer, as well as from the measurements of the transmission as a function of neutron wavelength, we underline the good agreement between the MC simulations and the experimental data, which is not often the case for this kind of polarizer~\cite{Sha2014nima}. \subsection{\label{sec:Power}Polarization performance} The polarization of the neutron beam downstream of the polarizer was measured using the setup shown in Fig.~\ref{fig:Setup}.
The only difference compared with the transmission experiment is the presence of a cell with Si windows, filled with polarized $^3$He gas (length of the gas column: $15~cm$), mounted inside the ``Magic box'' in order to preserve the polarization of the $^3$He. The polarized $^3$He was produced by metastability-exchange optical pumping (MEOP) using the ILL filling station TYREX~\cite{And2005pb,Pet2006pb}. To minimize possible systematic uncertainties in the measurement of the neutron polarization, we used the method of opaque cells~\cite{Zim1999plb}. The $^3$He polarization, measured optically~\cite{Big1992jp} on TYREX, was $0.75$. The polarization of the neutron beam behind the polarizer was analyzed by means of RF flipping of the polarization of the $^3$He gas in the analyzer cell~\cite{Bab2007pb}. The loss in $^3$He polarization per single flip was $<10^{-5}$. To cover the neutron wavelength range of interest, $0.3-2.0~nm$, we used the following set of $^3$He pressures in the cell: $0.51,~0.82,~2.2~bar$. This set of ``opaque'' $^3$He cells provides a $>0.999$ analyzing power for neutron wavelengths above $1.0,~0.6$, and $0.22~nm$, respectively. After filling the cell with polarized $^3$He, it is inserted into a compact magnetic transport system~\cite{Hay1978jpe} and transported to PF1B, where it is installed in the ``Magic box''. The spin-relaxation time constant for the $^3$He gas in the cell was longer than $200~h$~\cite{Pet2006pb}. Both applied techniques (opaque polarized $^3$He analyzer and in-situ RF flipping of the $^3$He spin state) ensure a $>0.999$ analyzing power for the neutron spin analysis, without any corrections. \subsubsection{\label{sec:FarZone}Far zone} As mentioned in Section~\ref{sec:AngularDistribution}, the neutron intensity distribution across the beam in the far zone resolves the angular distribution present after the polarizer. Therefore, we decided to measure the polarization for the two most intense peaks in this angular distribution. These peaks correspond to the Garland and Zig-Zag trajectories in the polarizer, marked as \mbox{1-1} and \mbox{0-2} in Fig.~\ref{fig:ThreeHills}, respectively. With a small aperture, $5\times 5~mm^2$, installed just behind the chopper, we measured the neutron angular distribution by means of a detector position scan across the beam at a distance of $3.15~m$ from the chopper. The full range of the scan was $200~mm$ with a step size of $20~mm$; this range corresponds to a beam divergence of $3.7^\circ$. At each point of the scan, we measured ToF spectra for the ``white'' (high-transmission) spin state of the $^3$He analyzer. With a horizontal detector aperture of $20~mm$, we were able to fully cover the selected peaks. Finally, we centered the detector on each of the two most intense beams in turn and measured ToF spectra for the two spin states of the analyzer. The measurements were performed using the loop \{``white'', ``black'', ``black'', ``white''\} to minimize possible systematic effects associated with the slow decay of the $^3$He polarization. To partly compensate for the very different count rates for the two spin states of the analyzer (the raw flipping ratio is $\sim 10^3$), we used very different exposure times for the analyzer states: \{$30~s$, $1800~s$, $1800~s$, $30~s$\}. Sufficient statistics was accumulated by repeating this sequence for $10-20~h$. In order to preserve a high efficiency of the $^3$He analyzer cell, we replaced the cell with a freshly filled one every $24-48~h$.
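The order of magnitude of the resulting beam polarization can be read directly from the raw flipping ratio quoted above: in the idealized limit of a fully opaque analyzer and a lossless $^3$He spin flip, and neglecting backgrounds and the slightly different cell transmissions for the two states, $R=N_{\rm white}/N_{\rm black}=(1+P_n)/(1-P_n)$, so that $R\sim 10^3$ corresponds to $P_n=(R-1)/(R+1)\approx 0.998$, consistent with the values reported below. The actual analysis, of course, uses the measured ToF spectra and the known cell opacities.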
Note that the neutron transmission of the spin filter cell may change significantly over $24-48~h$, whereas the analyzing power remains stable in the region of interest, where it is in saturation ($A\rightarrow 1$). The measured polarizations (rectangular points) for the Zig-Zag \mbox{0-2} (red points) and Garland \mbox{1-1} (blue points) trajectories are shown in Fig.~\ref{fig:Performance}. First of all, we note the excellent polarization for both selected peaks. For the neutron wavelength band of $0.3-0.6~nm$, corresponding to the intensity maximum of the polarized beam, the measured polarization reaches the value of $P_n\approx 0.999$. The polarization values averaged over the full transmitted spectra are $P_n\approx 0.9981(1)$ for the \mbox{0-2} Zig-Zag trajectories and $P_n\approx 0.9980(1)$ for the \mbox{1-1} Garland trajectories, where only the statistical uncertainties are given. As expected, the mean reflection angle for the Zig-Zag trajectories is higher and, therefore, the cut-off is at a longer wavelength than for the Garland trajectories. This feature of the Zig-Zag trajectories also explains the practically constant polarization (red points) over the full wavelength band $0.3-1.9~nm$. \subsubsection{\label{sec:NearZone}Near zone} In the near zone, the value of interest is the polarization averaged over the full beam (fully illuminated polarizer and all angles of transmitted neutrons). Since the polarizer does not modify the neutron trajectories in the vertical plane, we do not expect any variation of the neutron polarization in the vertical direction. Therefore, we used the full horizontal aperture of the chopper ($60\times5~mm^2$) and measured ToF spectra for the two spin states across the full transmitted beam in the horizontal plane. In order to precisely map the beam over its width of $200~mm$, it was scanned with a step size corresponding to the width of the detector aperture, $20~mm$.
The V-shape design of the polarizer and the motorization provide the unique opportunity to control remotely the polarizer orientation relative to the incident neutron beam as well as the tilt angle $\gamma$ between the polarizing cassettes (analogous to the bending angle of a classical C-bender). The commonly accepted criterion for choosing the value of the tilt angle is the critical angle $\gamma_c$ which just prohibits the ``direct view''. In other words, it is the minimal angle which guarantees the absence of neutron trajectories without collisions with the polarizer reflecting mirrors. One may ask the question: is this angle optimal for all types of experiments with polarized neutrons? Obviously, this is not the case. Indeed, for a long instrument downstream of the polarizer, one is interested in the on-axis value $\partial_\Omega\Phi$ (where $\Phi$ is the neutron flux density in the beam). An example is given by the PERKEO~II experiments~\cite{Kre2005plb,Mund:2012fq}, which used a well-collimated beam with rather low divergence acceptance. In contrast, for a short instrument, which accepts a high beam divergence, the quantity of interest is the integral flux density $\int_A\Phi\,{\rm d}A$. These two situations correspond to the far and near zones behind the polarizer. Concerning systematic effects, a very wide class of experiments is statistically limited, so that systematic uncertainties associated with a spatial or angular non-uniformity of the polarization are not dominant~\cite{Ves2008prc,Gle2017plb,Goe2007plb,Gag2016prc}. For this class of experiments, the so-called Figure-of-Merit (FoM) $\Lambda=P^2T$ is the quantity of interest. The bending angle $\gamma_c$ defined above does not maximize $\Lambda$. On the other hand, experiments which are extremely sensitive to the polarization distribution over the beam cross section or over the emitting angle profit from an ultra-high beam polarization which leaves no room for noticeable systematic uncertainties. The data shown in Fig.~\ref{fig:ComparisonBis} were measured for two different tilt angles $\gamma=4.5d/L$ and $\gamma=5d/L$, which are $12\%$ and $20\%$ higher than $\gamma_c=4d/L$ of the ``no direct view'' condition. The lower the angle $\gamma$, the lower the beam polarization and the higher the polarizer transmission, and hence the beam flux density $\Phi$. This typical trade-off implies the existence of an optimal angle $\gamma$ which maximizes the FoM value. There is no reason to expect this optimal angle to coincide with the value resulting from the ``no direct view'' condition. With the values $P_{\rm ave}$ and $T_{\rm ave}$ shown in Fig.~\ref{fig:ComparisonBis}, we arrive at the following values of the $\Lambda$ parameter: for $\gamma=4.5d/L$, $\Lambda=0.346$, and for $\gamma=5d/L$, $\Lambda=0.312$. This means that an even lower value of $\gamma$ is required to maximize the parameter $\Lambda$. Note that the tilt angle $\gamma=5d/L$ was chosen to provide an ultra-high beam polarization, corresponding to what was achieved with the X-SM geometry of two of the previous PF1B benders, while improving the FoM value substantially compared to $\Lambda_{\rm X-SM}=0.24$ of the X-SM geometry. Since measurements of the polarizer transmission and especially of the polarization are time consuming, we did not perform a full scan over the tilt angle. Instead, we tried to shed light on this problem using the results of MC ray tracing, see Fig.~\ref{fig:Merit}.
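As a simple consistency check of these numbers (not an independent measurement), inverting the definition of the FoM gives the average transmissions behind the quoted values,
\begin{equation*}
T_{\rm ave}\simeq\frac{\Lambda}{P_{\rm ave}^{2}}\approx\frac{0.346}{0.9960^{2}}\approx 0.35 \quad(\gamma=4.5\,d/L),\qquad T_{\rm ave}\approx\frac{0.312}{0.9974^{2}}\approx 0.31 \quad(\gamma=5\,d/L),
\end{equation*}
which illustrates the polarization--transmission trade-off discussed above.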
\begin{figure} \centering \includegraphics[width=\columnwidth]{Fig16.png} \caption{\label{fig:Merit} Figure-of-Merit $\Lambda$ (blue, left axis) and polarization (red, right axis) as a function of the polarizer tilt angle $\gamma$ in units of $d/L$. The solid lines show results of MC simulations. The points mark our experimental results for $\Lambda$ (green: transmission from gold foil activation, black: transmission from ToF data). Note that the experimental points are for tilt angles well above the ``no direct view'' condition, where the performance of the real device, in particular its polarization, is rather insensitive to small geometrical imperfections. For tilt angles in the transition region these imperfections have to be taken into account.} \end{figure} One can see that for the V-bender geometry, the maximum of $\Lambda$ (blue solid line) is reached for tilt angles of about $\gamma=2.5-3.0d/L$, which is far below the value $\gamma=4.0d/L$ required to avoid the ``direct view''. Comparing the $\Lambda$ curve in Fig.~\ref{fig:Merit} with the value $\Lambda=0.477$ measured for the previous PF1B polarizer~\cite{Sol2002ILL}, we conclude that the new polarizer at a tilt angle $\gamma\approx 2.75d/L$ delivers the same Figure-of-Merit $\Lambda$ as the previous one. This fact opens the possibility of optimizing the polarized beam for an extremely broad class of possible experiments using the same polarizing device: either for ultra-high polarization, or for the highest Figure-of-Merit. More generally, these results show that polarizing devices similar to the one presented here could advantageously replace conventional benders in other types of neutron instruments, even for experiments that are typically flux-limited but do not require the highest level of polarization. Such devices offer the possibility of tuning the polarizer performance for a particular experiment depending on its critical requirement (polarization or transmission). More specifically, for neutron scattering experiments where only a limited divergence can be used, one could select only the output peak \mbox{0-2} in Fig.~\ref{fig:ThreeHills}, where the intensity corresponding to ``zig-zag'' neutron trajectories is concentrated, close to the incident beam direction, with a polarization of $0.995-0.999$ over the whole wavelength range (Fig.~\ref{fig:Performance}). By contrast, a C-shaped bender with similar parameters would deflect the whole beam more strongly, giving a continuous outgoing angular distribution~\cite{Soy1995pb} whose total width would be comparable to the present V-bender case~\cite{Pet:un}. The polarizing device could be used on either continuous reactor-based sources or pulsed sources (e.g.\ spallation), as our ToF measurements show that it can be operated over a broad wavelength range. Another application of this kind of device would be to precisely analyze the beam polarization at a given instrument, without ambiguity and without sophisticated data treatment (assuming that the beam divergence is smaller than the acceptance of the analyzer). Due to the high analyzing efficiency, it could be used in the same way as an opaque $^3$He spin filter cell~\cite{Zim1999plb}, in case the latter is not available or applicable and provided that an accuracy of a few times $10^{-3}$ is sufficient. A device with a smaller beam cross section, easier to install on most instruments, could be built and made available on demand.
\section{\label{sec:Conclusion} Conclusion} A new type of solid-state polarizer was built entirely in-house at the ILL for the PF1B cold neutron beam facility. The polarizer was installed in the PF1B casemate and tested under real conditions. For the tilt angle $\gamma=5 d/L$, in the near zone, the downstream polarization, averaged over the capture spectrum for the full wavelength band of $0.3-2.0~nm$, reaches a record measured value of $P_n=0.997(1)$, with a mean transmission for the ``good'' spin component of $>0.31$. In the far zone, the polarization is $P_n>0.998$, and it is practically independent of the neutron wavelength within the wavelength band $\lambda=0.3-1.0~nm$, which contains $0.97$ of the total flux. For longer wavelengths, the polarization shows a very slow decrease towards $0.995$ at the wavelength $\lambda=2~nm$. The polarizer allows remote control of its geometry (the tilt angle $\gamma$), which opens a unique option to deliver optimal conditions for an extremely broad class of possible experiments using the same polarizing device: either ultra-high polarization or optimal Figure-of-Merit, or the option to adjust the mean take-off angle. To our knowledge, no cold neutron polarizing device with similar performance confirmed by measurements has been reported so far. The polarizer is based on a series of innovations in design and fabrication in the following domains: choice of the substrate material, SM and anti-reflecting multilayer coatings, strength and homogeneity of the magnetizing field, and precision of the assembly process. The polarizer has been in use for user experiments since the last reactor cycle of 2020. \begin{acknowledgments} These measurements were performed at the instruments PF1B, SuperADAM, D17, and T13 at the ILL. We express our gratitude to our technicians Didier Berruyer, Pascal Mouveau and Nicolas Thiery, who produced many technical components for the experiments, to Nicolas Surget and Benjamin Sornin for the new control system of the SM coating machine, and to Guillaume Delphin, Amandine Vittoz, Florian Philit and Vincent Gaignon for the production of the SMs by magnetron sputtering. We thank Thomas Saerbeck for useful transmission and reflectivity measurements of substrate materials. \end{acknowledgments} \section*{Data Availability Statement} The data are openly available in a public repository that issues datasets with DOIs: doi:10.5291/ILL-DATA.TEST-3099, doi:10.5291/ILL-DATA.3-07-366, doi:10.5291/ILL-DATA.TEST-2541. \section{References}
\section{Introduction}\label{sec:intro} The aim of this paper is to construct and understand subgame-perfect equilibria in symmetric stochastic timing games, which have important applications, for instance, in strategic real option models. It is well known that in many timing games in continuous time there exist no equilibria in pure strategies. If they do exist, however, they typically involve asymmetric payoffs that only depend on the respective roles of the players, which must be determined before the game starts. Then there is an unresolved strategic conflict. Here we strive for a rather general existence result and possibly symmetric payoffs, so we consider mixed strategies. In particular, no assumption is made concerning the local incentives, which can move randomly between first- and second-mover advantages. Restricting attention to games with a second-mover advantage is known to be helpful for equilibrium existence. We begin by analyzing that case, too, demonstrating the general payoff asymmetry in pure-strategy equilibria. Our main contribution for this case is the construction of mixed strategy equilibria with symmetric payoffs in a rather general model, making no specific assumptions concerning the underlying uncertainty. Nevertheless, the equilibrium strategies have a clear characterization and interpretation, using the concept of the Snell envelope from optimal stopping theory. Specifically, we can describe the stopping rates that make a player indifferent to staying in the game when forgoing a profitable local payoff. Due to the possible uncertainty, the trade-offs may all be in expected terms. Since we are not assuming any kind of smoothness or monotonicity of the underlying payoff processes, we also generalize existing results for purely deterministic models. With a first-mover advantage, there is often a preemption incentive that leads to equilibrium existence problems even with mixed strategies in very simple, completely well-behaved deterministic models.\footnote{% See, e.g., \cite{FudenbergTirole85} and \cite{HendricksWilson92}. } One needs to extend strategies to model preemption appropriately in continuous time, which requires some coordination mechanism (not to be confused with public \emph{correlation}). As there is no respective ``next period'', the players have to be able to ensure that the game ends instantaneously regardless, but without simultaneous stopping occurring with probability 1. We use the generalization of the concept of \cite{FudenbergTirole85} to stochastic models provided in \cite{RiedelSteg14}, which gives us preemption equilibria that correspond to symmetric mixed equilibria in discrete time. These can be combined with the previous equilibria for local second-mover advantages with continuous strategies, to obtain a general existence result and characterization of subgame-perfect equilibria with symmetric payoffs without any restriction on the order of payoff processes. These equilibria now allow for richer models of strategic real options, for instance, which typically focus on preemption. In the latter case one needs to ensure that waiting until preemption starts is optimal, however, which may hold in a specific model or require a particular assumption.\footnote{% See \cite{RiedelSteg14} on such properties. For a strategic investment model where both preemption and attrition situations can arise see \cite{StegThijssen15}. } Here we may have some continuous stopping beforehand.
Depending on the profitability of future continuation equilibria, preemption does not have to be triggered just because there is a local first-mover advantage. As the term says, preemption destroys future continuation payoffs; so the less often preemption occurs, the higher the resulting equilibrium payoffs. To determine at which times preemption is indeed inevitable, we provide an algorithm working under the additional assumption that simultaneous stopping is generally the (weakly) worst outcome. Then we find that in any equilibrium with symmetric payoffs in every subgame, the equilibrium payoff can never exceed the expected value of optimally stopping the \emph{minimum} of the local leader and follower payoffs. That means that, no matter how the players mix, possibly even with arbitrary public correlation and with unlimited remaining time, they can never benefit from a high value of the underlying payoff processes if that value is not attained by both the leader and follower payoffs simultaneously. Then we know that the game has to end by preemption whenever the leader payoff exceeds the equilibrium payoff bound. Iterating this procedure culminates in the identification of times when preemption cannot be avoided. Confining preemption to those times, we obtain an equilibrium with least sustainable preemption and highest possible payoffs. \subsection{Main theorems} We use the formal concept of subgame-perfect equilibria with mixed strategies for timing games developed in \cite{RiedelSteg14}, who argue that subgames are appropriately identified by stopping times (the latter are feasible decision nodes, but cannot be represented by considering deterministic times only). Mixed strategies take the form of distribution functions over time that can react to the dynamic exogenous information about the state of the world. We further apply the mentioned strategy extensions for preemption regimes. Uncertainty may affect the underlying payoffs, but may also (just) represent public correlation devices. Given that framework, this paper develops three main Theorems \ref{thm:SPE}, \ref{thm:symeql} and \ref{thm:maxeql}, which build on each other. Theorem \ref{thm:SPE} constructs subgame-perfect equilibria with mixed strategies for games with a systematic (weak) second-mover advantage. To this end the players have to coordinate on an appropriate payoff process, which consists of the leader payoff up to some feasible point where there is either a simultaneous stopping equilibrium, or which is sufficiently late that both players will have stopped for sure by then. The equilibrium proceeds by optimally stopping this fixed process. As long as there are expected gains, no player stops. If a point is reached, however, where it would be strictly optimal to stop~-- i.e., any delay would imply a loss~-- then there has to be a compensation in terms of some probability to obtain the higher follower payoff. We characterize the exact rate that the respective opponent has to use to make each player indifferent at such points. Owing to its generality, this result is technically not as clear as its interpretation. With typical Brownian models, for instance, one cannot apply local arguments as there is no path monotonicity at all. Consequently, it is then also not possible to distinguish proper time intervals on which mixing occurs~-- imagine a Brownian motion fluctuating around the boundary of the region where mixing indeed takes place.
Nevertheless, using martingale arguments we obtain a clear representation of strategies involving the concept of the Snell envelope from the theory of optimal stopping, which allows us to speak meaningfully of a (local) expected loss, for instance. These strategies will typically be continuous up to some terminal jumps. Another important question is then time consistency. If we define mixed strategies for all subgames, i.e., stopping times, we have to ensure that they imply consistent conditional stopping probabilities throughout the game, which is generally not trivial. Theorem \ref{thm:symeql} then makes use of the mentioned strategy extensions, which allow us to provide symmetric preemption equilibria for regimes with a first-mover advantage. The theorem establishes that they form feasible continuation equilibria when leaving regimes with second-mover advantages. In aggregate we thus obtain payoff-symmetric equilibria for games without any restriction on the local incentives. There may be arbitrary, random alternations of first- or second-mover advantages. Theorem \ref{thm:maxeql} determines \emph{efficient} symmetric equilibria. While the previous ones involve extreme preemption~-- whenever there is a strict first-mover advantage~-- we now identify equilibria with least sustainable preemption, resulting in the highest feasible payoffs. For that purpose we focus on \emph{payoff-symmetric} equilibria, with symmetric payoffs in every subgame, since this property has important implications for equilibrium strategies. Roughly, conditional stopping probabilities can only differ when players are currently indifferent between becoming leader or follower. With the additional assumption that simultaneous stopping is not strictly better than leading or following, in equilibrium the players can coordinate at most on optimally stopping the minimum of the leader and follower payoff processes. Whenever the leader payoff exceeds that value, there must be preemption. Knowing this restricts the relevant stopping times in the previous problem, which further reduces the attainable value. Iterating the procedure formally as an algorithm identifies inevitable preemption points. Theorem \ref{thm:maxeql} establishes that we do obtain a well-defined equilibrium in the end, not only a limit value. It is based on the previous equilibria, but suppresses preemption where possible. The main problem is to show that there remain proper equilibria where preemption does take place, and that we indeed have properly measurable, time-consistent strategies when applying the proposed algorithm to all subgames. \subsection{Related literature} Strategic timing problems appear in an abundance of contexts, in particular in economics but also in biology, for example, and there is a vast related literature. On the one hand there is a branch concerned with deterministic timing problems in continuous time addressing a wide range of applications, where typically a distinction is made between preemption models and wars of attrition. Correspondingly, \cite{HendricksWilson92} and \cite{Hendricksetal88} study stylized models with systematic first- and second-mover advantages, respectively. A war of attrition appears in \cite{GhemawatNalebuff85}, who consider exit from a declining industry.\footnote{% \cite{FudenbergTirole86} analyse a market exit problem with incomplete information. \cite{BulowKlemperer99} consider a similar problem with more than two firms. } In a seminal contribution, \cite{FudenbergTirole85} emphasize subgame-perfection in a symmetric preemption game.
\cite{HoppeLehmann-Grube05} model a similar technology adoption game, allowing the leader payoff function to be multi-peaked while restricting the follower payoff to be nonincreasing.\footnote{% Some implication of \(F\) being a supermartingale will be discussed here in Section \ref{sec:eqlpure}. \cite{Duttaetal95} obtain again a similar structure as \cite{FudenbergTirole85} (including the single-peakedness assumption) from a model of product differentiation. } Without uncertainty, these games proceed quite linearly due to perfect foresight. More complications arise when the incentives may vary more freely. \cite{Larakietal05} consider general deterministic \nbd{N}player games with payoffs that are just continuous functions of time (for given identities of first-movers). They prove that there do always exist \nbd{\varepsilon}equilibria, but not necessarily exact equilibria. On the other hand there is also a wide branch of the literature considering (continuous-time) timing games with uncertainty. \cite{DuttaRustichini93}, e.g., formulate a symmetric Markovian setting. However, restricting themselves to pure strategies, their Markov perfect equilibrium payoffs are generally asymmetric. Important applications with uncertainty are strategic real options. An early contribution is \cite{Smets91}. A typical symmetric model of preemptive investment is that of \cite{MasonWeeds10}.\footnote{% \cite{PawlinaKort06} consider a similar model with asymmetric investment costs and \cite{Thijssen10} one with firm-specific uncertainty. \cite{LambrechtPerraudin03} model preemption with incomplete information. } \cite{Weeds02} considers strategic irreversible R\&D investment, while \cite{Murto04} studies exit from a duopoly. Finally, as we emphasize uncertainty, the literature on Dynkin games with its large tradition has to be named. As these are two-person, \emph{zero-sum} timing games, the classical question is the existence of an equilibrium saddle point, or value, under varying conditions. We here just refer to the more recent work by \cite{TouziVieille02}, since their payoff processes are very general and~-- more importantly~-- since they introduce another concept of mixed strategies (but without consideration of subgames). \cite{TouziVieille02} prove that many more Dynkin games have a value if one allows for such mixed strategies. Recently, also some more abstract work considering stochastic timing games with non-zero-sum payoffs has been conducted. \cite{HamadeneZhang10}, e.g., prove existence of Nash equilibrium for 2-player games with a general second-mover advantage.\footnote{% See also \cite{HamadeneHassani14} for an extension to \(N\) players using a similar approach. \cite{LarakiSolan13} make less assumptions concerning the incentives in a 2-player game. Consequently, even allowing for mixed strategies, they can only prove existence of \nbd{\varepsilon}equilibria. } \subsection{Outline} This paper is organized as follows. In Section \ref{sec:game} we define our timing games, making only minimal regularity assumptions, and we introduce the concept of subgame-perfect equilibria in mixed strategies as developed in \cite{RiedelSteg14}. Although we are generally working with mixed strategies, equilibrium verification is related to solving optimal stopping problems by linearity, for which we establish a convenient representation in Section \ref{sec:BR}. There we also present the needed facts from the general theory of optimal stopping. 
On the one hand, strategies will be represented by the Snell envelope, which we motivate. On the other hand, in our games we have to be quite careful about existence of optimal stopping times, which depends strongly on path properties of the involved processes, so we will address some details. By a first application of this theory in Section \ref{sec:eqlpure} we establish equilibria in pure strategies and argue that they typically generate coordination problems. These are resolved in Section \ref{sec:eqlmix} by the construction of subgame-perfect equilibria in mixed strategies, in a first step for games with a systematic second-mover advantage. Although our representation of the equilibrium strategies can be well interpreted, we derive a completely explicit equilibrium for a market exit example in Section \ref{sec:expduo}. In Section \ref{sec:eqlsym} we use the mentioned strategy extensions to obtain equilibria in regimes with first-mover advantages, which then enables us to construct and characterize subgame-perfect equilibria for arbitrary symmetric stopping games. Finally we identify equilibria with maximal payoffs and least possible preemption in Section \ref{sec:symeql}. Section \ref{sec:conc} concludes. The appendix contains some technical results and the proofs. \section{The timing game}\label{sec:game} We use the framework for subgame-perfect equilibria with mixed strategies developed in \cite{RiedelSteg14}, where the concepts summarized in this section are explained in more detail. Here we only consider symmetric games, which allows some simplifications incorporated in the following. The timing game consists of two players \(i=1,2\), who each decide when to stop in continuous time \(t\in[0,\infty]\). However, there is uncertainty about the state of the world, modeled by a probability space \((\Omega, {\mathscr F}, P)\). The partial information about the true state evolves exogenously over time, represented by a filtration \(({\mathscr F}_t)_{t\geq 0}\). The player's stopping decisions may of course use this information, so a feasible plan is in principle a stopping time; see Section \ref{subsec:strateql} for the formal definition of strategies. As usual in timing games, we focus only on situations (resp.\ histories) in which no player has stopped, yet. Therefore, the game ends as soon as any of the players stops. A player who is the single one to stop first is called the \emph{leader}. In this case the other player becomes the \emph{follower}. Their respective payoffs are determined by the two stochastic processes \(L\) and \(F\). Both processes incorporate the possible effect of an (optimal, contingent) stopping decision that the follower might have in a more primitive model, given that the opponent has already stopped~-- as in Example \ref{exm:attrition} below. If the game ends by both players stopping simultaneously, their payoffs are determined by the third process \(M\). All processes are measured in the same numeraire, say, discounted to time 0, and the players are risk neutral.\footnote{% Alternatively, one can interpret the payoff processes as measured in discounted ``utils''. } Equilibria will obviously be based on solving optimal stopping problems involving the three underlying payoff processes. We need to make some weak regularity assumptions in order to have well defined problems in the following. 
\begin{assumption}\label{asm:payoffs} \par\noindent \begin{enumerate} \item \(\left(\Omega, {\mathscr F}, P\right)\) is a fixed probability space equipped with a filtration \(\filt{F}=\bigl({\mathscr F}_t\bigr)_{t\geq 0}\) satisfying the usual conditions (i.e., \(\filt{F}\) is right-continuous and complete). \item The processes \(L\), \(F\) and \(M\) are adapted, right-continuous (a.s.) and of class {\rm (D)}, \(M\) having an extension with \(E[\lvert M_\infty\rvert]<\infty\). \item\label{LFusc} \(\min(L,F)\) is upper-semi-continuous from the left in expectation, in fact on \([0,\infty]\) if we put \(L_\infty=F_\infty=M_\infty\). \end{enumerate} \end{assumption} \begin{remark}\label{rem:payoffs} \par\noindent \begin{enumerate} \item The payoff processes \(L\), \(F\) and \(M\) do not have to be random; deterministic ones are just a special case. Even then the probability space and filtration might be nontrivial and represent possible public randomization devices. The current payoffs at any time \(t\) just have to be known given the public information \({\mathscr F}_t\). \item Two important general technical issues are measurability, in particular concerning strategies that we address below, and integrability. We need to ensure that expectations are always well defined and that pointwise converging random variables converge in expectation, too. Class (D) is possibly the weakest integrability condition we can work with.\footnote{% A measurable process \(X\) is of class {\rm (D)} if the family \(\{X_\tau\colon\tau<\infty\text{ a.s.\ a stopping time}\}\) is uniformly integrable. Then the family is bounded in expectation and pointwise convergence of \(X\) at a stopping time implies convergence in \(L^1(P)\) as well. This is a mild regularity condition implied, e.g., by either \(E[\sup_t\abs{X_t}]<\infty\) or \(\sup_\tau E[\abs{X_\tau}^p]<\infty\) for some \(p>1\). We may equivalently define any \(X_\infty\in L^1(P)\) and consider \emph{all} stopping times (possibly taking the value \(\infty\)) in the previous set; cf.\ Lemma \ref{lem:classD}. } Boundedness would be much too strong for many applications (e.g., involving Brownian motion). \item\label{rem:M_infty} It depends on the model whether there is a natural payoff if both players ``never stop'', which may be some limit of \(M\) or of \(L\). In the latter case we simply set \(M_\infty:=L_\infty\) and work with \(M_\infty\) for a unified payoff notation. For convenience, we also formally define \begin{equation*} F_\infty:=M_\infty. \end{equation*} \item In order to have any general existence results for equilibria, some path regularity of the payoff processes is necessary, as can be seen clearly even in the deterministic, single-agent case. Nevertheless, it suffices for us to have upper-semi-continuity from the left only in expectation.\footnote{% Upper-semi-continuity from the left in expectation means \(E[L_\tau\wedge F_\tau]\geq\limsup_{n}E[L_{\tau_n}\wedge F_{\tau_n}]\) for any sequence of stopping times \((\tau_n)\) that is a.s.\ increasing to a stopping time \(\tau\). } In optimal stopping problems this property is also needed for existence. We use it for equilibria in mixed strategies when there is a (local) second-mover advantage. It is of course only required for \(L\) if that never exceeds \(F\). Indeed, one could restrict attention to intervals \(\bigl[\tau,\,\inf\{t\geq\tau\mid L_t>F_t\}\bigr]\), where \(\tau\) is a stopping time; the area \(\{L>F\}\) is only relevant at transitions.
The assumption is satisfied, e.g., if the paths of \(L\) and \(F\) are a.s.\ (upper-semi-)continuous from the left.\footnote{% Then \(\limsup_{s\nearrow t}(L_s\wedge F_s)\leq(\limsup_{s\nearrow t}L_s)\wedge(\limsup_{s\nearrow t}F_s)\leq L_t\wedge F_t\) for all \(t\in[0,\infty]\) a.s., and we note that \(L\) and \(F\) are of class {\rm (D)}. } \end{enumerate} \end{remark} \begin{example}\label{exm:attrition} Let us consider a market exit problem as a simple example for a stochastic timing game with second-mover advantage, i.e., \(F\geq L\), like in the classical war of attrition.\footnote{% For typical examples of preemption type, see \cite{RiedelSteg14}. } Suppose that two firms are operating in one market such that duopoly returns \(\pi^D\) might not be sustainable in the long run, depending on uncertain exogenous conditions. While each firm would in general like the opponent to leave the market in order to earn the monopoly profit \(\pi^M\geq\pi^D\), it might be too costly to wait for that possibly random event. Each firm thus decides on times when waiting becomes no longer promising, and at which to leave the market if the other is still present. The payoff processes are then: \begin{align}\label{Ft} L_t={}&M_t:=\int_0^t\pi^D_s\,ds,\nonumber\\ F_t:={}&L_t+\esssup_{t\leq\tau^F\in{\mathscr T}}E\biggl[\int_t^{\tau^F}\pi^M_s\,ds\!\biggm\vert\!{\mathscr F}_t\biggr] \end{align} for all \(t\in[0,\infty]\) (the convention \(F_\infty=M_\infty=L_\infty\) here holds naturally). Any monopolist may drop out at a stopping time \(\tau^F\), since the monopoly return need not be profitable, either; then immediate exit is the dominant strategy and the second-mover advantage will not be strict. However, we do not model the strategy of a single remaining firm, but incorporate the corresponding optimal decision in the payoff processes. The idea of subgame perfection requires that the latter be chosen optimally. Assumption \ref{asm:payoffs} is satisfied in this example if \(\pi^D\) and \(\pi^M\) are adapted and \nbd{P\otimes dt}integrable, as all processes are then bounded by an integrable random variable. It follows from our discussion in Section \ref{subsec:optstop} below that there exists a right-continuous process \(F\) such that the relation \eqref{Ft} holds even when substituting \(t\) by a general stopping time \(\tau\), which is one of the most important results in continuous-time stopping.\footnote{% This issue is somewhat more delicate for \(L\) if we have an example in which the leader's payoff also depends on the stopping time eventually chosen by the follower, as in an entry model. The follower's decision involves of course no optimality condition with respect to the leader's payoff stream, which may make the leader's payoff discontinuous from the right in expectation (which optimality exactly prevents for the follower's payoff). In general that problem will not arise in diffusion models, however. } \end{example} \subsection{Mixed strategies and equilibrium concept}\label{subsec:strateql} The concept of subgame-perfect equilibrium for stochastic timing games of \cite{RiedelSteg14} is as follows. The feasible decision nodes in continuous time are all \emph{stopping times}. Therefore we consider any stopping time as the beginning of a subgame, with the connotation that no player has stopped before. Let \({\mathscr T}\) denote the set of all stopping times w.r.t.\ our filtration \(\filt{F}\). 
We hence specify complete plans of actions for all subgames, taking the form of (random) distribution functions over time. In order to aggregate strategies for the whole game, one requires \emph{time consistency}, meaning that Bayes' law has to be respected wherever it applies. Additional strategy extensions are needed for subgames with first-mover advantages, to model preemption appropriately in continuous time.\footnote{% See also \cite{HendricksWilson92} on (non-)existence of equilibria in deterministic preemption games. } Therefore we use the generalization of the concept of \cite{FudenbergTirole85} to stochastic models developed in \cite{RiedelSteg14}~-- which preserves the interpretation of discrete-time limits. These extensions are introduced immediately, although one can abstract from them in the discussion of games with a second-mover advantage. We will take them up later for general games. \begin{definition}\label{def:alpha} An \emph{extended mixed strategy} for player \(i\in\{1,2\}\) in the subgame starting at \(\vartheta\in{\mathscr T}\), also called \emph{\nbd{\vartheta}strategy}, is a pair of processes \(\bigl(G^\vartheta_i,\alpha^\vartheta_i\bigr)\) taking values in \([0,1]\), respectively, with the following properties. \begin{enumerate} \item \(G^\vartheta_i\) is adapted. It is right-continuous and nondecreasing with \(G^\vartheta_i(t)=0\) for all \(t<\vartheta\), a.s. \item \(\alpha^\vartheta_i\) is progressively measurable.\footnote{% Formally, the mapping \(\alpha^\vartheta_i\colon\Omega\times[0,t]\to\mathbb{R}\), \((\omega,s)\mapsto\alpha^\vartheta_i(\omega,s)\) must be \({\mathscr F}_t\otimes{\mathscr B}([0,t])\)-measurable for any \(t\in\mathbb{R}_+\). It is a stronger condition than adaptedness, but weaker than optionality, which we automatically have for \(G^\vartheta_i\) by right-continuity. Progressive measurability implies that \(\alpha^\vartheta_i(\tau)\) will be \nbd{{\mathscr F}_\tau}measurable for any \(\tau\in{\mathscr T}\). } It is right-continuous where \(\alpha^\vartheta_i<1\), a.s.\footnote{% This means that with probability 1, \(\alpha^\vartheta_i(\cdot)\) is right-continuous at all \(t\in[0,\infty)\) for which \(\alpha^\vartheta_i(t)<1\). Since we are here only interested in \emph{symmetric} games, we may demand the extensions \(\alpha^\vartheta_i(\cdot)\) to be right-continuous also where they take the value 0, which simplifies the definition of outcomes. See Section 3 of \cite{RiedelSteg14} for issues with asymmetric games and corresponding weaker regularity restrictions. } \item \begin{equation*} \alpha^\vartheta_i(t)>0\Rightarrow G_i^\vartheta(t)=1\qquad\text{for all }t\geq 0\text{, a.s.} \end{equation*} \end{enumerate} We further define \(G^\vartheta_i(0-)\equiv 0\), \(G^\vartheta_i(\infty)\equiv 1\) and \(\alpha^\vartheta_i(\infty)\equiv 1\) for every extended mixed strategy. \end{definition} Note that we do not require the players to stop in finite time. Then player \(i\) may for instance decide to stop simply at some stopping time \(\tau\geq\vartheta\), which is interpreted as a \emph{pure} strategy and corresponds to \(G^\vartheta_i(t)=\indi{t\geq\tau}\) for all \(t\geq 0\) (and \(\alpha^\vartheta_i=\indi{t\geq\infty}\)). If \(\alpha^\vartheta_i\equiv 0\) on \([0,\infty)\), we loosely speak of a ``standard'' mixed strategy. Such extended mixed strategies are completely equivalent to mixed strategies in the analogous model without extensions \(\alpha^\vartheta_i\). 
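To fix ideas, the following small numerical sketch (in Python, on a hypothetical discrete time grid; it is not part of the formal model, where time is continuous and strategies may react to the information flow) represents a \nbd{\vartheta}strategy by a pair of arrays \((G,\alpha)\) and checks the elementary conditions of Definition~\ref{def:alpha} that are visible on a grid: \(G\) takes values in \([0,1]\), vanishes before \(\vartheta\), is nondecreasing, and \(\alpha>0\) only where \(G=1\) (right-continuity and progressive measurability are properties of the continuous-time objects and cannot be captured on a grid).
\begin{verbatim}
import numpy as np

# Hypothetical discretization of [0, 10]; purely for illustration.
t = np.linspace(0.0, 10.0, 1001)
theta = 2.0   # start of the subgame
tau_P = 7.0   # time from which the extension alpha is switched on

# "Standard" mixed part: stop at constant rate lam after theta ...
lam = 0.3
G = np.where(t < theta, 0.0, 1.0 - np.exp(-lam * (t - theta)))
# ... with a jump of G to 1 at tau_P, as required wherever alpha > 0.
G = np.where(t >= tau_P, 1.0, G)

# Extension alpha, used only in preemption regimes; zero elsewhere.
alpha = np.where(t >= tau_P, 0.5, 0.0)

def is_feasible(t, theta, G, alpha, tol=1e-12):
    """Grid check of the elementary conditions of Definition def:alpha."""
    ok_range = np.all((G >= -tol) & (G <= 1 + tol)) and \
               np.all((alpha >= -tol) & (alpha <= 1 + tol))
    ok_before = np.all(G[t < theta] <= tol)       # G = 0 before theta
    ok_monotone = np.all(np.diff(G) >= -tol)      # G nondecreasing
    ok_link = np.all(G[alpha > 0] >= 1 - tol)     # alpha > 0  =>  G = 1
    return bool(ok_range and ok_before and ok_monotone and ok_link)

print(is_feasible(t, theta, G, alpha))   # True
# A pure strategy "stop at tau" is the special case G = 1_{t >= tau}, alpha = 0.
\end{verbatim}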
\begin{definition}\label{def:payoffs_extended} Given two extended mixed strategies \(\bigl(G^\vartheta_i,\alpha^\vartheta_i\bigr)\), \(\bigl(G^\vartheta_j,\alpha^\vartheta_j\bigr)\), \(i,j\in\{1,2\}\), \(i\not=j\), the \emph{payoff} of player \(i\) in the subgame starting at \(\vartheta\in{\mathscr T}\) is \begin{align*} V^\vartheta_i\bigl(G^\vartheta_i,\alpha^\vartheta_i,G^\vartheta_j,\alpha^\vartheta_j\bigr):=E&\biggl[\int_{[0,\hat\tau^\vartheta)}\bigl(1-G^\vartheta_j(s)\bigr)L_s\,dG^\vartheta_i(s)+\int_{[0,\hat\tau^\vartheta)}\bigl(1-G^\vartheta_i(s)\bigr)F_s\,dG^\vartheta_j(s)\nonumber\\ &+\sum_{s\in[0,\hat\tau^\vartheta)}\Delta G^\vartheta_i(s)\Delta G^\vartheta_j(s)M_s+\lambda^\vartheta_{L,i}L_{\hat\tau^\vartheta}+\lambda^\vartheta_{L,j}F_{\hat\tau^\vartheta}+\lambda^\vartheta_{M}M_{\hat\tau^\vartheta}\!\biggm\vert\!{\mathscr F}_\vartheta\biggr]. \end{align*} \end{definition} At \(\hat\tau^\vartheta:=\inf\{t\geq\vartheta\mid\alpha^\vartheta_1(t)+\alpha^\vartheta_2(t)>0\}\), the extensions \(\alpha^\vartheta_\cdot\) determine final outcome probabilities \(\lambda^\vartheta_{L,i}\), \(\lambda^\vartheta_{L,j}\) and \(\lambda^\vartheta_{M}\). Their definition is given in Appendix \ref{app:outcome} for completeness; it is a simplification of that in \cite{RiedelSteg14}, thanks to the slightly stronger regularity here. Note that if both players reserve some mass for \(t=\infty\) (whence \(\hat\tau^\vartheta=\infty\), \(\lambda^\vartheta_{L,i}=\lambda^\vartheta_{L,j}=0\)), the corresponding payoff will be \(\bigl(1-G^\vartheta_i(\infty-)\bigr)\bigl(1-G^\vartheta_j(\infty-)\bigr)M_\infty\), since we have defined \(G^\vartheta_i(\infty)=\alpha^\vartheta_i(\infty)=1\). The pathwise integrals do include possible jumps of the \emph{right-continuous} integrators at 0, since player \(i\) can become leader/follower from an initial jump of \(G^\vartheta_i\)/\(G^\vartheta_j\), respectively. The payoffs are indeed well defined under Assumption \ref{asm:payoffs} and bounded in expectation~-- uniformly across all feasible strategies; cf.\ Lemma \ref{lem:LdG}. To aggregate \nbd{\vartheta}strategies across subgames, time consistency in the form of Bayes' law has to hold. \begin{definition}\label{def:TC_extended} An \emph{extended mixed strategy} for player \(i\in\{1,2\}\) in the stopping game is a family \begin{equation*} \bigl(G_i,\alpha_i\bigr):=\bigl(G_i^\vartheta,\alpha^\vartheta_i\bigr)_{\vartheta\in{\mathscr T}} \end{equation*} of extended mixed strategies for all subgames \(\vartheta\in{\mathscr T}\). An extended mixed strategy \(\bigl(G_i,\alpha_i\bigr)\) is \emph{time-consistent} if for all \(\vartheta\leq\vartheta'\in{\mathscr T}\) \begin{flalign*} && \vartheta'\leq t\in\mathbb{R}_+ &\Rightarrow\ G_i^\vartheta(t)=G_i^\vartheta(\vartheta'-)+\bigl(1-G_i^\vartheta(\vartheta'-)\bigr)G_i^{\vartheta'}(t)\quad\text{a.s.} && \\ &\text{and} \\ && \vartheta'\leq\tau\in{\mathscr T} &\Rightarrow\ \alpha^\vartheta_i(\tau)=\alpha^{\vartheta'}_i(\tau)\quad\text{a.s.} && \end{flalign*} \end{definition} Note that time consistency implies in particular that for any two subgames \(\vartheta,\vartheta'\in{\mathscr T}\) we must have \(G_i^\vartheta\equiv G_i^{\vartheta'}\) (a.s.) on the event \(\{\vartheta=\vartheta'\}\), as one would reasonably expect. The equilibrium concept is then natural.
\begin{definition}\label{def:SPE_extended} A \emph{subgame-perfect equilibrium} for the timing game is a pair \(\bigl(G_1,\alpha_1\bigr)\), \(\bigl(G_2,\alpha_2\bigr)\) of time-consistent extended mixed strategies such that for all \(\vartheta\in{\mathscr T}\), \(i,j\in\{1,2\}\), \(i\not=j\), and extended mixed strategies \(\bigl(G_a^\vartheta,\alpha^\vartheta_a\bigr)\) \begin{equation*} V_i^\vartheta(G_i^\vartheta,\alpha^\vartheta_i,G_j^\vartheta,\alpha^\vartheta_j)\geq V_i^\vartheta(G_a^\vartheta,\alpha^\vartheta_a,G_j^\vartheta,\alpha^\vartheta_j)\quad\text{a.s.}, \end{equation*} i.e., such that every pair \(\bigl(G^\vartheta_1,\alpha^\vartheta_1\bigr)\), \(\bigl(G^\vartheta_2,\alpha^\vartheta_2\bigr)\) is an \emph{equilibrium} in the subgame at \(\vartheta\in{\mathscr T}\), respectively. \end{definition} \section{Best replies and optimal stopping}\label{sec:BR} The payoffs in Definition \ref{def:payoffs_extended} are apparently linear in strategies. In this section we derive a more explicit representation of this linearity, which will be very helpful for rigorous proofs to verify equilibria, but also for necessity arguments. To determine or verify any best replies, one needs to maximize over (extended) mixed strategies against these same objects in general, of course. Here we make related statements such as ``any stopping time in the support of the mixed strategy needs to be optimal'' precise. We further introduce the central concepts from the theory of optimal stopping, notably the \emph{Snell envelope}, which plays a crucial role in the following representation and interpretation of mixed strategies in equilibrium. The following arguments concern the distributions \(G^\vartheta_i\), so we neglect the extensions until Section \ref{sec:eqlsym} for simplicity (which formally means restricting to \(\alpha^\vartheta_i\equiv\indi{t\geq\infty}\), as mentioned in Section \ref{subsec:strateql}, i.e., to ``standard'' mixed strategies). Now, for the alternative representation of the payoff of player \(i\in\{1,2\}\) in the subgame at \(\vartheta\in{\mathscr T}\), we introduce the process \(S^\vartheta_i\) given by \begin{equation}\label{Si} S^\vartheta_i(t):=\int_{[0,t)}F_s\,dG^\vartheta_j(s)+\Delta G^\vartheta_j(t)M_t+\bigl(1-G^\vartheta_j(t)\bigr)L_t \end{equation} for all \(t\in[0,\infty)\), where \(G^\vartheta_j\) is a given feasible mixed strategy for the opponent \(j\in\{1,2\}\setminus i\). Lemma \ref{lem:SclassD} shows that this process is well behaved: it is \emph{optional}\footnote{% This means that \(S^\vartheta_i\) is measurable w.r.t.\ the optional \nbd{\sigma}field on the product space \(\Omega\times\mathbb{R}_+\), which is generated by all right-continuous adapted processes, or equivalently by the random intervals \([0,\tau)\), \(\tau\in{\mathscr T}\). } and of class {\rm (D)}. With \(M_\infty\in L^1(P)\), we can extend the definition of \(S^\vartheta_i\) in \eqref{Si} to \(t=\infty\) implying also \(S^\vartheta_i(\infty)\in L^1(P)\).\footnote{\label{fn:FdGbounded}% Note that the integral in \eqref{Si} converges, as it is bounded by \(\int_{[0,\infty)}\abs{F_s}\,dG^\vartheta_j(s)\in L^1(P)\) thanks to Lemma \ref{lem:LdG}. 
} Thanks to Lemma \ref{lem:LdG} and \(S^\vartheta_i(\infty)\in L^1(P)\), we can integrate \(S^\vartheta_i\) by any feasible \(dG^\vartheta_i\) to see that the expected payoff of player \(i\) in the subgame beginning at \(\vartheta\in{\mathscr T}\) can be written as\footnote{% An application of Fubini's theorem using footnote \ref{fn:FdGbounded} yields in particular \begin{align*} &\int_{[0,\infty)}\bigl(1-G^\vartheta_i(s)\bigr)F_s\,dG^\vartheta_j(s)=\int_{[0,\infty)}\int_{[0,\infty]}\indi{t>s}\,dG^\vartheta_i(t)F_s\,dG^\vartheta_j(s)\\ ={}&\int_{[0,\infty]}\int_{[0,\infty)}\indi{s<t}F_s\,dG^\vartheta_j(s)\,dG^\vartheta_i(t)=\int_{[0,\infty]}\int_{[0,t)}F_s\,dG^\vartheta_j(s)\,dG^\vartheta_i(t)\in L^1(P). \end{align*} } \begin{equation}\label{V=SdG} V^\vartheta_i(G^\vartheta_i,G^\vartheta_j)=E\biggl[\int_{[0,\infty]}S^\vartheta_i(t)\,dG^\vartheta_i(t)\!\biggm\vert\!{\mathscr F}_\vartheta\biggr]. \end{equation} As is to be expected by the linearity of \eqref{V=SdG}, there exists a best reply only if there is one that is a pure strategy. \begin{lemma}\label{lem:BRpure} Fix \(\vartheta\in{\mathscr T}\) and let \(G^\vartheta_i\) and \(G^\vartheta_j\) be feasible. Then \begin{equation*} V^\vartheta_i(G^\vartheta_i,G^\vartheta_j)\leq\esssup_{\vartheta\leq\tau\in{\mathscr T}}E\bigl[S^\vartheta_i(\tau)\!\bigm\vert\!{\mathscr F}_\vartheta\bigr]\qquad\text{a.s.} \end{equation*} \end{lemma} \noindent {\it Proof:} In Appendix \ref{app:miscproofs}. \medskip From the lemma (and more explicitly from its proof) we see that a feasible strategy \(G^\vartheta_i\) will be a best reply to \(G^\vartheta_j\) if and only if for any stopping time \(\tau^*\) such that \(dG^\vartheta_i(\tau^*)>0\),\footnote{% This means that \(G^\vartheta_i(t)>G^\vartheta_i(\tau-)\) for all \(t>\tau\) a.s. } \begin{equation* E\bigl[S^\vartheta_i(\tau^*)\!\bigm\vert\!{\mathscr F}_\vartheta\bigr]\geq E\bigl[S^\vartheta_i(\tau)\!\bigm\vert\!{\mathscr F}_\vartheta\bigr]\qquad\forall\,\vartheta\leq\tau\in{\mathscr T} \end{equation*} and we generally have to solve the optimal stopping problem on the right in Lemma \ref{lem:BRpure}. The central aspect of continuous-time games of timing is their inherent discontinuity, even if the underlying data (here \(L\), \(F\), and \(M\)) is continuous. For instance, from the definition of \(S^\vartheta_i\) in \eqref{Si} it is now immediately clear that a best reply cannot put any joint mass points where \(F>M\), since \(\lim_{\varepsilon\searrow 0}S^\vartheta_i(t+\varepsilon)-S^\vartheta_i(t)=\Delta G^\vartheta_j(t)\bigl(F_t-M_t\bigr)\) by right-continuity of \(L\); this will be a frequent argument. Depending on \(G^\vartheta_j\), there need not exist any stopping time that actually attains the value of the problem, as \(S^\vartheta_i\) may have various kinds of discontinuities. Dealing with such discontinuities will be one of our major issues. In the following subsection we present some crucial facts from the general theory of optimal stopping in continuous time, providing in particular sufficient (and basically necessary) conditions for the existence of optimal stopping times and their characterization in terms of the Snell envelope. The latter is in fact our main tool to derive and represent mixed equilibrium strategies. \subsection{Optimal stopping in continuous time}\label{subsec:optstop} As a motivating stopping problem to present the theory, consider the unilateral problem of when to become the leader optimally, i.e., supposing the opponent will never act. 
This problem will play an important role in the following.\footnote{% To stay in the framework of the game, for finding a (pure) best reply to \(G^0_j=\indi{t\geq\infty}\) we have to use the payoff \(M_\infty\) for not stopping in finite time. Recall our convention of setting \(L_\infty=M_\infty\), however. } It is well established how to characterize the solution of the optimal stopping problem \begin{equation* V_L(0):=\esssup_{\tau\in{\mathscr T}}E\bigl[L_\tau\bigr] \end{equation*} given Assumption \ref{asm:payoffs}. In fact, our payoff process \(L\) is right-continuous (hence optional) and of class {\rm (D)}, such that we can apply the general theory of optimal stopping as in, e.g., \cite{Mertens72} and \cite{BismutSkalli77}: There exists a smallest supermartingale \(U_L\) dominating the payoff process \(L\), called the \emph{Snell envelope} of \(L\), which satisfies \begin{equation}\label{UL} U_L(\vartheta)=\esssup_{\vartheta\leq\tau\in{\mathscr T}}E\bigl[L_\tau\!\bigm\vert\!{\mathscr F}_\vartheta\bigr]\quad\text{a.s.} \end{equation} for all stopping times \(\vartheta\in{\mathscr T}\). In particular \(U_L(0)=V_L(0)\). We remark that one can very well define the RHS of \eqref{UL} for any \(\vartheta\in{\mathscr T}\), but the key insight is that there exists a well behaved \emph{process} \(U_L=\bigl(U_L(t)\bigr)_{t\geq 0}\), which one can evaluate at any stopping time \(\vartheta\) to know the continuation value there. In view of the dynamic programming principle, we do need to consider continuation problems at stopping times; the latter are feasible quantities, but much richer than deterministic times. Now \(U_L\) is optional and of class {\rm (D)} as well\footnote{% See \cite{Mertens72}, Th\'eor\`eme T4 for the existence and Th\'eor\`eme T5 and proof for \(U_L\) being of class {\rm (D)}. } and such supermartingales have very convenient regularity properties: There exists a Doob-Meyer decomposition\footnote{% See \cite{Mertens72}, Th\'eor\`eme T3. } \begin{equation* U_L=M_L-D_L \end{equation*} that we extensively use, with a uniformly integrable, right-continuous martingale\footnote{% Therefore, the crucial optional sampling holds: \(M_L(\sigma)=E[M_L(\tau)\mid{\mathscr F}_\sigma]\) for all \(\sigma\leq\tau\in{\mathscr T}\). Further, \(M_L\) has a last element \(M_L(\infty)\) to which it converges in \(L^1(P)\). } \(M_L\) and a nondecreasing, predictable and integrable process \(D_L\). The latter can be interpreted as measuring the \emph{expected loss from stopping too late}: If we postpone any stopping to \(\tau\geq 0\), then we cannot achieve more than \(E[U_L(\tau)]=U_L(0)-E[D_L(\tau)]\), even if we stop optimally from \(\tau\) onwards. Reflecting the dynamic programming principle, the value process \(U_L\) is a martingale as long as there still exists a future time \(\tau\) giving at least the same value in expectation as stopping immediately. Whether there exists any \emph{optimal} stopping time depends on the continuity properties of \(D_L\). If \(L\) is upper-semi-continuous in expectation (as by Assumption \ref{asm:payoffs}\,\ref{LFusc} if \(L\leq F\), e.g.), \(D_L\) has left-continuous paths a.s.\footnote{\label{fn:Lusc}% See \cite{BismutSkalli77}, Th\'eor\`eme II.2 and proof. (Semi-) Continuity in expectation is in general weaker than the corresponding path property from the left. Our payoff processes are not necessarily positive. 
However, if \(L\) is optional and of class {\rm (D)}, the same will be true for its negative part \(L^-:=\max(-L,0)\), which thus has a Snell envelope \(U_{L^-}=M_{L^-}-D_{L^-}\) decomposing into a uniformly integrable right-continuous martingale \(M_{L^-}\) and an integrable increasing process \(D_{L^-}\). Then \(M_{L^-}-L^-\geq 0\), implying \(L+M_{L^-}\geq 0\). Adding the martingale \(M_{L^-}\) neither affects \(L\) being optional, of class {\rm (D)}, or (semi-) continuous in expectation, nor any optimal stopping times for \(L\). } By right-continuity of \(L\), \(D_L\) will be even continuous.\footnote{\label{fn:Lrc}% See \cite{BismutSkalli77}, (2.15), where right-continuity of the payoff process implies in fact \(Z^+=X\). } With \(D_L\) left-continuous, there exist the optimal stopping times\footnote{% \emph{Example}: \(L\) not upper-semi-continuous \(\Rightarrow\) \(\inf\{D_L>0\}\) not optimal. \begin{minipage}[b]{0.4\linewidth} \centering \begin{tikzpicture}[inner sep=0pt,minimum size=0pt,label distance=3pt] \draw[->] (-0.1,0) -- (4,0) {}; \draw[->] (0,-0.3) -- (0,2) {}; \draw[-] (0,0.8) -- (1.8,0.8) [] {}; \draw[-] (1.8,1.5) -- (3.5,1) [] {}; \draw[dotted] (0,1.5) -- (3.5,1.5) []{}; \draw[dashed] (1.8,0) -- (3.5,0.5)[]{}; \fill[black] (1.8,0.8) circle (.04); \filldraw[fill=white,draw=black] (1.8,1.5) circle (.04); \node at (3.5,1.5) [label=right:\(M_L\)] {}; \node at (3.5,0.9) [label=right:\(L\)] {}; \node at (3.5,0.35) [label=right:\(D_L\)] {}; \end{tikzpicture} \end{minipage} \begin{minipage}[b]{0.4\linewidth} \centering \begin{tikzpicture}[inner sep=0pt,minimum size=0pt,label distance=3pt] \draw[->] (-0.1,0) -- (4,0) {}; \draw[->] (0,-0.3) -- (0,2) {}; \draw[-] (0,0.8) -- (1.8,1.5) [] {}; \draw[-] (1.8,1) -- (3.5,1) [] {}; \draw[dotted] (0,1.5) -- (3.5,1.5) []{}; \draw[dashed] (1.8,0.5) -- (3.5,0.5)[]{}; \fill[black] (1.8,1) circle (.04); \fill[black] (1.8,0.5) circle (.04); \filldraw[fill=white,draw=black] (1.8,1.5) circle (.04); \filldraw[fill=white,draw=black] (1.8,0) circle (.04); \node at (3.5,1.5) [label=right:\(M_L\)] {}; \node at (3.5,0.9) [label=right:\(L\)] {}; \node at (3.5,0.35) [label=right:\(D_L\)] {}; \end{tikzpicture} \end{minipage} } \begin{align}\label{tauopt} \tau_L(\vartheta):=\inf\bigl\{t\geq\vartheta\!\bigm\vert\! U_L(t)=L_t\bigr\}\quad\text{and}\quad\tau^L(\vartheta):=\inf\bigl\{t\geq\vartheta\!\bigm\vert\! D_L(t)>D_L(\vartheta-)\bigr\}.\hphantom{,} \end{align} They are the respectively smallest and largest stopping times after \(\vartheta\in{\mathscr T}\) attaining\footnote{% See \cite{BismutSkalli77}, Th\'eor\`eme II.3. } \begin{equation}\label{U_L=L} U_L(\vartheta)=E\bigl[L_{\tau_L(\vartheta)}\!\bigm\vert\!{\mathscr F}_\vartheta\bigr]=E\bigl[L_{\tau^L(\vartheta)}\!\bigm\vert\!{\mathscr F}_\vartheta\bigr]\quad\text{a.s.} \end{equation} Hence, by optimality it must hold that \(U_L=L\) a.s.\ at any point of increase of \(D_L\), which implies\footnote{\label{fn:(U_L-L)dD_L=0}% If \(D_L\) is continuous, \(U_L\) inherits right-continuity from \(M_L\). Then, by \eqref{tauopt}, \eqref{U_L=L} and right-continuity of \(U_L-L\), \(\inf\{t\in\mathbb{R}_+\mid\int_0^t\indi{U_L-L\geq\varepsilon}\,dD_L>0\}=\infty\) a.s.\ for any \(\varepsilon>0\), i.e., \(U_L-L<\varepsilon\) \nbd{dD_L}a.e.\ with probability one, implying the claim. \eqref{(U_L-L)dD_L=0} also holds without right-continuity of \(U_L-L\), if \(L\) is upper-semi-continuous in expectation; see Remark \ref{rem:eqlLusc} in the appendix. 
} \begin{equation}\label{(U_L-L)dD_L=0} \int_0^\infty(U_L(t)-L_t)\,dD_L(t)=0\quad\text{a.s.} \end{equation} \section{Equilibria in pure strategies}\label{sec:eqlpure} In symmetric games with a systematic second-mover advantage \(F\geq L\), it is straightforward to identify certain subgame-perfect equilibria in pure strategies. Player \(j\), say, just has to stop sufficiently late, such that \(i\) will solve the problem of optimally stopping \(L\) presented in Section \ref{subsec:optstop}. We show in this section that such pure strategy equilibria typically entail asymmetric payoffs, however. The respective roles of the players have to be determined before the game starts, and correspondingly who obtains the higher payoff. With mixed strategies that we will consider thereafter, one can obtain equilibria with symmetric payoffs that do not create another strategic conflict outside the model. Stopping sufficiently late to support a pure strategy equilibrium need not be ``never'' as in the previous section; whenever it is optimal to stop \(L\), it must not be worthwhile for \(i\) to wait until \(j\) stops, in order to become follower then. This will be the case, e.g., if \(j\) stops only at times where \(F=L\)~-- or simply at \(\infty\). The easiest example is thus \(G^\vartheta_j=\indi{t=\infty}\) and \(G^\vartheta_i=\indi{t\geq\tau_L(\vartheta)}\) for all \(\vartheta\in{\mathscr T}\), or analogously with \(\tau^L(\cdot)\) defined in \eqref{tauopt}. In either case waiting is indeed optimal for \(j\) on \([0,\infty)\), because there are expected gains of \(L\) on any interval \([\vartheta,\tau^L(\vartheta)]\), and \(F\) dominates \(L\) at both \(\tau_L(\vartheta)\) and \(\tau^L(\vartheta)\). There can also be quite complex patterns based on the same logic, but with players switching roles across subgames. This can be illustrated best with a little more structure as in Example \ref{exm:attrition}, where the follower's optimal stopping times will be ``sufficiently late'' for an equilibrium. However, the arguments generalize a bit: the exploited properties are that \(F\geq L\geq M\) and that \(F\) is a supermartingale, i.e., that one becomes follower the sooner the better. Then stopping before \(\tau_L(\vartheta)\) is dominated (for any \(G^\vartheta_j\)). \begin{lemma}\label{lem:tauLdom} Suppose \(F\geq L\geq M\) and that \(F\) is a supermartingale. Fix \(\vartheta\in{\mathscr T}\). For any feasible \(G^\vartheta_j\) and stopping time \(\tau_i\geq\vartheta\), \begin{flalign*} && E&\Bigl[S^\vartheta_i\bigl(\bigl(\tau_i\vee\tau_L(\vartheta)\bigr)+\bigr)\!\Bigm\vert\!{\mathscr F}_\vartheta\Bigr]\geq E\Bigl[S^\vartheta_i\bigl(\tau_i\bigr)\!\Bigm\vert\!{\mathscr F}_\vartheta\Bigr] &&\\ \text{and}\\ && E&\Bigl[S^\vartheta_i\bigl(\tau_L(\vartheta)+\bigr)\!\Bigm\vert\!{\mathscr F}_\vartheta\Bigr]\geq E\Bigl[L_{\tau_L(\vartheta)}\!\Bigm\vert\!{\mathscr F}_\vartheta\Bigr],\qquad\qquad\text{a.s.}&& \end{flalign*} Both inequalities also hold with \(\tau^L(\vartheta)\). \end{lemma} \noindent {\it Proof:} In Appendix \ref{app:miscproofs}. \medskip Note that the supermartingale property of \(F\) is important for the result, to ensure relatively high payoffs in case one becomes follower before the optimum of \(L\) is reached. 
It is not sufficient that there are even strictly better future stopping times for \(L\) and that \(F\geq L\): If \(G^\vartheta_j\) puts mass between \(\vartheta\) and \(\tau_L(\vartheta)\) where \(F\) still dominates \(L\), but where both are very low, then it may be worthwhile to secure the current payoff \(L_\vartheta\) due to the risk of becoming follower while waiting for the optimum of \(L\). An alternative condition would be that \(L\) is a \emph{submartingale} on \([\vartheta,\tau_L(\vartheta)]\). In Example \ref{exm:attrition}, \(L=\int_0^\cdot\pi^D\,ds\) is the duopolists' payoff process. Then the optimal stopping times in the follower's problem are sufficiently late to support an equilibrium: the perspective to become follower (monopolist) at a time when immediate exit is optimal has no value, and it leads to ceding when \(\pi^D\) seems an unsustainable loss~-- at \(\tau_L(\vartheta)\). Indeed, as a monopolist stops \(\int_0^\cdot\pi^M\,ds\) with \(\pi^M\geq\pi^D\), that optimal stopping time satisfies \(\tau_F(\vartheta)\geq\tau_L(\vartheta)\). Furthermore, it holds that \(F=L=M\) a.s.\ at \(\tau_F(\vartheta)\), so in particular simultaneous stopping is feasible on \(\{\tau_L(\vartheta)=\tau_F(\vartheta)\}\) by \(F=M\). These properties generate a whole class of equilibria with varying roles of the players, decided by events \(C\) at \(\tau_L(\vartheta)\). \begin{proposition}\label{prop:pureeql} Suppose \(F\geq L\geq M\) and that \(F\) is a supermartingale. Fix \(\vartheta\in{\mathscr T}\) and consider a stopping time \(\tau_F(\vartheta)\geq\tau_L(\vartheta)\) a.s., such that at \(\tau_F(\vartheta)\) we have \(F=L\), and more specifically \(F=M\) on the subset \(\{\tau_F(\vartheta)=\tau_L(\vartheta)\}\) a.s.~-- e.g., \(\tau_F(\vartheta):=\inf\{t\geq\vartheta\mid F_t=M_t\}\). Then, for any given event \(C\in{\mathscr F}_{\tau_L(\vartheta)}\), the pure strategies corresponding to \begin{equation*} \tau_1^*=\tau_L(\vartheta)\indi{C}+\tau_F(\vartheta)\indi{C^c}\quad\text{and}\quad\tau_2^*=\tau_L(\vartheta)\indi{C^c}+\tau_F(\vartheta)\indi{C} \end{equation*} form an equilibrium in the subgame beginning at \(\vartheta\). \end{proposition} \noindent {\it Proof:} In Appendix \ref{app:miscproofs}. \medskip Equilibria in pure strategies typically involve asymmetric payoffs, for instance if \(F>L\) at \(\tau_L(\vartheta)\) in those that we have specified. Consequently, there arises a coordination problem before the start of the game, each player wanting to become follower eventually. This problem is even aggravated in the equilibria of Proposition \ref{prop:pureeql}, where the roles may switch across subgames. For this reason such equilibria are also not easy to aggregate to a subgame-perfect equilibrium: for each subgame starting at some \(\vartheta\in{\mathscr T}\), an event \(C\in{\mathscr F}_{\tau_L(\vartheta)}\) has to be agreed on that determines the respective roles. Maybe even more importantly, no player can obtain the preferred follower payoff by taking or threatening to take a certain action, but only by the threat of taking \emph{no} action for a longer time, which has to induce the opponent to stop. Effectively, players compete in the credibility to take no action. Such problems can be avoided by allowing for mixed strategies, making the players indifferent about the roles when stopping occurs. This is our topic in the following. 
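Before turning to mixed strategies, we note that the stopping times \(\tau_L(\vartheta)\) and \(\tau^L(\vartheta)\) from \eqref{tauopt}, which drive the pure strategy equilibria above, are straightforward to compute once the Snell envelope is known. The following minimal sketch (in Python) illustrates these objects for a deterministic payoff path on a discrete time grid, where the Snell envelope reduces to the running future maximum and the martingale part of its Doob--Meyer decomposition is constant; in the general stochastic case one would instead use conditional expectations, e.g.\ backward induction on a tree.
\begin{verbatim}
import numpy as np

def snell_deterministic(L):
    # For a deterministic payoff sequence L[0..T], the Snell envelope is
    # the running future maximum U[t] = max_{s >= t} L[s]; its Doob-Meyer
    # decomposition has constant martingale part M = U[0] and increasing
    # compensator D[t] = U[0] - U[t].
    L = np.asarray(L, dtype=float)
    U = np.maximum.accumulate(L[::-1])[::-1]
    D = U[0] - U
    return U, D

def optimal_times(L, theta=0):
    # Discrete analogues of tau_L(theta) (first time with U = L) and
    # tau^L(theta) (last index still attaining the optimal value).
    L = np.asarray(L, dtype=float)
    U, _ = snell_deterministic(L[theta:])
    smallest = theta + int(np.argmax(L[theta:] >= U))
    largest = theta + int(np.max(np.nonzero(L[theta:] == U[0])[0]))
    return smallest, largest

L_path = [0.2, 0.5, 0.9, 0.9, 0.4, 0.7]   # illustrative payoff path
print(optimal_times(L_path, theta=0))     # -> (2, 3)
\end{verbatim}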
\section{Equilibria in mixed strategies}\label{sec:eqlmix} The very universal principle of the Snell envelope allows us to construct equilibria in mixed strategies in our general setting as well. But we do not only get existence: our equilibrium strategies can be clearly interpreted like the Snell envelope itself. Recall that the compensator relates to the expected loss from stopping too late. The logic of the following equilibria in symmetric games is this. If \(F\geq L\) but without the other conditions of Lemma \ref{lem:tauLdom}, waiting for future optimal times to stop \(L\) may not be a \emph{dominant} strategy. Nevertheless, the players have an incentive and the possibility to coordinate on not stopping too early, which can be extended until the latest optimal time \(\tau^L(\vartheta)\) to stop \(L\). To cross that point, however, any player has to be compensated by some chance to become follower with \(F>L\), because otherwise any delay would definitely be costly. Of course the opponent has to be willing to provide that chance, so we identify the appropriate rate to compensate \emph{exactly} the impending loss \(dD_L>0\), to make both indifferent. This principle does not work where \(L>F\), however, when there would be much more intense stopping due to a preemption incentive (see Section \ref{sec:eqlsym}). On the other hand, even if we were considering only games with a global second-mover advantage, there may be equilibria with even higher (symmetric) payoffs~-- if simultaneous stopping is feasible and sufficiently profitable at some future point, precisely where \(M\geq F>L\). For these reasons we need to adapt the appropriate payoff process that players can coordinate on. \begin{theorem}\label{thm:mixedeql} Consider a subgame beginning at \(\vartheta\in{\mathscr T}\) with \(F_\vartheta\geq L_\vartheta\) a.s.\ on \(\{\vartheta<\infty\}\), and another stopping time \(\tau^\vartheta\in{\mathscr T}\) taking values in \([\vartheta,\,\inf\{t\geq\vartheta\mid F_t<L_t\}]\) a.s. Define the payoff process \(\tilde L^{\tau^\vartheta}:=\indi{t<\tau^\vartheta}L+\indi{t\geq\tau^\vartheta}\max(F_{\tau^\vartheta},M_{\tau^\vartheta})\) and as \(D_{\tilde L}^{\tau^\vartheta}\) the compensator of its Snell envelope. Then there exists a payoff-symmetric equilibrium with mixed strategies satisfying \begin{flalign} && G^\vartheta_i(t)&=1-\indi{t<\tau^\vartheta}\exp\biggl(-\int_\vartheta^t\frac{\,dD_{\tilde L}^{\tau^\vartheta}(s)}{F_s-L_s}\biggr) & \label{Geql}\\ &\text{and} \nonumber\\ && G^\vartheta_j(t)&=1-\indi{t<\tau^\vartheta}\exp\biggl(-\int_\vartheta^t\frac{\indi{F_s>L_s}\,dD_{\tilde L}^{\tau^\vartheta}(s)}{F_s-L_s}\biggr), & \label{Gjeql} \end{flalign} \(i,j\in\{1,2\}\), \(i\not=j\), iff \(\Delta G^\vartheta_i(M-F)\leq 0\) at \(\inf\{t\in\mathbb{R}_+\mid G^\vartheta_i(t)=1\}<\tau^\vartheta\) and \(\Delta G^\vartheta_i(M-F)\geq 0\) at \(\tau^\vartheta<\infty\) a.s. There further exists a symmetric equilibrium with both players using the strategy \eqref{Geql} iff \(\Delta G^\vartheta_i(M-F)=0\) a.s.\ at \(\inf\{t\in\mathbb{R}_+\mid G^\vartheta_i(t)=1\}<\tau^\vartheta\). \end{theorem} \noindent {\it Proof:} In Appendix \ref{app:miscproofs}. \medskip The ``endpoint condition'' at \(\inf\{t\in\mathbb{R}_+\mid G^\vartheta_i(t)=1\}=:\tau^{G,\vartheta}_i(1)\) seems contradictory, but it plays the following role. First suppose \(\Delta G^\vartheta_i(\tau^\vartheta)>0\), so \(\tau^{G,\vartheta}_i(1)=\tau^\vartheta\) and there is a joint terminal jump. 
This means that the players coordinate on the terminal payoff \(M_{\tau^\vartheta}\), which is only feasible if \(M_{\tau^\vartheta}\geq F_{\tau^\vartheta}\) (recall \(M_\infty=F_\infty\) by convention). \(G^\vartheta_i\) can indeed jump to 1 before \(\tau^\vartheta\), where \(F=L\). There we choose \(G^\vartheta_j\) continuous to address the case \(F=L>M\), where payoffs are hence symmetric. This choice can only be an equilibrium, however, if indeed \(F\geq M\). Otherwise \(j\) could obtain a higher value by stopping at \(\tau^{G,\vartheta}_i(1)\) and not supporting the equilibrium earlier on; we could then simply adjust \(\tau^\vartheta\) to ensure the correct continuation values. Finally, \(G^\vartheta_i\) can reach 1 also continuously (then \(G^\vartheta_j\equiv G^\vartheta_i\)). This case is one reason why we write ``\(\max(F_{\tau^\vartheta},M_{\tau^\vartheta})\)'' in the definition of \(\tilde L^{\tau^\vartheta}\), although we require \(\Delta G^\vartheta_i(M-F)\geq 0\) at \(\tau^\vartheta\). Even if \(\Delta G^\vartheta_i=0\), the terminal value of \(\tilde L^{\tau^\vartheta}\) determines the continuation values on which players coordinate earlier on. Putting \(M_{\tau^\vartheta}\) regardless would not be correct. Another reason is that we will indeed obtain continuation equilibria with payoff \(\max(F_{\tau^\vartheta},M_{\tau^\vartheta})\) when we take up \emph{extended} mixed strategies in Section \ref{sec:eqlsym}. The strategies here are continuous except for terminal jumps. As motivated above and given the appropriate payoff process \(\tilde L^{\tau^\vartheta}\), the opponent's stopping rate \(dD_{\tilde L}^{\tau^\vartheta}/(F-L)\) makes each player indifferent when it would seem optimal to secure \(L_\cdot\) beforehand. That probability to obtain \(F_\cdot>L_\cdot\) exactly compensates any expected loss from forgoing \(L_\cdot\). The resulting equilibrium payoffs are \begin{equation*} V^\vartheta_1=V^\vartheta_2=\esssup_{\vartheta\leq\tau\in{\mathscr T}}E\Bigl[\indi{\tau<\tau^\vartheta}L_\tau+\indi{\tau\geq\tau^\vartheta}\max(F_{\tau^\vartheta},M_{\tau^\vartheta})\!\Bigm\vert\!{\mathscr F}_\vartheta\Bigr]:=U_{\tilde L}^{\tau^\vartheta}(\vartheta). \end{equation*} The proof of Theorem \ref{thm:mixedeql} is based on martingale arguments. An important aspect is to take care of the different kinds of jumps in the strategies and to ensure that the underlying payoff process \(\tilde L^{\tau^\vartheta}\) has the necessary properties (e.g., that \(D_{\tilde L}^{\tau^\vartheta}\) is continuous). Where stopping happens continuously, it need not have a rate with respect to time \(dt\), however (though it does in the explicit Brownian example in Section \ref{sec:expduo}); it might only take place on a set of time points of measure 0.\footnote{% In this case \(G^\vartheta_i\) would be a \emph{singular} measure, which often appear in optimal control of Brownian models. } The strategies are trivial in the present equilibria~-- given that the endpoint is feasible~-- if either \(L\) or \(F\) is a (sub-)martingale on \([\vartheta,\tau^\vartheta]\); then there is no loss from waiting and \(dD_{\tilde L}^{\tau^\vartheta}\equiv 0\).\footnote{% Cf.\ \cite{RiedelSteg14}, Theorem 3.3, for this case, but also their Section 4.3 concerning issues in asymmetric games. } \begin{remark} It may happen that \(D_{\tilde L}\) and hence \(G^\vartheta_i\) has jumps if \(L\) is only upper-semi-continuous from the right (and the left). 
In the specified equilibria, waiting is always at least as good as obtaining \(L\) and there must be indifference at increases of \(D_{\tilde L}\). This is of course not possible with a joint mass point where \(L>M\).\footnote{% \emph{Example}: No symmetric payoff equilibrium if \(L\) (\(>M\)) not right-continuous. \medskip \begin{minipage}[c]{0.25\linewidth} \centering \begin{tikzpicture}[inner sep=0pt,minimum size=0pt,label distance=3pt] \draw[->] (-0.1,0) -- (3,0) {}; \draw[->] (0,-0.3) -- (0,2) {}; \draw[-] (0.8,0.2) -- (2.8,1) [] {}; \draw[-] (0.8,1.8) -- (2.8,1) [] {}; \draw[-] (0.8,1) -- (1.6,1.2) []{}; \draw[-] (1.6,1) -- (2.8,1)[]{}; \fill[black] (1.6,1.2) circle (.04); \filldraw[fill=white,draw=black] (1.6,1) circle (.04); \node at (0.8,1.8) [label=left:\(F\)] {}; \node at (0.8,1) [label=left:\(L\)] {}; \node at (0.8,0.2) [label=left:\(M\)] {}; \node at (1.6,0) [label=below:\(T_1\)] {}; \node at (2.8,0) [label=below:\(T_2\)] {}; \end{tikzpicture} \end{minipage} \hfill \begin{minipage}[c]{0.7\linewidth} Waiting is strictly optimal for \(i\) at \(t\in(T_1,T_2)\) if \(G_j(T_2-)>G_j(t)\), hence \(G_i(T_2-)=G_i(T_1)\) and \(G_j(T_2-)=G_j(T_1)\) by payoff symmetry, giving a continuation payoff \(L(T_2)\) on \((T_1,T_2]\). The only symmetric continuation payoff at \(T_1\) is then in \((L(T_2),L(T_1))\) from \(\Delta G_i(T_1)=\Delta G_j(T_1)\in(0,1)\). Waiting is also strictly dominant on \([0,T_1)\), but stopping short of \(T_1\) now yields a higher payoff than stopping at \(T_1\). \end{minipage} } Theorem \ref{thm:mixedeql} remains true if instead \(L\equiv M\) (e.g., in an attrition model); see Remark \ref{rem:eqlLusc}. \end{remark} The equilibria of Theorem \ref{thm:mixedeql} can deal so far only with subgames satisfying \(F_\vartheta\geq L_\vartheta\). Therefore, if we want to aggregate them to a subgame-perfect equilibrium, we have to assume \(F\geq L\) for now. Then existence is trivial by setting \(\tau^\vartheta=\inf\{t\geq\vartheta\mid M_t>F_t\}\) for all \(\vartheta\in{\mathscr T}\). If also \(F\geq M\) throughout, we get the further simplification that we can set \(\tau^\vartheta\equiv\infty\) and hence \(\tilde L^{\tau^\vartheta}=L\), \(D_{\tilde L}^{\tau^\vartheta}=D_L\). In the latter case the stopping rates do not depend on \(\vartheta\), which ensures time consistency. In general, however, we do not preclude the possibility that at some \(\tau^\vartheta\), \(\max(F,M)>L\) or \(\max(F,M)<U_L\), such that \(D_{\tilde L}^{\tau^\vartheta}\not=D_L\) on \([\vartheta,\tau^\vartheta]\). Then time consistency requires that \(\tau^\vartheta\) will not be changed if it has not been passed yet: for any two subgames \(\vartheta,\vartheta'\in{\mathscr T}\) we should have \(\tau^\vartheta=\tau^{\vartheta'}\) on \(\{\vartheta\leq\vartheta'\leq\tau^\vartheta\}\) and vice versa (in summary, \(\tau^\vartheta=\tau^{\vartheta'}\) on \(\{(\vartheta\vee\vartheta')\leq(\tau^\vartheta\wedge\tau^{\vartheta'})\}\), noting that \(\vartheta\leq\tau^\vartheta\) and \(\vartheta'\leq\tau^{\vartheta'}\) a.s.). We now have the following payoff-symmetric subgame-perfect equilibria with ``standard'' mixed strategies for games with systematic second-mover advantage. \begin{theorem}\label{thm:SPE} Assume \(F\geq L\) and fix \(i,j\in\{1,2\}\), \(i\not=j\). 
If we have an equilibrium as in Theorem \ref{thm:mixedeql} for any \(\vartheta,\vartheta'\in{\mathscr T}\) with \(\tau^\vartheta=\tau^{\vartheta'}\) a.s.\ on \(\{(\vartheta\vee\vartheta')\leq(\tau^\vartheta\wedge\tau^{\vartheta'})\}\)~-- as we do by setting \(\tau^\vartheta=\inf\{t\geq\vartheta\mid M_t>F_t\}\;\forall\vartheta\in{\mathscr T}\), e.g.~-- then the strategies \(\bigl(G^\vartheta_1\bigr)_{\vartheta\in{\mathscr T}}\) and \(\bigl(G^\vartheta_2\bigr)_{\vartheta\in{\mathscr T}}\) form indeed a subgame-perfect equilibrium. \end{theorem} \noindent {\it Proof:} In Appendix \ref{app:miscproofs}. \medskip Even if the time-consistency condition for the family \(\{\tau^\vartheta\colon\vartheta\in{\mathscr T}\}\) postulated in the theorem statement holds, we then have a family \(\{D_{\tilde L}^{\tau^\vartheta}\colon\vartheta\in{\mathscr T}\}\) that needs to induce time-consistent stopping rates \((dG^\vartheta_i)_{\vartheta\in{\mathscr T}}\). This is the main point of Theorem \ref{thm:SPE}, given optimality by Theorem \ref{thm:mixedeql}. \section{Example: Exit from a duopoly}\label{sec:expduo} In this section we illustrate the simplification we get with games having a systematic second-mover advantage, which we pointed out in the context of Theorem \ref{thm:mixedeql}. Specifically, we determine subgame-perfect equilibrium strategies by explicitly deriving the Snell envelope \(U_L\) and its compensator \(D_L\) for a version of the market exit game in Example \ref{exm:attrition}. The stopping rate during attrition is then represented in terms of a sustained flow of losses from unprofitable operations. To specify the model, assume that at each time \(t\), discounted duopoly profits are given by \begin{equation*} \pi^D_t = e^{-rt}(Y_t-c), \end{equation*} where \(c>0\) is a constant operating cost and revenues \((Y_t)_{t\geq 0}\) follow a geometric Brownian motion solving \(dY=\mu Y\,dt+\sigma Y\,dB\). If either firm becomes monopolist, the profit stream changes to \begin{equation*} \pi^M_t = e^{-rt}(mY_t-c), \end{equation*} where \(m>1\). Each firm can decide to leave the market with accumulated payoff \(L_t=M_t=\int_0^te^{-rs}(Y_s-c)\,ds\), for example if \(Y\) gets so low that the revenue does not cover the production costs. In such a phase the game is a war of attrition if monopoly seems profitable. However, it may also be optimal to stop immediately in the follower's problem \begin{equation*} F_t=L_t+\esssup_{t\leq\tau^F\in{\mathscr T}}E\biggl[\int_t^{\tau^F}e^{-rs}(mY_s-c)\,ds\!\biggm\vert\!{\mathscr F}_t\biggl]. \end{equation*} The latter problem is a standard exercise under the condition \(r>\max(\mu,0)\),\footnote{% This is also necessary and sufficient for the processes to be of class (D) in accordance with Assumption \ref{asm:payoffs}. Then \(-c/r\leq L_t\leq\int_0^\infty e^{-rs}\abs{Y_s-c}\,ds\in L^1(P)\) and similarly for \(F\), inserting \(m\). } and its unique solution is to stop as soon as \(Y\) falls below the threshold \begin{equation*} y_m = \frac{\beta_2}{\beta_2-1}\frac{r-\mu}{r}\frac{c}{m}<\frac{c}{m}, \end{equation*} where \(\beta_2\) is the negative root of the quadratic equation \(\frac{1}{2}\sigma^2\beta(\beta-1)+\mu\beta-r=0\). 
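As a quick numerical illustration, \(\beta_2\) and the threshold can be evaluated directly. The following minimal Python sketch uses purely illustrative parameter values (any \(r>\max(\mu,0)\), \(\sigma>0\), \(c>0\) and \(m>1\) would do); the case \(m=1\) gives the corresponding threshold for the duopoly payoff used below.
\begin{verbatim}
import math

def beta2(r, mu, sigma):
    # negative root of 0.5*sigma^2*b*(b-1) + mu*b - r = 0
    a = mu / sigma**2 - 0.5
    return -a - math.sqrt(a**2 + 2.0 * r / sigma**2)

def exit_threshold(r, mu, sigma, c, m):
    # y_m = beta2/(beta2 - 1) * (r - mu)/r * c/m
    b2 = beta2(r, mu, sigma)
    return b2 / (b2 - 1.0) * (r - mu) / r * c / m

r, mu, sigma, c, m = 0.05, 0.01, 0.20, 1.0, 1.5   # illustrative values
y_m = exit_threshold(r, mu, sigma, c, m)    # follower (monopoly) threshold
y_1 = exit_threshold(r, mu, sigma, c, 1.0)  # m = 1: duopoly threshold
\end{verbatim}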
The value of the stopping problem can be explicitly expressed as
\begin{equation}\label{Fexpl}
F_t-L_t=e^{-rt}\indi{Y_t>y_m}\biggl[\frac{mY_t}{r-\mu}-\frac{c}{r}-\left(\frac{Y_t}{y_m}\right)^{\beta_2}\left(\frac{my_m}{r-\mu}-\frac{c}{r}\right)\biggr],
\end{equation}
which shows that \(F\) is in fact a continuous process and \(F=L\,(=M)\Leftrightarrow Y\leq y_m\).
Hence, for any equilibrium as in Theorem \ref{thm:mixedeql}, we need \(\tau^\vartheta\geq\inf\{t\geq\vartheta\mid Y_t\leq y_m\}\) for the endpoint condition.
On the other hand, stopping is strictly dominant for a monopolist as soon as \(Y\leq y_m\), and so it is in duopoly, where revenues can never exceed those in monopoly.
Therefore we can choose \(\tau^\vartheta=\inf\{t\geq\vartheta\mid Y_t\leq y_m\}\) without loss and it will lead to a symmetric equilibrium at any \(\vartheta\in{\mathscr T}\) as follows.
Since \(F\) is a supermartingale for \(m\geq 1\) and dominates \(L\), it also dominates the Snell envelope \(U_L\) of the latter, such that we have \(F=L\Rightarrow F=U_L=L\).
Consequently, \(\tilde L^{\tau^\vartheta}\) in Theorem \ref{thm:mixedeql} here is just \(L\) stopped at \(\tau^\vartheta\), and the Snell envelope \(U_{\tilde L}^{\tau^\vartheta}\) coincides with \(U_L\) until \(\tau^\vartheta\).
Applying the RHS of \eqref{Fexpl} with \(m=1\) yields the solution to optimally stopping the leader (duopoly) payoff:
\begin{align*}
U_L(t)={}&\esssup_{t\leq\tau\in{\mathscr T}}E[L_\tau\mid{\mathscr F}_t]=L_t+e^{-rt}\indi{Y_t>y_1}\biggl[\frac{Y_t}{r-\mu}-\frac{c}{r}-\left(\frac{Y_t}{y_1}\right)^{\beta_2}\left(\frac{y_1}{r-\mu}-\frac{c}{r}\right)\biggr].
\end{align*}
Applying It\=o's lemma shows that the monotone part of the supermartingale \(U_L\) is just the drift
\[dD_L=-\indi{Y_t<y_1}\,dL=\indi{Y_t<y_1}e^{-rt}(c-Y_t)\,dt \]
where immediately stopping \(L\) is optimal.
With \(\tau^\vartheta=\inf\{t\geq\vartheta\mid Y_t\leq y_m\}\) for every \(\vartheta\in{\mathscr T}\), \(dD^{\tau^\vartheta}_{\tilde L}=dD_L\) and \eqref{Fexpl} we now have a fully explicit symmetric subgame-perfect equilibrium, with payoffs \(V^\vartheta_i\bigl(G^\vartheta_i,G^\vartheta_j\bigr)=U_L(\vartheta)\), respectively.
As \(y_m<y_1<c\), we see that \(dD_L\) is simply the stream of losses resulting from unprofitable operations.
If a duopolist never hoped to become monopolist, these losses would be too large to keep operating.
Here, whenever \(Y\in(y_m,y_1)\), both firms are leaving the duopoly at a rate that depends directly on those running losses; it is decreasing in \(Y\).
The state may next rise into the region \((y_1,c)\).
Then there are still running losses, but the firms suspend mixing because the option to wait for a market recovery is sufficiently valuable.
Thus there is no need for compensation.
There will typically be alternating periods of continuous and no mixing.
If the state drops to \([0,y_m]\), however, the option to wait for market recovery would be worthless in the face of running losses even if a firm was (sure to become) monopolist, and both firms quit immediately.
\section{Equilibria for general symmetric games}\label{sec:eqlsym}
\subsection{Preemption with extended mixed strategies}\label{subsec:extended}
In a preemption situation, i.e., when there is a first-mover advantage \(L>F\), there typically exist no equilibria in pure strategies in continuous time.
\cite{FudenbergTirole85} and \cite{HendricksWilson92} show that this issue arises when there is an incentive to wait (\(L\) is increasing).
If the model is sufficiently regular and the first-mover advantage is strict, then one may have equilibria in (standard) mixed strategies, with one player stopping immediately and the other stopping at a sufficient rate, such that the first would not be able to realize the increase in \(L\). The payoffs are then asymmetric, \(L\) and \(F\). However, these equilibria cannot be extended to the boundary of the preemption region; if \(L=F\), the necessary stopping rate to support any equilibrium explodes. This observation does not depend on any regularity conditions, the payoff processes can be arbitrarily smooth and deterministic. Therefore, if we want to allow for any equilibria where preemption will be set off (or also symmetric payoff equilibria where \(L>F\)), we need to enrich the strategies. The key is to facilitate some partial coordination when players try to stop at the same time, but when simultaneous stopping would be the worst outcome. Hence we make use of the strategy extensions \(\alpha^\vartheta_i\) from Definition \ref{def:alpha}, which have been introduced in \cite{RiedelSteg14} and which follow in spirit those of \cite{FudenbergTirole85}. With these extended strategies one can capture the continuous-time limits of symmetric, mixed discrete-time equilibria, which do not suffer from such problems.\footnote{% In discrete time, there can be equilibria with a positive probability of simultaneous stopping even if that is the worst outcome, because the players can only assign positive probabilities to the single periods; one cannot circumvent coordination failure by stopping \(\varepsilon\) after a mass point of the other. } We then obtain the following equilibria of immediate stopping for subgames with a first-mover advantage~-- here for a symmetric game:\footnote{% The extension to \(\{M>F\}\) is straightforward in the symmetric case. } \begin{proposition}[{\cite{RiedelSteg14}, Proposition 3.1}]\label{prop:eqlL>F} Fix \(\vartheta\in{\mathscr T}\) and suppose \(\vartheta=\inf\{t\geq\vartheta\mid L_t>F_t\}\) a.s. Then \(\bigl(G^\vartheta_1,\alpha^\vartheta_1\bigr)\), \(\bigl(G^\vartheta_2,\alpha^\vartheta_2\bigr)\) defined by \begin{equation*} \alpha^\vartheta_i(t)=\begin{cases} 1 & \text{if}\quad M_t\geq F_t\text{ and }t=\inf\{u\geq t\mid L_u>F_u\},\\[6pt] \displaystyle\indi{L_t>F_t}\frac{L_t-F_t}{L_t-M_t} & \text{else} \end{cases} \end{equation*} for any \(t\in[\vartheta,\infty)\) and \(G^\vartheta_i=\indi{t\geq\vartheta}\), \(i=1,2\), are an equilibrium in the subgame at \(\vartheta\). The resulting payoffs are \(V_i^\vartheta(G_i^\vartheta,\alpha^\vartheta_i,G_j^\vartheta,\alpha^\vartheta_j)=\max(F_\vartheta,M_\vartheta)\). \end{proposition} \begin{remark}\label{rem:eqlL>F} Where \(L>F\geq M\), the choice of \(\alpha^\vartheta_i\) makes the respective other player indifferent between stopping and waiting, and \(\alpha^\vartheta_i(\cdot)\) is right-continuous, allowing a limit outcome argument. Where \(M>F\), stopping is of course the unique best reply. In the polar case \(L_t=F_t=M_t\), there might not be a right-hand limit of \(\indi{L_t>F_t}\frac{L_t-F_t}{L_t-M_t}\), so we set \(\alpha^\vartheta_i(t)=1\). If the limit does exist, one can use it to make \(\alpha^\vartheta_i(\cdot)\) right-continuous even here, since the players will be completely indifferent in this case. 
\end{remark} If \(L_\vartheta=F_\vartheta>M_\vartheta\), each player becomes leader or follower with probability \(\frac{1}{2}\).\footnote{% Then the \(\liminf\) and \(\limsup\) in Definition \ref{def:outcome} are both \(\frac{1}{2}\) with the strategies of Proposition \ref{prop:eqlL>F}. } This is the same outcome as in \cite{FudenbergTirole85} for their smooth, deterministic model. If \(L_\vartheta>F_\vartheta>M_\vartheta\), however, there is a positive probability of simultaneous stopping, which is the price of preemption, driving the payoffs down to \(F_\vartheta\). \subsection{General symmetric equilibria}\label{subsec:eqlsym} We can now combine the equilibria we obtained for \(F\geq L\) and \(L>F\), respectively. With standard mixed strategies, the equilibria for a current second-mover advantage of Theorem \ref{thm:mixedeql} depend on the ``endpoint condition'' \(\Delta G^\vartheta_i(M-F)\geq 0\), e.g., where a preemption regime begins with both players trying to stop immediately. Proposition \ref{prop:eqlL>F}, however, gives us ``continuation'' equilibria of immediate stopping at such transitions with payoffs \(\max(F_\cdot,M_\cdot)\). Indeed, if player \(j\) uses an \emph{extended} mixed strategy, the payoff difference for player \(i\) between stopping and waiting where \(G^\vartheta_j\) jumps to 1 at \(\hat\tau^\vartheta_j=\inf\{t\geq\vartheta\mid\alpha^\vartheta_j(t)>0\}\) changes from \(\Delta G^\vartheta_j\bigl(M-F\bigr)\) to \begin{equation* \Delta G^\vartheta_j\bigl(\alpha^\vartheta_jM+\bigl(1-\alpha^\vartheta_j\bigr)L-F\bigr), \end{equation*} which is nonnegative if \(\alpha^\vartheta_j\) is as in Proposition \ref{prop:eqlL>F}. Therefore, this possibility to coordinate partially in preemption also generates suitable endpoints for attrition regimes where we cannot have \(M_\cdot\geq F_\cdot\) before reaching \(\{L>F\}\). Now, for any symmetric stopping game~-- where the payoff processes \(L\), \(F\) and \(M\) do not depend on the individual players~-- there exists a payoff-symmetric subgame-perfect equilibrium: \begin{theorem}\label{thm:symeql} Under Assumption \ref{asm:payoffs} there exists a payoff-symmetric subgame-perfect equilibrium in extended mixed strategies \(\bigl(G_1,\alpha_1\bigr)\), \(\bigl(G_2,\alpha_2\bigr)\) given as follows: \medskip Pick \(i,j\in\{1,2\}\), \(i\not=j\). For any \(\vartheta\in{\mathscr T}\), set \(\tau^\vartheta:=\inf\{t\geq\vartheta\mid L_t>F_t\text{ or }M_t>F_t\}\). Define \(G^\vartheta_i\), \(G^\vartheta_j\) as in Theorem \ref{thm:mixedeql} and \(\alpha^\vartheta_i=\alpha^\vartheta_j\) as in Proposition \ref{prop:eqlL>F}. \medskip Further, if the payoff processes are such that for any stopping time \(\tau\in{\mathscr T}\) with \(L_\tau=F_\tau\), either \(F_\tau=M_\tau\) or \(\tau=\inf\{t>\tau\mid L_t>F_t\}\) a.s., then there is a symmetric subgame-perfect equilibrium using \(G^\vartheta_i\) from Theorem \ref{thm:mixedeql} for both \(i=1,2\) and any \(\vartheta\in{\mathscr T}\). \end{theorem} \noindent {\it Proof:} In Appendix \ref{app:miscproofs}. \medskip The idea of these equilibria is virtually pasting the war of attrition that we have on \(\{F\geq L\}\) using the continuous strategies by Theorem \ref{thm:mixedeql} with the preemption equilibria of immediate stopping on \(\{L>F\}\) by extended mixed strategies as in Proposition \ref{prop:eqlL>F}. 
By the upper-semi-continuity of Assumption \ref{asm:payoffs}\,\ref{LFusc}, it is during attrition feasible for the players to coordinate on a future continuation equilibrium with payoffs \(\max(F_\cdot,M_\cdot)\). Then there will be no predictable drop in payoffs from setting off preemption. The corresponding equilibrium payoffs are \begin{equation*} V^\vartheta_1=V^\vartheta_2=\esssup_{\vartheta\leq\tau\in{\mathscr T}}E\Bigl[\indi{\tau<\tau^\vartheta}L_\tau+\indi{\tau\geq\tau^\vartheta} \max(F_{\tau^\vartheta},M_{\tau^\vartheta})\!\Bigm\vert\!{\mathscr F}_\vartheta\Bigr]\end{equation*} with \(\tau^\vartheta=\inf\{t\geq\vartheta\mid L_t>F_t\text{ or }M_t>F_t\}\) for any \(\vartheta\in{\mathscr T}\). While the endpoint condition \(\Delta G^\vartheta_i(M-F)\geq 0\) at \(\tau^\vartheta\) is now replaced by the preemption continuation equilibria, we still need to ensure the second one, \(\Delta G^\vartheta_i(M-F)\leq 0\) at \(\tau^{G,\vartheta}_i(1)\); the cap \(\tau^\vartheta\wedge\inf\{t\geq\vartheta\mid M_t>F_t\}\) works generally, but there may also be alternative choices in more specific cases. The proof of Theorem \ref{thm:symeql} relies of course on those of Theorem \ref{thm:mixedeql} and Proposition \ref{prop:eqlL>F}. The main issue is that the former was formulated in a reduced setting with ``standard'' mixed strategies, so we establish a formal relation to the present setting with extended mixed strategies. \section{Efficient symmetric equilibrium}\label{sec:symeql} The equilibria of Theorem \ref{thm:symeql} involve maximal preemption~-- wherever \(L>F\)~-- and thus have a relatively simple structure: the game ends as soon as there is a strict first-mover advantage. Preemption need not be that severe if there are future continuation equilibria with sufficiently high (expected) payoffs. In this section we identify equilibria with least possible preemption, entailing the highest attainable equilibrium payoffs. We focus on the class of \emph{payoff-symmetric equilibria}, which are the subgame-perfect equilibria with \(V^\vartheta_1=V^\vartheta_2\) a.s.\ at any stopping time \(\vartheta\in{\mathscr T}\). These have strong implications for equilibrium strategies. Then, in competitive games, where \(M\) is throughout the lowest payoff, equilibrium payoffs are at most what can be obtained from optimally stopping \(\min(L,F)\)~-- no matter how players mix, possibly using public correlation (Proposition \ref{prop:U_min(L,F)}). This bound on equilibrium payoffs enables us then to identify inevitable preemption points: those where the leader payoff \(L\) exceeds any continuation equilibrium payoff. Theorem \ref{thm:maxeql} formulates a corresponding algorithm and establishes the existence of ``efficient'' subgame-perfect equilibria. It is quite clear that any stopping on \(\{F>L\}\) must induce the lower payoff \(L\) if \(M\) is generally the worst. The basis of our argument is the more subtle result that players also cannot exploit \(L>F\) by mixing in any payoff-symmetric equilibrium, even if they have no time constraint. \begin{proposition}\label{prop:U_min(L,F)} Suppose \(M\leq\min(L,F)\). 
Then, in any payoff-symmetric equilibrium and for any \(\vartheta\in{\mathscr T}\), \(i,j\in\{1,2\}\), \(i\not=j\), \begin{equation* V_i^\vartheta\bigl(G_i^\vartheta,\alpha^\vartheta_i,G_j^\vartheta,\alpha^\vartheta_j\bigr)\leq U_{L\wedge F}(\vartheta)=\esssup_{\vartheta\leq\tau\in{\mathscr T}}E\bigl[L_\tau\wedge F_\tau\!\bigm\vert\!{\mathscr F}_\vartheta\bigr], \end{equation*} where we can in fact restrict ourselves to stopping times \(\tau\leq\inf\{t\geq\vartheta\mid G^\vartheta_i\vee G^\vartheta_j\geq 1\}\). \end{proposition} The proof of Proposition \ref{prop:U_min(L,F)} in Appendix \ref{app:U_min(L,F)} is based on the following important facts for any payoff-symmetric equilibrium (which do not depend on the assumption \(M\leq\min(L,F)\), yet): First, the conditional stopping probabilities of the players must be the same on \(\{F\not=L\}\) (Lemma \ref{lem:dGi=dGj}), since a player who stops with a higher conditional probability also becomes leader with a higher conditional probability, whereas the other becomes follower on that event. As one consequence, \(G^\vartheta_1\) and \(G^\vartheta_2\) must then even be identical before they put any mass on \(\{F=L\}\) (Lemma \ref{lem:Gi=Gj}). As another consequence, on \(\{F\not=L\}\) we can only have simultaneous jumps. These are only possible if \(M_\cdot\geq F_\cdot\), or if preemption occurs with \(L_\cdot\geq F_\cdot>M_\cdot\). Most importantly, we cannot have any jumps where \(F>\max(L,M)\). Finally, the local payoff from any terminal jump is bounded by \(\max(F,M)\) (Lemma \ref{lem:DG>0}). The intuition for Proposition \ref{prop:U_min(L,F)} is now the following. In any equilibrium, player \(i\) must be willing to wait until any point at which \(G^\vartheta_i<1\), and stop only from there on with the corresponding conditional probabilities. Consider as such a point the first time at which any player puts some mass on \(\{F\geq L\}\); call it \(\tilde\tau\). By waiting until \(\tilde\tau\), player \(i\) might become follower where \(G^\vartheta_j\) increases earlier on \(\{F<L\}\). At \(\tilde\tau\), by definition at least one player is willing to stop. The corresponding (symmetric) local payoff is clearly \(L_{\tilde\tau}\leq F_{\tilde\tau}\) if \(G^\vartheta_1\), \(G^\vartheta_2\) are continuous; a jump can only occur if indeed \(F_{\tilde\tau}=L_{\tilde\tau}\), which is also the maximal local payoff (with the hypothesis \(M\leq \min(L,F)\), we cannot have any jump where \(F_{\tilde\tau}>L_{\tilde\tau}\) as we have seen). Finally, it might happen that \(G^\vartheta_i\) is exhausted on \(\{F<L\}\), before ever reaching \(\tilde\tau\). Then, however, we must have \(G^\vartheta_1=G^\vartheta_2\). If they jump to 1, the terminal payoff is at most \(F<L\); if they attain 1 continuously, this means in the limit becoming follower for sure on \(\{F<L\}\). In summary, player \(i\) never receives more than \(\min(L,F)\) where stopping occurs. Proposition \ref{prop:U_min(L,F)} implies that whenever \(L_\vartheta>U_{L\wedge F}(\vartheta)\), we must have \(G^\vartheta_1(\vartheta)\vee G^\vartheta_2(\vartheta)=1\) by preemption.\footnote{% This argument is not impaired by any jump \(\Delta G^\vartheta_j(\vartheta)\in(0,1)\) due to which player \(i\) could not realize \(L_\vartheta\). \(L\) is right-continuous, so player \(i\) could try to stop right after \(\vartheta\). The formal argument is given in the proof of Theorem \ref{thm:maxeql}. 
} If there are any such preemption points in the future, they also restrict the feasible stopping times \(\tau\) to maximize the expected value of \(\min(L,F)\) in Proposition \ref{prop:U_min(L,F)}, which even further reduces the maximally attainable equilibrium payoff. By iteration we can identify where preemption is inevitable. \begin{theorem}\label{thm:maxeql} Suppose \(M\leq\min(L,F)\). Then there exists a maximal payoff-symmetric equilibrium with value \begin{equation* V^\vartheta_1=V^\vartheta_2=\esssup_{\tau\in[\vartheta,\tilde\tau(\vartheta)]}E\bigl[L_\tau\wedge F_\tau\!\bigm\vert\!{\mathscr F}_\vartheta\bigr]:=U_{(L\wedge F)^{\tilde\tau(\vartheta)}}(\vartheta) \end{equation*} for any \(\vartheta\in{\mathscr T}\), where \(\tilde\tau(\vartheta)\) is the latest sustainable preemption point after \(\vartheta\) determined by the following algorithm: \begin{enumerate} \item Set \(\tau_0(\vartheta):=\inf\{t\geq\vartheta\mid L>U_{L\wedge F}\}\) and \begin{equation* (L\wedge F)^{\tau_0(\vartheta)}:=\bigl(L_{t\wedge\tau_0(\vartheta)}\wedge F_{t\wedge\tau_0(\vartheta)}\bigr)_{t\geq 0} \end{equation*} with Snell envelope \(U_{(L\wedge F)^{\tau_0(\vartheta)}}=\bigl(\esssup_{t\leq\tau\in{\mathscr T}}E\bigl[(L\wedge F)^{\tau_0(\vartheta)}_\tau\!\bigm\vert\!{\mathscr F}_t\bigr]\bigr)_{t\geq 0}\). \item For all \(n\in\mathbb{N}\) set \(\tau_n(\vartheta):=\inf\{t\geq\vartheta\mid L>U_{(L\wedge F)^{\tau_{n-1}(\vartheta)}}\}\wedge\tau_{n-1}(\vartheta)\) and \begin{equation* (L\wedge F)^{\tau_n(\vartheta)}:=\bigl(L_{t\wedge\tau_n(\vartheta)}\wedge F_{t\wedge\tau_n(\vartheta)}\bigr)_{t\geq 0} \end{equation*} with Snell envelope \(U_{(L\wedge F)^{\tau_n(\vartheta)}}=\bigl(\esssup_{t\leq\tau\in{\mathscr T}}E\bigl[(L\wedge F)^{\tau_n(\vartheta)}_\tau\!\bigm\vert\!{\mathscr F}_t\bigr]\bigr)_{t\geq 0}\). \item Take the monotone limit \(\tilde\tau(\vartheta):=\lim_{n\to\infty}\tau_n(\vartheta)\). \end{enumerate} \end{theorem} \noindent {\it Proof:} In Appendix \ref{app:maxeql}. \medskip The payoff-maximal equilibrium is implemented using the strategies of Theorem \ref{thm:symeql}, but setting \(\alpha^\vartheta_i=0\) before \(\tilde\tau(\vartheta)\). Constructing \(\tilde\tau(\vartheta)\) by the algorithm is technically not difficult. The main problem is rather to verify the claimed equilibrium properties: to make sure that there is no preemption incentive where \(L>F\) on \([\vartheta,\tilde\tau(\vartheta))\), that \(\tilde\tau(\vartheta)\) is indeed maximal, and that there is a continuation equilibrium of preemption at \(\tilde\tau(\vartheta)\). Further, measurability is a major technical issue, since we want to have a time-consistent version of the strategies where we set \(\alpha^\vartheta_i=0\) on \([\vartheta,\tilde\tau(\vartheta))\) for all \(\vartheta\in{\mathscr T}\), to achieve the maximal payoff in all subgames. In order to suppress preemption where \(L_\vartheta>F_\vartheta\), it is obviously not sufficient that there exists \(\tau\geq\vartheta\) such that \(E[L_\tau\wedge F_\tau\mid{\mathscr F}_\vartheta]\geq L_\vartheta\); this relation then rather has to hold on all of \([\vartheta,\tau]\cap\{L>F\}\). For instance, the algorithm of Theorem \ref{thm:maxeql} can be applied to the model of \cite{FudenbergTirole85} by visual inspection, shown in Figure \ref{fig:FTpreem}. \begin{figure}[ht] \centering \begin{tikzpicture}[inner sep=0pt,minimum size=0pt,label distance=3pt] \draw[->] (0,0) -- (6,0) node[label=right:$t$] {}; \draw[->] (0,0) -- (0,3.5) node[] {}; \draw[-] (.4,.8) .. 
controls (2.3,3.5) and (2.7,3.5) .. (3.4,2.3) [] {};
\draw[-] (1.3,.7) .. controls (4,3) and (4.5,3) .. (5.3,1.9) [] {};
\draw[-] (.4,1.4) -- (3.4,2.3) [] {};
\node at (1.3,2.4) [] {$L$};
\node at (2.25,2.15) [] {$F$};
\node at (2.4,1.3) [] {$M$};
\node at (5.5,1.6) [] {$L$,$F$,$M$};
\draw[dotted] (.96,0) -- (.96,1.55) [] {};
\draw[dotted] (1.2,0.4) -- (1.2,1.65) [] {};
\draw[dotted] (1.85,0.4) -- (1.85,1.85) [] {};
\draw[dotted] (4.3,0) -- (4.3,2.6) [] {};
\draw[dashed] (4.3,2.62) -- (1.85,2.62) [] {};
\draw[dashed] (1.85,2.6) -- (1.85,1.85) [] {};
\draw[dashed] (1.85,1.85) -- (1.2,1.85) [] {};
\draw[dashed] (1.2,1.85) -- (1.2,1.65) [] {};
\draw[dashed] (1.2,1.65) -- (1.05,1.65) [] {};
\draw[dashed] (1.05,1.65) -- (1.05,1.58) [] {};
\node at (.96,-.4) {$T_1$};
\node at (1.25,.2) {$\tau_1$};
\node at (1.9,.2) {$\tau_0$};
\node at (4.3,-.35) {$\hat T_2$};
\end{tikzpicture}
\caption{Preemption, Fudenberg and Tirole (1985)}
\label{fig:FTpreem}
\end{figure}
\(L\) exceeds the future maximum of \(\min(L,F)\) for the first time at \(\tau_0\), whence there will be preemption.
Taking that into account, at most the maximum of \(\min(L,F)\) \emph{up to} \(\tau_0\) might be achieved.
However, \(L\) will also exceed this reduced value, at \(\tau_1\).
In the limit, \(\tilde\tau(0)=T_1\) is the first inevitable preemption point.
Fudenberg and Tirole also consider another case, Case B, in which the peak at \(\hat T_2\) is higher than the first one.
Then \(\tilde\tau(0)=\infty\), because \(L=F\) at and from their global peak onwards, and the players can coordinate on joint late adoption.
In general we may have much more complex stochastic patterns, of course, with arbitrary regions of first- and second-mover advantages, that may trigger preemption or not.
\section{Conclusion}\label{sec:conc}
In many timing games mixed strategies play an important role as we have argued, either for equilibrium existence or to resolve any strategic conflicts (about roles with differing amenities) within the game.
Having analysed the two different kinds of local strategic incentives, we have been able to prove existence of and to characterize quite explicitly subgame-perfect equilibria for general symmetric stochastic timing games, providing symmetric equilibrium payoffs.
Our approach is based on the general theory of optimal stopping and demonstrates which kinds of stopping problems need to be solved to verify equilibria, not only but in particular for mixed strategies.
There are possibly different equilibria for a given timing game, with varying degrees of preemption.
We have considered the two extreme cases: if one initiates preemption whenever there is any first-mover advantage, payoffs may be severely restricted.
However, we have shown how to reduce preemption to a minimum and proved existence of corresponding equilibria with maximally attainable payoffs.
If preemption can indeed be prevented in a certain regime with first-mover advantage (by sufficiently profitable future continuation equilibria), then there may also exist further equilibria with continuous mixing, which we have only employed at second-mover advantages.
Nevertheless, any such additional mixing will be inefficient and induce lower payoffs (which one can also show directly).
A more specific strategic investment model with random first- and second-mover advantages is analysed in \cite{StegThijssen15}, where the strategies corresponding to the ones derived here have Markovian representations.
\section{Introduction}
The formation and evolution of massive elliptical (passive) galaxies in the Universe are of interest for both studies of galaxy evolution and cosmology.
In the former case, such galaxies present an observational challenge for hierarchical models of structure formation, as some form of feedback is required to suppress on-going star formation in such massive systems (see \citet{2006MNRAS.372..537W} and references therein).
For cosmology, such massive ellipticals can be used to directly constrain cosmological parameters (e.g., as standard candles; \citet{1998MNRAS.297..128C}) and provide efficient tracers of the Large Scale Structure (LSS) in the Universe \citep{2001AJ...122...2267E}.
Moreover, the (relative) ages of massive ellipticals, as a function of redshift, offer the possibility of directly constraining the Hubble parameter, thus providing vital information on the expansion history of the Universe and, therefore, on the equation of state of dark energy (see \citet{2002ApJ...573...37J} for a discussion of the underlying concept, and \citet{2003ApJ...593..622J}; \citet{2005PhRvD..71l3001S} and \citet{2009arXiv0907.3149S} for observational constraints obtained from using this technique).
In this paper, we revisit the techniques used to constrain cosmological parameters through the age--redshift relationship of passive (elliptical), massive galaxies.
In detail, we present a new analysis of the ages of Luminous Red Galaxies (LRGs; \citet{2001AJ...122...2267E}), as selected from the Sloan Digital Sky Survey (SDSS; \citet{2000AJ...120...1579Y}).
The key difference of the work presented herein compared to other such work with SDSS spectral data (e.g. \citet{2003AJ....125.1817B,2003AJ....125.1882B}; \citet{2003ApJ...585..694E}; \citet{2006AJ....131.1288B}) is that we calibrate, for the first time, the SDSS spectra onto the well-known, and well-studied, Lick/IDS system (\citet{1984ApJ...287..586B}; \citet{1994ApJS...94..687W}), thus allowing us to exploit a host of previous work on this system, including the Simple Stellar Population (SSP) modeling built upon it (\citet{1998PASP..110..888W}; \citet{1998MNRAS.300..872M}; \citet{2005A&A...438..685K}) and direct comparisons with Lick measurements in the literature.
In a companion paper, we will use the age--redshift data derived for SDSS LRGs in this paper to obtain cosmological constraints.
In Section 2 of this paper, we outline the SDSS galaxy data we use to construct our age--redshift relation, specifically luminous (massive), quiescent galaxies.
We also discuss the stellar data available to calibrate these galaxies onto the Lick/IDS system (see also Appendix A).
In Section 3, we provide details about the SSP models we have used to determine the ages, metallicities and $\alpha$--enhancements of our galaxies, using a Markov Chain Monte Carlo (MCMC) technique for parameter estimation.
We discuss our results in Section 4, including the effect of priors, and conclude in Section 5.
Appendix A provides extensive details on how we calibrate the SDSS spectra onto the Lick/IDS system, including all the necessary corrections made to the data.
In Appendix B, we provide a review of tests we have performed to quantify the robustness of our spectral measurements, including comparisons with data from the literature.
We assume a concordance, flat $\Lambda$CDM cosmology, with $h=0.7$ and $\Omega_\Lambda=0.7$, where required.
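For later reference, the differential-age estimator that underpins this approach \citep{2002ApJ...573...37J} is
\[
H(z)=-\frac{1}{1+z}\frac{dz}{dt}\simeq-\frac{1}{1+z}\frac{\Delta z}{\Delta t},
\]
so that the Hubble parameter follows from the age difference $\Delta t$ of passively evolving galaxies across a small redshift interval $\Delta z$, independently of their absolute ages.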
\section{Sample Selection}
\label{sample_selection}
The Sloan Digital Sky Survey \citep[SDSS;][]{2000AJ...120...1579Y,2002AJ...123...485S,2003AJ...126...2081A} is a photometric and spectroscopic survey, covering $\simeq8000 {\rm deg^2}$ of the northern sky, using a 2.5 meter telescope at Apache Point Observatory in New Mexico, USA.
The photometric survey consists of simultaneous observations of the sky using 5 optical filters (\textit{u, g, r, i} and \textit{z}), providing a database of hundreds of millions of detected objects all with accurate photometric and astrometric calibrations \citep{2003AJ....125.1559P, 2001ASPC..238..269L, 2001AJ....122.2129H}.
The SDSS spectroscopic galaxy survey consists of two samples of galaxies selected using different criteria; namely the MAIN sample \citep{2002AJ...124...1810S} and the LRG sample \citep{2001AJ...122...2267E}.
The SDSS spectra of these galaxies span a wavelength range of $3800<\lambda<9200\ensuremath{\:\textnormal{\AA}}$ with a median resolution of $\textnormal{R}\sim\!1800$ ($\textnormal{R} \equiv \lambda/\Delta\lambda$), which corresponds to approximately $3\ensuremath{\:\textnormal{\AA}}$ (although this varies as a function of wavelength and is different for each fiber).
The SDSS spectra are automatically reduced using dedicated software, which flux calibrates the spectra, references them to the heliocentric frame and converts them to vacuum wavelengths.
The software also measures a redshift for each object, as well as a series of spectral features consistent (in their wavelength definitions) with the standard Lick indices (see Section 2.2); these are, however, not calibrated onto the Lick system, as no attempt has been made to match the resolution of the Lick/IDS system.
For the analysis presented in this paper, we do not use the Lick index measurements from the standard SDSS pipeline, but instead determine our own line-strengths after matching the instrumental resolutions (see Section~\ref{sec:lick_calib}).
However, we do use redshifts (\texttt{z}), velocity dispersions (\texttt{velDisp}), magnitudes (\texttt{u,g,r,i,z}) and other derived quantities such as the \textit{r}-band de Vaucouleurs radii (\texttt{deVRad\_r}) and the \textit{r}-band de Vaucouleurs profile axis ratio (\texttt{deVAB\_r}) from the standard SDSS spectral pipeline, made available through the Catalog Archive Server (CAS)\footnote{http://www.sdss.org/dr7/access/index.html\#CAS}.
The spectral data used in this paper were obtained from the Data Archive Server (DAS)\footnote{http://www.sdss.org/dr7/access/index.html\#DAS}.
\subsection{SDSS Luminous Red Galaxies}
For our analysis, we only used Cut I Luminous Red Galaxies (LRGs) as outlined in \citet{2001AJ...122...2267E}.
This is achieved by selecting objects from CAS using the \texttt{TARGET\_GALAXY\_RED} flag, thus yielding a pseudo volume-limited sample of LRGs over $0.15 < z < 0.35$.
Below $z=0.15$, contamination by low-redshift star-forming galaxies increases the space density of galaxies that satisfy the SDSS LRG colour-colour selection, while above $z=0.35$ the space density of SDSS LRGs decreases due to the $4000\ensuremath{\:\textnormal{\AA}}$ break dropping out of the SDSS $g$-band.
We do not use the Cut II LRG sample at $z>0.4$ as we require velocity dispersion measurements for each object and these are not supplied above this redshift.
Therefore, we impose an upper redshift limit of $z=0.4$ in our sample and analysis.
We further restrict our sample to only include LRGs with \texttt{specClass EQ `SPEC\_GALAXY'} (spectrum classified as a galaxy), \texttt{zStat EQ `XCORR\_HIC'} (redshift obtained from cross-correlation with a template), \texttt{zWarning EQ 0} (no warning flags on the redshift), \texttt{eClass < 0} (indicates an old stellar population), \texttt{z < 0.4} (redshift less than 0.4) and \texttt{fracDev\_r > 0.8} (indicating a surface brightness profile best fit by a de Vaucouleurs profile). Based upon these selection criteria, we obtain a sample of approximately $77,000$ LRGs (see Figure~\ref{fig:z_dist_quiescent} for the redshift distribution of this sample). \subsubsection{Quiescent Galaxies} \begin{figure*} \includegraphics[scale=0.9]{quiescent_ew_contours.ps} \caption[]{Distribution of the line-strengths from the MPA-JHU dataset for the LRGs of our sample. Dotted contours show the full LRG sample while solid lines show the quiescent sample selected based on their \balmer{\alpha} and \forb{O}{II} line-strengths. While only two lines have been used in the production of the quiescent sample, all lines show distributions consistent with zero-emission.} \label{fig:quiescent_ew_contours} \end{figure*} At low redshift, our sample will be contaminated by bulges in late-type galaxies due to the fiber size (3\arcsec) of the SDSS spectrographs. To reduce this contamination, and increase the number of truly quiescent galaxies in our sample, we make further cuts based on the emission line properties of our LRG sample, e.g., we use standard emission lines such as \balmer{\alpha}, \ensuremath{\textnormal{H}\beta\:} and $\forb{O}{III}\lambda5007$ \citep{2004ApJ...601L.127F,2006ChJAA...6...15Z,2006MNRAS...373...349R}. If left unchecked, such nebular emission, or emission associated with low-ionization nuclear emission-line region (LINER) activity, would confuse our interpretation of the SSP parameters derived from these objects. To combat this, we use the MPA-JHU spectral line data\footnote{http://www.mpa-garching.mpg.de/SDSS/DR7/raw\_data.html} to select only those objects that are consistent with zero emission. In their original work, \citet{2004ApJ...613..898T} fitted the spectral energy distributions (SEDs) of \citet{2003MNRAS.344.1000B} to the SDSS galaxy spectra in Data Release Four (DR4), subtracted off the best fitting SED, and then measured the line-strengths of \forb{O}{II}\wavelen{air}{3726}, \ensuremath{\textnormal{H}\beta\:}, \forb{O}{III}\wavelen{air}{5007}, \forb{N}{II}\wavelen{air}{6584}, \balmer{\alpha} and \forb{S}{II}\wavelen{air}{6717}. This SED fitting approach has now been applied to the SDSS Data Release Seven (DR7) dataset using the stellar population synthesis spectra of Charlot \& Bruzual (2008). We use this latest DR7 dataset herein. In an approach similar to \citet{2006ApJ...648..281Y} we fit a two component function to the line-strength distributions of \citeauthor{2004ApJ...613..898T}, that consists of a normal component to describe the quiescent LRGs, plus a log-normal component to describe the active objects. The best fit is then used to determine the small zero-point offsets that exist in the dataset of \citeauthor{2004ApJ...613..898T} from which we then select objects that are consistent with zero emission, at the $2\sigma$ level, in \balmer{\alpha} and \forb{O}{II}, hence removing the need to make corrections to the Lick index line-strengths for the possible presence of nebular emission. 
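The two-component fit and the subsequent $2\sigma$ selection can be sketched as follows. This is a minimal illustration in Python, in which the array \texttt{ew} is assumed to hold the \balmer{\alpha} equivalent widths of the full LRG sample (the same procedure is applied to \forb{O}{II}), and the exact parameterisation of the active (log-normal) component is purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm, lognorm

def two_component(x, f_q, mu, sig, shape, scale):
    # Gaussian (quiescent) plus log-normal (active) mixture density
    return (f_q * norm.pdf(x, loc=mu, scale=sig)
            + (1.0 - f_q) * lognorm.pdf(x, shape, loc=0.0, scale=scale))

# ew: equivalent widths of one emission line for the full LRG sample
counts, edges = np.histogram(ew, bins=200, density=True)
centres = 0.5 * (edges[1:] + edges[:-1])
p0 = [0.7, 0.0, 0.5, 1.0, 3.0]        # illustrative starting values
popt, _ = curve_fit(two_component, centres, counts, p0=p0)
f_q, mu, sig, shape, scale = popt

# keep objects consistent with zero emission at the 2-sigma level, after
# removing the zero-point offset mu of the quiescent (Gaussian) component
quiescent = np.abs(ew - mu) < 2.0 * sig
\end{verbatim}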
With these constraints on the emission-line characteristics of the LRG sample, we obtain approximately $35,000$ galaxies that we refer to as the quiescent LRG sample.
While it is possible to use different, and more, emission lines in determining the quiescent sample, doing so results in a significant reduction of our sample size.
For instance, including \ensuremath{\textnormal{H}\beta\:} and \forb{O}{III} in the joint constraint with \balmer{\alpha} and \forb{O}{II} yields approximately $28,000$ LRGs over $0.0 < z < 0.4$, and further inclusion of \forb{N}{II} and \forb{S}{II} constraints yields approximately $19,000$ objects.
While the more restrictive sample selection ensures a more quiescent sample, the reduction in the total number of objects available hampers our ability to explore the evolution of the SSP parameters over the desired redshift range.
However, even selecting on \balmer{\alpha} and \forb{O}{II} alone helps to ensure that we are not including objects with significant emission in \ensuremath{\textnormal{H}\beta\:}, as shown in Figure~\ref{fig:quiescent_ew_contours}.
In Figure~\ref{fig:z_dist_quiescent} we also show the redshift distribution of the quiescent LRG sample.
\begin{figure}
\includegraphics[width=84mm]{dr7_quiescent_redshift_dist.ps}
\caption[]{Redshift distribution of our LRG sample (solid line) and our quiescent sample (dashed line).}
\label{fig:z_dist_quiescent}
\end{figure}
\subsubsection{Further Selection Criteria}
In order to select the same population of objects over the redshift range $0.0<z<0.4$ we need to select on physical properties such as velocity dispersion and absolute luminosity.
Aperture corrections are applied to the SDSS velocity dispersion measurements following the approaches of \citet{1995MNRAS.276.1341J} and \citet{2003AJ....125.1817B}, see Section~\ref{sec:ap_corr}.
To select on luminosity we have performed K-corrections using the \emph{kcorrect} code of \citet{2007AJ....133..734B} and have K-corrected all galaxies to a redshift of 0.2 (the median of our distribution) in order to minimise errors in these K-corrections.
\begin{figure}
\includegraphics[width=84mm]{dr7_quiescent_lum_evolve_contour.ps}
\caption{Distribution of our quiescent LRG sample in the redshift-luminosity plane, with a simple linear model (green/solid line) used to describe the luminosity evolution of the LRGs. Contours show the distribution in number density of the whole sample, and are in the intervals 5,10,50,100. The region bounded by the vertical dotted lines represents the volume-limited portion of our sample and the red/dashed line shows the median luminosity in redshift intervals of $\Delta z = 0.01$.}
\label{fig:quies_lum_evolve_contour}
\end{figure}
We have modeled the luminosity evolution in our LRGs using a simple linear fit to the volume-limited part of the K-corrected, magnitude-redshift distribution and then extrapolated this to higher and lower redshifts, see Figure~\ref{fig:quies_lum_evolve_contour}.
From this, we obtain an evolutionary (e) correction which we apply to our quiescent LRG sample.
A more detailed approach to determine the evolution correction using stellar population synthesis models is unlikely to significantly improve the accuracy of the e-correction given the difficulty that current models have in explaining the evolution of the SDSS LRG colours \citep{2006MNRAS.372..537W, 2009MNRAS.394L.107M}.
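A minimal sketch of this fit is given below (in Python); the arrays \texttt{z} and \texttt{M\_r} are assumed to hold the redshifts and the K-corrected \textit{r}-band absolute magnitudes of the quiescent sample, and the quoted volume-limited boundaries are purely illustrative.
\begin{verbatim}
import numpy as np

# z, M_r: redshifts and r-band absolute magnitudes, K-corrected to z = 0.2
zlo, zhi, dz = 0.15, 0.30, 0.01        # illustrative volume-limited range
med_z, med_M = [], []
for lo in np.arange(zlo, zhi, dz):
    sel = (z >= lo) & (z < lo + dz)
    if sel.any():
        med_z.append(z[sel].mean())
        med_M.append(np.median(M_r[sel]))

# simple linear model for the luminosity evolution, extrapolated to all z
slope, intercept = np.polyfit(med_z, med_M, 1)

# evolution (e) correction relative to the reference redshift z = 0.2
# (the correction removes the fitted linear trend from the magnitudes)
M_r_corr = M_r - slope * (z - 0.2)
\end{verbatim}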
When reporting absolute magnitudes, we will use the notation M$_{0.2_{r}}$ to denote r-band SDSS magnitudes that have been K+e corrected such that the r-band has been shifted to $z=0.2$.
After correcting velocity dispersions for aperture effects and performing K+e corrections to the magnitudes, we have produced four subsamples of quiescent LRGs (see Table 2 for numbers in each sample).
These subsamples span intervals of $\Delta\textnormal{M}_{0.2_{r}}=0.6$ and $\Delta\sigma_{v}=30\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ and are shown as the boxed regions in Figure~\ref{fig:quies_slices_contour}.
Objects satisfying these criteria are then co-added to produce high signal-to-noise spectra after binning into redshift intervals of $\Delta z = 0.02$.
This redshift interval is small enough that there is little evolution in the ages of the objects within a bin, so that we are co-adding objects of sufficiently similar ages (for a concordance cosmology, the age evolution within a bin is approximately 0.2 Gyr over the redshift range $0.0<z<0.4$), yet large enough to yield a sufficient number of objects to reach S/N levels of approximately $100\ensuremath{\ang\,^{-1}}$.
In generating the co-added spectrum we have adopted an approach similar to \citet{2006AJ....131.1288B}.
Briefly, after binning our sample in redshift, absolute magnitude and velocity dispersion we shift each spectrum to its restframe.
All spectra in a given bin are normalised to the median flux in the region $4500-5500\ensuremath{\:\textnormal{\AA}}$ and combined pixel-by-pixel using a weighted arithmetic mean where the weights are determined by the inverse variance of the flux in each pixel.
This approach means that not all spectra contribute equally to the co-added spectrum, but it does mean that those pixels contaminated by residuals from imperfect sky-subtraction contribute less to the final stacked spectrum.
\begin{figure*}
\includegraphics[scale=0.9]{dr7_quiescent_contour.ps}
\caption{Distribution of our quiescent LRG sample in the velocity dispersion-luminosity plane in four different redshift slices of $\Delta z = 0.1$ over the redshift range $0.0 < z < 0.4$. Velocity dispersions have been aperture corrected and absolute magnitudes have been K+e corrected to $z = 0.2$. Contours show the distribution in number density of the whole sample, and are in the intervals 1,10,50,100. The four boxes are the same in each panel and show the boundaries used to create the four mass (velocity dispersion) samples used in Section~\ref{sec:results} when exploring the redshift evolution of the SSP parameters for the quiescent sample.}
\label{fig:quies_slices_contour}
\end{figure*}
\subsection{Lick/IDS System}
A long-term programme undertaken by \citet{1984ApJ...287..586B, 1985ApJS...57..711F, 1994ApJS...94..687W} and \citet{1998ApJS..116....1T}, amongst others, has yielded a library of stellar spectra, obtained with the Image Dissector Scanner \citep[IDS,][]{1972PASP...84..161R} on the Shane 3m telescope at the Lick Observatory.
These authors have also established a spectral index system used to investigate element abundances from low-resolution integrated spectra of extragalactic stellar populations.
Twenty five absorption features, in the wavelength range $4000<\lambda<6000\ensuremath{\:\textnormal{\AA}}$ at $\sim\!9\ensuremath{\:\textnormal{\AA}}$ resolution, have been identified that are sensitive to the effective temperature ($T_{\textnormal{eff}}$), surface gravity ($g$) and metallicity ($Z$) of a star.
The wavelength definitions of these features were chosen such that the broad absorption features found in elliptical galaxies could be studied and to minimise the effect of galaxy velocity dispersion on the measured line-strengths. All features measured on the Lick/IDS system are defined by a central ``feature'' bandpass and two adjacent ``pseudocontinuum'' bandpasses, see Appendix~\ref{sec:lick_calib}. From the continuum and feature fluxes it is possible to measure a ``pseudo'' equivalent width (EW). It is not a true EW as the wavelength definitions for the bandpasses are fixed and, depending on the element abundances, instrumental resolution or velocity dispersion of the stellar population, the wings of an absorption feature may extend beyond the feature bandpass. As a consequence of adopting fixed wavelengths to determine the continuum and feature fluxes, the measured line-strength depends on the flux in the continua bandpasses as well as the feature bandpass. The continua also contain absorption features which, when coupled to the effects of galaxy velocity dispersions and instrumental resolution, will suppress their average flux. Other factors that affect the measured line-strength of absorption features are gas, dust and telluric features. Although dust has little effect due to the generally narrow nature of the features, gas can be a problem. Nebular emission will contaminate the measured line-strength of the Balmer lines by filling in the ``natural'' absorption \citep[][and references therein]{2002MNRAS.336..382M} and, when using the \ensuremath{\textnormal{H}\beta\:} absorption index, will lead to incorrectly determining older ages for stellar populations. While the \ensuremath{\textnormal{H}\beta\:} index line-strength is directly affected by Balmer emission from ionized gas, some indices are indirectly affected by emission, e.g. the red sideband of \ensuremath{\textnormal{Mg}\,\textnormal{b}\:} can be contaminated by \forb{N}{i} \citep{1996A&A...306L..45G}. Telluric emission and absorption can also have a significant impact on the accuracy of the measured line-strengths, but stacking spectra in order to improve the S/N of the Lick indices also allows us to minimise the impact from these telluric features. By co-adding spectra at different redshifts, telluric features ``appear'' at different restframe wavelengths in the deredshifted spectra. Using the inverse variance of the flux when weighting each pixel allows us to minimise the impact from these features on the final stacked spectrum. It should be noted that the SDSS convention is to present all wavelengths as vacuum wavelengths. Therefore, the wavelength definitions of all Lick indices were converted to vacuum wavelengths \citep{1991ApJS...77..119M} prior to performing any measurements of index line-strengths on spectra. We have used the publicly available software INDEXF\footnote{http://www.ucm.es/info/Astrof/software/indexf/indexf.html} \citep{2007hsa..conf.....F} to measure our absorption feature line-strengths from our co-added stacked LRG spectra. \subsection{Lick Stars in the SDSS} To properly calibrate to the Lick/IDS system it is necessary to have SDSS observations of Lick stars, or galaxies, to allow the zero-point offsets to be determined, see Appendix~\ref{sec:transform}. 
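A purely illustrative sketch of this kind of zero-point correction is given below; the actual transformation adopted is derived in Appendix~\ref{sec:transform}, the star values are invented, and a simple additive offset per index is assumed here:
\begin{verbatim}
import numpy as np

# Published Lick/IDS line-strengths and the values re-measured from the
# SDSS spectra of the same stars, per index (invented numbers).
lick_published = {"Hbeta": np.array([1.8, 2.1, 1.5]),
                  "Mgb":   np.array([3.2, 2.9, 3.5])}
sdss_measured  = {"Hbeta": np.array([1.7, 2.0, 1.4]),
                  "Mgb":   np.array([3.3, 3.0, 3.6])}

# Additive zero-point offset per index from the mean star-by-star difference
offsets = {name: np.mean(lick_published[name] - sdss_measured[name])
           for name in lick_published}

def to_lick_system(index_name, ew):
    # Place a measured pseudo-EW onto the Lick/IDS system
    return ew + offsets[index_name]
\end{verbatim}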
The Lick stars that could be used in calibrating the SDSS to the Lick system have not been intentionally targeted by the SDSS, and therefore serendipitous observations of Lick stars are required to allow the SDSS to be placed on the Lick system. The diameter of the SDSS fibres is $3.0\arcsec$ with the median seeing at Apache Point of $\sim\,1.5\arcsec$. We have therefore chosen a cut-off of $<1.5\arcsec$ when matching SDSS objects and Lick star coordinates. In searching for Lick stars in the SDSS we have included both \textit{special} and \textit{survey} plates in the search and have identified 13 stars from the Lick library that match SDSS objects to within $1.5\arcsec$: 11 stars in M67 (\textit{special plate} \texttt{\#} 0321) and 2 stars in NGC 7789 (\textit{special plate} \texttt{\#} 2377), which are listed in Table~\ref{tab:lick_stars}. The cluster proper motion dispersion for M67 is $\sim0.8\,\textnormal{mas}\,\textnormal{yr}^{-1}$ \citep{1978A&A....62..259M} and for NGC 7789 is $\sim0.4\,\textnormal{mas}\,\textnormal{yr}^{-1}$ \citep{1981A&AS...43..337M}, with only one star in Table~\ref{tab:lick_stars}, M67 F 164, having a measured proper motion of $\sim9.0\,\textnormal{mas}\,\textnormal{yr}^{-1}$ \citep{1998A&A...335L..65H}. Given the small size of the proper motions we have ignored this effect when matching the coordinates of SDSS objects to the Lick stars in these two clusters. The details of how our SDSS LRG spectra were calibrated onto the Lick/IDS system using the Lick stars in Table~\ref{tab:lick_stars} are given in Appendix A. \begin{table} \caption[SDSS - Observed Lick Stars]{Lick stars observed by SDSS} \begin{tabular}{@{}lccc@{\hspace{3ex}}l@{}} \hline Star & Plate & MJD & FiberID & Spectral Type \\ \hline M67 IV-77 & 0321 & 51612 & 385 & K0 IV \\ M67 IV-68 & 0321 & 51612 & 386 & G8 V \\ M67 I-17 & 0321 & 51612 & 388 & F0 V \\ M67 F 115 & 0321 & 51612 & 463 & F6 \\ M67 F 105 & 0321 & 51612 & 466 & K2 III \\ M67 F 231 & 0321 & 51612 & 479 & K0 III \\ M67 II-22 & 0321 & 51612 & 480 & K0 IV \\ M67 F 175 & 0321 & 51612 & 490 & - \\ M67 F 164 & 0321 & 51612 & 491 & K1 III \\ M67 IV-20 & 0321 & 51612 & 499 & K0 III/IV \\ M67 F 193 & 0321 & 51612 & 519 & K0 IV \\ NGC 7789 676 & 2377 & 53756 & 160 & G8 III \\ NGC 7789 897 & 2377 & 53756 & 492 & G9 III \\ \hline \end{tabular} \medskip Note: Spectral classifications obtained from SIMBAD \label{tab:lick_stars} \end{table} \section{Method} \label{sec:method} \subsection{Simple Stellar Population Models} Simple stellar population (SSP) models have become an invaluable tool in allowing the physical properties of stellar populations to be probed via measurements of absorption features in their integrated spectra. However, absorption features generally suffer from the same age-metallicity degeneracy that affects the interpretation of the star formation histories and chemical evolution of stellar populations from their broadband optical colours. Balmer lines become weaker while metallic lines become stronger as the age and metallicity of a stellar population increase. While different absorption features behave differently with age and metallicity, with some more sensitive to age and others to metallicity, a further complication arises due to $\alpha$-enhancement. \citet{1998PASP..110..888W} has shown that regardless of isochrones, stellar library, fitting functions or authors, the SSP models at the time were unable to explain the large spread in Mg at fixed Fe of ellipticals. 
This overabundance of Mg is considered to be a consequence of the different timescales involved in the evolution of stars of different masses. The abundance of the $\alpha$-elements (O, Ne, Mg, Si amongst others), which are primarily a product of nucleosynthesis in Type II supernovae, is enhanced over the abundance of the Fe-peak elements (Fe, Cr amongst others) that result from Type Ia supernovae. The ages and metallicities derived for SSPs differ depending on whether an $\alpha$-element or $\textnormal{Fe}$-peak element is used to represent metallicity (\ensuremath{[\textnormal{Z}/\textnormal{H}]\,}). Since elliptical galaxies show an enhancement of $\alpha$-element over $\textnormal{Fe}$-element abundance, it has become necessary to employ SSP models that account for these effects. In this work we use the SSP models of \citet[hereafter KMT05]{2005A&A...438..685K} which use the Evolutionary Population Synthesis (EPS) scheme of \citet[hereafter M98]{1998MNRAS.300..872M} and the method of \citet[hereafter TMB03]{2003MNRAS.339..897T} to produce the line-strengths of the Lick indices for stellar populations with variable element abundance. In KMT05, the authors use model stellar atmospheres to produce synthetic stellar spectra from which they compute line-strengths and, from these, index response functions as a function of metallicity. They then use the scheme of TMB03 to produce Lick index line-strengths that account for variable \ensuremath{[\alpha/\textnormal{Fe}]\,} ratios. These models have been calibrated on globular clusters that are known to have enhanced \ensuremath{[\alpha/\textnormal{Fe}]\,} ratios above the solar value. \subsection{Markov chain Monte Carlo} Different authors have employed different strategies when converting between the observed line-strengths in galaxy spectra and the model parameters \citep{2003A&A...407..423M,2007AJ....133..330D}. However, most previous techniques have involved interpolating a grid of model index values at the location of the observed index values for the object in question. Since the KMT05 models have three SSP parameters, this requires that one of the parameters be fixed, and an iterative procedure is normally employed to explore the parameter space. At each iteration, a new set of model grids are generated and the observed index values are used to re-generate the appropriate SSP parameters. In order to determine the correct SSP parameters for the observed index line-strengths, multiple model index grids may be used. Moreover, each author may have a different convergence criterion for these iterations, requiring that the same SSP parameters be reproduced with different index-index grids and be consistent to a pre-determined level. In this work, we explore an alternative methodology, which is based on a Markov chain Monte Carlo (MCMC) technique to perform the iterative minimisation in order to obtain the SSP parameters that correspond to the measured index line-strengths of the object under study. In the context of Bayesian inference, the MCMC process produces a random sequence of dependent variables that are drawn directly from the posterior distribution, which is constructed using Bayes' Theorem: \begin{equation} p(\btheta|y) = \frac{p(y|\btheta)p(\btheta)}{\int p(y|\btheta)p(\btheta) d\btheta}\quad, \label{eqn:bayes} \end{equation} where $p(\btheta|y)$ is the posterior probability density and assigns a probability to the model, $\btheta$, given the data, $y$, and any prior knowledge concerning the model. 
The probability density $p(y|\btheta)$ is associated with obtaining the data given the vector of model parameters and is also called the likelihood ($\mathcal{L}$). The prior probability density, $p(\btheta)$, describes the probability associated with the parameter vector and encompasses any prior knowledge of the model parameters. The denominator in the above expression can be considered a normalisation factor and is ignored in our implementation. We implement the Metropolis-Hastings algorithm \citep{1953JCHEMPHYS...21..1087, 1970BIO...57..97H} to ensure that the stationary distribution of the Markov chain is the posterior distribution. This form of MCMC uses a candidate generating, or proposal, distribution to propose new locations in parameter space which allows the likelihood surface to be explored. During this exploration the proposed new locations are accepted or rejected based on the acceptance criterion embodied in the Metropolis-Hastings algorithm. While the form of the candidate-generating distribution does not impact on the ability of the Metropolis-Hastings algorithm to reach a stationary distribution, it does impact on the efficiency of the convergence to stationarity \citep{2004JCAP...09..003D}. The most efficient candidate-generating distribution to adopt would be that of the posterior distribution, but this is not known a priori. In our implementation we adopt a multivariate normal distribution as our candidate-generating distribution and estimate the covariance matrix from the data itself. Convergence to the stationary distribution is identified through use of the Gelman-Rubin $\widehat{\mathcal{R}}$-statistic \citep{1992StatSci...4..457} with convergence identified when $\widehat{\mathcal{R}} \leq 1.1$ \citep{2003ApJS..148..195V} for each parameter of the model. Identifying convergence allows the ``burn-in'' phase to be excluded from the chain and allows accurate estimation of the confidence intervals on the model parameters. \subsection{Likelihood Estimation} In order to use the SSP models of KMT05 with our MCMC approach we adopt a trilinear interpolation scheme, since the models have three parameters (age, metallicity and $\alpha$-iron ratio), such that the SSP models can be investigated at an arbitrary location in parameter space. While there is a unique mapping from model parameters to index values, there is no unique mapping from index values to model parameters and therefore the models cannot be inverted. We must therefore use multiple Lick indices simultaneously in order to break this degeneracy. The Metropolis-Hastings algorithm employs a likelihood ratio test in order to determine acceptance or not of a candidate position in parameter space. The likelihood $(\mathcal{L} \equiv p(y|\btheta))$ is obtained by assuming that the data, $y$, are a set of $N$ independent normally distributed random variables, $y_{i}, i=1,\ldots,N$ \citep{1998StatDataAnal}. Each $y_{i}$ has a different and unknown mean, $\mu_{i}$, but known variance, $\sigma_{i}^{2}$. The joint p.d.f. is the product of these $N$ normal distributions:\\[1ex] \begin{equation} \mathcal{L} = \prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\,\sigma_{i}}\textnormal{exp}\left(\frac{-(y_{i}-\mu_{i})^2}{2\sigma_{i}^{2}}\right)\:, \end{equation} where the true values, $\mu_{i}$, depend on the model parameters, i.e. $\btheta=(\theta_{1},\ldots,\theta_{m})$. Taking the logarithm of the joint p.d.f. 
yields\\[1ex] \begin{equation} \ln \mathcal{L}(\btheta) = -\sum_{i=1}^{N}\ln\left(\sqrt{2\pi}\,\sigma_{i}\right) -\frac{1}{2}\sum_{i=1}^{N}\frac{(y_{i}-\mu_{i}(\btheta))^2}{\sigma_{i}^{2}}\:. \label{eqn:fullloglike} \end{equation} We can maximise the log-likelihood by minimising the $\chi^{2}$ function,\\[1ex] \begin{equation} \chi^{2}(\btheta) = \sum_{i=1}^{N}\frac{(y_{i}-\mu_{i}(\btheta))^2}{\sigma_{i}^{2}}\:. \end{equation} In the context of estimating SSP parameters from line-strength data, $y_{i}\pm\sigma_{i} \equiv I_{i}\pm d\,\textnormal{I}_{i}$, where $\textnormal{I}_{i}$ is the line-strength of a particular absorption feature, $d\,\textnormal{I}_{i}$ is its one-sigma error and $i$ represents the Lick index used, e.g. \ensuremath{\textnormal{H}\beta\:}, \ensuremath{\textnormal{Mg}\,\textnormal{b}\:} etc. The ``true'' absorption indices, $\mu_{i}$, are determined from interpolating the SSP models of KMT05 for a particular set of model parameters, $\btheta=\left\{t,\ensuremath{[\textnormal{Z}/\textnormal{H}]\,},\ensuremath{[\alpha/\textnormal{Fe}]\,}\right\}$. At each location in parameter space, we use the SSP models to determine the corresponding values for the chosen Lick indices using the trilinear interpolation method to go from model parameters to Lick index line-strengths. The MCMC approach is then used to iteratively maximise the likelihood, such that at each stage in the evolution of the chain, the likelihood, $\mathcal{L}_{n+1}$, calculated at the new candidate position, $\btheta_{n+1}$, is compared with the likelihood, $\mathcal{L}_{n}$, determined at the present location in parameter space, $\btheta_{n}$. The Metropolis-Hastings algorithm is then used to make the decision on accepting or rejecting the candidate position. If the candidate position is rejected then the current position is re-saved as part of the chain. The chain now performs a random walk in parameter space and generates the sequence of parameter samples $\{\btheta_{1},\btheta_{2},\ldots,\btheta_{n},\btheta_{n+1}\}$. Once the chain has converged to the stationary distribution, i.e. burned in, the random walk will be confined to the vicinity of the global mode in the posterior distribution. In Appendix~\ref{app:mcmc_approach} we test this method of SSP parameter estimation by comparing with previously published results from \citet[hereafter TMBO]{2005ApJ...621..673T}. \section{Results} \label{sec:results} \subsection{Line Strengths} The relationships between line-strength and redshift for the four different mass (velocity dispersion) samples, defined in Figure~\ref{fig:quies_slices_contour}, are presented in Figure~\ref{fig:index_redshift} for a selection of the key Lick indices used in age-dating of galaxy spectra (again see Appendix for details of how the SDSS spectra were calibrated onto the Lick system). These relationships are consistent with those of a passively evolving stellar population, i.e. the strengths of the \balmer{\beta} and \balmer{\gamma F} indices increase with redshift while the \ensuremath{\textnormal{Mg}\,\textnormal{b}\:} and \ensuremath{\langle\textnormal{Fe}\rangle\:} lines decrease in strength, indicated by the dotted line in each panel. The scatter in these relationships is consistent with the error on a typical line-strength and there is a clear trend with velocity dispersion, i.e. \ensuremath{\textnormal{Mg}\,\textnormal{b}\:} and \ensuremath{\langle\textnormal{Fe}\rangle\:} show a positive correlation with velocity dispersion while \balmer{\gamma F} shows a negative correlation. 
These trends are consistent with the findings of other authors, e.g. \citet{1993ApJ...411..153B,2001MNRAS.323..615K,2003AJ....125.1882B,2009MNRAS.398..133L}. While there is no obvious segregation of the \balmer{\beta}-redshift relationships there is a hint that the $260<\sigma_{v}\le290\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (circles/green) sample yields line-strengths that are systematically larger than the $320<\sigma_{v}\le350\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (diamonds/pink) sample, consistent with the negative correlation observed by other studies. \subsection{SSP Parameter Estimates - Uniform Priors} Using the \balmer{\gamma F}, \ensuremath{\textnormal{Mg}\,\textnormal{b}\:} and \ensuremath{\langle\textnormal{Fe}\rangle\:} data in Figure~\ref{fig:index_redshift} and the parameter estimation method outlined in Section~\ref{sec:method} we determine the SSP parameters for each mass sample and the results are shown in Figure~\ref{fig:ssp_redshift_s20}. These parameter estimates have been obtained using uniform priors that cover the full parameter space of the KMT05 models, i.e. $\textnormal{age (Gyr)}\sim\mathcal{U}(0.31,18.2)$, $\ensuremath{[\textnormal{Z}/\textnormal{H}]\,}\sim\mathcal{U}(-1.0,0.97)$, $\ensuremath{[\alpha/\textnormal{Fe}]\,}\sim\mathcal{U}(-0.25,0.73)$, when computing the posterior probability using Equation~\ref{eqn:bayes}, and then estimating the 16, 50 and 84 percentiles from the posterior distribution for each parameter after marginalising over the remaining parameters. In the rest of this paper, the median is used to define the ``location'' and the 16 and 84 percentiles the $1\sigma$ error on SSP parameters. The evolution in \ensuremath{[\alpha/\textnormal{Fe}]\,} with redshift is somewhat confusing as the low (blue triangles) and high (pink diamonds) mass (velocity dispersion) samples exhibit significant scatter and may not be consistent with a constant value over the entire redshift range (there is some indication of a change in behaviour at $z>0.35$). The intermediate mass samples (green circles and red squares) are more stable and exhibit no significant gradient, as expected for galaxies that are not chemically evolving. There does however appear to be a positive correlation between velocity dispersion and \ensuremath{[\alpha/\textnormal{Fe}]\,} which is consistent with other studies, e.g. \citet{2005ApJ...621..673T}. The evolution in \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} with redshift is also consistent with a chemically unevolving population of objects, although the lowest mass sample does exhibit a significant gradient. The metallicity also exhibits a significant positive correlation with velocity dispersion, where the more massive objects have higher metallicity, again consistent with other studies. Finally, the evolution in $age$ with redshift exhibits the expected behaviour: galaxies become younger with redshift, with a trend that suggests the most massive objects are, in fact, the youngest. While the \balmer{\beta} and \balmer{\gamma F} trends are consistent with tracing a passively evolving population, it is not possible to obtain consistent estimates of the SSP parameters using these indices. The dotted line in Figure~\ref{fig:index_redshift} is the relationship expected for a population of objects in a $\Lambda$CDM concordance cosmology, with $[Z/H] = 0.37$ and $[\alpha/Fe] = 0.27$ and a formation age of 4.5 Gyr. 
To reproduce the \balmer{\beta} line-strengths, a population with the same chemical composition would need a formation age of approximately 2.5 Gyr and hence yield inconsistent line-strengths for \balmer{\gamma}. \begin{figure*} \includegraphics[scale=0.8]{dr7_index_z.ps} \caption{Line-strengths of \balmer{\beta}, \ensuremath{\textnormal{Mg}\,\textnormal{b}\:}, \ensuremath{\langle\textnormal{Fe}\rangle\:} and \balmer{\gamma F} as a function of redshift for the four samples of Figure 3; $230<\sigma_{v}\le260\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (blue triangles), $260<\sigma_{v}\le290\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (green circles), $290<\sigma_{v}\le320\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (red squares), $320<\sigma_{v}\le350\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (pink diamonds). The dotted line shows the expected variation in the line-strengths for an object with $[Z/H] = 0.37$ and $[\alpha/Fe] = 0.27$ and a formation age of 4.5 Gyr for a $\Lambda$CDM cosmology. Segregation of the line-strengths for the different samples is obvious except in the case of \balmer{\beta}. The more massive the sample (i.e. the higher velocity dispersion), the stronger the \ensuremath{\textnormal{Mg}\,\textnormal{b}\:} and \ensuremath{\langle\textnormal{Fe}\rangle\:} line-strengths and the weaker the \balmer{\gamma F} line-strength at a given redshift. Errors are only provided on the $260<\sigma_{v}\le290\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ sample (green circles) to avoid overcrowding on the plot. The errors on the other relations are similar as they have comparable signal-to-noise.} \label{fig:index_redshift} \end{figure*} \begin{figure*} \includegraphics[scale=0.9]{dr7_params_z_s20.ps} \caption{Reconstruction of the evolution of the mean luminosity-weighted age, \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} using \balmer{\gamma F}, \ensuremath{\textnormal{Mg}\,\textnormal{b}\:}, Fe5270 and Fe5335. Symbols and colours are the same as shown in Figure~\ref{fig:index_redshift}. Open symbols represent data with S/N$<100\ensuremath{\ang\,^{-1}}$ in \balmer{\gamma F}. Horizontal lines represent the mean \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} for data with S/N$>100\ensuremath{\ang\,^{-1}}$ in the $230<\sigma_{v}\le260\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (blue short dash), $260<\sigma_{v}\le290\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (green dot-dash), $290<\sigma_{v}\le320\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (red dotted-dash) and $320<\sigma_{v}\le350\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (pink long dash) samples. For the reconstruction of the age-redshift relationship the dotted line shows the age of the universe $t_{U}(z)$ for a $\Lambda$CDM cosmology while the dashed line indicates $t_{U}(z)-4.5\,\textnormal{Gyr}$ and is for reference only. 
To reduce clutter in the diagram we again only show the $1\sigma$ errors on the $260<\sigma_{v}\le290\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (circles/green) sample only, but errors for each sample are consistent with the scatter.} \label{fig:ssp_redshift_s20} \end{figure*} In Figure~\ref{fig:param_correlations_s20} we show the correlations between the parameter estimates of Figure~\ref{fig:ssp_redshift_s20} and in Table~\ref{tab:param_correlations_s20} we show the Spearman rank correlation coefficient ($-1\le \rho \le +1$) and in brackets the significance of the correlation from zero ($0\le s_{\rho} \le 1$; with low values having greater significance). From this we can see that, except for the lowest mass sample, correlations between the parameters have little statistical significance, i.e. values for $s_{\rho} > 0.1$. While interpretation of the lowest mass sample is difficult because of the gradients in \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} with redshift, possibly due to including a population of young objects at low redshift (c.f. the \balmer{\beta} redshift relationship in Figure~\ref{fig:index_redshift}), there does appear to be a significant correlation between the chemical composition of these objects and their mass, the more massive the objects the higher their \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,}. \begin{figure*} \includegraphics[scale=0.9]{dr7_param_correlations_s20.ps} \caption{Correlations between the SSP parameters of Figure~\ref{fig:ssp_redshift_s20}. Parameter estimates are formed from the one-dimensional posterior distribution marginalised over the remaining parameters.} \label{fig:param_correlations_s20} \end{figure*} \begin{table*} \begin{minipage}{100mm} \caption{SSP parameter correlations} \begin{tabular}{@{}lcr@{.}llr@{.}llr@{.}ll@{}} \hline Sample & Total Number & \multicolumn{3}{c}{t--[Z/H]} & \multicolumn{3}{c}{t--[$\alpha$/Fe]} & \multicolumn{3}{c}{[$\alpha$/Fe]--[Z/H]}\\ \hline $230<\sigma_{v}\le260\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ & $4472$ & $0$&$54$ & $(0.05)$ & $-0$&$71$ & $(0.01)$ & $-0$&$64$ & $(0.02)$ \\ $260<\sigma_{v}\le290\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ & $6195$ & $-0$&$26$ & $(0.33)$ & $-0$&$15$ & $(0.58)$ & $0$&$20$ & $(0.45)$ \\ $290<\sigma_{v}\le320\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ & $4835$ & $-0$&$17$ & $(0.57)$ & $0$&$05$ & $(0.85)$ & $0$&$24$ & $(0.41)$ \\ $320<\sigma_{v}\le350\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ & $2350$ & $-0$&$55$ & $(0.16)$ & $-0$&$02$ & $(0.96)$ & $0$&$36$ & $(0.39)$ \\ \hline \end{tabular} \label{tab:param_correlations_s20} \end{minipage} \end{table*} In Figures~\ref{fig:ssp_redshift_s20}-\ref{fig:param_correlations_s20} we have employed \balmer{\gamma F}, \ensuremath{\textnormal{Mg}\,\textnormal{b}\:} and \ensuremath{\langle\textnormal{Fe}\rangle\:} when estimating the SSP parameters. In Figure~\ref{fig:ssp_redshift_s23} we show the results of using \balmer{\beta} instead of \balmer{\gamma F} when reconstructing the age-redshift relationship. The \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} redshift relationships (not shown) are similar to those in Figure~\ref{fig:ssp_redshift_s20}, but shifted to lower values by approximately 0.05 dex in \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and 0.01 dex in \ensuremath{[\alpha/\textnormal{Fe}]\,}. 
The scatter in \balmer{\beta} translates into significant scatter in the age estimates but the same trend seen in Figure~\ref{fig:ssp_redshift_s20} still exists, i.e. the age of the objects decreases with redshift, although the ages are older by approximately 2.5 Gyr. \begin{figure*} \includegraphics[scale=0.9]{dr7_age_z_s23.ps} \caption{Reconstruction of the evolution of the mean luminosity-weighted age using \balmer{\beta}, \ensuremath{\textnormal{Mg}\,\textnormal{b}\:}, Fe5270 and Fe5335. The only difference between this figure and Figure~\ref{fig:ssp_redshift_s20} is the choice of Balmer line used in the estimation of the SSP parameters. While the trend of the age-redshift relationship is the same as that derived using \balmer{\gamma F}, there is more variability in the $age$ estimates resulting from the variability in \balmer{\beta} as seen in Figure~\ref{fig:index_redshift}. Also the ages derived using \balmer{\beta} are systematically older than those derived using \balmer{\gamma F}. Symbols and lines are the same as in Figure~\ref{fig:ssp_redshift_s20} and we have excluded the \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} traces here as they are similar to the corresponding traces in Figure~\ref{fig:ssp_redshift_s20}.} \label{fig:ssp_redshift_s23} \end{figure*} \subsection{SSP Parameter Estimates - Gaussian Priors} By binning our samples into redshift intervals, we have obtained multiple independent estimates of the chemistry of our LRG population. For each mass sample we can use these independent estimates of \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} to produce a prior probability distribution, which we can then employ to better constrain the $age$ estimates when estimating the posterior probability using Equation~\ref{eqn:bayes}. When determining the priors on \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,}, we only use individual estimates that have been produced using line-strength data with S/N$>100\ensuremath{\ang\,^{-1}}$ in \balmer{\gamma F} and we generate Gaussian priors from the mean and standard error ($rms/\sqrt{n}$) of the \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} distributions for a given mass sample. We use \textit{importance sampling} \citep{2002PhRvD..66j3511L} and the new Gaussian priors to re-weight the original MCMC chain data obtained with uniform priors on the parameters. Although \ensuremath{[\alpha/\textnormal{Fe}]\,} does not correlate with $age$, \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} does (see Figure 3 of TMBO) and placing a prior constraint on \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} results in a tightening of the multivariate posterior distribution, yielding a more localised marginalised distribution for the $age$ of the object. In Figure~\ref{fig:ssp_redshift_imp_zh_afe_s20} we show what effect the Gaussian priors on \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} have on the reconstruction of the \textit{age-redshift} relationship for each mass sample. The impact of the prior on \ensuremath{[\alpha/\textnormal{Fe}]\,} is negligible, but the prior on \ensuremath{[\textnormal{Z}/\textnormal{H}]\,}, acting through the age-metallicity degeneracy, significantly reduces both the errors on the individual $age$ estimates and the scatter within a given mass sample. 
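As a purely illustrative sketch of this re-weighting step (the chain values, prior mean and error below are invented), samples from a chain run with uniform priors can be re-weighted by the ratio of the new Gaussian prior to the old flat prior:
\begin{verbatim}
import numpy as np

# MCMC samples obtained with uniform priors; columns = (age, [Z/H], [a/Fe])
chain = np.array([[ 9.5, 0.30, 0.25],
                  [ 8.7, 0.42, 0.28],
                  [10.2, 0.35, 0.22]])

zh_mean, zh_err = 0.37, 0.03   # mean and standard error of the [Z/H] estimates

# Importance weights: new Gaussian prior / old uniform prior, i.e. the
# Gaussian density up to an irrelevant constant
log_w = -0.5 * ((chain[:, 1] - zh_mean) / zh_err) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Re-weighted posterior mean age
age_mean = np.sum(w * chain[:, 0])
\end{verbatim}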
Over the redshift range studied herein, we see a decrease in the age of LRGs of $\simeq5$ Gyr, which is fully consistent with expectations from a $\Lambda$CDM universe. \begin{figure*} \includegraphics[scale=0.9]{dr7_age_z_imp_zh_afe_s20.ps} \caption{Reconstruction of the evolution of the mean luminosity-weighted age using \balmer{\gamma F}, \ensuremath{\textnormal{Mg}\,\textnormal{b}\:}, Fe5270 and Fe5335 and employing Gaussian priors on \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,}. The error bars on the $260<\sigma_{v}\le290\ensuremath{\:\textnormal{km}\:\textnormal{s}^{-1}}$ (circles/green) sample are about the same size as the data points.} \label{fig:ssp_redshift_imp_zh_afe_s20} \end{figure*} \section{Discussion} \label{sec:discuss} We present in Figures 4, 5, 7, 8 and 9 the results of our analysis of LRG ages, metallicities and $\alpha$-enhancements as a function of redshift. We also provide in Appendices C and D the data presented in these figures. Overall, these trends are consistent with expectations and can provide important constraints on cosmology (via the age-redshift relation) and galaxy evolution studies. However, we raise here a number of important caveats that should be considered when using these data. First, we have tried to select a consistent population of quiescent galaxies with redshift using the SDSS Luminous Red Galaxy (LRG) selection, which is designed to select ``red and dead'' massive galaxies out to $z\sim0.4$ (with a constant number density). The size of this LRG sample also allows us to preferentially select quiescent galaxies with no obvious emission lines, as well as define four subsamples with the same range of velocity dispersions and luminosities. However, we have employed a simple evolutionary correction that may be too naive at higher redshift and have also performed aperture corrections to velocity dispersion measurements for which it has been necessary to assume that corrections derived for low redshift objects can be applied to higher redshift objects. This assumes no evolution in velocity dispersion, or stellar population distributions, and hence no evolution in the mass distribution, over the range $0.0 < z < 0.4$. Both of these assumptions may impact on the samples used in generating the stacked spectrum and could potentially result in different populations at high and low redshift being probed. This could explain the gradients in the \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,}-redshift relationships even if each population probed is passively evolving. While some studies of the luminosity function for the red sequence have shown that the most massive systems show little or no evolution over the redshift range probed in this work \citep[see][]{2004ApJ...608..752B,2007ApJ...665..265F}, this may not be true for lower mass systems in our sample. The anticorrelation between \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} for the $230<\sigma_{v}\le260$ sample suggests that the low mass systems become less alpha-enhanced and more metal-enriched at late times. This could arise if our galaxy sample includes a fraction of ``blue cloud'' galaxies whose star formation is switching off and which are hence moving onto the red sequence since $z=0.4$. 
These low mass systems have been forming stars over longer periods of time and have experienced more chemical enrichment than galaxies with similar velocity dispersions whose star formation switched off at earlier times, giving them lower \ensuremath{[\alpha/\textnormal{Fe}]\,} and higher \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} at low redshift. This low mass sample, while possibly offering insights into the mechanisms behind galaxy evolution, is less useful in its intended role as a cosmological probe due to the possibility of not accurately tracing a passively evolving sample of galaxies. There are further complications from adopting the Lick/IDS system when estimating the SSP parameters, even though this is a popular and well-known method. For example, matching the SDSS instrumental resolution to that of the IDS instrument, as well as performing each of the aperture, velocity dispersion and zero-point corrections, potentially introduces a source of error into our line-strengths and thus SSP parameter estimations. Also, the KMT05 models are not perfectly matched to the Lick/IDS system and therefore the accuracy of our SSP parameter estimates could depend on differences in the calibration between the data and the models. This difference varies for each of the Lick indices, resulting in the absolute value of any SSP parameter being dependent on the particular set of Lick indices chosen. This is illustrated in Figures~\ref{fig:ssp_redshift_s20} \& \ref{fig:ssp_redshift_s23} by the discrepancy between the absolute ages for our quiescent LRGs when derived using \balmer{\gamma} or \ensuremath{\textnormal{H}\beta\:}. Therefore, we would caution the reader against using our absolute ages in this paper, but recommend they focus on the relative ages with redshift. We also note that our observed line-strengths are strongly dependent on the accuracy of the velocity dispersion correction (see Appendix A). Since the velocity dispersion correction depends on the stellar sample used to generate the correction factors, matching the stellar sample to the dominant stellar population of the LRG may become important in the future. Also, at higher redshift, the contribution of younger stellar populations to the galaxy spectrum may require a different stellar sample to be used in making the velocity dispersion correction. Therefore, questions remain over the appropriateness of employing a single stellar sample of KIII stars to determine the velocity dispersion correction over the redshift range used in this work. However, we finish by stressing that for our samples we see little evolution in \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} with redshift, especially for the intermediate mass (velocity dispersion) samples, which are consistent with a constant value over the redshifts probed. This is reassuring as it indicates that a passively evolving, quiescent subsample of galaxies suitable for probing the age--redshift relation can be selected and the errors well controlled. We provide these relationships here for others to use in their analyses, and plan to explore the cosmological constraints from our \textit{age-redshift} relationships in a subsequent paper. \section{Summary} We present here a detailed analysis of a sample of passive SDSS LRGs which are selected to be the same population of galaxies over the redshift range $0.0<z<0.4$. 
A total of 17,853 LRGs are co-added in four bins of velocity dispersion and luminosity to provide high signal-to-noise spectra for detailed absorption-line measurements. In Section 3 and Appendix A, we outline the careful calibration of these absorption line measurements onto the well-established Lick/IDS system which has been extensively used in the literature to constrain Simple Stellar Population (SSP) models and thus derive galaxy parameters like age, metallicity and $\alpha$--enhancements. In Figures 4, 5, 7 and 8, we present our results which show clear trends for these parameters with redshift (we also provide the data from these figures in Appendices C \& D). In particular, for our two intermediate mass (velocity dispersion) samples, we see a constant \ensuremath{[\textnormal{Z}/\textnormal{H}]\,} and \ensuremath{[\alpha/\textnormal{Fe}]\,} with redshift which confirms our assumption that these LRGs are a passively evolving subsample of galaxies (i.e., they are chemically unevolved over this redshift range), providing confidence in our age determinations for these galaxies. We see a clear trend of decreasing age of our LRGs as a function of redshift ($\simeq5$ Gyr over the redshift range probed here), which is fully consistent with expectations from a $\Lambda$CDM universe. It also appears that the most massive sample of LRGs is the youngest (Figures 5 and 8). We provide these relationships now to help others studying cosmology and galaxy evolution, as well as providing, in our appendices, the methodology required for others to calibrate SDSS spectra onto the Lick/IDS system. \section*{Acknowledgments} We thank Nicol\'{a}s Cardiel for supplying a version of INDEXF compatible with SDSS FITS files and Daniel Thomas for supplying higher resolution SSP models than those available publicly. We thank Claudia Maraston, Daniel Thomas and Daniel Eisenstein for stimulating discussions during this work. We acknowledge important insights on earlier related work from Alfonso Aragon-Salamanca, and thank a helpful referee for their contributions. DPC thanks STFC and UoP for financial assistance during this work, and RCN was partially supported through the STFC rolling grant ``Survey Cosmology''. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. 
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\section*{Introduction} Supersymmetry (SUSY) has been a deeply studied subject during the last two decades, since the no-go theorem of Coleman and Mandula about all the possible symmetries of the S-matrix in $D=4$ was by-passed by the discovery of Haag, Lopuszanski and Sohnius (HLS) of possible fermionic extensions of the corresponding symmetry algebra \cite{susy}. While I do not know the relevance of the mentioned theorem in dimensions greater than four and outside the domain of local quantum field theory (and the present paper does not intend to address this question), it is certainly true that SUSY in higher dimensions became more and more relevant with the construction of the various supergravity theories in the seventies \cite{sugra}, with the formulation of superstring theories in the eighties \cite{sustrings} and with the more recently conjectured existence of a hypothetical M-theory in $D=11$ \cite{mtschwarz}. In relation with this last theory, and with possible compactifications of it that give rise to SUSY theories in lower dimensions, there naturally appear ``charged'' extensions of the superalgebras considered before that were not allowed by the HLS theorem. These charges in fact present non-trivial Lorentz transformation properties and were found as topological charges associated with $p$-brane solutions in supergravity theories \cite{stelle}, being given essentially by generalizations of the familiar string winding number \be Z^{M_1 \dots M_p } \sim \int_{\Sigma_p} \; dX^{M_1} \wedge \dots \wedge d X^{M_p} \label{cargas} \ee where $\Sigma_p$ is the spatial volume of the $p$-brane world volume. A nonzero value of $Z$ is obtained if at some fixed proper time the $p$-brane defines a non-trivial $p$-cycle in space-time. In references \cite{holpro}, \cite{azca}, \cite{bergsez}, \cite{sez1}, some possible generalizations of superalgebras containing $p$-form generators of the kind given in (\ref{cargas}) and additional spinorial ones (generalized $p$-forms in superspace) were considered, mainly focused on the minimal $N=1$ case and on eleven dimensions. In this letter an $N$-extended version of superalgebras in $D= 3,\; 9\;\rm{mod}\; 8$ containing only tensor generators of the type (\ref{cargas}) is presented. Two appendices are included which contain the definitions and formulae used throughout the paper. \section*{The extended superalgebras} I start by considering the existence of a $\rm Spin(1,2n)$ algebra with generators $\{ X_{MN} = - X_{NM} , M = 0,1,\dots , 2n \}$ satisfying the standard commutation relations \be \left[ X_{MN}, X_{PQ} \right] = \eta_{MQ}\; X_{NP} + \eta_{NP}\; X_{MQ} - \eta_{MP}\; X_{NQ} - \eta_{NQ}\; X_{MP} \label{loralg} \ee For $n=1, 4 \; \rm{mod}\; 4$ it is possible to add to these bosonic generators $N$ anticommuting Majorana supercharges $\{ Q_I^\Lambda ; I=1,\dots,N , \Lambda = 1,\dots, 2^n \}$, \be \left[ Q_I^\Lambda , X_{MN}\right] = {(S_{MN})}^\Lambda{}_\Omega\; Q_I^\Omega\label{qxq} \ee where the $S_{MN}$'s are the spinor representation of the $X_{MN}$'s and are defined in Appendix A. The completeness of the $\Gamma^{(\mu_p )}$'s allows us to write the general anticommutation relations \be \left\{ Q_I^\Lambda , Q_J^\Omega \right\} = \sum_{p=0}^n\; \frac{(-)^{\frac{p}{2}(p-1)}}{p!} \;( \Gamma^{(\mu_p )} {\cal C}^{-1} )^{\Lambda\Omega}\; Z_{IJ(\mu_p )}^{(p)}\label{qq} \ee where, due to (\ref{qxq}) and the XQQ super-Jacobi identity, the $\{Z_{IJ(\mu_p )}^{(p)} \}$ are a set of $p$-form valued commuting operators, e.g. 
for any ${\rm Spin}(1, 2n)$ element $\; U(\omega) = e^{{1\over 2}\omega^{MN} X_{MN}}$, \bea U(\omega)^{-1}\; Z_{IJ(\mu_p )}^{(p)}\; U(\omega) &=& (V(\omega)^{-1})^{(\nu_p )}{}_{(\mu_p )} \; Z_{IJ(\nu_p )}^{(p)}\nonumber\\ V(\omega )^{(\mu_p)}{}_{(\rho_p)}\; (V(\omega )^{-1} )^{(\rho_p)}{}_{(\nu_p)} &=& \delta^{(\mu_p)}{}_{(\nu_p)} \eea where the $V$ and $\delta$ are introduced in (\ref{deltadef}) and (\ref{vdef}). By consistency they must satisfy the symmetry property \be Z_{IJ(\mu_p )}^{(p)} = \delta_p\; Z_{JI(\mu_p )}^{(p)}\label{simZ} \ee where $\delta_p$ is the phase introduced in (\ref{gamaprop1}). On the other hand, admitting a grading of the algebra, the XZQ super-Jacobi identity and equation (\ref{gamalor}) dictate the commutation relation \be \left[ Q_K^\Lambda , Z_{IJ(\mu_p )}^{(p)}\right] = \omega^{(p)}_{IJK}{}^L \; ({\Gamma_{(\mu_p )}})^\Lambda{}_\Omega\;Q_L^\Omega \label{qz} \ee where the coefficients $\omega^{(p)}_{IJK}{}^L$ (written sometimes as matrix elements) must satisfy \be \omega^{(p)}_{IJK}{}^L \equiv (\omega^{(p)}_{IJ})_K{}^L = \delta_p \; (\omega^{(p)}_{JI})_K{}^L \label{simw} \ee It will be shown that they determine the whole algebra. \section*{The super-Jacobi identities} Consistency with the generalized Jacobi identities imposes, as usual, strong constraints on the possible superalgebras. Those identities involving Lorentz generators were used above to write the algebra; here the remaining ones will be imposed. To start with, the $ZQQ$ identity is written as \bea 0 &=& \sum_{r=0}^n \frac{(-)^{\frac{r}{2}(r-1)}}{r!} \left( \omega^{(p)}_{IJK}{}^E \; \delta_L^F \Gamma_{(\mu_p )} \;\Gamma^{(\rho_r )} + \omega^{(p)}_{IJL}{}^E \; \delta^F_K \left( \Gamma_{(\mu_p )}\Gamma^{(\rho_r )}\;{\cal C}^{-1} \right)^t {\cal C} \right) Z_{EF(\rho_r )}^{(r)}\nonumber\\ &+&\sum_{q=0}^n \frac{(-)^{\frac{q}{2}(q-1)}}{q!}\left[ Z_{IJ(\mu_p )}^{(p)}, Z_{KL(\nu_q)}^{(q)}\right]\;\Gamma^{(\nu_q)} \label{j1} \eea On the other hand the $ZZQ$ identity becomes \bea \left[ Q_M^\Lambda, \left[ Z_{IJ(\mu_p )}^{(p)}, Z_{KL(\nu_q)}^{(q)}\right]\right] &=& \biggl( (\omega^{(p)}_{IJ}\; \omega^{(q)}_{KL})_M{}^E \; ( \Gamma_{(\mu_p )} \; \Gamma_{(\nu_q )} )^\Lambda{}_\Delta\\ &-&(\omega^{(q)}_{KL} \;\omega^{(p)}_{IJ})_M{}^E\; ( \Gamma_{(\nu_q)} \; \Gamma_{(\mu_p )} )^\Lambda{}_\Delta \biggr)\; Q_E^\Delta \label{j2} \eea Finally the $QQQ$ identity yields \begin{eqnarray} 0 &=& \sum_{q=0}^n \frac{(-)^{\frac{q}{2}(q-1)}}{q!}\; \biggl(\omega^{(q)}_{IJK}{}^L\; (\; \Gamma^{(\nu_q)} {\cal C}^{-1} )^{\Lambda\Omega}\; (\Gamma_{(\nu_q)} {\cal C}^{-1} )^{\Delta\Upsilon} \nonumber\\ &+&\omega^{(q)}_{KIJ}{}^L\; (\Gamma^{(\nu_q)} {\cal C}^{-1} )^{\Delta\Lambda}\; (\Gamma_{(\nu_q)} {\cal C}^{-1} )^{\Omega\Upsilon} + \omega^{(q)}_{JKI}{}^L\;(\Gamma^{(\nu_q)} {\cal C}^{-1})^{\Omega\Delta} (\Gamma_{(\nu_q)} {\cal C}^{-1} )^{\Lambda\Upsilon} \biggr) \label{j3} \end{eqnarray} The use of completeness of the $\Gamma^{(\mu_p)}$'s and some matrix relations quoted in the appendices gives simplified versions of these conditions. 
First, let us consider (\ref{j1}); then equations (\ref{2gamabis}) and (\ref{simcC}) \emph{fix} the $Z$-algebra to be \bea \left[ Z_{IJ(\mu_p )}^{(p)}, Z_{KL(\nu_q)}^{(q)}\right] &=& \sum_{r=0}^n \; \frac{1}{r!}\; {\cal Q}_{IJKL}^{EF}(p;q,r)\; {\cal C}^{(\rho_r)}_{(\mu_p)(\nu_q)} Z_{EF(\rho_r )}^{(r)} \nonumber\\ {\cal Q}_{IJKL}^{EF}(p;q,r) &=& -{1\over 2}\; \left( \omega^{(p)}_{IJK}{}^E \;\delta_L^F + \delta_q\;\omega^{(p)}_{IJL}{}^E \;\delta_K^F \right) + \delta_r\;\left( E\leftrightarrow F\right) \label{zzz} \eea Moreover, consistency with the antisymmetry of the commutator gives the constraint \be {\cal Q}_{IJKL}^{EF}(p;q,r) + \sigma_{pqr}\; {\cal Q}_{KLIJ}^{EF}(q;p,r) = 0\label{j12} \ee Second, the same equations allow us to rewrite (\ref{j2}) as a constraint on the $\omega^{(p)}$'s \be \omega^{(p)}_{IJ}\; \omega^{(q)}_{KL} - \sigma_{pqr}\; \omega^{(q)}_{KL} \;\omega^{(p)}_{IJ} = {\cal Q}_{KLIJ}^{EF}(q;p,r) \; \omega^{(r)}_{EF} \label{j22} \ee Finally, by using the identity (\ref{gamaprop2}), equation (\ref{j3}) becomes equivalent to \footnote{ For $n=1$ equation (\ref{gamaprop2}) does not work; however, (\ref{j32}) remains valid. } \be 2^n\;\omega^{(0)}_{KJI}{}^L + \sum_{p=0}^n {2n+1\choose p}\; \left( \omega^{(p)}_{IJK}{}^L + \eta\;\omega^{(p)}_{IKJ}{}^L \right)= 0 \label{j32} \ee In the next section the set of equations (\ref{j12}), (\ref{j22}), (\ref{j32}) will be solved. \section*{The general solution} \bigskip Let us start by considering the right hand side of (\ref{j22}) at fixed $p$ and $q$ for two different $r, r'\;$ such that $\sigma_{pqr}= \sigma_{pqr'}$ (and then $\delta_r = \delta_{r'}$); it follows that \be {\cal Q}_{IJKL}^{EF}(p;q,r) = {\cal Q}_{IJKL}^{EF}(p;q,r') \label{s1} \ee as a \emph{necessary} condition. But it is easily seen from (\ref{zzz}) that, because the $r$-dependence of ${\cal Q}$ comes entirely through the $\delta_r$ factor, the $p$-dependence of $\omega^{(p)}$ must be of the same form; taking into account (\ref{simw}), its general form must be \be \omega^{(p)}_{IJK}{}^L = z_{IJK}{}^L + \delta_p \; z_{JIK}{}^L \label{s2} \ee with $z_{IJK}{}^L$ arbitrary, possibly dependent on $n$. Then plugging (\ref{s2}) in (\ref{j32}) and using the identity \be \sum_{p=0}^n \; {2n+1\choose p}\; (-)^{ \frac{p\tilde p}{2} } = \eta\; 2^n \label{s3} \ee the following condition is obtained \be z_{IJK}{}^L + \eta\; z_{IKJ}{}^L = 0 \label{s4} \ee which in turn as before fixes the form of the $z$'s to be \be z_{IJK}{}^L = m_{IJK}{}^L - \eta \; m_{IKJ}{}^L \label{s5} \ee with $m_{IJK}{}^L$ constant coefficients. Finally equation (\ref{j12}) gives \be m_{IJK}{}^L = \delta_I^L\; \mu'_{JK} \ee with $\mu'$ an arbitrary $N_D\times N_D$ matrix. So the coefficients $\omega$ can be recast in the final form \be \omega^{(p)}_{IJK}{}^L = \delta_I^L\; \mu_{JK} + \delta_p \; \delta_J^L\; \mu_{IK} \ee with the matrix $\mu$ satisfying the symmetry property \be \mu_{IJ} = -\eta\; \mu_{JI} \label{simu} \ee It is straightforward to prove that they are solutions of (\ref{j22}) for arbitrary $\mu$. 
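As a quick consistency check (note that applying (\ref{simZ}) twice forces $\delta_p^2 = 1$), the final form above indeed satisfies the symmetry property (\ref{simw}): \bea \delta_p\; \omega^{(p)}_{JIK}{}^L &=& \delta_p \left( \delta_J^L\; \mu_{IK} + \delta_p\; \delta_I^L\; \mu_{JK} \right) \nonumber\\ &=& \delta_I^L\; \mu_{JK} + \delta_p\; \delta_J^L\; \mu_{IK} \;=\; \omega^{(p)}_{IJK}{}^L \nonumber \eea 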
This arbitrariness of the matrix $\mu$ is to be expected from the fact that the algebra, as usual, is defined up to changes of basis; in fact under a change of basis \bea Q_I &\longrightarrow& P^J{}_I\; Q_J\nonumber\\ Z^{(p)}_{IJ}&\longrightarrow& P^K{}_I \; P^L{}_J\; Z^{(p)}_{KL} \eea the algebra remains invariant in form if one makes the replacement \be \mu \longrightarrow P^{\rm t}\;\mu \; P \label{mu} \ee Then, depending on the value of $\eta$ (and so on $n$), the matrix $\mu$ can be taken in some standard form allowed by the transformation (\ref{mu}) according to (\ref{simu}). The final form of the $Z$-algebra is \bea \left[ Q_K^\Lambda , Z_{IJ(\mu_p )}^{(p)}\right] &=& ({\Gamma_{(\mu_p )}})^\Lambda{}_\Omega\; \left( \mu_{JK}\; Q_I^\Omega + \delta_p\; \mu_{IK}\; Q_J^\Omega \right) \\ \left[ Z_{IJ(\mu_p )}^{(p)}, Z_{KL(\nu_q)}^{(q)}\right] &=& \left[ \left(\mu_{LI}\;\sum_{r=0}^n\;\frac{1}{r!}\; {\cal C}^{(\rho_r)}_{(\mu_p)(\nu_q)} Z_{KJ(\rho_r)}^{(r)} \right) + \delta_p \; \left( I\leftrightarrow J\right)\right]\nonumber\\ &+&\delta_q\; \left[ K\leftrightarrow L\right] \eea Let us explicitly write the superalgebras obtained in some particular cases. \bigskip \noindent{\underline{$n=5, \; N = 1$}} As said in the introduction, this eleven dimensional algebra could be of relevance in M-theory; the $Z^{(p)}$-operators are presumably related (as is shown for particular cases where they effectively behave as \emph{central} charges) to different charges associated with states that classically are $p$-brane-like solutions of its low energy effective theory, eleven dimensional supergravity. Taking into account the symmetry constraint (\ref{simZ}) there exist charges for $p=1,2,5$ (commonly associated with the massless superparticle, supermembrane and superfivebrane of M-theory); the superalgebra is \bea \left\{ Q^\Lambda , Q^\Omega \right\} &=& (\Gamma^{M} {\cal C}^{-1} )^{\Lambda\Omega}\; Z_M^{(1)} - {1\over 2} \; ( \Gamma^{MN} {\cal C}^{-1} )^{\Lambda\Omega}\; Z_{MN}^{(2)}\nonumber\\ &+& {1\over 5!}\; (\Gamma^{(\mu_5 )} {\cal C}^{-1} )^{\Lambda\Omega}\; Z_{(\mu_5)}^{(5)} \label{firstd=11}\\ \left[ Q^\Lambda , Z_{(\mu_p )}^{(p)}\right] &=& 2\;\mu\; ({\Gamma_{(\mu_p )}})^\Lambda{}_\Omega\;Q^\Omega\\ \left[ Z_{M}^{(1)}, Z_{N}^{(1)}\right] &=& 4\;\mu\; Z_{MN}^{(2)}\\ \left[ Z_M^{(1)} , Z^{(2)}_{N_1 N_2} \right] &=& 4\;\mu \left( \eta_{M N_1}\; Z_{N_2}^{(1)} - \eta_{M N_2} \; Z_{N_1}^{(1)}\right)\\ \left[ Z_M^{(1)} , Z^{(5)}_{(\nu_5)} \right] &=& 4\;\mu\; {1\over 5!}\; \epsilon^{(\rho_5}{}_{M\nu_5)}\; Z_{(\rho_5 )}^{(5)}\\ \left[ Z_{MN}^{(2)}, Z_{PQ}^{(2)} \right] &=& 4\;\mu\;\left( \eta_{MQ}\; Z_{NP}^{(2)} + \eta_{NP}\; Z_{MQ}^{(2)} \right) - (P\leftrightarrow Q)\\ \left[ Z_{M_1 M_2}^{(2)} , Z^{(5)}_{(\nu_5)} \right] &=& 4\;\mu \;\left[ \left( \eta_{M_2 N_1}\; Z_{M_1 N_2\dots N_5}^{(5)} + {\rm cyclic}(N_1\dots N_5)\right) - \left( M_1 \leftrightarrow M_2 \right)\right]\\ \left[ Z_{(\mu_5)}^{(5)} , Z^{(5)}_{(\nu_5)} \right] &=& 4\;\mu \biggl( - \epsilon^{(R_1}{}_{\mu_5\nu_5)}\; Z_{R_1}^{(1)} + \sum_{\sigma ,\tau \in {\cal P}_5 }\; (-)^{\sigma+ \tau}\; \biggl({1 \over 24}\; \prod_{l=1}^4 \; \eta_{M_{\sigma(l)} N_{\tau(l)}} Z_{M_{\sigma(5)} N_{\tau(5)}}^{(2)} \nonumber\\ &+& {1\over {72\; 5!}} \prod_{l=1}^2 \eta_{M_{\sigma(l)} N_{\tau(l)}} \epsilon^{(\rho_5}{}_{M_{\sigma(3)}\dots M_{\sigma(5)}N_{\tau(3)} \dots N_{\tau(5)})} Z_{(\rho_5 )}^{(5)} \biggr)\biggr)\label{lastd=11} \eea \bigskip \noindent{\underline{$n=4, \; N = 2$}} \footnote{ 
Because $\;\eta=+1\;$ if $\;n=4 \; \rm{mod}\; 4$, there exists no non trivial $N=1$ extension (see (\ref{simu})) and the $Z$'s on the RHS of (\ref{qq}) behave like ``central" charges, but of course that $\mu = 0$ is always the trivial choice for any $n, N$. } The matrix $\mu$ is written as $\;\mu_{IJ} = \mu\; \epsilon_{IJ}$ and the symmetry constraint (\ref{simZ}) does not rule out any $p$-charge; being $\;\delta_0 = \delta_1 = -\delta_2 = -\delta_3 = \delta_4 = 1\;$, I write $Z_{IJ(\mu_0 )}^{(0)} \equiv Z_{IJ}$ and $\; Z_{IJ(\mu_p )}^{(p)}\equiv \epsilon_{IJ}\; Z_{(\mu_p)}^{(p)}\;\;$ for $p=2,3\;$, being symmetric for $p=0, 1, 4$. With these conventions the algebra is \footnote{ Let us notice in particular that the subalgebra for $p=0$ in (\ref{z0alg}) is isomorphic to $sp(2, \Re )$ (take $J_3\sim Z_{12} /2\mu\; ,\; J_+\sim Z_{11} /2\mu \; , \; J_- \sim Z_{22} /2\mu\;$ ; then $[J_3 , J_{\pm}] = \pm\; J_\pm \; , [J_+ , J_- ] = -2\;J_3 \;$). Its appearence is natural from the fact that this is the group of automorphisms of the algebra since their transformations according to (\ref{mu}) leave invariant the matrix $\mu$. } \bea \left\{ Q^\Lambda_I , Q_{J;\Omega} \right\} &=& \delta^\Lambda_\Omega\; Z_{IJ} + (\Gamma^M )^\Lambda{}_\Omega\; Z_{IJ(M)}^{(1)} +{1\over 4!} \; ( \Gamma^{\mu_4} )^\Lambda{}_\Omega\; Z_{IJ(\mu_4)}^{(4)}\nonumber\\ &-& \epsilon_{IJ}\;\left({1\over 2!}\; (\Gamma^{MN})^\Lambda{}_\Omega\; Z_{(MN)}^{(2)} + {1\over 3!}\; (\Gamma^{(\mu_3 )})^\Lambda{}_\Omega\; Z_{(\mu_3)}^{(3)}\right)\\ \left[ Q^\Lambda_K , Z_{IJ(\mu_p )}^{(p)}\right] &=& \mu\; \left( {\Gamma_{(\mu_p )}})^\Lambda{}_\Omega\; (\epsilon_{JK}\; Q^\Omega_I + \epsilon_{IK}\; Q^\Omega_J \right)\\ \left[ Q^\Lambda_I , Z_{(\mu_p )}^{(p)}\right] &=& - \mu\; ({\Gamma_{(\mu_p )}})^\Lambda{}_\Omega\;Q^\Omega_I \\ \left[ Z_{IJ} , Z_{KL(\mu_p)}^{(p)}\right] &=& \mu\;\left( \epsilon_{LI} Z_{KJ(\mu_p )}^{(p)} + \epsilon_{LJ}\; Z_{KI(\mu_p )}^{(p)}\right) + (K\leftrightarrow L)\label{z0alg}\\ \left[ Z_{IJ} , Z_{(\mu_p)}^{(p)}\right] &=& 0\\ \left[ Z_{IJ(M)}^{(1)}, Z_{KL(N)}^{(1)}\right] &=& \mu\; \biggl( \eta_{MN}\; ( \epsilon_{LI}\; Z_{KJ} + \epsilon_{LJ}\; Z_{KI} + \epsilon_{KI}\; Z_{LJ} + \epsilon_{KJ}\; Z_{LI} )\nonumber\\ &+&\ 2\; (\epsilon_{IL}\; \epsilon_{JK} + \epsilon_{IK}\; \epsilon_{JL} )\; Z_{(MN)}^{(2)}\biggr)\\ \left[ Z_{IJ(M)}^{(1)} , Z^{(2)}_{(N_1 N_2)} \right] &=& 2\;\mu \left( \eta_{M N_2} \;Z_{IJ(N_1 )}^{(1)} - \eta_{M N_1}\; Z_{IJ(N_2 )}^{(1)} \right)\\ \left[ Z_{IJ(M)}^{(1)} , Z^{(3)}_{(\nu_3)} \right] &=& -2\;\mu\; Z_{IJ(M\nu_3 )}^{(4)}\\ \left[ Z_{IJ(M)}^{(1)} , Z^{(4)}_{KL(\nu_4)} \right] &=& \mu\; \biggl( (\epsilon_{IL}\; \epsilon_{JK} + \epsilon_{IK}\; \epsilon_{JL} )\; (\eta_{MN_1}\; Z^{(3)}_{N_2 N_3 N_4} - \eta_{MN_2}\; Z^{(3)}_{N_3 N_4 N_1}\nonumber\\ &+&\eta_{MN_3}\; Z^{(3)}_{N_4 N_1 N_2} - \eta_{MN_4}\; Z^{(3)}_{N_1 N_2 N_3})\nonumber\\ &+& {i\over 4!}\; \epsilon^{(\rho_4 }{}_{M\nu_4 )}\; (\epsilon_{IL}\; Z_{KJ(\rho_4)}^{(4)} + \epsilon_{JL}\; Z_{KI(\rho_4)}^{(4)})\biggr) + \left( K\leftrightarrow L\right)\\ \left[ Z_{(MN)}^{(2)}, Z_{(PQ)}^{(2)} \right] &=& 2\;\mu\;\left( \eta_{MP}\; Z_{(NQ)}^{(2)} + \eta_{NQ}\; Z_{(MP)}^{(2)}\right) - (P\leftrightarrow Q)\\ \left[ Z_{(M_1 M_2 )}^{(2)} , Z^{(3)}_{(\nu_3)} \right] &=& 2\;\mu\; \left( \eta_{M_1 N_1}\; Z_{(M_2 N_2 N_3 )}^{(3)} + {\rm cyclic}(N_1 N_2 N_3)\right) - \left( M_1 \leftrightarrow M_2\right)\\ \left[ Z_{(\mu_2 )}^{(2)} , Z^{(4)}_{IJ(\nu_4)} \right] &=& 2\;\mu\; \biggl( \eta_{M_1 N_1}\; Z_{IJ(M_2 N_2 N_3 N_4)}^{(4)} - \eta_{M_1 N_2}\; 
Z_{IJ(M_2 N_3 N_4 N_1)}^{(4)}\nonumber\\ &+& \eta_{M_1 N_3}\; Z_{IJ(M_2 N_4 N_1 N_2)}^{(4)} - \eta_{M_1 N_4}\; Z_{IJ(M_2 N_1 N_2 N_3)}^{(4)}\biggr)\nonumber\\ &-& \left( M_1 \leftrightarrow M_2\right)\\ \left[ Z_{(\mu_3)}^{(3)} , Z^{(3)}_{(\nu_3)} \right] &=& \mu \; \biggl( \sum_{\sigma ,\tau \in {\cal P}_3}\; (-)^{\sigma +\tau}\; \eta_{M_{\sigma (1)} N_{\tau (1)}}\; \eta_{M_{\sigma (2)} N_{\tau (2)}}\; Z_{( M_{\sigma (1)} N_{\tau (1)} )}^{(2)}\nonumber\\ &-& {i\over 3}\; \epsilon^{(\rho_3}{}_{\mu_3\nu_3 )}\; Z^{(3)}_{(\rho_3)}\biggr) \\ \left[ Z_{(\mu_3)}^{(3)} , Z^{(4)}_{IJ(\nu_4)} \right] &=& 2\; \mu \; \biggl( \sum_{\tau \in {\cal P}_4 }\; (-)^\tau \; \prod_{l=1}^3 \; \eta_{M_l N_{\tau(l)}}\; Z_{IJ(N_{\tau (4)})}^{(1)} \nonumber\\ &+& \frac{i}{288} \sum_{\sigma\in{\cal P}_3 \atop \tau\in{\cal P}_4} (-)^{\sigma +\tau} \eta_{M_{\sigma(1)} N_{\tau(1)}} \epsilon^{(\rho_4}{}_{M_{\sigma(2)}M_{\sigma(3)} N_{\tau(2)} N_{\tau(3)} N_{\tau(4)})} Z_{IJ(\rho_4 )}^{(4)} \biggr)\nonumber\\ & & \\ \left[ Z_{IJ(\mu_4)}^{(4)} , Z^{(4)}_{KL(\nu_4)} \right] &=& \mu \; \biggl\{ \epsilon_{LI}\;\biggl( \;\sum_{\tau \in {\cal P}_4 }\; (-)^\tau \; \prod_{l=1}^4 \;\eta_{M_l N_{\tau(l)}}\; Z_{KJ} \nonumber\\ &+& \frac{1}{8}\; \sum_{\sigma ,\tau\in{\cal P}_4}\; (-)^{\sigma +\tau} \;\prod_{l=1}^2 \;\eta_{M_{\sigma(l)} N_{\tau(l)}}\; Z^{(4)}_{KJ ( M_{\sigma(3)}M_{\sigma(4)} N_{\tau(3)} N_{\tau(4)})}\nonumber\\ &+& \frac{\epsilon_{KJ}}{6}\; \sum_{\sigma ,\tau\in{\cal P}_4}\; (-)^{\sigma +\tau} \;\biggl( \prod_{l=1}^3 \; \eta_{M_{\sigma(l)} N_{\tau(l)}}\; Z^{(2)}_{(M_{\sigma(4)}N_{\tau(4)})}\nonumber\\ &-& \frac{i}{36}\; \eta_{M_{\sigma(1)} N_{\tau(1)}}\; \epsilon^{(\rho_3}{}_{ M_{\sigma(2)}M_{\sigma(3)}M_{\sigma(4)} N_{\tau(2)} N_{\tau(3)}N_{\tau(4)})}\; Z^{(3)}_{(\rho_3 )} \biggr)\nonumber\\ &-& i\; \epsilon^{(R}{}_{\mu_4\nu_4 )}\; Z^{(1)}_{KJ(R)} \biggr) + (I\leftrightarrow J ) \biggr\} + \left( K\leftrightarrow L\right) \eea \noindent{\underline{$n=1,\; N\; \rm{arbitrary} $} } This three dimensional algebra presents one scalar charge $\; Z_{IJ(\mu_0 )}^{(0)} \equiv Z_{IJ} = - Z_{JI}\;$ and a vector one $\; Z_{IJ(M)}^{(1)} = Z_{JI(M)}^{(1)}$. The matrix $\mu$ is symmetric and by definiteness I take it to be pseudorthogonal with signature $(N-d,d)$, $\mu_{IJ} = \eta_{IJ}$. The algebra reads \bea \left\{ Q^\Lambda_I , Q_{J;\Omega} \right\} &=& \delta^\Lambda_\Omega\; Z_{IJ} + (\Gamma^M )^\Lambda{}_\Omega\; Z_{IJ(M)}^{(1)}\\ \left[ Q^\Lambda_K , Z_{IJ}\right] &=& \eta_{JK}\; Q^\Lambda_I - \eta_{IK}\; Q^\Lambda_J\\ \left[ Q^\Lambda_K , Z_{IJ(M)}^{(1)}\right] &=& ({\Gamma_M})^\Lambda{}_\Omega\; \left( \eta_{JK}\; Q^\Omega_I + \eta_{IK}\; Q^\Omega_J \right)\\ \left[ Z_{IJ} , Z_{KL}\right] &=& \eta_{LI}\; Z_{KJ} - \eta_{LJ}\; Z_{KI} - (K\leftrightarrow L )\\ \left[ Z_{IJ} , Z_{KL(M)}^{(1)}\right] &=& \eta_{LI}\; Z_{KJ(M)}^{(1)} - \eta_{LJ}\; Z_{KI(M)}^{(1)} + (K\leftrightarrow L )\\ \left[ Z_{IJ(M)}^{(1)} , Z^{(1)}_{KL(N)} \right] &=& \left( \eta_{LI}\; \left( \eta_{MN}\; Z_{KJ} - \epsilon^{(R}{}_{MN)} \; Z_{KJ(R)}^{(1)} \right) + (I\leftrightarrow J) \right)\nonumber\\ &+&(K\leftrightarrow L) \eea \section*{Conclusions} I have presented in this paper an $N$-extended superalgebra by $p$-form tensor operators for any dimension in which Majorana spinors exists, in some sense a generalization of early work in reference \cite{holpro} where the minimal case $N=1$ was considered. \footnote{ Also the case of dimensions $D=9 \; \rm{mod}\; 8$ was not considered there. 
} This is only a first step, however, towards a generalization of this work that should contain spin operators other than the supersymmetric charges. This inclusion is motivated mainly by the fact that formulations of an $N$-extended target superspace that add new degrees of freedom (the coordinates corresponding to the new generators) could be of utmost importance in the construction of super $p$-brane actions \cite{sez2}. The fact that the algebra given in (\ref{firstd=11})-(\ref{lastd=11}) does not reduce to the $D=11$ algebra discovered in \cite{sez1} seems to give a hint that this generalization should exist and be parameter-dependent. It should also be interesting to work out the representations as well as field-theoretic realizations of these algebras. Finally, it is worth remarking that even-dimensional results can be obtained straightforwardly by naive dimensional reduction of the results presented here; however, not all possibilities may be covered this way, because more covariance constraints than are really needed are imposed along this route, in particular when writing equations like (\ref{qz}). I hope to address some of these questions in the near future. \bigskip I thank Jos\'e Edelstein for bringing to my attention references \cite{sez1}, \cite{sez2} (from which I learnt about the existence of \cite{holpro}) and for useful comments.
1,314,259,992,749
arxiv
\section{Introduction} The role of many-body pairing in Fermi systems above the critical superfluid temperature - the so-called pseudogap pairing - is a complex and intriguing problem. It has been long recognized that the pseudogap pairing is important in underpinning superconductivity in high-temperature superconductors \cite{loktev2001,Chen2005,Stajic2017}, however due to quantum fluctuations such pairing is difficult to understand \cite{Chien2010,Mueller2017}. The advancement of experimental techniques in trapping and control of interactions in ultracold Fermi gases makes them an ideal platform to study high-temperature many-body pairing across the crossover from a Bose-Einstein condensate (BEC) to a Bardeen-Cooper-Schrieffer (BCS) superfluid \cite{Gaebler2010}. Two-dimensional (2D) ultracold Fermi gases are of particular interest due to the increasingly important role of quantum fluctuations in low dimensions and it is expected that the interaction and temperature regimes where pseudogap pairing dominates, known as the pseudogap regime, is much more pronounced \cite{torma2016,Mueller2017}. Probing the pseudogap regime is difficult as there is no conclusive phase transition across the BEC-BCS crossover. The most widely used theoretical definitions of the pseudogap formation temperature are when a minimum enters the density of states (DoS) or there is a "backbend" in the spectral function \cite{Zwerger2009,Ohashi2009,Hu2010prl,Chien2010,Perali2011,Mueller2017}. However, there is no uniquely defined transition and these methods can lead to competing formation temperatures. For example, in 2D the suppression entering the DoS near the Fermi surface leads to a limit in the weakly interacting BCS regime where pairing and condensation occur at different temperatures. Hence, one has to take a more significant suppression in the DoS to define a consistent and meaningful pseudogap formation temperature \cite{Bauer2014}. Another technique to observe the effects of many-body pairing is to calculate the equation of state (EoS) and thermodynamic properties \cite{kinast2005heat}. It has been observed that the spin susceptibility and specific heat at constant volume contain information about the pairing in a three-dimensional interacting Fermi gas and a characteristic transition temperature can be defined \cite{vanwyk2016}. In this work we will determine the pseudogap regime using the specific heat at constant volume in two dimensions and compare to the pseudogap regime predicted from the suppression in the DoS (see, i.e., Ref.~\cite{Bauer2014}). On the experimental side, the advancement of trapping techniques over the last few years has seen a set of important measurements on 2D interacting Fermi gases~\cite{Martiyanov2010,Feld2011,Frolich2011,Zhang2012}. There was much debate about the pairing regime found in these experiments \cite{Watanabe2013,Bauer2014,Marsiglio2015}, where it was argued that the regime probed was not many-body pairing, but two-body pairing. In order for many-body pairing to exist the Fermi gas must have a defined Fermi surface and pairing comes from the many-body nature of the system, which is seen to be true for a chemical potential $\mu>0$. However, it has also been argued by Ref.~\cite{Marsiglio2015} that the criterion of a positive chemical potential is too strict and that many-body pairing can exist for a wider interaction and temperature range. 
Recent experimental work by Murthy in Ref.~\cite{Murthy2018} has seemingly observed the high-temperature pairing in 2D Fermi gases for a wide range of interaction strengths and temperatures, although they did not determine a phase diagram. All previous theoretical methods used to study the pseudogap regime in two dimensions (i.e., $x-y$ plane) have relied upon a \emph{single} channel model of fermions with a contact interaction. It has been found that this model works very well in explaining experimental data for both below the Berezinskii-Kosterlitz-Thouless (BKT) transition \cite{Bertaina2011,He2015,Schonenberg2016,Mulkerin2017,he2019reaching} and the EoS in the normal state \cite{Klimin2012,Watanabe2013,Bauer2014,Mulkerin2015,Marsiglio2015,Matsumoto2014,Anderson2015}. However, recent measurements on the breathing mode and quantum anomaly of 2D Fermi gases \cite{Holten2018,Peppler2018} have found a significant deviation from the state-of-the-art theoretical prediction using the single-channel model \cite{Hofmann2012,Gao2012}. This difference could not be explained through a temperature dependence of the experimental data alone \cite{Mulkerin2018}, and including higher-order excitations along the $z$-axis of the quasi-2D system is crucial in capturing the reduced breathing mode anomaly \cite{Toniolo2018,Hu2019}. Theoretical studies focused on the importance of the quasi-2D nature of scattering and the increased role of confined fermions being able to occupy higher excited single-particle states along the $z$-direction, even when the trapping is extremely tight~\cite{Kestner2007b,Fischer2013,Hu2019,Wu2019}. Including dressed molecules within a two-channel model has been found to effectively describe this situation \cite{Levinsen2013,Hu2019}, where the molecular state encapsulates the higher excited states and characterizes a confinement-induced effective range of interactions. This highlights the importance of understanding the pseudogap regime by using the two-channel model. The purpose of this work is to understand the role played by the confinement-induced effective range on many-body pairing within the two-channel model. Using a field theoretic method to include pairing fluctuations, we calculate the pseudogap formation temperature from the specific heat at constant volume, $\tilde{T}$ \cite{vanwyk2016}. We compare this characteristic temperature to the pseudogap temperature determined through a suppression in the DoS at the Fermi surface. We find that when using the definition of the pseudogap temperature, $T^*$, as a dip in the DoS of 25\% of the value at the left fringe, there is a good agreement between $T^{*}$ and $\tilde{T}$ in the weakly interacting regime. We then investigate the role played by the confinement-induced effective range on the pseudogap formation temperature of $\tilde{T}$ and see that the effective range shifts the pseudogap window towards weaker binding energies. Finally, we compare our results to the recent radio-frequency (rf) spectroscopy measurements of the pseudogap regime by Murthy \textit{et al.} in Ref.~\cite{Murthy2018}, which is the most promising way to experimentally map out the pseudogap regime. For this purpose, we also calculate rf-spectra for a trapped system from the analytically continued Green's function, and examine the role of the confinement-induced effective range. A good \emph{qualitative} agreement is found between the experimental data and the theoretical pseudogap regime defined by $\tilde{T}$. 
However, we find that the inclusion of the confinement-induced effective range does not improve the agreement. The rest of our manuscript is set out as follows. In Sec.~\ref{sec:hamil}, we introduce the two-channel model Hamiltonian and outline the many-body $T$-matrix theory. In Sec.~\ref{sec:results}, we calculate the specific heat at constant volume for a 2D interacting Fermi gas, and using the properties of the specific heat we determine the pseudogap regime and compare it to the pseudogap regime found from the DoS. In Sec.~\ref{comp}, we compare our results to the recent experimental measurements, by calculating the rf-spectra and the pseudogap temperature. And finally in Sec.~\ref{sec:conc}, we summarize our findings. For simplicity we set $\hbar=1$ throughout. \section{Hamiltonian} \label{sec:hamil} We start our calculation of the many-body Green's function within a two-channel model of the 2D interacting Fermi gas in the normal state, described by the Hamiltonian~\cite{Ohashi2002,Gurarie2007,Tajima2018b,Mulkerin2020}: \begin{eqnarray}\label{eq:hamil} \mathcal{H} & = & \sum_{\mathbf{k}\sigma}\xi_{\mathbf{k}}c_{\mathbf{k}\sigma}^{\dagger}c_{\mathbf{k}\sigma}^{\phantom{\dagger}}+\sum_{\mathbf{q}}\left(\epsilon_{\mathbf{q}}/2+\nu-2\mu\right)b_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}^{\phantom{\dagger}}\nonumber \\ & & +g_{\rm b}\sum_{\mathbf{kq}}\left(b_{\mathbf{q}}^{\phantom{\dagger}}c_{\mathbf{q}/2+\mathbf{k}\uparrow}^{\dagger}c_{\mathbf{q}/2-\mathbf{k}\downarrow}^{\dagger}+ {\rm H.c.}\right), \end{eqnarray} where ${\rm H.c.}$ is the Hermitian conjugate, $c_{\mathbf{k}\sigma}$ are the annihilation operators of atoms with spin $\sigma=\uparrow,\downarrow$ and mass $M$ in the open channel, and $b_{\mathbf{q}}$ are the annihilation operators of molecules in the closed channel. The kinetic energy of the Fermi atoms measured from the chemical potential $\mu$ is $\xi_{\mathbf{k}} = \epsilon_{\mathbf{k}}-\mu$, where $\epsilon_{\mathbf{k}}=k^2/(2M)$. The threshold energy of the diatomic molecule is $\nu$ and the Feshbach coupling is $g_{\rm b}$. As we have used a momentum-independent Feshbach coupling constant, which is unphysical at the high energy, there is an ultraviolet divergence. This divergence can be removed by renormalizing $\nu$, as we discuss in detail in Appendix A. $\nu$ and $g_{\rm b}$ are related to the physical observables of the binding energy $\varepsilon_B$ and the effective range of interactions $R_s<0$ via, \begin{alignat}{1} \nu&=-\varepsilon_B+g^{2}_{\rm b}\sum_{\mathbf{k}}\frac{1}{2\epsilon_{\mathbf{k}}+\varepsilon_B},\\ g^{2}_{\rm b}&=-\frac{4\pi\hbar^{4}}{M^{2}}\frac{1}{R_{s}}. \end{alignat} \subsection{Many-body $T$-matrix theory} \label{sec:Tmatrix} \begin{figure}\centering{} \includegraphics[width=0.9\columnwidth]{diags} \caption{\label{fig:diags} (color online) The Feynman diagrams for (a) the fermion self energy, (b) the molecular self-energy, and (c) the vertex function within the ladder-approximation. } \end{figure} We consider the effect of pair fluctuations on the normal state properties of a strongly-correlated Fermi system through the non-self-consistent $T$-matrix approximation. 
The interacting thermal Green's function of fermions at temperature $T$ is given by \cite{Liu2005PRA,Ohashi2009,Tajima2018b}, \begin{alignat}{1} \label{eq:gfdown} G(\mathbf{k},i\omega_m)=\frac{1}{i\omega_m-\left(\epsilon_{\mathbf{k}}-\mu\right)-\Sigma_{a}(\mathbf{k},i\omega_m)}, \end{alignat} where we sum all of the ladder-type diagrams to obtain the self-energy (see Fig.~\ref{fig:diags}(a)), \begin{equation} \Sigma_{a}=k_{B}T\sum_{\mathbf{q},i\nu_{n}}G^{(0)}\left(\mathbf{q}-\mathbf{k},i\nu_{n}-i\omega_{m}\right)\Gamma\left(\mathbf{q},i\nu_{n}\right).\label{eq:selfenergy1} \end{equation} Here, the fermionic and bosonic Matsubara frequencies are respectively $\omega_{m}=(2m+1)\pi k_BT$ and $\nu_{n}\equiv2n\pi k_BT$ for integers $m$ and $n$, and the free Green's function is $G^{(0)}(\mathbf{k},i\omega_m)=(i\omega_m-\xi_{\mathbf{k}})^{-1}$. The vertex function $\Gamma\left(\mathbf{q},i\nu_{n}\right)$, which is an effective bosonic propagator, can be written through Fig.~\ref{fig:diags}(c), \begin{equation} \Gamma^{-1}\left(\mathbf{q},i\nu_{n}\right)=U_{\rm eff}^{-1}\left(\mathbf{q},i\nu_{n}\right)+\Pi\left(\mathbf{q},i\nu_{n}\right),\label{eq:vertexfunction} \end{equation} with the effective interaction $U_{\rm eff}\equiv g_{\rm b}^2 D_0(\mathbf{q},i\nu_n)$ and the pair propagator $\Pi(\mathbf{q},i\nu_{n})$, \begin{alignat}{1} \Pi &=k_{B}T\sum_{\mathbf{k},i\omega_{m}}G^{(0)}\left(\mathbf{q}-\mathbf{k},i\nu_{n}-i\omega_{m}\right)G^{(0)}\left(\mathbf{k},i\omega_{m}\right), \nonumber \\ &=k_{B}T\sum_{\mathbf{k},i\omega_{m}}\frac{1-f(\xi_{\frac{\mathbf{q}}{2}-\mathbf{k}})-f(\xi_{\frac{\mathbf{q}}{2}+\mathbf{k}})}{2\epsilon_{\mathbf{k}}-2\mu+ \epsilon_{\mathbf{q}}/2-i\nu_n }. \label{Eq:propagator2p} \end{alignat} Here, the free Green's function of a molecular boson is $D_0(\mathbf{q},i\nu_n) = 1/\left[i\nu_n - \epsilon^{\rm B}_{\mathbf{q}} \right]$ with dispersion $\epsilon^{\rm B}_{\mathbf{q}} = \epsilon_{\mathbf{q}}/2 -\nu +2\mu$. As shown in Fig.~\ref{fig:diags}(b), similar to the fermionic Green's function, the interacting Green's function of the molecular boson also includes a self-energy correction, \begin{alignat}{1} D(\mathbf{q},i\nu_n) = \frac{1}{i\nu_n - \epsilon_{\mathbf{q}}/2-\nu +2 \mu- \Sigma_{\rm m}(\mathbf{q},i\nu_n)}, \end{alignat} where $\Sigma_{\rm m}(\mathbf{q},i\nu_n)$ is given by \begin{alignat}{1} \label{eq:mol_se} \Sigma_{\rm m} = -g_{\rm b}^2 \Pi({\mathbf{q}},i\nu_n). \end{alignat} At a given temperature, binding energy and effective range we tune the chemical potential to satisfy the particle number equation: \begin{alignat}{1} N &= N_{\rm a} + 2N_{\rm m} \nonumber \\ &=2k_BT\sum_{\mathbf{k},i\omega_m}G(\mathbf{k},i\omega_m)+2k_BT\sum_{\mathbf{q},i\nu_n}D(\mathbf{q},i\nu_n). \label{eq:Dens:eos} \end{alignat} To make the equations dimensionless, we define the Fermi units $k_{\textrm{F}}=(2\pi n)^{1/2}$, $\varepsilon_{\rm F}=k_{\textrm{F}}^{2}/(2M)$, and $T_{\textrm{F}}=\varepsilon_{\rm F}/k_{B}$, where $n=N/V=k_{\rm F}^2/2\pi$ is the total density and $V$ is the area (or the volume in 2D). We then converge the chemical potential $\mu/\varepsilon_{\rm F}$ at a given reduced temperature $T/T_{{\rm F}}$, binding energy $\varepsilon_{B}/\varepsilon_{\rm F}$, and effective range $k_{{\rm F}}^{2}R_{s}$. 
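As a minimal illustration of this last step, the sketch below converges the chemical potential by root finding on a number equation; the function names and root brackets are ours, and the ideal 2D Fermi gas density (for which $\mu=k_BT\ln(e^{T_{\rm F}/T}-1)$ is known in closed form) stands in for the full Eq.~\eqref{eq:Dens:eos}, whose fermionic and molecular contributions would enter in the same way.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Minimal sketch (not the production code): converge mu/eps_F at fixed
# reduced temperature t = T/T_F by root finding on a number equation.
# The ideal 2D Fermi gas density, n/n_tot = t*ln(1 + exp(mu/t)) in Fermi
# units (eps_F = k_B T_F = 1), stands in for the full number equation.

def density_free(mu, t):
    """Ideal 2D Fermi gas density in units of the total density."""
    return t * np.logaddexp(0.0, mu / t)

def converge_mu(t, density=density_free, target=1.0):
    """Solve density(mu, t) = target for mu (in units of eps_F)."""
    return brentq(lambda mu: density(mu, t) - target, -50.0, 50.0)

if __name__ == "__main__":
    for t in (0.2, 0.5, 1.0):
        # exact ideal-gas result mu = t*ln(exp(1/t) - 1) as a check
        print(t, converge_mu(t), t * np.log(np.expm1(1.0 / t)))
\end{verbatim}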
The closed set of Eqs.~\eqref{eq:gfdown}-\eqref{eq:mol_se}, can be solved directly with a numerical sum over the Matsubara frequencies, as done in Ref.~\cite{Liu2005PRA}; however within this methodology it is difficult to numerically continue the thermal Green's function to the real axis, which is needed for obtaining the spectral function and the DoS. Alternatively, we can analytically continue the Matsubara frequencies to the real axis first, allowing us to directly calculate the analytically continued Green's function~\cite{Marsiglio2015,Pietil2012,Veillette2008}. The thermal Green's function then becomes \begin{alignat}{1}\label{eq:Green_cont} G(\mathbf{k},\omega^{+})=\frac{1}{\omega^{+}-\left(\epsilon_{\mathbf{k}}-\mu_{\downarrow}\right)-\Sigma_{a}(\mathbf{k},\omega^{+})}, \end{alignat} where $\omega^{+}\equiv\omega+i0^{+}$. Using contour integration the self-energy function takes the form \cite{Rohe2001}, \begin{alignat}{1} \Sigma_{a} & (\mathbf{k},\omega^{+})=\nonumber \\ & \int\frac{d\mathbf{q}}{(2\pi)^{2}}\frac{d\epsilon}{\pi}\biggl[b(\epsilon)G^{(0)}(\mathbf{k}-\mathbf{q},\epsilon-\omega^{+}){\rm Im}\Gamma(\mathbf{q},\epsilon^{+})\nonumber \\ & -f(\epsilon){\rm Im}G^{(0)}(\mathbf{k},\epsilon^{+})\Gamma(\mathbf{k}+\mathbf{q},\epsilon+\omega^{+})\biggl], \end{alignat} where $f(z)=[\exp(\beta z)+1]^{-1}$ and $b(z)=[\exp(\beta z)-1]^{-1}$ are the Fermi and Bose distributions respectively, with $\beta \equiv 1/(k_BT)$. We then find the imaginary part of the analytically continued self-energy, \begin{alignat}{1} {\rm Im}\,\left[\Sigma_{a}(\mathbf{k},\omega)\right]= & \int\frac{d\mathbf{q}}{(2\pi)^{2}}\frac{d\epsilon}{2\pi}\left[b(\epsilon)+f(\epsilon-\omega)\right]\nonumber \\ & \times{\rm Im}\Gamma(\mathbf{q},\epsilon)\,{\rm Im}G^{(0)}(\mathbf{q}-\mathbf{k},\epsilon-\omega), \end{alignat} and we calculate the real part of the self-energy from the Kramers-Kronig relation, \begin{alignat}{1} \label{eq:Sig_real} {\rm Re}\,\left[\Sigma_a(\mathbf{k},\omega)\right]=\frac{1}{\pi}\mathcal{P}\int_{-\infty}^{\infty}d\omega'\frac{{\rm Im}\left[\Sigma_a(\mathbf{k},\omega')\right]}{\omega'-\omega}. \end{alignat} The DoS is calculated by analytically continuing the fermionic Green's function and integrating over the momenta \begin{alignat}{1} \rho(\omega) & = -\frac{1}{\pi} \sum_{\mathbf{k}} {\rm Im} G(\mathbf{k},i\omega_m\rightarrow\omega+i0^+), \nonumber \\ & \equiv \sum_{\mathbf{k}} A(\mathbf{k},\omega+i0^+). \end{alignat} It is possible to relate the above many-body $T$-matrix theory to the Nozi\`ere-Schmitt Rink \cite{nozieres1985bose,Mulkerin2019b} approach by truncating the self-energy to the first order, i.e., \begin{alignat}{1} \label{eq:NSR_Green} G(\mathbf{k},i\omega_m) = G_0(\mathbf{k},i\omega_m)+G_0 \Sigma_a(\mathbf{k},i\omega_m) G_0. \end{alignat} This is equivalent to writing the thermodynamic potential for a two-channel model \begin{alignat}{1} \Omega = \Omega^{(0)}_{\rm F} + \Omega^{(0)}_{\rm B}- \sum_{\mathbf{q},i\nu_n} \ln \left[ 1+g_{\rm b}^2D_0 \Pi(\mathbf{q},i\nu_n) \right], \end{alignat} where $\Omega^{(0)}_{\rm F} = 2\sum_{\mathbf{k}} \ln( e^{-\beta\epsilon_{\mathbf{k}}} +1)$ is the free fermionic thermodynamic potential and $\Omega^{(0)}_{\rm B} = \sum_{\mathbf{q}} \ln(e^{-\beta\epsilon^{\rm B}_{\mathbf{q}}} - 1)$ is the free bosonic thermodynamic potential. 
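Numerically, the Kramers-Kronig step in Eq.~\eqref{eq:Sig_real} is conveniently handled by subtracting the value of the integrand at the singular point; the stand-alone sketch below (with our own function names, and a Lorentzian used only as a consistency check) illustrates this on a uniform frequency grid.
\begin{verbatim}
import numpy as np

# Minimal sketch (illustrative): the Kramers-Kronig transform
#   Re S(w) = (1/pi) P.V. int dw' Im S(w')/(w'-w)
# on a uniform grid; the principal value is handled by subtracting
# Im S(w), which removes the integrable singularity at w' = w.

def kramers_kronig(omega, im_sigma):
    """Return Re Sigma(omega) given Im Sigma(omega) on a uniform grid."""
    dw = omega[1] - omega[0]
    re_sigma = np.empty_like(im_sigma)
    for i, w in enumerate(omega):
        diff = omega - w
        diff[i] = 1.0                  # numerator vanishes here anyway
        re_sigma[i] = np.sum((im_sigma - im_sigma[i]) / diff) * dw / np.pi
    return re_sigma

if __name__ == "__main__":
    # Consistency check: Im S = -g/((w-e)^2+g^2) must give
    # Re S = (w-e)/((w-e)^2+g^2), i.e. S(w) = 1/(w - e + i g).
    w = np.linspace(-40.0, 40.0, 4001)
    e, g = 0.5, 0.3
    im_s = -g / ((w - e) ** 2 + g ** 2)
    re_exact = (w - e) / ((w - e) ** 2 + g ** 2)
    err = np.abs(kramers_kronig(w, im_s) - re_exact)
    print(err[np.abs(w) < 10.0].max())   # small grid/truncation error
\end{verbatim}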
Although the pressure equation of state (EoS) and thermodynamic properties can be calculated from the density equation of state in Eq.~\eqref{eq:Dens:eos} via the Gibbs-Duhem relation \cite{Bauer2014,Mulkerin2015}, we use the NSR approach for the calculation of the specific heat at constant volume as this is considerably simpler: it is more feasible to calculate numerically the derivatives with respect to the chemical potential and interaction strength. We expect that there is a small correction to the specific heat at constant volume when the self-energy becomes more significant and the approximation of Eq.~\eqref{eq:NSR_Green} weakens. Through the thermodynamic potential we can calculate the thermodynamic properties of the system, starting with the pressure EoS $P=\Omega/V$, the energy $E = -TS+\Omega+\mu N$, and the entropy $S = -\left(\partial\Omega/\partial T \right)_{\mu}$ \cite{Mulkerin2020}. The specific heat at constant volume is given by \begin{alignat}{1} C_V &= \left(\frac{\partial E}{\partial T}\right)_{V,N}. \end{alignat} For the specific heat at constant volume it is simplest to calculate the derivative of the energy with respect to temperature numerically: \begin{alignat}{1} C_V = \frac{E\left[\mu(T+\delta),T+\delta\right]-E\left(\mu(T-\delta),T-\delta\right]}{2\delta}, \end{alignat} and where we set $\delta=0.01T_F$ \cite{vanwyk2016}. We note that since the superfluid transition temperature predicted by the Thouless criterion is precisely zero in two dimensions~\cite{Hohenberg1966,loktev2001}, we do not consider the finite-temperature transition in this work. It is also important to note the limitations and benefits of the non-self-consistent $T$-matrix scheme. This $T$-matrix scheme is useful as it is possible to analytically continue the Green's function and directly obtain spectral functions: we do not rely on a numerically unsound procedure. The non-self-consistent $T$-matrix approximation is well defined and works well in the high temperature regime where the interaction strength effectively becomes weaker, in the tightly-bound limit where the binding energy $\epsilon_B \gg \varepsilon_{\rm F}$ and molecules are well-formed, or in the weakly-interacting limit where the binding energy is exponentially small. However, when the interactions between performed molecules are strong, such as in the strongly correlated regime and at sufficiently low temperatures, the chemical potential approaches the binding energy and we expect the non-self-consistent $T$-matrix theory to give incorrect results \cite{Matsumoto2014}. In this work we avoid this problem as we focus on the relatively high temperature regime (i.e., at temperatures larger than a characteristic BKT temperature of $\sim0.1T_F$.) \begin{figure}\centering{} \includegraphics[width=1.0\columnwidth]{Cv_temo} \caption{\label{fig:Cv_temo} (color online) (a) The chemical potential in units of the Fermi energy for binding energies $\varepsilon_B/\varepsilon_{\rm F}=0.01$ (black dotted), $0.1$ (purple dot-dashed), $0.3$ (blue dashed), and $0.75$ (red solid) and (b) the specific heat at constant volume in units of $C_0=Nk_{\rm B}$ as a function of reduced temperature for the same binding energies. The ideal Fermi gas specific heat predicted by Eq.~(\ref{eq:ideal}) is shown as the symbols.} \end{figure} \section{Results} \label{sec:results} \subsection{specific heat} We first consider the broad resonance limit and let $g_{\rm b}\rightarrow\infty$, i.e. 
$k_{\rm F}^2R_s=0$, in order to understand the general properties of the specific heat at constant volume. Figure~\ref{fig:Cv_temo}(a) shows the reduced chemical potential in units of the Fermi energy as a function of temperature, $T/T_{\rm F}$. We see that the temperature dependence of the chemical potential for each binding energy is non-trivial, and as we go towards the strongly-correlated and low temperature regime we see the chemical potential has a maximum value, indicating the tendency of a transition towards the superfluid state. In Fig.~\ref{fig:Cv_temo}(b) we plot the specific heat at constant volume in units of $C_0 = Nk_{\rm B} $, as a function of temperature from the weakly-attractive BCS side to the strongly-correlated regime. We see that for the weakest binding energy, $\varepsilon_B/\varepsilon_{\rm F}=0.01$, the specific heat is reduced to the ideal Fermi gas specific heat at constant volume: \begin{alignat}{1}\label{eq:ideal} C_V^F = 2\frac{{\rm Li}_{2}\left(-e^{\beta\mu}\right)}{{\rm Li}_{1}\left(-e^{\beta\mu}\right)} - \frac{{\rm Li}_{1}\left(-e^{\beta\mu}\right)}{{\rm Li}_{0}\left(-e^{\beta\mu}\right)}. \end{alignat} In the high temperature limit (i.e., $T>T_F$), the specific heat for all interactions is approaching $C_V=Nk_B$. In the relatively high temperature regime (i.e., $T\sim0.3T_F$), the specific heat is enhanced compared to the ideal gas result, and typically exhibits a peak structure. As we move from the weakly-attractive regime to the strongly-coupled regime, the enhancement or peak first increases and then decreases. As we shall discuss in greater detail below, this enhancement connects to the many-body pseudogap pairing. Before doing so, let us briefly review the DoS, which provides a conventional characterization of the pseudogap regime. \subsection{Density of states} \begin{figure}\centering{} \includegraphics[width=1.0\columnwidth]{DoS_03} \caption{\label{fig:Dos} (color online) The density of states is plotted as a function of frequency in units of $\rho_0=m/\pi$ for an interaction strength of $\varepsilon_B/\varepsilon_{\rm F}=0.3$ and a range of temperatures. } \end{figure} Indeed, it has been discussed in a range of works that in both two and three dimensions the DoS can be used to find the pseudogap formation temperature \cite{Mueller2017}. In Fig.~\ref{fig:Dos} we plot the DoS at the interaction strength $\varepsilon_B/\varepsilon_{\rm F}=0.3$ for a range of temperatures, normalized by the ideal density of states, $\rho_0=m/\pi$, and showing the evolution of the suppression, or dip, near zero frequency with respect to the chemical potential. It is readily seen that, as the temperature reduces the suppression in the density of states increases. In this work we choose to take the pseudogap formation temperature $T^{*}$ when there is a significant dip near the Fermi surface \cite{Bauer2014}, that is, when the lowest value near $\omega/\varepsilon_{\rm F}\simeq0$ is 25\% lower than the left peak value. In this way, we can approach the BKT transition temperature in the weakly interacting regime, and as the temperature is lowered the system will move directly from a normal Fermi liquid to a superfluid. \begin{figure}\centering{} \includegraphics[width=1.0\columnwidth]{mixed_Cv}% \caption{\label{fig:mixed_Cv} (color online) (a) The specific heat at constant volume in units of $C_0=Nk_{\rm B}$ as a function of binding energy $\varepsilon_B/\varepsilon_{\rm F}$. The Fermi and Bose ideal limits are shown square and circular symbols, respectively. 
(b) The fluctuation contribution to the number equation, $N_{\rm fluc}=-\partial \Omega_{\rm NSR}/\partial \mu$ (red dot-dashed) and twice the number $N_B$ of stable molecules (blue solid). (c) Phase diagram of the 2D Fermi gas as function of binding energy and reduced temperature. Crossover to many-body pairing (PG) from the the normal Fermi gas (NF) found from $C_V$ is given by $\tilde{T}$ (red dot-dashed). $T^*$ (black dashed) is the pseudogap formation temperature found from the density of states. $T_c$ (blue solid) defines the BKT transition to a superfluid (SF) and is given by the Gaussian pair fluctuation theory in Ref.~\cite{Mulkerin2017}. The temperature $T_2$ where $\mu(T_2)=0$ (purple dashed) is the crossover temperature towards a two-body dominated regime.} \end{figure} \subsection{Phase diagram} To understand the enhancement of the specific heat at constant volume in the relatively high temperature regime we plot in Fig.~\ref{fig:mixed_Cv}(a) $C_V$ as a function of binding energy at a fixed temperature $T/T_{\rm F}=0.3$. We see there is a clear enhancement of $C_V$ peaked at binding energy $\varepsilon_B/\varepsilon_{\rm F}\simeq0.3$, indicating that in this regime there are high-temperature many-body Cooper pairs forming. The specific heat smoothly evolves from an ideal Fermi gas $C_V^F$ on the weakly attractive BCS side to an ideal Bose gas $C_V^B$ of mass $2M$ and density $N/2$ on the strongly attractive BEC side. Here, the ideal Bose gas specific heat at constant volume takes the same form as in Eq.~\eqref{eq:ideal} \cite{Robert1964}; however, the chemical potential is determined using the number equation for an equivalent Bose system with mass $2M$ and density $N/2$. To see how these many-body pairs arise, we plot in Fig.~\ref{fig:mixed_Cv}(b) the fluctuation contribution to the total number density, $N_{\rm fluc} = -\partial\Omega_{\rm NSR}/\partial \mu$, as a function of the binding energy (red dot-dashed line), where \begin{alignat}{1} \Omega_{\rm NSR} = -\frac{1}{\pi}\sum_{\mathbf{q}} \int_{-\infty}^{\infty} \frac{d\omega}{e^{\beta\omega}-1}\delta(\mathbf{q},\omega), \end{alignat} and $\delta(\mathbf{q},\omega)\equiv-{\rm Im}\ln[-\Gamma^{-1}(\mathbf{q},\omega+i0^{+})]$. The contribution of $N_{\rm fluc}$ to the total density can be thought of as renormalized Cooper-pair fluctuation and can be broken into contributions from metastable pairs and scattered states \cite{Ohashi2002,Massignan2008PRA}. In particular, if there is no Fermi surface and the chemical potential is negative, i.e. $\mu<0$, it is possible to divide the fluctuation contribution into twice the number of stable molecules $N_B$ (blue solid line) and of scattered states $N_{\rm sc}$ (not shown in the figure) \cite{Ohashi2002,vanwyk2016}. We plot in Fig.~\ref{fig:mixed_Cv}(b) twice the number of stable molecules $N_B$ for binding energies greater than $\varepsilon_B/\varepsilon_F>0.5$, where $N_B$ can be calculated from the bound state contribution \footnote{See the appendix of Ref.~\cite{vanwyk2016} for retails on the calculation of $N_B$.}. For binding energies below this value the chemical potential is positive and the stable molecule formulation is unphysical. Thus, it is clear that the contribution of pairs below binding energies $\varepsilon_B/\varepsilon_F\sim0.5$ should be from many-body pairing and gives rise to the enhancement of the specific heat at constant volume $C_V$. 
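For reference, the ideal-gas baselines shown as symbols in the figures follow directly from Eq.~\eqref{eq:ideal}; the short sketch below evaluates the Fermi baseline with \texttt{mpmath} polylogarithms (an assumed dependency), using the closed-form 2D result $\beta\mu=\ln(e^{T_{\rm F}/T}-1)$, and the Bose baseline follows analogously from the fugacity of the equivalent mass-$2M$, density-$N/2$ Bose gas.
\begin{verbatim}
import numpy as np
from mpmath import polylog

# Minimal sketch (assumes mpmath is available): the ideal 2D Fermi gas
# baseline C_V/(N k_B) from the polylogarithm formula above, evaluated
# at fixed density via the closed form beta*mu = ln(exp(T_F/T) - 1).

def cv_ideal_fermi(t):
    """Ideal 2D Fermi gas C_V/(N k_B) at reduced temperature t = T/T_F."""
    x = -np.expm1(1.0 / t)        # x = -exp(beta*mu)
    li0, li1, li2 = (float(polylog(s, x)) for s in (0, 1, 2))
    return 2.0 * li2 / li1 - li1 / li0

if __name__ == "__main__":
    for t in (0.25, 0.5, 1.0, 4.0):
        print(t, cv_ideal_fermi(t))   # tends to the classical value 1 at high t
\end{verbatim}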
Following the idea of Ref.~\cite{vanwyk2016} we take the minimal value of $C_V(T/T_{\rm F})$ as a characteristic transition temperature, $\tilde{T}$, between the normal Fermi gas (NF) and a many-body paired system, i.e. the pseudogap regime (PG). This value signifies the deviation from the ideal $C_V$ in the weakly attractive regime, and breaks down as we approach strongly attractive interactions, and can be seen as the minimum value in Fig.~\ref{fig:Cv_temo}, for temperatures above where the chemical potential is unphysically tending towards the binding energy. This is not a true transition temperature to the pseudogap regime but a characteristic transition. We plot a phase diagram in Fig.~\ref{fig:mixed_Cv}(c) showing the crossover temperature to the pseudogap regime defined by $\tilde{T}$ (red dot-dashed) and $T^*$ (black dotted), and the BKT transition temperature $T_c$ to a superfluid (SF) found by the Gaussian pair fluctuation theory in Ref.~\cite{Mulkerin2017} (blue solid). We show also the crossover line to a regime dominated by two-body physics by the curve $\mu(T_2)=0$ (purple dashed). This line bounds the pseudogap regime as we increase the binding energy. All together, the three lines of the characteristic temperatures, $\tilde{T}$, $T_c$ and $T_2$, enclose a pseudogap regime. We note that the calculation of $\mu(T_2)=0$ is stopped for temperatures below $T/T_{\rm F}=0.2$ due to the break-down of the NSR and $T$-matrix schemes. \subsection{Effective range dependence} \begin{figure}\centering{} \includegraphics[width=1.0\columnwidth]{Cv_fixedT} \caption{\label{fig:Cv_fixedT} (color online) The specific heat at constant volume in units of $C_0=Nk_{\rm B}$ as a function of interaction strength $\varepsilon_b/\varepsilon_{\rm F}$ for negative effective ranges: $k_{\rm F}^2R_s=0$ (black dotted), $k_{\rm F}^2R_s=-0.5$ (purple dot-dashed), $k_{\rm F}^2R_s=-1$ (blue dashed), $k_{\rm F}^2R_s=-1.5$ (red solid), and $k_{\rm F}^2R_s=-3$ (green dot-dot-dashed), for temperatures (a) $T/T_{\rm F}=0.25$, (b) $T/T_{\rm F}=0.5$, (c) $T/T_{\rm F}=1.0$. } \end{figure} We now move to consider the confinement-induced effective range dependence of the specific heat at constant volume and of the pseudogap formation temperature. We show $C_V$ in Fig.~\ref{fig:Cv_fixedT} as a function of binding energy, $\varepsilon_B/\varepsilon_{\rm F}$ for a negative effective ranges $k_{\rm F}^2R_s=0$ to $-3$ and temperatures (a) $T/T_{\rm F}=0.25$, (b) $T/T_{\rm F}=0.5$, (c) $T/T_{\rm F}=1.0$. The behavior of $C_V$ as a function of decreasing effective range is non-trivial: we find that the enhancement in the middle interaction regime (around $\varepsilon_B/\varepsilon_{\rm F}\simeq0.3$) dampens for each temperature, as the negative effective range decreases. This is most likely due to the system more readily forming bound molecules with decreasing negative effective range. For increasing temperature the peak value is also decreasing, and this is to be expected, as for higher temperatures the role of many-body pairing decreases. We also see that the peak value shifts to larger binding energies at high temperatures as the effective range decreases, due to a non-trivial competition of pair formation with decreasing effective range and high temperatures. Furthermore, in the weakly attractive ($\varepsilon_B/\varepsilon_{\rm F}<0.1$) and tightly bound ($\varepsilon_B/\varepsilon_{\rm F}>5$) limits, the specific heat at constant volume more slowly approaches the ideal gas limits, as the effective range decreases. 
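The extraction of $\tilde{T}$ itself reduces to a one-dimensional minimization of $C_V(T)$ built from the central difference above; the sketch below shows that step, where \texttt{energy} and \texttt{mu\_of\_T} are placeholders for the NSR energy and the converged chemical potential of Sec.~\ref{sec:hamil} rather than actual implementations.
\begin{verbatim}
import numpy as np

# Minimal sketch of how T-tilde is extracted.  Here energy(mu, T) and
# mu_of_T(T) are placeholders for the NSR energy and the converged
# chemical potential; names, grids and delta are illustrative.

DELTA = 0.01   # delta/T_F used in the central difference

def specific_heat(T, energy, mu_of_T):
    """C_V from the central difference of E[mu(T), T] at fixed N."""
    return (energy(mu_of_T(T + DELTA), T + DELTA)
            - energy(mu_of_T(T - DELTA), T - DELTA)) / (2.0 * DELTA)

def pseudogap_temperature(T_grid, energy, mu_of_T):
    """T-tilde: the temperature where C_V(T) attains its minimum."""
    cv = np.array([specific_heat(T, energy, mu_of_T) for T in T_grid])
    return T_grid[np.argmin(cv)], cv
\end{verbatim}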
Following the same method to define a pseudogap transition temperature $\tilde{T}$ as in Fig.~\ref{fig:mixed_Cv}, we calculate the effective range dependence of the pseudogap formation and report this main result of our work in Fig.~\ref{fig:maxEb}. The effective ranges are $k^2_{\rm F}R_s=0$ (black dot-dashed), $k^2_{\rm F}R_s=-0.5$ (purple dashed) , $k^2_{\rm F}R_s=-1$ (blue dotted), and $k^2_{\rm F}R_s=-2$ (red solid). We also plot the crossover temperature $T_2$ to a molecule dominated system defined by $\mu(T_2)=0$ using different symbols but the same color for each effective range. The effective range shifts the pseudogap region to weaker binding energies. This is due to the fact that the system more readily forms molecular states with decreasing effective range and increasing binding energy. The interaction window where the pseudogap regime exists remains approximately the same size, however for the smallest effective range ($k_{\rm F}^2R_s=-2$) in the figure, the pseudogap formation temperature is still large for weak interactions. This effect can also be seen in Fig.~\ref{fig:Cv_fixedT}(a), where for decreasing effective range and binding energy, $C_V$ is more slowly approaching the ideal gas result. \begin{figure}\centering{} \includegraphics[width=1.0\columnwidth]{maxEb} \caption{\label{fig:maxEb} (color online) The pseudogap formation temperature $\tilde T$ found from the specific heat for a range of negative effective ranges for $k_{\rm F}^2R_s=0$ (black dot-dashed), $k_{\rm F}^2R_s=-0.5$ (purple dashed), $k_{\rm F}^2R_s=-1$ (blue dotted), and $k_{\rm F}^2R_s=-2$ (red solid). We also show the characteristic temperature $T_2$ defined by $\mu(T_2)=0$ using different symbols. At the same effective range, the color is same for lines ($\tilde T$) and symbols ($T_2$).} \end{figure} \section{Comparison to the experiment} \label{comp} In this section we outline how to compare our two-channel calculations to the recent experimental observations of Murthy \textit{et al.} in Ref.~\cite{Murthy2018}, with and without the confinement-induced effective range. For this purpose, we include the effect of an inhomogeneous trap through the local density approximation (LDA), $\mu(\mathbf{r}) = \mu_g - \frac{1}{2}M\omega^2 \mathbf{r}^2$, where $\mu_g$ is the global chemical potential, $\omega$ is the trap frequency, and $\mathbf{r}$ is the distance from the center of the trap. We denote the dimensionless radii as $\tilde{r}\equiv r/R_{{\rm TF}}$, $R_{{\rm TF}}^{2}=2k_{{\rm B}}T_{{\rm F}}/(m\omega^{2})$ is the Thomas-Fermi radius for a zero-temperature non-interacting trapped Fermi gas, and the trap Fermi energy $E_{\rm F} = \left(2 N\right)^{1/2}\omega$. We find the global chemical potential by enforcing that the total number of atoms satisfies $N=\int d\mathbf{r} n(\mathbf{r})$, where \begin{alignat}{1} n(\mathbf{r}) = 2k_BT \int\frac{d\mathbf{k}}{(2\pi)^2} d\omega A(\mathbf{k},\mathbf{r},\omega) n_{\rm F}(\omega), \end{alignat} the Fermi distribution is $n_{\rm F}(\omega) = 1/\left(1+e^{-\beta\omega} \right)$, and $A(\mathbf{k},\mathbf{r},\omega)=(-1/\pi) \, {\rm Im} G(\mathbf{k},\mathbf{r},\omega+i0^+)$ is the spectral function found from the trap dependent Green's function. The inhomogeneous trap means we have trap dependent temperature $T/T_{\rm F}(\mathbf{r})$ and interaction $\ln[k_{\rm F}(\mathbf{r})a_{2D}]$. The experiment in Ref.~\cite{Murthy2018} measures the local spectral response of a trapped 2D Fermi gas through radio-frequency (rf) spectroscopy. 
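The trap average entering this comparison is the LDA normalization $N=\int d\mathbf{r}\, n(\mathbf{r})$ written above; a minimal sketch of that step is given below, with the ideal 2D Fermi gas density standing in for the $T$-matrix density so that the example is self-contained, and with illustrative oscillator units $\hbar=k_B=M=\omega=1$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# Minimal LDA sketch (illustrative assumptions): the homogeneous density
# below is the ideal 2D Fermi gas, standing in for the T-matrix density.
# Oscillator units hbar = k_B = M = omega = 1, so mu(r) = mu_g - r^2/2.

def n_homogeneous(mu, T):
    return (T / np.pi) * np.logaddexp(0.0, mu / T)

def total_number(mu_g, T, n_hom=n_homogeneous, r_max=50.0):
    """N = int d^2r n(mu_g - r^2/2, T) within the LDA."""
    integrand = lambda r: 2.0 * np.pi * r * n_hom(mu_g - 0.5 * r**2, T)
    return quad(integrand, 0.0, r_max)[0]

def global_mu(N, T, **kw):
    """Adjust mu_g so the trap integral reproduces the atom number N."""
    return brentq(lambda mu_g: total_number(mu_g, T, **kw) - N, -50.0, 500.0)

if __name__ == "__main__":
    N = 1000.0
    for T in (8.0, 4.0, 2.0, 1.0):
        # approaches sqrt(N) ~ 31.6 at low T for this ideal-gas stand-in
        print(T, round(global_mu(N, T), 2))
\end{verbatim}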
Rf spectroscopy can give information about the properties of the system, by applying a short rf pulse to flip the spin states from an initial strongly interacting system into a weakly-interacting final state and then by measuring the number of transferred atoms. This can then be repeated for a range of detunings of the rf pulse and information about the single-particle properties can be measured. In order to compare the spectra found from the experiment, we calculate the trap dependent rf-spectra. When there is no final state interaction we can take the rf response to be \cite{Pietil2012,Marsiglio2015}: \begin{alignat}{1} I_{\rm rf}(\omega,\mathbf{r})=2\int\frac{d\mathbf{k}}{(2\pi)^{2}}f(\xi_{\mathbf{k},\mathbf{r}}-\omega)A(\mathbf{k},\xi_{\mathbf{k},\mathbf{r}}-\omega), \end{alignat} where $\xi_{\mathbf{k},\mathbf{r}} = \epsilon_{\mathbf{k}}-\mu(\mathbf{r})$. As a self-consistent check to our calculation of the rf spectra, we can calculate the number density, i.e. \begin{alignat}{1} N=\int_{-\infty}^{\infty} d\omega\int d\mathbf{r}I_{\rm rf}(\omega,\mathbf{r}). \end{alignat} To compare our two-channel results to the experimental local rf spectra we need to fix a realistic confinement-induced effective range. This can be done as follows. Using the experimentally measured values of the binding energy and Fermi energy we define the ratio $\varepsilon_B/\varepsilon_{\rm F}$ to obtain the dimensionless effective range for a given interaction. We require that the two-body $T$-matrix $T_{2B}(E^{+})$ and the quasi-2D scattering amplitude share the same pole (the same binding energy $\varepsilon_{B}$) \cite{Wu2019}. It is readily seen that the binding energy $\varepsilon_{B}=\kappa^{2}/M$ is related to the effective range $R_{s}$ by, \begin{equation} R_{s}=\frac{2\ln\left(\kappa a_{s}\right)}{\kappa^{2}},\label{eq:Rs} \end{equation} where the 2D scattering length $a_s$ is defined in Appendix A. Using the defined binding energy the dimensionless effective range $R_s/a_s^2$ and {\it central} effective range $k_{\rm F}^2R_s$ is then found. \begin{figure}\centering{} \includegraphics[width=1.0\columnwidth]{LDA_spectra_compare} \caption{\label{fig:spectra} (color online) Comparison of the local spectra from the $T$-matrix (solid lines), with a finite negative effective range (dashed-line), and experimental results of Ref.~\cite{Murthy2018} (symbols). (a) is for central binding energy $\varepsilon_B/\varepsilon_{\rm F}\simeq0.2$ and local temperature $T/T_{\rm F}=1.0$. (b) is from interaction strength $\varepsilon_B/\varepsilon_{\rm F}\simeq1.21$ and local temperature $T/T_{\rm F}=0.7$. The green dashed lines are the threshold energy and black dotted are the free energy, both determined experimentally.} \end{figure} In Fig.~\ref{fig:spectra} we compare the rf spectroscopy found from the $T$-matrix approximation and from Figs. 3(c) and 3(d) of Ref.~\cite{Murthy2018}. We have taken the experimental values of $\varepsilon_B=1.37$kHz in Fig.~\ref{fig:spectra}(a) and $9.31$kHz in Fig.~\ref{fig:spectra}(b), and local Fermi energies $\varepsilon_{\rm F}=6.56$kHz and $7.61$kHz at two fixed radii $\mathbf{r}$ as in the experiment, respectively. This defines a confinement-induced effective range of $R_s/a_{s}^2\simeq-0.2$ and $R_s/a_{s}^2\simeq-1.2$, respectively.These binding energies correspond to Feshbach resonances of 670G and 690G and we use the measured local trap temperatures of $T/T_{\rm F}=1.0$ and $T/T_{\rm F}=0.7$. 
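For completeness, the evaluation of $I_{\rm rf}(\omega,\mathbf{r})$ at a single trap position can be sketched as follows; the Lorentzian spectral function, grids, and function names are illustrative stand-ins rather than the $T$-matrix $A(\mathbf{k},\omega)$ used for Fig.~\ref{fig:spectra}.
\begin{verbatim}
import numpy as np

# Minimal sketch (illustrative): the rf response at a single trap position,
#   I_rf(w) = 2 int d^2k/(2pi)^2 f(xi_k - w) A(k, xi_k - w),
# on a radial momentum grid with M = 1.  The Lorentzian below is only a
# stand-in for the T-matrix spectral function A(k, w).

def fermi(x, T):
    return 1.0 / (1.0 + np.exp(np.clip(x / T, -60.0, 60.0)))

def spectral_stand_in(k, w, mu, gamma=0.05):
    xi = 0.5 * k**2 - mu
    return (gamma / np.pi) / ((w - xi) ** 2 + gamma**2)

def rf_spectrum(omega, mu, T, A=spectral_stand_in, k_max=8.0, nk=4000):
    k = np.linspace(1e-4, k_max, nk)
    dk = k[1] - k[0]
    xi = 0.5 * k**2 - mu
    out = np.empty_like(omega)
    for i, w in enumerate(omega):
        integrand = k * fermi(xi - w, T) * A(k, xi - w, mu)
        out[i] = np.sum(integrand) * dk / np.pi   # 2/(2 pi) = 1/pi
    return out

if __name__ == "__main__":
    w = np.linspace(-2.0, 2.0, 201)
    I = rf_spectrum(w, mu=1.0, T=0.5)
    print(w[np.argmax(I)])   # the free peak sits near w = 0 for this stand-in
\end{verbatim}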
Although there is a realistic effective range in the experiment, for comparison in Fig.~\ref{fig:spectra} we also show the theoretical predictions without effective range using red solid lines. Quite generally, there are two peaks in the spectra. The right peak, referred to as the pairing peak, comes from the signal of Cooper pairs. The left peak, referred to as the free peak, is contributed from free, unpaired atoms. In order to compare theoretical and experimental spectra, we normalize our spectra to have the same peak value for the pairing peak, and shift the peak to have its maximum at the same frequency. This shift may minimize the residual final-state effect, which is present in the experiment but is not captured by our theory. Firstly, the results at smaller binding energy in Fig.~\ref{fig:spectra}(a) match quite well for the whole spectra when there is \emph{no} finite effective range. Using the same fitting method in Ref.~\cite{Murthy2018} to determine the threshold energy, which is the energy required to break a pair, we find that the threshold and free-peak energies are similar to the experimental values. These experimental values are plotted in Fig.~\ref{fig:spectra}(a) using the vertical lines. The ratio of the difference of these energies to the binding energy indicates that we are in the pseudogap regime for this interaction strength and local temperature. When including the finite negative effective range, the agreement between theory and experiment becomes \emph{worse} and the free peak shifts to negative values of the rf frequency. This red shift is due to the chemical potential being slightly lower and the system more easily forming molecular pairs. For the strongly attractive regime in Fig.~\ref{fig:spectra}(b), we see the spectra match well for the {\it pairing} peak, but not for the {\it free} peak, which is strongly renormalized by the chemical potential. The threshold energy is then quite similar: there is closer agreement between the theoretical threshold energy with the finite effective range and the experimental threshold energy. If we take the ratio of the difference of the theoretically determined free and threshold energies to the binding energy we would find that for this interaction and temperature we are also in the pseudogap regime, which we would not expect. This is most likely due to the global chemical potential being negative for large interaction strengths, making the free peak shift to negative frequencies \cite{Barth2014}. It is well known that for a large negative chemical potential the Fermi surface is breaking down and two-body bound pairs can form for any binding energy and we are actually not in the pseudogap regime. In this regime the BCS pairing picture gives a fictitious pairing gap as the chemical potential is the gap~\cite{Mueller2017}. In the experiment in order to measure the free peak they introduce a population imbalance, which creates a broader free peak structure, we do not consider this imbalance in this work, as in the experiment it is only used as a tool to measure the pairing. Experimentally the free peak is then centered around zero rf frequency. The comparison of our theoretical results of the rf-spectra with and without effective range to the experimental data suggests that we can hardly follow the experimental procedure to reliably determine the pseudogap regime, by using the theoretically simulated rf-spectra. 
This is partly due to the fact that, for rf-spectra the many-body $T$-matrix becomes less accurate in the strongly correlated regime where $\varepsilon_B\sim\varepsilon_{\rm F}$. The comparison between theory and experiment is further complicated by the fact that, in the current treatment our theory fails to account for the final-state effect. Thus, at this stage it seems more reliable for us to theoretically determine the pseudogap regime using the specific heat at constant volume. \begin{figure}\centering{} \includegraphics[width=1.0\columnwidth]{phase_pseudo}% \caption{\label{fig:compare_pseudo} (color online) Pseudogap transition temperature phase diagram. The red dot-dashed curve is the specific heat prediction, purple dashed is the curve where the chemical potential becomes negative, the blue solid curve is the BKT transition temperature from the GPF calculation, which are in units of the homogeneous Fermi temperature and energy. } \end{figure} In Fig.~\ref{fig:compare_pseudo}, we re-plot the phase diagram for the pseudogap regime found from the specific heat at constant volume at zero effective range and compare it to the experimental result (see, i.e., Fig. 4(b) in Ref.~\cite{Murthy2018}). Here, we do not consider the effective range, since the effect of the effective range does not unambiguously show up in the rf-spectra as we have just discussed. From the figure, we see that the experimental result at $T\sim0.5T_F$ agrees well with the predicted pseudogap regime. Experimentally the confinement-induced effective range $k_{\rm F}^2R_s$ changes as a function of the binding energy and trap temperature, so it is difficult to have a defined effective range for the entire crossover regime. We would expect that not considering a finite negative effective range to be reasonable in the weakly interacting regime and as the binding energy increases we would expect the negative confinement-induced effective range to become more important and shift the upper and lower bounds of the pseudogap transition towards smaller binding energies. \section{Conclusions} \label{sec:conc} In summary, we have explored the pseudogap regime of a strongly interacting Fermi gas confined to two dimensions with and without a negative confinement-induced effective range. Using the specific heat at constant volume as a probe for high-temperature many-body pairing we have found that in two-dimensions it can be used to determine a good characteristic pseudogap formation temperature when compared to the traditional method of defining the pseudogap regime through a suppression in the density of states. We have seen that, as the effective range decreases, the pseudogap regime shifts to weaker binding energies as the system more preferentially forms pairs. By comparing our calculations to the recent experiment of Ref.~\cite{Murthy2018}, we have obtained good qualitative agreement. Plotting directly the measured in-trap radio-frequency spectra, we have shown our results match well the experimental data in the pseudogap regime, and in the strongly-correlated regime the differences can be understood. We have also shown that at high temperatures the many-body pairing regime experimentally defined through radio-frequency measurements fits well with the pseudogap regime theoretically determined from the specific heat at constant volume. 
However, under the current experimental conditions, it seems difficult to clearly discriminate the effect of the confinement-induced effective range in the radio-frequency spectra and on the pseudogap window, largely due to the limited theoretical accuracy of the radio-frequency spectra and the limited experimental resolution. \begin{acknowledgments} Our research was supported by the Australian Research Council's (ARC) Discovery Projects: DP140100637 and FT140100003 (XJL), FT130100815 and DP170104008 (HH). \end{acknowledgments}
1,314,259,992,750
arxiv
\section{Introduction} \label{sec:intro} \input{introduction.tex} \section{Surrogate optimization} \label{sec:background} \input{background.tex} \section{The asynchronous algorithm} \label{sec:async} \input{methods.tex} \section{POAP implementation} \label{sec:poap} \input{poap.tex} \section{\lowercase{py}SOT implementation} \label{sec:pysot} \input{pysot.tex} \section{Code examples} \label{sec:code} \input{code.tex} \section{Numerical experiments} \label{sec:experiments} \input{experiments.tex} \section{Conclusions} \label{sec:conclusions} \input{conclusions.tex} \section*{Acknowledgement} The authors appreciated support from NSF CISE 1116298 to Prof. Shoemaker and Bindel, and Prof. Shoemaker's start up grant from National University of Singapore. We also thank Dr. Taimoor Akhtar for his incorporation of multi-objective code into \texttt{pySOT}\xspace. \subsection{Experimental design} \label{sec:expdes} The simplest experimental design is choosing the $2^d$ corners of the hypercube $\mathcal{D}$, often referred to as the 2-factorial design, but this is infeasible when $d$ is large and the function is expensive. Two common alternatives are the Latin hypercube design (LHD) and the symmetric Latin hypercube design (SLHD), which allow an arbitrary number of design points. We deal with integer variables by rounding the generated design and generate a new experimental design if the resulting design is rank-deficient or if any two points coincide. This works well in practice, which is also reported in \cite{costa2014rbfopt} and \cite{muller2013so}. \subsection{Surrogate models} \label{sec:surr} The surrogate model is used to approximate the objective function. The surrogate model of choice in \texttt{pySOT}\xspace is radial basis functions (RBFs), but we also support Gaussian processes (GPs), support vector regression (SVR), multivariate adaptive regression splines (MARS), and polynomial regression. \subsubsection{Radial basis functions} \label{sec:rbf} RBF interpolation is one of the most popular approaches for approximating scattered data in a general number of dimensions \cite{buhmann2003radial,fasshauer2007meshfree,schaback2006kernel,wendland2004scattered}. Given a set of pairwise distinct interpolation points $X=\{x_i\}_{i=1}^n \subset \Omega$ the RBF model takes the form \begin{equation} \label{eq:rbf} s_{f,X}(x) = \sum_{i=1}^n \lambda_i \varphi(\|x - x_i\|) + p(x) \end{equation} where the kernel $\varphi : \mathbb{R}_{\geq 0} \to \mathbb{R}$ is a one-dimensional function and $p \in \Pi_{k-1}^d$, the space of polynomials with $d$ variables of degree no more than $k-1$. The name RBF comes from the fact that the function $\varphi(\cdot)$ is constant on spheres in $\mathbb{R}^d$. Common choices of kernels in surrogate optimization are the linear kernel $\varphi(r)=r$, the cubic kernel $\varphi(r)=r^3$, and the thin-plate spline $\varphi(r)=r^2\log(r)$. The coefficients $\lambda_i$ are determined by imposing the interpolation conditions $s_{f,X}(x_i) = f(x_i)$ for $i=1,\ldots,n$ and the discrete orthogonality condition \begin{equation} \label{eq:disc_orthg} \sum_{i=1}^n \lambda_i q(x_i) = 0, \qquad \forall q \in \Pi_{k-1}^d. 
\end{equation} If we let $\{\pi_i\}_{i=1}^{m}$ be a basis for the $m=\binom{k-1+d}{k-1}$-dimensional linear space $\Pi_{k-1}^d$, so we can write $p(x) = \sum_{i=1}^m c_i \pi_i(x)$, the interpolation conditions lead to the following linear system of equations \begin{equation} \label{eq:rbf_system} \begin{bmatrix} \Phi & P \\ P^T & 0 \end{bmatrix} \begin{bmatrix} \lambda \\ c \end{bmatrix}= \begin{bmatrix} f_X \\ 0 \end{bmatrix}, \end{equation} where $\Phi_{ij} = \varphi(\|x_i - x_j\|)$, $P_{ij} = \pi_j(x_i)$, and $f_X = [f(x_1), \ldots, f(x_n)]^T$. The solution to the linear system of equations is unique as long as $\text{rank}(P) = m$ and $k$ is at least the order of the kernel $\varphi$. The cubic and thin-plate spline kernels are both of order $k=2$, so a polynomial tail of at least degree 1 is necessary, which is what we use. A direct solver of the RBF system requires computing a dense LU factorization at a cost of $\mathcal{O}(n^3)$ flops. We can utilize the fact that we are adding a few points at a time, which allows incremental updates of an initial factorization in quadratic time. We first evaluate $n$ points such that $\text{rank}(P) = m$, which makes it possible to compute an initial LU factorization with pivoting \[ A = \begin{bmatrix} 0 & P^T \\ P & \Phi \end{bmatrix} = PL_{11}U_{11}, \] where we have reordered the blocks to make it more natural to add new points to the system. After adding the $k$ new points $\hat{X} = \{\hat{x}_i\}_{i=1}^{k}$ we want to find an LU factorization of the extended system \[ \hat{A} = \left[\begin{array}{cc|c} 0 & P^T & \hat{P}^T \\ P & \Phi & \hat{\Phi} \\ \hline \hat{P} & \hat{\Phi} & \hat{\varphi} \rule{0pt}{2.6ex}\\ \end{array}\right] := \begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \] where $\hat{\Phi}_{ij} = \varphi(\|x_i - \hat{x}_j\|)$, $\hat{P}_{ij} = \pi_j(\hat{x}_i)$, and $\hat{\varphi}_{ij} = \varphi(\|\hat{x}_i - \hat{x}_j\|)$. The fact that the trailing Schur complement is positive semi-definite allows us to look for a factorization of the form \[ \begin{bmatrix} A & B \\ B^T & C \end{bmatrix} = \begin{bmatrix} P & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix} \begin{bmatrix} U_{11} & U_{12} \\ 0 & L_{22}^T \end{bmatrix} = \begin{bmatrix} PL_{11}U_{11} & PL_{11}U_{12} \\ L_{21}U_{11} & L_{21}U_{12}+L_{22}L_{22}^T \end{bmatrix}. \] We need to solve the two triangular systems $B=PL_{11}U_{12}$ and $B^T=L_{21}U_{11}$ followed by computing a Cholesky factorization of $C-L_{21}U_{12}$. This allows us to update the factorization in $\mathcal{O}(kn^2)$ flops, which is better than computing a new LU factorization in $\mathcal{O}(n^3)$ flops. In practice, we add regularization to the system by using the kernel $\tilde{\varphi}(x_i,x_j) = \varphi(x_i,x_j) + \eta\delta_{ij}$, for some regularization parameter $\eta \geq 0$, which ensures that the trailing Schur complement is positive definite and that the system is well-conditioned. \subsubsection{Gaussian processes} \label{sec:gp} A Gaussian process (GP) is stochastic process where any finite number of random variables have a joint Gaussian distribution; see,~e.g.~\cite{rasmussen2006gaussian}. This defines a distribution over functions $f(x) \sim \mathcal{GP}(\mu(x),k(x,x'))$, where $\mu : \mathbb{R}^d \rightarrow \mathbb{R}$ is the mean function and $k : \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ is the covariance kernel. 
The GP model allows predicting the value and variance at any point, so it gives us an idea about the uncertainty of the prediction. The most popular kernel is the squared exponential kernel $k(x,y) = \exp(-0.5\|x-y\|^2/\ell^2)$, and other possibilities include the Mat\'ern kernels. For any points $X = \{x_1,\ldots,x_n\} \subset \mathbb{R}^d$, $f_X \sim \mathcal{N}(\mu_X, K_{XX})$ where $\mu_X$ denotes the vector values for $\mu$ evaluated at each of the $x_i$, and $(K_{XX})_{ij} =k(x_i, x_j)$. We assume we observe function values $y_X \in \mathbb{R}^n$, where each entry is contaminated by independent Gaussian noise with constant variance $\sigma^2$. Under a Gaussian process prior depending on the hyper-parameters $\theta$, the log marginal likelihood is given by \begin{equation} \label{eq:mloglik} \mathcal{L}(y_X \, | \, \theta) = -\frac{1}{2}\left[(y-\mu_X)^T\alpha + \log |\tilde{K}_{XX}| + n\log 2\pi\right] \end{equation} where $\alpha = \tilde{K}_{XX}^{-1}(y_X - \mu_X)$ and $\tilde{K}_{XX} = K_{XX} + \sigma^2 I$ ($\sigma=0$ for a deterministic $f(x)$). Optimization of (\ref{eq:mloglik}) is expensive, since direction computation of $\log |\tilde{K}_{XX}|$ involves computing a Cholesky factorization of $\tilde{K}_{XX}$. The iteration cost of $\mathcal{O}(n^3)$ quickly becomes significantly more expensive than using an RBF interpolant, even though both methods are based on kernel interpolation, and the dependency of the hyper-parameters stops us from updating a factorization when adding new points. Promising work on scalable approximate Gaussian process regression can decrease the kernel learning to $\mathcal{O}(n)$ \cite{wilson2015kernel,dong2017scalable}, but these ideas only work in low-dimensional spaces. The computational cost for computing the surrogate model should be compared to the computational cost of function evaluations as we want to spend most of the computational effort on doing function evaluations. \subsubsection{Other choices} RBFs and GPs are by far the two most popular surrogate models in computationally expensive optimization. We briefly mention some other possible surrogate models available in \texttt{pySOT}\xspace, even though they are not as frequently used. Multivariate adaptive regression splines (MARS) \cite{friedman1991multivariate}, are also weighted sums of basis functions $B_i(x)$, where each basis function is either constant and equal to 1, a hinge function of either the form $\max(0,x-c)$ or $\max(0,c-x)$ for some constant $c$, or a product of hinge functions. It is also possible to use polynomial regression or support vector regression (SVR). Multiple surrogate models can be combined into an ensemble surrogate and Dempster-Shafer theory can be used to decide how to weigh the different models \cite{muller2011mixture}. This is useful in situations where it is hard to know what surrogate model to choose for a specific problem. \cite{muller2014influence} indicated regression polynomial surrogate did not perform well by themselves on test problems, but they were sometimes helpful in combination with RBF surrogates. \subsection{Auxiliary problem} Evaluation of $f(x)$ is expensive, so we optimize an acquisition function $\alpha(x)$ involving the surrogate model and previously evaluated points to find the next point(s) to evaluate. We refer to the optimization of $\alpha$ as an auxiliary problem. 
This auxiliary problem must balance exploration and exploitation, where exploration emphasizes evaluating points far from previous evaluations to improve the surrogate model and escape local minima, while exploitation aims to improve promising solutions to make sure we make progress. The subsections below describe methods in \texttt{pySOT}\xspace to solve the auxiliary problem. \subsubsection{Candidate points} An acquisition function based on the weighted-distance merit function is introduced in \cite{regis2007stochastic} to balance exploration and exploitation. The main idea is to generate a set of candidate points $\Omega$ and use the merit function to pick the most promising candidate points. Exploration is achieved by giving preference to candidate points far from previous evaluations. More specifically, for each $x\in\Omega$ we let $\Delta(x)$ be the distance from $x$ to the point closest to $x$ that is currently being or has been evaluated. By defining $\Delta^{\max} = \max\{\Delta(x) : x \in \Omega\}$ and $\Delta^{\min} = \min\{\Delta(x) : x \in \Omega\}$ a good measure of exploration is a small value of $V^D(x) = \frac{\Delta^{\max} - \Delta(x)}{\Delta^{\max}-\Delta^{\min}}$, where $0 \leq V^D(x) \leq 1$ for all $x \in \Omega$. Exploitation is achieved through the surrogate model $s(x)$, where a small value of the quantity $V^S(x) = \frac{s(x) - s^{\min}}{s^{\max}-s^{\min}}$ provides a measure of exploitation, with $s^{\max} = \max\{s(x) : x \in \Omega\}$ and $s^{\min} = \min\{s(x) : x \in \Omega\}$. The best candidate point is the minimizer of the acquisition function $wV^S(x) + (1-w)V^D(x)$ for a given weight $w \in [0,1]$. The weight $w$ thus serves as a balance between exploitation and exploration: a weight close to 0 emphasizes exploration while a weight close to 1 emphasizes exploitation. Algorithm \ref{alg:cand_points} shows how to select the most promising candidate point. \begin{algorithm} \caption{Candidate point selection} \label{alg:cand_points} \begin{algorithmic}[1] \State Compute $s^{\max} \leftarrow \max\limits_{x \in \Omega} \,\,s(x)$ and $s^{\min} \leftarrow \min\limits_{x \in \Omega} \,\,s(x)$ \For{each $x \in \Omega$} \State $V^S(x)\leftarrow \begin{cases} \frac{s(x) - s^{\min}}{s^{\max}-s^{\min}} &\text{ if } s^{\max} > s^{\min} \\ 1 &\text{ otherwise } \end{cases}$ \EndFor \For{each $x \in \Omega$} \State $\Delta(x) \leftarrow \min\limits_{y \in \mathcal{A}} d(x,y)$ \EndFor \State Compute $\Delta^{\max} \leftarrow \max\limits_{x \in \Omega} \,\,\Delta(x)$ and $\Delta^{\min} \leftarrow \min\limits_{x \in \Omega} \,\,\Delta(x)$ \For{each $x \in \Omega$} \State $V^D(x)\leftarrow \begin{cases} \frac{\Delta^{\max} - \Delta(x)}{\Delta^{\max}-\Delta^{\min}} &\text{ if } \Delta^{\max}>\Delta^{\min} \\ 1 &\text{ otherwise } \end{cases}$ \EndFor \Return $\mathop{\mathrm{argmin}}\limits_{x \in \Omega} \,\,\, wV^S(x) + (1-w)V^D(x)$ \end{algorithmic} \end{algorithm} The LMS-RBF method \cite{regis2007stochastic} is useful for low-dimensional optimization problems. Given a sampling radius $\sigma$, the candidate points are generated as $\mathcal{N}(0,\sigma^2)$ perturbations along each coordinate direction from the best solution \cite{regis2007stochastic}. Large values of the sampling radius will generate candidate points far away from the best solution while smaller values of the sampling radius will generate candidate points that are close to the best solution. We defer a description of how the sampling radius is updated to the next section.
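The selection rule of Algorithm \ref{alg:cand_points}, combined with LMS-RBF style candidate generation, can be sketched in a few lines of NumPy (an illustrative sketch, not the \texttt{pySOT}\xspace implementation; \texttt{surrogate} is assumed to be a vectorized prediction function and \texttt{evaluated} the array of evaluated and pending points):
\begin{verbatim}
import numpy as np

def select_candidate(x_best, sigma, lb, ub, surrogate, evaluated, w, num_cand=100):
    # Generate N(0, sigma^2) perturbations of the best point, clipped to [lb, ub].
    d = len(x_best)
    cand = np.clip(x_best + sigma * np.random.randn(num_cand, d), lb, ub)

    s = surrogate(cand)                                   # surrogate values s(x)
    V_S = ((s - s.min()) / (s.max() - s.min())
           if s.max() > s.min() else np.ones(num_cand))

    # Distance to the closest evaluated or pending point.
    delta = np.linalg.norm(cand[:, None, :] - evaluated[None, :, :],
                           axis=-1).min(axis=1)
    V_D = ((delta.max() - delta) / (delta.max() - delta.min())
           if delta.max() > delta.min() else np.ones(num_cand))

    # Weighted-distance merit function from Algorithm 1.
    return cand[np.argmin(w * V_S + (1 - w) * V_D)]
\end{verbatim}
Including pending evaluations in \texttt{evaluated} means that the distance term also steers new candidates away from points that are currently being evaluated.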
If $\sigma$ is smaller than $1$ for an integer variable, $\sigma=1$ is used to ensure that this variable is also perturbed \cite{muller2013so}. The DYCORS method \cite{regis2013combining} was developed for high-dimensional problems and the idea is to start by perturbing most coordinates and perturb fewer dimensions towards the end of the optimization run \cite{regis2013combining}. This is achieved by assigning a probability to perturb each dimension. If $n_0$ points are used in the experimental design and the evaluation budget is given by $N_{\max}$, each coordinate is perturbed with probability $p_{\text{perturb}}(n)$ for $n_0 \leq n \leq N_{\max}$. The probability function used in \texttt{pySOT}\xspace is the one introduced in \cite{regis2013combining}, which is $p_{\text{perturb}}(n)=\min\left(\frac{20}{d},1\right)\times \left[1-\frac{\log(n-n_0)}{\log(N_{\max}-n_0)}\right]$. In \texttt{pySOT}\xspace it is also possible to choose candidate points uniformly from $\mathcal{D}$ and use the merit function $wV^S(x) + (1-w)V^D(x)$ to pick the most promising points, which contrasts with the previous two methods by not making local perturbations around the current best solution. This helps diversify the set of evaluated points, but the results reported in \cite{regis2007stochastic} for this approach (GMSRBF) were not promising. \subsubsection{Acquisition functions in Bayesian optimization} Gaussian processes allow us to use acquisition functions that take the prediction variance into account. A popular choice is the probability of improvement, which takes the form \begin{equation} \text{PI}(x) = P(f(x) \leq f(x^{+}) - \xi) = \Phi\left(\frac{f(x^{+}) - \mu(x) - \xi}{\sigma(x)}\right) \end{equation} where $\xi$ is a trade-off parameter that balances exploration and exploitation. With $\xi=0$, probability of improvement does pure exploitation. A common choice is to start with a large $\xi$ and lower $\xi$ towards the end of the optimization run \cite{brochu2010tutorial}. Expected improvement is likely the most widely used acquisition function in Bayesian optimization, where the main idea is choosing the point that gives us the largest expected improvement. Mockus defined improvement as the function \begin{equation} I(x) = \max\{0, f(x^{+}) - f_{n+1}(x)\}, \end{equation} which can be evaluated analytically under a Gaussian process posterior and \citet{jones1998efficient} shows that \begin{equation} \text{EI}(x) = \begin{cases} (f(x^{+}) - \mu(x))\Phi(Z) + \sigma(x)\varphi(Z) & \text{if } \sigma(x) > 0 \\ 0 & \text{if } \sigma(x) = 0 \end{cases} \end{equation} where $Z = (f(x^{+}) - \mu(x))/\sigma(x)$. We can in a similar fashion add a trade-off parameter $\xi$ in which case the expected improvement takes the form \begin{equation} \text{EI}(x) = \begin{cases} (f(x^{+}) - \mu(x) - \xi)\Phi(Z) + \sigma(x)\varphi(Z) & \text{if } \sigma(x) > 0 \\ 0 & \text{if } \sigma(x) = 0 \end{cases} \end{equation} where $Z = (f(x^{+})-\mu(x)-\xi)/\sigma(x)$. Another option that has been proposed is the lower confidence bound (LCB) \begin{equation} \text{LCB}(x) = \mu(x) - \kappa \sigma(x), \end{equation} where the choice of $\kappa$ is left to the user. \subsubsection{Other choices} Selecting the point that minimizes the bumpiness of a radial basis function, a concept introduced by \citet{gutmann2001radial}, is supported in other software packages such as RBFOpt \cite{costa2014rbfopt}.
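As a concrete illustration of the expected-improvement expressions above, the following sketch evaluates $\text{EI}(x)$ with the trade-off parameter $\xi$ (assuming SciPy; \texttt{mu} and \texttt{sigma} are the GP posterior mean and standard deviation and \texttt{f\_best} is $f(x^{+})$):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    # EI for minimization: (f(x+) - mu - xi)*Phi(Z) + sigma*phi(Z), and 0 if sigma = 0.
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    improve = f_best - mu - xi
    with np.errstate(divide="ignore", invalid="ignore"):
        Z = improve / sigma
        ei = improve * norm.cdf(Z) + sigma * norm.pdf(Z)
    return np.where(sigma > 0, ei, 0.0)
\end{verbatim}
The probability of improvement and the lower confidence bound can be computed analogously from \texttt{mu} and \texttt{sigma}.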
The knowledge gradient acquisition function introduced by \citet{frazier2008knowledge} is implemented in Cornell-MOE \footnote{\url{https://github.com/wujian16/Cornell-MOE}}. \subsection{Problem Statement} We consider the global optimization problem \begin{equation} \label{eq:globprob} \begin{array}{lll} \text{minimize} & f(x) \\ \text{subject to} & x \in \mathcal{D} \cap (\mathbb{Z}^q \times \mathbb{R}^{d-q}) \end{array} \end{equation} where $f: \mathbb{Z}^q \times \mathbb{R}^{d-q} \to \mathbb{R}$ is a computationally expensive black-box function. We assume in addition that $\mathcal{D}$ is a compact hypercube and that $f(x)$ is a continuous function over the continuous variables. In our setting, $f(x)$ is non-linear, has multiple local minima, and the gradient of $f(x)$ is not available. Computationally expensive refers to any problem where a single function evaluation takes anywhere between a few minutes and many hours. Common examples include running an expensive simulation model of a complex physical process and tuning machine learning models \cite{snoek2012practical}. It is common to have limited time and evaluation budgets due to the significant amount of time necessary for each function evaluation, making it challenging to find a good solution to (\ref{eq:globprob}) in the case when $f$ is multimodal. \subsection{Survey of Methods} Many popular algorithms for black-box optimization are not suitable when the function evaluations are computationally expensive. Derivative based methods are appealing in cases when gradient information can be obtained cheaply, in which case it is possible to run a local optimizer such as Newton's method or BFGS \cite{avriel2003nonlinear} with a multi-start strategy. Finite differences can be used when gradient information is unavailable, but this is very computationally expensive since $f(x)$ is expensive, and imprecision in the simulation model often leads to inaccurate gradient estimates. Several popular derivative free optimization (DFO) methods exist for local optimization such as pattern search \cite{hooke1961direct}, Nelder-Mead \cite{nelder1965simplex}, and ORBIT \cite{wild2011global}, but these methods are not good choices for multimodal problems. Global heuristic optimization methods such as genetic algorithms \cite{goldberg2006genetic}, particle swarm optimization \cite{kennedy2010particle}, and differential evolution \cite{storn1997differential}, generally require a large number of function evaluations and are not practical for computationally expensive objective functions. A successful family of algorithms for computationally expensive optimization consists of methods based on surrogate models. The surrogate model approximates the objective function and helps accelerate convergence to a good solution. Popular choices are methods based on radial basis functions (RBFs) such as \cite{regis2007stochastic,regis2013combining,gutmann2001radial} and Kriging and Gaussian process (GP) based methods such as \cite{jones2001taxonomy,jones1998efficient,frazier2008knowledge}. Other possible surrogate models are polynomial regression models and multivariate adaptive regression splines \cite{friedman1991multivariate,muller2014influence}. Most surrogate optimization algorithms start by evaluating an experimental design that is used to fit the initial surrogate model.
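In outline, the serial version of such a surrogate optimization method can be sketched as follows (a schematic sketch with hypothetical \texttt{fit} and \texttt{propose} callbacks; \texttt{pySOT}\xspace organizes the same steps around strategies, controllers, and workers):
\begin{verbatim}
import numpy as np

def surrogate_optimize(f, lb, ub, n_init, max_evals, fit, propose):
    d = len(lb)
    # Experimental design: space-filling initial points (uniform random here).
    X = lb + (ub - lb) * np.random.rand(n_init, d)
    fX = np.array([f(x) for x in X])
    # Adaptive phase: fit the surrogate, solve the auxiliary problem, evaluate.
    while len(fX) < max_evals:
        surrogate = fit(X, fX)              # e.g., an RBF or GP model
        x_new = propose(surrogate, X, fX)   # e.g., candidate points or EI
        X = np.vstack([X, x_new])
        fX = np.append(fX, f(x_new))
    i = int(np.argmin(fX))
    return X[i], fX[i]
\end{verbatim}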
What follows is an adaptive phase where an auxiliary problem is solved to pick the next sample point(s), and this phase continues until either a restart or a stopping criterion has been met. We can avoid getting trapped in a local minimum by using an auxiliary problem that provides a good balance of exploration and exploitation. Several parallel algorithms have been developed for computationally expensive black-box optimization. Regis and Shoemaker \cite{regis2009parallel} developed a synchronous parallel surrogate optimization algorithm based on RBFs and this idea was later extended to SOP algorithm for large number of processors \cite{krityakierne2016sop}. In both algorithms, it is assumed that (i) the resources are homogeneous and (ii) the evaluation time is constant. The first assumption does not hold for heterogeneous parallel computing platforms and the second assumption is unlikely to hold in cases where the complexity of evaluating the objective depends spatially on the input. The first assumption can almost always be assessed before the start of the optimization run while the second assumption may not be easy to assess in practice. Another limitation of the work in \cite{regis2009parallel} is that the algorithm does not handle the possibility of worker failures and crashed evaluations. Being able to handle failures is critical in order to run the algorithm on large-scale systems. The natural way of dealing with cases where (i) or (ii) are violated is to launch function evaluations asynchronously, which is illustrated in Figure \ref{fig:async_vs_sync} to eliminate idle time. \begin{wrapfigure}{R}{0.5\textwidth} \includegraphics[width=0.48\textwidth]{sync-vs-async.eps} \caption{\textit{Synchronous vs asynchronous parallel.}} \label{fig:async_vs_sync} \end{wrapfigure} \subsection{Survey of Software} A library with similar functionality to \texttt{POAP}\xspace is \texttt{SCOOP} \cite{hold2014once}, a Python based library for distributing concurrent tasks while internally handling the communication. \texttt{POAP}\xspace provides similar functionality for global optimization problems and also handles all of the communication internally, which makes it easy to implement asynchronous optimization algorithms. \texttt{HOPSPACK} (Hybrid Optimization Parallel Search PACKage) \cite{plantenga2009hopspack} is a C\texttt{++}{} framework for derivative-free optimization problems. \texttt{HOPSPACK} supports parallelism through MPI or multi-threading and supports running multiple optimization solvers simultaneously, a functionality similar to combining strategies in \texttt{POAP}\xspace. The framework implements an asynchronous pattern search solver and supports non-linear constraints and mixed-integer variables, but there is no support for surrogate optimization. \texttt{MATSuMoTo} (\textsc{Matlab}{} Surrogate Model Toolbox) \cite{mueller2014matsumoto} is an example of a surrogate global optimization toolbox. \texttt{MATSuMoTo} is written in \textsc{Matlab}{} and has support for computationally expensive, black-box global optimization problems that may have continuous, mixed-integer, or pure integer variables. \texttt{MATSuMoTo} offers a variety of choices for surrogate models and surrogate model mixtures, experimental designs, and auxiliary functions. The framework is not designed to support a large class of surrogate optimization algorithms and the lack of object orientation makes it hard to extend the framework. 
Parallelism is only supported through \textsc{Matlab}{}'s Parallel Computing Toolbox and there is no support for asynchrony, combining strategies, or dynamically changing the number of workers. Furthermore, many large-scale systems do not support \textsc{Matlab}{}. Note that as of version 2018b, the \textsc{Matlab}{} Global Optimization Toolbox offers \texttt{surrogateopt} \footnote{\url{https://www.mathworks.com/help/gads/surrogateopt.html}}, which is an asynchronous surrogate optimization method implementation based on \citet{regis2007stochastic}. Nonlinear Optimization by Mesh Adaptive Direct Search (\texttt{NOMAD}) \cite{le2011algorithm} is a library intended for time-consuming black-box simulation with a small number of variables. The library implements mesh adaptive direct search (MADS) and there is support for asynchronous function evaluations using MPI. The framework is fault resilient in the sense that it supports objective function failing to return a valid output. Similar fault resilience is provided by \texttt{POAP}\xspace, which allows the user to decide what action to take in case of a failure. \texttt{Dakota} \cite{eldred2007dakota} is an extensive toolkit with algorithms for optimization with and without gradient information; uncertainty quantification, nonlinear least squares methods, and sensitivity/variance analysis. These components can be used on their own or with strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. The \texttt{Dakota} toolkit is object-oriented and written in C\texttt{++}{} with the intention of being a flexible and extensible interface between simulation codes, and there is support for parallel function evaluations. \texttt{Dakota} includes C\texttt{++}{} code for global optimization with a GP based surrogate (e.g, an implementation of the GP-EI/EGO method \cite{jones1998efficient} and EGRA method \cite{bichon2008efficient}). \texttt{Dakota} does not have global optimization codes designed to be used with RBF surrogates, although it is possible to construct an RBF surrogate in \texttt{Dakota}. \texttt{BayesOpt} \cite{martinez2014bayesopt} is a library with Bayesian optimization methods to solve nonlinear optimization problems. Bayesian optimization methods build a posterior distribution to capture the evidence and prior knowledge of the target function. Built in C\texttt{++}{}, the library is efficient, portable, and flexible. There is support for commonly used methods such as sequential Kriging optimization (SKO), sequential model-based optimization (SMBO), and efficient global optimization (EGO). The software is sequential and there is no support for parallel function evaluations. \texttt{RBFOpt} \cite{costa2014rbfopt} is a radial basis function based library that implements and extends the global optimization RBF algorithm proposed by Gutmann \cite{gutmann2001radial}. \texttt{RBFOpt} is written in Python and supports asynchronous parallelism through Python's multiprocessing library, but there is no support for MPI. The software is not designed to cover a large class of surrogate optimization methods and there is no support for dynamically changing the number of workers and combining different optimization strategies. \texttt{Cornell-MOE} is a Python library that implements Bayesian optimization with the expected improvement and knowledge gradient acquisition functions. 
The software is built on work that extends these acquisition functions to batch synchronous parallel, both with and without gradient information \cite{wu2016parallel,wu2017bayesian}. There is no support for asynchronous parallelism and it is not possible to dynamically change the number of workers. \subsection{Contribution} The \texttt{POAP}\xspace and \texttt{pySOT}\xspace software packages have become very popular, with \texttt{pySOT}\xspace having been downloaded more than 88,000 times and \texttt{POAP}\xspace more than 126,000 times. The main contribution of \texttt{POAP}\xspace is an event-driven framework for building and combining asynchronous optimization strategies. The user can implement their own strategies that specify what actions to take when different events occur, while all communication and dispatching of work is handled by the framework. \texttt{POAP}\xspace is designed to be both flexible and easily extensible, and the framework makes it easy to dynamically change the number of workers and combine different optimization strategies. \texttt{POAP}\xspace is fault resilient and handles function evaluation and worker crashes. \texttt{pySOT}\xspace is a great test-suite for doing head-to-head comparisons with different experimental designs, surrogate models, and acquisition functions. Being built on top of \texttt{POAP}\xspace, \texttt{pySOT}\xspace leverages the many benefits of the \texttt{POAP}\xspace framework, leading to a robust and flexible framework without having to worry about the communication and dispatching of work. The object-oriented design makes \texttt{pySOT}\xspace easy to extend and users can experiment with different surrogate models, experimental designs, and auxiliary problems, and make comparisons in either a synchronous or an asynchronous setting. In addition, \texttt{pySOT}\xspace supports checkpointing which allows users to resume a crashed optimization run. We provide an extensive comparison of synchrony and asynchrony in cases where the objective function evaluation time varies and conclude that, for multimodal problems, reducing idle time is more important than waiting for the additional information that synchrony provides. We conclude that asynchrony should be preferred over synchrony in this case. The performance difference between asynchrony and synchrony increases with the variance of the evaluation time and the number of processors, since both increase the idle time under synchrony. Our numerical experiments also indicate that parallelism improves exploration and that the parallel algorithms often outperform the serial version with respect to the number of function evaluations. \subsection{Overview} We review the general surrogate optimization algorithm and the most common surrogate models, experimental designs, and auxiliary problems in \S\ref{sec:background}. We describe in detail our asynchronous surrogate optimization algorithm in \S\ref{sec:async}. The implementation details of \texttt{POAP}\xspace and \texttt{pySOT}\xspace are described in \S\ref{sec:poap} and \S\ref{sec:pysot} respectively. We illustrate a code example in \S\ref{sec:code} that shows how to use \texttt{pySOT}\xspace and \texttt{POAP}\xspace. We provide an extensive comparison between asynchrony and synchrony in \S\ref{sec:experiments} and conclude in \S\ref{sec:conclusions}. \subsection{Updating the sampling radius in Stochastic SRBF} \label{sec:sampling_radius} We now elaborate on how to pick the value of the sampling radius $\sigma$ that is used to generate the candidate points in the LMS-RBF and DYCORS methods.
We follow the idea in \cite{regis2007stochastic} where counters $C_{\text{success}}$ and $C_{\text{fail}}$ are used to track the number of consecutive evaluations with and without significant improvement. In the serial setting, the algorithm restarts if there are too many consecutive failures. This idea is extended to synchronous parallel in \cite{regis2009parallel} by processing a batch at a time. If $C_{\text{success}}$ reaches a tolerance $\mathcal{F}_{\text{success}}$ the sampling radius is doubled and $C_{\text{success}}$ is set to 0. Similarly, if $C_{\text{fail}}$ reaches $\mathcal{F}_{\text{fail}}$ the sampling radius is halved and $C_{\text{fail}}$ is set to 0. In the asynchronous setting, we update the counters after each completed function evaluation. We do not update the counters for evaluations that were launched before the last time the sampling radius was changed. The reason for this is that these evaluations are based on outdated information. The logic for updating the sampling radius and the best solution can be seen in Algorithm \ref{alg:async_algo_radius}. \begin{algorithm}[!t] \caption{Sampling radius adjustment routine} \label{alg:async_algo_radius} \begin{algorithmic}[1] \State \textbf{Inputs:} $\sigma$, $f(x_i)$, $x_i$, $f_{\text{best}}$, $x_{\text{best}}$, $C_{\text{success}}$, $C_{\text{fail}}$, $\mathcal{F}_{\text{success}}$, $\mathcal{F}_{\text{fail}}$, $\delta$ \If{$f(x_i) < f_{\text{best}}$} \If{$f(x_i) < f_{\text{best}} - \delta |f_{\text{best}}|$} \State $C_{\text{success}} \leftarrow C_{\text{success}} + 1$ \State $C_{\text{fail}} \leftarrow 0$ \EndIf \State $f_{\text{best}} \leftarrow f(x_i)$ \State $x_{\text{best}} \leftarrow x_i$ \Else \State $C_{\text{success}} \leftarrow 0$ \State $C_{\text{fail}} \leftarrow C_{\text{fail}} + 1$ \EndIf \State \If{$C_{\text{success}} = \mathcal{F}_{\text{success}}$ \OR $C_{\text{fail}} = \mathcal{F}_{\text{fail}}$} \If{$C_{\text{success}} = \mathcal{F}_{\text{success}}$} \State $\sigma \leftarrow \min(2\sigma, \sigma_{\max})$ \Else \State $\sigma \leftarrow \max(\sigma/2, \sigma_{\min})$ \EndIf \State $C_{\text{success}} \leftarrow 0$ \State $C_{\text{fail}} \leftarrow 0$ \EndIf \State \State \Return $\sigma$, $f_{\text{best}}$, $x_{\text{best}}$, $C_{\text{success}}$, $C_{\text{fail}}$ \end{algorithmic} \end{algorithm} We also follow the recommendations in \cite{regis2007stochastic} and \cite{regis2009parallel} to restart the algorithm when we reach a maximum failure tolerance parameter $\mathcal{M}_{\text{fail}}$ or when the sampling radius $\sigma$ drops below $\sigma_{\min}$. Restarting has been shown to be successful for LMS-RBF and DYCORS as it can be hard to make progress when the surrogate is very biased towards the current best solution and we may be stuck in a local minimum that is hard to escape. Restarting the algorithm can help avoid this issue. We do not terminate pending evaluations after a restart occurs, but they are not incorporated in the surrogate model or used to adjust the sampling radius when they finish. \subsection{Controllers} The controller is responsible for accepting or rejecting proposals by the strategy object, controlling and monitoring the workers, and informing the strategy object of relevant events. Examples of relevant events are the processing of a proposal, or status updates on a function evaluation. Interactions between the controller and the strategies are organized around proposals and evaluation records. At the beginning of the optimization and on any later change to the system state, the controller requests a proposal from the strategy.
The proposal consists of an action (evaluate a function, kill a function, or terminate the optimization), a list of parameters, and a list of callback functions to be executed once the proposal is processed. The controller then either accepts the proposal (and sends a command to the worker), or rejects the proposal. When the controller accepts a proposal to start a function evaluation, it creates an evaluation record to share information about the status of the evaluation with the strategy. The evaluation record includes the evaluation point, the status of the evaluation, the value (if completed), and a list of callback functions to be executed on any update. Once a proposal has been accepted or rejected, the controller processes any pending system events (e.g. completed or canceled function evaluations), notifies the strategy about updates, and requests the next proposed action. \texttt{POAP}\xspace comes with a serial controller for when objective function evaluations are carried out in serial. There is also a threaded controller that dispatches work to a set of workers where each worker is able to handle evaluation and kill requests. The requests are asynchronous in the sense that the workers are not required to complete the evaluation or termination requests. The worker is forced to respond to evaluation requests, but may ignore kill requests. When receiving an evaluation request, the worker should either attempt the evaluation or mark the record as killed. The worker sends status updates back to the controller by updating the relevant record. There is also an extension of the threaded controller that works with MPI and a controller that uses simulated time. The latter is useful for testing asynchronous optimization strategies for different evaluation time distributions. \subsection{Strategies} The strategy is responsible for choosing new evaluations, killing evaluations, and terminating the optimization run when a stopping criteria is reached. \texttt{POAP}\xspace provides some basic default strategies based on non-adaptive sampling and serial optimization routines and also some strategies that adapt or combine other strategies. Different strategies can be composed by combining their control actions, which can be used to let a strategy cycle through a list of optimization strategies and select the most promising of their proposals. Strategies can also subscribe to be informed of all new function evaluations so they incorporate any new function information, even though the evaluation was proposed by another strategy. This makes it possible to start several independent strategies while still allowing each strategy to look at the function information that comes from function evaluations proposed by other strategies. As an example, we can have a local optimizer strategy running a gradient based method where the starting point can be selected based on the best point found by any other strategy. The flexibility of the \texttt{POAP}\xspace framework makes combined strategies like these straightforward. \subsection{Workers} The multi-threaded controller employs a set of workers that are capable of managing concurrent function evaluations. Each worker does not provide parallelism on its own, but the worker itself is allowed to exploit parallelism by separate external processes. The basic worker class can call Python objective functions, which only results in parallelism if the objective function itself allows parallelism. 
There is also a worker class that uses subprocesses to evaluate objective functions that are not necessarily written in Python. The user is responsible for specifying how to evaluate the objective function and parse partial information. The number of workers can be adjusted dynamically during the optimization process, which is particularly useful in a cloud setting. \texttt{POAP}\xspace supports running on both the Google Cloud Platform (GCP) and Amazon Web Services (AWS). We support workers connecting to a specified TCP/IP port to communicate with the controller, making it easy to add external resources. \subsection{Strategies and auxiliary problems} The strategy object follows the \texttt{POAP}\xspace framework. \texttt{pySOT}\xspace implements an asynchronous base class for surrogate optimization which serves as a template for all surrogate optimization strategies in \texttt{pySOT}\xspace. This base class abstracts out the difference between serial, synchronous parallel, and asynchronous parallel. \texttt{pySOT}\xspace supports the candidate point methods SRBF and DYCORS. We also support strategies for the most common acquisition functions from BO: expected improvement (EI) and the lower confidence bound (LCB). \subsection{Experimental design} \texttt{pySOT}\xspace implements the symmetric Latin hypercube (SLHD), Latin hypercubes (LHD), and 2-factorial designs that were described in \S\ref{sec:expdes}. The experimental design is always evaluated first and the asynchronous optimization strategy in \texttt{pySOT}\xspace is designed to proceed to the adaptive phase as soon as no initial design points are outstanding. Another possibility is to cancel the pending evaluations from the initial phase and proceed to the adaptive phase as soon as possible, but we choose to finish the entire initial design as exploration is important for multi-modal optimization problems. As discussed in the previous section, we must choose enough initial design points to allow building the surrogate model when all points in the initial design are either completed or pending. \subsection{Surrogate models} \texttt{pySOT}\xspace supports many popular surrogate models, including RBFs, GPs, MARS, polynomial regression, and support vector regression. We provide our own RBF implementation that uses the incremental factorization update idea that was described in \S\ref{sec:rbf}. Support for MARS is provided via py-earth\footnote{\url{https://github.com/scikit-learn-contrib/py-earth}} and support for GPs and polynomial regression is provided through scikit-learn \cite{pedregosa2011scikit}. The surrogate model does not need access to any of the other objects, as it just constructs a model based on the evaluated points and their values. The surrogate fitting problem may be ill-conditioned if the domain is scaled poorly, and we provide wrappers for rescaling the domain to the unit hypercube, which is particularly useful on problems where the bounds are very skewed. We add regularization to the linear system when radial basis functions are used to keep the system well-conditioned. Previous work has shown that hard-capping of function values can be useful to avoid oscillation, where a common choice is to replace all function values above the median by the median function value, and we provide wrappers for this as well. \subsection{Optimization problems} The optimization problem object specifies the number of dimensions, the number of analytical constraints, and provides methods for evaluating the objective function and the constraints.
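As an illustration, a problem definition in this style might look as follows (a schematic sketch of an unconstrained Ackley test problem; the attribute and method names are illustrative and may differ from the exact \texttt{pySOT}\xspace interface):
\begin{verbatim}
import numpy as np

class Ackley:
    # A d-dimensional Ackley test problem with box constraints and no analytical
    # constraints; integer/continuous variable indices are exposed so that
    # strategies can round the integer variables.
    def __init__(self, dim=10):
        self.dim = dim
        self.lb = -15 * np.ones(dim)
        self.ub = 20 * np.ones(dim)
        self.int_var = np.array([], dtype=int)
        self.cont_var = np.arange(dim)

    def eval(self, x):
        # Objective value at a single point x.
        x = np.asarray(x, dtype=float)
        d = float(self.dim)
        return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
                - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
\end{verbatim}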
We provide implementations of many standard test problems which can be used to compare algorithms within the \texttt{pySOT}\xspace framework. The optimization problem does not depend on any other objects. \subsection{Checkpointing} Checkpointing is important when optimizing an expensive function since the optimization may run for several days or weeks, and it would be devastating if all information was lost due to e.g., a system or power failure. \texttt{pySOT}\xspace supports a controller wrapper for saving the state of the system each time something changes, making it possible to resume from the latest such snapshot.
\section{Introduction}\label{sec_intro} Interface problems arise from diverse physical and engineering applications in which the coefficients of the governing partial differential equations are discontinuous across material interfaces that separate the physical domains. The body-fitted finite element methods resolve the geometry of the interface by requiring the vertices of the finite element mesh to be located on the interfaces \cite{Babuska70, Chen98, ChenLong}. For domains with complex geometry, the construction of body-fitted shape regular finite element meshes may be difficult and time-consuming, which is the main driving force of the study of unfitted finite element methods. In this paper we will show that a shape regular body-fitted mesh can indeed be constructed for an arbitrarily shaped smooth interface based on our new merging cell algorithm (see remarks below Theorem \ref{thm:3.1}). We emphasize, however, that even when the body-fitted shape regular mesh is available, the construction of high-order finite element methods still requires substantial new ideas including, for example, the isoparametric finite element method \cite{Ciarlet, Lehrenfeld} or unfitted finite element methods which are the focus of this paper. We remark that the shape regularity assumption of the finite element mesh is not only fundamental in the mathematical theory of finite element methods (see, e.g., \cite{Ciarlet}) but also essential in controlling the condition number of the finite element stiffness matrix for elliptic equations (see, e.g. \cite{Bank}). Let $\Omega\subset\mathbb{R}^2$ be a bounded Lipschitz domain which is divided by a $C^2$-smooth interface $\Gamma$ into two nonintersecting subdomains $\Omega_1\subset\bar{\Omega}_1\subset \Omega$, $\Omega_2=\Omega\setminus\bar{\Omega}_1$, see Fig.\ref{fig:1.1}. We consider the following elliptic interface problem \begin{align} &-{\rm div}(a\nabla u)=f\ \ \mbox{in }\Omega_1\cup\Omega_2,\label{m1}\\ &[\![u]\!]_{\Gamma}=0, \, [\![a\nabla u \cdot n]\!]_{\Gamma}=0\ \ \mbox{on } \Gamma,\ \ u=g\ \ \mbox{on }\partial\Omega,\label{m2} \end{align} where $f\in L^2(\Omega)$, $g\in H^{1/2}(\partial \Omega)$, $n$ is the unit outer normal to $\Omega_1$, and $[\![v]\!]:=v|_{\Omega_1}-v|_{\Omega_2}$ stands for the jump of a function $v$ across the interface $\Gamma$. We assume that the coefficient $a(x)$ is positive and piecewise constant, namely, $a=a_1\chi_{\Omega_1}+a_2\chi_{\Omega_2}$, $a_1,a_2 >0$, where $\chi_{\Omega_i}$ denotes the characteristic function of $\Omega_i$, $i=1,2$. \begin{figure}[htbp] \centering \includegraphics[width=0.3\linewidth]{./figs_new/IDG1.pdf} \caption{The setting of the elliptic interface problem and the unfitted mesh.}\label{fig:1.1} \end{figure} Unfitted finite element methods in the discontinuous Galerkin (DG) framework have attracted considerable interest in the literature in the last twenty years starting from the seminal work \cite{Hansbo} in which an unfitted finite element method is proposed for elliptic interface problems. The method is defined on a fixed background mesh and uses different finite element functions in different cut cells, which are the intersections of the mesh elements with the physical domains. The jump condition on the interface is enforced by penalties, which extends an earlier idea of Nitsche \cite{Nitsche}. This unfitted finite element method can also be viewed as the interior penalty discontinuous Galerkin method (see, e.g., \cite{Arnold}) defined on meshes allowing curve-shaped elements.
The main difficulty in using the unfitted finite element methods is the so-called {\it small cut cell problem}: the cut cells can be arbitrarily small and anisotropic, which can make the stiffness matrix extremely ill-conditioned, especially for high-order finite element methods \cite{Prenter, Badia22}. For other approaches to design unfitted discretization methods by constructing special finite element bases on interface elements or finite difference stencils along the interface, we refer to the immersed boundary method \cite{Peskin}, the immersed interface method \cite{LeVeque, Li06}, or the immersed finite element method \cite{Li03, Chen09}. There are two approaches in the literature to attack the small cut cell problem. One is by appropriate techniques of stabilization \cite{Burman10, Burman12, Wu, Massjung, Wang, Gurken}. Among them, for example, the method of ghost penalty \cite{Burman10, Burman12, Gurken} adds additional penalties on the jumps of derivatives across sides or facets of interface elements. The other approach is by merging the small cut cells with neighboring large elements \cite{Johansson, Huang, Badia18, CLX, BurmanHHO21} so that the merged macro-elements have enough support. While the DG formulation is still used in \cite{Johansson, Huang, CLX}, the aggregated unfitted finite element method in \cite{Badia18} relies on the construction of stable extension operators so that the finite element space is still $C^0$. We refer to recent works \cite{Burman15, Burman21, Badia21, Badia22} for further information about ghost penalty and the aggregated unfitted finite element method. In \cite{CLX} an adaptive high-order unfitted finite element method is proposed for elliptic interface problems in which the $hp$ a priori and a posteriori error estimates are derived based on novel $hp$ domain inverse estimates and the concept of interface deviation. The interface deviation is a measure to quantify the mesh resolution of the geometry of the interface. The macro-elements, which are the union of small interface elements and their surrounding elements, are assumed to be rectangular in \cite{CLX}. This assumption is different from those in \cite{Johansson, Huang, Badia18}, see Fig.\ref{fig:p123}. The macro-elements in \cite{Johansson, Huang, Badia18} need not to be of rectangular shape, which makes the implementation simpler but the crucial inverse estimates on extended elements in \cite{Johansson, Huang} or the stability of the extension operators \cite{Badia18} are shown without considering the dependence on the finite element approximation order $p$. The assumption that the macro-elements should be rectangular in \cite{CLX} raises the question of how to construct the merging algorithm in practical applications. \bigskip \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{./figs_new/p123.pdf} \caption{Three different ways of generating macro-elements which are marked in dark. The left, middle, and right figures illustrate the macro-elements used in \cite{Johansson,BurmanHHO21}, \cite{Huang, Badia18}, and \cite{CLX}, respectively.}\label{fig:p123} \end{figure} The first objective of this paper is to propose a reliable algorithm to merge small interface elements with their surrounding elements to generate the macro-elements. 
The algorithm is based on the concept of admissible chain of interface elements, the classification of patterns for merging elements, and appropriate ordering in generating macro-elements from the patterns so that the reliability of the algorithm in the sense that it terminates in finite number of steps can be proved. This algorithm also leads to a reliable algorithm of automatically generating 2D shape regular body-fitted finite element meshes for arbitrarily shaped smooth interfaces. To the authors' best knowledge, this algorithm introduces a new way to generate body-fitted finite element meshes and may be of independent interest. The second objective of the paper is to study the condition number of the stiffness matrix of high-order unfitted finite element methods which are known to be of the order $O(h^{-2})$ in the literature \cite{Burman10, Johansson, Huang, Badia18, Badia22HHO} on quasi-uniform meshes with the mesh size $h$. For high order methods, it is known \cite{Prenter} that the condition number of the stiffness matrix may grow exponentially with the finite element approximation order $p$ in terms of the measure of cut cells. This indicates that the geometry of the cut cells is essential in controlling the condition number of the stiffness matrix. In this paper, we will take the basis functions of the spectral element, that is, the Lagrangian interpolation functions at the Gauss-Lobatto points on elements not intersecting with the interface. For the interface elements, extra care must be taken as the basis of the spectral element on $K$ is ill-conditioned on the subsets $K_i=K\cap\Omega_i$, $i=1,2$, which is similar to the observation in \cite[P.346]{Dubiner} for Legendre polynomials. Here we choose the $L^2$-orthogonal functions on some special polygons inside $K_i$, $i=1,2$, as the basis functions for the interface elements $K$. We show that the condition number of the stiffness matrix is bounded by $\Theta^2(p^3(N-N^\Gamma)+p^4N^\Gamma)$ up to a logarithmic factor, where $N$ is the number of total elements, $N^\Gamma$ is the number of interface elements, and $\Theta$ depends on the interface deviation and $p$. This bound is optimal and indicates that the mesh has to sufficiently resolve the geometry of the interface to control the condition number of the stiffness matrix. The results of this paper allow for extensions in several directions. Firstly, for the ease of exposition, we consider in this paper the case when the domain $\Omega$ is a union of rectangles and the interface is smooth. The extension to the general domains with smooth boundary is straightforward. Secondly, the case when the interface is piecewise smooth will be pursued in our forthcoming work by combining the ideas in \cite{CLX} on large elements and interface deviation for interfaces with singularities with the merging algorithm developed in this paper. Thirdly, the theoretical results in this paper and in \cite{CLX} including the $hp$ domain inverse estimates and the concept of the interface deviation can be extended to study three-dimensional interface problems. The merging algorithm in the three-dimensional case is more challenging. Nevertheless, we believe that with the new insights gained in this paper for the two-dimensional case, reliable algorithms for constructing cubic macro-elements can be achieved in future. 
Finally, we remark that our argument to analyze and control the condition number of the stiffness matrix is fairly general; it can be used in other unfitted finite element methods including three-dimensional cases. The layout of the paper is as follows. In section 2 we introduce our unfitted finite element method. In section 3 we construct the merging algorithm to generate the induced mesh. In section 4 we prove the discrete Poincar\'e inequality and the $hp$ estimate for the condition number of the stiffness matrix. In section 5 we present several numerical examples to confirm our theoretical results. \section{The unfitted finite element method}\label{sec_ufem} Let $\Omega\subset\mathbb{R}^2$ be a domain which is a union of rectangles and $\mathcal{T}$ a Cartesian finite element mesh of $\Omega$ with possible hanging nodes. This allows us to locally refine the mesh near the interface to resolve the geometry while saving computational costs away from the interface. The elements of the mesh are (open) rectangles whose sides are parallel to the coordinate axes. We assume that the interface intersects the boundary of each element $K$ it crosses exactly twice, at two different sides (end points included). For any element $K$, let $h_K$ stand for its diameter. Denote by $\mathcal{T}^\Gamma:=\{K\in\mathcal{T}:K\cap\Gamma\not=\emptyset\}$ the set of interface elements. We recall the definition of large element in Chen et al \cite[Definition 2.1]{CLX}. \begin{Def}\label{def:2.1} (Large element) For $i=1,2$, an element $K\in \mathcal{T}$ is called a large element with respect to $\Omega_i$ if $K\subset\Omega_i$ or $K\in\mathcal{T}^\Gamma$ for which there exists a constant $\delta_0\in(0,1/2)$ such that $|e\cap \Omega_i|\ge \delta_0|e|$ for each side $e$ of $K$ having nonempty intersection with $\Omega_i$. In particular, $K$ is called a large element if $K\in\mathcal{T}^\Gamma$ is large with respect to both $\Omega_1$ and $\Omega_2$. Otherwise, $K$ is called a small element. \end{Def} Note that it is possible that $K\in\mathcal{T}^\Gamma$ may not be a large element. The following assumption in \cite{CLX} is inspired by Johansson and Larson \cite{Johansson} in which a fictitious boundary method is considered. \medskip {\bf Assumption (H1)} For each $K\in\mathcal{T}^\Gamma$, there exists a rectangular macro-element $M(K)$ which is a union of $K$ and its surrounding element (or elements) such that $M(K)$ is a large element. We assume $h_{M(K)}\le C_0h_K$ for some constant $C_0>0$. \medskip In section 3 we will construct a merging algorithm to find the macro-element for each small element in an admissible chain of interface elements. This indicates that the assumption (H1) can always be satisfied by using the algorithm. In the following, we will always set $M(K)=K$ if $K\in\mathcal{T}^{\Gamma}$ is a large element. Then, the induced mesh of $\mathcal{T}$ is defined as \begin{eqnarray*} \mathcal{M}=\{M(K):K\in \mathcal{T}^\Gamma\}\cup\{K\in\mathcal{T}: K\not \subset M(K') \text{ for any } K'\in \mathcal{T}^{\Gamma}\}. \end{eqnarray*} We will write $\mathcal{M}={\rm Induced}(\mathcal{T})$. Note that $\mathcal{M}$ is also a Cartesian mesh of $\Omega$ in the sense that either $M(K)\cap M(K')=\emptyset$ or $M(K)=M(K')$ for any two different elements $K,K'\in\mathcal{T}$. All elements in $\mathcal{M}$ are large elements.
For any $K\in\mathcal{M}^\Gamma:=\{K\in\mathcal{M}:K\cap\Gamma\not=\emptyset\}$, denote $K_i=K\cap\Omega_i$, $i=1,2$, $\Gamma_K=\Gamma\cap K$, and $\Gamma_K^h$ the open line segment connecting the two intersection points of $\Gamma$ and $\partial K$. $\Gamma_K^h$ divides the element $K$ into two polygons $K_1^h$ and $K_2^h$ which are the polygonal approximation of $K_1$ and $K_2$, respectively. An important property of $K$ being a large element is that $K_i^h$, $i=1,2$, is a {\it strongly} shape regular polygon in the sense that it is the union of shape regular triangles in the sense of Ciarlet \cite{Ciarlet}. We remark that there are different definitions of shape regular polygons in the literature, see, e.g., Ming and Shi \cite{Ming} and Brenner and Sung \cite{Brenner}. The following concept of interface deviation is introduced in \cite{CLX}. \begin{Def}\label{def:2.2} For any $K\in\mathcal{M}^\Gamma$, the interface deviation $\eta_K$ is defined as $\eta_K=\max(\eta_K^1,\eta_K^2)$, where for $i=1,2$, if $A_K^i\in\Omega_i$ is the vertex of $K$ which has the maximum distance to $\Gamma_K^h$ among all vertices of $K$ in $\Omega_i$, \begin{eqnarray*} \eta_K^i=\frac{{\rm dis}_{\rm H}(\Gamma_K,\Gamma_K^h)}{{\rm dis}(A_K^i,\Gamma_K^h)}. \end{eqnarray*} Here ${\rm dis}_{\rm H}(\Gamma_1,\Gamma_2)=\max_{x\in\Gamma_1}(\min_{y\in\Gamma_2}|x-y|)$ and ${\rm dis}(A,\Gamma_1)=\min_{y\in\Gamma_1}|A-y|$. \end{Def} The interface deviation is a measure on how well the mesh resolves the geometry of the interface. We will show in section 4 that this concept also links to the control of the condition number of the stiffness matrix. It is known that if $\Gamma_K$ is $C^2$-smooth, ${\rm dis}_{\rm H}(\Gamma_K,\Gamma_K^h)\le Ch_K^2$ (see, e.g., Feistauer \cite[\S3.3.2]{Feistauer}) and thus $\eta_K\le Ch_K$ for some constant $C$ independent of $h_K$. Therefore, the interface deviation can be made arbitrarily small by locally refining the mesh near the interface. When the interface $\Gamma$ is Lipschitz and piecewise $C^2$-smooth, the definition of the large element and interface deviation has to be modified in the elements containing the singular points of the interface, see \cite{CLX} for the details. For any integer $p\ge 1$ and $K\in\mathcal{M}$, denote $Q_p(K)$ the set of polynomials in $K$ which is of degree $p$ in each variable. The following $hp$ domain inverse estimate is proved in \cite[Lemma 2.4]{CLX}. \begin{lem}\label{lem:2.0} Let $\Delta$ be a triangle with vertices $A=(a_1,a_2)^T$, $B=(0,0)^T$, $C=(c_1,0)^T$, where $a_2,c_1 >0$. Let $\delta\in (0,a_2)$ and $\Delta_\delta=\{x\in\Delta:{\rm dist}(x,BC)>\delta\}$. Then we have \begin{eqnarray*} \|v\|_{L^2(\Delta)}\le\mathsf{T}\left(\frac{1+\delta a_2^{-1}}{{{1-\delta a_2^{-1}}}}\right)^{2p+3/2}\|v\|_{L^2(\Delta_\delta)}\ \ \forall v\in Q_p(\Delta), \end{eqnarray*} where $\mathsf{T}(t)=t+\sqrt{t^2-1}\ \ \forall t\ge 1$. \end{lem} The proof of this lemma makes use of the following one-dimensional domain inverse estimate in \cite[Lemma 2.3]{CLX} \begin{equation}\label{g1} \|g\|_{L^2(I_\lambda\backslash\bar I)}^2\le \frac 12\left[(\lambda+\sqrt{\lambda^2-1})^{2p+1}-1\right]\|g\|_{L^2(I)}^2\ \ \forall g\in Q_p(I_\lambda), \end{equation} where $I=(-1,1), I_\lambda=(-\lambda,\lambda)$, $\lambda>1$, and $Q_p(I_\lambda)$ is the set of polynomials of order $p$ in $I_\lambda$. 
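The estimate \eqref{g1} is easy to check numerically; the following short script (assuming only NumPy; it is not part of the analysis) compares both sides for a Chebyshev polynomial of degree $p$:
\begin{verbatim}
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

p, lam = 8, 1.1
g = Chebyshev.basis(p)                     # Chebyshev polynomial T_p of degree p

def l2_norm_sq(f, a, b, n=200001):         # composite trapezoidal rule
    t = np.linspace(a, b, n)
    y = f(t)**2
    return (b - a) / (n - 1) * (y.sum() - 0.5 * (y[0] + y[-1]))

lhs = 2.0 * l2_norm_sq(g, 1.0, lam)        # ||g||^2 on I_lam \ I (by symmetry of T_p^2)
rhs = 0.5 * ((lam + np.sqrt(lam**2 - 1))**(2*p + 1) - 1) * l2_norm_sq(g, -1.0, 1.0)
print(lhs <= rhs)                          # True
\end{verbatim}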
We remark that the growing factor $(\lambda+\sqrt{\lambda^2-1})^{2p+1}$ is sharp; it is attained by the Chebyshev polynomials, whose explicit expression is $C_n(t)=\frac 12[(t+\sqrt{t^2-1})^n+(t-\sqrt{t^2-1})^n]$, $n\ge 0$, see DeVore and Lorentz \cite[P.76]{DeVore}. Let $\delta_K:={\rm dis}_{\rm H}(\Gamma_K,\Gamma_K^h)$. We also define two polygons $K_i^{h-\delta_K}$, $i=1,2$, as follows. Let $\Gamma_{K_i}^{h-\delta_K}\subset K_i$ be the line segment which is parallel to $\Gamma_K^h$ and whose distance to $\Gamma^h_K$ is $\delta_K$. Let $K_i^{h-\delta_K}$ be the polygon bounded by sides of $K$ and $\Gamma_{K_i}^{h-\delta_K}$. \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{./figs_new/fig21.pdf} \caption{The figure used in the proof of Lemma \ref{lem:new}.}\label{fig:2.1} \end{figure} \begin{lem}\label{lem:new} Let $K\in\mathcal{M}^\Gamma$ and $\eta_K\le 1/2$. Then for $i=1,2$, we have \begin{align} & \|v\|_{L^2(K_i^{h-\delta_K})}\le \|v\|_{L^2(K_i)}\le C\mathsf{T}\left(\frac{1+3\eta_K}{1-\eta_K}\right)^{2p+3/2}\|v\|_{L^2(K_i^{h-\delta_K})}\ \ \forall v\in Q_p(K),\label{f4} \end{align} where the constant $C$ is independent of $h_K$, $p$, and $\eta_K$. \end{lem} \begin{proof} The left inequality in \eqref{f4} is trivial since $K_i^{h-\delta_K}\subset K_i$. Here we prove the right inequality in \eqref{f4} when $\Gamma$ intersects $\partial K$ at neighboring sides. The other cases can be proved similarly. We use the notation in Fig.\ref{fig:2.1} in which $B'C'$, $B''C''$ are parallel to $\Gamma_K^h$ and the distances of $B'C', B''C''$ to $\Gamma_K^h$ are $\delta_K$. Then $K_1^{h-\delta_K}=\Delta A_K^1B'C'$ and $K_2^{h-\delta_K}$ is the polygon bounded by sides of $K$ and $B''C''$. Let $d_i={\rm dis}(A_K^i,\Gamma_K^h)$, $i=1,2$. By definition, the interface deviation $\eta_K\ge \delta_K/d_i$, $i=1,2$. By Lemma \ref{lem:2.0}, for any $v\in Q_p(K)$, \begin{eqnarray*} \|v\|_{L^2(K_1)}\le\|v\|_{L^2(\Delta A_K^1B''C'')}&\le&\mathsf{T}\left(\frac{1+2\delta_K/(d_1+\delta_K)}{1-2\delta_K/(d_1+\delta_K)}\right)^{2p+3/2}\|v\|_{L^2(\Delta A_K^1B'C')}\\ &\leq&\mathsf{T}\left(\frac{1+3\eta_K}{1-\eta_K}\right)^{2p+3/2}\|v\|_{L^2(K_1^{h-\delta_K})}. \end{eqnarray*} The case for $K_2$ can be proved similarly. This completes the proof. \end{proof} The numerical results in Example 1 in section 5 indicate that the bound in Lemma \ref{lem:new} is sharp. Now for any $K\in\mathcal{M}$, we denote \begin{eqnarray*} a_K=\left\{\begin{array}{ll} \frac{a_1+a_2}{2}& \mbox{if }K\in\mathcal{M}^\Gamma,\\ a_i & \mbox{if }K\subset \Omega_i. \end{array}\right.,\quad \Theta_K=\left\{\begin{array}{ll} \mathsf{T}\left(\frac{1+3\eta_K}{1-\eta_K}\right)^{4p+3} & \mbox{if }K\in\mathcal{M}^\Gamma,\\ 1 & \mbox{otherwise}. \end{array}\right. \end{eqnarray*} Based on the concept of interface deviation, the following $hp$ inverse estimates on curved domains are proved in \cite[Lemma 2.8, (2.12)]{CLX}. \begin{lem}\label{lem:2.1} Let $K\in\mathcal{M}^\Gamma$ and $\eta_K\le 1/2$. Then for $i=1,2$, we have \begin{eqnarray*} & &\|\nabla v\|_{L^2(K_i)}\le Cp^2h_K^{-1}\Theta_K^{1/2}\|v\|_{L^2(K_i)}\ \ \forall v\in Q_p(K),\\ & &\|v\|_{L^2(\partial K_i)}\le Cp h_K^{-1/2}\Theta_K^{1/2}\|v\|_{L^2(K_i)}\ \ \forall v\in Q_p(K), \end{eqnarray*} where the constant $C$ is independent of $h_K,p,$ and $\eta_K$.
\end{lem} We remark that $hp$ inverse estimates on star-shaped curve elements are studied in Massjung \cite{Massjung}, Wu and Xiao \cite{Wu}, and Cangiani et al \cite{Dong} which can be viewed as different forms of assumption on the mesh to resolve the geometry. Lemma \ref{lem:2.1} does not require the locally star-shaped assumption on the interface and is robust with respect to small variations of the interface as long as the interface deviation is the same. Notice that if $\eta_K\le \frac 1{p(p+1)}$, for $s=\frac{1+3\eta_K}{1-\eta_K}=1+\gamma_K$, where $\gamma_K=\frac {4\eta_K}{1-\eta_K}\le 4p^{-2}$, we have $\mathsf{T}(s)=s+\sqrt{s^2-1}=1+\rho_K$ with $\rho_K=\gamma_K+\sqrt{\gamma_K^2+2\gamma_K}\le p^{-1}(4p^{-1}+\sqrt{16p^{-2}+8})$. Thus $\Theta_K=e^{(4p+3)\ln(\mathsf{T}(s))}\le e^{(4p+3)\rho_K}\le C$ for some constant $C$ independent of $p$ and $\eta_K$. This motivates us to make the following assumption in the remainder of this paper which can be easily satisfied for $C^2$-smooth interfaces if the mesh is locally refined near the interface. \medskip {\bf Assumption (H2)} For any $K\in\mathcal{M}^\Gamma$, $\eta_K\le \frac 1{p(p+1)}$. \medskip Now we introduce some notation for DG methods. Let $\mathcal{E}=\mathcal{E}^{\rm side}\cup\mathcal{E}^\Gamma\cup\mathcal{E}^{\rm bdy}$, where $\mathcal{E}^{\rm side}=\{e=\partial K\cap\partial K':K,K'\in\mathcal{M}\}$, $\mathcal{E}^\Gamma=\{\Gamma_K:K\in\mathcal{M}\}$, and $\mathcal{E}^{\rm bdy}=\{e=\partial K\cap\partial\Omega:K\in\mathcal{M}\}$. For $i=1,2$, denote by $\mathcal{M}_i=\{K\in\mathcal{M}:K\cap\Omega_i\not=\emptyset\}$. Then $\Omega_i\subset\Omega_i^h=\cup\{K:K\in\mathcal{M}_i\}$. We denote $\mathcal{E}_i^{\rm side}$ the set of all sides of $\mathcal{M}_i$ interior to $\Omega_i^h$, that is, not on the boundary $\partial\Omega_i^h$. Finally, we set $\bar{\cal{E}}= \mathcal{E}^{\rm side}_{1} \cup\mathcal{E}^{\rm side}_{2}\cup\mathcal{E}^\Gamma\cup\mathcal{E}^{\rm bdy}$. For any $e\in\mathcal{E}$, we fix a unit normal vector $n_e$ of $e$ with the convention that $n_e$ is the unit outer normal to $\partial\Omega$ if $e\in\mathcal{E}^{\rm bdy}$ and $n_e$ is the unit outer normal to $\partial\Omega_1$ if $e\in\mathcal{E}^\Gamma$. For any $v\in H^1(\mathcal{M}):=\{v_1\chi_{\Omega_1}+v_2\chi_{\Omega_2}:v_i|_K\in H^1(K), K\in\mathcal{M}, i=1,2\}$, we define the jump of $v$ across $e$ as \begin{eqnarray*} [{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]_e:=v_--v_+\ \ \forall e\in\mathcal{E}^{\rm side}\cup\mathcal{E}^\Gamma,\ \ \ \ [{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]_e:=v_-\ \ \forall e\in\mathcal{E}^{\rm bdy}, \end{eqnarray*} where $v_\pm$ is the trace of $v$ on $e$ in the $\pm n_e$ direction. We define the normal vector function $n\in L^\infty(\mathcal{E})$ by $n|_e=n_e\ \ \forall e\in\mathcal{E}$. For any subset $\widehat\mathcal{M}\subset\mathcal{M}$ and $\hat\mathcal{E}\subset\bar\mathcal{E}$, we use the notation \begin{eqnarray*} (u,v)_{\widehat\mathcal{M}}:=\sum_{K\in\widehat\mathcal{M}}(u,v)_K,\ \ \langle u,v\rangle_{\hat\mathcal{E}}:=\sum_{e\subset\hat\mathcal{E}}\langle u,v\rangle_e, \end{eqnarray*} where $(u,v)_K$ is the inner product of $L^2(K)$ and $\langle u,v\rangle_e$ is the inner product of $L^2(e)$. The unfitted finite element method is based on the idea of ``doubling of unknowns" in Hansbo and Hansbo \cite{Hansbo}. We define the unfitted finite element space as \begin{eqnarray*} \mathbb{X}_p(\mathcal{M})=\{v_1\chi_{\Omega_1}+v_2\chi_{\Omega_2}:v_i|_K\in Q_p(K), K\in\mathcal{M},i=1,2\}. 
\end{eqnarray*} For any $v\in H^1(\mathcal{M})$, we denote $\nabla_hv|_K:=\nabla v_1\chi_{K_1}+\nabla v_2\chi_{K_2}$, where $\chi_{K_i}$ is the characteristic function of $K_i$, $i=1,2$. For any $v\in H^1(\mathcal{M}),g\in L^2(\partial\Omega)$, we define the liftings $\mathsf{L}(v)\in [\mathbb{X}_p(\mathcal{M})]^2$, $\mathsf{L}_1(g)\in [\mathbb{X}_p(\mathcal{M})]^2$ such that \begin{equation} (w,\mathsf{L}(v))_\mathcal{M}=\langle w^-\cdot n,[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\rangle_{\mathcal{E}},\ \ \ \ (w,\mathsf{L}_1(g))_\mathcal{M}=\langle w\cdot n,g\rangle_{\mathcal{E}^{\rm bdy}}\ \ \ \forall w\in [\mathbb{X}_p(\mathcal{M})]^2\label{ll1} \end{equation} Our unfitted finite element method is to find $U\in\mathbb{X}_p(\mathcal{M})$ such that \begin{equation}\label{a2} a_h(U,v)=F_h(v)\ \ \ \ \forall v\in\mathbb{X}_p(\mathcal{M}), \end{equation} where the bilinear form $a_h: H^1(\mathcal{M})\times H^1(\mathcal{M})\to \mathbb{R}$, and the functional $F_h:H^1(\mathcal{M})\to\mathbb{R}$ are given by \begin{align} a_h(v,w)=&(a(\nabla_h v-\mathsf{L}(v)),\nabla_h w-\mathsf{L}(w))_\mathcal{M}+\langle\alpha[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ],[{\hskip -1.5pt} [ w]{\hskip -1.5pt} ]\rangle_{\bar{\cal{E}}} +\langle p^{-2}h\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ],\nabla_T[{\hskip -1.5pt} [ w]{\hskip -1.5pt} ]\rangle_{\mathcal{E}^\Gamma},\label{a3}\\ F_h(v)=&(f,v)_\mathcal{M}-(a\mathsf{L}_1(g),\nabla_h v-\mathsf{L}(v))_\mathcal{M}+\langle\alpha g,v\rangle_{\mathcal{E}^{\rm bdy}},\label{a33} \end{align} where $\nabla_T$ is the surface gradient on $\Gamma$. For any $v=v_1\chi_{\Omega_1}+v_2\chi_{\Omega_2}, w=w_1\chi_{\Omega_1}+w_2\chi_{\Omega_2}\in H^1(\mathcal{M})$, \begin{eqnarray*} \langle\alpha[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ],[{\hskip -1.5pt} [ w]{\hskip -1.5pt} ]\rangle_{\bar{\cal{E}}}:=\sum^2_{i=1}\langle\alpha[{\hskip -1.5pt} [ v_i]{\hskip -1.5pt} ],[{\hskip -1.5pt} [ w_i]{\hskip -1.5pt} ]\rangle_{\mathcal{E}_i^{\rm side}}+\langle\alpha [{\hskip -1.5pt} [ v]{\hskip -1.5pt} ],[{\hskip -1.5pt} [ w]{\hskip -1.5pt} ]\rangle_{\mathcal{E}^\Gamma\cup\mathcal{E}^{\rm bdy}}. \end{eqnarray*} The interface penalty function $\alpha\in L^\infty(\mathcal{E})$ is \begin{equation} \alpha |_e=\alpha_0 a_e\Theta_eh_e^{-1}p^2\ \ \forall e\in\mathcal{E},\label{a34} \end{equation} where $\alpha_0>0$ is a fixed constant, $a_e=\max\{a_K:e\cap \bar K\not=\emptyset\}\ \forall e\in\mathcal{E}$, $\Theta_e=\max\{\Theta_K:e\cap \bar K\not=\emptyset\}\ \forall e\in\mathcal{E}$, and the mesh function $h|_e=(h_K+h_{K'})/2$ if $e=\partial K\cap\partial K'\in\mathcal{E}^{\rm side}$ and $h|_e=h_K$ if $e=K\cap\Gamma\in\mathcal{E}^\Gamma$ or $e=\partial K\cap\partial\Omega\in\mathcal{E}^{\rm bdy}$. We remark that our unfitted finite element method \eqref{a2} is the so-called local discontinuous Galerkin (LDG) method in Cockburn and Shu \cite{Cockburn} which is different from the interior penalty discontinuous Galerkin (IPDG) method used in \cite{Hansbo}. We choose the LDG method because the penalty constant $\alpha_0$ in \eqref{a34} can be any fixed constant, while the corresponding penalty constant in the IPDG method has to be sufficiently large to ensure the stability. We refer to Arnold et al \cite{Arnold} for a review of different DG methods for elliptic equations. Notice that the last term $\langle p^{-2}h\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ], \nabla_T[{\hskip -1.5pt} [ w]{\hskip -1.5pt} ]\rangle_{\mathcal{E}^\Gamma}$ in the bilinear form \eqref{a3} is not present in \cite{CLX}. 
It is included in this paper in order to show the discrete Poincar\'e inequality for unfitted finite element functions in Lemma \ref{lem:2.3}, which is crucial for us to study the condition number of the stiffness matrix. We also remark that $\langle p^{-2}h\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ], \nabla_T[{\hskip -1.5pt} [ w]{\hskip -1.5pt} ]\rangle_{\mathcal{E}^\Gamma}$ penalizes the tangential gradient of the finite element solution, not the normal flux of the solution as in Burman and Hansbo \cite{Burman10}, Xiao and Wu \cite{Wu}. For any $v\in H^2(\mathcal{M})$, we introduce the DG norm \begin{eqnarray*} \|v\|_{\rm DG}^2:=\|a^{1/2}\nabla v\|_\mathcal{M}^2+\|\alpha^{1/2}[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{\bar{\mathcal{E}}}^2+\|p^{-1}h^{1/2}\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{\mathcal{E}^\Gamma}^2, \end{eqnarray*} where $\|\alpha^{1/2}[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ] \|_{\bar{\mathcal{E}}}^2=\langle\alpha[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ],[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\rangle_{\bar{\mathcal{E}}}$ and $\|p^{-1}h^{1/2}\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{\mathcal{E}^\Gamma}^2=\langle p^{-2}h\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ],\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\rangle_{\mathcal{E}^\Gamma}$. By Lemma \ref{lem:2.1}, it is easy to show that \begin{eqnarray*} a_h(v,v)\le C\|v\|_{\rm DG}^2\ \ \ \forall v\in \mathbb{X}_p(\mathcal{M}). \end{eqnarray*} Moreover, by \cite[Theorem 2.1]{CLX} we know that \begin{eqnarray*} a_h(v,v)\ge c_{\rm stab}\|v\|_{\rm DG}^2\ \ \ \forall v\in\mathbb{X}_p(\mathcal{M}), \end{eqnarray*} where $c_{\rm stab}>0$ is a constant independent of the mesh sizes, $p$, and the interface deviations $\eta_K$ for all $K\in\mathcal{M}^\Gamma$. \begin{thm}\label{thm:2.1} Let $u\in H^k(\Omega_1\cup\Omega_2)$, $k\ge 2$, be the solution of the problem \eqref{m1}-\eqref{m2} and let $U\in\mathbb{X}_p(\mathcal{M})$ be the solution of the problem \eqref{a2}. Then we have \begin{eqnarray*} \|u-U\|_{\rm DG}\le C\Theta^{1/2}\frac{h^{\min(p+1,k)-1}}{p^{k-3/2}}\|u\|_{H^k(\Omega_1\cup\Omega_2)}, \end{eqnarray*} where $h=\max_{K\in\mathcal{M}}h_K$, $\Theta=\max_{K\in\mathcal{M}}\Theta_K$, and the constant $C$ is independent of the mesh sizes, $p$, and the interface deviations $\eta_K$ for all $K\in\mathcal{M}^\Gamma$. \end{thm} \begin{proof} For the sake of completeness, we sketch a proof by using the argument in, e.g., Perugia and Sch\"otzau \cite{Perugia} and Wu and Xiao \cite{Wu}. For $i=1,2$, let $\tilde u_i\in H^k(\mathbb{R}^2)$ be the Stein extension (cf., e.g., Adams and Fournier \cite[Theorem 5.14]{Adams}) of $u_i=u|_{\Omega_i}\in H^k(\Omega_i)$, which is available for any Lipschitz domain, such that $\|\tilde u_i\|_{H^k(\mathbb{R}^2)}\le C\|u_i\|_{H^k(\Omega_i)}$. Let $u_I=I_{hp}(\tilde u_1)\chi_{\Omega_1}+I_{hp}(\tilde u_2)\chi_{\Omega_2}$, where $I_{hp}:H^1(\mathcal{M})\to\mathbb{V}_p(\mathcal{M})=\Pi_{K\in\mathcal{M}}Q_p(K)$ is the interpolation operator defined in Babu\v{s}ka and Suri \cite[Lemma 4.5]{Babuska87b}. For any $K\in\mathcal{M}$ and any $0\le j\le k$, it satisfies \begin{equation}\label{a8} \|v-I_{hp}(v)\|_{H^j(K)}\le C\frac{h_K^{\min(p+1,k)-j}}{p^{k-j}}\|v\|_{H^k(K)}\ \ \forall v\in H^k(K), \end{equation} where the constant $C$ is independent of $h_K,p$, but may depend on $k$. By the multiplicative trace inequality, we have \begin{eqnarray*} \|w\|_{L^2(\partial K)}\le Ch_K^{-1/2}\|w\|_{L^2(K)}+C\|w\|_{L^2(K)}^{1/2}\|\nabla w\|_{L^2(K)}^{1/2}\ \ \forall w\in H^1(K). 
\end{eqnarray*} For any $K\in\mathcal{M}^\Gamma$, by Xiao et al \cite[Lemma 3.1]{Wang} and \cite[Lemma 2.6]{CLX}, we have that for $i=1,2$, \begin{equation}\label{a10} \|w\|_{L^2(\Gamma_K)}\le C\|w\|_{L^2(K_i)}^{1/2}\|\nabla w\|_{L^2(K_i)}^{1/2}+\|w\|_{L^2({{\partial K_i\backslash\bar\Gamma_K}})}\ \ \forall w\in H^1(K). \end{equation} Thus we obtain by using \eqref{a8} that for any $K\in\mathcal{M}$, $j=0,1$, \begin{eqnarray*} \|w-I_{hp}(w)\|_{H^j(\partial K_i)}\le C\frac{h^{\min(p+1,k)-j-1/2}}{p^{k-j-1/2}}\|w\|_{H^k(K)}\ \ \forall w\in H^k(K). \end{eqnarray*} This easily implies that \begin{equation}\label{a9} \|u-u_I\|_{\rm DG}\le C\Theta^{1/2}\frac{h^{\min(p+1,k)-1}}{p^{k-3/2}}\|u\|_{H^k(\Omega_1\cup\Omega_2)}. \end{equation} On the other hand, since $a_h(u,v)=F_h(v)\ \ \forall v\in\mathbb{X}_p(\mathcal{M})$, we use \eqref{a2} to conclude that \begin{eqnarray*} \|u_I-U\|_{\rm DG}^2\le c_{\rm stab}^{-1}a_h(u_I-U,u_I-U)&=&c_{\rm stab}^{-1}a_h(u_I-u,u_I-U)\\ &\le&C\|u_I-u\|_{\rm DG}\|u_I-U\|_{\rm DG}. \end{eqnarray*} This completes the proof by \eqref{a9} and the triangle inequality. \end{proof} To conclude this section, we remark that the same a posteriori error estimate in \cite[Theorem 3.1]{CLX} also holds for the solution $U\in\mathbb{X}_p(\mathcal{M})$ of \eqref{a2}. Here we omit the details. \section{The merging algorithm}\label{sec_algo_1} In this section, we construct a merging algorithm for the admissible chain of interface elements so that each small interface element in the chain is included in some macro-element which is a large element. We first introduce the concept of the admissible chain in \S 3.1 and five types of patterns for merging small interface elements with their surrounding elements in \S 3.2. We propose our merging algorithm and prove its reliability in \S 3.3. \subsection{The admissible chain of interface elements} A chain of interface elements $\mathfrak{C}=\{G_1\rightarrow G_2 \rightarrow\cdots \rightarrow G_n\}$ consists, in order, of $n$ interface elements $G_i\in\mathcal{T}^\Gamma$, $i=1,\cdots,n$, such that $\bar\Gamma_{G_i}\cup\bar\Gamma_{G_{i+1}}$ is a continuous curve, $1\le i\le n-1$. We call $n$ the length of $\mathfrak{C}$ and denote $\mathfrak{C}\{i\}=G_i$, $i=1,\cdots,n$. For any element $K\in\mathcal{T}$, we call $N(K)\in\mathcal{T}$ a neighboring element of $K$ if $K$ and $N(K)$ share a common side, and $D(K)\in\mathcal{T}$ a diagonal element of $K$ if $K$ and $D(K)$ share only one common vertex. Set $\mathcal{S}(K)_0=\{K\}$, and for $j\ge 1$, denote $\mathcal{S}(K)_j=\{K''\in\mathcal{T}:\exists\,K'\in\mathcal{S}(K)_{j-1}\ \mbox{such that }\bar K''\cap\bar K'\not=\emptyset\}$, that is, $\mathcal{S}(K)_j$ is the set of all elements in the layers $0$ through $j$ surrounding $K$. Obviously, $\mathcal{S}(K)_0\subset\mathcal{S}(K)_1\subset\cdots\subset\mathcal{S}(K)_j$ for any $ j\ge 1$. \begin{Def}\label{def:3.1} A chain of interface elements $\mathfrak{C}$ is called admissible if the following rules are satisfied. \begin{description} \item[$1.$] For any $K\in\mathfrak{C}$, all elements in $\mathcal{S}(K)_2$ have the same size as that of $K$. \item[$2.$] If $K\in\mathfrak{C}$ has a side $e$ such that $\bar e\subset \Omega_i$, then $e$ must be a side of some neighboring element $N(K)\subset\Omega_i$, $i=1,2$. \item[$3.$] Any element $K\in\mathcal{T}\backslash\mathcal{T}^\Gamma$ can neighbor at most two elements in $\mathfrak{C}$. 
\item[$4.$] For any $K\subset\Omega_i$, the interface elements in $\mathcal{S}(K)_2$ must be connected in the sense that the interior of the closed set $\cup\{\bar G: G\in\mathcal{S}(K)_2\cap\mathcal{E}^\Gamma\}$ is a connected domain. \end{description} \end{Def} \bigskip \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{./figs_new/not_allowed_12.pdf} \caption{The patch of elements not allowed by Rule $2$ (left) and Rule $3$ (right) in Definition \ref{def:3.1}. }\label{fig:3.2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{./figs_new/exception12.pdf} \caption{The patch of elements not allowed by Rule $4$ in Definition \ref{def:3.1}. }\label{fig:3.3} \end{figure} We remark that the four rules of the admissible chains can be easily satisfied if the mesh is well refined near the interface. The purpose of Rules $2$ and $3$ is to exclude the situations illustrated in Fig.\ref{fig:3.2}, in which refinements are required to resolve the geometry of the interface. By Rule $4$, the two cases illustrated in Fig.\ref{fig:3.3} are not allowed since the interface elements in $\mathcal{S}(K)_2$ are not connected, where $K$ is the dark element. \subsection{The patterns} {{Since the interface intersects the boundary of $K$ twice at different sides (including the end points)}}, the interface intersects any element only in four possible ways as shown in Fig.\ref{fig:3.1}. We denote $\mathcal{T}_1$ the set of interface elements shown in Fig.\ref{fig:3.1}(a), $\mathcal{T}_2$ the set of interface elements shown in Fig.\ref{fig:3.1}(b) and (c), and $\mathcal{T}_3$ the set of interface elements shown in Fig.\ref{fig:3.1}(d). By Definition \ref{def:2.1}, each element in $\mathcal{T}_3$ is a large element. Thus we only need to consider the merging of type $\mathcal{T}_1$ and $\mathcal{T}_2$ elements. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{./figs_new/abcd.pdf} \caption{Different types of interface elements. The type 2 elements include elements illustrated in (b) and (c). }\label{fig:3.1} \end{figure} A pattern is a set of interface elements and their neighboring and diagonal elements whose union consists of a macro-element. We introduce five types of patterns according to the combination of different types of interface elements, which will be used in our merging algorithm for the admissible chain of interface elements. In the following, for any $K\in \mathcal{T}$, $h_i(K)$ stands for its length of the side of $K$ which is parallel to the $x_i$-axis, $i=1,2$. \bigskip {\bf{Pattern 1}}: $K\in\mathcal{T}_1$ has two neighboring elements $N(K)_1,N(K)_2\in \mathcal{T}_2$, see Fig.\ref{pattern1}. $e_1$ and $e_2$ are respectively the thick part of the sides of $N(K)_1$ and $N(K)_2$ in the figure. We use Algorithm 1 to obtain the macro-elements $M(K)$, $M(N(K)_1)$, and $M(N(K)_2)$. Here for any closed set $T\subset\mathbb{R}^2$, $T^\circ$ stands for the interior of $T$. 
\medskip \noindent\rule{\textwidth}{0.35mm} \noindent{\bf{Algorithm 1:}} Pattern 1 \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} {\bf{Input:}} $(N(K)_1,K,N(K)_2)$ {\bf {Output:}} $(M(N(K)_1),M(K),M(N(K)_2))$ {\bf {if}} $K$, $N(K)_1$, and $N(K)_2$ are large elements {\bf{then}} $\quad$ $M(N(K)_1)=N(K)_1$, $M(K)=K$, $M(N(K)_2)=N(K)_2$; {\bf {else}} $\quad$ $\quad$ {\bf {if}} $|e_1|/h_2(K) \ge 2\delta_0$ and $|e_2|/h_1(K) < 2\delta_0$ {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)})^\circ$; $\quad$ $\quad$ {\bf {else if}} {$|e_1|/h_2(K) \ge 2\delta_0$ and $|e_2|/h_1(K) < 2\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)}\cup \overline{G}_4\cup \overline{G}_5)^\circ$; $\quad$ $\quad$ {\bf {else if}} {$|e_1|/h_2(K) < 2\delta_0$ and $|e_2|/h_1(K) \ge 2\delta_0$ } {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)}\cup \overline{G}_1\cup \overline{G}_2)^\circ$; $\quad$ $\quad$ {\bf {else if}} {$|e_1|/h_2(K) < 2\delta_0$ and $|e_2|/h_1(K) < 2\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)}\cup (\cup^5_{j=1}\overline{G}_j))^\circ$. $\quad$ $\quad$ {\bf {end}} {\bf {end}} \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{./figs_new/pattern12.pdf} \caption{Illustration of type 1 (left) and type 2 (right) patterns.}\label{pattern1} \end{figure} \begin{lem}\label{lem:3.1} Let $\delta_0\in(0,1/3\,]$. The macro-elements $M(K)$, $M(N(K)_1)$, $M(N(K)_2)$ of the output of Algorithm 1 are large elements. \end{lem} \begin{proof} We only prove $M(K)$ is a large element when $|e_1|/h_2(K) < 2\delta_0$ and $|e_2|/h_1(K) < 2\delta_0$. The other cases can be proved analogously. Since $\delta_0\in(0,1/3\,]$, we have \begin{align*} \frac{|e_1|+h_2(K)}{3\,h_2(K)}\ge \frac 13\ge\delta_0,\ \ \frac{2h_2(K)-|e_1|}{3\,h_2(K)}\ge \frac 13\ge\delta_0. \end{align*} Similar inequalities hold for $|e_2|$. Thus $|e\cap \Omega_i|\ge \delta_0|e|$ for each side $e$ of $M(K)$ having nonempty intersection with $\Omega_i$, $i=1,2$. This implies that $M(K)$ is a large element. \end{proof} {\bf{Pattern 2}}: $K\in \mathcal{T}_1$ has two neighboring elements $N(K)_1,N(K)_2\in \mathcal{T}_1$, see Fig.\ref{pattern1}. $e_1$ and $e_2$ are respectively the thick part of the side of $N(K)_1$ and $N(K)_2$ in the figure. We use Algorithm 2 to obtain $M(K)$, $M(N(K)_1)$, and $M(N(K)_2)$. 
\begin{figure} \centering \includegraphics[width=\textwidth]{./figs_new/pattern345.pdf} \caption{Illustration of type 3 (left), type 4 (middle) and type 5 (right) patterns.}\label{pattern3} \end{figure} \medskip \noindent\rule{\textwidth}{0.35mm} \noindent{\bf{Algorithm 2:}} Pattern 2 \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} {\bf{Input:}} $(N(K)_1,K,N(K)_2)$ {\bf {Output:}} $(M(N(K)_1),M(K),M(N(K)_2))$ {\bf {if}} $K,N(K)_1$, and $N(K)_2$ are large elements {\bf{then}} $\quad$ let $M(N(K)_1)=N(K)_1$, $M(K)=K$, $M(N(K)_2)=N(K)_2$; {\bf {else}} $\quad$ $\quad$ {\bf {if}} $|e_1|/h_2(K) \ge 2\delta_0$ and $|e_2|/h_1(K) \ge 2\delta_0$ {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)})^\circ$; $\quad$ $\quad$ {\bf {else if}} {$|e_1|/h_2(K) \ge 2\delta_0$ and $|e_2|/h_1(K) < 2\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)}\cup \overline{G}_4\cup \overline{G}_5)^\circ$; $\quad$ $\quad$ {\bf {else if}} {$|e_1|/h_2(K) < 2\delta_0$ and $|e_2|/h_1(K) \ge 2\delta_0$ } {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)}\cup \overline{G}_1\cup \overline{G}_2)^\circ$; $\quad$ $\quad$ {\bf {else if}} {$|e_1|/h_2(K) < 2\delta_0$ and $|e_2|/h_1(K) < 2\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K)_1)=M(N(K)_2)=(\overline{K}\cup \overline{N(K)}_1 \cup \overline{N(K)}_2 \cup \overline{{D}(K)}\cup(\cup^5_{j=1} \overline{G}_j))^\circ$. $\quad$ $\quad$ {\bf {end}} {\bf {end}} \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} \bigskip {\bf{Pattern 3}}: $K\in \mathcal{T}_1$ has one neighboring element $N(K)\in \mathcal{T}_1$, see Fig. \ref{pattern3}. $e_1$ and $e_2$ are respectively the thick part of the side of $K$ and $N(K)$ in the figure. We use Algorithm 3 to obtain $M(K)$, $M(N(K))$. \medskip \noindent\rule{\textwidth}{0.35mm} \noindent{\bf{Algorithm 3:}} Pattern 3 \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} {\bf Input:} {$(K,N(K))$} {\bf Output:} {$(M(K),M(N(K)))$ } {\bf if}{ $K$ and $N(K)$ are both large elements} {\bf{then}} $\quad$ let $M(K)=K$, $M(N(K))=N(K)$\; {\bf {else}} $\quad$ $\quad$ {\bf {if}} $|e_1|/h_1(K) \ge 2\delta_0$ and $|e_2|/h_1(K) \ge 2\delta_0$ {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K))=(\overline{K}\cup \overline{N(K)})^\circ$; $\quad$ $\quad$ {\bf {else if}} $|e_1|/h_1(K) \ge 3\delta_0$ {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K))=(\overline{K}\cup \overline{N(K)}\cup \overline{G}_1)^\circ$; $\quad$ $\quad$ {\bf {else if}} $|e_2|/h_1(K) \ge 3\delta_0$ {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K))=(\overline{K}\cup \overline{N(K)}\cup \overline{G}_2)^\circ$; $\quad$ $\quad$ {\bf {else}} $\quad$ $\quad$ $\quad$ let $M(K)=M(N(K))=(\overline{K}\cup \overline{N(K)}\cup \overline{G}_1\cup \overline{G}_2)^\circ$. $\quad$ $\quad$ {\bf {end}} {\bf {end}} \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} \bigskip {\bf{Pattern 4}}: $K\in \mathcal{T}_2$ has two neighboring elements $N(K)_1,N(K)_2\in \mathcal{T}_1$, see Fig. \ref{pattern3}. $e_1$ and $e_2$ are respectively the thick part of the side of $N(K)_1$ and $N(K)_2$ in the figure. We use Algorithm 4 to obtain $M(K)$, $M(N(K)_1)$, $M(N(K)_2)$. 
\medskip \noindent\rule{\textwidth}{0.35mm} \noindent{\bf{Algorithm 4:}} Pattern 4 \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} {\bf{Input:}}{ $(N(K)_1,K,N(K)_2)$} {\bf{Output:}}{ $(M(N(K)_1),M(K),M(N(K)_2))$ } {\bf {if}}{ $K$, $N(K)_1$, and $N(K)_2$ are all large elements} {\bf then} $\quad$ {let $M(K)=K$, $M(N(K)_1)=N(K)_1$, $M(N(K)_2)=N(K)_2$; {\bf {else}} $\quad$ $\quad$ {\bf{if}}{ $|e_1|/h_1(K) \ge 3\delta_0$ and $|e_2|/h_1(K) \ge 3\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(N(K)_1)=M(K)=M(N(K)_2)=(\overline{N(K)}_1\cup \overline{K} \cup \overline{N(K)}_2)^\circ$; $\quad$ $\quad$ {\bf{else if}} {$|e_2|/h_1(K) \ge 4\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(N(K)_1)=M(K)=M(N(K)_2)=(\overline{N(K)}_1\cup \overline{K} \cup \overline{N(K)}_2\cup\overline{G}_1)^\circ$; $\quad$ $\quad$ {\bf{else if}} {$|e_1|/h_1(K) \ge 4\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(N(K)_1)=M(K)=M(N(K)_2)=(\overline{N(K)}_1\cup \overline{K} \cup \overline{N(K)}_2\cup\overline{G}_2)^\circ$; $\quad$ $\quad$ {\bf{else}} $\quad$ $\quad$ $\quad$ let $M(N(K)_1)=M(K)=M(N(K)_2)=(\overline{N(K)}_1\cup \overline{K} \cup \overline{N(K)}_2\cup\overline{G}_1\cup \overline{G}_2)^\circ$. $\quad$ $\quad$ {\bf{end}} {\bf{end}} \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} \bigskip {\bf{Pattern 5}}: $K\in \mathcal{T}_2$, see Figure \ref{pattern3}. $e_1$ and $e_2$ are respectively the thick part of the sides of $K$ in the figure. We use Algorithm 5 to obtain $M(K)$. \medskip \noindent\rule{\textwidth}{0.35mm} \noindent{\bf{Algorithm 5:}} Pattern 5 \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} {\bf{Input:}}{ $K$} {\bf{Output:}}{ $M(K)$ } {\bf {if}}{ $K$ is a large element} {\bf then} $\quad$ {let $M(K)=K$; {\bf {else}} $\quad$ $\quad$ {\bf{if}}{ $|e_1|/h_1(K) <1- 2\delta_0$ and $|e_2|/h_1(K) <1- 2\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=(\overline{K}\cup \overline{G}_1)^\circ$; $\quad$ $\quad$ {\bf{else if}} {$|e_1|/h_1(K) \ge 2\delta_0$ and $|e_2|/h_1(K) \ge 2\delta_0$} {\bf{then}} $\quad$ $\quad$ $\quad$ let $M(K)=(\overline{K}\cup \overline{G}_2)^\circ$; $\quad$ $\quad$ {\bf{else}} $\quad$ $\quad$ $\quad$ let $M(K)=(\overline{K}\cup \overline{G}_1\cup \overline{G}_2)^\circ$. $\quad$ $\quad$ {\bf{end}} {\bf{end}} \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} \bigskip The following lemma can be proved by the same argument as that in Lemma \ref{lem:3.1}. Here we omit the details. \begin{lem}\label{lem:3.2} The output macro-elements of Algorithm 2, Algorithm 3, Algorithm 4, and Algorithm 5 are large elements if $\delta_0\in(0,1/3\,]$, $\delta_0\in(0,1/4\,]$, $\delta_0\in(0,1/5\,]$, and $\delta_0\in(0,1/3\,]$, respectively. \end{lem} To conclude this subsection, we make the following observations which can be easily checked from the construction of the patterns. \begin{rem}\label{rem0} Only elements in $\{\mathcal{S}(K)_2:K\in\mathcal{T}^\Gamma\}$ can be possibly merged with small interface elements. The elements two layers away from the interface will not be touched in the merging algorithm. \end{rem} \begin{rem}\label{rem1} An element $G\in\mathcal{T}_2$ is merged with some element $K\in\mathcal{T}_1$ if and only if there exists an element $G'\in\mathcal{T}_2$ such that $G,K,G'$ form a pattern of type 1 or there exists an element $G'\in\mathcal{T}_1$ such that $G,K,G'$ form a pattern of type 4. 
\end{rem} \begin{rem}\label{rem2} If an element $G\subset\Omega_i$, $i=1,2$, is merged with some element $K\in\mathcal{T}_1$ such that $K$ and $G$ have only one common vertex, then $G$, $K$, and two neighboring elements of $K$ are in the same pattern of type 1 or type 2. \end{rem} \subsection{The merging algorithm} Let $\mathfrak{C}$ be an admissible chain of interface elements. The following algorithm constructs a locally induced mesh from $\mathfrak{C}$ which consists of the large interface elements of $\mathfrak{C}$ and macro-elements containing all small elements of $\mathfrak{C}$, so that the elements in the induced mesh are all large elements. \noindent\rule{\textwidth}{0.35mm} \noindent{\bf{Algorithm 6:}} The merging algorithm for the admissible chain of interface elements \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} {\bf{Input:}} {The admissible chain $\mathfrak{C}$} {\bf{Output:}} {The induced mesh ${\rm Induced}(\mathfrak{C})$} $1^\circ$ Find all subchains $\mathfrak{S}$ of length $n\ge 2$ of $\mathfrak{C}$ such that $\mathfrak{S}\{i\}\in \mathcal{T}_1$, $i=1,\ldots, n$; {\bf{if}} {$n=2k+1$ is odd} {\bf{then}} $\quad${\bf{for}} $i=1,2,\ldots,k-1$ {\bf{do}} $\quad$ $\quad$ call the Algorithm 3 with the input $(\mathfrak{S}\{2i-1\},\mathfrak{S}\{2i\})$; $\quad${\bf{end}} call the Algorithm 2 with the input $(\mathfrak{S}\{2k-1\},\mathfrak{S}\{2k\},\mathfrak{S}\{2k+1\})$ {\bf{else if}} {$n=2k$ is even} {\bf{then}} $\quad$ {\bf{for}} $i=1,2,\ldots,k$ {\bf{do}} $\quad$ $\quad$ call the Algorithm 3 with the input $(\mathfrak{S}\{2i-1\},\mathfrak{S}\{2i\})$; $\quad${\bf{end}} {\bf{end}} $2^\circ$ Find all subchains $\mathfrak{S}$ of length $n=3$ in the remaining interface elements such that $\mathfrak{S}\{1\}\in \mathcal{T}_1$, $\mathfrak{S}\{2\}\in \mathcal{T}_2$, $\mathfrak{S}\{3\}\in \mathcal{T}_1$; call the Algorithm 4 with the input $(\mathfrak{S}\{1\},\mathfrak{S}\{2\},\mathfrak{S}\{3\})$; $3^\circ$ Find all subchains $\mathfrak{S}$ of length $n=3$ in the remaining interface elements such that $\mathfrak{S}\{1\}\in \mathcal{T}_2$, $\mathfrak{S}\{2\}\in \mathcal{T}_1$, $\mathfrak{S}\{3\}\in \mathcal{T}_2$; call the Algorithm 1 with the input $(\mathfrak{S}\{1\},\mathfrak{S}\{2\},\mathfrak{S}\{3\})$; $4^\circ$ Find all elements $K\in \mathcal{T}_2$ in the remaining interface elements; {call the Algorithm 5 with the input $K$.} \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} \bigskip {{ Figure \ref{fig:illustration} illustrates each step in Algorithm 6 starting from an admissible chain of interface elements. The black thin lines represent the boundaries of the elements. We remove the lines which are shared by adjacent elements in steps $1^\circ$ to $4^\circ$, meaning that two adjacent elements have been merged into the same macro-element. }} We attach to any chain of interface elements $\mathfrak{C}$ of length $n\ge 1$ an accompanying chain $\mathfrak{N}(\mathfrak{C})=\{N_1\to N_2\to\cdots\to N_n\}$ with $N_i=1$ or $2$ according to $\mathfrak{C}\{i\}\in\mathcal{T}_1$ or $\mathcal{T}_2$, $i=1,\cdots,n$. The following theorem shows the reliability of the merging algorithm. 
\begin{figure} \centering \subfigure[initial mesh]{ \includegraphics[width=0.3\textwidth]{./figs_new/illustration_merged_step0.pdf} } \subfigure[step 1]{ \includegraphics[width=0.3\textwidth]{./figs_new/illustration_merged_step1.pdf} } \subfigure[step 2]{ \includegraphics[width=0.3\textwidth]{./figs_new/illustration_merged_step2.pdf} } \subfigure[step 3]{ \includegraphics[width=0.3\textwidth]{./figs_new/illustration_merged_step3.pdf} } \subfigure[step 4]{ \includegraphics[width=0.3\textwidth]{./figs_new/illustration_merged_step4.pdf} }\\ \includegraphics[width=0.8\textwidth]{./figs_new/illustration_merged_legend.pdf} \caption{Illustration of the merging algorithm of the admissible chain of interface elements}\label{fig:illustration} \end{figure} \begin{thm}\label{thm:3.1} Let $\delta_0\in(0,1/5\,]$. For any admissible chain of interface elements $\mathfrak{C}$ { {with length $n\geq2$, if $\mathfrak{C}(1), \mathfrak{C}(n)\in\mathcal{T}_2$ or $\mathfrak{C}(1)=\mathfrak{C}(n)$, then Algorithm 6 terminates in finite number of steps with input $\mathfrak{C}$.}} All elements of the locally induced mesh ${\rm Induced}(\mathfrak{C})$ are large elements. \end{thm} \begin{proof} By the step $1^\circ$ of the algorithm, any two consecutive elements of type $\mathcal{T}_1$ are merged. Thus in the remaining elements of the chain, the type $\mathcal{T}_1$ elements must be interlaced if they are present. The step $2^\circ$ merges all remaining elements in the chain which consists of a subchain of length 3 of the type $1\to 2\to 1$. The remaining type $\mathcal{T}_1$ elements in the chain of length $3$ can appear only in the form $2\to 1\to 2$ which are merged by the step $3^\circ$. {{Thus the first three steps of the algorithm merge all elements in $\mathcal{T}_1$. Here we have used the assumption that the first and last elements in $\mathfrak{C}$ both belong to $\mathcal{T}_2$ or the first and last elements are the same interface elements.}} The left type $\mathcal{T}_2$ elements are treated in the step $4^\circ$ of the algorithm. The elements in $\mathcal{T}_3$ are all large elements and thus need not be merged. This shows that Algorithm 6 will merge all interface elements in the chain to output a locally induced mesh ${\rm Induced}(\mathfrak{C})$ which consists of the large elements of $\mathfrak{C}$ and the macro-elements containing all small elements of the chain $\mathfrak{C}$. By Lemmas \ref{lem:3.1}-\ref{lem:3.2}, the elements in ${\rm Induced}(\mathfrak{C})$ are all large elements since $\delta_0\in (0,1/5\,]$. It remains to show that the non-interface elements of the mesh $\mathcal{T}$ will not be used twice in the merging Algorithm 6 to guarantee the success of the algorithm. Let $K\in\mathcal{T}\backslash\mathcal{T}^\Gamma$. By the construction of the patterns in \S 3.2, $K$ can only be merged with interface elements in $\mathcal{S}(K)_2$. Assume that $K\in\mathcal{T}\backslash\mathcal{T}^\Gamma$ is merged with some element $D\in\mathcal{S}(K)_2\backslash\mathcal{S}(K)_1$. Then by the construction of patterns, $D\in\mathcal{T}_1$ and there must exist elements $D',D''\in\mathcal{T}_1$ such that $D,D',D''$ form a pattern of type $2$. There are three possibilities: \begin{figure} \centering \includegraphics[width=0.25\textwidth]{./figs_new/p61.pdf} \caption{The element $K$ and $D,D',D''$ in $\mathcal{S}(K)_2\backslash\mathcal{S}(K)_1$.}\label{fig:p61} \end{figure} (1) All $D,D',D''$ are in the second layer of elements surrounding $K$, see the Fig.\ref{fig:p61}. 
In this case, we know by Rule 4 of the admissible chain that $K$ cannot be merged with elements in $\mathcal{S}(K)_2$ other than $D,D',D''$. \begin{figure} \centering \includegraphics[width=0.25\textwidth]{./figs_new/p62.pdf}\hskip 1.0cm \includegraphics[width=0.25\textwidth]{./figs_new/p63.pdf} \caption{The element $K$, $D'\in\mathcal{S}(K)_1$ diagonal to $K$, and $D,D''\in\mathcal{S}(K)_2\backslash\mathcal{S}(K)_1$.}\label{fig:p62} \end{figure} (2) $D$ neighbors an element $D'\in\mathcal{S}(K)_1$ which is diagonal to $K$. In this case, the element $G$ neighboring $D'$ can be of either type $\mathcal{T}_1$ or $\mathcal{T}_2$, see Fig.\ref{fig:p62} (left) or Fig.\ref{fig:p62} (right). Again by Rule 4, if $K$ is merged with an element in $\mathcal{S}(K)_2$ different from $D,D',D''$, then $K$ must be merged with $G$. In the case of Fig.\ref{fig:p62} (left), since $G\in\mathcal{T}_2$, the pattern that includes $K$ and $G$ must be a pattern of type $1$. This is impossible since $D\in\mathcal{T}_1$. On the other hand, in the case of Fig.\ref{fig:p62} (right), since $G\in\mathcal{T}_1$ and $G\in\mathcal{S}(K)_2\backslash\mathcal{S}(K)_1$, the element $G''$ must be of type $\mathcal{T}_1$ so that $G'',G,D'$, and $K$ are in a pattern of type 2. In this case, $G$ and $G''$ form a pattern of type 3 which will be merged by Algorithm 3 before $G$ can be merged with $K$ by Algorithm 2 in the first step of our merging algorithm. Thus the case depicted in Fig. \ref{fig:p62} (right) is also impossible. This shows that $K$ cannot be merged with elements in $\mathcal{S}(K)_2$ other than $D,D',D''$. \begin{figure} \centering \includegraphics[width=0.25\textwidth]{./figs_new/p64.pdf}\hskip 1.0cm \includegraphics[width=0.25\textwidth]{./figs_new/p65.pdf} \caption{The element $K$, $D'\in\mathcal{S}(K)_1$ neighboring $K$, and $D,D''\in\mathcal{S}(K)_2\backslash\mathcal{S}(K)_1$.}\label{fig:p63} \end{figure} (3) $D$ neighbors an element $D'\in\mathcal{S}(K)_1$ which neighbors $K$. In this case, the element $G$ neighboring $D'$ can be of either type $\mathcal{T}_2$ or type $\mathcal{T}_1$, see Fig.\ref{fig:p63} (left) or Fig.\ref{fig:p63} (right). Again by Rule 4 of the admissible chain, if $K$ is merged with an element in $\mathcal{S}(K)_2$ different from $D,D''$, then $K$ can only be merged with $G$. This is again impossible by the same argument as in (2) for the cases shown in Fig.\ref{fig:p62} (left) and Fig.\ref{fig:p62} (right). Therefore, $K$ cannot be merged with elements in $\mathcal{S}(K)_2$ other than $D,D''$. In conclusion, we have shown that any element $K\in\mathcal{T}\backslash\mathcal{T}^\Gamma$ cannot be merged into two different patterns, each of which includes an interface element in $\mathcal{S}(K)_2\backslash\mathcal{S}(K)_1$. Now we assume that $K$ is merged with interface elements in $\mathcal{S}(K)_1$ which belong to two patterns $\mathcal{P}_1$ and $\mathcal{P}_2$. By Rule 4 of the admissible chain, the elements in $\mathcal{P}_1\cup\mathcal{P}_2$ must be connected. Assume that $D_1\in\mathcal{P}_1$ and $D_2\in\mathcal{P}_2$ are connected; then one of $D_1$ and $D_2$ must be diagonal to $K$. Without loss of generality, we assume $D_1=D(K)$. If $D_1\in\mathcal{T}_2$, then by Remark \ref{rem1} the element $D_1'$ neighboring $K$ and $D_1$ must be in $\mathcal{T}_1$ so that $K,D_1',D_1$ form a pattern of type 1. Thus by Rule 2 of the admissible chain, $D_2$, as an interface element, cannot neighbor $D_1$, see Fig.\ref{fig:SK1} (top left). This is a contradiction. Therefore, $D_1$ can only be of type $\mathcal{T}_1$. 
There are three possibilities illustrated in Fig.\ref{fig:SK1}. (1) In the case of Fig.\ref{fig:SK1} (top right), Rule 2 implies that $D_2$ must be in $\mathcal{T}_2$. By Remark \ref{rem1}, $K,D_1$, and $D_2$ form a pattern of type 1, which contradicts the assumption that $D_1,D_2$ belong to different patterns. (2) In the case of Fig.\ref{fig:SK1} (bottom left), Rule 2 implies that $D_2$ cannot neighbor $D_1$. (3) {{ In the case of Fig.\ref{fig:SK1} (bottom right), $D_1$ has only one common vertex with $K$. By Remark \ref{rem2}, the two neighboring elements $D_1'$ and $D_1''$ of $D_1$ must both be of type $\mathcal{T}_1$ or both be of type $\mathcal{T}_2$, and $K,D_1,D_1',D_1''$ form a pattern of type 1 or 2. If $K,D_1,D_1',D_1''$ form a pattern of type $1$, then this case belongs to (1). If $K,D_1,D_1',D_1''$ form a pattern of type 2, then $D_2$ must be equal to one of $D_1'$ or $D_1''$, which contradicts the assumption that $D_1,D_2$ are in different patterns. In conclusion, $K$ cannot be merged with two interface elements in $\mathcal{S}(K)_1$ belonging to different patterns.}} This completes the proof. \end{proof} \begin{figure}[h] \centering \begin{tabular}{ccc} \adjustbox{valign=t}{\includegraphics[width=.25\linewidth]{./figs_new/p11.pdf}} & $\quad$ & \adjustbox{valign=t}{\includegraphics[width=.25\linewidth]{./figs_new/p13.pdf}}\\ \\ \adjustbox{valign=b}{\includegraphics[width=.25\linewidth]{./figs_new/p12.pdf}} & $\quad$ & \adjustbox{valign=b}{\includegraphics[width=.25\linewidth]{./figs_new/p14.pdf}} \end{tabular} \caption{The element $K$ and $D_1\in\mathcal{S}(K)_1$ of type $\mathcal{T}_2$ (top left). The element $K$ and $D_1\in\mathcal{S}(K)_1$ of type $\mathcal{T}_1$ in the other figures.}\label{fig:SK1} \end{figure} To conclude this section, we show that the merging Algorithm 6 leads to a reliable algorithm to automatically construct a body-fitted shape regular mesh for an arbitrarily shaped smooth interface. We start from a conforming uniform mesh $\mathcal{T}_0$ of the domain $\Omega$. We refine the interface elements of $\mathcal{T}_0$ and their surrounding elements by quad refinements to generate a Cartesian mesh $\mathcal{T}$ with hanging nodes such that all interface elements of $\mathcal{T}$ form an admissible chain $\mathfrak{C}$. This is possible because the interface $\Gamma$ is $C^2$-smooth. Now we use Algorithm 6 to obtain an induced mesh $\mathcal{M}={\rm Induced}(\mathfrak{C})$. Since each interface element $K\in\mathcal{M}^\Gamma$ is a large element, $K_i^h$, $i=1,2$, is strongly shape regular in the sense that it is the union of shape regular triangles which we denote as $T_K^{ij}$, $1\le j\le m_K$. Then the mesh \begin{eqnarray*} \widetilde{\mathcal{M}}=\{T_K^{ij}: i=1,2,\ j=1,\cdots, m_K,\ K\in\mathcal{M}^\Gamma\}\cup\{K:K\in\mathcal{M}\backslash\mathcal{M}^\Gamma\} \end{eqnarray*} is a triangular-rectangular mixed finite element mesh of the domain $\Omega$. $\{T_K^{ij}: i=1,2,\ j=1,\cdots, m_K,\ K\in\mathcal{M}^\Gamma\}$ is a body-fitted shape regular triangular mesh that covers the interface, and $\{K:K\in\mathcal{M}\backslash\mathcal{M}^\Gamma\}$ is a rectangular mesh whose elements are similar to the elements of the initial mesh $\mathcal{T}_0$. Fig.\ref{fig:new} shows a mixed mesh constructed from the unfitted finite element mesh in Fig.\ref{fig:illustration}(e). 
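Before moving on, we make the pass order of Algorithm 6 concrete with a minimal Python sketch (an illustration only, not part of the method). It processes the accompanying chain $\mathfrak{N}(\mathfrak{C})$ of $1$'s and $2$'s and records which elements would be grouped by which pattern, in the order in which steps $1^\circ$--$4^\circ$ visit them. The index bookkeeping inside the first pass is a simplified reading of step $1^\circ$, and the pattern names are placeholders for the macro-elements produced by Algorithms 1--5.
\begin{verbatim}
def plan_merges(types):
    """Group an accompanying chain (a list of 1's and 2's; type-3 elements
    are already large and omitted) following the pass order of Algorithm 6.
    Returns a list of (pattern_name, zero-based indices) tuples."""
    n = len(types)
    merged = [False] * n
    plan = []
    # Step 1: maximal runs of consecutive type-1 elements of length >= 2.
    i = 0
    while i < n:
        if types[i] == 1:
            j = i
            while j + 1 < n and types[j + 1] == 1:
                j += 1
            run = list(range(i, j + 1))
            if len(run) >= 2:
                if len(run) % 2 == 0:   # even run: pairs (Pattern 3)
                    pairs, tail = [run[k:k + 2] for k in range(0, len(run), 2)], []
                else:                   # odd run: pairs, last three (Pattern 2)
                    pairs, tail = [run[k:k + 2] for k in range(0, len(run) - 3, 2)], [run[-3:]]
                for grp in pairs:
                    plan.append(("pattern3", grp))
                for grp in tail:
                    plan.append(("pattern2", grp))
                for idx in run:
                    merged[idx] = True
            i = j + 1
        else:
            i += 1
    # Step 2: remaining subchains of type 1-2-1 (Pattern 4).
    # Step 3: remaining subchains of type 2-1-2 (Pattern 1).
    for name, triple in (("pattern4", [1, 2, 1]), ("pattern1", [2, 1, 2])):
        for i in range(n - 2):
            if not any(merged[i:i + 3]) and types[i:i + 3] == triple:
                plan.append((name, [i, i + 1, i + 2]))
                merged[i:i + 3] = [True, True, True]
    # Step 4: leftover type-2 elements (Pattern 5).
    for i in range(n):
        if not merged[i] and types[i] == 2:
            plan.append(("pattern5", [i]))
    return plan

# Example: the accompanying chain 2-1-1-2-1-2 is grouped as
# [('pattern3', [1, 2]), ('pattern1', [3, 4, 5]), ('pattern5', [0])]
print(plan_merges([2, 1, 1, 2, 1, 2]))
\end{verbatim}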
\begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{./figs_new/illustration_merged_step5.pdf} \end{center} \caption{Illustration of a mixed triangular-rectangular body-fitted shape regular finite element mesh.}\label{fig:new} \end{figure} \section{The condition number of the stiffness matrix} In this section we study the condition number of the stiffness matrix of the unfitted finite element method defined in \eqref{a2}. Since we allow the Cartesian mesh $\mathcal{T}$ having hanging nodes, which is a nonconforming mesh in the classical sense, we recall an important concept of the $K$-mesh in Babu\v{s}ka and Miller \cite{Babuska87a}. It is introduced to control the undesirable excessive local refinements so that the local mesh sizes around each vertex of the elements are comparable. This concept is further developed in Bonito and Nochetto \cite{Bonito} as the control of the local level of incompatibility of the nonconforming meshes. Let $\mathcal{N}^0$ be the set of conforming nodes of the mesh $\mathcal{T}$. A conforming node of $\mathcal{T}$ is a vertex of the elements in $\mathcal{T}$ which either locates on the boundary $\partial\Omega$ or is shared by the four elements to which it belongs. For each conforming node $P$, we define $\psi_P\in\mathbb{X}_1(\mathcal{T})\cap H^1(\Omega)$, which is bilinear in each element and satisfies $\psi_P(Q)=\delta_{PQ}$ for any $Q\in\mathcal{N}^0$. Here $\delta_{PQ}$ is the Kronecker delta. It is proved in \cite{Babuska87a} that $\{\psi_P:P\in\mathcal{N}^0\}$ consists of a basis of $\mathbb{X}_1(\mathcal{T})\cap H^1(\Omega)$ and satisfies the property of the partition of unity $\sum_{P\in\mathcal{N}^0}\psi_P=1$. In the rest of the paper, we impose the following assumption on the finite element mesh $\mathcal{T}$ which is called the $K$-mesh in \cite{Babuska87a}. \medskip {\bf Assumption (H3)} There exists a constant $C>0$ uniform on the level of discretization of $\mathcal{T}$ such that for any conforming node $P\in\mathcal{N}^0$, \begin{eqnarray*} {\rm diam}({\rm supp}(\psi_P))\le C\min_{K\in\mathcal{T}_P}h_K, \end{eqnarray*} where $\mathcal{T}_P:=\{K\in\mathcal{T},\,\,K\subset{\rm supp}(\psi_P)\}$. \medskip One can find further properties of $K$-meshes in \cite{Babuska87a}. We refer to \cite[\S 6]{Bonito} for a refinement algorithm to enforce the assumption (H3) in practical computations. The following lemma on the continuous approximation of discontinuous piecewise polynomials on $K$-meshes is proved in \cite[Lemma 3.2]{CLX}. \begin{lem}\label{lem:2.2} Let $\mathbb{V}_P(\mathcal{T})=\Pi_{K\in\mathcal{T}} Q_p(K)$. There exists an interpolation operator $\pi_h:\mathbb{V}_p(\mathcal{T})\to\mathbb{V}_p(\mathcal{T})\cap H^1(\Omega)$ such that for any $v\in\mathbb{V}_p(\mathcal{T})$, \begin{eqnarray*} & &\|v-\pi_h v\|_{L^2(K)}\le C\|p^{-1}h^{1/2}[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{L^2(\sigma(K))},\\ &&\|\nabla(v-\pi_h v)\|_{L^2(K)}\le C\|ph^{-1/2}[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{L^2(\sigma (K))}, \end{eqnarray*} where $\sigma(K)=\{e\in\mathcal{E}^{\rm side}:e\subset\widetilde\omega(K)\}$, $\widetilde\omega(K)$ is a set of elements including $K$ such that ${\rm diam}(\widetilde\omega (K))\le Ch_K$. The constant $C$ is independent of $h_K,p$. Moreover, $\pi_hv\in H^1_0(\Omega)$ if $v=0$ on $\partial\Omega$. \end{lem} Since the induced mesh $\mathcal{M}={\rm Induced}(\mathcal{T})$ is obtained by merging some of the elements of $\mathcal{T}$, $\mathbb{V}_p(\mathcal{M})\subset\mathbb{V}_p(\mathcal{T})$. 
Thus Lemma \ref{lem:2.2} is also valid for any function $v\in\mathbb{V}_p(\mathcal{M})$. We have the following discrete Poincar\'e inequality. \begin{lem}\label{lem:2.3} For any $v\in\mathbb{X}_p(\mathcal{M})$, we have $\|v\|_{L^2(\Omega)}\le C\|v\|_{\rm DG}$, where $C>0$ is a constant independent of the mesh sizes, $p$, and the interface deviations $\eta_K$ for all $K\in\mathcal{M}^\Gamma$. \end{lem} \begin{proof} Let $v=v_1\chi_{\Omega_1}+v_2\chi_{\Omega_2}\in\mathbb{X}_p(\mathcal{M})$. By Lemma \ref{lem:2.2}, for $v_i\in\mathbb{V}_p(\mathcal{M}_i)$, $i=1,2$, there exists $\pi_h v_i\in\mathbb{V}_p(\mathcal{M}_i)\cap H^1(\Omega_i^h)$ such that \begin{equation} \|v_i-\pi_h v_i\|_{\mathcal{M}_i}\le C\|p^{-1}h^{1/2}[{\hskip -1.5pt} [ v_i]{\hskip -1.5pt} ]\|_{\mathcal{E}_i^{\rm side}},\ \ \|\nabla(v_i-\pi_h v_i)\|_{\mathcal{M}_i}\le C\|ph^{-1/2}[{\hskip -1.5pt} [ v_i]{\hskip -1.5pt} ]\|_{\mathcal{E}_i^{\rm side}}. \label{a4} \end{equation} Recall that we have assumed $\bar\Omega_1\subset\Omega$. Let $w_i\in H^1(\Omega_i)$, $i=1,2$, satisfy \begin{eqnarray*} & &-\Delta w_1=0\ \ \mbox{in }\Omega_1, \ \ w_1=[{\hskip -1.5pt} [\pi_h v]{\hskip -1.5pt} ]_\Gamma\ \ \mbox{on }\Gamma=\partial\Omega_1,\\ & &-\Delta w_2=0\ \ \mbox{in }\Omega_2, \ \ w_2=0\ \ \mbox{on }\Gamma,\ \ w_2=\pi_h v_2\ \ \mbox{on }\partial\Omega. \end{eqnarray*} Then $w_i\in H^1(\Omega_i)$ satisfies $\|w_1\|_{H^1(\Omega_1)}\le C\|[{\hskip -1.5pt} [\pi_h v]{\hskip -1.5pt} ]\|_{H^{1/2}(\Gamma)}$, $\|w_2\|_{H^1(\Omega_2)}\le C\|\pi_h v\|_{H^{1/2}(\partial\Omega)}$. From the proof of \cite[Lemma 3.4]{CLX} we know that \begin{equation} \|[{\hskip -1.5pt} [\pi_h v]{\hskip -1.5pt} ]\|_{H^{1/2}(\Gamma)}^2\le C(\|ph^{-1/2}[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{\mathcal{E}^\Gamma\cup\mathcal{E}^{\rm side}_1\cup\mathcal{E}^{\rm side}_2}+\|p^{-1}h^{1/2}\nabla_T[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{\mathcal{E}^\Gamma}).\label{a5} \end{equation} Now we use a similar argument to bound $\|\pi_h v\|_{H^{1/2}(\partial\Omega)}$. By the localization lemma of the $H^{1/2}$ semi-norm in Faermann \cite[Lemma 2.3]{Faermann} and the Gagliardo-Nirenberg type estimate for the $H^{1/2}$ semi-norm, we obtain as in \cite[(3.13)]{CLX} that \begin{eqnarray} &&\|\pi_h v_2\|_{H^{1/2}(\partial\Omega)}^2\le C\sum_{K\in\mathcal{M}_2}(\|\pi_h v_2\|_{L^2(\Sigma_K)}\|\nabla_T(\pi_h v_2)\|_{L^2(\Sigma_K)}+h^{-1}_K\|\pi_h v_2\|_{L^2(\Sigma_K)}^2),\label{a6} \end{eqnarray} where $\Sigma_K=\partial K\cap\partial\Omega$. By the $hp$-inverse estimate and Lemma \ref{lem:2.2}, we obtain \begin{eqnarray*} \|\nabla_T(\pi_h v_2)\|_{L^2(\Sigma_K)}&\le&\|\nabla_T v_2\|_{L^2(\Sigma_K)}+\|\nabla_T(v_2-\pi_h v_2)\|_{L^2(\Sigma_K)}\\ &\le&Cp^2h_K^{-1}\|v_2\|_{L^2(\Sigma_K)}+Cph_K^{-1/2}\|\nabla(v_2-\pi_h v_2)\|_{L^2(K)}\\ &\le&Cp^2h_K^{-1}\|v_2\|_{L^2(\Sigma_K)}+Cph_K^{-1/2}\|ph^{-1/2}[{\hskip -1.5pt} [ v_2]{\hskip -1.5pt} ]\|_{L^2(\sigma(K))}. \end{eqnarray*} Similarly, one can prove $\|\pi_h v_2\|_{L^2(\Sigma_K)}\le\|v_2\|_{L^2(\Sigma_K)}+\|[{\hskip -1.5pt} [ v_2]{\hskip -1.5pt} ]\|_{L^2(\sigma(K))}$. Recall that $[{\hskip -1.5pt} [ v_2]{\hskip -1.5pt} ]=v_2$ on $\partial\Omega$. This implies by \eqref{a6} that \begin{eqnarray*} \|\pi_h v_2\|_{H^{1/2}(\partial\Omega)}\le C\|ph^{-1/2}[{\hskip -1.5pt} [ v_2]{\hskip -1.5pt} ]\|_{\mathcal{E}^{\rm bdy}\cup\mathcal{E}_2^{\rm side}}. 
\end{eqnarray*} Therefore, by combining with \eqref{a5} we have \begin{equation} \|w_1\|_{H^1(\Omega_1)}+\|w_2\|_{H^1(\Omega_2)}\le C\|v\|_{\rm DG}.\label{a7} \end{equation} Let $\pi_h^cv=(\pi_h v_1-w_1)\chi_{\Omega_1}+(\pi_h v_2-w_2)\chi_{\Omega_2}$. Then $\pi_h^cv\in H^1_0(\Omega)$ and by using Poincar\'e inequality for $\pi_h^cv$, we have \begin{eqnarray*} \|v\|_{L^2(\Omega)}&\le&\|v-\pi_h^c v\|_{L^2(\Omega)}+\|\pi_h^c v\|_{L^2(\Omega)}\\ &\le&\sum^2_{i=1}(\|v_i-\pi_h v_i\|_{\mathcal{M}_i}+\|w_i\|_{L^2(\Omega_i)})+C\|\nabla\pi_h^c v\|_{L^2(\Omega)}\\ &\le&{{\sum^2_{i=1}(\|v_i-\pi_h v_i\|_{\mathcal{M}_i}+\|w_i\|_{L^2(\Omega_i)})+C(\|\nabla_h(\pi_h^cv-v)\|_{\mathcal{M}}+\|\nabla_h v\|_{\mathcal{M}}}})\\ &\le&C{{\sum^2_{i=1}(\|v_i-\pi_h v_i\|_{H^1(\mathcal{M}_i)}+\|w_i\|_{H^1(\Omega_i)})+C\|\nabla_h v\|_\mathcal{M}.}} \end{eqnarray*} Here for $i=1,2$, $\|w\|_{H^1(\mathcal{M}_i)}^2=\|w\|_{\mathcal{M}_i}^2+\|\nabla_h w\|_{\mathcal{M}_i}^2\ \ \forall w\in H^1(\mathcal{M}_i)$. This completes the proof by using \eqref{a4} and \eqref{a7}. \end{proof} Now we consider the condition number of the stiffness matrix. We start by introducing the basis functions we use in each element. If $K\in\mathcal{M}\backslash\mathcal{M}^\Gamma$ is not an interface element, we will use a set of basis functions which are Lagrangian interpolation functions corresponding to Gauss-Lobatto points. We first recall some facts about spectral method and refer to Bernardi and Maday \cite{Bernardi} for the details. Let $I=(-1,1)$ and $\{L_i\}^p_{i=0}$ the set of Legendre polynomials of $Q_p(I)$ which is the set of polynomials of degree $p$ in $I$. Let $\{l_i\}^p_{i=0}$ be the set of Lagrangian interpolation functions in $Q_p(I)$ corresponding to the Gauss-Lobatto points $\{\xi_i\}^p_{i=0}$ which are the zeros of $(1-\xi^2)L_p'(\xi)$ in $I$. Now let $\hat K=I\times I$ and $\{(\xi_i,\xi_j):0\le i,j\le p\}$ be the Gauss-Lobatto grid of $\hat K$. Any function $\hat v\in Q_p(\hat K)$ can be written as $\hat v=\sum^p_{i,j=0}\hat v_{ij}l_i(\hat x_1)l_j(\hat x_2)$. The following important result is proved in Melenk \cite[Proposition 2.8, Theorem 4.1]{Melenk}. \begin{lem}\label{lem:4.1} There exists a constant $C$ independent of $p$ such that for any function $\hat v=\sum^p_{i,j=0}\hat v_{ij}l_i(\hat x_1)l_j(\hat x_2)$, there holds \begin{align*} & C^{-1}p^{-2}\sum^p_{i,j=0}\hat v^2_{ij}\le\|\hat v\|_{H^1(\hat K)}^2\le Cp\sum^p_{i,j=0}\hat v_{ij}^2, \end{align*} and \begin{align*} & \|\hat v\|^2_{L^2(\partial \hat K)}\le Cp^{-1}\left(\sum_{i=0,p}\sum^p_{j=0}\hat v_{ij}^2+\sum_{j=0,p}\sum_{i=0}^p\hat v_{ij}^2\right). \end{align*} \end{lem} For any $K\in\mathcal{M}$, let $F_K:\hat K\to K$ be the one-to-one and surjective affine mapping. Denote $\phi_K^{ij}=\hat\phi_{ij}\circ F_K^{-1}$, where $\hat\phi_{ij}=l_i(\hat x_1)l_j(\hat x_2)$, $0\le i,j\le p$. For any $v\in Q_p(K), v=\sum^p_{i,j=0} v_K^{ij}\phi_K^{ij}$, we have by Lemma \ref{lem:4.1} and the standard scaling argument that \begin{align} &C^{-1}p^{-2}\|\boldsymbol{V}_K\|_{\ell_2}^2\le\|\nabla v\|_{L^2(K)}^2+h_K^{-2}\|v\|_{L^2(K)}^2\le Cp\|\boldsymbol{V}_K\|_{\ell_2}^2,\label{b7}\\ &\|v\|_{L^2(\partial K)}^2\le Cp^{-1}h_K\|\boldsymbol{V}_K\|_{\ell_2}^2,\label{b8} \end{align} where $\boldsymbol{V}_K=(\boldsymbol{v}_0^T,\cdots,\boldsymbol{v}^T_p)^T$, $\boldsymbol{v}_i=(v_{i_0},\cdots,v_{i_p})^T$ is the coefficient vector corresponding to $v\in Q_p(K)$. For the interface element $K\in\mathcal{M}^\Gamma$, we have $K_1^{h-\delta_K}\subset K_1$ and $K_2^{h-\delta_K}\subset K_2$. 
We also have $\hat K_1^{h-\delta_K}\subset \hat K_1$ and $\hat K_2^{h-\delta_K}\subset \hat K_2$, where $\hat K_i^{h-\delta_K}=F_K^{-1}(K_i^{h-\delta_K})$, $\hat K_i=F_K^{-1}(K_i)$, $i=1,2$. Let $\{\hat\psi^j_{\hat K^h_i}\}^{(p+1)^2}_{j=1}$ the $L^2$-orthonormal basis of $Q_p(\hat K^{h-\delta_K}_i)$, that is, $(\hat\psi^j_{\hat K_i^h},\hat\psi^k_{\hat K_i^h})_{\hat K_i^{h-\delta_K}}=\delta_{kj}$. Denote by $\psi^j_{K_i^h}=p^{-3/2}(\hat\psi^j_{\hat K^h_i}\circ F_K^{-1})$. Then $\{\psi^j_{K_i^{h}}\}^{(p+1)^2}_{j=1}$ is an $L^2$-orthogonal basis of $Q_p(K_i^{h-\delta_K})$, that is, \begin{equation}\label{b4} (\psi^j_{K_i^h},\psi^k_{K_i^h})_{K_i^{h-\delta_K}}=p^{-3}\frac{|K|}{|\hat K|}\,\delta_{jk}. \end{equation} The scaling constant $p^{-3}$ in \eqref{b4} is important for us to balance the contribution of different basis functions used in interface and non-interface elements in the estimation of the condition number of the stiffness matrix. Now for any $v\in\mathbb{X}_p(\mathcal{M})$, $K\in\mathcal{M}^\Gamma$, \begin{equation}\label{yy1} v|_K=\sum^{(p+1)^2}_{j=1}(v_{K_1}^j\psi^j_{K_1^h}\chi_{K_1}+v^j_{K_2}\psi^j_{K_2^h}\chi_{K_2}):=v_1\chi_{K_1}+v_2\chi_{K_2}. \end{equation} Let $\boldsymbol{V}_K=(v^1_{K_1},\cdots,v^{(p+1)^2}_{K_1},v^1_{K_2},\cdots,v_{K_2}^{(p+1)^2})^T$ the coefficient vector corresponding to $v$, then by \eqref{b4} we have \begin{equation}\label{b5} \|v_1\|_{L^2(K_1^{h-\delta_K})}^2+\|v_2\|_{L^2(K_2^{h-\delta_K})}^2=p^{-3}\frac{|K|}{|\hat K|}\,\|\boldsymbol{V}_K\|_{\ell_2}^2. \end{equation} By Lemma \ref{lem:new} we obtain \begin{equation}\label{b6} Cp^{-3}h_K^2\|\boldsymbol{V}_K\|_{\ell_2}^2\le \|v\|_{L^2(K)}^2\le C\Theta_{K}p^{-3}h_K^2\|\boldsymbol{V}_K\|_{\ell_2}^2\ \ \ \forall K\in\mathcal{M}^\Gamma. \end{equation} Now, by the construction, any function $v\in\mathbb{X}_p(\mathcal{M})$ can be written as \begin{equation}\label{c1} v=\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma}\sum^p_{i,j=0}v_K^{ij}\phi^{ij}_K+\sum_{K\in\mathcal{M}^\Gamma}\sum^{(p+1)^2}_{j=1}(v_{K_1}^j\psi^j_{K_1^h}\chi_{K_1}+v^j_{K_2}\psi^j_{K_2^h}\chi_{K_2}). \end{equation} Let $N=\#\mathcal{M}$ be the number of elements of the mesh $\mathcal{M}$, $\{G_1,\cdots,G_{N}\}$ the elements of $\mathcal{M}$, and $\boldsymbol{V}_{G_i}$ the coefficient vector of $v|_{G_i}$, $i=1,\cdots,N$. We denote $\boldsymbol{V}=(\boldsymbol{V}_{G_1}^T,\cdots,\boldsymbol{V}_{G_N}^T)^T$ the vector of coefficients of $v$. The dimension of the vector $\boldsymbol{V}$ is $N_p=(p+1)^2N$. We write $\boldsymbol{V}=\Phi(v)$, where $\Phi:\mathbb{X}_p(\mathcal{M})\to\mathbb{R}^{N_p}$ is the mapping between functions in $\mathbb{X}_p(\mathcal{M})$ and their coefficient vectors. Let $\boldsymbol{V}=\Phi(v),\boldsymbol{W}=\Phi(w)\in\mathbb{R}^{N_p}$ for $v,w\in\mathbb{X}_p(\mathcal{M})$. Then the stiffness matrix $\mathbb{A}=(a_{ij})^{N_p}_{i,j=1}$ is defined by \begin{eqnarray*} (\mathbb{A}\boldsymbol{V},\boldsymbol{W})_{\ell_2}=a_h(v,w). \end{eqnarray*} Recall that $\Theta=\max_{K\in\mathcal{M}}\Theta_K$. The following theorem is the main result of this section. \begin{thm}\label{thm:4.1} Denote $N^\Gamma=\#\mathcal{M}^\Gamma$ the number of elements of $\mathcal{M}^\Gamma$ and $M=\min(N-N^\Gamma,N^\Gamma)$. 
Then the following bound of the condition number of the stiffness matrix holds \begin{eqnarray*} \kappa(\mathbb{A})\le C\Theta^2(1+|\ln(h_{\min}^2M)|)\left(p^3(N-N^\Gamma)+p^4N^\Gamma\right), \end{eqnarray*} where $h_{\min}=\min_{K\in\mathcal{M}}h_K$ and the constant $C>0$ is independent of the mesh sizes, $p$, and the interface deviations $\eta_K$ for all $K\in\mathcal{M}^\Gamma$. \end{thm} We note that $N-N^\Gamma$ is the number of non-interface elements. For elliptic equations, it is well-known that the condition number of the stiffness matrix of standard finite element methods grows linearly in terms of the number of elements (see, e.g., Bank and Scott \cite{Bank}). The condition number of the stiffness matrix of the $hp$ finite element method using Gauss-Lobatto shape functions is studied in \cite{Melenk}, which in particular generalizes earlier results that the condition number grows as $O(p^3)$ of the spectral method. Thus the estimate in Theorem \ref{thm:4.1} is optimal in terms of the number of elements and $p$. Our numerical results in Example 1 of section 5 show that the bound is also sharp in terms of the growth factor $\Theta^2$. \begin{proof} For any $v=v_1\chi_{\Omega_1}+v_2\chi_{\Omega_2}\in\mathbb{X}_p(\mathcal{M})$, denote $w=(\pi_h v_1)\chi_{\Omega_1}+(\pi_h v_2)\chi_{\Omega_2}$ and $\boldsymbol{W}=\Phi(w)$ the coefficient vector corresponding to $w$. By \eqref{b7}, \eqref{b6} and Lemma \ref{lem:2.1} we know that \begin{eqnarray*} \|\boldsymbol{V}-\boldsymbol{W}\|_{\ell_2}^2&\le&Cp^2\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma}(\|\nabla(v-w)\|_{L^2(K)}^2+{{h_K^{-2}}}\|v-w\|_{L^2(K)}^2) +C\sum_{K\in\mathcal{M}^\Gamma}p^3h_K^{-2}\|v-w\|_{L^2(K)}^2\\ &\le&C p^2\|ph^{-1/2}[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ]\|_{\mathcal{E}^{\rm side}_1\cup\mathcal{E}^{\rm side}_2}^2. \end{eqnarray*} Thus by the triangle inequality \begin{align} \|\boldsymbol{V}\|_{\ell_2}^2 \le C p^2\|ph^{-1/2}[{\hskip -1.5pt} [ v ]{\hskip -1.5pt} ] \|_{\mathcal{E}^{\rm side}_1\cup\mathcal{E}^{\rm side}_2}^2+2\|\boldsymbol{W}\|_{\ell_2}^2.\label{d1} \end{align} Again by \eqref{b7}, \eqref{b6} we have \begin{eqnarray*} {{\|\boldsymbol{W}\|_{\ell_2}^2\le Cp^2\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma} \big(\|\nabla w\|_{L^2(K)}^2+h_K^{-2}\|w\|_{L^2(K)}^2\big)+C\sum_{K\in\mathcal{M}^\Gamma}p^3h_K^{-2}\|w\|_{L^2(K)}^2.}} \end{eqnarray*} Now we use an argument in \cite{Bank}. By H\"older inequality, for any $r\ge 2$, \begin{eqnarray*} \sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma}h_K^{-2}\|w\|_{L^2(K)}^2 \le C\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma}h_K^{-4/r}\|w\|_{L^r(K)}^2 &\le&{{C\left(\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma} h_K^{-4/(r-2)}\right)^{\frac{r-2}r}\|w\|_{L^r(\Omega_1\cup\Omega_2)}^2}}\\ &\le&{{Ch_{\min}^{-4/r}(N-N^\Gamma)^{\frac{r-2}r}\|w\|_{L^r(\Omega_1\cup\Omega_2)}^2}}. \end{eqnarray*} Similarly, \begin{eqnarray*} {{\sum_{K\in\mathcal{M}^\Gamma}p^3h_K^{-2}\|w\|_{L^2(K)}^2\le Cp^3h_{\rm min}^{-4/r}(N^\Gamma)^{\frac{r-2}r}\|w\|_{L^r(\Omega_1\cup\Omega_2)}^2}}. \end{eqnarray*} Therefore, \begin{eqnarray*} \|\boldsymbol{W}\|_{\ell_2}^2&\le&{{Cp^2\|\nabla w\|_{\mathcal{M}\backslash\mathcal{M}^\Gamma}^2+C(p^2(N-N^\Gamma)+ p^3N^\Gamma)h_{\rm min}^{-4/r}M^{-2/r}\|w\|_{L^r(\Omega_1\cup\Omega_2)}^2}}\\ &\le&{{C(p^2(N-N^\Gamma)+ p^3N^\Gamma)(h^2_{\rm min}M)^{-2/r}r\|w\|_{H^1(\Omega_1\cup\Omega_2)}^2}}, \end{eqnarray*} where we have used the embedding inequality, $\|w\|_{L^r(D)}\le Cr^{1/2}\|w\|_{H^1(D)}$ for any $w\in H^1(D)$, $r\ge 1$, on any Lipschitz domain $D$. 
Notice that for any $\zeta>0$, $\zeta^{-2/r}=e^{-2\ln\zeta/r}=e^{-2}$ if $r=\ln\zeta$, by taking $r=\max(2,|\ln(h_{\rm min}^2M)|)$ we obtain \begin{equation}\label{d2} \|\boldsymbol{W}\|_{\ell_2}^2\le {{C(p^2(N-N^\Gamma)+ p^3N^\Gamma)(1+|\ln(h_{\rm min}^2M)|)\|w\|_{H^1(\Omega_1\cup\Omega_2)}^2}}. \end{equation} By Lemma \ref{lem:2.1} and the discrete Poincar\'e inequality in Lemma \ref{lem:2.3} \begin{eqnarray*} \|w\|_{H^1(\Omega_1\cup\Omega_2)}^2&\le& {{2\|w-v\|_{H^1(\Omega_1\cup\Omega_2)}^2+2(\|\nabla_h v\|_{L^2(\Omega)}^2+\|v\|_{L^2(\Omega)}^2)}}\\ &\le&{{C(\|ph^{-1/2}[{\hskip -1.5pt} [ v]{\hskip -1.5pt} ] \|_{\mathcal{E}_1^{\rm side}\cup\mathcal{E}_2^{\rm side}}^2+\|\nabla_h v\|_{L^2(\Omega)}^2+\|v\|_{L^2(\Omega)}^2)}}\\ &\le& {{Ca_h(v,v).}} \end{eqnarray*} This yields by \eqref{d1}-\eqref{d2} that \begin{equation}\label{d3} \|\boldsymbol{V}\|_{\ell_2}^2\le C (p^2(N-N^\Gamma)+p^3N^\Gamma)(1+|\ln(h_{\rm min}^2M)|)a_h(v,v). \end{equation} On the other hand, since $a_h(v,v)\le C\|v\|_{\rm DG}^2$, we have \begin{eqnarray} a_h(v,v)&\le&C\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma}\left(\|\nabla v\|_{L^2(K)}^2+\Theta_K\|ph^{-1/2}v\|_{L^2(\partial K)}^2\right)\nonumber\\ & &+\sum_{K\in\mathcal{M}^\Gamma}\sum^2_{i=1}\left(\|\nabla v_i\|_{L^2(K_i)}^2+\Theta_K\|ph^{-1/2}v_i\|_{L^2(\partial K_i)}^2+\|p^{-1}h^{1/2}\nabla_T v_i\|_{L^2(\Gamma_K)}^2\right)\nonumber\\ &:=&{\rm I}+{\rm II}.\label{d4} \end{eqnarray} By \eqref{b7}-\eqref{b8} \begin{equation}\label{d5} {\rm I}\le C\Theta\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma}\left(\|\nabla v\|_{L^2(K)}^2+\|ph^{-1/2}v\|_{L^2(\partial K)}^2\right)\le C\Theta p\sum_{K\in\mathcal{M}\backslash\mathcal{M}^\Gamma}\|\boldsymbol{V}_K\|_{\ell_2}^2. \end{equation} For $K\in\mathcal{M}^\Gamma$, by Lemma \ref{lem:2.1} and \eqref{b6}, for $i=1,2$, \begin{eqnarray*} \|\nabla v_i\|_{L^2(K_i)}^2\le C\Theta_Kp^4h_K^{-2}\|v_i\|_{L^2(K_i)}^2\le C\Theta_K^2p\|\boldsymbol{V}_K\|_{\ell_2}^2. \end{eqnarray*} By \eqref{a10}, Lemma \ref{lem:2.1}, the $hp$ trace inequality, and inverse estimate \begin{eqnarray*} \|v_i\|_{L^2(\partial K_i)}^2&\le&C\|v_i\|_{L^2(K_i)}\|\nabla v_i\|_{L^2(K_i)}+C\|v_i\|_{L^2(\partial K_i^h)}^2\\ &\le&C\Theta_{K}\|v_i\|_{L^2(K_i^{h-\delta_K})}\|\nabla v_i\|_{L^2(K_i^{h-\delta_K})}+Cp^2h_K^{-1}\|v_i\|_{L^2(K_i^h)}^2\\ &\le&C\Theta_{K} p^2h_K^{-1}\|v_i\|_{L^2(K_i^{h-\delta_K})}^2, \end{eqnarray*} where we used the fact $\|v_i\|_{L^2(K_i^h)}^2\le C\Theta_{K} \|v_i\|_{L^2(K_i^{h-\delta_K})}^2$, which follows directly from Lemma \ref{lem:2.0}, in the last inequality. Thus by \eqref{b5} \begin{eqnarray*} \Theta_K\|ph^{-1/2}v_i\|_{L^2(\partial K_i)}^2\le C\Theta_K^2p^4h_K^{-2}\|v_i\|_{L^2(K_i^{h-\delta_K})}^2\le C\Theta_K^2 p\|\boldsymbol{V}_K\|_{\ell_2}^2. \end{eqnarray*} Similarly, one can prove $\|p^{-1}h^{1/2}\nabla_T v_i\|_{L^2(\Gamma_K)}^2\le C\Theta_K^2 p\|\boldsymbol{V}_K\|_{\ell_2}^2$. Therefore, we have \begin{equation}\label{d6} {\rm II}\le C\Theta^2 p\sum_{K\in\mathcal{M}^\Gamma}\|\boldsymbol{V}_K\|_{\ell_2}^2. \end{equation} Combining \eqref{d4}-\eqref{d6} we obtain \begin{eqnarray*} a_h(v,v)\le C\Theta^2 p\|\boldsymbol{V}\|_{\ell_2}^2. \end{eqnarray*} This completes the proof by using \eqref{d3}. \end{proof} To conclude this section, we remark that since $\Theta_K=\mathsf{T}(\frac{1+3\eta_K}{1-\eta_K})^{4p+3}$, Theorem \ref{thm:4.1} indicates that to control the condition number of the stiffness matrix, one should choose $\eta_K\ll 1$, that is, one should have the interface being well resolved by the mesh. 
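As a quick numerical illustration of this remark (a standalone check, not part of the method), the following Python lines evaluate the growth factor $\Theta_K=\mathsf{T}(\frac{1+3\eta_K}{1-\eta_K})^{4p+3}$ with $\mathsf{T}(s)=s+\sqrt{s^2-1}$. At the borderline of Assumption (H2), $\eta_K=1/(p(p+1))$, the factor remains bounded as $p$ increases (it tends to $e^{8\sqrt{2}}\approx 8\times 10^4$), whereas at a fixed deviation such as $\eta_K=0.16$ it grows exponentially in $p$.
\begin{verbatim}
import math

def growth_factor(eta, p):
    # Theta_K = T(s)^(4p+3) with s = (1+3*eta)/(1-eta), T(s) = s + sqrt(s^2-1)
    s = (1.0 + 3.0 * eta) / (1.0 - eta)
    return (s + math.sqrt(s * s - 1.0)) ** (4 * p + 3)

for p in (1, 2, 4, 8, 16):
    eta_H2 = 1.0 / (p * (p + 1))   # borderline case of Assumption (H2)
    print(p, growth_factor(eta_H2, p), growth_factor(0.16, p))
\end{verbatim}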
\section{Numerical examples}\label{sec_numeric} In this section we provide some numerical examples to verify our theoretical results. In order to construct the orthogonal polynomials on the polygons $\hat{K}_i^{h-\delta_K}$ for the interface elements $K$, we adopt the Gram-Schmidt process starting from the basis functions of $Q_p(\hat{K})$, which are the Lagrange interpolation polynomials through the Gauss-Lobatto integration points on $\hat{K}$. The details can be found in Sommariva and Vianello \cite{Sommariva2017MCS}. The algorithms are implemented in MATLAB on a workstation with an Intel(R) Core(TM) i9-10885H CPU 2.40GHz and 64GB memory. \begin{exmp}\label{example_cond} In this example we show that the growth factor $\Theta^2$ in the bound of the condition number of the stiffness matrix in Theorem \ref{thm:4.1} is sharp. For this purpose, we consider the case of one interface element. Let $K=(-2,2)^2$ and the interface $\Gamma=\{(x(t),y(t))\in \mathbb{R}^2:t\in(-\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2})\}$, where $x(t)$ and $y(t)$ are defined as follows: \begin{align*} &x(t)=\sqrt{2}\cos(\alpha+\frac{\pi}{4})t+\sqrt{2}\sin(\alpha+\frac{\pi}{4})(100t^3-\beta t)-1,\\ &y(t)=-\sqrt{2}\sin(\alpha+\frac{\pi}{4})t+\sqrt{2}\cos(\alpha+\frac{\pi}{4})(100t^3-\beta t)-1, \end{align*} where $\cos(\alpha)=\frac{1}{\sqrt{\mu^2+1}}$, $\sin(\alpha)=\frac{\mu}{\sqrt{\mu^2+1}}$, $\beta=\frac{100}{\sqrt{\mu^2+1}}-\mu$, and $\mu=3.8$. \end{exmp} The domain and the interface are shown in Fig.\ref{fig_one_element} (left), in which $K_1^{h-\delta_K}=\Delta AE'F'$ and $K_2^{h-\delta_K}$ is the polygon with vertices $F'',E'',B,C,D$. The interface deviation is $\eta_K=\frac{200}{3\sqrt{3}(\mu^2+1)^2}\approx 0.16$. We first consider the condition number of the mass matrix to verify our analysis in Lemma \ref{lem:new}. For $v\in\mathbb{X}_p(K)$, in the notation of \eqref{yy1}, the mass matrix $\mathbb{M}\in\mathbb{R}^{2(p+1)^2\times2(p+1)^2}$ is defined as $(\mathbb{M}\boldsymbol{V}_K,\boldsymbol{W}_K)_{\ell_2}=(v,w)_K\ \ \forall v,w\in\mathbb{X}_p(K)$. Then \eqref{b6} implies that the condition number satisfies $\kappa(\mathbb{M})\le C\Theta$ for some constant $C$ independent of $p$ and $\eta_K$. We plot $\kappa(\mathbb{M})$ vs. $\Theta$ for different polynomial degrees in a log-log scale in Fig.\ref{fig_one_element} (right). It is clear that the condition number of $\mathbb{M}$ grows as $\Theta$, which agrees with our theoretical bound. We plot the curve $\kappa(\mathbb{A})$ vs. $\Theta^2 p^4$ for different polynomial degrees in a log-log scale in Fig.\ref{fig_one_element_small_eta} (left). We observe that the condition number grows as $\Theta^2p^4$, which confirms our analysis in Theorem \ref{thm:4.1}. We also observe that $\kappa(\mathbb{A})$ increases very fast as the polynomial degree increases. One can reduce the interface deviation to reduce $\kappa(\mathbb{A})$. We change $\mu$ to reduce $\eta_K$ such that $\eta_K\le \frac{0.1}{p(p+1)}$ and plot the curve $\kappa(\mathbb{A})$ vs. $p^4$ in Fig.\ref{fig_one_element_small_eta} (right). We find that $\kappa(\mathbb{A})$ is significantly reduced and grows at the rate $p^4$. 
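To illustrate how $\mu$ can be chosen in practice so that $\eta_K\le 0.1/(p(p+1))$ for a given $p$, the following Python sketch performs a simple search. It assumes that the closed-form expression for $\eta_K$ quoted above remains valid as $\mu$ varies within this family of curves, and the step size $0.1$ of the search is an arbitrary choice for illustration.
\begin{verbatim}
import math

def eta_from_mu(mu):
    # interface deviation quoted above for this family of curves
    return 200.0 / (3.0 * math.sqrt(3.0) * (mu ** 2 + 1) ** 2)

print(eta_from_mu(3.8))               # ~0.16, the value used in the first test

for p in range(1, 9):
    target = 0.1 / (p * (p + 1))
    mu = 3.8
    while eta_from_mu(mu) > target:   # enlarge mu until eta_K is small enough
        mu += 0.1
    print(p, round(mu, 1), eta_from_mu(mu))
\end{verbatim}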
\begin{figure}[!ht] \centering \includegraphics[width=0.35\textwidth]{./figs_new/curved_one_element_configure-eps-converted-to.pdf} \includegraphics[width=0.4\textwidth]{./figs_new/mass_mat_cond_vs_p_curve-eps-converted-to.pdf} \caption{Example \ref{example_cond}: The geometry setting of Example \ref{example_cond} (left) and the growth rate of the condition number of the mass matrix (right).} \label{fig_one_element} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{./figs_new/cond_curved_vs_theta-eps-converted-to.pdf} \includegraphics[width=0.4\textwidth]{./figs_new/cond_vs_small_theta_p-eps-converted-to.pdf} \caption{Example \ref{example_cond}: The growth rate of the condition number of $\mathbb{A}$ with $\eta_K=0.16$ (left) and the condition number of $\mathbb{A}$ with $\eta_K\le \frac{0.1}{p(p+1)}$ (right).} \label{fig_one_element_small_eta} \end{figure} This example shows clearly the importance of reducing the interface deviation to control the condition number of the stiffness matrix. In the following we always require \begin{equation}\label{yy} \max_{K\in\mathcal{M}}\eta_K\le\frac{0.1}{p(p+1)}, \end{equation} which is stronger than that in Assumption (H2). The finite element meshes in our following numerical examples are constructed as follows. \noindent\rule{\textwidth}{0.35mm} \noindent{\bf{Algorithm 7:}} {The algorithm for generating the induced mesh satisfying Assumption (H3) and \eqref{yy}} \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} {\bf{Input:}} A uniform initial Cartesian mesh $\mathcal{T}_0$ of mesh size $h$ {\bf{Output:}} The induced mesh $\mathcal{M}={\rm Induced}(\mathfrak{C})$ $1^\circ$ Set $\mathcal{T}=\mathcal{T}_0$; $2^\circ$ Refine the elements of $\mathcal{T}$ near the interface by quad refinements to generate a Cartesian mesh (still denoted by) $\mathcal{T}$ with possible handing nodes such that all interface elements of $\mathcal{T}$ form an admissible chain $\mathfrak{C}$; $3^\circ$ Call the refinement procedure in \cite[\S 6.3]{Bonito} such that $\mathcal{T}$ satisfies Assumption (H3); $4^\circ$ Use Algorithm 6 to generate an induced mesh $\mathcal{M}={\rm Induced}(\mathfrak{C})$; $5^\circ$ If the interface elements in $\mathcal{M}$ do not satisfy \eqref{yy}, release all merged elements in $\mathfrak{C}$, go to $2^\circ$. \vspace{-0.2cm} \noindent\rule{\textwidth}{0.35mm} We remark that after step $2^\circ$ in Algorithm 7, the interface elements are of the same size which is smaller than the sizes of non-interface elements. Thus when implementing the refinement procedure in \cite[\S 6.3]{Bonito} in our situation, only non-interface elements are refined and consequently, the interface elements still form an admissible chain. \begin{exmp}\label{example1} Let interface $\Gamma$ be the circle centered at $(0,0)^T$ with radius $r_0=1.1$. We set $\Omega=(-2,2)^2$, $\Omega_1=\{(x,y)\in \mathbb{R}^2:\sqrt{x^2+y^2}<r_0\}$ and $\Omega_2=\Omega\setminus\bar{\Omega}_1$. Set $a_1=10$ and $a_2=1$. The right-hand side $f$ and boundary condition $g$ are computed such that the exact solution is \begin{align*} u(x,y)=\left\{\begin{array}{ll} e^{x^2+y^2-r_0^2}+10r_0^2-1+(x^2+y^2-r_0^2)^2\sin(2\pi x)\sin(2\pi y) & \text{in } \Omega_1,\\ 10(x^2+y^2)+(x^2+y^2-r_0^2)^2\sin(2\pi x)\sin(2\pi y) &\text{in }\Omega_2. \end{array}\right. 
\end{align*} \end{exmp} \begin{table}[!ht]\centering \caption{Example \ref{example1}: numerical errors $\|u-U\|_{DG}$ and orders for $p=1,2,3,4,5$.}\label{tab1} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|cc|cc|cc|cc|cc|} \hline &\multicolumn{2}{|c|}{$p=1$}&\multicolumn{2}{|c|}{$p=2$}&\multicolumn{2}{|c|}{$p=3$}&\multicolumn{2}{|c|}{$p=4$}&\multicolumn{2}{|c|}{$p=5$}\\\hline $h$ & error & order & error & order & error & order & error & order & error & order\\ \hline $1/4$ & 1.13E+00 & -- & 4.00E-01 & -- & 1.20E-01 & -- & 3.21E-02 & -- & 2.09E-03 & -- \\ $1/8$ & 6.72E-01 & 0.75 & 1.08E-01 &1.89 & 2.01E-02 & 2.58 & 1.55E-03 & 4.37 & 1.62E-04 & 3.69 \\ $1/16$ & 3.57E-01& 0.91 &2.89E-02 & 1.90 & 2.49E-03 & 3.01 & 1.03E-04 & 3.91 & 5.18E-06 & 4.97\\ $1/32$ & 1.79E-01 & 0.99 & 7.32E-03 &1.98 & 3.12E-04 & 3.00 & 6.56E-06 & 3.98& 1.62E-07 & 5.00\\\hline \end{tabular} } \end{table} In Table \ref{tab1}, we show the errors $\|u-U\|_{DG}$ and the corresponding convergence orders for $p=1,2,3,4,5$. We clearly observe the optimal $p$-th order convergence and the superior performance of high order methods. Fig. \ref{fig_mesh_one_circle} shows the induced mesh when $h=1/4$ and the corresponding numerical solution. \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{./figs_new/one_circle_N16.pdf} \includegraphics[width=0.45\textwidth]{./figs_new/numerical_solution_one_circle-eps-converted-to.pdf} \caption{Example \ref{example1}: The induced mesh of $940$ elements when $h=1/4$ (left) and the corresponding numerical solution (right). }\label{fig_mesh_one_circle} \end{figure} \begin{exmp}\label{example2} In this example we consider geometrically more complex interface. Let interface $\Gamma$ be defined as follows: \begin{align*} \Gamma=\{(x,y)\in \mathbb{R}^2:r=\frac{2}{9}(3+4^{\sin(5\theta)})\}, \end{align*} where $(r,\theta)$ are the polar coordinates. The domain $\Omega$ is divided to $\Omega_1$ and $\Omega_2$ by $\Gamma$, that is, \begin{align*} &\Omega_1=\{(x,y)\in (-2,2)^2: r<\frac{2}{9}(3+4^{\sin(5\theta)})\},\\ &\Omega_2=\{(x,y)\in (-2,2)^2: r>\frac{2}{9}(3+4^{\sin(5\theta)})\}. \end{align*} We set $a_1=10$, $a_2=1$, the right-hand side $f=1$, and the boundary condition $g=0$. \end{exmp} The exact solution of this example is unknown. We use the a posteriori error estimate in \cite{CLX} to measure the accuracy of computation. In Table \ref{tab2}, we observe the optimal $p$-th order convergence. The induced mesh when $h=1/4$ is shown in Fig. \ref{fig_mesh_exmp_five_stars} which has $2654$ elements. The discrete solution is depicted in Fig. \ref{fig_discrete_solution}. 
\begin{table}[!ht]\centering \caption{Example \ref{example2}: A posterior error estimates and the convergence orders for $p=1,2,3,4,5$.}\label{tab2} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|cc|cc|cc|cc|cc|} \hline &\multicolumn{2}{|c|}{$p=1$}&\multicolumn{2}{|c|}{$p=2$}&\multicolumn{2}{|c|}{$p=3$}&\multicolumn{2}{|c|}{$p=4$}&\multicolumn{2}{|c|}{$p=5$}\\\hline $h$ & error & order & error & order & error & order & error & order & error & order\\ \hline $1/4$ & 1.38E+00 & -- & 1.08E-01 & -- & 3.96E-02 & -- & 2.85E-03 & -- &1.02E-04 & -- \\ $1/8$ & 7.31E-01 & 0.92& 2.93E-02 &1.88 &5.13E-03 &2.95 & 1.83E-04 &3.96 & 3.35E-06 &4.93 \\ $1/16$ & 3.79E-01& 0.95 & 8.13E-03 &1.85 &6.46E-04 &2.99 &1.15E-05 &3.99 &1.08E-07 &4.95\\ $1/32$ & 1.90E-01 & 0.99 &2.09E-03 &1.96 & 8.12E-05 &2.99 &7.25E-07 &3.99 & 3.41E-09 &4.99 \\ \hline \end{tabular} } \end{table} \begin{figure}[!ht] \centering \begin{minipage}[c]{0.35\textwidth} \includegraphics[width=0.9\textwidth, height = 0.9\textwidth]{./figs_new/mesh_five_star_N8.pdf} \end{minipage} \begin{minipage}[c]{0.35\textwidth} \includegraphics[width=0.9\textwidth, height = 0.9\textwidth]{./figs_new/mesh_five_star_zoomed_N8.pdf} \end{minipage} \caption{Example \ref{example2}: The induced mesh of $2654$ elements when $h=1/4$ (left) and the corresponding zoomed local mesh (right). } \label{fig_mesh_exmp_five_stars} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{./figs_new/solution_five_star_N8-eps-converted-to.pdf} \caption{Example \ref{example2}: The discrete solution on the mesh of $2654$ elements. } \label{fig_discrete_solution} \end{figure}
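For reference, the convergence orders reported in Tables \ref{tab1} and \ref{tab2} are obtained from the errors on two successive meshes as $\log_2(e_{2h}/e_{h})$. A minimal Python sketch of this post-processing step (the numbers are copied from the $p=3$ column of Table \ref{tab1}) reads:
\begin{verbatim}
import math

# ||u-U||_DG for p = 3 and h = 1/4, 1/8, 1/16, 1/32 (Table 1)
errors = [1.20e-01, 2.01e-02, 2.49e-03, 3.12e-04]
orders = [math.log2(coarse / fine) for coarse, fine in zip(errors, errors[1:])]
print(["%.2f" % o for o in orders])  # ['2.58', '3.01', '3.00'], close to the optimal rate p = 3
\end{verbatim}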
\section{ Introduction} \label{sec:in} The objective of the present paper is to investigate algebraic and combinatorial aspects of the formula given in the abstract which gives the number of equivalence classes of arbitrarily oriented but non-backtracking non-periodic closed paths of length $N$ in a oriented graph. The formula is well known in association with the zeta function of a graph investigated by several authors. See \cite{te,tee,storm} and references therein. As shown in this paper the formula has several properties of the Witt formula type not investigated previously, as far as I know. Let's recall Witt formula and some of its properties. Let $N$ be a positive integer, $R$ a real number, $\mu$ the classical M\"obius function defined by the rules: a) $\mu(+1)=+1$, b) $\mu(g)=0$, if $g=p_{1}^{e_{1}}...p_{q}^{e_{q}}$, $p_{1},...,p_{q}$ primes, and any $e_{i}>1$, c) $\mu(p_{1}...p_{q})=(-1)^{q}$. The polynomial of degree $N$ in $R$ with rational coefficients given in terms of M\"obius function, \begin{equation} \label{(1)} {\cal M}(N;R) = \frac{1}{N} \sum_{g \mid N} \mu (g) R^{\frac{N}{g}}, \end{equation} has many applications in algebra and combinatorics \cite{mor}. It is called the {\it Witt formula} when it is associated with the following result \cite{ser}: If $V$ is an $R$-dimensional vector space and $L$ is the free Lie algebra generated by $V$ then $L$ has a ${\mathbb{Z}}_{>0}$ gradation $L= \bigoplus_{N=1}^{\infty} L_{N}$, $L_{N}$ has dimension given by ${\cal M}(N;R)$ which satisfies the formal relation \begin{equation} \label{(2)} \prod_{N=1}^{\infty} (1-z^{N})^{{\cal M}(N;R)} = 1- Rz \end{equation} called the {\it Witt identity}. Notice that the coefficient of the linear term in the right hand side is minus the dimension of the vector space that generates the Lie algebra. Witt identity follows from the Poincar\'e-Birkoff-Witt theorem which says that \begin{equation} \label{(3)} \prod_{N=1}^{\infty} (1-z^{N})^{-{\cal M}(N;R)} = 1+\sum_{N=1}^{\infty} R^{N} z^{N} \end{equation} is the generating function for the dimensions of the homogeneous subspaces of the enveloping algebra of $L$. Witt formula is also called the {\it necklace polynomial} because ${\cal M}(N;R)$ gives the number of inequivalent non-periodic colorings of a circular string of $N$ beads - a necklace - with at most $R$ colors \cite{metro}. In \cite{sherm} S. Sherman associated it to the number of equivalence classes of closed non-periodic paths of length $N$ which traverse counterclockwisely without backtracking the edges of a graph with $R$ loops counterclockwisely oriented and hooked to a single vertex so that $\Omega$ generalizes ${\cal M}$. Notice that the coefficient of the linear term in the right hand side of Witt identity is minus the number of loops in the graph. It is proved that the formula for $\Omega(N,T)$ satisfies some identities analogous to those satisfied by ${\cal M}(N,R)$ which Carlitz proved in \cite{car} and Metropolis and Rota in \cite{metro}. In \cite{mor}, Moree proved similar identities for his Witt transform. Also, the formula can be interpreted as a dimension formula and it can be associated to a coloring problem. The paper is organized as follows. In section 2, some preliminary definitions and results are given. In section 3, several identities satisfied by the formula is proved. In section 4, the formula is interpreted as a dimension formula associated to free Lie super algebras and, in section 5, to necklace colorings. The formula is applied to some examples. 
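As a quick illustration of the necklace interpretation just mentioned (this numerical check is ours and is not part of the original discussion), one may compare ${\cal M}(N;R)$ with a brute-force count: every equivalence class of non-periodic colorings of a circular string of $N$ beads contains exactly $N$ distinct strings, so the number of classes is the number of aperiodic strings divided by $N$.
\begin{verbatim}
from itertools import product

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def witt(N, R):
    # M(N;R) = (1/N) * sum_{g | N} mu(g) * R^(N/g)
    return sum(mobius(g) * R ** (N // g) for g in range(1, N + 1) if N % g == 0) // N

def brute_force(N, R):
    # aperiodic strings of length N over R colors, divided by N
    def aperiodic(s):
        return all(s != s[d:] + s[:d] for d in range(1, N))
    return sum(aperiodic(s) for s in product(range(R), repeat=N)) // N

for N in range(1, 7):
    print(N, witt(N, 2), brute_force(N, 2))  # both counts give 2, 1, 2, 3, 6, 9
\end{verbatim}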
\section{ Preliminaries} \label{sec:pre} Let $G=(V,E)$ be a finite connected and oriented graph where $V$ is the set of vertices with $|V|$ elements and $E$ is the set of oriented edges with $|E|$ elements labeled $e_{1}$, ...,$e_{|E|}$. An edge has an origin and an end as given by its orientation. The graph may have multiple edges and loops. Now, consider the graph $G^{*}$ built from $G$ by adding in the opposing oriented edges $e_{|E|+1}=(e_{1})^{-1}$, ...,$e_{2 |E|}=(e_{|E|})^{-1}$, $(e_{i})^{-1}$ being the oriented edge opposite to $e_{i}$ and with origin (end) the end (origin) of $e_{i}$. In the case that $e_{i}$ is an oriented loop, $e_{i+|E|}=(e_{i})^{-1}$ is just an additional oriented loop hooked to the same vertex. Thus, $G^{*}$ has $2|E|$ oriented edges. A path in $G$ is given by an ordered sequence $(e_{i_{1}},...,e_{i_{N}})$, $i_{k} \in \{1, ..., 2|E|\}$, of oriented edges in $G^{*}$ such that the end of $e_{i_{k}}$ is the origin of $e_{i_{k+1}}$. Also, a path can be represented by a word in the alphabet of the symbols in the set $\{e_{1}, ..., e_{2E} \}$, a word being a concatenated product of symbols which respect the order of the symbols in the sequence. In this paper, all paths are cycles. These are non-backtracking tail-less closed paths, that is, the end of $e_{i_{N}}$ coincides with the origin of $e_{i_{1}}$, subjected to the non-backtracking condition that $e_{i_{k+1}} \neq e_{i_{k}+|E|}$. In another words, a cycle never goes immediately backwards over a previous edge. Tail-less means that $e_{i_{1}} \neq e_{i_{N}}^{-1}$. The length of a cycle is the number of edges in its sequence. A cycle $p$ is called periodic if $p=q^r$ for some $r>1$ and $q$ is a non periodic cycle. Number $r$ is called the period of $p$. The cycle $(e_{i_{N}}, e_{i_{1}}, ...,e_{i_{N-1}})$ is called a circular permutation of $(e_{i_{1}},...,e_{i_{N}})$ and $(e_{i_{N}}^{-1},...,e_{i_{1}}^{-1})$ is an inversion of the latter. A cycle and its inverse are taken as distinct. The classical M\"obius inversion formula is used several times in this paper. Given arithmetic functions $f$ and $g$ it states that $g(n)= \sum_{d|n} f(d)$ if and only if $f(n)=\sum_{d|n} \mu(d) g(n/d)$. In order to count cycles of a given length in a graph $G$ a crucial tool is the edge adjacency matrix of $G$ \cite{te}. This is the $2|E| \times 2|E|$ matrix $T$ defined as follows: $T_{ij}=1$, if end vertex of edge $i$ is the start vertex of edge $j$ and edge $j$ is not the inverse edge of $i$; $T_{ij}=0$, otherwise. \begin{thm}\label{thm1} (\cite{te}) The number $Tr T^{N}$ (over)counts cycles of length $N$ in a graph $G$. \end{thm} \begin{proof} Let $a$ and $b$ be two edges of $G$. The $(a,b)^{th}$ entry of matrix $T^{N}$ is \begin{displaymath} (T^{N})_{(a,b)}= \sum_{e_{i_{1}}, ..., e_{i_{N-1}}} T_{(a, e_{i_{1}})} T_{(e_{i_{1}},e_{i_{2}})}...T_{(e_{i_{N-1}},b)} \end{displaymath} From the definition of the entries of $T$ it follows that $(T^{N})_{(a,b)}$ counts the number of paths of length $N$ with no backtracks from edge $a$ to edge $b$. For $b=a$, only closed paths are counted. Taking the trace gives the number of non-backtracking closed paths with every edge taken into account as starting edge, hence, the trace overcounts closed paths because every edge in the path is taken into account as starting edge. The paths counted by the trace are tail-less, that is, $e_{i_{1}} \neq e_{i_{N}}^{-1}$; otherwise, $Tr T^{N}= \sum_{a} (T^{N})_{(a,a)}$ would have a term with entry $(a,a^{-1})$ which is not possible. 
\end{proof} \begin{thm} \label{thm2} \cite{tee} Denote by $\Omega(N,T)$ the number of equivalence classes of non periodic cycles of length $N$ in $G$. This number is given by the following formula: \begin{equation} \label{(4)} \Omega(N,T)= \frac{1}{N} \sum_{g|N} \mu(g) \hspace{1mm} Tr T^{\frac{N}{g}} \end{equation} \end{thm} \begin{proof} In the set of $Tr T^{N}$ cycles there is the subset with $N \Omega(N,T)$ elements formed by the non periodic cycles of length $N$ plus their circular permutations, and the subset with $\sum_{g \mid N,\, g \neq 1}\frac{N}{g} \Omega(\frac{N}{g},T)$ elements formed by the periodic cycles of length $N$ (whose periods are the common divisors of $N$) plus their circular permutations. (A cycle of period $g$ and length $N$ is of the form \begin{equation*} (e_{k_{1}} e_{k_{2}} ...e_{k_{\alpha}})^{g} \end{equation*} where $\alpha= N/g$, and $ (e_{k_{1}} e_{k_{2}} ...e_{k_{\alpha}}) $ is a non periodic cycle, so that the number of periodic cycles with period $g$ plus their circular permutations is given by $(N/g)\Omega(N/g, T)$.) Hence, \begin{equation*} Tr T^{N}= \sum_{g \mid N} \frac{N}{g} \hspace{1mm} \Omega \left( \frac{N}{g},T \right) \end{equation*} M\"obius inversion formula gives the result. \end{proof} \begin{rem} \label{rmk1} Some terms in the right hand side of \eqref{(4)} are negative. In spite of that, the right hand side is always positive. Multiply both sides by $N$. The first term equals $Tr T^{N}$ while the other terms give (in absolute value) the numbers, according to period, of the various subsets of periodic cycles, which are proper subsets of the larger set with $Tr T^{N}$ elements. \end{rem} \begin{rem} \label{rmk2} Witt formula can be expressed in a form analogous to \eqref{(4)}. Define $Q$ as the $R \times R$ matrix with all entries equal to one. The trace $Tr Q^{N}=R^{N}$ counts counterclockwisely oriented cycles in the graph with $R$ loops counterclockwisely oriented and hooked to a single vertex so that \begin{equation} \label{(5)} {\cal M}(N;R) = \frac{1}{N} \sum_{g \mid N} \mu (g) Tr Q^{\frac{N}{g}} \end{equation} \end{rem} A recurrence relation which may be useful in practical calculations of $\Omega(N,T)$ is given next. \begin{thm} \label{thm3} \begin{equation} \label{(6)} N\Omega (N,T) = Tr T^{N} - \sum_{g \mid N, g \neq N} g \hspace{1mm} \Omega \left( g,T \right) \end{equation} \end{thm} \begin{proof} This follows from \begin{equation*} Tr T^{N}= \sum_{g \mid N} g \hspace{1mm} \Omega \left( g,T \right) = N\Omega (N,T) +\sum_{g \mid N, g \neq N} g \hspace{1mm} \Omega \left( g,T \right). \end{equation*} \end{proof} \section{Some identities satisfied by $\Omega$} \label{sec:count} It turns out that $\Omega(N,T)$ satisfies some identities analogous to those satisfied by Witt formula proved in \cite{car,metro}. These identities are established in this section. In \cite{mor}, Moree proved similar identities for his Witt transform. \begin{thm} \label{thm4} Given the matrices $T_{1}$ and $T_{2}$ define $S (s,T_{i})= \sum_{g|s} \mu(g) \hspace{1mm} Tr T_{i}^{\frac{s}{g}}$, $i=1,2$, and denote by $T_{1} \otimes T_{2}$ the Kronecker product of $T_{1}$ and $T_{2}$. Then, \begin{equation} \label{(7)} \sum_{[s,t]=N} S (s,T_{1}) S (t,T_{2}) = S (N,T_{1} \otimes T_{2}) \end{equation} The summation is over the set of positive integers $\{s,t \mid [s,t]=N\}$, $[s,t]$ being the least common multiple of $s,t$.
It also holds that \begin{equation} \label{(8)} S (N,T^{l})=\sum_{[l,t]=N l} S (t,T) \end{equation} \end{thm} \begin{proof} In order to prove \eqref{(7)} it suffices to consider the equivalent formula (see \cite{car}) \begin{equation*} \sum_{k|N}\sum_{[s,t]=k } S (s,T_{1}) S (t,T_{2}) = \sum_{k|N} S (k,T_{1} \otimes T_{2}) \end{equation*} Using M\"obius inversion formula, the left hand side is equal to \begin{equation*} \sum_{s|N} S (s,T_{1}) \sum_{t|N} S (t,T_{2}) = (Tr T_{1}^{N}) (Tr T_{2}^{N}) \end{equation*} But $(Tr T_{1}^{N}) (Tr T_{2}^{N})=Tr (T_{1} \otimes T_{2})^{N}$. By M\"obius inversion formula this gives the right hand side of the equivalent formula. Using ideas from \cite{mor}, the next identity can be proved using the following equivalent formula: \begin{equation*} \sum_{g|N} \sum_{[l,t]=\frac{Nl}{g}} S (t,T) = \sum_{g|N} S (N,T^{l}) \end{equation*} The left hand side is equal to $\sum_{t|lN} S(t,T)=Tr T^{lN}=Tr (T^{l})^{N}$. Apply M\"obius inversion formula to get the result. \end{proof} \begin{rem} \label{rmk3} Formula \eqref{(7)} may be generalized to the case $T=T_{1} \otimes T_{2} \otimes ...\otimes T_{l}$ to give \begin{equation} \label{(9)} \sum_{[s_{1}, \dots ,s_{l}]=N } S (s_{1},T_{1}) \dots S (s_{l},T_{l}) = S (N,T) \end{equation} Also, it can be proved that \begin{equation} \label{(13)} S(N,T_{1}^{s} \otimes T_{2}^{r})= \sum_{[rp,sq]=nrs} S (p,T_{1})S (q,T_{2}) \end{equation} where $r$ and $s$ are relatively prime and the summation is over all positive integers $p,q$ such that $[rp,sq]=nrs$. The proof is an application of previous identities as in \cite{metro}, Theorem 5. \end{rem} \begin{rem} \label{rmk4} In terms of $\Omega$, using that $[s,t](s,t)=st$, (7) becomes \begin{equation} \label{(14)} \sum_{[s,t]=N} (s,t) \Omega(s,T_{1}) \Omega (t,T_{2}) = \Omega (N,T_{1} \otimes T_{2}) \end{equation} where $(s,t)$ is the maximum common divisor of $s$ and $t$. This can be extended to the general case \eqref{(9)} to give \begin{equation} \sum_{[s_{1}, \dots ,s_{l}]=N } (s_{1}, \cdots, s_{l} )\Omega (s_{1},T_{1}) \dots \Omega (s_{l},T_{l}) = \Omega (N,T) \end{equation} where $(s_{1}, \cdots, s_{l})$ is the greatest common divisor of $(s_{1}, \cdots,s_{l})$ and the sum runs over all positive integers $(s_{1}, \cdots,s_{l})$ with least common multiple equal to $N$ and $T=T_{1} \otimes \dots \otimes T_{l}$. Also, from (8), \begin{equation} \label{(15)} \Omega (N,T^{l})=\sum_{[l,t]=N l} \frac{t}{N}\Omega (t,T). \end{equation} In terms of $\Omega$ (10) reads \begin{equation*} N \Omega (N,T_{1}^{s} \otimes T_{2}^{r})= \sum_{[rp,sq]=Nrs} pq \Omega (p,T_{1}) \Omega(q,T_{2}) \end{equation*} Using $(rp,sq)[rp,sq]=rpsq$ with $[rp,sq]=Nrs$ implies $(rp,sq)N=pq$ and \begin{equation*} \Omega (N,T_{1}^{s} \otimes T_{2}^{r})= \sum_{[rp,sq]=Nrs} (rp,sq) \Omega (p,T_{1}) \Omega(q,T_{2}) \end{equation*} Replace $s$ and $r$ by $s/(r,s)$ and $r/(r,s)$ to get \begin{equation} (r,s) \Omega (N,T_{1}^{s/(r,s)} \otimes T_{2}^{r/(r,s)})= \sum (rp,sq) \Omega (p,T_{1}) \Omega(q,T_{2}) \end{equation} The sum is over $p,q$ such that $pq/(pr,qs)=N/(r,s)$. \end{rem} Another identity satisfied by $\Omega$ is of the Witt type \eqref{(2)}. It is a well known result about the $\zeta$ function of a graph $G$ which is defined as follows: \begin{equation} \label{(17)} \zeta(z):=\prod_{[p]} (1-z^{l(p)})^{-1}=\prod_{N=1}^{\infty} (1-z^{N})^{-\Omega(N,T)} \end{equation} See \cite{te} and \cite{tee}. 
The first product is over the equivalence classes $[p]$ of backtrack-less and tail-less closed paths of length $l(p)$ in $G$. It is a famous result that $\zeta=[det(1-zT)]^{-1}$, hence, $\Omega(N,T)$ satisfies the Witt type identity \begin{equation} \label{(18)} \prod_{N=1}^{\infty} (1-z^{N})^{\Omega(N,T)}=det(1-zT) \end{equation} As mentioned in the introduction, the coefficient of the linear term in Witt's identity is the negative of the number of loops in a graph with $R$ loops hooked to a single vertex. The coefficients in the expansion of the determinant $det(1-zT)$ as a polynomial in $z$ also have nice combinatorial meanings related to the structure of the graph $G$ as proved by several authors. See \cite{storm} and references therein. Next theorem gives formulas for these coefficients and for those of the inverse of the determinant which are relevant for the next section. \begin{thm} \label{thm5} Define \begin{equation} \label{(19)} g(z):=\sum_{N=1}^{\infty} \frac{ Tr T^{N} }{N} z^{N}. \end{equation} Then, \begin{equation} \label{(20)} \prod_{N=1}^{+\infty} (1-z^{N})^{\pm \Omega(N,T)} = e^{\mp g(z)} =[det(1-zT)]^{\pm}= 1 \mp \sum_{i=1}^{+\infty} c_{\pm}(i) z^{i},\\ \end{equation} where \begin{equation} \label{(21)} c_{\pm}(i)= \sum_{m=1}^{i} \lambda_{\pm}(m) \sum_{ \begin{array}{l} a_{1}+2a_{2}+...+ia_{i} =i\\ a_{1}+...+a_{i} = m \end{array}} \prod_{k=1}^{i} \frac{(Tr T^{k})^{a_{k}}}{a_{k}! k^{a_{k}}} \end{equation} with $\lambda_{+}(m)=(-1)^{m+1}$, $\lambda_{-}(m)=+1$, $c_{+}(i)=0$ for $i > 2|E|$, and $c_{-}(i) \geq 0$. Furthemore, \begin{equation} \label{(22)} Tr T^{N} = N \sum_{ \begin{array}{l} s = (s_{i})_{i \geq 1}, s_{i} \in {\bf Z}_{\geq 0}\\ \sum is_{i}=N \end{array}} (\pm 1)^{|s|+1} \frac{(\mid s \mid -1)!}{s!} \prod c_{\pm}(i)^{s_{i}}\\ \end{equation} where $\mid s \mid = \sum s_{i}, s! = \prod s_{i} !$. \end{thm} \begin{proof} Define $P_{\pm}$ by \begin{equation*} P_{\pm}(z)=\prod_{N'=1}^{+\infty} (1-z^{N'})^{\pm \Omega(N',T)} \end{equation*} Take the logarithm of both sides and use \eqref{(4)} to get \begin{eqnarray*} ln P_{\pm} &=&\mp \sum_{N'} \sum_{k} \frac{1}{k} \Omega(N',T) z^{N' k} =\mp \sum_{N=1}^{+\infty} \sum_{k|N}\frac{1}{k} \Omega\left(\frac{N}{k}, T \right) z^{N}\\ &=& \mp \sum_{N=1}^{+\infty} \frac{Tr T^{N}}{N} z^{N} = \mp g(z) \end{eqnarray*} from which the first equality in \eqref{(20)} follows. From the definition of $g(z)$, it follows that \begin{eqnarray*} \mp g(z):= \mp \sum_{N=1}^{\infty} \frac{Tr T^{N}}{N} z^{N} &=& \mp Tr \sum_{N=1}^{+\infty} \frac{1}{N} T^{N} z^{N} = \pm Tr \hspace{1mm} ln(1-zT)\\ &=& \pm ln \hspace{1mm} det(1-zT)\\ \end{eqnarray*} proving the second equality in \eqref{(20)}. The third equality is obtained formally expanding the exponential. As the formal Taylor expansion of $1-e^{\mp g}$, the coefficients $c_{\pm}$ are given by \begin{equation*} c_{\pm}(i) = \frac{1}{i!} \frac{d^{i}}{d z^{i}} \left[ \pm(1-e^{\mp g}) \right]|_{z=0} \end{equation*} Using Faa di Bruno's formula as in \cite{coss}, the derivatives can be computed explicitly and \eqref{(21)} follows. The determinant is a polynomial of maximum degree $2|E|$, hence, $c_{+}(i)=0$ for $i>2|E|$. Clearly, $c_{-}(i) \geq 0$. 
To prove \eqref{(22)} write \cite{kangg} \begin{eqnarray*} \mp ln \left( 1 \mp \sum_{i} c_{\pm}(i) z^{i} \right) &=& \mp \sum_{l=1}^{+\infty} \frac{(-1)}{l} \left( \pm \sum_{i} c_{\pm}(i) z^{i} \right)^{l}\\ &=& \pm \sum_{l=1}^{+\infty} \frac{(\pm 1)}{l} \sum_{\begin{array}{l} s = (s_{i})_{i \geq 1}\\ s_{i} \in {\Bbb Z}_{\geq 0}\\ \sum s_{i}=l \end{array}} \frac{(\sum s_{i})!}{\prod (s_{i} !)} \left(\prod c_{\pm}(i)^{s_{i}} \right) z^{\sum s_{i} l}\\ &=& \sum_{k=1}^{+\infty} z^{k} \sum_{ \begin{array}{l} s = (s_{i})_{i \geq 1}, s_{i} \in {\Bbb Z}_{\geq 0}\\ \sum is_{i}=k \end{array}} (\pm 1)^{|s|+1} \frac{(\mid s \mid -1)!}{s!} \prod c_{\pm}(i)^{s_{i}} \end{eqnarray*} The second equality in \eqref{(20)} applied to the left hand side yields \begin{equation*} \mp ln \left( 1\mp \sum_{i} c_{\pm}(i) z^{i} \right) =\sum_{k=1}^{+\infty} \frac{Tr T^{k}}{k} z^{k} \end{equation*} Compare coefficients to get the result. \end{proof} \begin{rem} \label{rmk5} Witt identity can be expressed in terms of a determinant: \begin{equation*} \prod_{N=1}^{\infty} (1-z^{N})^{{\cal M}(N;R)} = 1- Rz=det(1-zQ) \end{equation*} The proof is analogous to the proof of previous theorem using \eqref{(5)}. \end{rem} \section{ $\Omega$ and free Lie super algebras} \label{sec:lie} As mentioned in the introduction the coefficient in the Witt formula \eqref{(1)} has an algebraic interpretation as the negative of the dimension of a vector space that generates a free Lie algebra and the inverse of this formula is the generating function of dimensions of the subspaces of the enveloping algebra of the Lie algebra. Is it possible that the coefficients in the determinant $det(1-zT_{G})$ above have similar interpretation? The answer is positive. Formula \eqref{(4)} and the coefficients of the determinant and its inverse can be interpreted algebraically in terms of data related to Lie super algebras. In series of papers S. -J. Kang and M. -H Kim \cite{kangg, kangggg} generalized Witt formula \eqref{(1)} to the case the free Lie algebra $L$ is generated by an infinite graded vector space. They obtained a generalized Witt formula for the dimensions of the homogeneous subspaces of $L$ which satisfies a generalized Witt identity. In \cite{kangggg} S. -J. Kang extended these results to super spaces and Lie super algebras. Some of the results are summarized in the following proposition. \begin{prop} \label{pr6} Let $V= \bigoplus_{i=1}^{\infty} V_{i}$ be a ${\mathbb{Z}}_{>0}$-graded super space with finite dimensions $dim V_{i}= |t(i)|$ and super dimensions $Dim V_{i}= t(i) \in {\mathbb{Z}}$, $\forall i \geq 1$. Let ${\cal L}= \bigoplus_{N=1}^{\infty} {\cal L}_{N}$ be the free Lie super algebra generated by $V$ with a ${\mathbb{Z}}_{>0}$-gradation induced by that of $V$. Then, the super dimensions of the subspaces ${\cal L}_{N}$ are given by formula \begin{equation} \label{(23)} Dim {\cal L}_{N}= \sum_{g | N} \frac{\mu(g)}{g} W \left(\frac{N}{g}\right) \end{equation} The summation ranges over all common divisors $g$ of $N$ and $W$ is given by \begin{equation} \label{(24)} W(N)= \sum_{s \in T(N)} \frac{(\mid s \mid -1)!}{s!} \prod t(i)^{s_{i}} \end{equation} where $T(N)=\{ s = (s_{i})_{i \geq 1} \mid s_{i} \in {\mathbb{Z}}_{\geq 0}, \sum is_{i}=N \}$ and $\mid s \mid = \sum s_{i}$, $ s! = \prod s_{i} !$. The numbers $Dim {\cal L}_{N}$ satisfy the identity \begin{equation}\label{(25)} \prod_{N=1}^{\infty} (1-z^{N})^{Dim {\cal L}_{N}}= 1- \sum_{i=1}^{\infty} t(i) z^{i}. 
\end{equation} The right hand side of \eqref{(25)} is related to the generating function for the $W$'s, \begin{equation}\label{4.7} g(z) :=\sum_{n=1}^{\infty} W(n)z^{n} \end{equation} by the relation \begin{equation}\label{4.8} e^{-g(z)}= 1-\sum_{i=1}^{\infty} t(i) z^{i} \end{equation} Furthermore, \begin{equation}\label{4.9} \prod_{N=1}^{\infty} \left( \frac{1}{1-z^{N}} \right)^{Dim {\cal L}_{N}}= 1+\sum_{i=1}^{\infty} Dim U({\cal L})_{i} z^{i} \end{equation} where $Dim U({\cal L})_{i}$ is the dimension of the $i$-th homogeneous subspace of $U({\cal L})$, the universal enveloping algebra of ${\cal L}$. \end{prop} In \cite{kangggg}, \eqref{(23)} is called the {\it generalized Witt formula}; $W$ is called {\it Witt partition function}; and \eqref{(25)} is called {\it generalized Witt identity}. See section 2.3 of \cite{kangggg}. Given a formal power series $\sum_{i=1}^{+\infty} t_{i} z^{i}$ with $ t_{i} \in {\Bbb Z}$, for all $i \geq 1$, the coefficients in the series can be interpreted as the super dimensions of a ${\Bbb Z}_{>0}$-graded super space $V= \bigoplus_{i=1}^{\infty} V_{i}$ with dimensions $dim V_{i}= |t_{i}|$ and super dimensions $Dim V_{i}= t_{i} \in {\Bbb Z}$. Let ${\cal L}$ be the free Lie super algebra generated by $V$. Then, it has a gradation induced by $V$ and the homogeneous subspaces have dimension given by \eqref{(23)}. Apply this interpretation to the determinant $det(1-zT)$ which is a polynomial of degree $2|E|$ in the formal variable $z$. It can be taken as a power series with coefficients $t_{i}=0$, for $i > 2|E|$. Comparison of the formulas in Theorem \ref{thm5} with formulas in the above Proposition implies the next result: \begin{prop} \label{7} Given a graph $G$, $T$ its edge matrix, let $V= \bigoplus_{i=1}^{2|E|} V_{i}$ be a ${\mathbb{Z}}_{>0}$-graded super space with finite dimensions $dim V_{i}= |c_{+}(i)|$ and the super dimensions $Dim V_{i}= c_{+}(i)$ given by \eqref{(21)}, the coefficients of $det(1-zT)$. Let ${\cal L}= \bigoplus_{N=1}^{\infty} {\cal L}_{N}$ be the free Lie super algebra generated by $V$. Then, the super dimensions of the subspaces ${\cal L}_{N}$ are given by $Dim{\cal L}_{N} =\Omega(N, T)$. The algebra has generalized Witt identity given by \eqref{(18)}. The zeta function of $G$ \eqref{(17)} is the generating function for the dimensions of the subspaces of the enveloping algebra $U({\cal L})$ of ${\cal L}$ which are given by $Dim U({\cal L})_{n}=c_{-}(n)$, $c_{-}(n)$ given by \eqref{(21)} . \end{prop} \begin{exmp} $G_{1}$, the graph with $R \geq 2$ edges counterclockwisely oriented and hooked to a single vertex shown in Figure 1. The edge matrix for $G_{1}$ is the $2R \times 2R$ symmetric matrix \begin{equation*} T_{G_{1}} = \left( \begin{array}{clcr} A & B\\ B & A \end{array} \right) \end{equation*} \begin{center} \begin{figure}[h] \centering \includegraphics[scale=0.5]{fig1.jpg} \caption{Graph $G_{1}$} \label{Fi:G1} \end{figure} \end{center} where $A$ is the $R \times R$ matrix with all entries equal to $1$ and $B$ is the $R \times R$ matrix with the main diagonal entries equal to $0$ and all the other entries equal to $1$. 
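Before listing the closed-form expressions for this example, it may be helpful to see the same quantities computed directly; the following short Python sketch (ours, purely illustrative) assembles $T_{G_{1}}$ from the blocks $A$ and $B$ for $R=2$, evaluates $\Omega(N,T_{G_{1}})$ from formula \eqref{(4)}, and checks the Witt type identity \eqref{(18)} numerically at a small value of $z$.
\begin{verbatim}
import numpy as np

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def omega(N, T):
    # formula (4): (1/N) sum_{g | N} mu(g) Tr T^(N/g)
    s = sum(mobius(g) * np.trace(np.linalg.matrix_power(T, N // g))
            for g in range(1, N + 1) if N % g == 0)
    return round(s / N)

R = 2
A = np.ones((R, R), dtype=int)                          # all entries equal to 1
B = np.ones((R, R), dtype=int) - np.eye(R, dtype=int)   # zero diagonal
T = np.block([[A, B], [B, A]])                          # edge matrix of G_1

print([omega(N, T) for N in range(1, 7)])               # [4, 4, 8, 18, 48, 116]

z = 0.1   # check of prod_N (1 - z^N)^Omega(N,T) = det(1 - z T)
lhs = np.prod([(1.0 - z ** N) ** omega(N, T) for N in range(1, 31)])
rhs = np.linalg.det(np.eye(2 * R) - z * T)
print(lhs, rhs)                                         # the two numbers agree closely
\end{verbatim}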
This matrix has the trace given by \begin{equation*} Tr T_{G_{1}}^{N} = 1+(R-1)(1+(-1)^{N})+(2R-1)^{N}, \hspace{2mm} N=1,2, \dots \end{equation*} and the determinant \begin{eqnarray*} det(1-zT_{G_{1}}) &=& (1-z) \left[ 1-(2R-1)z \right](1-z^{2})^{R-1}\\ &=& 1-\sum_{i=1}^{2R} c(i) z^{i} \end{eqnarray*} where $c(2R)=(-1)^{R}(2R-1)$, \begin{equation*} c(2i)=(-1)^{i}(2i-1) \left( \begin{array}{c} R\\ i \end{array} \right), \hspace{2mm} i=1, \cdots, R-1 \end{equation*} and \begin{equation*} c(2i+1)=2R(-1)^{i} \left( \begin{array}{c} R-1\\ i \end{array} \right), \hspace{2mm} i=0,1, \cdots R-1 \end{equation*} Furthermore, \begin{eqnarray*} [det(1-zT_{G_{1}})]^{-1} &=& \sum_{q=0}^{+\infty} z^{q} \sum_{i=0}^{q} a_{i} (2R-1)^{q-i} \end{eqnarray*} where \begin{equation*} a_{i}= \sum_{k=0}^{i} (-1)^{i-k} \left( \begin{array}{c} k+R-1\\ R-1 \end{array} \right) \left( \begin{array}{c} i-k+R-2\\ R-2 \end{array} \right) \end{equation*} Let's consider the case $R=2$. In this case, \begin{equation*} Tr T_{G_{1}}^{N}=2+(-1)^{N}+3^N, \hspace{5mm} det(1-zT_{G_{1}})= 1-4z+2z^2+4z^3-3z^4 \end{equation*} so that the number of classes of reduced nonperiodic cycles of length $N$ is given by the formula \begin{equation*} \Omega(N, T_{G_{1}}) = \frac{1}{N} \sum_{g|N} \mu (g) \left( 2+(-1)^{\frac{N}{g}}+3^{\frac{N}{g}} \right) \end{equation*} The graph generates the following algebra. Let $V= \bigoplus_{i=1}^{4} V_{i}$ be a ${\bf Z}_{>0}$-graded super space with dimensions $dimV_{1}=4$, $dimV_{2}=2$, $dimV_{3}=4$, $dimV_{4}=3$ and super dimensions $DimV_{1}=4$, $DimV_{2}=-2$, $DimV_{3}=-4$, $DimV_{4}=3$. Let ${\cal L}= \bigoplus_{N=1}^{\infty} {\cal L}_{N}$ be the free graded Lie super algebra generated by $V$. The dimensions of the subspaces ${\cal L}_{N}$ are given by the generalized Witt formula \begin{equation*} Dim{\cal L}_{N} =\frac{1}{N} \sum_{g|N} \mu (g) \left( 2+(-1)^{\frac{N}{g}}+3^{\frac{N}{g}} \right) \end{equation*} which satisfies the generalized Witt identity \begin{eqnarray*} \prod_{N=1}^{+\infty} (1-z^{N})^{\Omega(N, T_{G_{1}})} &=& 1-[4z-2z^2-4z^3+3z^4]\\ \end{eqnarray*} The subspaces $U_{n}({\cal L})$ of the enveloping algebra $U({\cal L})$ have dimensions given by the zeta function of the graph, \begin{eqnarray*} \prod_{N=1}^{+\infty} (1-z^{N})^{-\Omega(N, T_{G_{1}})} &=& 1+\frac{1}{16}\sum_{n=1}^{\infty} ((-1)^{n}+27 \cdot 3^{n}-12-4n) z^{n}\\ \end{eqnarray*} so that \begin{displaymath} Dim U_{n}({\cal L})=\frac{1}{16} \left( (-1)^{n}+27 \cdot 3^{n}-12-4n\right) \end{displaymath} \end{exmp} \begin{exmp} $G_{2}$, the bipartite graph shown in Figure 2. \begin{center} \begin{figure}[h] \centering \includegraphics[scale=0.5]{fig2.jpg} \caption{Graph $G_{2}$} \label{Fi:G2} \end{figure} \end{center} The edge matrix of $G_{2}$ is \begin{equation*} T_{G_{2}} = \left( \begin{array}{clcrclcr} 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 \end{array} \right) \end{equation*} The matrix has the trace $Tr T_{G_{2}}^{N}= 0$ if $N$ is odd and $Tr T_{G_{2}}^{N}= 4+2 \cdot 2^{N}$ if $N$ is even, and the determinant \begin{equation*} det(1-zT_{G_{2}})= 1-6z^2+9z^4-4z^6 \end{equation*} The number of classes of nonperiodic cycles of length $N$ is $\Omega(N, T_{G_{2}})=0$ if $N$ is odd, and \begin{equation*} \Omega (N, T_{G_{2}})= \frac{1}{N}\sum_{g|N} \mu (g) Tr T_{G_{2}}^{\frac{N}{g}} \end{equation*} if $N$ is even. The graph generates the following algebra.
Let $V= \bigoplus_{i=1}^{3} V_{2i}$ be a ${\bf Z}_{>0}$-graded superspace with dimensions $dimV_{2}=6$, $dimV_{4}=9$, $dimV_{6}=4$ and superdimensions $DimV_{2}=6$, $DimV_{4}=-9$, $DimV_{6}=4$. Let ${\cal L}= \bigoplus_{N=1}^{\infty} {\cal L}_{N}$ be the free graded Lie superalgebra generated by $V$. The dimensions of the subspaces ${\cal L}_{N}$ are $Dim{\cal L}_{N}=0$, for $N$ odd and \begin{equation*} Dim{\cal L}_{N} =\frac{1}{N}\sum_{g|N} \mu (g) Tr T_{G_{2}}^{\frac{N}{g}} \end{equation*} for $N$ even. The dimensions satisfy the generalized Witt identity \begin{eqnarray*} \prod_{N=1}^{+\infty} (1-z^{N})^{\Omega(N, T_{G_{2}})} &=& 1-[6z^2-9z^4+4z^6] \end{eqnarray*} The generating function for the dimensions of the subspaces $U_{n}({\cal L})$ of the enveloping algebra $U({\cal L})$ is given by \begin{eqnarray*} \prod_{N=1}^{+\infty} (1-z^{N})^{-\Omega(N, T_{G_{2}})} &=& 1+\frac{1}{18}\sum_{n=1}^{\infty} (2^{2n+5}-6n-14) z^{2n}\\ \end{eqnarray*} so that \begin{equation*} Dim U_{n}({\cal L})= \frac{1}{18}\left(2^{2n+5}-6n-14\right) \end{equation*} \end{exmp} \section{Necklace colorings induced by paths} \label{sec:col} Given the set of $2|E|$ colors $\{c_{1},..., c_{2|E|} \}$, assign the colors $c_i, c_{|E|+i}$ to edges $e_{i}, e_{|E|+i}=e_{i}^{-1} \in G^*$, respectively, so that to a cycle of length $N$ in $G$ corresponds an ordered sequence of $N$ colors. Now, assign each color in this sequence to a bead in a circular string with $N$ beads - a necklace - in such a manner that two adjacent colors in the sequence are assigned to adjacent beads. The non backtracking condition for cycles implies that no two adjacent beads are painted with colors, say, $c_i$ and $c_{|E|+i}$. It is clear that there is a correspondence between the classes of nonperiodic cycles of length $N$ in $G$ and classes of nonperiodic colorings of a necklace with $N$ beads with at most $2|E|$ distinct colors induced by the cycles so that the number of inequivalent colorings is $\Omega_{G}(N, T)$. Of course the structure of the graph reflects itself on the coloring. For instance, the presence of loops in the graph means that their assigned colors may appear repeated in a string of adjacent beads. This can not happen to a color assigned to an edge which is not a loop. The edge matrix $T$ may be called the {\it color matrix}. It basically tells what colors are allowed to follow a given color in the necklace. Element $T_{ij}=1$, if a color $c_j$ can follow color $c_i$ and $c_j \neq c_{|E|+i}$; $T_{ij}=0$, otherwise. \begin{exmp} The number of nonperiodic colorings of a necklace with $N$ beads with at most 6 colors and color matrix given by graph $G_{1}$ with $R=3$: \begin{equation*} T_{G_{1}} = \left( \begin{array}{clcrclcr} 1 & 1 & 1 & 0 & 1 & 1\\ 1 & 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1\\ \end{array} \right) \end{equation*} is \begin{equation*} \Omega(N,T_{G_{1}}) = \frac{1}{N} \sum_{g|N} \mu (g) \left( 3+2(-1)^{\frac{N}{g}}+5^{\frac{N}{g}} \right) \end{equation*} For $N=3$, $\Omega(3, T_{G_{1}})=40$.
The classes of nonperiodic colorings are $[c_{i}c_{j}^{2}]$, $[c_{3+i} c_{j}^{2}]$, $[c_{i} c_{3+j}^{2}]$, $[{c_{3+i}} \hspace{1mm} {c_{3+j}}^{2}]$, $[c_{i}^{2} c_{j}]$, $[{c_{3+i}}^{2} c_{j}]$, $[c_{i}^{2} {c_{3+j}}]$, $[{c_{3+i}}^{2} {c_{3+j}}]$, for $(i,j) = (1,2), (1,3), (2,3)$ and $[c_{i}c_{j}c_{k}]$, $[{c_{3+i}} c_{j}c_{k}]$, $[c_{i} {c_{3+j}} c_{k}]$,$[c_{i} c_{j}{c_{3+k}}]$, $[{c_{3+i}} \hspace{1mm} { c_{3+j}} c_{k}]$, $[{c_{3+i}} c_{j}{c_{3+k}}]$, $[c_{i} {c_{3+j}} \hspace{1mm} {c_{3+k}}]$, $[{c_{3+i}} \hspace{1mm} {c_{3+j}} \hspace{1mm} {c_{3+k}}]$, for $(i,j,k) = (1,2,3), (1,3,2)$. These corresponds to the classes of cycles $[e_{i}^{+1} e_{j}^{+2}]$, $[e_{i}^{-1} e_{j}^{2}]$, $[e_{i}^{+1} e_{j}^{-2}]$, $[e_{i}^{-1} e_{j}^{-2}]$, $[e_{i}^{+2} e_{j}^{+1}]$, $[e_{i}^{-2} e_{j}^{+1}]$, $[e_{i}^{+2} e_{j}^{-1}]$, $[e_{i}^{-2} e_{j}^{-1}]$, for $(i,j) = (1,2), (1,3), (2,3)$ and $[e_{i}^{+1} e_{j}^{+1}e_{k}^{+1}]$, $[e_{i}^{-1} e_{j}^{+1}e_{k}^{+1}]$, $[e_{i}^{+1} e_{j}^{-1}e_{k}^{+1}]$,$[e_{i}^{+1} e_{j}^{+1}e_{k}^{-1}]$, $[e_{i}^{-1} e_{j}^{-1}e_{k}^{+1}]$, $[e_{i}^{-1} e_{j}^{+1}e_{k}^{-1}]$,$[e_{i}^{+1} e_{j}^{-1}e_{k}^{-1}]$, $[e_{i}^{-1} e_{j}^{-1}e_{k}^{-1}]$, for $(i,j,k) = (1,2,3), (1,3,2)$. \end{exmp} \begin{exmp} For the graph $G_2$ assign to the three oriented edges $e_{1},e_{2},e_{3}$ the colors $c_1, c_2, c_3$, respectively. The graph is bipartite so only paths of even length are possible. The number of inequivalent nonperiodic colorings induced by the nonperiodic cycles of length $N$ is given by \begin{equation} \Omega(N, T_{G_{2}})= \frac{1}{N} \sum_{g|N} \mu (g) Tr T_{G_{2}}^{\frac{N}{g}} \end{equation} For $N=2$, $\Omega(2, T_{G_{2}})=6$. The classes are $[e_{1} e_{2}^{-1}]$, $[e_{1}^{-1} e_{2}]$, $[e_{1} e_{3}^{-1}]$, $[e_{1}^{-1} e_{3}]$, $[e_{2} e_{3}^{-1}]$, $[e_{2}^{-1} e_{3}]$. The induced colorings are $[c_{1} {c_{5}}]$, $[{c_{4}}, c_{2}]$, $[c_{1} {c_{6}}]$, $[{c_{4}} c_{3}]$, $[c_{2} {c_{6}}]$, $[{c_{5}} c_{3}]$. For $N=4$ , $\Omega(4, T_{G_{2}})=6$. The paths are $[e_{1} e_{2}^{-1} e_{3} e_{2}^{-1}]$, $[e_{1} e_{2}^{-1} e_{1} e_{3}^{-1}]$, $[e_{1} e_{3}^{-1} e_{2} e_{3}^{-1}]$, $[e_{1}^{-1} e_{2} e_{1}^{-1}e_{3}]$, $[e_{1}^{-1} e_{2} e_{3}^{-1}e_{2}]$, $[e_{1}^{-1} e_{3} e_{2}^{-1}e_{3}]$. To these classes correspond the colorings $[c_{1} {c_{5}} c_{3} {c_{5}}]$, $[c_{1} {c_{5}} c_{1} {c_{6}}]$, $[c_{1} {c_{6}} c_{2} {c_{6}}]$, $[{c_{4}} c_{2} {c_{4}}c_{3}]$, $[{c_{4}} c_{2} {c_{6}}c_{2}]$, $[{c_{4}} c_{3} {c_{5}}c_{3}]$. The graph has no loops so strings of two or more beads with a same color is not possible. \end{exmp} \begin{ack} I would like to thank Prof. C. Storm (Adelphi University, USA) for sending me his joint paper with G. Scott on the coefficients of Ihara zeta function. Also, many thanks to Prof. Asteroide Santana for his help with the figures, latex commands and determinants. \end{ack}
\section{Introduction} The Schwinger effect~\cite{Sch51} of pair production in a constant electric field is one of the beautiful predictions of QED. The production rate of a pair of particles with masses $m$ and charges $\pm e$ is exponentially suppressed for weak fields $E$ as \begin{equation} P\propto \,{\rm e}\,^{-\pi m^2/|e E|}, \label{dr} \end{equation} where the exponent has the meaning of a classical Euclidean action associated with the tunneling. With increasing $E$, fluctuations about the classical (Euclidean) trajectory, which has the form of a circle of radius $R=m/|e E|$, become more and more important, but nothing special happens even for $|e E|\gtrsim 1/m^2$, when the saddle-point approximation~\cite{AAM82} in the path integral over (pseudo)particle trajectories ceases to be applicable. This smooth behavior drastically differs from that \cite{FTs85,Bur86,BP92}% \footnote{For a review see Ref.~\cite{AMSS00}.} in string theory, where there exists an instability for the fields larger than a certain critical value of the order of the string tension: $|e E_c|\sim 1/2\pi\alpha'$. This instability is apparently not related to the Schwinger effect and takes place even for a neutral string with opposite charges at the ends, thus occurring because stretching of the string then costs negative energy. Recently, a very interesting conjecture about an existence of such a critical electric field for ${\cal N}=4 $ super Yang-Mills (SYM) has been made in Ref.~\cite{SZ11}, based on a holographic description~\cite{GSS02} of the Schwinger effect via the AdS/CFT correspondence. In this approach the saddle-point trajectory is governed in the supergravity approximation by a minimal surface spanned by a circle. The goal of the present article is to account for fluctuations about this minimal surface in anti-de Sitter (AdS), which result in a preexponential factor. We evaluate the decay rate using a representation of the Wilson loop in ${\cal N}=4$ SYM through a path integral over reparametrizations of the boundary circle with the action prescribed by AdS/CFT, that holographically captures fluctuations in the bulk. We show that the fluctuations do not cure the instability, and the critical value of electric field is simply shifted in the quadratic approximation (as is displayed in \eq{Ec} below). Our results confirm the expectation that the Schwinger effect in ${\cal N}=4$ SYM at strong coupling does not look as it does in QED but is rather as it would appear in string theory. \section{The setup} The saddle-point (Euclidean) action that determines the exponent of the production rate in a constant electric field is given by the minimum of \begin{equation} S= 2\pi R m -\pi |e E | R^2 - \ln W\left({\rm circle}\right) \label{Seff} \end{equation} with respect to the radius $R$ of the circle. This effective action emerges after performing the path integral over (pseudo)particle trajectories, representing the vacuum-to-vacuum amplitude in an external constant electric field. In the path integral, first it is shown that the integral over the proper time has a saddle point, and then one can show that the saddle-point trajectory is a circle with (a large) radius $R=m/|e E|$~\cite{AAM82}. The circle lies in the $\mu,\nu$-plane, when the constant electric field $E$ is given by the $\mu,\nu$-component of the field strenght $F_{\mu\nu}$. 
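For orientation, let us spell out the weak-field saddle point explicitly (an elementary step we add for completeness, keeping only the first two terms of \eq{Seff}):
\begin{equation*}
\frac{\partial}{\partial R}\left(2\pi R m -\pi |e E | R^2\right)=0
\quad\Longrightarrow\quad
R=\frac{m}{|e E|},
\qquad
\left.\left(2\pi R m -\pi |e E | R^2\right)\right|_{R=m/|e E|}=\frac{\pi m^2}{|e E|},
\end{equation*}
which is precisely the exponent in \eq{dr}.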
The existence of this saddle point is justified for small $ |e E|$, when the logarithm of the Wilson loop on the right-hand side of \eq{Seff} is subleading at weak couplings and contributes only to the preexponential. The holographic description of the Schwinger effect in SYM relies~\cite{GSS02} on the spherical solution~\cite{BCFM98,DGO99} of the Euler--Lagrange equations for the minimal surface in $AdS$ enclosed by a circle in the boundary. We shall write it for the upper half-plane (UHP) parametrization of the surface: $z=x+{\rm i} y$ ($y>0$), which is customary in string theory, using the standard embedding space coordinates $Y_{-1}$, $Y_0$, $Y_1$, $Y_2$, $Y_3$, $Y_4$ obeying \begin{equation} Y\cdot Y \equiv -Y_{-1}^2- Y_0^2+Y_1^2+Y_2^2+Y_3^2+Y_4^2=-1. \label{=1} \end{equation} The solution reads \begin{eqnarray} Y_1&= &\frac{1-x^2-y^2}{2y},\quad Y_2 = \frac{x}{y}, \nonumber \\* Y_{-1} &= &\frac{1+x^2+y^2}{2y},\quad Y_4 = Y_0=Y_3=0, \label{solu} \end{eqnarray} or \begin{subequations} \begin{eqnarray} Z&\equiv&\frac{R}{Y_{-1}-Y_4}=R\frac{2y}{1+x^2+y^2} ,\\ X_1&\equiv& Z Y_1=R\frac{1-x^2-y^2}{1+x^2+y^2},\\ X_2&\equiv& Z Y_2=R\frac{2x}{1+x^2+y^2}, \end{eqnarray} \label{PP} \end{subequations} on the Poincare patch, so the induced metric \begin{equation} {\rm d} \ell^2 = \frac{{\rm d} x^2+{\rm d} y^2}{y^2}. \label{Poin} \end{equation} is the Poincare metric of the Lobachevsky plane. The solution~\rf{PP} obeys $X_1^2+X_2^2+Z^2=R^2$ and corresponds to a circle of the radius $R$ in the boundary when $Z=0$. For these coordinates the Euler--Lagrange equations in the embedding $Y$-space are \begin{equation} \left(\Delta-2\right)Y_i=0,\quad \Delta =y^2\left( \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2} \right) \label{eqY} \end{equation} and the ``mass'' 2 arises because of the presence of the Lagrange multiplier which is used to implement \eq{=1}. \section{Dirichlet Green function and Poisson formula in $\mathbf{AdS}$} As in flat space, we found it most convenient to use an extension of Douglas' algorithm~\cite{Dou31} for finding minimal surfaces to the Lobachevsky plane. Our program is to first construct the Dirichlet Green function of \eq{eqY} on the Lobachevsky plane, and then to derive the version of the Poisson formula relevant to the Lobachevsky plane. This formula will then allow us to reconstruct the minimal surface from its boundary value, so the problem of finding the minimal surface will be reduced to the problem of minimizing a boundary functional with respect to reparametrizations. Finally, we use this boundary functional for evaluations of bulk fluctuations about the minimal surface. The Dirichlet Green function on the Lobachevsky plane is a function of the (geodesic) distance between the images of the points $(x_1,y_1)$ and $(x_2,y_2)$, which is determined by the metric \rf{Poin} to be \begin{equation} L^2=\frac{(x_1-x_2)^2+(y_1-y_2)^2}{4 y_1 y_2}. \end{equation} Acting by the operator on the left-hand side of \eq{eqY}, we obtain the Legendre equation whose solution for the Dirichlet Green function is \begin{eqnarray} G\left(x_1,y_1;x_2,y_2\right)&=& -\frac{3}{4\pi} \Big(\frac{(x_1 - x_2)^2 + y_1^2 + y_2^2}{4 y_1 y_2} \nonumber \\* &&\times \ln \frac{(x_1 - x_2)^2 + (y_1 - y_2)^2}{(x_1 - x_2)^2 + (y_1 + y_2)^2} +1 \Big). 
\nonumber \\* && \label{Green} \end{eqnarray} To obtain the Poisson formula, which reconstructs a harmonic function in the Lobachevsky plane ({i.e.}\ a function which obeys \eq{eqY}) from its value at the boundary, we take the normal derivative of \eq{Green} near the boundary at a certain minimal value $y_2=y_{\rm min}$ to which the boundary is moved as usual to regularize divergences: \begin{eqnarray} \left. \frac{\partial G\left(x_1,y_1;x_2,y_2\right)}{\partial y_2}\right|_{y_2=y_{\rm min}} &=& \frac{2y_1^2 y_{\rm min}}{\pi((x_1-x_2)^2+y_1^2)^2}\nonumber \\* &&+{\cal O}(y_{\rm min}^3). \end{eqnarray} We shall return soon to a physical meaning of this procedure. Finally, we obtain \begin{equation} Y_i(x,y)= \int_{-\infty} ^{+\infty}\frac {{\rm d} s} \pi \, \frac{2Y_i(t(s)) y^2 y_{\rm min}}{((x-s)^2+y^2)^2}, \label{LDir} \end{equation} where $Y_i(t(s))$ is the boundary value and the function $t(s)$ is a possible reparametrization of the boundary, which plays a crucial role in Douglas' algorithm. This is an extension of the well-known Poisson formula to the Lobachevsky plane. It is instructive to see how the known solution~\rf{solu} for a circular boundary is reproduced by \eq{LDir} from the boundary values \begin{eqnarray} Y_1(t) &= &\frac{1-t^2}{2y_{\rm min}},\quad Y_2(t) = \frac{t}{y_{\rm min}},\nonumber \\* Y_{-1}(t) &= &\frac{1+t^2}{2y_{\rm min}},\quad Y_0(t) = Y_3(t) =Y_4(t) = 0 ~~~ \label{63} \end{eqnarray} for $t(s)=s$, which means that no reparametrization of the boundary is required for a circle, in analogy with the situation for the ordinary Euclidean plane. The reason for this is that the coordinates in use are conformal for a circle. Note that $y_{\rm min}$ is nicely canceled, when \rf{63} in substituted in \eq{LDir}. \section{An extension of Douglas' functional to $\mathbf{AdS}$} As in flat space, to obtain the minimal surface we have to minimize the quadratic action, which now reads \begin{eqnarray} S&=&\int {\rm d} x \,{\rm d} y \left[\frac 12 \partial_a Y(x,y) \cdot \partial _a Y(x,y)\right.\nonumber \\* &&\left.+ \frac{\xi}{y^2}\left( Y(x,y)\cdot Y(x,y)+1 \right)\right], \label{Squa} \end{eqnarray} where $Y_i(x,y)$ is recovered in UHP from the boundary value~\rf{63} by \eq{LDir} and the Lagrange multiplier $\xi(x,y)=1$ at the minimum. This obtained value of $S$ has to be minimized with respect to the functions $t(s)$, reparametrizing the boundary. The minimization is required for $Y_i$'s to obey a conformal gauge, where $\sqrt{g}$ would coincide with the quadratic integrand in \eq{Squa}. Remarkably, this can be formulated as the problem of minimizing a boundary functional which is an extension of the flat-space Douglas integral \begin{equation} S_{\rm flat}=\frac{1}{4\pi} \int {{\rm d} s_1} \int {{\rm d} s_2}\,\frac{ (x_B(t(s_1))- x_B(t(s_2)))^2 }{ (s_1-s_2)^2} \label{oDou} \end{equation} to AdS space. The Douglas integral~\rf{oDou} turned out to be extremely useful for representing the area-law behavior of large Wilson loops in QCD. Reference~\cite{MO08} contains a detailed description of this method. An advantage of using such a representation of the minimal area is that path integrals over trajectories $x^\mu(t)$ are now Gaussian and easily doable, while nonlinearities are encoded in a path integral over reparametrizations, whose extension to ${\cal N}=4$ will be soon considered. 
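Incidentally, the reconstruction formula \eq{LDir} discussed above is easy to verify numerically (a check of ours, independent of the argument): integrating the kernel against the boundary values \rf{63} with $t(s)=s$ reproduces the solution \rf{solu} pointwise. Since $y_{\rm min}$ cancels between the kernel and the boundary data, it is set to $1$ in the Python sketch below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def reconstruct(boundary, x, y):
    # formula (LDir) with t(s) = s; y_min absorbed into the boundary data
    f = lambda s: 2.0 * boundary(s) * y**2 / (np.pi * ((x - s)**2 + y**2)**2)
    return quad(f, -np.inf, np.inf)[0]

x, y = 0.3, 0.7
print(reconstruct(lambda t: (1 - t**2) / 2, x, y), (1 - x**2 - y**2) / (2 * y))  # Y_1
print(reconstruct(lambda t: t, x, y), x / y)                                     # Y_2
\end{verbatim}
In both lines the two printed numbers coincide to the accuracy of the quadrature.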
Because $Y_i$'s obey \eq{eqY}, the integral over $y$ in \eq{Squa} reduces to a boundary term, after which the integral over $x$ yields \begin{widetext} \begin{equation} S= -\frac{1}\pi \int {{\rm d} s_1} \int {{\rm d} s_2}\, Y_B(t(s_1))\cdot Y_B(t(s_2))\,y_{\rm min}^2\left[\frac{1}{(s_1-s_2)^4}\right]_{\rm reg} \label{ADou} \end{equation} with \begin{equation} \left[\frac{1}{(s_1-s_2)^4} \right]_{\rm reg}= \left( \frac{1}{((s_1-s_2)^2+4y_{\rm min}^2)^2} +\frac{32y_{\rm min}^2}{((s_1-s_2)^2+4y_{\rm min}^2)^3} -\frac{384 y_{\rm min}^4}{((s_1-s_2)^2+4y_{\rm min}^2)^4} \right). \label{Greg} \end{equation} \end{widetext} This is the required boundary functional whose minimum with respect to the functions $t(s)$ equals the minimal area. The integral on the right-hand side of \eq{ADou} looks pretty similar to that in \eq{oDou}, while the denominator in \eq{ADou} is $(s_1-s_2)$ to degree four rather than square as in \eq{oDou}. This is a manifestation of the well-known divergences which are regularized by shifting the boundary from $y=0$ to $y=y_{\rm min}$. In the dual language of D-branes this corresponds~\cite{Mal98,RY} to the breaking of the $U(N)$ symmetry down to $U(N-1)\times U(1)$ by assigning a finite mass to the $U(1)$ gauge boson. If this mass is associated with shifting the boundary to the slice $Z=\varepsilon$, then \begin{equation} y_{\rm min}(t)=\frac\varepsilon{2R}(t^2+1) \label{ymvsrm} \end{equation} from \eq{63}. The right-hand side of \eq{ADou} always diverges like \begin{equation} S_{\rm div}= 2\pi \frac {R-\varepsilon}\varepsilon, \end{equation} which comes from the domain $(s_1-s_2)\sim y_{\rm min}$. It is universal and does not depend on the reparametrizing function $t(s)$. Subtracting the divergent part, we obtain for the regularized part \begin{eqnarray} \lefteqn{ S_{\rm reg}\equiv S-S_{\rm div} =\frac{1}{2\pi} \int {{\rm d} s_1} \int {{\rm d} s_2} }\nonumber \\* &&\times(Y_B(t(s_1))- Y_B(t(s_2)))^2y_{\rm min}^2 \left[\frac{1}{(s_1-s_2)^4}\right]_{\rm reg}~~. \label{Sreg} \end{eqnarray} The domain $(s_1-s_2)\sim y_{\rm min}$ now gives a finite contribution to this integral in view of the important formula \begin{equation} \int {{\rm d} s}\, s^2 \left[\frac{1}{s^4} \right]_{\rm reg}=0. \label{impo} \end{equation} \section{Reparametrization path integral in ${\cal N}=4$ SYM} We represent the circular Wilson loop in ${\cal N}=4$ SYM by the reparametrization path integral of the form \begin{equation} W\left(\hbox{circle}\right)= \,{\rm e}\,^{-\sqrt{\lambda}S_{\rm div}/2\pi} \int {\cal D}_{\rm diff} t(s) \,{\rm e}\,^{-\sqrt{\lambda}S_{\rm reg}[t]/2\pi}, \label{ansatz} \end{equation} where \begin{equation} S_{\rm reg}[t]=\frac1{2\pi} \int {\rm d} s_1 {\rm d} s_2\,{\left(t(s_1)-t(s_2)\right)^2} \left[\frac{1}{(s_1-s_2)^4} \right]_{\rm reg} \label{subt} \end{equation} since $S_{\rm div}$ does not depend on the reparametrization as is already pointed out. The constant $\sqrt{\lambda}$ is prescribed by the AdS/CFT correspondence to be \begin{equation} \sqrt{\lambda}= \frac{R^2_{AdS}}{\alpha'}, \end{equation} but we shall simply consider it as a parameter of the ansatz to be fixed by comparing with the Wilson loop in the ${\cal N}=4$ SYM perturbation theory. Let us substitute for the reparametrizing function \begin{equation} t(s)=s+\frac{1}{\sqrt[4]{\lambda}} \,\beta(s). 
\end{equation} Because of \eq{impo} we then have \begin{equation} \sqrt{\lambda}S_{\rm reg}= \frac{1}{2\pi}\int {\rm d} s_1 {\rm d} s_2\left(\beta(s_1)-\beta(s_2)\right)^2 \left[\frac{1}{(s_1-s_2)^4} \right]_{\rm reg}. \label{integ} \end{equation} While \eq{integ} is exactly equivalent to \eq{subt}, we shall restrict ourselves by an expansion in $1/{\sqrt[4]\lambda}$ to quadratic order because the measure in the path integral~\rf{ansatz} is the one for integrating over subordinated functions with ${\rm d} t(s)/{\rm d} s \geq 0$ and, as explicitly constructed in Ref.~\cite{MO08}, is highly nonlinear. Only to the quadratic order it can be substituted by the ordinary Lebesgue measure. Before evaluating the path integral~\rf{ansatz}, it is worth noting that the integral~\rf{integ} has three zero modes \begin{equation} \beta_1(s)=1,\quad \beta_2(s)=s,\quad \beta_3(s)=s^2, \label{sl2t} \end{equation} which is a consequence of three $SL(2,\Bbb{R})$ symmetries. For the second and third ones, \eq{impo} is again important. These three zero modes result in a preexponential factor of $\lambda^{-3/4}$ in a full analogy with the string theory analysis~\cite{DG00}. We thus obtain from the ansatz~\rf{ansatz} at large $\lambda$: \begin{equation} W\left(\hbox{circle}\right)\propto \lambda^{-3/4}\,{\rm e}\,^{\sqrt{\lambda}}, \label{Wfin} \end{equation} reproducing the result~\cite{ESZ00} for the ${\cal N}=4$ SYM perturbation theory, providing $\lambda$ is identified with the 't~Hooft coupling. Since fermions and the RR field, which are present in the IIB string representation of the ${\cal N}=4$ SYM, will manifest themselves only to next orders, we believe that the constant factor in \eq{Wfin} is also calculable like that~\cite{KT08} in the string representation. \section{Reparametrization path integral in ${\cal N}=4$ SYM (continued)} In the derivation of \eq{Wfin}, we have mostly paid attention to the dependence of the result on $\lambda$ rather than on $1/\varepsilon$ which plays the role of the $U(1)$ boson mass~\cite{Mal98,RY} \begin{equation} m=\frac{\sqrt{\lambda}}{2\pi \varepsilon} \label{Wmass} \end{equation} as is already mentioned. We shall now concentrate on the dependence of $W\left(\hbox{circle}\right)$ on $\varepsilon$, looking in detail at the divergences regularized by $\varepsilon$. We are thus interested in the contributions from the reparametrization path integral to the effective action, which are important at small $\varepsilon$. The calculation is pretty much similar to that of Ref.~\cite{MO10a} for a $T\times R$ rectangle in flat space, where the L\"uscher term was obtained from the path integral over reparametrizations. In that case $T/R$ was large, now $R/\varepsilon$ is large. The idea is to perform a mode expansion \begin{equation} \beta(s)= \sum _n \beta_n f_n(s) \label{modes} \end{equation} using a complete set of orthogonal basis functions $f_n(s)$ (in general complex ones obeying $f_{-n}(s)=f^*_n(s)$), and then do the Gaussian integrals over $\beta_n$'s. We can restrict ourselves by those modes for which the integral~\rf{integ} has maximal ``divergence'' $\sim (R/\varepsilon)^\nu$. We then obtain \begin{equation} \prod_n \left(\frac R{\varepsilon}\right)^{-\nu/2} = \left(\frac{R}{\varepsilon}\right)^{\nu/2} =\,{\rm e}\,^{\frac \nu 2 \ln (R/\varepsilon)}, \label{QQ} \end{equation} where the product goes over those modes for which the integral~\rf{integ} is $\sim (R/\varepsilon)^\nu$. 
We have used here the $\zeta$-function regularization of the product and taken into account that $f_n(s)$'s are complex functions, so $n$ ranges from $-\infty$ to $+\infty$. What is the value of $\nu$? We have no reason to expect that typical functions in the path integral over $\beta(s)$ are continuous, as it is the case for usual path integrals with Wiener measure. Moreover, for smooth functions we can substitute \mbox{$(\beta(s_1)-\beta(s_2))^2=$} \mbox{$(s_1-s_2)^2 ({\rm d} \beta(s_1)/{\rm d} s_1)^2$} and their contribution to~\rf{integ} vanishes in view of \eq{impo}. This is intimately linked to the above mentioned $SL(2,\Bbb{R})$ symmetry of the integral. In general, $\nu$ is determined by the Hausdorff dimension of $\beta(s)$. We assume that typical trajectories in the reparametrization path integral have Hausdorff dimension zero% \footnote{We remind that the Hausdorff dimension of the usual Brownian trajectories is one half.}, as was advocated in Ref.~\cite{BM09}. This corresponds to $\nu=3$. Some more arguments in favor of this are given in Appendix~\ref{appA}, where we discuss in detail the Fourier expansion of $\beta(s)$. \section{Schwinger effect in ${\cal N}=4$ SYM} In the gravity approximation, when fluctuations about the minimal surface are not taken into account, the action~\rf{Seff} reads~\cite{GSS02} \begin{equation} \sqrt{\lambda} S_{\rm cl}= \sqrt{\lambda} \pi\left( \cosh\rho -1 -\frac{|e E|}{m^2}\sinh^2 \rho \right), \label{Scl} \end{equation} where $\sinh\rho = R/\varepsilon=2\pi m R /\sqrt{\lambda}$. This formula is applicable, strictly speaking, for $|e E|\lesssim m^2$, when the minimization of $S_{\rm cl}$ with respect to $\rho$ gives \begin{equation} \cosh\rho_0 = \frac{2\pi m^2}{|e E|\sqrt{\lambda}}. \label{solu0} \end{equation} As was pointed out in Ref.~\cite{SZ11}, this equation has no solution for $\rho_0$ when $|e E|>2 \pi m^2/\sqrt{\lambda}$, which implies the existence of a critical electric field. We are now in a position to answer the question as to how fluctuations about the minimal surface affect this very interesting result. The calculation of their contribution to the effective action has been already obtained in \eq{QQ}. For the sum of $S_{\rm cl}$ plus the contribution from fluctuations about the minimal surface in the quadratic approximation we have \begin{eqnarray} \sqrt{\lambda} S_{\rm cl+1loop}&=& \sqrt{\lambda} \pi\left( \cosh\rho -1 -\frac{|e E|}{m^2}\sinh^2\rho \right) \nonumber \\* &&-\frac \nu2 \ln \cosh\rho. \label{Scl1} \end{eqnarray} The negative sign for the contribution from the fluctuations in the second line of this formula is like for the L\"uscher term in string theory. We have mentioned already this analogy, but would like to emphasize that it may have far-reaching consequences. The minimum of the effective action~\rf{Scl1} is now reached for \begin{equation} \frac{1}{\cosh\rho_0 }= \frac{\sqrt{\lambda}}\nu \left(1-\sqrt{1-\frac{\nu |e E|}{\pi m^2}}\right), \label{solu1} \end{equation} so the solution \rf{solu0} is only slightly modified by the quantum fluctuations. They simply shift the critical value of the constant electric field to the value \begin{equation} |e E_c| = \pi m^2 \left( \frac{2}{\sqrt{\lambda}}-\frac{\nu}{\lambda} \right), \label{Ec} \end{equation} where $\nu=3$ as is argued. Thus the quantum fluctuations about the minimal surface result in a $1/\sqrt{\lambda}$ correction at large $\lambda$, as it might be expected. 
Our final comment is on how the one-loop effective action~\rf{Scl1} agrees with that resulting in superstring theory from semiclassical fluctuations about the minimal surface. The case of an open superstring in $AdS_5\times S^5$ with the ends lying in the boundary circle was elaborated in Refs.~\cite{DGT00,SY08,KT08}. It is tempting to assume that $\nu=3$ is just the number of the $SL(2,\Bbb{R})$ zero modes, whose contribution has gotten regularized by nonvanishing $\varepsilon$. This issue will be addressed elsewhere. \vspace*{-4mm} \begin{acknowledgments} \vspace*{-2mm} We are grateful to Emil Akhmedov, Pawel Caputa, Charlotte Kristjansen, Andrey Mironov, and Gordon Semenoff for useful discussions. J.A. thanks FNU, the Danish Research Council for Independent Research, for financial support via the project ``Quantum Gravity and the Role of Black Holes''. Y.M. thanks the NBI High Energy Theory group for hospitality and financial support. \end{acknowledgments}
1,314,259,992,754
arxiv
\section{Introduction} A renewed impetus into the description of BPS black hole microstates in four dimensions has been sparked by the OSV conjecture~\cite{Ooguri:2004zv} which equates black hole entropy in Type~IIA string theory compactified on a Calabi-Yau threefold $\cal M$ to the modulus squared of the topological string partition function on $\cal M$. The black hole is constructed by wrapping D2-branes around arbitrary two-cycles of $\cal M$ and D4-branes around a four-cycle which is a fixed ample divisor of $\cal M$. With respect to a fixed basis of two-cycles in $H_2({\cal M},\zed)$, and a dual basis of four-cycles in $H_4({\cal M},\zed)$, the D2 and D4~branes carry electric and magnetic charges $\mbf Q_2,\mbf Q_4\in\zed^n$ where $n=h^{1,1}({\cal M})$. We also specify the D0-brane charge $Q_0\in\zed$ and turn off the D6-brane charge. The black hole partition function then takes the symbolic form \begin{equation} Z_{\rm BH}(\mbf Q_4,\mbf\varphi_2,\varphi_0)=\sum_{\mbf Q_2\in\nat_0^n}~\sum_{Q_0\in\nat_0}\, \Omega(\mbf Q_4,\mbf Q_2,Q_0)~{\,\rm e}\,^{-Q_0\,\varphi_0-\mbf Q_2\cdot \mbf\varphi_2} \label{ZBHsymb}\end{equation} where $\Omega(\mbf Q_4,\mbf Q_2,Q_0)$ is the indexed degeneracy of BPS states in spacetime with the specified charges, and $\varphi_0,\mbf\varphi_2$ are chemical potentials. The OSV conjecture then equates (\ref{ZBHsymb}) for large black hole charges to the topological string amplitude $|Z_{\rm top}(\mbf t,g_s)|^2$, where $\mbf t\in\complex^n$ are the K\"ahler parameters of $\cal M$ and the various moduli between the two partition functions are related by the attractor mechanism. The index $\Omega(\mbf Q_4,\mbf Q_2,Q_0)$ can be computed by counting BPS states in the supersymmetric gauge theory on the D4-branes~\cite{Vafa:1995bm}, where the D0-branes are interpreted as instantons and the D2-branes as sources of magnetic flux endowing the instantons with non-trivial first Chern class. Then (\ref{ZBHsymb}) coincides with the partition function of ${\cal N}=4$ topological Yang-Mills theory in four-dimensions, summed over all topological sectors and with the insertion of observables giving the inclusion of D0 and D2~brane charges. The structure of this gauge theory partition function has been recently argued in~\cite{Vafa:2004qa,Aganagic:2004js} to simplify drastically in the case of a local Calabi-Yau space which is a rank~$2$ normal bundle over a compact Riemann surface $\Sigma$, and the D4-brane worldvolume is given by the total space of a non-trivial holomorphic line bundle over $\Sigma$. It is argued that the partition function (\ref{ZBHsymb}) localizes onto field configurations which are invariant under the natural $U(1)$ action on the fibres of this line bundle, and the four-dimensional gauge theory reduces to a two-dimensional gauge theory on the base $\Sigma$ called q-deformed Yang-Mills theory~\cite{Boulatov}--\cite{Klimcik:1999kg}. Various aspects of the black hole partition function on toric Calabi-Yau threefolds from this remarkable two-dimensional point of view are analysed in~\cite{Vafa:2004qa,Aganagic:2004js,Aganagic2:2005,Jafferis:2006ny}. In this paper we will analyse in detail the problem of computing the black hole partition function (\ref{ZBHsymb}) using the sewing formalism of q-deformed Yang-Mills theory for the most general toric singularity $X(p,q)$ in four dimensions.\footnote{A word of caution about notation. 
In the literature on q-deformed gauge theory the symbol $q$ is used to denote the q-deformation, which in the topological string setting is given by $q={\,\rm e}\,^{-g_s}$. In this paper we will only use $q$ to denote the integer modulus of the toric four-manifold, always writing the q-deformation explicitly as ${\,\rm e}\,^{-g_s}$ with the usual identification $g_s=g_{\rm YM}^2/2$ between the string and four-dimensional Yang-Mills coupling constants. Accordingly, we also avoid using the standard notation $q$ for the arguments of modular forms which typically arise in four-dimensional instanton calculations.} This construction extends the $A_{k}$ ALE spaces which were considered in~\cite{Aganagic2:2005}. It also includes the four-manifolds $X(p,1)$ which are the total spaces of the holomorphic line bundles ${\cal O}_{{\mathds{P}}^1}(-p)$ and for which the relevant two-dimensional gauge theory is q-deformed Yang-Mills theory on the sphere which was studied in great detail in~\cite{Aganagic:2004js,Arsiwalla:2005jb}--\cite{Caporaso:2005np}. Our results are in agreement with the recent analysis in~\cite{fcr} of the black hole partition function (\ref{ZBHsymb}) using direct instanton calculations in the four-dimensional gauge theory. One of our main computations is the modular inversion of the heat kernel representation of the q-deformed partition function which casts it as a sum over two-dimensional Yang-Mills instantons living on the blowups of the minimal resolution of the toric singularity. This resummation is necessary to match the topological expansion (\ref{ZBHsymb}), and we immediately find problems with the black hole interpretation of the two-dimensional gauge theory. The semi-classical expansion of the two-dimensional gauge theory contains terms which cannot simply correspond to an indexed degeneracy $\Omega$. We identify part of the expansion with the value of the Chern-Simons partition function on the boundary of the non-compact space $X(p,q)$, which is a generic three-dimensional Lens space $L(p,q)$. To match with the four-dimensional instanton computation we must follow the standard prescription of summing over the admissible {\it non-dynamical} boundary conditions on the gauge fields, whose asymptotic values are governed by Chern-Simons gauge theory. This amounts to identifying that part of the two-dimensional amplitude which corresponds to the perturbative expansion of the Chern-Simons theory about a given vacuum. We will find that, when these terms are stripped and only the classical Chern-Simons contributions are retained, the q-deformed gauge theory reproduces {\it exactly} the contributions given in~\cite{fcr} from ``fractional'' instantons which are stuck at the singularity of $X(p,q)$. In particular, it is not entirely clear exactly how the two-dimensional formalism can reproduce the remaining contributions, such as those coming from instantons which are free to propagate throughout the four-dimensional space. In the course of this analysis we are faced with the derivation of the nonabelian localization formula for the Chern-Simons partition function on a generic Lens space $L(p,q)$ (we have not found a complete and general calculation in the literature). We carry out this computation in detail by using Seifert fibration techniques to evaluate the classical contributions and surgery methods to compute the fluctuation determinants. 
In particular, we derive the explicit mapping between Chern-Simons vacua and two-dimensional Yang-Mills connections, and hence with fractional instantons in four dimensions. We also briefly examine the problem of counting instantons on ruled Riemann surfaces for genus $g\geq1$, which are non-toric four-manifolds for which little is known about the structure of instantons. We use the prescription given above to predict the structure of the full $U(1)$ partition function in four dimensions for any genus, and to predict the contributions from fractional instantons in the nonabelian case for genus $g=1$. In particular, we conclude that the $U(N)$ partition function does not seem to factorize into a $U(1)^N$ contribution, as it does in the genus~$0$ cases. The organisation of this paper is as follows. In Section~\ref{InstSect} we review the structure of four-dimensional instanton partition functions on the toric manifolds $X(p,q)$ and in particular some of the results of~\cite{fcr}. In Section~\ref{qYMToric} we construct the pertinent q-deformed gauge theory amplitude and describe how to extract the four-dimensional instanton contributions. In Section~\ref{CSLpq} we work out the semi-classical expansion of Chern-Simons gauge theory on the Lens spaces $L(p,q)$. In Section~\ref{HigherGenus} describe some analogous computations on the ruled Riemann surfaces. In Section~\ref{Conclusions} we summarize our findings. Finally, some technical details of our calculations are summarized in an appendix at the end of the paper. \section{D-Brane Partition Function on Toric Orbifolds\label{InstSect}} In this section we will study the partition function of a bound system of D0--D2--D4~branes where the D4-branes wrap a four-cycle $X$ of a local Calabi-Yau threefold given by the total space of the canonical line bundle $K_X$. We take $X$ to be a smooth four-dimensional manifold given by the minimal resolution of the quotient space $\mathds{C}^2/\Gamma$, where $\Gamma\cong\zed_p$ is a generic finite cyclic group. The action of $\Gamma$ on the coordinates $(z,w)$ of $\complex^2$ can be linearized locally as \begin{equation} \big(z\,,\,w\big) ~\longmapsto~ \big({\,\rm e}\,^{2\pi {\,{\rm i}\,} q/p}\,z\,,\,{\,\rm e}\,^{2\pi {\,{\rm i}\,} /p}\,w\big) \label{gamma} \end{equation} where $(p,q)$ are coprime integers with $p>q>0$. The orbifold action (\ref{gamma}) generates an $A_{p,q}$ singularity at the origin of $\complex^2$, whose minimal resolution (known as the Hirzebruch-Jung resolution) gives rise to a smooth four-dimensional manifold $X(p,q)$ called a Hirzebruch-Jung space~\cite{BPVdeV}. This space contains a chain of $\ell$ exceptional divisors at the origin given by projective lines $\mathds{P}^1$ whose intersection numbers are summarized by the (generalized) Cartan matrix \begin{equation} C=-\begin{pmatrix} - e_1 & 1 & 0 & \cdots &0\\ 1 & -e_2 & 1& \cdots &0\\ 0 & 1 & - e_3 &\cdots&0\\ \vdots &\vdots & \vdots &\ddots&\vdots\\0&0&0&\cdots&-e_\ell \end{pmatrix} \ . \label{InterMatrix}\end{equation} The moduli of the self-intersection numbers $e_i\ge 2$, $i=1,\ldots,\ell$ are obtained by expanding the rational number $\frac pq>1$ in a simple continued fraction \begin{equation} \frac pq=[e_1,\dots,e_\ell]:= e_1-{1\over\displaystyle e_{2}- {\strut 1\over \displaystyle e_{3}- {\strut 1\over\displaystyle\ddots {}~ e_{\ell-1}-{\strut 1\over e_\ell}}}} \label{pqcontfrac} \end{equation} with $e_1$ the smallest integer $>\frac pq$, and so on. 
For example, for $q=1$ there is only one exceptional divisor with self-intersection number $-e_1=-p$ and the manifold $X(p,1)$ can be regarded as the total space of a holomorphic line bundle ${\cal O}_{{\mathds{P}}^1}(-p)$ of degree $p$ over $\mathds{P}^1$. The other limiting case $q=p-1$ corresponds to an $A_{p-1}$ ALE space, which contains a chain of $\ell=(p-1)$ $\mathds{P}^1$'s each with self-intersection number $-e_i=-2$, and in this case (\ref{InterMatrix}) coincides with the Cartan matrix of the $A_{p-1}$ Dynkin diagram. Standard arguments~\cite{Bershadsky:1995qy} show that the gauge theory living on $N$ D4-branes wrapping $X(p,q)$ is a $U(N)$ Vafa-Witten topologically twisted ${\cal N}=4$ Yang-Mills theory~\cite{Vafa:1994tf}. In this context, the D0-branes are interpreted as instantons of the four-dimensional gauge theory. These instantons can also have a non-vanishing first Chern class due to the presence of D2-branes wrapping the exceptional divisors which generate a non-trivial magnetic flux on the D4-brane worldvolume. Under suitable assumptions~\cite{Vafa:1994tf} the twisted ${\cal N}=4$ Yang-Mills partition function computes the Euler number of the instanton moduli space. The powerful toric localization techniques developed in recent years~\cite{Nekrasov:2002qd}--\cite{Fucito:2005wc} have enabled the computation of this partition function for ALE spaces~\cite{Fucito:2004ry} and for the total spaces of the ${\cal O}_{\mathds{P}^1}(-p)$ bundles with $p=1,2$~\cite{Nakajima:2003pg,Sasaki:2006vq}. However, for generic $A_{p,q}$ singularities an explicit description of the instanton moduli space is not available at the moment and the direct evaluation of the complete instanton partition function on Hirzebruch-Jung spaces is still an open problem. In this paper we will address this problem from a somewhat different perspective. According to the proposal of~\cite{Aganagic:2004js,Aganagic2:2005}, one can localize the four-dimensional path integral via the natural $U(1)$ action on the fibres of the normal bundles over the ${\mathds{P}}^1$'s. In this way one reduces the D4-brane gauge theory to a q-deformed Yang-Mills theory living on the exceptional divisors which arise from the minimal resolution of the toric orbifold singularity. As we will see in the following, the two-dimensional computation gives results in agreement with the direct instanton counting presented recently in~\cite{fcr}, where the instanton partition functions on $A_{p,q}$ toric orbifolds are described by assuming some factorization properties which we review below. For ALE spaces an explicit description of the instanton moduli space in terms of ADHM data has been derived in~\cite{kronaka} and reinterpreted in terms of D-brane bound states in~\cite{Douglas:1996sw}. Let us recall some features of this construction which will be useful in the following. The first and second Chern characters of a $U(N)$ instanton gauge bundle ${\cal E}$ over an ALE space $X_p:=X(p,p-1)$ are given by \begin{eqnarray} {\rm ch}_1 ({\cal E}) &=& \sum_{i=0}^{p-1}\, u_i~{\rm ch}_1 ({\cal T}^i) \ , \nonumber \\[4pt] {\rm ch}_2 ({\cal E}) &=& \sum_{i=0}^{p-1}\, u_i~{\rm ch}_2 ({\cal T}^i) - \frac{K}{p}\, \Omega_{X_p} \qquad \mbox{with} \quad K = \sum_{i=0}^{p-1}\, k_i \ , \label{chern} \end{eqnarray} where ${\cal T}^i$ are principal $U(1)$-bundles of degree $i$ corresponding to the tautological bundles associated to each of the exceptional divisors~\cite{gocho}. In particular, ${\cal T}^0$ is the trivial line bundle. 
In eq.~(\ref{chern}), ${\rm ch}_1 =c_1$ and ${\rm ch}_2 =\frac12\,c_1^2 - c_2$, where $c_1$ and $c_2$ are the first and second Chern classes, respectively, and $\Omega_{X_p}$ is the unit volume form on $X_p$. The coefficients $u_i$, $i=0,1,\ldots, p-1$ are given in terms of partitions $K=\sum_i\,k_i$ and $N = \sum_i\,N_i$ of the numbers of D0 and D4~branes, respectively, into the $i$-th irreducible representation of $\zed_p$ as \begin{equation} u_i =- C_{ij}\, \int_{X_p}\, c_1({\cal E}) \wedge c_1({\cal T}^j) = N_i + k_{i+1} + k_{i-1} - 2 k_i \ . \label{ui} \end{equation} On the ALE space $X_p$ one can distinguish between two classes of instantons, the \textit{regular} and \textit{fractional} instantons~\cite{Fucito:2001ha}. Regular instantons live in the regular representation $k_0=k_1=\ldots=k_{p-1}=k$ of the orbifold group $\zed_p$. As such, they are free to move together with their orbifold images on the whole space $X_p$ and their moduli space for gauge group $U(1)$ coincides with the Hilbert scheme $X_p^{[K]}$ of $K= k\,p $ points on $X_p$. The Poincar\'e polynomial of this space is well-known and is given by~\cite{nakabook} \begin{equation} P\big(t\,\big|\,X_p^{[K]}\big) = \prod_{m=1}^\infty\, \frac{1}{\left(1-{\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau}\,t^{2m}\right)^{p-1}\,\left(1- {\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau} \,t^{2m-2}\right)} \ , \label{PoincareALE}\end{equation} where $\tau = \frac{4\pi {\,{\rm i}\,}}{g_{\rm YM}^2} + \frac{\theta}{2\pi}$ is the complexified gauge coupling. The expression (\ref{PoincareALE}) is a function of the usual Boltzmann weight of regular instantons in the supersymmetric Yang-Mills path integral. By putting $t=1$ in (\ref{PoincareALE}) one gets the Euler characteristic of the moduli space of regular $U(1)$ instantons given by \begin{equation} Z^{U(1)}_{\rm reg} = \frac{1}{\hat\eta(\tau)^{p}} \qquad \mbox{with} \quad \hat\eta(\tau):= \prod_{m=1}^\infty\,\left(1-{\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau} \right) \ . \label{reg} \end{equation} The generic $U(N)$ partition function is given by the $N$-th power of (\ref{reg}). Fractional instantons are instead stuck at the orbifold singularity and have no moduli associated to their position in the four-dimensional space $X_p$. More precisely, they correspond to the non-trivial self-dual abelian gauge connections $A^{\rm frac}$ of the tautological bundle with curvature \begin{equation} F_A^{\rm frac} = -2\pi {\,{\rm i}\,}\,\sum_{i=0}^{p-1}\, u_i\, c_1({\cal T}^i) \ . \label{tauto} \end{equation} From (\ref{chern}) we then immediately realize that their contribution to the path integral is weighted in terms of the intersection matrix \begin{equation} I^{ij}:= \int_{X_p}\, c_1({\cal T}^i) \wedge c_1({\cal T}^j)=- \big(C^{-1}\big)^{ij} \ , \label{C-1} \end{equation} where $C^{-1}$ is the inverse of the Cartan matrix (\ref{InterMatrix}). Since the four-dimensional space $X_p$ is non-compact, the Cartan matrix is not unimodular and so its inverse generally has rational-valued elements (see Appendix~A). Thus fractional instantons indeed have a fractional charge. 
The contribution of fractional $U(1)$ instantons to the ${\cal N}=4$ partition function on ALE spaces has been written elegantly in~\cite{fujii,fcr} by rewriting the second Chern character in (\ref{chern}) as \begin{equation} {\rm ch}_2({\cal E})=\sum_{i=0}^{p-1}\,\big(C^{-1}\big)^{ii}\, u_i-\frac Kp\,\Omega_{X_p}=\mbox{$\frac12$}\,\big(C^{-1}\big)^{ij}\, u_i\,u_j \label{ch2id}\end{equation} to get \begin{equation} Z^{U(1)}_{\rm frac} = \sum_{\mbf u\in \mathds{Z}^{p-1}} \, {\,\rm e}\,^{\pi{\,{\rm i}\,} \tau\, u_i\, (C^{-1})^{ij}\,u_j}~{\,\rm e}\,^{-u_i\,z^i } \ , \label{frac0} \end{equation} where $z^i = (C^{-1})^{ij}\,(\varphi_2)_j$ is the contribution of the magnetic fluxes associated to the D2-branes with chemical potentials $(\varphi_2)_i$. The result for general $U(N)$ gauge group can again be obtained by simply taking the $N$-th power of (\ref{frac0})~\cite{fujii,fcr}. In~\cite{Fucito:2004ry} it was observed that the regular and fractional instanton contributions factorize in the evaluation of the ${\cal N}=4$ partition function on ALE spaces. This result has been further developed and established on a firm mathematical basis in~\cite{fujii}. Thus the full partition function on ALE spaces is given simply by the $N$-th power of the product of (\ref{reg}) and (\ref{frac0}). In~\cite{fcr} analogous formulas are proposed for the regular and fractional instantons on more general $A_{p,q}$ toric orbifolds. The results which follow in the next section indicate that an analogous factorization takes place as well for these more general four-manifolds, even though a more direct analysis is required to properly confirm this property. \section{q-Deformed Gauge Theory on Toric Singularities\label{qYMToric}} In this section we will evaluate the q-deformed Yang-Mills partition function living on the minimal resolution of generic $A_{p,q}$ orbifold singularities. After carefully resolving some subtleties, our results will correctly reproduce the contributions of fractional instantons to the four-dimensional gauge theory partition function of the previous section. This is what one would naturally expect, since the fractional instantons are bounded to the exceptional divisors of the four-dimensional geometry. Moreover, it follows from eq.~(\ref{tauto}) that there is a one-to-one correspondence between fractional instantons on $X(p,q)$ and classical solutions of the q-deformed gauge theory which are obtained as configurations of magnetic monopoles on the ${\mathds{P}}^1$'s (i.e. monopole connections on the tautological line bundles $\mathcal{T}^i$). We will describe this correspondence in more detail in Section~\ref{CSLpq}. \subsection{Sewing Construction of the Partition Function\label{Sewing}} We will begin by computing the partition function of ${\cal N}=4$ topologically twisted Yang-Mills theory on the Hirzebruch-Jung spaces $X(p,q)$ following the approach proposed in~\cite{Aganagic:2004js,Aganagic2:2005}. This method was originally developed in~\cite{Aganagic:2004js} for the $q=1$ cases and then later extended to the $A_k$ ALE spaces in~\cite{Aganagic2:2005}. More general black hole microstate counting was also attempted in~\cite{Aganagic2:2005} using the same strategy, by considering theories derived from more complicated configurations of D4-branes on toric Calabi-Yau threefolds. The main idea underlying the computation consists in cutting the four-manifold $X(p,q)$ into pieces where the theory is simple enough to solve explicitly. 
Then, thanks to the topological nature of the gauge theory, one glues the pieces back together using an appropriate set of rules. The Hirzebruch-Jung spaces can be obtained by patching together $\ell$ copies of $\mathds{C}^2$, suggesting that one should be able to derive the relevant Yang-Mills amplitudes by sewing topological amplitudes on $\mathds{C}^2$. Since both spaces $\mathds{C}^2$ and $X(p,q)$ have ${\mathds T}^2$ isometries, the four-dimensional gauge theory path integral should localize onto fixed points of these torus actions. Based on this observation, a simple set of local rules for constructing four-dimensional amplitudes in terms of q-deformed two-dimensional Yang-Mills theory was proposed in~\cite{Aganagic2:2005}. The important building block in the construction is the topological amplitude on $\mathds{C}^2$. By regarding $\mathds{C}^2$ as a ${\mathds T}^2$ fibration over $\real^2$, it can be written as \begin{equation} {\cal Z}(U,V)=\sum_{R,Q}\,S_{R,Q}~{\rm Tr}_{R}(U)~{\rm Tr}_{Q}(V) \end{equation} where $U$ and $V$ represent the holonomies of the four-dimensional gauge field along the boundaries of the two disks which are fixed by the torus action~\cite{Aganagic2:2005}. The sum runs over the irreducible representations $R,Q$ of the $U(N)$ gauge group that label the boundary conditions on the gauge field through \begin{equation} \int_{M_1} \,F_a=\mbox{$\frac12$}\, n_a(R)\,g_{\rm YM}^2 \qquad \mbox{and} \qquad \int_{M_2}\, F_a=\mbox{$\frac12$}\, n_a(Q)\,g_{\rm YM}^2 \ , \end{equation} where $n_a(R)$ is the length of the $a$-th row in the Young tableau of $R$ shifted by $\frac{1}{2}\,(N+1)-a$. The two-dimensional manifolds $M_1$ and $M_2$ are respectively the ``fiber component'' and the ``base component'' of $\mathds{C}^2$ regarded as a torus bundle. Finally, the quantity $S_{R,Q}$ is the basic correlator \begin{equation} \label{Corr}S_{R,Q}=\Bigl\langle{\rm Tr}_{R}\,\exp\Bigl(-{\,{\rm i}\,}\int_{M_1}\, F\Bigr)~{\rm Tr}_{Q}\,\exp\Bigl({\,{\rm i}\,}\int_{M_2}\, F\Bigr)\Bigr\rangle\end{equation} carrying the dynamical information of the topological Yang-Mills theory. Its explicit expression in terms of group theoretical data is given below. The complete partition function on the toric four-manifold $X(p,q)$ is now gotten by appropriately gluing the patches together. Every time that two disks are glued together along their boundaries (with opposite orientation) a $\mathds{P}^1$ appears, corresponding to a partial resolution of the orbifold singularity described in the previous section. Sewing the boundary holonomies is achieved by integrating over them as \begin{equation} {\cal Z}_{{\mathds{P}}^1}\big(V\,,\,V'\,\big)= \int_{U(N)}\,{\rm d} U~{\cal Z}\big(V\,,\,U\big)\, {\cal Z}\big(U^{-1}\,,\,V'\,\big)\end{equation} in the invariant Haar measure on the unitary group $U(N)$, and using standard orthogonality properties of the group characters $\Tr_R(U)$. However, the amplitude (\ref{Corr}) is expressed using coordinates in which $\mathds{C}^2$ is a trivial fibration over both $M_1$ and $M_2$. For the generic Hirzebruch-Jung spaces, the normal bundle to the $i$-th exceptional divisor is ${\cal O}_{{\mathds{P}}^1}(-e_i)$ corresponding to its non-trivial self-intersection number in (\ref{InterMatrix}). 
It is argued in~\cite{Aganagic:2004js} (see also~\cite{Vafa:2004qa,Blau:2006gh}--\cite{Caporaso:2006kk}) that the dynamical effect of the non-trivial fibration ${\cal O}_{{\mathds{P}}^1}(-e_i)$ is encoded in the term $T_R^{e_i}$ which accompanies the gluing operation creating the corresponding $\mathds{P}^1$, where \begin{equation} T_R={\,\rm e}\,^{-\frac{g_{\rm YM}^2}{4}\,C_2(R)} \label{TRdef}\end{equation} and $C_2(R)$ is the second Casimir invariant of the representation $R$. The presence of this term can also be interpreted~\cite{Aganagic2:2005} as an annulus insertion, within the general framework proposed by~\cite{Bryan:2004iq} to compute the relevant amplitudes using two-dimensional topological quantum field theory. The resulting partition function on $X(p,q)$ is therefore a simple generalization of the partition function on $A_k$ ALE spaces constructed in~\cite{Aganagic2:2005}. The difference here is that, in generating the chain of $\mathds{P}^1$'s by gluing disks, the self-intersection moduli $e_i$ are generically different from~$2$. At the ``ends'' of the chain we should turn off the gauge fields by taking trivial holonomies on the external disks, i.e. trivial representations $R=0$. In this way the partition function on the Hirzebruch-Jung space takes the form \begin{eqnarray} Z_{U(N)}^{q{\rm YM}}\big(X(p,q)\,,\,g_{\rm YM}^2\big)&=& \sum_{R_1,\dots,R_\ell}\, S_{0,R_1}\,S_{R_1,R_2}\,\cdots\,S_{R_{\ell-1},R_\ell}\,S_{R_\ell,0}~ T_{R_1}^{e_1}\,\cdots\,T_{R_\ell}^{e_\ell}\nonumber\\ && \qquad\qquad\times~{\,\rm e}\,^{-{\,{\rm i}\,}\sum_i\,\theta_i\,C_1(R_i)} \ . \label{Zpq}\end{eqnarray} In (\ref{Zpq}) we have inserted one independent two-dimensional $\theta$-angle $\theta_i$, $i=1,\dots,\ell$ for each exceptional divisor, owing to the fact that the divisors define independent homology two-cycles in $H_2(X(p,q),\zed)\cong\zed^\ell$. In the black hole context they are related to chemical potentials for D2-branes wrapping the divisors. We will see this explicitly in Section~\ref{2Dto4D} below, but for the moment they simply weight here the $U(1)$ fluxes through the $\mathds{P}^1$'s represented by the first Casimir invariant $C_1(R)=\sum_a\,n_a(R)$ of the representation $R$~\cite{Aganagic2:2005}. Note that for $q=1$ one has $\ell=1$ and $e_1=p$, and (\ref{Zpq}) reduces to the partition function of q-deformed Yang-Mills theory on the sphere~\cite{Aganagic:2004js}. It remains to write down explicit formulas for the amplitudes (\ref{Corr}) and (\ref{TRdef}) above. Let $\hat n_a(R)$ be the weight vector classifying an irreducible representation $R$ of the gauge group $U(N)$, where the index $a$ spans the rows of the corresponding Young diagram, and let $r(R)$ denote the $U(1)$ charge of $R$. Then the second Casimir invariant of $R$ can be conveniently written as \begin{equation} C_2(R)=\sum_{a=1}^N\, \left(\hat n_a(R)+r(R)-a-\mbox{$\frac{N-1}{2}$}\right)^2=\left\{\begin{array}{ll} \displaystyle\sum_{a=1}^N\, n_a(R)^2 &\ \ \ \ \ \mathrm{for\ }N\mathrm{\ odd} \ , \\ \\ \displaystyle\sum_{a=1}^N\,\left(n_a(R)-\mbox{$ \frac{1}{2}$}\right)^2 &\ \ \ \ \ \mathrm{for\ }N\mathrm{\ even \ , } \end{array}\right. \end{equation} where in the second equality we have absorbed an irrelevant shift into the weight integers $\mbf n(R)\in\mathds{Z}^N$. (We have also dropped an overall factor depending only on $N$.) Note that the trivial representation $R=0$ has weight $\mbf n(0)=\mbf0$. Throughout we will assume that the rank $N$ is odd. 
This restriction is not necessary but it will simplify some of our analysis in the following. The correlators $S_{R_i, R_{i+1}}$ appearing in (\ref{Zpq}) arise from the gluing of disks and annuli to build the necklace of $\ell$ spheres, and they are given by \begin{equation} S_{R,Q}=\sum_{w\in S_N}\,\varepsilon(w)~{\,\rm e}\,^{-\frac{g_{\rm YM}^2} {2}\,w(\mbf n(R)+\mbf\rho)\cdot(\mbf n(Q)+\mbf\rho)} \ . \label{Smatrixdef}\end{equation} This operator is related to the modular S-matrix of the $U(N)$ WZW model in the Verlinde basis (see Section~\ref{CSLpqPartFn}). Here $\mbf\rho$ is the Weyl vector of $U(N)$ (the half sum of positive roots) whose components are given by \begin{equation} \rho_a=\mbox{$\frac{N-2a+1}{2}$} \ , \end{equation} and the elements $w$ of the Weyl group $S_N$ of $U(N)$ act by permuting the entries of $N$-vectors with sign $\varepsilon(w)$. \subsection{Semi-Classical Expansion\label{InstExp}} The Poisson resummation of (\ref{Zpq}) has a natural interpretation as an expansion of the q-deformed gauge theory into a sum over classical solutions~\cite{Arsiwalla:2005jb}--\cite{Caporaso:2005ta}. After some trivial manipulations and dropping of an overall irrelevant normalization, we can recast the partition function in the form \begin{eqnarray} {Z}_{U(N)}^{q{\rm YM}}\big(X(p,q)\,,\,g_{\rm YM}^2\big) &=&\sum_{w\in S_N}\,\varepsilon(w)~\sum_{\mbf n_1,\dots,\mbf n_\ell\in \mathds{Z}^N}\, {\,\rm e}\,^{-\frac{g_{\rm YM}^2}{4}\,\sum_i\, e_i\,{\mbf n}_i^2- \frac{g_{\rm YM}^2}{2}\,\sum_{i<j}\,\mbf{n}_i\cdot\mbf{n}_j+{\,{\rm i}\,}\sum_i\, \mbf{f}_i\cdot\mbf n_i}\nonumber\\ && \qquad\qquad\qquad\qquad\qquad\qquad \times~ {\,\rm e}\,^{-\frac{g_{\rm YM}^2}2\,(\mbf{\rho}\cdot\mbf{n}_1-w( \mbf{\rho})\cdot\mbf{n}_{\ell})} \ , \label{Zpq1}\end{eqnarray} where $\mbf{f}_i=\theta_i\,(1,1,\dots,1)$. Each integer vector $\mbf n_i$, $i=1,\dots,\ell$ classifies one of the original irreducible $U(N)$ representations $R_i$ appearing in (\ref{Zpq}). The quadratic form in the exponent of (\ref{Zpq1}) can be succinctly rewritten in terms of the Cartan matrix (\ref{InterMatrix}) to get \begin{equation} {Z}_{U(N)}^{q{\rm YM}}\big(X(p,q)\,,\,g_{\rm YM}^2\big) =\sum_{w\in S_N}\,\varepsilon(w)~ \sum_{\mbf n_1,\dots,\mbf n_\ell\in \mathds{Z}^N}\, {\,\rm e}\,^{-\frac{g_{\rm YM}^2}{4}\,C_{ij} \,\mbf n_i\cdot\mbf{n}_j+ {\,{\rm i}\,}\mbf f_i\cdot \mbf n_i}~{\,\rm e}\,^{ -\frac{g_{\rm YM}^2}2\,(\mbf{\rho}\cdot\mbf{n}_1-w(\mbf{\rho})\cdot \mbf{n}_{{\ell}})} \end{equation} with an implicit sum over repeated indices. The desired modular inversion is now realized through an elementary gaussian integration and one finds \begin{equation} \begin{split} {Z}_{U(N)}^{q{\rm YM}}\big(X(p,q)\,,\,g_{\rm YM}^2\big) =&\left({\frac{4\pi}{g_{\rm YM}^2}}\right)^{N \,{\ell}/2}\, \frac{1}{\det(C)^{N/2}}\\ & \times\,\sum_{\mbf{m}_1,\dots,\mbf{m}_{{\ell}}\in \mathds{Z}^{N}}\,{\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, (C^{-1})^{ij}\, (\mbf{m}_i+\frac{\mbf f_i}{2\pi})\cdot (\mbf{m}_j+\frac{\mbf f_j}{2\pi})}\\ &\times\,\sum_{w\in S_N}\,\varepsilon(w)~ {\,\rm e}\,^{2\pi {\,{\rm i}\,} (\mbf{m}_i +\frac{\mbf f_i}{2\pi})\cdot ((C^{-1})^{1i}\,\mbf{\rho} +(C^{-1})^{i{\ell}}\,w(\mbf{\rho}))}\\ &\qquad\qquad\times~{\,\rm e}\,^{\frac{g_{\rm YM}^2}{4} \, ((C^{-1})^{1 1}\,\mbf{\rho}^2 +2 (C^{-1})^{1 {\ell}}\,w(\mbf{\rho})\cdot \mbf{\rho}+ (C^{-1})^{{\ell}\R}\,w(\mbf{\rho})^2)} \ . 
\end{split} \end{equation} This expression can be simplified by exploiting the explicit form for the inverse of the Cartan matrix provided in Appendix~\ref{Bappendix}, where the definitions of the integers $q_i$ and $p_i$ appearing below may be found. We obtain \begin{equation} \begin{split} {Z}_{U(N)}^{q{\rm YM}}\big(X(p,q)\,,\,g_{\rm YM}^2\big)=&~\mathcal{N}\, \sum_{\mbf{m}_1,\dots,\mbf{m}_\ell\in \mathds{Z}^{N}}\, {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, (C^{-1})^{ij}\, \mbf{m}_i\cdot\mbf m_j-\frac{4\pi}{g_{\rm YM}^2}\,\mbf m_i\cdot \mbf f_i }\\ & \times\, \sum_{w\in S_N}\,\varepsilon(w)~ {\,\rm e}\,^{\frac{2\pi {\,{\rm i}\,}}{p} \, q_i\,\mbf m_i\cdot (q\,\mbf\rho+w(\mbf\rho))+\frac{g_{\rm YM}^2}{2p}\,w( \mbf{\rho})\cdot \mbf{\rho}} \end{split} \label{Zpqinstsimpl}\end{equation} where \begin{equation} \mathcal{N}:=\left({\frac{4\pi}{g_{\rm YM}^2}}\right)^{N\, {\ell}/2}\, p^{-N/2}~{\,\rm e}\,^{\frac{{\,{\rm i}\,} N}{p}\,(p_i+q_i)\,\theta_i + \frac{g_{\rm YM}^2}{24 p}\, (N^3-N)\,(q+q^\prime\,)- \frac{N}{g_{\rm YM}^2}\, (C^{-1})^{ij} \,\theta_i\,\theta_j} \ . \end{equation} The sum over permutations $w$ in (\ref{Zpqinstsimpl}) does not depend on the instanton numbers $\mbf m_i$ individually, but rather only on the linear combination $q_i\,\mbf m_i$. This special dependence suggests the change of variables \begin{equation} \mbf s_1=q_i\,\mbf m_i \qquad \mbox{and} \qquad \mbf s_j=\mbf m_j \ \ \mathrm{for}\ \ j=2,\dots,{\ell} \end{equation} with $q_1=1$ (see Appendix~\ref{Bappendix}) and \begin{equation} \mbf m_1=\mbf s_1- \sum_{i=2}^{\ell}\, q_i\, \mbf s_i \ . \end{equation} Then the partition function (\ref{Zpqinstsimpl}) takes the form \begin{eqnarray} {Z}_{U(N)}^{q{\rm YM}}\big(X(p,q)\,,\,g_{\rm YM}^2\big)&=&\mathcal{N}\, \sum_{{\mbf{s}}_1,\dots,{\mbf{s}}_\ell\in \mathds{Z}^{N}}\, {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, q_j\, h_{i}\,\mbf{s}_i\cdot\mbf{s}_j-\frac{4}{g_{\rm YM}^2\, p}\, \sum_{j=2}^k\, q_j\,\mbf{s}_1\cdot \mbf f_j}~ {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\frac{q\,\mbf{s}_1^2}{p} }\nonumber\\ && \times\,\sum_{w\in S_N}\,\varepsilon(w)~ {\,\rm e}\,^{\frac{2\pi {\,{\rm i}\,}}{p}\, \mbf{s}_1\cdot(q\,\mbf \rho+w(\mbf\rho))+ \frac{g_{\rm YM}^2}{2p}\,w(\mbf{\rho})\cdot \mbf{\rho}} \ , \label{Zpqinstchange}\end{eqnarray} where the integers $h_i$, $i=1,\dots,\ell$ are defined in Appendix~\ref{Bappendix}. The sum over permutations in (\ref{Zpqinstchange}) now depends only on the single integer $\mbf{s}_1$. Since the Weyl vector $\mbf\rho$ is integer-valued for $N$ odd, the dependence on $\mbf{s}_1$ is periodic with period $p$. It is natural then to decompose the sum over $\mbf{s}_1$ in two separate steps. First we sum over $\mbf{s}_1$ modulo $p$, and then we sum over all integer multiples of $p$. This is achieved through the change of variable \begin{equation} \mbf{s}_1~\longrightarrow~\mbf m+p\,\mbf{s}_1 \end{equation} with $\mbf m\in\zed_p^N$ and $\mbf{s}_1\in\zed^N$. 
After this final change of variable, we can recast the partition function (\ref{Zpqinstchange}) in the form \begin{eqnarray} {Z}_{U(N)}^{q{\rm YM}}\big(X(p,q)\,,\,g_{\rm YM}^2\big)&=&\mathcal{N}\, \sum_{{\mbf{s}}_1,\dots,{\mbf{s}}_{\ell}\in \mathds{Z}^{N}}~ \sum_{\mbf m\in \mathds{Z}^N_p}\,{\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, G_{ij}\,\mbf{s}_i\cdot\mbf{s}_j-\frac{4}{g_{\rm YM}^2\, p}\, \sum_{j=2}^k\, q_j\,\mbf f_j\cdot(p\, \mbf{s}_1+ \mbf m)}\nonumber\\ && \qquad\qquad\qquad\qquad \times~ {\,\rm e}\,^{ -\frac{8\pi^2}{g_{\rm YM}^2}\,{q\,\mbf{s}_1\cdot\mbf m}}~ Z_{U(N)}^{\rm CS}\big(L(p,q)\,,\,\mbf m\big) \ , \label{Finaltheta}\end{eqnarray} where the symmetric $\ell\times\ell$ integer-valued matrix $G_{ij}$ is defined in Appendix~\ref{Bappendix} and \begin{equation} Z_{U(N)}^{\rm CS}\big(L(p,q)\,,\,\mbf m\big)= {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\,\frac{q\,\mbf m^2}{p} }~ \sum_{w\in S_N}\,\varepsilon(w)~ {\,\rm e}\,^{\frac{2\pi {\,{\rm i}\,}}{p}\,\mbf m\cdot(q\,\mbf\rho+w(\mbf\rho))+ \frac{g_{\rm YM}^2}{2p}\,w(\mbf{\rho})\cdot \mbf{\rho}} \ . \label{ZCSUNLpqm}\end{equation} If we now set \begin{equation} g_{\rm YM}^2=\frac{4\pi {\,{\rm i}\,}}{k+N} \label{gYMkCS}\end{equation} then (\ref{ZCSUNLpqm}) can be identified as the partition function of $U(N)$ Chern-Simons gauge theory with level $k\in\nat_0$ on the Lens space $L(p,q)$ in the background of the flat connection defined by the torsion vector $\mbf m\in\zed_p^N$. We will derive this explicitly in Section~\ref{CSLpq}. The relationship between q-deformed Yang-Mills theory and Chern-Simons theory is not surprising and was anticipated by~\cite{Aganagic:2004js,Blau:2006gh,Beasley:2005vf}. In this paper we extend this correspondence very explicitly to the generic chain of exceptional divisors of the Hirzebruch-Jung spaces $X(p,q)$ whose boundaries are the more general Lens spaces $L(p,q)$. \subsection{Emergence of Four-Dimensional Instantons\label{2Dto4D}} In~\cite{Vafa:2004qa,Aganagic:2004js,Aganagic2:2005} the expression (\ref{Zpq}) is conjectured to be the partition function of the topologically twisted $\mathcal{N}=4$ supersymmetric Yang-Mills theory on the four-dimensional toric manifold $X(p,q)$ obtained by blowing up the $A_{p,q}$ singularity. However, as is manifest from the form (\ref{Finaltheta}), this interpretation is {\it a priori} difficult. Although there is a sum over $\ell$ vectors $\mbf{s}_i\in\mathds{Z}^N$ playing the putative role of instanton numbers, this contribution is accompanied in (\ref{ZCSUNLpqm}) by a second sum over permutations $w$ of $N$ elements which is {\it perturbative} in its dependence on the Yang-Mills coupling constant. Such terms are interpreted as fluctuations around the given instanton background. However, in the topologically twisted ${\cal N}=4$ gauge theory, contributions of this sort are absent because the partition function simply computes the Euler characteristic of the instanton moduli space. Similar problems with interpreting two-dimensional gauge theory partition functions as generating functions for instanton counting were noticed in~\cite{Aganagic2:2005,Caporaso:2006kk}. It is clear from eq.~(\ref{Finaltheta}) that the two-dimensional gauge theory treats the asymptotic values of the gauge fields on the boundary $L(p,q)$ of $X(p,q)$ as dynamical quantities, whose evolution is governed by the Chern-Simons action on the Lens space $L(p,q)$. 
This is in marked contrast with what happens in a typical instanton computation, whereby the gauge fields on the boundary are fixed quantities and the partition function is simply obtained by summing over the admissible boundary conditions. In other words, in order to make contact with the four-dimensional instanton computations, it appears natural to drop the perturbative contribution to the Chern-Simons partition function (\ref{ZCSUNLpqm}) from (\ref{Finaltheta}) and keep only its classical part. This modifies (\ref{Finaltheta}) to the partition function \begin{eqnarray} {\mathcal{Z}}_{X(p,q)}^{U(N)}&=&\mathcal{N}\, \sum_{{\mbf{s}}_1,\dots,{\mbf{s}}_{\ell}\in \mathds{Z}^{N}}~ \sum_{\mbf m\in{\mathds{Z}^N_p}}\,{\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, G_{ij}\,\mbf{s}_i\cdot\mbf{s}_j-\frac{4}{g_{\rm YM}^2\, p}\, \sum_{j=2}^{\ell}\, q_j\,\mbf f_j\cdot(p \,\mbf{s}_1+ \mbf m)} \nonumber\\ && \qquad\qquad\qquad\qquad\qquad \times~ {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\,\frac{q\,\mbf m^2}{p} - \frac{8\pi^2}{g_{\rm YM}^2}\,{q\,\mbf{s}_1\cdot\mbf m}} \ . \label{Final0}\end{eqnarray} After evaluating the lattice Gauss sum over $\mbf m\in\zed_p^N$, we obtain finally the result \begin{equation} \label{Final1} {\mathcal{Z}}_{X(p,q)}^{U(N)}=\mathcal{N}\, \sum_{\mbf{u}_1,\dots,\mbf{u}_{\ell}\in \mathds{Z}^{N}}\, {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, (C^{-1})^{ij}\,\mbf u_i\cdot\mbf u_j -\mbf z^i\cdot\mbf u_i} \end{equation} where we have identified $\mbf z^i= 4\,(C^{-1})^{ij}\,\mbf f_j/g^2_{\rm YM}$ in terms of the chemical potentials for D2-branes wrapped on the exceptional divisors of $X(p,q)$. The expression (\ref{Final1}) has a nice interpretation in terms of instanton counting in the ${\cal N}=4$ topologically twisted Yang-Mills theory. Apart from the trivial overall normalization factor, it corresponds precisely to the contribution of fractional instantons to the ${\cal N}=4$ partition function, as provided explicitly in~\cite{fcr} for the family $X(p,q)$ of toric four-manifolds. This identification can also be made by recognizing that the exponent in (\ref{Final1}) is exactly the classical action for fractional instantons, as we showed in Section~\ref{InstSect} (see eq.~(\ref{frac0})). The fact that the two-dimensional gauge theory naturally captures the contributions of fractional instantons can be understood by noting that they are the pullbacks to $X(p,q)$ of the two-dimensional classical solutions. There are in fact bijective correspondences between two-dimensional instantons on a certain orbifold of ${\mathds{P}}^1$, flat three-dimensional connections on the boundary $L(p,q)$, and fractional instantons on the four-manifold $X(p,q)$ as illustrated by the discussion of Section~\ref{InstSect} and shown in detail in Section~\ref{CSLpq} below. Moreover, the expression (\ref{Final0}) yields an even more refined formula providing the contributions of each topological sector of fractional instantons with fixed holonomy $\mbf m\in\zed_p^N$ at infinity from the finite action requirement that the gauge fields be asymptotically flat. The contributions from regular instantons are more elusive, since they can move freely on the whole non-compact four-dimensional space, and require in general some sort of regularization procedure. In the q-deformed gauge theory, these ambiguities are most evident on the toric manifold $\mathds{C}^2$, where the fractional instanton contribution is absent. 
In this case, the gluing rules of Section~\ref{Sewing} above yield a divergent amplitude which must be suitably regularized. To elucidate further the structure of our result and compare with existing results in the literature, in the remainder of this section we will look more closely at the two extreme cases ${{\cal O}_{{\mathds{P}}^1}(-p)}$ ($q=1$) and ALE spaces ($q=p-1$). \subsection{Example: Line Bundles over ${\mathds{P}}^1$\label{P1Example}} The limiting case $X(p,1)$ is the total space of the holomorphic line bundle $\mathcal{O}_{{\mathds{P}}^1}(-p)$ of degree $p$ over ${\mathds{P}}^1$. In this case the Cartan matrix (\ref{InterMatrix}) has just one element $e_1=p$. The partition function (\ref{Zpq}) is that of q-deformed Yang-Mills theory on the sphere, whose instanton expansion was worked out explicitly in~\cite{Caporaso:2005ta} and written for $\theta_1=0$ in the compact form \begin{equation} {Z}_{U(N)}^{q{\rm YM}}\big({\mathcal{O}_{{\mathds{P}}^1}(-p)}\,,\, g_{\rm YM}^2\big)= \sum_{\stackrel{\scriptstyle\mbf N\in\nat_0^p} {\scriptstyle\sum_k\,N_k=N}}~\prod_{k=0}^{p-1}\,\frac{\theta_3\left( \left.\frac{4\pi{\,{\rm i}\,} p}{g_{\rm YM}^2} \right| \frac{4\pi{\,{\rm i}\,} k}{g_{\rm YM}^2}\right)^{N_k}}{N_k!}~Z_{U(N)}^{\rm CS}\bigl(L(p,1)\,,\,\mbf N\bigr) \ , \label{ZP1ZCS}\end{equation} where \begin{equation} \theta_3(\tau|z)=\sum_{m\in \mathbb{Z}}{\,\rm e}\,^{\pi {\,{\rm i}\,}\tau \,{{m}}^2 + 2 \pi {\,{\rm i}\,} {m}\,{z}} \label{Jacobi3def}\end{equation} is a Jacobi-Erderlyi theta-function. Here \begin{equation} Z_{U(N)}^{\rm CS}\bigl(L(p,1)\,,\,\mbf N\bigr)=\exp\left(-\frac{4\pi^2} {g_{\rm YM}^2\,p}\,\sum_{m=0}^{p-1}\, N_m \,m^2\right)~ {\cal W}_p^{\mathrm{inst}}(\,\underbrace{0,\dots,0}_{N_0},\dots, \underbrace{p-1,\dots,p-1}_{N_{p-1}}\,) \ , \label{LS} \end{equation} with \begin{equation} {\cal W}_p^{\mathrm{inst}}(\mbf s)=\left(\frac{4\pi}{g_{\rm YM}^2\,p} \right)^{N/2}~{\,\rm e}\,^{-\frac{g_{\rm YM}^2\,(N^3-N)}{12p}}~ \sum_{w\in S_N}\,\varepsilon(w)~ {\,\rm e}\,^{\frac{2\pi {\,{\rm i}\,}}{p}\,\mbf s\cdot(\mbf\rho+w(\mbf\rho))+ \frac{g_{\rm YM}^2}{4p} ({\mbf\rho}^2 +2w({\mbf\rho})\cdot {\mbf\rho})} \ , \label{LSfluct} \end{equation} is simply the partition function of $U(N)$ Chern-Simons gauge theory on the Lens space $L(p,1)$, the boundary of the total space of the line bundle $\mathcal{O}_{{\mathds{P}}^1}(-p)$, for the vacuum contribution corresponding to the $p$-component partition $\mbf N\in\nat_0^p$ of $N$, as computed in~\cite{Marino:2002fk,Aganagic:2002wv} and in Section~\ref{CSLpq} below. In this case, the only surviving sum in (\ref{ZP1ZCS}) is carried over ordered partitions $\mbf N$ of $N$ into $p$ parts. This ensures that we are summing over gauge inequivalent flat connections on the boundary of $\mathcal{O}_{{\mathds{P}}^1}(-p)$. 
If we now drop the perturbative contribution in (\ref{LSfluct}), we immediately find \begin{eqnarray} {\cal Z}^{U(N)}_{\mathcal{O}_{{\mathds{P}}^1}(-p)}&=& \mathcal{N}\,\sum_{\stackrel{\scriptstyle\mbf N\in\nat_0^p} {\scriptstyle\sum_k\,N_k=N}}\, \exp\left(-\frac{4\pi^2}{g_{\rm YM}^2\,p}\, \sum_{m=0}^{p-1}\, N_m\, m^2\right)~ \prod_{k=0}^{p-1}\,\frac{\theta_3\left( \left.\frac{4\pi{\,{\rm i}\,} p}{g_{\rm YM}^2} \right| \frac{4\pi{\,{\rm i}\,} k}{g_{\rm YM}^2}\right)^{N_k}}{N_k!}\nonumber\\[4pt] &=&\frac{1}{N!}\,{\cal N}\,\left[~\sum_{k=0}^{p-1}\,{\,\rm e}\,^{-\frac{4\pi^2\,k^2 }{g^2_{\rm YM}\,p}}~\theta_3\left(\mbox{$ \left.\frac{4\pi{\,{\rm i}\,} p}{g^2_{\rm YM}} \right| \frac{4\pi{\,{\rm i}\,} k}{g^2_{\rm YM}}$}\right)\right]^N\nonumber\\[4pt] &=& \frac{1}{N!}\,{\cal N}\,\left[~ \sum_{k=0}^{p-1}~\sum_{m\in \mathbb{Z}}\,{\,\rm e}\,^{-\frac{4\pi^2} {g^2_{\rm YM}\,p}\,(k+p\,m)^2}\right]^N= \frac{1}{N!}\,{\cal N}~\theta_3\left (\mbox{$\left. \frac{4\pi{\,{\rm i}\,}}{g_{\rm YM}^2\,p}\right|0$}\right)^N \ . \label{LS1}\end{eqnarray} This coincides with the contribution, derived in~\cite{fcr}, of fractional instantons to the partition function of $\mathcal{N}=4$ gauge theory on $\mathcal{O}_{{\mathds{P}}^1}(-p)$ in the absence of D2-branes. \subsection{Example: ALE Spaces\label{ALEExample}} The partition function of q-deformed Yang-Mills theory on the ALE spaces $A_k$ was first computed in~\cite{Aganagic2:2005} by embedding this space into the local Calabi-Yau threefold $A_k\times\mathds{C}$. This threefold can be thought of as the limit of the usual ALE fibration over $\mathds{P}^1$ as the area of the base $\mathds{P}^1$ becomes infinite. Setting $p=k+1$ and $q=k$, the instanton representation of the partition function is given by \begin{eqnarray} {Z}_{U(N)}^{q{\rm YM}}\big(A_k\,,\,g_{\rm YM}^2\big) &=&\left({\frac{4\pi}{g_{\rm YM}^2}}\right)^{N\,k/2}\, \frac{1}{(k+1)^{N/2}} \\ &&\times\, \sum_{\mbf s_0,{\mbf s}_1,\dots,{\mbf s}_{k-1}\in \mathds{Z}^{N}}~ \sum_{\mbf m\in \zed^N_p}\,{\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, \mathcal{ A}^{ij}\,\mbf s_i\cdot \mbf s_j- \frac{8\pi^2}{g_{\rm YM}^2}\, \sum_{j} \,(k-j) \,\mbf s_j\cdot \mbf m}\nonumber \\ &&\times\, {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\,\frac{k}{k+1}\, \mbf m^2}\,\sum_{w\in S_N}\,\varepsilon(w)~ {\,\rm e}\,^{\frac{2\pi {\,{\rm i}\,} k }{k+1}\,\mbf m\cdot(w(\mbf\rho)- \mbf\rho)+\frac{g_{\rm YM}^2}{2(k+1)}\, (k\,{\mbf\rho}^2 +w({\mbf\rho})\cdot {\mbf\rho})}\nonumber \end{eqnarray} where the symmetric $k\times k$ matrix elements $\mathcal{A}^{ij}$ for $i,j=1,\dots,k-1$ are given by \begin{equation} {\mathcal{A}}^{ij}=\big( C^{-1}\big)^{ij}+\frac{1}{2(k+1)}\,\Big[\big(k+1-i\big)\,\big(k\,(k+1-j)-2 j\big)+\big(k+1-j\big)\,\big(k\,(k+1-i)-2 i\big)\Big] \ , \end{equation} while ${\mathcal{A}}^{0j}=(k+1)\,(k-j) $ for any $j=0,1,\dots,k-1$. To avoid an overly cumbersome expression we have set all $\theta$-angles $\theta_i=0$. We have also used the fact that the Cartan matrix for ALE spaces coincides with the Cartan matrix for the $A_k$ Dynkin diagram \begin{equation} C=\begin{pmatrix} 2 & -1 & 0 & \cdots & 0 \\ -1 &2 & -1& \cdots & 0 \\ 0 &-1 & 2&\cdots & 0 \\ \vdots &\vdots & \vdots &\ddots&\vdots\\ 0&0&0&\cdots&2 \end{pmatrix} \ . 
\end{equation} Dropping the perturbative contribution again we arrive at \begin{equation} \begin{split} {\cal Z}_{A_k}^{U(N)} &={\cal N}\,\sum_{\mbf{u}_1,\dots,{\mbf u}_k\in \mathds{Z}^{N}}\,{\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2}\, (C^{-1})^{ij}\,\mbf u_i\cdot\mbf u_j} \ , \end{split} \end{equation} which is exactly the contribution of fractional instantons to the ${\cal N}=4$ Yang-Mills partition function, derived in~\cite{fcr} via explicit instanton computations, given by the $N$-th power of (\ref{frac0}) for $z^i=0$. \section{Chern-Simons Gauge Theory on Lens Spaces\label{CSLpq}} In this section we will describe in some detail the nonabelian localization of $U(N)$ Chern-Simons gauge theory on the generic three-dimensional Lens spaces $L(p,q)$, and thereby prove some of the assertions made in the previous section. The quantum gauge theory is defined by the path integral \begin{equation} \mathcal{Z}_{U(N)}^{\rm CS}\big(L(p,q) \,,\, k\big) = \int\, {\rm D} A ~ {\,\rm e}\,^{-S_{U(N)}^{\rm CS}(A)} \label{CSpartfn}\end{equation} where \begin{equation} S_{U(N)}^{\rm CS}(A)= \frac{{\,{\rm i}\,} k}{4 \pi }\, \int_{L(p,q)}\, \Tr\left(A \wedge \mathrm{d} A + \mbox{$\frac{2}{3}$}\, A \wedge A \wedge A \right) \label{CSaction}\end{equation} is the Chern-Simons action evaluated on a connection $A$ of a principal $U(N)$ bundle over $L(p,q)$. As is well-known~\cite{Beasley:2005vf}, the partition function (\ref{CSpartfn}) is given exactly by its semi-classical approximation. For this, one takes into account the one-loop radiative correction $k\to k+N$ and sums over all critical points of the Chern-Simons action (\ref{CSaction}), which are simply the flat $U(N)$ gauge connections on $L(p,q)$. The purpose of carrying out this calculation explicitly is two-fold. Firstly, we will correctly identify for the first time the individual flat connection contributions to the semi-classical expansion of (\ref{CSpartfn}), generalizing the results of~\cite{Marino:2002fk,Aganagic:2002wv} to the generic Lens spaces $L(p,q)$ for all $1\leq q<p$ and justifying our identification (\ref{ZCSUNLpqm}). Secondly, in the course of this calculation we will construct the explicit mapping between two-dimensional Yang-Mills instantons on a ${\mathds{P}}^1$ orbifold, flat connections on $L(p,q)$, and fractional instantons on $X(p,q)$, which is the crux of the final results of the previous section. \subsection{Classical Solutions\label{FlatLpq}} We begin by constructing the flat connections on $L(p,q)$ explicitly. For this, it is convenient to realize the Lens space $L(p,q)$ as a Seifert fibration over the two-sphere~\cite{Furuta} (see also~\cite{Beasley:2005vf}). The base is described by a projective line ${\mathds{P}}^1$ with an arbitrarily chosen marked point at which the coordinate neighbourhood is modelled on $\complex/\zed_p$, with the cyclic group acting on the local chart coordinate $z$ as $z\mapsto{\,\rm e}\,^{2\pi{\,{\rm i}\,}/p}\,z$. We construct a line V-bundle ${\cal L}(p,q)$ over this ${\mathds{P}}^1$ orbifold such that the local trivialization over the orbifold point is modelled by $\complex^2/\zed_p$, where $\zed_p$ acts on the local coordinates $(z,w)$ of the base and fibre exactly as in~(\ref{gamma}). The Lens space $L(p,q)$ may then be described as the total space of the associated unit circle bundle ${\mathds S}({\cal L}(p,q))$. 
Since $p$ and $q$ are relatively prime, this construction realizes $L(p,q)$ as the quotient of the three-sphere ${\mathds S}^3$ by the free $\zed_p$-action (\ref{gamma}), where ${\mathds S}^3$ is regarded as the unit sphere in $\complex^2$. Since $\pi_1({\mathds S}^3)=0$, it follows that the fundamental group of the Lens space is simply \begin{equation} \pi_1\big(L(p,q)\big)=\pi_0\big(\zed_p\big)=\zed_p \label{pi1Lens}\end{equation} and it is generated by the noncontractible loop encircling the orbifold point on the base ${\mathds{P}}^1$. Moreover, the Chern class of the line V-bundle over ${\mathds{P}}^1$ describing $L(p,q)$ is \begin{equation} c_1\big({\cal L}(p,q)\big)=\frac qp \ . \label{c1Lpq}\end{equation} This class cancels the local delta-function curvature at the marked point of ${\mathds{P}}^1$ to ensure that the total degree of the fibration is~$0$. Gauge equivalence classes of flat $U(N)$ connections on $L(p,q)$ are in one-to-one correspondence with conjugacy classes of homomorphisms $\rho$ from the fundamental group (\ref{pi1Lens}) to $U(N)$, i.e. with $N$-dimensional unitary representations of $\zed_p$. The image of $\rho$ in $U(N)$ decomposes into $N_m$ copies of the $m$-th one-dimensional irreducible representation of $\zed_p$, where $m=0,1,\dots,p-1$, and any representation $\rho$ lives in the maximal torus $U(1)^N\subset U(N)$ with \begin{equation} N=\sum_{m=0}^{p-1}\,N_m \ . \label{partpconstr}\end{equation} It follows that there is a one-to-one correspondence between flat $U(N)$ gauge connections on $L(p,q)$ and $p$-component partitions $\mbf N\in\nat_0^p$ of the rank $N$. Moreover, any such connection defines a central element of the Lie algebra of $U(N)$. The isomorphism class $[{\cal T}]$ of the tautological line bundle over ${\mathds{P}}^1$ is the generator of $H^1({\mathds{P}}^1,U(1))\cong H^2({\mathds{P}}^1,\zed)\cong\zed$. Since $H^1({\mathds{P}}^1,\zed)=0$, it follows from the Thom-Gysin exact sequence for circle bundles~\cite{Furuta} that $H^2(L(p,q),\zed)=H^2({\mathds{P}}^1,\zed)/\langle[{\cal T}^{p}]\rangle\cong\zed_p$. This means that all unitary vector bundles over $L(p,q)$ have $p$-torsion magnetic charges (Chern classes) $m$, and that all such torsion bundles over $L(p,q)$ are pullbacks of ordinary bundles over ${\mathds{P}}^1$ under the bundle projection $\pi:{\mathds S}({\cal L}(p,q))\to{\mathds{P}}^1$. As we now explicitly demonstrate, this implies that every flat connection on $L(p,q)$ is the pullback of a configuration of Dirac monopoles on the sphere ${\mathds{P}}^1$. Extending the pullback to the bulk $X(p,q)$ is then in agreement with the construction of fractional instantons given in Section~\ref{InstSect}. The critical points of the Yang-Mills action functional $\frac1{2g^2}\,\int_{{\mathds{P}}^1}\,\Tr(F_a\wedge{}^*F_a)$ are the $U(N)$ gauge connections $a$ satisfying ${\rm d}_a{}^*F_a=0$. These are the connections with constant central curvature. On the two-sphere every constant curvature bundle is (up to isomorphism) a sum of line bundles. There is thus a one-to-one correspondence between Yang-Mills connections of a principal $U(N)$-bundle $\cal P$ of degree $m$ over ${\mathds{P}}^1$ and non-increasing sequences of integers $\mbf m\in\zed^r$, of respective multiplicities $\mbf N\in\nat_0^r$, with \begin{equation} m=\sum_{i=1}^r\,m_i \qquad \mbox{and} \qquad N=\sum_{i=1}^r\,N_i \ . 
\label{YMpartitions}\end{equation} On the sphere ${\mathds{P}}^1$, each such connection is gauge equivalent to the connection $a_0(\mbf m,\mbf N)=\bigoplus_i\,a^{(m_i)}~{1\!\!1}_{N_i}$, where $a^{(m_i)}$ is the monopole potential of magnetic charge $m_i$ and the $i$-th block is an abelian connection on the bundle $({\cal T}^{m_i})^{\oplus N_i}$. The curvature of this connection is given by \begin{equation} F_{a_0}(\mbf m,\mbf N)={\rm d} a_0(\mbf m,\mbf N)= \bigoplus_{i=1}^r\,2\pi\,m_i~{1\!\!1}_{N_i} \otimes\omega_{{\mathds{P}}^1} \ , \label{centralcurvYM}\end{equation} where $\omega_{{\mathds{P}}^1}$ is the symplectic two-form on ${\mathds{P}}^1$ normalized to unit volume \begin{equation} \int_{{\mathds{P}}^1}\,\omega_{{\mathds{P}}^1}=1 \ . \label{omegaPP1int}\end{equation} The monopole connection of course has trivial monodromies around arbitrary smooth points on ${\mathds{P}}^1$. To take into account the orbifolding of ${\mathds{P}}^1$ required to define the Lens space as a Seifert manifold, we need a connection which has non-trivial monodromy ${\,\rm e}\,^{2\pi{\,{\rm i}\,} q/p}$ about the given fixed marked point on ${\mathds{P}}^1$. Since the choice of orbifold point is arbitrary, we may thus define \begin{equation} a(\mbf m,\mbf N):=\mbox{$\frac qp$}\,a_0(\mbf m,\mbf N) \qquad \mbox{and} \qquad F_a(\mbf m,\mbf N)={\rm d} a(\mbf m,\mbf N)= \mbox{$\frac qp$}\,F_{a_0}(\mbf m,\mbf N) \label{centralmonod}\end{equation} with the requisite monodromy. In particular, the Chern class (\ref{c1Lpq}) has a Chern-Weil description in terms of smooth curvature in the bulk of the ${\mathds{P}}^1$ orbifold as $c_1({\cal L}(p,q))=\frac1{2\pi}\,\int_{{\mathds{P}}^1}\,F_a(1,1)$. The holonomy of this abelian gauge connection depends only on the values of the monopole numbers $m_i$ mod~$p$. By a trivial rearrangement, we will denote by $0\leq N_m\leq N$ the multiplicity of the degree~$m$ monopole bundle ${\cal T}^{m}$ for the torsion magnetic charges $m=0,1,\dots,p-1$ alluded to earlier and hence drop the labels $\mbf m$ from the notation above. To describe the pullback of these gauge fields to $L(p,q)$, we use (\ref{c1Lpq}) to introduce a connection $\kappa$ on the principal $U(1)$-bundle $\pi:{\mathds S}({\cal L}(p,q))\to{\mathds{P}}^1$ whose curvature is given by \begin{equation} {\rm d}\kappa=\mbox{$\frac qp$}\,\pi^*(\omega_{{\mathds{P}}^1}) \ . \label{kappacurv}\end{equation} The integral of $\kappa$ over a generic fibre of the Seifert fibration is given by~\cite{Blau:2006gh,Beasley:2005vf} \begin{equation} \oint_{{\mathds S}^1}\,\kappa=1 \ , \label{intkappa}\end{equation} while from (\ref{omegaPP1int}), (\ref{kappacurv}) and (\ref{intkappa}) it follows that the Chern class (\ref{c1Lpq}) of the line V-bundle ${\cal L}(p,q)$ can be computed from the integral \begin{equation} \int_{L(p,q)}\,\kappa\wedge{\rm d}\kappa=\frac qp \ . \label{c1integral}\end{equation} We can now compute the pullback of the curvature in (\ref{centralmonod}) as \begin{equation} F_A\big(\mbf N\big):= \pi^*\big(F_a(\mbf N)\big)=\bigoplus_{m=0}^{p-1}\,\frac{2\pi\,m\,q}p~ {1\!\!1}_{N_m}\otimes\pi^*\big(\omega_{{\mathds{P}}^1}\big)=\bigoplus_{m=0}^{p-1}\, 2\pi\,m~{1\!\!1}_{N_m}\otimes{\rm d}\kappa \ , \label{Fapullback}\end{equation} from which we may identify the pullback of the Yang-Mills instanton on the ${\mathds{P}}^1$ orbifold up to gauge transformation as \begin{equation} A(\mbf N)=\bigoplus_{m=0}^{p-1}\, 2\pi\,m~{1\!\!1}_{N_m}\otimes\kappa \ . 
\label{YMinstpullback}\end{equation} Note that from (\ref{intkappa}) it follows that the connection (\ref{YMinstpullback}) has trivial holonomy along any ${\mathds S}^1$ fibre of the Seifert manifold, $\exp({\,{\rm i}\,}\oint_{{\mathds S}^1}\,A(\mbf N))={1\!\!1}_N$, as required since all fibre loops are contractible in $L(p,q)$ and the only non-trivial elements of (\ref{pi1Lens}) arise from loops which wind around the marked point of the base ${\mathds{P}}^1$. Moreover, if ${\cal P}\to{\mathds{P}}^1$ is the irreducible $U(N)$ bundle of degree $m$ on which the two-dimensional gauge theory is defined, then the corresponding flat gauge bundle over $L(p,q)$ is~\cite{Furuta} $\pi^*({\cal P})\otimes\pi^*({\cal T}^{-m})$. Finally, we can compute the value of the Chern-Simons action (\ref{CSaction}) on a generic classical solution on $L(p,q)$ by using the fact that the connection (\ref{YMinstpullback}) has constant central curvature. After taking into account the quantum shift of the Chern-Simons level $k\to k+N$, one finds \begin{eqnarray} S_{U(N)}^{\rm CS}\big(\mbf N\big):=S_{U(N)}^{\rm CS}\big(A(\mbf N) \big)&=&\frac{{\,{\rm i}\,}(k+N)}{4\pi}\,\int_{L(p,q)}\,\Tr\big(A(\mbf N) \wedge{\rm d} A(\mbf N)\big)\nonumber\\[4pt] &=&\frac{{\,{\rm i}\,}(k+N)}{4\pi}\, \sum_{m=0}^{p-1}\,(2\pi\,m)^2\,N_m~\int_{L(p,q)}\,\kappa\wedge {\rm d}\kappa \ . \label{SCSflatvalues}\end{eqnarray} By using eq.~(\ref{c1integral}) we arrive at the final form \begin{equation} S_{U(N)}^{\rm CS}\big(\mbf N\big)=\frac{\pi{\,{\rm i}\,}(k+N)\,q}p\, \sum_{m=0}^{p-1}\,N_m\,m^2 \ . \label{SCSflatfinal}\end{equation} This result confirms the conjectured formula~\cite{Hansen} for the set of values of the Chern-Simons action functional of flat $G$-connections on $L(p,q)$ in the case of gauge group $G=U(N)$. It also agrees with the classical part of the partition function (\ref{ZCSUNLpqm}), upon using the identification (\ref{gYMkCS}). \subsection{Semi-Classical Expansion\label{CSLpqPartFn}} We will now describe how to compute the one-loop fluctuation determinants needed to write down the localization of the partition function (\ref{CSpartfn}) onto a sum over the classical solutions constructed in Section~\ref{FlatLpq} above. For this, we use the well-known surgery construction of the Lens space $L(p,q)$~\cite{Freed:1991wd,Jeffrey:1992tk}. Choose a pair of integers $r,s$ which satisfy the Diophantine equation $s\,q-r\,p=1$. Then the Seifert manifold $L(p,q)$ is obtained from ${\mathds{P}}^1\times{\mathds S}^1$ by removing a solid torus ${\mathds D}^2\times{\mathds S}^1$ (with disk ${\mathds D}^2\subset{\mathds{P}}^1$) and gluing it back by twisting its torus boundary by the $SL(2,\zed)$ modular transformation \begin{equation} {\sf M}=\begin{pmatrix}q~&~r\\p~&~s\end{pmatrix} \ . \label{Mglue}\end{equation} The basis element $(q,p)\in H_1({\mathds S}^1\times{\mathds S}^1,\zed)\cong\zed^2$ specifies the slope of the meridian of the boundary torus, while $(r,s)$ gives the slope of the longitude. With the continued fraction expansion (\ref{pqcontfrac}), the gluing matrix (\ref{Mglue}) can be cast in the form \begin{equation} {\sf M}={\sf S}~{\sf T}^{e_1}~{\sf S}~\cdots~{\sf S}~ {\sf T}^{e_\ell}~{\sf S} \ , \label{MglueST}\end{equation} where \begin{equation} {\sf S}=\begin{pmatrix}0&~-1\\1&~0\end{pmatrix} \qquad \mbox{and} \qquad {\sf T}=\begin{pmatrix}1~&~1\\0~&~1\end{pmatrix} \label{SL2Zgens}\end{equation} are the standard generators of $SL(2,\zed)$ obeying the relations ${\sf S}^2=({\sf S}~{\sf T})^3={1\!\!1}$. 
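The surgery data entering (\ref{Mglue}) are straightforward to generate explicitly. As an aside, the short Python sketch below (an illustration only; the function names are ours) finds one pair $(r,s)$ solving the Diophantine equation $s\,q-r\,p=1$ with the extended Euclidean algorithm and assembles the corresponding gluing matrix, checking that it has unit determinant. The choice of $(r,s)$ is not unique: right multiplication of ${\sf M}$ by ${\sf T}$ shifts $(r,s)\to(r+q,s+p)$ and gives another admissible solution.
\begin{verbatim}
# Illustrative sketch: build one admissible gluing matrix M = [[q, r], [p, s]]
# with s*q - r*p = 1, using the extended Euclidean algorithm.

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def gluing_matrix(p, q):
    g, x, y = extended_gcd(q, p)       # x*q + y*p = 1 since gcd(p, q) = 1
    assert g == 1, "p and q must be coprime"
    s, r = x, -y                       # so that s*q - r*p = 1
    return [[q, r], [p, s]]

M = gluing_matrix(5, 2)
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]    # = q*s - r*p
print(M, "det =", det)                          # det = 1, so M lies in SL(2, Z)
\end{verbatim}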
In particular, the matrix (\ref{InterMatrix}) in this context gives the linking matrix of the framed surgery link with framings of components specified by the integers $e_i$. According to the gluing rules of topological quantum field theory~\cite{Witten:1988hf}, the partition function (\ref{CSpartfn}) of Chern-Simons theory in the canonical two-framing of $L(p,q)$ may thus be computed (up to normalization) as the matrix element~\cite{Freed:1991wd,Jeffrey:1992tk} \begin{equation} Z_{U(N)}^{\rm CS}\big(L(p,q)\,,\,k\big)={\cal R}({\sf M})_{0,0} \ , \label{ZCSsurgery}\end{equation} where $\cal R$ is the representation of the mapping class group on the finite-dimensional quantum Hilbert space of Chern-Simons gauge theory on ${\mathds{P}}^1\times{\mathds S}^1$. In the Verlinde basis of level~$k$ integrable representations $R$ of the $U(N)$ WZW model, the generators (\ref{SL2Zgens}) are represented as~\cite{Jeffrey:1992tk} \begin{equation} {\cal R}({\sf S})_{R,Q}=S_{R,Q} \qquad \mbox{and} \qquad {\cal R}({\sf T})_{R,Q}=\delta_{R,Q}~T_R \label{RSTVerlinde}\end{equation} in terms of the amplitudes (\ref{Smatrixdef}) and (\ref{TRdef}) with the identification (\ref{gYMkCS}). We can thus write (\ref{ZCSsurgery}) as \begin{equation} Z_{U(N)}^{\rm CS}\big(L(p,q)\,,\,k\big)=\sum_{R_1,\dots,R_\ell}\, S_{0,R_1}\,S_{R_1,R_2}\,\cdots\,S_{R_{\ell-1},R_\ell}\,S_{R_\ell,0}~ T_{R_1}^{e_1}\,\cdots\,T_{R_\ell}^{e_\ell} \ . \label{ZCSLpqreps}\end{equation} Although formally identical to the q-deformed gauge theory partition function (\ref{Zpq}) with $\theta_i=0$, in (\ref{ZCSsurgery}) one quantizes the Yang-Mills coupling as in (\ref{gYMkCS}) and restricts the sum to {\it integrable} representations of the $U(N)$ gauge group at level $k\in\nat_0$. Similar correspondences between q-deformed Yang-Mills theory and Cherns-Simons theory on circle bundles have been noted in~\cite{Blau:2006gh}. The sums over integrable representations in (\ref{ZCSLpqreps}) can be written in terms of weight vectors $\mbf n(R_i)$ with $0\leq n_a(R_i)\leq N+k-1$. Using the explicit matrix elements in (\ref{Smatrixdef}) and (\ref{TRdef}), this writes the Chern-Simons partition function as a lattice Gauss sum. To cast (\ref{ZCSLpqreps}) as a sum over classical solutions, one uses the reciprocity formula for Gauss sums to resum the expansion over weight vectors. This calculation was first performed for the case of an $SU(2)$ gauge group in~\cite{Jeffrey:1992tk}, and more recently extended in~\cite{Hansen} to arbitrary simple Lie groups $G$. We will not enter into the intricate details of this calculation here, which are analogous to the Poisson resummation carried out in Section~\ref{InstExp}. Dropping irrelevant overall normalization factors, for $G=U(N)$ one finds~\cite{Hansen} \begin{equation} Z_{U(N)}^{\rm CS}\big(L(p,q)\,,\,k\big)=\sum_{\mbf m\in\zed_p^N}\, {\,\rm e}\,^{-\frac{\pi{\,{\rm i}\,}(k+N)\,q}p\,\mbf m^2}~{\cal W}_{U(N)}^{\rm fluct}(p,q; \mbf m) \label{ZCSLpqfinal}\end{equation} where \begin{equation} {\cal W}_{U(N)}^{\rm fluct}(p,q;\mbf m)=\sum_{w\in S_N}\,\varepsilon(w)~ {\,\rm e}\,^{-\frac{2\pi{\,{\rm i}\,}}{(k+N)\,p}\,w(\mbf\rho)\cdot\mbf\rho}~ {\,\rm e}\,^{\frac{2\pi{\,{\rm i}\,}}p\,\mbf m\cdot(q\,\mbf\rho-w(\mbf\rho))} \ . \label{WUNfluctdef}\end{equation} In the first exponential factor of (\ref{ZCSLpqfinal}) we recognize the Boltzmann weights of the classical Chern-Simons action (\ref{SCSflatfinal}) evaluated on the set of critical points. 
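As a simple numerical illustration of the structure of (\ref{ZCSLpqfinal})--(\ref{WUNfluctdef}), the localized sum can be evaluated directly for small values of $N$, $p$, $q$ and $k$. The Python sketch below is a direct transcription of the two formulas; the overall normalization (the ``irrelevant factors'' dropped above) is not fixed here, and the component form of the Weyl vector, $\rho_a=\frac12\,(N+1-2a)$, is an assumption of this sketch rather than a statement taken from the text.
\begin{verbatim}
# Sketch: direct evaluation of the localized Chern-Simons sum for small N, p, q, k.
# Overall normalization is not fixed; the Weyl vector convention below is an assumption.

import cmath
from itertools import permutations, product

def perm_sign(w):
    """Sign of the permutation w (a tuple of images of 0,...,N-1)."""
    s, seen = 1, [False] * len(w)
    for i in range(len(w)):
        if seen[i]:
            continue
        j, length = i, 0
        while not seen[j]:
            seen[j] = True
            j = w[j]
            length += 1
        s *= (-1) ** (length - 1)
    return s

def Z_CS_localized(N, p, q, k):
    rho = [(N + 1 - 2 * a) / 2 for a in range(1, N + 1)]    # assumed Weyl vector of U(N)
    total = 0j
    for m in product(range(p), repeat=N):                   # flat connections m in (Z_p)^N
        classical = cmath.exp(-1j * cmath.pi * (k + N) * q / p
                              * sum(x * x for x in m))
        fluct = 0j
        for w in permutations(range(N)):                     # Weyl group sum
            wrho = [rho[w[a]] for a in range(N)]
            phase1 = cmath.exp(-2j * cmath.pi / ((k + N) * p)
                               * sum(wr * r for wr, r in zip(wrho, rho)))
            phase2 = cmath.exp(2j * cmath.pi / p
                               * sum(mi * (q * r - wr)
                                     for mi, r, wr in zip(m, rho, wrho)))
            fluct += perm_sign(w) * phase1 * phase2
        total += classical * fluct
    return total

print(Z_CS_localized(N=2, p=3, q=1, k=4))
\end{verbatim}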
The Weyl group sums (\ref{WUNfluctdef}) thereby represent the one-loop quantum fluctuation determinants about the classical solutions. This justifies the identification made in (\ref{ZCSUNLpqm}), and also the analysis of Section~\ref{2Dto4D}, after the reflections $(p,q)\to(-p,-q)$. This defines an orientation-reversing isometry of the four-manifold $X(p,q)$ under which the topologically twisted ${\cal N}=4$ Yang-Mills theory is invariant, but under which the Chern-Simons and q-deformed gauge theories are not. The remarkable feature of the calculation performed in~\cite{Hansen,Jeffrey:1992tk} proceeding from (\ref{ZCSLpqreps}) to (\ref{ZCSLpqfinal}) is that the final form depends only on the integers $p$ and $q$ which uniquely determine the Seifert space $L(p,q)$ up to isomorphism, and not on the continued fraction expansion (\ref{pqcontfrac}). This is expected, since the surgery construction of the Chern-Simons partition function (\ref{ZCSsurgery}) is independent of the framing integers $e_i$~\cite{Freed:1991wd}. Moreover, while the expansion (\ref{pqcontfrac}) is not unique, any two such decompositions are related by an $SL(2,\zed)$ transformation, and the Chern-Simons partition function is invariant under the action of the mapping class group. In marked contrast, the geometry of the Hirzebruch-Jung space $X(p,q)$ depends crucially on the continued expansion of $\frac pq$ (mod $SL(2,\zed)$) and the corresponding gauge theory amplitudes reflect this dependence. \section{Instantons on Higher Genus Ruled Surfaces\label{HigherGenus}} In this section we will address the problem of counting instantons on the ruled Riemann surfaces~\cite{BPVdeV}, which can be described as the total space of a holomorphic line bundle ${\cal O}_{\Sigma_g}(-p)$ of degree $p$ over a compact Riemann surface $\Sigma_g$ of genus $g\geq1$. This non-toric manifold can be viewed as a non-compact four-cycle in the local Calabi-Yau threefold which is the total space of the holomorphic rank~$2$ vector bundle ${\cal O}_{\Sigma_g}(-p)\oplus{\cal O}_{\Sigma_g}(2g-2+p)$, as considered by~\cite{Vafa:2004qa,Aganagic:2004js} for the problem of counting BPS black hole microstates in four dimensions. In this case, the direct instanton counting in four dimensions is a difficult problem, and the two-dimensional gauge theory could thus provide valuable insight. In~\cite{Aganagic:2004js} the q-deformed gauge theory on $\Sigma_g$ is proposed to compute the relevant Euler characteristic of the instanton moduli space. In the case of gauge group $U(1)$, one can give a prediction for the partition function of $\mathcal{N}=4$ gauge theory on ${\cal O}_{\Sigma_g}(-p)$ for any genus $g$. From the known partition function of $U(1)$ gauge theory on $\Sigma_g$~\cite{Blau:1991mp}, one can follow the prescription of Section~\ref{2Dto4D} to read off the fractional instanton contribution directly as \begin{equation} Z^{U(1)}_{\rm frac}({\Sigma_g})=\sqrt{\frac{4\pi}{g_{\rm YM}^2\,p }}~\sum_{m\in\mathds{Z}}\,{\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2\,p }\, m^2 - z\,m} \ . \end{equation} On the other hand, the moduli space of $n$ regular instantons of rank~$1$ is given by the Hilbert scheme ${\cal O}_{\Sigma_g}(-p)^{[n]}$ of $n$ points on the total space of the bundle ${{\cal O}_{\Sigma_g}(-p)}$~\cite{nakabook}. 
The generating function for the corresponding Poincar\'e polynomials is given by \begin{equation} \sum_{n=0}^\infty\, P\big(t\,\big|\,{\cal O}_{\Sigma_g}(-p)^{[n]} \big)~{\,\rm e}\,^{2\pi{\,{\rm i}\,} n\,\tau} = \prod_{m=1}^\infty\, \frac{\left(1+ t^{2m-1}~{\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau} \right)^{2g}} {\left(1-{\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau}\,t^{2m}\right)^2\, \left(1-{\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau}\,t^{2m-2}\right)} \ . \label{genera} \end{equation} By setting $t=1$ in (\ref{genera}) we get the contribution of regular instantons to the four-dimensional partition function. By assuming the factorization property between regular and fractional instanton contributions we finally get the total partition function (dropping irrelevant overall normalizations) \begin{equation} Z^{U(1)}(\Sigma_g)= \prod_{m=1}^\infty\, \frac{\left(1+ {\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau}\right)^{2g}} {\left(1-{\,\rm e}\,^{2\pi{\,{\rm i}\,} m\,\tau}\right)^2}~ \sum_{n\in\mathds{Z}}\, {\,\rm e}\,^{\frac{\pi{\,{\rm i}\,}\tau\,n^2}{p }-z\,n} =\frac{\hat\eta(2\tau)^{2g}}{\hat\eta(\tau)^{2g+2}}~ \theta_3\left(\left.\mbox{$\frac\tau p$}\right|\mbox{$\frac{{\,{\rm i}\,} z} {2\pi}$}\right) \ . \label{piropiro} \end{equation} Compared to the genus~$0$ cases considered in the previous sections, the computation for higher rank gauge groups is much more involved in this case. In particular, the $U(N)$ instanton partition function does not trivially factorize into a $U(1)^N$ contribution. Let us illustrate this point in the genus~$1$ case, wherein a complete analysis of the two-dimensional Yang-Mills partition function and of its relation with Chern-Simons theory has been recently carried out in~\cite{Caporaso:2006kk}. Starting from these results, it is possible repeat the procedure of Section~\ref{2Dto4D} to extract the contributions of fractional instantons in four dimensions for nonabelian gauge group. For example, for $U(2)$ gauge group and vanishing $\theta$-angle the instanton expansion of the two-dimensional Yang-Mills partition function on $\Sigma_1$ is given by \begin{eqnarray} {Z}_{U(2)}^{q{\rm YM}}\big({\cal O}_{\Sigma_1}(-p)\,,\, g_{\rm YM}^2\big)&=& \sum_{m_1,m_2\in\zed}\, \left((-1)^{m_1+m_2}\frac{4\pi}{g_{\rm YM}^2\, p}+\delta_{m_1,m_2}\,\frac{1}{\sqrt{2}}\,\sqrt{\frac{4\pi}{g_{\rm YM}^2\, p}}~ \right)\nonumber \\ && \qquad\qquad\qquad\qquad \times~{\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2\, p}\,(m_1^2+m_2^2)}\nonumber\\ &&+\, \sum_{m_0\in\zed}\,\frac{(-1)^{2 m_0+1}}{\sqrt{2}}\, \sqrt{\frac{4\pi}{g_{\rm YM}^2\,p}}~{\,\rm e}\,^{-\frac{2\pi^2} {g_{\rm YM}^2\, p}\,(2m_0+1)^2} \ , \label{su22} \end{eqnarray} where the coefficients of the exponentials can be identified with the one-loop fluctuations in Chern-Simons gauge theory on a torus bundle over the circle~\cite{Caporaso:2006kk}. It follows that the $U(2)$ partition function for fractional instantons on the total space of ${\cal O}_{\Sigma_1}(-p)$ is given by \begin{eqnarray} {Z}^{U(2)}_{\rm frac}(\Sigma_1)&=& \sum_{m_1,m_2\in\zed}\, {\,\rm e}\,^{-\frac{4\pi^2}{g_{\rm YM}^2\, p}\,(m_1^2+m_2^2)}+\sum_{m_0\in\zed}\, {\,\rm e}\,^{-\frac{8\pi^2}{g_{\rm YM}^2\, p}\,\left(m_0+\frac{1}{2}\right)^2} \nonumber\\[4pt] &=& \theta_3\left(\left.\mbox{$\frac\tau p$}\right|0 \right)^2+\theta_2\left(\left.\mbox{$\frac{2\tau}{p}$}\right|0\right) \ . \label{su23} \end{eqnarray} Due to the presence of the last term on the right-hand side of (\ref{su23}), the partition function for $U(2)$ gauge group cannot be written as the square of that for $U(1)$. 
This extra term is due to the appearance of singular fixed points in the nonabelian localization prescription on higher genus surfaces, arising from irreducible connections of the two-dimensional gauge theory. Thus, in order to provide a general formula in the nonabelian case for the class of non-toric four-manifolds modelled on ${\cal O}_{\Sigma_g}(-p)$, a more careful analysis is required. \section{Conclusions\label{Conclusions}} In this paper we have shown how instanton counting on the most general four-dimensional toric singularities $A_{p,q}$ can be carried out by studying the classical solutions of a suitable two-dimensional gauge theory living on the necklace of $\mathds{P}^1$'s arising in their minimal resolutions $X(p,q)$. We have found that the two-dimensional gauge theory captures the contributions of instantons which are stacked at the singularity. These instantons can be recovered from the pullback of the classical solutions of two-dimensional Yang-Mills theory. Identical results have been obtained by a direct four-dimensional analysis in~\cite{fcr}, where the contributions of instantons which are free to move in the non-compact directions of $X(p,q)$ have also been investigated. The appearance of these latter configurations is more elusive in the two-dimensional gauge theory and requires a suitable regularization. Due to the lack of an explicit construction of their moduli space, a complete evaluation of their contribution to the D0--D2--D4 brane partition function is not yet available except for ALE spaces~\cite{Fucito:2004ry,fujii,fcr}, and the $\mathcal{O}_{\mathds{P}^1}(-p)$ spaces for $p=1$~\cite{Nakajima:2003pg,Nakajima:2003uh} and $p=2$~\cite{Sasaki:2006vq}. In contrast to the four-dimensional case, the two-dimensional gauge theory description contains perturbative contributions coming from the fluctuations of flat connections at the boundary of $X(p,q)$. As shown in~\cite{Arsiwalla:2005jb}--\cite{Caporaso:2005ta,Caporaso:2005np} in the case of the space $X(p,1)$, these fluctuations are a crucial ingredient in reproducing the large $N$ factorization of q-deformed Yang-Mills theory into holomorphic and antiholomorphic topological string amplitudes, in accordance with the OSV conjecture~\cite{Ooguri:2004zv}. It would be interesting to better understand the meaning of these perturbative corrections from the perspective of counting black hole microstates and D-brane bound states. The two-dimensional gauge theory can also be applied to more general non-toric manifolds such as the ruled Riemann surfaces studied in Section~\ref{HigherGenus}, which are four-cycles of the local Calabi-Yau threefold given by the total space of the bundle ${\cal O}_{\Sigma_g}(-p)\oplus{\cal O}_{\Sigma_g}(2g-2+p)$. In this case the pertinent two-dimensional gauge theory is still exactly solvable. Its large~$N$ chiral expansion in the case $p=0$ has been carried out in~\cite{deHaro}. Some results for $U(1)$ gauge group at any genus $g$ and gauge group $U(2)$ at genus $g=1$ are derived in Section~\ref{HigherGenus}. It would be gratifying to corroborate these expectations with a direct evaluation in four dimensions. \acknowledgments We warmly thank U.~Bruzzo, F.~Fucito and J.~F.~Morales for helpful discussions. We also thank B.~Fantechi, M.~Mari\~no, S.~Pasquetti and R.~Poghossian for fruitful exchanges of ideas. We thank the organisers of the informal meeting on topological strings held in Alessandria and Torino in June~2006, which stimulated the research presented in this paper.
This work was supported in part by the EU-RTN Network Grant MRTN-CT-2004-005104.
\section*{Abstract} Despite intensive research, the mechanisms underlying how neurons encode external inputs remain poorly understood. Recent work has focused on the response of a single neuron to a weak, subthreshold periodic signal. By simulating the FitzHugh-Nagumo stochastic model and then using a symbolic method to analyze the firing activity of the neuron, preferred and infrequent spike patterns (defined by the relative timing of the spikes) were detected, whose probabilities encode information about the signal. As complex behaviors emerge not from individual neurons in isolation but from neuronal populations, a relevant question is whether this coding mechanism is robust when the neuron is not isolated. We study how a second neuron, which does not perceive the subthreshold signal, affects the detection and the encoding of the signal performed by the first neuron. Through simulations of two coupled FitzHugh-Nagumo neurons we show that the coding mechanism is indeed robust, as the neuron that perceives the signal fires a spike train that has symbolic patterns whose probabilities depend on the features of the signal. Moreover, we show that the second neuron facilitates the detection of the signal, by lowering the firing threshold of the first neuron. This in turn decreases the internal noise level needed to fire the spikes that encode the signal. We also show that the probabilities of the symbolic patterns achieve maximum or minimum values when the period of the external signal is close to (or is half of) the mean firing period of the neuron. \section*{Author summary} Neurons encode and transmit information in sequences of spikes, and in spite of intensive research, the principles underlying the neural code are not yet fully understood. In the framework of a simple neuron model, it was recently conjectured that, when a neuron is in a noisy environment and receives a weak periodic input, it encodes the information in the form of preferred and infrequent spike patterns. Here we study how the coupling to a second neuron, which does not receive the external signal, affects the way the first neuron encodes the signal. Our goal is to characterize the role of the second neuron. We show that it has two main effects: first, it decreases the firing threshold, allowing the first neuron to encode the signal at lower noise levels, and second, it modifies the preferred and the infrequent patterns, whose probabilities still encode information about the period and amplitude of the signal. \section*{Introduction} In spite of having been the object of intensive research for decades, the mechanisms used by neuronal populations to encode and transmit information remain poorly understood. Breaking the neural code and shedding light on neuronal strategies for efficient encoding of information in noisy environments is a hot topic in neuroscience research. Advances in this area will not only improve our understanding of brain function, but could also revolutionize artificial intelligence systems and communication technologies, as new paradigms based on how neurons efficiently encode information could help to overcome the limitations of present day optical computing systems and communication technologies \cite{optical_neuron_oe_2011,aragoneses_2014,graphene_2016,natphot_2017}.
Various mechanisms have been proposed to explain how neurons encode external inputs, which can be viewed as complementary, or functional, under different situations \cite{thorpe_2001, nature_2002,nature_2003,hidden_2004,coombes_2010}. For example, neuronal populations can encode information in the spike rate, in the spike timing, in the frequency content of spike sequences, in the coherence of spatial spike patterns, etc. Linear and non-linear data-driven methods have been developed to quantify the information content of neuronal activity \cite{eguia_2000,panzeri_nat_rev_2009,ostojic_plos_cb_2011,amigo_2013}. A lot of research has focused on the statistics of the time intervals between consecutive spikes (inter-spike intervals, ISIs) and how properties such as ISI correlations affect information encoding \cite{andre_1991,ratnam_2000,nawrot_2007,nawrot_2009,lindner_plos_comp_bio_2010}. Recently, the response of an individual neuron to a weak periodic signal was studied numerically \cite{REI16}, in the framework of the FitzHugh-Nagumo model \cite{FIT61a,NAG62}. The analysis focused on a sub-threshold signal, which means that the signal alone does not produce spikes. Therefore, without background noise, the neuron's membrane voltage displays only small, subthreshold oscillations. However, in the presence of noise, the firing activity of the neuron encodes information about the amplitude and the period of the signal \cite{REI16}. By analyzing the ISI sequence using a nonlinear symbolic method \cite{BAN02}, it was shown that the weak periodic signal induces the emergence of relative temporal ordering in the timing of the spikes, which is absent if the neuron's firing activity is only due to uncorrelated noise \cite{REI16,REI16_2}. Temporal ordering was detected in the form of more and less expressed symbolic patterns, which depend on the period of the signal and on the level of noise. The patterns' probabilities monotonically increase with the amplitude of the signal and thus encode information about both features, the amplitude and the period of the signal. A resonance-like behavior was found, as certain periods and noise levels enhance temporal ordering, maximizing (or minimizing) the probability of the more (less) expressed pattern. An open question is whether this encoding mechanism is robust when a neuron is not in isolation. In particular, can a neuron still use this mechanism to encode a sub-threshold periodic signal, when it is coupled to other neurons that do not perceive the signal? To address this question, as a first step we simulate two FitzHugh-Nagumo neurons that are mutually coupled, with a periodic sub-threshold signal applied to one of them. Although the model does not include a biophysically realistic description of neuronal coupling, the simulations yield theoretical insights suggesting that the neuron that perceives the signal can still encode the information, as it fires a spike train which has more and less expressed spike patterns whose probabilities still depend on the signal's features. \section*{Results} We simulate two coupled FHN neurons as described in \textit{Methods}, with a periodic subthreshold signal that is applied to one of the neurons, referred to as neuron 1. Figure~\ref{Fig1} displays the voltage-like variable of neuron 1, $u_1$, in different situations.
When there is no noise, no signal and no coupling, the neuron is in the rest state and when the sub-threshold signal is applied, $u_1$ displays small oscillations [panel (a)]; when noise is added, the neuron fires a spike train [panel (b)]; when the coupling to neuron 2 is added, a noticeable effect is the increase of the firing rate [panel (c)]. The differences that are qualitatively observed in these time-series are going to be quantitatively addressed by using the methods of analysis presented in \textit{Methods}. \begin{figure}[!ht] \centering \includegraphics[width=0.7\columnwidth]{Fig1.pdf} \caption{{\bf Time-series of the voltage-like variable of neuron 1} when (a) the signal is applied, and there is no noise and no coupling (subthreshold oscillations are observed); (b) when the signal is applied and there is noise but no coupling (noise-induced spikes are observed, which carry information about the applied subthreshold signal) and (c) when the signal is applied and there is noise and coupling (an increase of the spike rate is observed). The parameters are $a_0 = 0.05$, $T = 10$ and (a) $D = 0$, $\sigma = 0$; (b) $D = 2\cdot 10^{-6}$, $\sigma = 0$; (c) $D = 2\cdot 10^{-6}$, $\sigma = 0.05$.} \label{Fig1} \end{figure} As we are interested in the encoding of weak signals, we first have to distinguish between a sub-threshold and a super-threshold signal. The first one refers to a signal which, in the absence of noise, does not induce any spike [$u_1$ displays small oscillations, as in Fig.~\ref{Fig1}(a)], while the second one is a signal that is strong enough to induce spikes. A periodic signal can be either sub-threshold or super-threshold depending on both the period and the amplitude. Thus, to identify the parameters where the signal is sub-threshold, in Fig.~\ref{Fig2} we plot the spike rate (i.e., $1/\langle I\rangle$, in color code), as a function of $a_0$ and $T$. In panel (a) neuron 1 is isolated ($\sigma_1=\sigma_2=0$), while in panel (b) it is coupled to neuron 2 ($\sigma_1=\sigma_2=0.05$). \begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth]{Fig2.pdf} \caption{{\bf Influence of the signal parameters on the spike rate.} The spike rate of neuron 1 in color code is plotted as a function of the signal amplitude, $a_0$, and period, $T$. Panels (a) and (b) display the deterministic spike rate ($D=0$) without coupling ($\sigma_1=\sigma_2=0$) and with coupling ($\sigma_1=\sigma_2=0.05$), respectively. In panels (c) and (d) noise is included ($D = 2\cdot 10 ^{-6}$) and it is observed that coupling (panel d) increases the spike rate with respect to the uncoupled noisy neuron (panel c).} \label{Fig2} \end{figure} When the neuron is uncoupled, for large amplitude and/or small period the signal is super-threshold, otherwise it is sub-threshold. When the neuron is coupled to neuron 2 (here we want to remark that neuron 2 does not see the signal), we note that the super-threshold region is slightly larger in the parameter space ($a_0$, $T$), as compared to the uncoupled case. When we include noise, Figs.~\ref{Fig2}(c) and (d), we first note that in the super-threshold region (yellow) the spike rate does not change significantly (it is about the same as in panel (a), where $D$=0 and $\sigma_1=\sigma_2=0$). This is due to the fact that in this region the spikes are induced by the signal, while neither the noise nor the coupling has a significant effect.
In contrast, in the sub-threshold region, comparing the uncoupled (panel c) and the coupled (panel d) situations, we note that coupling significantly increases the spike rate (it almost doubles). Therefore, in this region coupling plays the role of an extra source of noise (as in this region both noise and coupling induce spikes). Having identified the sub-threshold region in the parameter space ($a_0$, $T$) when the coupling coefficients are kept fixed ($\sigma_1=\sigma_2=0.05$), we next turn our attention to the influence of the coupling coefficients, now keeping the signal parameters fixed: we choose $a_0 = 0.05$ and $T = 10$, which are within the sub-threshold region in Fig.~\ref{Fig2}(a). Figure~\ref{Fig3} displays the spike rate as a function of $\sigma_1$ and $\sigma_2$ in different situations. In panel (a) there is no signal and no noise. We observe that when both $|\sigma_1|$ and $|\sigma_2|$ are large enough, the coupling induces spikes. Thus, a sub-threshold region in the parameter space $(\sigma_1, \sigma_2)$ is observed. Positive coupling coefficients result in a higher spike rate than negative coefficients. In panel (b), the noise is still zero but the signal is applied. Here we note that the size of the super-threshold region is slightly larger in comparison to panel (a), and now positive and negative coupling coefficients produce similar spike rates. Figures~\ref{Fig3} (c) and (d) display the spike rate when noise is included, without and with the signal, respectively. The vertical line in panel (c) is due to the fact that when $\sigma_1=0$ neuron 1 is uncoupled from neuron 2, and thus its spike rate does not depend on $\sigma_2$. Without the signal, positive coupling coefficients result in a larger spike rate than negative ones; however, when the signal is applied these differences are washed out. \begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth]{Fig3.pdf} \caption{{\bf Influence of the coupling strengths on the spike rate.} The spike rate of neuron 1 in color code is plotted as a function of $\sigma_1$ and $\sigma_2$, with and without noise: panels (a) and (b) display the deterministic spike rate ($D = 0$), while panels (c) and (d) display the spike rate of noisy neurons ($D = 2\cdot 10 ^{-6}$). In (a), (c) the signal is not applied ($a_0 = 0$) while in (b), (d) it is applied ($a_0 = 0.05$ and $T = 10$). We note that, without noise, strong enough coupling induces spikes. This occurs when $\sigma_1$ and $\sigma_2$ are both positive or both negative, regardless of the input signal. When there is noise, the effect is still present when there is no signal (in panel c the spike rate is higher when $\sigma_1,\sigma_2>0$ or when $\sigma_1,\sigma_2<0$) while it is almost washed out when the signal is applied (in panel d). The vertical line in panel (c) is due to the fact that when $\sigma_1=0$, neuron 1 is uncoupled from neuron 2, therefore, its firing rate does not depend on $\sigma_2$, which is the strength of $1\rightarrow 2$ coupling.} \label{Fig3} \end{figure} In the following and unless otherwise stated, in order to limit the number of parameters we take $\sigma_1 = \sigma_2 = \sigma$. Moreover, we will use $\sigma=0.05$, $a_0 = 0.05$ and $T = 10$. For these parameters the signal and the coupling act as sub-threshold perturbations: without noise neuron 1 does not fire any spike.
To further characterize the role of noise, Fig.~\ref{Fig4} displays the mean inter-spike interval, $\langle I \rangle$, as a function of noise intensity for different periods of the applied signal. In panel (a) $\sigma = 0$, while in panel (b), $\sigma = 0.05$. For both cases there is clearly a noise dominated regime, where $\langle I \rangle$ is the same, regardless of the coupling and of the period of the signal. In contrast, for low noise levels the coupling and the period affect $\langle I \rangle$. Regarding the role of the period of the signal, when the noise level is low, the larger $T$ is, the larger $\langle I \rangle$ is. There is a linear relation, as shown in Figs.~\ref{Fig4}(c) and (d), which holds for both the coupled and the uncoupled cases. For stronger noise, $\langle I \rangle$ remains constant when increasing $T$. In panel (a) ($\sigma = 0$) we can also compare the mean ISI when the signal is applied (solid symbols indicate $a_0\ne0$ and different periods) and when the signal is not applied (empty circles): we see that, when $a_0\ne0$ the neuron starts firing at lower noise intensities as compared to $a_0=0$. Comparing panel (a) with panel (b) ($\sigma = 0.05$) we note that when neuron 1 is coupled to neuron 2, it starts firing at even lower noise intensities. Noise-induced regularity in the spike train \cite{PIK97,jgo_phys_rep_2004,noise_in_neural} is characterized in panels (e) and (f), where the normalized standard deviation of the ISI distribution, $R$, is plotted against the noise intensity for different $T$, without and with coupling, respectively. In both panels, two minima are observed. Whereas the first one indicates stochastic resonance \cite{sr,sr_chialvo_longtin_1997,chialvo_longtin_1998}, as it occurs when $T \sim \langle I \rangle$, the second one reveals the coherence resonance phenomenon~\cite{PIK97,not_coh_res}, which is independent of the period of the signal. It occurs for an intermediate value of the noise amplitude for which noise-induced oscillations become most coherent. For some periods $T$ a maximum appears for very small values of the intensity of the noise. Such maxima are a signature of anticoherence resonance \cite{Lacasta02}. \begin{center} \begin{figure}[!ht] \includegraphics[width=\columnwidth]{Fig4.pdf} \caption{{\bf Interplay of noise and the period of the signal.} (a), (b) Mean inter-spike interval, $\left<I \right>$, of neuron 1 as a function of the noise strength, for different periods of the external signal; (c), (d) $\left<I \right>$ vs. the period of the signal and (e), (f) Normalized standard deviation of the ISI distribution, $R$, as a function of the noise strength, for different periods of the signal. Panels (a), (c) and (e) are without coupling ($\sigma_1 =\sigma_2 = 0$), while (b), (d) and (f) are with coupling ($\sigma_1 =\sigma_2 = 0.05$). In panels (a) and (b) we note that, for strong enough noise, the mean ISI does not depend on the period of the signal. In panels (c), (d) we note that for weak and moderate noise, $\left<I \right>$ increases linearly with $T$, while for strong noise, $\left<I \right>$ saturates to the refractory period, $T_e$ (i.e., the duration of the excursion in the phase space when a large enough perturbation triggers a spike), which is nearly independent of $T$.
In panels (e), (f) we see two minima, one that occurs when $\left<I \right>\sim T$, which is interpreted as due to stochastic resonance \cite{sr,sr_chialvo_longtin_1997,chialvo_longtin_1998}, and another that occurs when $\left<I \right>\sim T_e$, which is interpreted as due to coherence resonance \cite{PIK97,not_coh_res}}. \label{Fig4} \end{figure} \end{center} After having characterized the role of the various parameters on the spike rate, we next apply non-linear ordinal analysis in order to uncover possible preferred spike patterns. We begin by considering the situation in which no signal is applied and analyze the effect of increasing the noise level or the coupling strength: Figs.~\ref{Fig5} (a) and (b) display the ordinal probabilities as a function of $D$ and $\sigma$, respectively. We note that neither the noise nor the coupling induces temporal correlations along the ISI sequence (as all the probabilities are within the gray region that indicates values consistent with equal probabilities). When the signal is applied, panels (c) and (d), we note that increasing either the noise level or the coupling strength induces temporal ordering in the ISI sequence, as the probabilities are not consistent with the uniform distribution and thus reveal the presence of preferred and less frequent spike patterns. Moreover, we note that the variation of the probabilities with $D$ or $\sigma$ is qualitatively similar. \begin{figure}[!ht] \center \includegraphics[width=0.9\columnwidth]{Fig5.pdf} \caption{{\bf Ordinal probabilities as a function of the noise and coupling strengths.} In panels (a), (b) the probabilities of the six ordinal patterns are plotted respectively as a function of $D$ (for $\sigma_1=\sigma_2=0$) and as a function of $\sigma$ (for $D = 2\cdot 10^{-6}$), both for $a_0 = 0$. Panels (c) and (d) are as (a), (b), but a subthreshold signal is applied ($a_0 = 0.05$ and $T = 10$). In all the panels the gray region indicates the interval of probability values that are consistent with the uniform distribution with 99.74\% confidence level. We observe that without the signal [panels (a) and (b)], there are no noise-induced or coupling-induced ISI correlations, as all the ordinal probabilities are within the gray interval of values. In contrast, when the signal is applied [panels (c) and (d)], the probabilities are not consistent with the uniform distribution. In these panels we also note that the variation of the ordinal probabilities with $D$ or with $\sigma$ is qualitatively similar. This similarity is valid for low $D$ or low $\sigma$ values.} \label{Fig5} \end{figure} Next, we investigate how the coupling affects the encoding of the signal features (the amplitude and period): we compare how the ordinal probabilities vary with $a_0$ and $T$, when neuron 1 is isolated [Figs. \ref{Fig6} (a) and (c)] and when it is coupled to neuron 2 [Figs. \ref{Fig6} (b) and (d)]. In both cases, when $a_0$ increases (within the subthreshold region) the probabilities monotonically increase or decrease. This variation is consistent with the results reported in \cite{REI16}. It is important to remark that in \cite{REI16} the sub-threshold signal was applied to the slow variable, $v$, while here it is applied to the fast variable, $u$. In both cases, the probabilities encode information about the amplitude of the signal. Nevertheless, coupling to neuron 2 changes the preferred and infrequent patterns, i.e., modifies the temporal order in the spike sequence.
For instance, the probability of the ordinal pattern 012 varies monotonically with $a_0$ in both cases, but the direction of the variation is reversed when the coupling is switched on [compare panels (a) and (b)]. In panels (c) and (d) we note that, with or without coupling, the preferred and infrequent patterns depend on the period of the signal, confirming the results reported in \cite{REI16}. \begin{figure}[!ht] \center \includegraphics[width=0.9\columnwidth]{Fig6.pdf} \caption{{\bf Influence of coupling on signal encoding.} Panels (a) and (b) display the ordinal probabilities as a function of $a_0$ without and with coupling, respectively. Panels (c) and (d) display the probabilities as a function of $T$ without and with coupling, respectively. In (a) and (b) $T = 10$, in (c) and (d) $a_0 = 0.05$. In all panels the noise strength is $D = 2\cdot 10^{-6}$. In (a) and (c) $\sigma = 0$, in (b) and (d) $\sigma = 0.05$. Comparing panels (a) and (b) we note that, with coupling, the ordinal probabilities are outside the blue region (that indicates the interval of values that are consistent with the uniform distribution with 99.74\% confidence level) for lower values of $a_0$. This means that, when neuron 1 is coupled to neuron 2, it is able to detect and encode signals with smaller amplitude. Comparing panels (c) and (d) we note that, with or without coupling, the probabilities depend on the period of the signal. This suggests that the encoding mechanism is robust to coupling, as the neuron that perceives the signal can still encode the information about the period, by firing a spike sequence which has more frequent and less frequent patterns, which depend on the signal period.} \label{Fig6} \end{figure} Next we address the issue of whether there is an optimal coupling configuration (i.e., a set of coupling coefficients $\sigma_1$ and $\sigma_2$) for signal encoding. To quantify the information content of the spike train, when it is represented by symbolic ordinal patterns constructed from the ISI sequence, we calculate the entropy computed from the probabilities of the ordinal patterns (known as \textit{permutation entropy}, $H = - \sum_i p_i \log{p_i}$ \cite{BAN02}). To investigate how the coupling coefficients that maximize the information content (i.e. minimize the entropy) depend on the input signal, we calculate the entropy for different periods. Fig. \ref{Fig7} displays the permutation entropy (normalized to its maximum value) in color code as a function of $\sigma_1$ and $\sigma_2$ for three periods: $T = 6$, $T = 10$ and $T = 14$, panels (a), (b) and (c), respectively. We observe that for small and large periods ($T = 6$ and $T = 14$) and for all coupling strengths, the entropy is close to 1, which indicates that the ordinal probabilities are all similar, i.e., neuron 1 has stochastic dynamics. In contrast, for $T = 10$ there is a region of coupling strengths where lower entropy values reveal that there are more likely and less likely patterns, i.e., the spike sequence carries information about the signal. From panel (b) we learn that when $\sigma_1\sigma_2 > 0$ the coupling to a second neuron helps to encode the signal, as the entropy has lower values. In contrast, when $\sigma_1\sigma_2 < 0$ the coupling to the second neuron degrades the encoding of the signal, as the permutation entropy takes its highest values.
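The quantities entering this analysis are simple to estimate from data. The Python sketch below (function names are ours, and the ISI array shown is only a placeholder for the simulated spike trains) computes the $L=3$ ordinal probabilities and the permutation entropy normalized to its maximum value $\log L!$, following the procedure detailed in \textit{Methods}.
\begin{verbatim}
# Sketch: ordinal probabilities (L = 3) and normalized permutation entropy
# estimated from an ISI sequence.  The ISI array below is only a placeholder.

import numpy as np
from itertools import permutations
from math import log

def ordinal_probabilities(isi, L=3):
    counts = {pat: 0 for pat in permutations(range(L))}
    for i in range(len(isi) - L + 1):
        window = isi[i:i + L]
        ranks = tuple(int(r) for r in np.argsort(np.argsort(window)))  # 0 = shortest ISI
        counts[ranks] += 1
    M = sum(counts.values())
    return {''.join(map(str, pat)): c / M for pat, c in counts.items()}

def permutation_entropy(probs):
    H = -sum(p * log(p) for p in probs.values() if p > 0)
    return H / log(len(probs))          # normalized to its maximum value log(L!)

isi = np.random.default_rng(0).exponential(scale=4.0, size=10000)  # placeholder data
probs = ordinal_probabilities(isi)
print(probs)
print("normalized permutation entropy:", permutation_entropy(probs))
\end{verbatim}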
\begin{figure}[!ht] \center \includegraphics[width=\columnwidth]{Fig7.pdf} \caption{{\bf Influence of the coupling strengths on signal encoding.} The information content of the sequence of ordinal patterns computed from the spikes of neuron 1 is quantified by the permutation entropy in color code that is plotted as a function of the coupling strengths $\sigma_1$ and $\sigma_2$ for three periods of the signal: $T = 6$, $T = 10$ and $T = 14$, panel (a), (b) and (c), respectively. Other parameters: $a_0 = 0.05$ and $D = 2 \cdot 10^{-6}$. We note that the information content is maximum (lower entropy) for an intermediate value of $T$ and coupling strengths such that $\sigma_1\sigma_2 > 0$.} \label{Fig7} \end{figure} Classical measures to quantify linear ISI correlations are the serial correlation coefficients (SCCs, see \textit{Methods}). Next, we compare the results obtained with nonlinear symbolic ordinal analysis with those obtained with SCCs. To do this, we first compare in Fig.~\ref{Fig8} how the ordinal probabilities and the SCCs vary while changing the mean ISI (we calculated the mean ISI $\langle I \rangle$ for each noise intensity within the range $10^{-6} \leqslant D \leqslant 10^{-3}$) for a fixed period $T$. We see that while the probabilities of ordinal patterns 012 and 210 (respectively, three increasingly and decreasingly separated spikes) show a minimum at $\langle I \rangle = 4$, the other four show a maximum. This is captured as well with the linear measures $C_1$ and $C_2$, which respectively show a minimum and a maximum at $\langle I \rangle \approx 4$. Nevertheless, the correlations that appear for large noise (i.e., small $\langle I \rangle$), which are captured by the ordinal pattern probabilities (they are outside the blue region), are not captured by the linear measures $C_1$ and $C_2$. \begin{figure}[!ht] \center \includegraphics[width=\columnwidth]{Fig8.pdf} \caption{{\bf Relation between ordinal probabilities, serial correlation coefficients and mean ISI.} (a) Ordinal probabilities and (b) serial correlation coefficients, $C_1$ and $C_2$, as a function of the mean ISI, $\langle I \rangle$, when the noise strength is varied within the range $10^{-6} \leqslant D \leqslant 10^{-3}$. The signal parameters are $T = 8$, $a_0 = 0.05$ and the coupling strength is $\sigma = 0.05$.} \label{Fig8} \end{figure} Next, we choose the trend patterns 012 and 210 (three increasingly long or increasingly short ISIs), and analyze how their probabilities vary with the mean ISI, and compare with the variation of $C_1$ and $C_2$. Our goal is, first, to determine if there is any relation between the linear quantifiers of ISI correlations, $C_1$ and $C_2$, and the nonlinear ones, $P(012)$ and $P(210)$. Secondly, we want to analyze how they depend on $\langle I \rangle$ and $T$. Figure~\ref{Fig9} displays $P(012)$, $P(210)$, $C_1$ and $C_2$ as a function of $\langle I \rangle$ for four different periods $T = 6, 8, 10$ and $12$ in panels (a), (b), (c) and (d), respectively. A first thing we note is that the minimum of $P(012)$ and $P(210)$ tends to occur when $T \sim \langle I \rangle/2$ (black arrows in the different panels indicate $T = \left<I \right>/2$). We also note that when $\langle I \rangle$ is too short (i.e., the noise level is high), $C_1$ and $C_2$ are close to zero, regardless of the period of the signal; in contrast, $P(012)$ and $P(210)$ are not within the region of values which are consistent with uniform probabilities, and thus, carry information about the subthreshold signal.
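For completeness, the linear quantifiers used in this comparison can be estimated with a few lines of code. The sketch below computes the serial correlation coefficients $C_1$, $C_2$ and the regularity coefficient $R$ defined in \textit{Methods} from an ISI sequence; as before, the array shown is only a placeholder for the simulated data and the function names are ours.
\begin{verbatim}
# Sketch: serial correlation coefficients C_j and regularity coefficient R
# estimated from an ISI sequence.  The ISI array below is only a placeholder.

import numpy as np

def serial_correlation(isi, j):
    isi = np.asarray(isi, dtype=float)
    mean, var = isi.mean(), isi.var()
    return np.mean((isi[j:] - mean) * (isi[:-j] - mean)) / var   # C_j, Eq. (3)

def regularity(isi):
    isi = np.asarray(isi, dtype=float)
    return isi.std() / isi.mean()       # R = sqrt(<I^2> - <I>^2) / <I>

isi = np.random.default_rng(1).exponential(scale=4.0, size=10000)  # placeholder data
print("C1 =", serial_correlation(isi, 1))
print("C2 =", serial_correlation(isi, 2))
print("R  =", regularity(isi))
\end{verbatim}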
\begin{figure}[!ht] \center \includegraphics[width=0.8\columnwidth]{Fig9.pdf} \caption{{\bf Comparison of results of linear and nonlinear measures.} $P(012)$, $C_1$ and $C_2$ as a function of $\langle I \rangle$ for different periods $T = 6$, $T = 8$, $T = 10$ and $T = 12$ in panels (a), (b), (c) and (d), respectively. Noise amplitude was within the range $10^{-6} \leqslant D \leqslant 10^{-3}$, $a_0 = 0.05$ and $\sigma = 0.05$.} \label{Fig9} \end{figure} Figure~\ref{Fig10} displays time series for two different values of the signal period, $T = 6$ and $T = 8$, and the same noise intensity. We observe that for $T = 6$ the ordinal pattern 012 is highly expressed, in contrast to $T = 8$, for which it is much less frequently observed. \begin{figure}[!ht] \center \includegraphics[width=0.7\columnwidth]{Fig10.pdf} \caption{{\bf Examples of spike sequences where pattern 012 is more/less expressed.} Spike train of neuron 1 when the model parameters are such that the ordinal pattern 012 (i.e., three increasingly separated spikes) is more expressed (a) ($P(012) = 0.22$) and less expressed (b) ($P(012) = 0.08$). In (a) $T = 6$ and in (b) $T = 8$. Other parameters are $\sigma = 0.05$, $a_0 = 0.05$ and $D = 3.2\cdot 10^{-6}$.} \label{Fig10} \end{figure} Another relevant issue to discuss is how the coupling terms are implemented. While we have presented simulations of Eqs.~\ref{eq:model_2}, where the terms $\sigma_2 u_1$ and $\sigma_1 u_2$ couple neuron 1 to neuron 2 and vice-versa \cite{neiman_prl_2015}, we have also performed simulations with i) the coupling in the recovery-like variable (i.e., $\sigma_2 v_1$ and $\sigma_1 v_2$ added to the rate equations of $v_2$ and $v_1$ respectively) and ii) with differential coupling (i.e., $\sigma (u_1-u_2)$ and $\sigma (u_2-u_1)$ added to the rate equations of $u_1$ and $u_2$ respectively). We have consistently found that the probabilities of the ordinal patterns vary with both the period and the amplitude of the signal, in a similar way as with non-diffusive coupling (see Fig.~\ref{Fig11}). We have also found that the relationship between $P(012)$, $P(210)$ and $\langle I \rangle$ shown in Fig.~\ref{Fig9} is robust. \begin{figure}[!ht] \center \includegraphics[width=0.75\columnwidth]{Fig11.pdf} \caption{{\bf Influence of diffusive coupling on the signal encoding.} Panels (a) and (b) display the ordinal probabilities as a function of $a_0$ (with $T = 10$) and as a function of $T$ (with $a_0 = 0.05$). Other parameters are $\sigma = 0.025$ and $D = 2\cdot 10^{-6}$. We note that the encoding of the signal features (amplitude and period) is as in Fig.~\ref{Fig6}, which was obtained with non-diffusive coupling.} \label{Fig11} \end{figure} \section*{Discussion} We have studied two coupled FitzHugh-Nagumo neurons with a subthreshold periodic signal applied to one of them. We have used symbolic analysis to investigate the spike train fired by the neuron that perceives the signal. By applying ordinal analysis to the sequence of inter-spike intervals (ISIs) we have shown that the spike train has ordinal probabilities which depend on the signal features (the amplitude and the period). By lowering the firing threshold, the second neuron facilitates the detection and encoding of the signal applied to the first neuron. We have also shown that the ordinal probabilities achieve maximum or minimum values when the period of the external signal is about half the mean ISI.
In addition, we have shown that, when the noise level is high, the ordinal probabilities encode information about the subthreshold signal, while the serial correlation coefficients (SCCs) at lags 1 and 2 vanish and the mean ISI is independent of the signal period. Our findings contribute to advancing the understanding of how neurons encode information about subthreshold signals in noisy environments. The encoding mechanism demonstrated here, by which the period and the amplitude of the applied sub-threshold signal are encoded in the values of the ordinal probabilities, is very slow if the probabilities are computed from the spike train of a single neuron, because a large number of spikes are needed in order to determine the probabilities of the different spike patterns. However, if the encoding is performed by a neuronal ensemble, the probabilities could be computed from the spike trains of a large number of neurons, and in this case, only a few spikes per neuron would be enough to compute the probabilities. This ensemble-based mechanism would also allow encoding a sub-threshold signal with time-varying amplitude and/or period. Therefore, as future work, it will be interesting to extend this study to models of neuronal ensembles~\cite{brunel_2000,roxin_prl_2005,Ostojic_2014,torcini_2017}. \section*{Materials and methods} \subsection*{Model} We consider two identical FitzHugh-Nagumo neurons \cite{FIT61a,NAG62}, mutually coupled as in \cite{neiman_prl_2015}, with a periodic signal applied to one of them (referred to as neuron 1): \begin{equation} \begin{gathered} \epsilon \dot{u_1}= u_1 - \frac{u_1^3}{3} - v_1 + a_0\cos(2\pi t/T) + \sigma_1 u_2 +\sqrt{2D}\xi_1(t),\\ \dot{v_1} = u_1+ a,\\ \epsilon \dot{u_2}= u_2 - \frac{u_2^3}{3} - v_2 + \sigma_2 u_1 +\sqrt{2D} \xi_2(t),\\ \dot{v_2} = u_2+ a \end{gathered} \label{eq:model_2} \end{equation} The coupling configuration is schematically represented in Fig.~\ref{Fig12}. The dimensionless variables $u_i$ and $v_i$ are, respectively, a fast variable that represents the membrane voltage and a slow recovery-like variable that represents the refractory properties of the membrane; $a$ and $\epsilon$ are parameters that control the spiking activity of the uncoupled neurons. The coupling terms $\sigma_2 u_1$ and $\sigma_1 u_2$ mimic synaptic currents from neuron 1 to neuron 2 and vice-versa~\cite{neiman_prl_2015}. The signal has amplitude $a_0$ and period $T$. The noise is modeled with statistically independent Gaussian white noise terms [$\langle \xi_i(t)\rangle = 0$ and $\langle \xi_i(t)\xi_j(t')\rangle = \delta_{ij}\,\delta(t-t')$] and the noise level, $D$, is the same for both neurons. \begin{figure}[!ht] \center \includegraphics[width=0.75\columnwidth]{Fig12.pdf} \caption{{\bf Schematic representation of two mutually coupled neurons, one of which (neuron 1) perceives a periodic input signal.} $\sigma_1$ and $\sigma_2$ represent the strength of the coupling of neuron 2 to neuron 1, and of neuron 1 to neuron 2, respectively.} \label{Fig12} \end{figure} The values of the parameters, $a=1.05$ and $\epsilon = 0.01$, are chosen such that, when $D=0$ and $\sigma_1=\sigma_2=0$, the neurons are in the excitable regime: each neuron resides in a stable state (rest state) unless it is perturbed. If a strong enough perturbation occurs, the neuron leaves the rest state and, after firing a spike, it returns to the rest state. Then, a refractory period follows during which another perturbation will not trigger a spike.
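A minimal Python sketch of how Eqs.~(\ref{eq:model_2}) can be integrated numerically is given below; the integration scheme and the definition of the spike times are the ones detailed in the next paragraph. The parameter values follow the text, the run length is kept short for illustration (the results reported above use much longer time series), and the function name is ours.
\begin{verbatim}
# Sketch: Euler-Maruyama integration of the two coupled FitzHugh-Nagumo neurons,
# Eqs. (1), and extraction of the ISI sequence of neuron 1 from the upward
# crossings of u_1 = 0.  Short run for illustration only.

import numpy as np

def simulate_isi(T=10.0, a0=0.05, sigma1=0.05, sigma2=0.05, D=2e-6,
                 a=1.05, eps=0.01, dt=1e-3, t_max=2e3, seed=0):
    rng = np.random.default_rng(seed)
    u1, v1, u2, v2 = rng.uniform(-1.0, 1.0, size=4)   # random initial conditions
    noise_amp = np.sqrt(2.0 * D * dt)
    spike_times = []
    for n in range(int(t_max / dt)):
        t = n * dt
        xi1, xi2 = noise_amp * rng.standard_normal(2)
        du1 = (u1 - u1**3 / 3 - v1 + a0 * np.cos(2 * np.pi * t / T)
               + sigma1 * u2) * dt / eps + xi1 / eps
        du2 = (u2 - u2**3 / 3 - v2 + sigma2 * u1) * dt / eps + xi2 / eps
        dv1, dv2 = (u1 + a) * dt, (u2 + a) * dt
        u1_new = u1 + du1
        if u1 < 0.0 <= u1_new:                        # upward crossing of u_1 = 0
            spike_times.append(t)
        u1, v1, u2, v2 = u1_new, v1 + dv1, u2 + du2, v2 + dv2
    return np.diff(spike_times)                       # ISI sequence of neuron 1

isi = simulate_isi()
print(len(isi), "ISIs;  <I> =", isi.mean() if len(isi) > 0 else float("nan"))
\end{verbatim}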
The equations are integrated, starting from random initial conditions, using the Euler-Maruyama method with an integration step of $dt = 10^{-3}$. The signal parameters, $a_0$ and $T$, and the coupling coefficients, $\sigma_1$ and $\sigma_2$, are varied within the ``subthreshold'' region of the parameter space: without noise the voltage-like variables $u_1$ and $u_2$ display only small oscillations [see Fig. 1(a)]. For each set of parameters, the voltage-like variable of the neuron that receives the signal, $u_1$, is analyzed and the ISI sequence is computed, $\{I_i; I_i= t_{i+1} - t_{i}\}$ with $t_i$ defined by the condition $u_1(t_i) = 0$ considering only the upward crossings. To compute the mean ISI and the coefficient $R$ (see \textit{Methods}) time-series with a minimum number of 100 spikes are generated (as this is sufficient to estimate the mean values of the ISI distribution), while to compute the ordinal probabilities, time-series with at least 10000 spikes are generated. This is because a large number of ordinal patterns are needed in order to determine if their probabilities are consistent or not with the uniform distribution \cite{REI16_2}. \subsection*{Methods} The regularity of the ISI sequence is often characterized by the coefficient $R$~\cite{PIK97}: \begin{equation} R=\frac{\sqrt{\langle I^2 \rangle - \langle I \rangle^2}}{\langle I \rangle}, \end{equation} where $\langle I \rangle$ is the mean value of the ISI distribution. Correlations between ISIs are characterized by the serial correlation coefficients (SCCs): \begin{equation} C_j = \frac{\langle (I_i - \langle I \rangle)(I_{i-j} - \langle I \rangle)\rangle}{\langle I^2 \rangle - \langle I \rangle^2} \label{eq:SCCs} \end{equation} where $j$ is an integer. SCCs are a standard tool to analyze spike trains \cite{neiman_pre_2005,andre_pre_2017}; however, they only capture linear correlations. In contrast, a symbolic methodology known as \textit{ordinal analysis} \cite{BAN02} has been demonstrated to be well suited for detecting nonlinear correlations in spike trains \cite{amigo_2013,REI16,pre_2009}. In this approach the actual ISI values $\{I_{1}, ..., I_i, ..., I_{N}\}$ are not taken into account; instead, their relative temporal ordering is considered. Ordinal analysis transforms a particular signal into symbols, which are known as ordinal patterns. Here, ordinal analysis is used to study the spike train of neuron 1: the ISI sequence $\{I_{1}, ..., I_i, ..., I_{N}\}$ is transformed into a sequence of ordinal patterns, which are defined by the relative order of $L$ consecutive ISI values. Once the length $L$ of the ordinal patterns is defined, for each interval $I_i$ the subsequent $L - 1$ intervals are considered and compared. The total number of possible order relations (i.e., ordinal patterns of length $L$) is then equal to the number of permutations $L!$. If we set $L = 2$ we have only two patterns, 01 and 10, for $I_1 < I_2$ and $I_1 > I_2$, respectively, but if we set $L = 3$, we have $3! = 6$ possible ordinal patterns, which are listed in Table \ref{t:table1}. For example, we consider the following sequence of intervals $\{4.9, 3.4, 3.3, 3.2, 5.0, ...\}$. The first value, $I_1 = 4.9$, when compared with $I_2 = 3.4$ and $I_3 = 3.3$, leads to the ordinal pattern 210 since $I_1 > I_2 > I_3$. The same pattern is obtained for $I_2$, since $I_2 > I_3 > I_4$. For $I_3$, instead, we obtain the pattern 102 since $I_4 < I_3 < I_5$.
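The worked example above can be checked directly with the rank-based convention of Table~\ref{t:table1} (digit $0$ for the shortest ISI in the window, digit $2$ for the longest); the short sketch below prints the ordinal patterns of the example sequence.
\begin{verbatim}
# Sketch: ordinal patterns (L = 3) of the example ISI sequence given in the text.

import numpy as np

isi = [4.9, 3.4, 3.3, 3.2, 5.0]
for i in range(len(isi) - 2):
    ranks = np.argsort(np.argsort(isi[i:i + 3]))     # 0 = shortest ISI in the window
    print(''.join(str(int(r)) for r in ranks))       # prints 210, 210, 102
\end{verbatim}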
\begin{table}[!ht] \begin{adjustwidth}{-2.25in}{0in} \centering \captionof{table}{Ordinal patterns for $L = 3$} \begin{tabular}{ ||c c||} \hline \textsc{Symbol} & \textsc{Relation} \\ \hline\hline 012 & $I_3 > I_2 > I_1$ \\ 021 & $I_2 > I_3 > I_1$\\ 102 & $I_3 > I_1 > I_2$ \\ 120 & $I_2 > I_1 > I_3$ \\ 201 & $I_1 > I_3 > I_2$ \\ 210 & $I_1 > I_2 > I_3$ \\ \hline \end{tabular} \label{t:table1} \end{adjustwidth} \end{table} The symbolic sequence of ordinal patterns is computed using the function \texttt{perm\_indices} defined in \cite{PAR12}. Then, the ordinal probabilities are estimated as $p_i=N_i/M$, where $N_i$ denotes the number of times the $i$-th pattern occurs in the sequence, and $M=\sum_{i=1}^{L!} N_i$ denotes the total number of patterns. If the patterns are equi-probable one can infer that there are no preferred order relations in the timing of the spikes. On the other hand, the presence of frequent (or infrequent) patterns will result in a non-uniform distribution of the ordinal patterns. A binomial test is used to analyze the significance of preferred and infrequent patterns: if all the ordinal probabilities are within the interval $[p - 3\sigma, p + 3\sigma]$ (with $p = 1/L!$ and $\sigma = \sqrt{p(1-p)/M}$), the probabilities are consistent with the uniform distribution; otherwise, there are significant deviations, which reveal the presence of preferred and infrequent patterns. A main advantage of this method is that it is simple to implement and can be applied directly to the ISI sequence (no need to pre-process the data). Here we use $L=3$, which allows us to investigate order relations among three ISIs (i.e., four consecutive spike times). This choice is motivated by the fact that the signal parameters and the coupling strengths are subthreshold, i.e., the firing activity of neuron 1 is driven by white noise (without noise, there are no spikes). Therefore, only short ISI correlations are expected in the spike train. \section*{Acknowledgments} This work was supported by Spanish MINECO (FIS2015-66503-C3-2-P) and the program ICREA ACADEMIA of Generalitat de Catalunya. \nolinenumbers
\section{INTRODUCTION} Modern mathematics begins with \emph{symbolic manipulation}. The central role of signs and symbols \emph{per se} is one of the main achievements of the Medieval culture \cite{medieval-semiotics} leading, among others, to the development of elementary or \emph{symbolic algebra}. Starting from the latter, the syntactic manipulation of symbols more or less independently of their meaning --- i.e. to what symbols stand for --- has become an essential part of mathematical reasoning, not to say of reasoning \emph{in general}. Today, symbolic manipulation is not just a pillar of mathematics, but it is at the very heart of \emph{computation}. Indeed, the symbolic manipulations of elementary algebra carry a computational content and, vice versa, computational processes can be fully described symbolically. Rewriting theory \cite{newman,terese} is the discipline that studies (the computational content of) symbolic manipulation in general. As such, rewriting has its origin both in symbolic algebra as the study of the algorithmic properties of equational reasoning, and in computability and programming language theory, where rewriting systems have been used to define symbolic models of computation --- such as the $\lambda$-calculus~\cite{Barendregt/Book/1984} and combinatory logic~\cite{curry-combinatory-logic-1985,lambda-calculus-and-combinators-hindley-seldin} --- as well as the (operational) semantics and implementation of programming languages~\cite{asperti-optimal}. In both cases, rewriting is motivated by the need to define \emph{operational} notions of equality revealing the computational content of equational deductions. Remarkably, operationality is ultimately achieved by making equality asymmetric, so that the aforementioned computational content can be fully uncovered by orienting equations. Nowadays, these oriented equations (and the evolution thereof) are known as \emph{rewriting} --- or \emph{reduction} --- relations. All of that highlights a crucial trait of rewriting theory, namely its deep connection with equational reasoning. In fact, rewriting does not actually focus on arbitrary symbolic transformations, but on \emph{equality-preserving} ones: a rewriting relation \emph{refines} equality by making the latter operational, and it is thus contained in it. Recent advances in theoretical computer science, however, have questioned the central role played by equality in semantics, arguing for more quantitative and approximated forms of equivalence. For instance, equality is too strong a notion for reasoning about probabilistic computations, where even small perturbations break the equivalence between probabilistic processes. To overcome this problem, researchers have thus refined equality to \emph{distances} between probabilistic processes, this way replacing equivalences with \emph{metrics}. Similarly, metric-based and approximated equivalences have been used to reason about privacy and security of systems \cite{Pierce/DistanceMakesTypesGrowStronger/2010,GaboardiEtAl/POPL/2017}, not to mention reasoning about intensional aspects of computation, such as resource consumption~\cite{DBLP:journals/pacmpl/LagoG22a,modal-reasoning-equal-metric-reasoning}.
Prompted by that, several theories of semantic equality have been refined giving rise to \emph{quantitative theories of semantic differences}, prime examples being general theories of program~\cite{Arnold/Metric-interpretations/1980,DeBakker/Semantics-concurrency/1982,Escardo/Metric-model-PCF/1999,GaboardiEtAl/POPL/2017,Pierce/DistanceMakesTypesGrowStronger/2010,Gavazzo/LICS/2018,CrubilleDalLago/LICS/2015,CrubilleDalLago/ESOP/2017,DBLP:phd/basesearch/Gavazzo19,DBLP:journals/pacmpl/LagoG22a,DBLP:conf/ictcs/LagoG20,DLR2022,DBLP:conf/icalp/LagoGY19,DALLAGO2021} and system distances~\cite{Deng-Gebler/Behavioural-pseudometrics/2016,Bonchi/Behavioral-metrics-via-functor-lifting/FSTTCS/2014,Bonchi/Towards-trace-metrics-via-functor-fifting/CALCO/2015,Panangaden/Metric-markov-finite/2004,Panangaden/Metric-markov-infinite/2005,Gebler/Compositional-biismulation/2016} and the theory of quantitative{} algebras and quantitative{} equational reasoning~\cite{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017,plotkin-quantitative-algebras-2018,plotkin-quantitative-algebras-2018-bis,plotkin-quantitative-algebras-2021,plotkin-quantitative-algebras-2021-bis,DBLP:conf/lics/MioSV21}. The latter, in particular, aim to provide a common foundation for general quantitative reasoning by refining traditional, set-based algebraic structures to metric-like ones and by replacing traditional equations with \emph{quantitative{} equations} bounding the difference (or distance) between the equated elements. Accordingly, classic equations $t = s$ are replaced by expressions of the form $t \qequal{\varepsilon} s$, with the informal reading that $t$ and $s$ are at most $\varepsilon$ apart, or that they are equal up to an error $\varepsilon$. Thus, quantitative algebraic theories are not theories about \emph{equality} between objects, but about \emph{distances} between them, and can thus be seen as the quintessence of quantitative{} and metric reasoning. But what about the \emph{computational content} of quantitative{} equational reasoning? What is an \emph{operational} notion of quantitative{} equality or distance allowing us to effectively compute distances by means of quantitative{} equations? And, more generally, what is the theory of \emph{quantitative{} symbolic manipulation}, where symbolic transformations can break semantic equivalence? The development of such a theory, which is the main topic of this work, is of paramount importance not only to make quantitative{} equational deduction effective, but also to develop a general quantitative{} theory of programming language semantics. In this paper, we introduce the theory of \emph{quantitative{} and metric rewriting systems} as a first step towards a general theory of the computational content of quantitative{} symbolic manipulations. Such a theory is rich and subsumes and largely (and nontrivially) extends traditional rewriting. The goal of this paper is to lay the foundation of quantitative{} rewriting systems, this way opening the door to a larger research program. More specifically, in this work we introduce the notion of a quantitative{} abstract rewriting system and its general theory. We define quantitative{} notions of confluence and termination, refining cornerstone results such as the Newman~\cite{newman} and Hindley-Rosen~\cite{hindley-1964,rosen-70} Lemma to a quantitative{} and metric setting. 
Such notions are crucial for the \emph{operational} study of what we shall call \emph{metric word problems}, the quantitative{} refinement of traditional word problems. We then introduce \emph{quantitative{} linear and non-expansive term rewriting systems} and apply the general theory previously developed to such systems. Linearity and the related notion of non-expansiveness will be crucial to avoid distance trivialisation phenomena \cite{CrubilleDalLago/ESOP/2017,DBLP:phd/basesearch/Gavazzo19} and the failure of major confluence theorems. Concerning the latter, in fact, we shall prove general quantitative{} critical pair-like lemmas~\cite{Huet80} ensuring confluence of large families of linear and non-expansive systems. Finally, we go beyond linearity and non-expansiveness by introducing \emph{graded quantitative{} term rewriting systems}. In such systems, rewriting is not only quantitative{} but also \emph{modal} and \emph{context-sensitive}, meaning that contexts are not required to non-expansively propagate distances --- as in linear systems --- but they directly act on them, this way behaving as generalised Lipschitz continuous functions. We will extend the confluence results proved for non-expansive systems to graded ones, as well as prove an additional confluence result for orthogonal systems. All our theory is developed following the abstract relational theory of distances initiated by \citet{Lawvere/GeneralizedMetricSpaces/1973}, whereby we work with relations taking values in arbitrary quantales \cite{Rosenthal/Quantales/1990}. Such an approach has been successfully applied to the study of general theories of program and process distances \cite{Worrell-omega-categories,Gavazzo/LICS/2018,DBLP:phd/basesearch/Gavazzo19,paul-wild-2022}. Moreover, since abstract metric and modal reasoning are essentially equivalent~\cite{DBLP:journals/pacmpl/LagoG22a,modal-reasoning-equal-metric-reasoning}, our theory can be seen both as a general theory of metric and quantitative rewriting systems and as a theory of modal and substructural rewriting, this way suggesting possible connections with modal and coeffectful systems~\cite{Orchard-icfp-2019,DBLP:conf/esop/GhicaS14,Mycroft-et-al/ICFP/2014,Gaboradi-et-al/ICFP/2016,DBLP:journals/pacmpl/AbelB20}. In addition to the just outlined theoretical development, in this paper we deal with several examples of quantitative{} (and modal) systems. Such examples come from the field of algorithms (notably edit distances on strings), quantitative{} algebras and algebraic effects (e.g. quantitative{} barycentric algebras), programming language theory (quantitative{} and graded combinatory logic), and combinations thereof. \paragraph{From Equality to Distances: A Gap} Before outlining the main contents and contributions of the present work in more detail, it is instructive to briefly stress the gap between traditional, equality-based reasoning and quantitative{} one (this gap will be the main theme of the first example in \autoref{section:long-intro}). When it comes to reasoning about equality between objects, we usually have at our disposal a \emph{heterogeneous} arsenal of techniques, ranging from semantic and denotational characterisations of equality to symbolic and operational ones.
Think about equality between (natural) numbers: we can approach it foundationally using set theory or Peano arithmetic --- depending on whether we prefer a semantic or syntactic approach --- but we can also study it using algebra, category theory, or type theory, this way relying on its inductive nature. And that is not the end of the story, as we can also use plain number theory, this way building upon numerical and analytical techniques, rather than symbolic ones. When we move to quantitative{} equality and metric reasoning, the situation vastly changes and only a few of the aforementioned techniques are available, with a strong orientation towards numerical and analytical ones. On natural numbers, we can consider the Euclidean distance, which is ultimately defined numerically. And when it comes to reasoning about it, numerical and analytical techniques are largely used, other techniques being simply not available. However, the Euclidean distance (between natural numbers) has an embarrassingly simple inductive definition (and thus an associated induction principle) in terms of quantitative{} equations, as well as a well-behaved associated notion of operational equality (i.e. rewriting). The same story can be told for many other (more challenging) distances, ranging from edit to transportation distances \cite{encyclopedia-of-distances}, in all cases obtaining elegant quantitative{} equational characterisations and well-behaved quantitative{} rewriting systems. As already mentioned, in the last decade researchers have started to realise that the mathematical heterogeneity characterising equality pertains to quantitative{} equality and distances too, although this new awareness is still in its infancy. This paper has the ambitious goal of contributing to all of that by beginning the exploration of the computational content of quantitative{} equality. \paragraph{Structure of the Paper} We dedicate \autoref{section:long-intro} to gently introduce the reader to the theory of quantitative{} and metric rewriting systems by means of concrete examples. After that, we move to the technical development of our theory, which is divided into three sections. After recalling the necessary mathematical preliminaries (\autoref{sect:quantales}), in \autoref{section:qars} we outline a theory of \emph{abstract} quantitative{} rewriting systems, focusing on quantitative{} notions of confluence and termination. The main results proved are quantitative{} refinements of Newman's Lemma, the Church-Rosser Theorem, and the Hindley-Rosen Lemma. In \autoref{sect:qtrs}, we specialise the theory of \autoref{section:qars} to quantitative{} \emph{term} rewriting systems. We prove several quantitative{} critical pair-like lemmas for \emph{linear} and \emph{non-expansive} systems, and use them to infer nontrivial properties of the systems introduced in \autoref{section:long-intro}. Finally, in \autoref{sect:beyond-non-expansive-systems}, we go beyond linearity and introduce the theory of \emph{graded} (modal) quantitative{} rewriting systems. We extend quantitative{} critical pair lemmas to graded systems and prove that (graded) orthogonal systems are always confluent. Using such results, we obtain a confluence result for a system of graded combinators extending bounded combinatory logic.
\section{BEYOND TRADITIONAL REWRITING: SHAPING A THEORY} \label{section:long-intro} \renewcommand{e}{t} \renewcommand{f}{s} \renewcommand{g}{u} In this section, we gently introduce the reader to quantitative rewriting systems by looking at some simple examples of quantitative systems coming from diverse fields (e.g. algebra, programming language theory, and biology). For pedagogical reasons, we shall focus on \emph{non-expansive} systems (i.e. systems where rewriting inside terms non-expansively propagates distances) only, postponing graded systems to \autoref{sect:beyond-non-expansive-systems}. \subsection{From Equality to Differences: Warming-Up} \label{sect:natural-numbers} Let us begin with one of the simplest possible examples: the system of natural numbers with the addition operation. Such a system can be defined in several ways (algebraically, set-theoretically, numerically, etc.), each giving a specific perspective on equality between numbers. In this example, we model natural numbers \emph{symbolically} using the signature $\Sigma_{\mathcal{N}} \triangleq \{\texttt{Z}, \nsucc, \texttt{A}\}$ containing a constant $\texttt{Z}$ for zero, a unary function symbol $\nsucc$ for the successor function, and a binary function symbol $\texttt{A}$ for addition. Given a set $X$ of variables, we use the set\footnote{Given a signature $\Sigma$ and a set $X$ of variables, we denote by $\terms{\Sigma}{X}$ the collection of $\Sigma$-terms over $X$.} $\terms{\Sigma_{\mathcal{N}}}{X}$ to study natural numbers syntactically. Equality between terms in $\terms{\Sigma_{\mathcal{N}}}{X}$ is given by the relation $=_{N}$ inductively defined by the following rules.\footnote{Given an expression $e$ and a substitution $\sigma$ (i.e. a map from variables to terms), we write $\subst{e}{\sigma}$ for the application of the substitution $\sigma$ to $e$.} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*2)/3,colframe=black,colback=black!0!white,arc=0mm] \[ \infer{\texttt{Z} =_{N} \texttt{Z}}{\phantom{\Phi}} \qquad \infer{\texttt{A}(x, \texttt{Z}) =_{N} x}{} \qquad \infer{\texttt{A}(x, \nsucc(y)) =_{N} \nsucc(\texttt{A}(x,y))}{} \] \[ \infer{x =_{N} x}{} \qquad \infer{y =_{N} x}{x =_{N} y} \qquad \infer{x =_{N} z}{x =_{N} y & y =_{N} z} \] \[ \infer{\nsucc(x) =_{N} \nsucc(y)}{x =_{N} y} \qquad \infer{\texttt{A}(x,y) =_{N} \texttt{A}(x',y')}{x =_{N} x' & y =_{N} y'} \qquad \infer{\subst{e}{\sigma} =_{N} \subst{f}{\sigma}}{e =_{N} f} \] \end{tcolorbox} } The first three rules are the defining axioms of $=_{N}$, whereas the last three rules close $=_{N}$ under substitutions and function symbols in $\Sigma_{\mathcal{N}}$. The remaining three rules, finally, make $=_{N}$ reflexive, symmetric, and transitive, and thus an equivalence. The equational system $(\Sigma_{\mathcal{N}}, =_{N})$ gives a symbolic approach to numerical equality. In fact, given two numbers (or numerical expressions defined as sums of natural numbers) $m$ and $n$, we can check whether $m$ and $n$ are equal in (at least) two ways: either we compute $m$ and $n$ numerically (assuming to have ways to perform calculations) \emph{or} we look at $m$, $n$ as expressions $e$, $f$ in $\terms{\Sigma_{\mathcal{N}}}{X}$ and manipulate them symbolically to produce a formal derivation of $e =_{N} f$.
For instance, we see that $1+2$ is equal to $2+1$ because we numerically compute them --- obtaining $3$ in both cases --- or because we derive, using the defining axioms of $=_{N}$, that both $\texttt{A}(\nsucc(\texttt{Z}), \nsucc(\nsucc(\texttt{Z})))$ and $\texttt{A}(\nsucc(\nsucc(\texttt{Z})), \nsucc(\texttt{Z}))$ are equal to $\nsucc(\nsucc(\nsucc(\texttt{Z})))$, and thus $\texttt{A}(\nsucc(\texttt{Z}), \nsucc(\nsucc(\texttt{Z}))) =_{N} \texttt{A}(\nsucc(\nsucc(\texttt{Z})), \nsucc(\texttt{Z}))$ is derivable. Furthermore --- and most importantly --- we can uncover the computational content of $=_{N}$ by orienting its defining equational axioms, this way obtaining a rewriting (or reduction) relation $\to_{N}$ defined as follows: { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*2)/3,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \[ \texttt{A}(x, \texttt{Z}) \mapsto_{N} x \qquad \texttt{A}(x, \nsucc(y)) \mapsto_{N} \nsucc(\texttt{A}(x,y)) \qquad \infer{C[\subst{e}{\sigma}] \to_{N} C[\subst{f}{\sigma}]} {e \mapsto_{N} f} \] \end{tcolorbox} } The first two axioms define the relation $\mapsto_{N}$, whereas the last rule extends $\mapsto_{N}$ to the (full) rewriting relation $\to_{N}$. To define the latter, we have denoted by $C[\cdot]$ a context in $\terms{\Sigma_{\mathcal{N}}}{X}$ --- i.e. a term with a single occurrence of a hole $\Box$ --- and by $C[e]$ the term obtained by replacing the hole $\Box$ with $e$ in $C[\cdot]$. Accordingly, we see that $\to_{N}$ is obtained by applying (substitution) instances of $\mapsto_{N}$ inside arbitrary terms. The rewriting system $(\Sigma_{\mathcal{N}}, \mapsto_{N})$ fully describes equality in $(\Sigma_{\mathcal{N}}, =_{N})$ \emph{operationally}, in the sense that an equality $e =_{N} f$ is provable if and only if $e$ and $f$ are $\to_{N}$-convertible, meaning that there is a rewriting path from $e$ to $f$ obtained by performing a finite number of bidirectional rewriting steps (i.e. from left to right as well as from right to left). Additionally, $(\Sigma_{\mathcal{N}}, \mapsto_{N})$ enjoys several nice properties. In particular, it is confluent and terminating, meaning that convertibility (and thus $=_{N}$) is decidable and coincides with having the same normal form. \paragraph{The Computational Content of a Distance} What we have seen so far shows that equality between natural numbers can not only be defined symbolically as $=_{N}$, but also \emph{operationally} via $\to_{N}$, this way making explicit its computational content. All of that is no more than a classic introductory example to rewriting theory. Let us make a step further and ask the following question: what happens if we move from \emph{equality} to \emph{distances} between numbers? That is, what happens if instead of determining whether two numbers are equal or not, we ask the finer question about how much \emph{different} they are? Answering these questions numerically is not a problem at all: we consider the Euclidean metric and define the distance between two numbers $n$ and $m$ as $|n - m|$. But what about symbolic approaches? And what is the \emph{computational content}, if any, of the Euclidean distance? Answering these questions in the affirmative ultimately means finding systems like $(\Sigma_{\mathcal{N}}, =_{N})$ and $(\Sigma_{\mathcal{N}}, \mapsto_{N})$ describing, however, the Euclidean distance between numbers, rather than their equality. To define such systems, we refine $=_{N}$ and $\mapsto_{N}$ \emph{quantitatively}. Let us begin with $=_{N}$.
Following the methodology of quantitative{} equational theories \cite{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017}, we move from traditional equations $e = f$ to \emph{quantitative{} equations}, that is \emph{ternary relations} $\mrel{\varepsilon}{e}{=}{f}$ relating pairs of terms $e, f$ with non-negative numbers\footnote{For the moment, whether we work with natural, rational, or real numbers is not relevant.} $\varepsilon$, the informal reading of a quantitative equation $\mrel{\varepsilon}{e}{=}{f}$ being that $e$ and $f$ are at most $\varepsilon$-apart.\footnote{ Other possible readings come from the world of metric spaces (\emph{the distance between $e$ and $f$ is at most $\varepsilon$}), intensional and resource analysis (\emph{given resource $\varepsilon$, the terms $e$ and $f$ can be proved equal}), and fuzzy and graded logic(s) (\emph{$e$ is equal to $f$ with degree $\varepsilon$}). } \begin{notation} To improve readability, we oftentimes abbreviate $\mrel{\varepsilon}{e}{=}{f}$ as $e \qequal{\varepsilon} f$. \end{notation} This shift from traditional to quantitative{} equality leads to a change in the classic rules of equational deduction, which now have a quantitative{} flavour: transitivity, for instance, now describes the usual triangle inequality axiom of metric spaces. \[ \infer{\mrel{\varepsilon + \delta}{e}{=}{f}} {\mrel{\varepsilon}{e}{=}{g} & \mrel{\delta}{g}{=}{f}} \] We will see rules of this kind in detail throughout this paper, but for the moment we can leave them aside. Traditional equality now corresponds to the null (zero) distance, whereas congruence rules give \emph{non-expansiveness} of syntactic constructs. Non-expansiveness is a crucial feature of quantitative{} systems and we will say more about that in \autoref{sect:combinators-intro} and \autoref{sect:beyond-non-expansive-systems}. Finally, in order to deal with natural numbers, we add a single distance-producing, quantitative{} equation: $\mrel{1}{\nsucc(x)}{=_{N}}{x}$. This equation simply states that a number and its successor are at distance one. Overall, we obtain the following quantitative{} refinement of system $(\Sigma_{\mathcal{N}}, =_{N})$ which, overloading the notation, we still denote by $(\Sigma_{\mathcal{N}}, =_{N})$ (this will not create confusion, since from now on we shall deal with quantitative{} systems only).\footnote{ The complete definition of $=_{N}$ actually requires the addition of all rules of quantitative{} equational deduction previously mentioned. Such rules (which are formally described in \autoref{sect:qtrs}) include the quantitative{} refinements of reflexivity, symmetry, and transitivity --- which essentially correspond to the usual identity of indiscernibles, symmetry, and triangle inequality axioms of metric spaces, respectively --- as well as structural rules for $=_{N}$ (for instance, there is a weakening rule stating that whenever $\mrel{\varepsilon}{e}{=_{N}}{f}$ is derivable, then so is $\mrel{\delta}{e}{=_{N}}{f}$, for any $\delta \geq \varepsilon$).
} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*2)/3,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \[ \infer{\nsucc(x) \qequal{1}_{N} x}{\phantom{F}} \qquad \infer{\texttt{Z} \qequal{0}_{N} \texttt{Z}}{} \qquad \infer{\texttt{A}(x, \texttt{Z}) \qequal{0}_{N} x}{} \qquad \infer{\texttt{A}(x, \nsucc(y)) \qequal{0}_{N} \nsucc(\texttt{A}(x,y))}{} \] \[ \infer{\nsucc(x) \qequal{\varepsilon}_{N} \nsucc(y)}{x \qequal{\varepsilon}_{N} y} \qquad \infer{\texttt{A}(x,y) \qequal{\varepsilon + \delta}_{N} \texttt{A}(x',y')}{x \qequal{\varepsilon}_{N} x' & y \qequal{\delta}_{N} y'} \qquad \infer{\subst{e}{\sigma} \qequal{\varepsilon}_{N} \subst{f}{\sigma}}{e \qequal{\varepsilon}_{N} f} \] \end{tcolorbox} } Notice that $=_{N}$ is an \emph{inductive} notion and that it defines a distance $E$ on $\terms{\Sigma_{\mathcal{N}}}{X}$ as $$ E(e,f) \triangleq \inf \{\varepsilon \mid e \qequal{\varepsilon}_{N} f\}. $$ Such a distance is a pseudometric and when it is applied to terms representing natural numbers it indeed gives the Euclidean distance between such numbers, hence showing that the latter distance has an inductively-defined algebraic characterisation. Let us now uncover the computational content of the Euclidean distance by giving an \emph{operational} account of $=_{N}$. We do so by refining the rewriting relation previously introduced to the (ternary) \emph{quantitative{} rewriting relation} $\to_{N}$ giving information on the distance produced when rewriting terms. We thus read $\mrel{\varepsilon}{e}{\to_{N}}{f}$ as stating that reducing $e$ to $f$ produces a difference $\varepsilon$ between the former and the latter. \begin{notation} As before, we often abbreviate $\mrel{\varepsilon}{t}{\to_{N}}{s}$ as $t \qreduce{\varepsilon}_{N} s$ (and similarly for the other rewriting relations we are going to introduce). \end{notation} We first define $\mapsto_{N}$ by stipulating that actual distances between terms are produced by deleting successor functions, and extend it to the full quantitative{} rewriting relation $\to_{N}$ by \emph{non-expansively} propagating distances produced by substitution instances of $\mapsto_{N}$ throughout arbitrary contexts of the language. { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*2)/3,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \[ \texttt{A}(x, \texttt{Z}) \qstepto{0}_{N} x \qquad \texttt{A}(x, \nsucc(y)) \qstepto{0}_{N} \nsucc(\texttt{A}(x,y)) \qquad \nsucc(x) \qstepto{1}_{N} x \] \vspace{-0.2cm} \[ \infer{C[\subst{e}{\sigma}] \qreduce{\varepsilon}_{N} C[\subst{f}{\sigma}]} {e \qstepto{\varepsilon}_{N} f} \] \end{tcolorbox} } The relation $\to_{N}$ induces a (rewriting) distance $N$ between terms defined by $$ N(e, f) \triangleq \inf \{\varepsilon \mid e \qreduce{\varepsilon}_{N} f\} $$ so that we obtain a quantitative{} relational system $\mathcal{N} = (\terms{\Sigma_{\mathcal{N}}}{X}, N)$, which is our first example of an \emph{abstract quantitative{} rewriting system}. We will study such systems in \autoref{section:qars}.
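The following minimal Python sketch (our own illustration, with a hypothetical term representation and names) makes the rewriting distance on closed terms concrete: the two zero-cost rules compute the numeral denoted by a term, while deleting successors, at cost $1$ each, rewrites the larger of two numerals to the smaller one, realising the Euclidean distance as a convertibility distance.
\begin{verbatim}
# Closed terms over Sigma_N as nested tuples:
# Z = ('Z',), S(t) = ('S', t), A(t, s) = ('A', t, s).
Z = ('Z',)
def S(t): return ('S', t)
def A(t, s): return ('A', t, s)

def value(t):
    """Normalise a closed term using only the two zero-cost rules,
    counting the successors that reach the root: the numeral value of t."""
    n = 0
    while t != Z:
        if t[0] == 'S':
            n, t = n + 1, t[1]                  # peel off one successor
        elif t[0] == 'A' and t[2] == Z:
            t = t[1]                            # A(x, Z)    |->0  x
        else:                                   # A(x, S(y)) |->0  S(A(x, y))
            t = S(A(t[1], t[2][1]))
    return n

def rewriting_distance(t, s):
    """Convertibility distance of closed terms: rewrite both, at cost 0,
    to numerals, then delete successors (cost 1 each) down to the smaller
    numeral, their common reduct."""
    return abs(value(t) - value(s))

# e.g. the distance between A(S(Z), S(S(Z))) and S(S(S(S(S(Z))))) is |3-5| = 2
assert rewriting_distance(A(S(Z), S(S(Z))), S(S(S(S(S(Z)))))) == 2
\end{verbatim}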
For the moment, we simply say that, just as an abstract rewriting system consists of a set $A$ of objects together with a relation $R \subseteq A \times A$ on it, a \emph{quantitative{} abstract rewriting system} is given by a set $A$ together with a \emph{quantitative{} relation} $R: A \times A \to [0,\infty]$ on it.\footnote{ Actually, we will consider a more general form of quantitative{} relations (see \autoref{section:qars}), but for the moment it is more convenient to restrict to $[0,\infty]$-valued relations.} \emph{Quantitative term rewriting systems} are a special class of quantitative{} abstract rewriting systems where objects are terms and the quantitative{} relation $R$ is canonically defined as $R(e,f) \triangleq \inf\{\varepsilon \mid e \qreduce{\varepsilon}_{R} f\}$ starting from a ternary relation $\mapsto_{R}$ (then extended to $\to_{R}$) like those we have seen so far. \begin{remark} Notice that in a quantitative{} abstract rewriting system $(A, R)$, the quantitative{} relation $R$ gives the rewriting distance between elements of $A$. Such a distance, however, is \emph{not} required to obey the usual (pseudo)metric axioms, nor a subset thereof. Such a requirement, in fact, would be morally the same as requiring a traditional rewriting relation to be an equivalence, which is clearly undesirable. \end{remark} Let us expand on quantitative relations. As pointed out by Lawvere \cite{Lawvere/GeneralizedMetricSpaces/1973}, quantitative relations (or distances) are governed by an algebra close to the one of ordinary relations\footnote{We could think about such an algebra as a monoidal algebra of relations.}~\cite{relational-mathematics}, so that a large part of the calculus of relations~\cite{tarski-1941,relational-mathematics,algebra-of-programming} can be refined to give rise to a calculus of quantitative{} relations. In fact, by viewing binary relations as maps $R: A \times B \to \{\bot, \top\}$, we see that a quantitative relation simply refines the structure $(\{\bot, \top\}, \leq, \wedge)$ by replacing it with $([0,\infty], \geq, +)$, so that we can use this similarity to generalise many relational constructions and their properties to a quantitative{} setting. For instance, by refining the existential quantifier $\exists$ as the infimum $\inf$ and the Boolean meet $\wedge$ as addition $+$, we can define the composition between quantitative{} relations $R$, $S$ by $$ (R;S)(a,c) \triangleq \inf_b R(a, b) + S(b,c). $$ Consequently, we will say that a quantitative{} relation $R$ is transitive if $R;R \geq R$, i.e. if $$ \inf_b R(a, b) + R(b,c) \geq R(a,c), $$ which is nothing but the usual triangle inequality law. In the same way, we can refine the notions of reflexivity, symmetry, and transitivity to quantitative{} relations, this way obtaining exactly the defining axioms of a pseudometric. In particular, as any rewriting relation induces --- by taking its reflexive, symmetric, and transitive closure --- an equality between terms as convertibility, any rewriting distance $R$ defines a \emph{convertibility distance} (a pseudometric, actually) $\makedistance{R}$ by means of its reflexive, symmetric, and transitive closure. We shall see in detail the general theory of abstract quantitative{} relations \emph{\`a la} Lawvere in \autoref{sect:quantales}. What is relevant, for the moment, is that by modelling traditional rewriting relationally~\cite{backshouse-calculational-approach-to-mathematical-induction}, we can then rely on such a general theory for quantitatively refining it.
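For finite carriers, this relational algebra is easy to make executable; the following sketch (ours, with hypothetical names) represents $[0,\infty]$-valued relations as dictionaries, implements the inf/+ composition above, and computes the convertibility pseudometric as a reflexive, symmetric, and transitive closure.
\begin{verbatim}
import math
INF = math.inf

def compose(R, S, carrier):
    """(R;S)(a,c) = inf_b R(a,b) + S(b,c), for [0,inf]-valued relations
    given as dicts {(a,b): value}; missing pairs sit at distance inf."""
    return {(a, c): min(R.get((a, b), INF) + S.get((b, c), INF) for b in carrier)
            for a in carrier for c in carrier}

def convertibility(R, carrier):
    """Reflexive, symmetric, and transitive closure of a rewriting distance R:
    the induced convertibility pseudometric (finite carriers only)."""
    D = {(a, b): 0 if a == b else min(R.get((a, b), INF), R.get((b, a), INF))
         for a in carrier for b in carrier}
    for _ in carrier:            # iterate min/plus squaring up to a fixpoint
        D = {p: min(D[p], v) for p, v in compose(D, D, carrier).items()}
    return D
\end{verbatim}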
Let us apply the ideas seen so far to the rewriting distance $N$. The pseudometric $\makedistance{N}$ gives the convertibility distance between terms, which is nothing but the distance $E$ induced by $=_{N}$, i.e. the Euclidean distance. Consequently, the Euclidean distance is not only obtained symbolically via $=_{N}$, but it is also completely described operationally as the convertibility distance induced by the quantitative{} rewriting system $\mathcal{N}$ (and thus by $\to_{N}$). At this point, it is natural to ask whether $\makedistance{N}$ (and thus $E$) has nice computational properties. Without much of a surprise, the `nice computational properties' we have in mind are the quantitative{} refinements of well-known rewriting notions, such as confluence and termination. We postpone precise definitions of these notions until \autoref{section:qars} and content ourselves with some intuitions behind them for now. Suppose we are approximating the distance $\makedistance{N}(e,f)$ with a bidirectional reduction path of the form $$ e \qreduce{\varepsilon_1} \cdot \stackrel{\varepsilon_2}{\leftarrow} \cdot \qreduce{\varepsilon_3} \cdots \stackrel{\varepsilon_{n-1}}{\leftarrow} \cdot \qreduce{\varepsilon_n} f $$ so that the convertibility distance between $e$ and $f$ given by this path is $\sum_{i=1}^n \varepsilon_i$. When asked to compute or approximate such a distance, it is desirable to have a term $g$ such that $$ e \qreduce{\delta_1} \cdots \qreduce{\delta_m} g \qreduceleft{\eta_p} \cdots \qreduceleft{\eta_1} f \quad \text{ and } \quad \sum_{i=1}^n \varepsilon_i \geq \sum_{j=1}^m \delta_j + \sum_{k=1}^p \eta_k. $$ Moving to distances, that means that to approximate $\makedistance{N}(e,f)$ (and thus $E$), we can restrict ourselves to proper rewriting rather than convertibility. Formally: $$ \makedistance{N}(e,f) = \inf_{g} N^*(e,g) + N^*(f,g), $$ where $N^*$ denotes the reflexive and transitive closure of $N$ (which is precisely the distance induced by the reflexive and transitive closure of $\to_{N}$). This is nothing but the quantitative{} refinement of the well-known Church-Rosser property. In a similar fashion, we obtain the quantitative{} refinement of confluence; and since $\mathcal{N}$ is confluent --- as we will see in \autoref{sect:qtrs-confluence} --- it also has the aforementioned (quantitative{}) Church-Rosser property. Additionally, $\mathcal{N}$ is terminating (in a suitable sense that we will see in \autoref{section:qars}), so that not only can we approximate $\makedistance{N}(e,f)$ by measuring the distance between the common reducts of $e$ and $f$, but we can also reduce the search space to normal forms. Now that the reader is warmed up, we can move to slightly more involved (and interesting) examples: those will also give us the chance to introduce, still at an informal level, a few more notions related to quantitative{} rewriting systems. \subsection{Quantitative String Rewriting Systems} \label{sect:string-rewriting} Historically, rewriting systems have first appeared in the form of \emph{string rewriting systems}~\cite{string-rewriting-systems,thue}: we thus find it appropriate to include examples of \emph{quantitative{} string rewriting systems} in this motivational section. Recall that given an alphabet $\Sigma$, a string rewriting system is given by a relation $\mapsto_{R}$ on strings $\Sigma^*$ over $\Sigma$. The relation $\mapsto_{R}$ induces a rewriting relation $\to_{R}$ that rewrites substrings according to $\mapsto_{R}$.
That is, whenever we have $t \mapsto_{R} s$, then we have $u t v \to_{R} u s v$ too, where $u,v$ are strings and we write string concatenation as juxtaposition. In this example, we consider a family of classic examples of string rewriting systems: \emph{DNA-based systems}. Let us fix the alphabet $\Sigma_{\mathcal{M}} \triangleq \{\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}\}$ of DNA bases (the subscript $\mathcal{M}$ stands for \emph{molecule}). We view strings over this alphabet as representing DNA molecules or DNA sequences, so that, for instance, we view a string such as $\texttt{T}\texttt{A}\texttt{G}\texttt{C}\texttt{T}\texttt{A}\texttt{G}\texttt{C}\texttt{T}\texttt{A}\texttt{G}\texttt{C}\texttt{T}$ as describing a DNA molecule. A string rewriting system over $\Sigma_{\mathcal{M}}$ specifies how DNA molecules can be transformed into one another, and thus it is a crucial tool to deal with \emph{word problems}, i.e. problems asking whether two DNA molecules are equal. In fact, once we know that equality coincides with convertibility in a string rewriting system, it is sufficient to prove confluence of the latter to obtain semi-decidability of equality (and thus of its associated word problem); and if, additionally, the system is terminating, then equality is decidable. \paragraph{Quantitative String Rewriting Systems} When we transform a DNA molecule into another one, however, we usually obtain \emph{different} molecules, so that reasoning about DNA sequences in terms of equality or convertibility is often too restrictive. And in fact, researchers are more interested in measuring distances between molecules rather than studying their equivalence. For instance, if we modify a DNA molecule to cure or prevent a disease, we obviously do \emph{not} want our modification to make the involved molecules equivalent. Similarly, to measure DNA compatibility and similarity it is not realistic to look at exact equivalence between molecules: instead, one should look for metrics and distances between them. To cope with these problems, we move from traditional string rewriting systems to \emph{quantitative{} string rewriting systems}. Following the same ideas of \autoref{sect:natural-numbers}, we refine string rewriting relations to ternary quantitative{} (rewriting) relations relating pairs of strings with non-negative extended real numbers.\footnote{ We apply the same notational conventions of the previous section, hence using the notations $\mrel{\varepsilon}{e}{\mapsto_{R}}{f}$ and $e \qstepto{\varepsilon}_{R} f$ interchangeably. } Here is an example of a quantitative{} string rewriting system --- called $\mathcal{M}$ --- over the DNA alphabet, where $\lambda$ denotes the empty string, $b, c \in \{\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}\}$, and $b \neq c$ in the last rule. { \centering \begin{tcolorbox}[boxrule=0.5pt,width=\linewidth/2,colframe=black,colback=black!0!white,arc=0mm] $$ b \qstepto{1}_{M} \lambda \qquad \lambda \qstepto{1}_{M} b \qquad b \qstepto{1}_{M} c $$ \end{tcolorbox} } Ignoring its quantitative dimension, system $\mathcal{M} = (\Sigma_{\mathcal{M}}, \mapsto_{M})$ allows us to substitute bases with one another inside any molecule --- so that, for instance, we can always replace $\texttt{A}$ with $\texttt{G}$ --- as well as to arbitrarily erase and insert bases inside a molecule. This results in an inconsistent (equational) system, in the sense that any two molecules are convertible. The situation changes when we take the quantitative{} information into account.
A rewriting step $e \qstepto{\varepsilon} f$ gives the distance between $e$ and $f$ and a rewriting sequence $$ f_1 \qreduce{\varepsilon_1} f_2 \qreduce{\varepsilon_2} \cdots \qreduce{\varepsilon_{n-1}} f_n $$ produces the distance $\varepsilon_i$ when rewriting $f_i$ into $f_{i+1}$, so that the overall distance between $f_1$ and $f_n$ is bounded by $\sum_i \varepsilon_i$. For this example, we have stipulated the distance produced by substitution, inserting, and deleting a base to be $1$, although we could have chosen any non-negative extended real number. For instance, the rewriting relation defined below measures mutations between a purine ($\texttt{A},\texttt{G}$) and a pyrimidine $(\texttt{C}$, $\texttt{T}$) only.\footnote{ The distance induced by these mutations is used to study virus and cancer proliferation under control of drugs or the immune system~\cite{encyclopedia-of-distances}.} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=\linewidth/2,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \begin{align*} \texttt{A} &\qstepto{1}_{} \texttt{C} & \texttt{G} &\qstepto{1}_{} \texttt{T} & \texttt{A} &\qstepto{1}_{} \texttt{T} \\ \texttt{A} &\qstepto{0}_{} \texttt{G} & \texttt{G} &\qstepto{1}_{} \texttt{C} & \texttt{C} &\qstepto{0}_{} \texttt{T} \end{align*} \end{tcolorbox} } But what is the meaning of such distances? As before, the rewriting relation $\to_{M}$ induces a distance $M$ on molecules defined by $ M(e,f) \triangleq \inf\{\varepsilon \mid e \qreduce{\varepsilon}_{M} f\}, $ so that $(\Sigma^*_{\mathcal{M}}, M)$ is a quantitative{} abstract rewriting system. Consequently, we can consider the convertibility pseudometric $\makedistance{M}$ induced by $M$ and realise that the latter is nothing but the \emph{Levenshtein distance} \cite{string-algorithms,encyclopedia-of-distances} between DNA molecules. This means that system $\mathcal{M}$ is a way to formalise the computational content of the Levenshtein distance, and that $M$ is an operational definition of the latter. In particular, any bidirectional rewriting sequence between molecules $e$ and $f$ approximates the Levenshtein distance between them. At this point, we can study properties of the Levenshtein distance and, most importantly, of its computation relying on the theory of quantitative{} rewriting systems that we will introduce later in this paper. As we shall see, $M$ is confluent, so that we can approximate the convertibility distance (i.e. the Levenshtein distance) between molecules as the sum of the rewriting distances into their common reducts: $$ \makedistance{M}(e,f) = \inf_g M^*(e,g) + M^*(f,g). $$ Additionally, even if system $\mathcal{M}$ is not terminating, we can extract a terminating system out of it. All of that holds not only for $\mathcal{M}$, but also for its variations. For instance, allowing $\mapsto_{M}$ to perform substitutions only (so that we simply have the rule $b \qstepto{1}_{M} c$, for $b,c$ different bases), we see that $\makedistance{M}$ measures the number of mutations between DNA sequences, and thus gives the \emph{Hamming distance} between molecules~\cite{string-algorithms}. Similarly, the distance induced by the previously defined quantitative{} relation measuring mutations between purines ($\texttt{A},\texttt{G}$) and pyrimidines $(\texttt{C}$, $\texttt{T}$) gives the so-called Eigen–McCaskill–Schuster distance\footnote{One obtains the Watson–Crick distance in a similar way~\cite{encyclopedia-of-distances}.} between molecules~\cite{encyclopedia-of-distances}. 
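To connect the rewriting-based reading of $\makedistance{M}$ with its standard algorithmic computation, here is a minimal Python sketch (ours) of the usual dynamic-programming recurrence for the Levenshtein distance, annotated with the rule of $\mathcal{M}$ that each case corresponds to; as usual~\cite{string-algorithms}, it computes the optimal value that individual bidirectional rewriting sequences can only approximate from above.
\begin{verbatim}
def levenshtein(e, f):
    """Convertibility distance induced by system M: unit-cost deletions,
    insertions, and substitutions, computed by dynamic programming."""
    m, n = len(e), len(f)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): d[i][0] = i          # delete the remaining bases of e
    for j in range(n + 1): d[0][j] = j          # insert the remaining bases of f
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if e[i - 1] == f[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # b |->1 lambda  (deletion)
                          d[i][j - 1] + 1,        # lambda |->1 b  (insertion)
                          d[i - 1][j - 1] + sub)  # b |->1 c       (substitution)
    return d[m][n]

# e.g. levenshtein("TAGCT", "TACGT") == 2: a rewriting sequence realising
# this bound substitutes G |->1 C and then C |->1 G.
\end{verbatim}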
\paragraph{Metric Word Problems} Other interesting properties of the Levenshtein distance, as well as of the other aforementioned distances, can be described operationally in terms of \emph{metric word problems}. Contrary to traditional (string) rewriting systems, in the quantitative{} world a word problem can take several forms, to which we shall generically refer as \emph{metric word problems}. Here are some of those. \begin{description} \item[\textbf{Reachability}] The \emph{reachability problem} is the quantitative{} refinement of the traditional word problem. Given a quantitative{} (abstract) rewriting system $\mathcal{R} = (A, R)$, the reachability problem asks whether $\makedistance{R}(a,b) < \infty$, for elements $a, b \in A$. If $R$ is confluent, then the reachability problem is semi-decidable; if, additionally, $R$ is terminating, then we obtain decidability of the reachability problem. The reachability problem for $\mathcal{M}$ --- i.e. for its associated abstract system $(\Sigma^*_{\mathcal{M}}, M)$ --- as well as for variations thereof, is indeed decidable. In fact, in this paper we will introduce several techniques to prove confluence of $M$. Moreover, even if $M$ is not terminating, we can easily extract a terminating rewriting relation out of it (for instance, we force substitution rules to be asymmetric and stipulate that bases can be deleted, but not inserted). \item[\textbf{$\varepsilon$-Reachability}] More interesting metric word problems are obtained by strengthening the reachability problem to what we shall call \emph{$\varepsilon$-reachability} problems. Fixing a number $\varepsilon$, the $\varepsilon$-reachability problem asks whether $\makedistance{R}(a,b) < \varepsilon$ holds. Equivalently, the $\varepsilon$-reachability problem asks whether there exists a bidirectional $R$-rewriting sequence between $a$ and $b$ producing a distance smaller than $\varepsilon$. Contrary to reachability (which is nothing but $\infty$-reachability), confluence and termination are in general not enough to solve the $\varepsilon$-reachability problem. In fact, looking at the rewriting paths leading to the common normal form (if any) of two objects $a, b$ can give only coarse (over)approximations of $\makedistance{R}(a,b)$, as illustrated by the following rewriting diagram: \[ \xymatrix{ a \ar@{<->}[rr]^{\varepsilon + \delta} \ar[rd]_{\varepsilon} & & b \ar[ld]^{\delta} \\ & c \ar@{->>}[d]^{\eta} & \\ & d & } \] Assuming the system to be confluent, one can try to obtain better approximations by enlarging the state space and looking at arbitrary common reducts (and this strategy is indeed sound and complete, since confluence of $R$ entails $\makedistance{R}(a,b) = \inf_{c} R^*(a,c) + R^*(b,c)$). That, however, does not solve the $\varepsilon$-reachability problem either, as there may be infinitely many such reducts (see, for instance, \autoref{ex:example-4}). \item[\textbf{Shortest Path}] The \emph{shortest path} problem is specific to quantitative{} string and term rewriting systems. Given such a system $\mathcal{R} = (\Sigma, \mapsto_{R})$, the shortest path problem for $\mathcal{R}$ asks to determine whether $$ \makedistance{R}(e,f) = \min\{\varepsilon \mid \mrel {\varepsilon} {e} {\equiv_{R}} {f}\}, $$ i.e. whether the infimum $\inf\{\varepsilon \mid \mrel {\varepsilon} {e} {\equiv_{R}} {f}\}$ is achieved by an actual conversion $\mrel {\varepsilon} {e} {\equiv_{R}} {f} $ (we write $\equiv_{R}$ for the ternary conversion relation induced by $\to_{R}$).
All the systems seen in this section have a shortest path, although finding such a path is usually difficult. Indeed, in all these cases, shortest paths are usually found relying on optimisation techniques and dynamic programming~\cite{string-algorithms}, and it is thus an interesting question whether solutions to this problem can be given in terms of quantitative{} rewriting. The shortest path problem, additionally, is particularly interesting from a rewriting perspective because it opens the door to another problem: the \emph{optimal strategy problem}. \item[\textbf{Optimal Strategy}] Assuming a term or string rewriting system $\mathcal{R} = (\Sigma, \mapsto_{R})$ to have shortest paths, the \emph{optimal strategy} problem asks whether there exists a quantitative{} rewriting strategy $\to_{R_{\mathtt{s}}}$ such that:\footnote{Actually, several variations of this problem can be given simply by replacing $\to_{R_{\mathtt{s}}}^*$ with other relations related to $\to_{R_{\mathtt{s}}}$.} $$ \mrel {\varepsilon} {e} {\equiv_{R}} {f} \text{ } \iff \text{ } \mrel {\varepsilon} {e} {\to_{R_{\mathtt{s}}}^* } {f}. $$ An optimal strategy, or an approximation thereof, for $\mathcal{M}$ can then be used to efficiently compute distances between DNA molecules. To the best of the authors' knowledge, optimal strategy problems for the systems considered so far are still open. \end{description} \subsection{Beyond Traditional Rewriting: Quantitative Term Rewriting Systems} \label{sect:combinators-intro} We now go beyond string rewriting systems and take a closer look at examples of quantitative{} \emph{term} rewriting systems. Even if we have already seen an example of a quantitative{} term rewriting system --- the system of natural numbers of \autoref{sect:natural-numbers} --- we now consider more interesting examples of such systems and take the chance to illustrate further features of quantitative{} rewriting. Among the many examples available, we focus on those coming from the field of (quantitative) algebras and programming language theory. \paragraph{Affine Combinatory Logic} As a first example, we consider a basic system of affine combinators that we shall enrich with effectful and quantitative{} primitives in subsequent sections: system $\mathcal{K}$ of \emph{affine combinatory logic}~\cite{Barendregt/Book/1984,hindley-basic-simple-type-theory,lambda-calculus-and-combinators-hindley-seldin}. System $\mathcal{K}$ has three constants (known as basic combinators) --- $\texttt{B}$, $\texttt{C}$, and $\texttt{K}$ --- and a single binary operation symbol $\cdot$ for application. We denote by $\Sigma_{\mathcal{K}}$ the signature thus obtained. As usual, we assume application to associate to the left and omit unnecessary parentheses. We refer to terms written by means of variables, basic combinators, and application as \emph{combinators}. Even if $\mathcal{K}$ is historically defined as an equational theory (from which a rewriting system is then extracted), we directly define $\mathcal{K}$ by means of rewriting rules as follows, with $\mapsto$ being the (ground) reduction relation and $\to$ being defined by applying substitution instances of $\mapsto$ inside arbitrary contexts.
{ \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*3)/4,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \[ \texttt{B} \cdot x \cdot y \cdot z \mapsto x \cdot (y \cdot z) \qquad \texttt{C} \cdot x \cdot y \cdot z \mapsto x \cdot z \cdot y \qquad \texttt{K} \cdot x \cdot y \mapsto x \] \vspace{-0.2cm} \[ \infer{C[t^\sigma] \to C[s^\sigma]}{t \mapsto s} \] \end{tcolorbox} } To obtain a quantitative{} refinement of system $\mathcal{K}$, we assign distances in $[0,\infty]$ to basic rewriting rules, this way obtaining the quantitative{} rewriting relation $\mapsto_{K}$ defined by\footnote{ We use the same notational conventions introduced for string rewriting systems.} \begin{align*} \texttt{B} \cdot x \cdot y \cdot z &\qstepto{0}_{K} x \cdot (y \cdot z) \\ \texttt{C} \cdot x \cdot y \cdot z &\qstepto{0}_{K} x \cdot z \cdot y \\ \texttt{K} \cdot x \cdot y &\qstepto{0}_{K} x, \end{align*} and then extending $\mapsto_{K}$ to $\to_{K}$ by non-expansively propagating distances produced by substitution instances of $\mapsto_{K}$ throughout arbitrary contexts of the language. \[ \infer{C[t^\sigma] \qreduce{\varepsilon}_{K} C[s^\sigma]}{t \qstepto{\varepsilon}_{K} s} \] Although quantitative, the system thus obtained can only produce trivial distances (i.e. either $0$ or $\infty$), since no rewriting rule creates non-zero distances. In the next section, we will introduce effectful quantitative{} extensions of $\mathcal{K}$. For the moment, we simply extend $\mathcal{K}$ with (the combinatory counterpart of) system $\mathcal{N} = (\Sigma_{\mathcal{N}}, \mapsto_{N})$ of \autoref{sect:natural-numbers}. That is, we add to $\mathcal{K}$ natural numbers and addition. Even if the resulting system is not particularly interesting from a programming language perspective, it gives us the chance to illustrate the role played by linearity in quantitative{} and metric reasoning. We thus consider the three additional basic combinators: $\texttt{Z}$, $\texttt{S}$, and $\texttt{A}$ for zero, successor, and addition, respectively. The system $\BCK_{\NATS} = (\Sigma_{\BCK_{\NATS}}, \mapsto_{\combrelone_\natrelone})$ of affine combinators with natural numbers is given by the signature $\Sigma_{\BCK_{\NATS}} \triangleq \Sigma_{\bck} \cup \{\texttt{Z},\texttt{S}, \texttt{A}\}$ and the quantitative{} rewriting relation defined thus: { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*3)/4,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \[ \texttt{B} \cdot x \cdot y \cdot z \qstepto{0}_{\combrelone_\natrelone} x \cdot (y \cdot z) \qquad \texttt{C} \cdot x \cdot y \cdot z \qstepto{0}_{\combrelone_\natrelone} x \cdot z \cdot y \qquad \texttt{K} \cdot x \cdot y \qstepto{0}_{\combrelone_\natrelone} x \] \vspace{-0.2cm} \[ \texttt{A} \cdot x \cdot \texttt{Z} \qstepto{0}_{\combrelone_\natrelone} x \qquad \texttt{A} \cdot x \cdot (\texttt{S} \cdot y) \qstepto{0}_{\combrelone_\natrelone} \texttt{S} \cdot (\texttt{A} \cdot x \cdot y) \qquad \texttt{S} \cdot x \qstepto{1}_{\combrelone_\natrelone} x \] \vspace{-0.2cm} \[ \infer{C[t^\sigma] \qreduce{\varepsilon}_{\combrelone_\natrelone} C[s^\sigma]}{t \qstepto{\varepsilon}_{\combrelone_\natrelone} s} \] \end{tcolorbox} } As usual $\mapsto_{\combrelone_\natrelone}$ induces a distance $\combrelone_\natrelone$ on combinators defined by $ \combrelone_\natrelone(t,s) \triangleq \inf\{\varepsilon \mid t \qreduce{\varepsilon}_{\combrelone_\natrelone} s\}, $ from which we obtain a pesudometric $\makedistance{\combrelone_\natrelone}$. 
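As a small, self-contained illustration (ours; the term representation and all names are hypothetical), the root steps of $\mapsto_{\combrelone_\natrelone}$ and the distances they produce can be sketched in Python as follows.
\begin{verbatim}
# Combinators are strings, application is a nested pair ('app', t, s).
def app(*ts):
    t = ts[0]
    for s in ts[1:]:
        t = ('app', t, s)
    return t

def spine(t):
    """Decompose t = h . t1 ... tk into its head h and arguments [t1,...,tk]."""
    args = []
    while isinstance(t, tuple):
        args.append(t[2]); t = t[1]
    return t, args[::-1]

def root_step(t):
    """Root instance of |->_{K_N}: returns (reduct, produced distance) or None."""
    h, a = spine(t)
    if h == 'B' and len(a) >= 3: return app(a[0], app(a[1], a[2]), *a[3:]), 0
    if h == 'C' and len(a) >= 3: return app(a[0], a[2], a[1], *a[3:]), 0
    if h == 'K' and len(a) >= 2: return app(a[0], *a[2:]), 0
    if h == 'A' and len(a) >= 2 and a[1] == 'Z': return app(a[0], *a[2:]), 0
    if h == 'A' and len(a) >= 2 and isinstance(a[1], tuple) and a[1][1] == 'S':
        return app('S', app('A', a[0], a[1][2]), *a[2:]), 0
    if h == 'S' and a: return app(a[0], *a[1:]), 1   # deleting a successor costs 1
    return None

# Closing root_step under contexts, the zero-cost steps alone rewrite
# A . <n> . <m> to the numeral <n+m>, while each successor deletion adds 1
# to the produced distance, so that <n> rewrites to <m> (n >= m) at cost n - m.
\end{verbatim}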
Let us now turn our attention to the definition of $\to_{\combrelone_\natrelone}$. Given a quantitative{} rewriting relation $\mapsto_{R}$, all the systems considered so far define $\to_{R}$ by forcing \emph{non-expansiveness} of contexts and substitution (cf. quantitative{} equational theories). System $\BCK_{\NATS}$ is no exception. The defining rules of $\to_{\combrelone_\natrelone}$ ensures that the application operation is non-expansive with respect to $\makedistance{\combrelone_\natrelone}$. Formally: $$ \makedistance{\combrelone_\natrelone}(t,t') + \makedistance{\combrelone_\natrelone}(s,s') \geq \makedistance{\combrelone_\natrelone}(t\cdot s, t' \cdot s'). $$ In particular, if $\mrel{\varepsilon}{e}{\to_{\combrelone_\natrelone}}{e'}$ and $\mrel{\delta}{f}{\to_{\combrelone_\natrelone}}{f'}$, then $\mrel{\varepsilon+\delta}{e \cdot f}{\to_{\combrelone_\natrelone}}{e' \cdot f'}$. Non-expansiveness, however, does not come for free: it is a direct consequence of \emph{linearity}\footnote{ The word \emph{linearity} is used both in rewriting and in logic with different, although similar, meaning. For the moment, we use it informally to indicate the absence of variable duplication, leaving formal definitions to the technical part of this paper.} of $\BCK_{\NATS}$. In fact, the addition of a non-linear combinator such as $\texttt{W}$ directly leads to breaking non-expansiveness (in the sense that forcing non-expansiveness leads to undesired results, such as distance trivialisation and non-confluence). To see that, let us add the basic combinator $\texttt{W}$ together with the following rewriting rule to our system. $$ \texttt{W} \cdot x \cdot y \qstepto{0}_{\combrelone_\natrelone} x \cdot y \cdot y $$ We now show that the presence of $\texttt{W}$ makes quantitative{} reasoning trivial. \begin{notation} Let us write $\code{n}$ for the combinator $\texttt{S} \cdot ( \cdots \cdot (\texttt{S} \cdot \texttt{Z}))$, with $n$ applications of $\texttt{S}$, so that $\makedistance{\combrelone_\natrelone}(\code{n}, \code{m}) = |n - m|$ and $\makedistance{\combrelone_\natrelone}(\texttt{A} \cdot \code{n} \cdot \code{m}, \code{n+m}) = 0$. \end{notation} \begin{proposition}[Distance Trivialisation] \label{prop:distance-trivialisation} In presence of the combinator $\texttt{W}$, the convertibility distance $\makedistance{\combrelone_\natrelone}$ trivialises, meaning that the distance $\makedistance{\combrelone_\natrelone}(t,s)$ is either $0$ or $\infty$, for all combinators $t,s$. \end{proposition} \begin{proof} Given combinators $t,s$, let $\makedistance{\combrelone_\natrelone}(t,s) = \varepsilon$. If $\varepsilon$ is $\infty$, we are done. Otherwise, $\varepsilon$ is a natural number, since the defining rule of $\mapsto_{\combrelone_\natrelone}$ ensures the codomain of $\makedistance{\combrelone_\natrelone}$ to actually be $\mathbb{N}^{\infty}$. Consequently, we have combinators $\code{m}$, $\code{n}$ such that $\makedistance{\combrelone_\natrelone}(\code{m}, \code{n}) = \varepsilon$. Notice also that whenever we have combinators $t,t',s,s'$ such that $\makedistance{\combrelone_\natrelone}(t,t') = 0$ and $\makedistance{\combrelone_\natrelone}(s,s') = 0$, then $\makedistance{\combrelone_\natrelone}(t,s) = \makedistance{\combrelone_\natrelone}(t',s')$. Thus, for instance, we see that $$ \makedistance{\combrelone_\natrelone}(\texttt{W} \cdot t \cdot s, \texttt{W} \cdot t' \cdot s') = \makedistance{\combrelone_\natrelone}(t \cdot s \cdot s, t' \cdot s' \cdot s'). 
$$ Non-expansiveness of application then gives (using $\makedistance{\combrelone_\natrelone}(\texttt{W},\texttt{W}) = 0$ and $\makedistance{\combrelone_\natrelone}(\texttt{A},\texttt{A}) = 0$): \begin{align*} \makedistance{\combrelone_\natrelone}(\code{n}, \code{m}) &\geq \makedistance{\combrelone_\natrelone}(\texttt{W} \cdot \texttt{A} \cdot \code{n}, \texttt{W} \cdot \texttt{A} \cdot \code{m}) \\ &= \makedistance{\combrelone_\natrelone}(\texttt{A} \cdot \code{n} \cdot \code{n}, \texttt{A} \cdot \code{m} \cdot \code{m}) \\ &= \makedistance{\combrelone_\natrelone}(\code{n+n}, \code{m+m}), \end{align*} meaning that we have $\varepsilon \geq \varepsilon + \varepsilon$. In $\mathbb{N}^{\infty}$ this is possible only for $0$ and $\infty$, and thus we conclude $\makedistance{\combrelone_\natrelone}(t,s)=0$. \end{proof} \autoref{prop:distance-trivialisation} is known as \emph{distance trivialisation} or \emph{distance amplification} \cite{DBLP:phd/basesearch/Gavazzo19,CrubilleDalLago/ESOP/2017} and it has been deeply investigated in the study of (effectful) program distances. Linearity, and variations thereof, are a way to avoid trivialisation of quantitative{} reasoning. Additionally, we shall see in \autoref{sect:qtrs-confluence} that linearity is also crucial to ensure quantitative{} forms of confluence, the latter being, together with distance trivialisation, the reason why in \autoref{sect:qtrs} we will focus on \emph{linear non-expansive} rewriting systems. In \autoref{sect:beyond-non-expansive-systems}, we will see how moving to \emph{graded (modal) systems} gives us a way to go beyond the linearity assumption and refine non-expansive systems to \emph{Lipschitz continuous} ones. \subsubsection{Effectful Combinatory Logic} System $\BCK_{\NATS}$ of affine combinators and arithmetic allowed us to highlight the role of linearity and non-expansiveness in quantitative{} reasoning. Apart from that, system $\BCK_{\NATS}$ is not particularly interesting. Here, we extend affine combinators with quantitative{} algebraic theories modelling computational effects~\cite{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017,plotkin-quantitative-algebras-2018,plotkin-quantitative-algebras-2018-bis,plotkin-quantitative-algebras-2021,plotkin-quantitative-algebras-2021-bis,DBLP:conf/lics/MioSV21}. Probabilistic~\cite{foundations-probabilistic-programming} and, more generally, effectful programming languages have been extensively studied in the last decade using, among others, probabilistic~\cite{Bournez-2002,Bournez-2005,Faggian-2019,Faggian-Ronchi-2019,dal-lago-avanzini-yamada} and effectful~\cite{Gavazzo-Faggian-2021} rewriting systems. Such systems come in two flavours, depending on whether effects are considered internally or externally to the system. In the latter case, one obtains \emph{probabilistic}~\cite{Bournez-2002,Bournez-2005,dal-lago-avanzini-yamada} and \emph{monadic rewriting systems}~\cite{Gavazzo-Faggian-2021}. In the former case, instead, one models (equational theories defining) computational effects themselves as rewriting systems and then combines the latter with the actual calculus or programming language at hand, which is modelled as a rewriting system itself. Here, we follow the second approach (modelling effects themselves as rewriting systems) and look at computational effects as defined by quantitative{} equational theories~\cite{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017}.
\paragraph{Barycentric Algebras} Let us begin with one of the main examples of a quantitative equational theory: \emph{barycentric algebras}. Barycentric algebras have been introduced by \citet{stone49} as an equational axiomatisation of finite distributions, and they have recently been refined as a \emph{quantitative{} equational theory} by \citet{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017}. Here, we present such a quantitative{} refinement directly as a quantitative{} term rewriting system. Let us consider a signature $\Sigma_{\mathcal{B}}$ containing a family of binary probabilistic choice operations $\barplus{\epsilon}$ indexed by rational numbers $\epsilon \in \mathbb{Q} \cap [0,1]$. The quantitative{} term rewriting system $\mathcal{B} = (\Sigma_{\mathcal{B}}, \mapsto_{B})$ of barycentric algebras is defined thus:
{ \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*3)/4,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \begin{align*} x \barplus{1} y &\qstepto{0}_{B} x & \\ x \barplus{\epsilon} y &\qstepto{0}_{B} y \barplus{1 - \epsilon} x & \\ (x \barplus{\epsilon_1} y) \barplus{\epsilon_2} z &\qstepto{0}_{B} x \barplus{\epsilon_1 \epsilon_2} (y \barplus{\frac{\epsilon_1 - \epsilon_1\epsilon_2}{1 - \epsilon_1\epsilon_2}} z) & \epsilon_1, \epsilon_2 \in (0,1) \\ x \barplus{\epsilon} y &\qstepto{\varepsilon}_{B} z \barplus{\epsilon} y & \epsilon \leq \varepsilon \in \mathbb{Q} \cap [0,1] \end{align*} \end{tcolorbox} }
Notice that system $\mathcal{B}$ does not have the idempotency rule $ x \barplus{\epsilon} x \qstepto{0}_{B} x$, meaning that we are actually modelling \emph{multi-distributions}~\cite{dal-lago-avanzini-yamada} rather than distributions: this guarantees linearity of $\mathcal{B}$ and thus agrees with the definition of a probabilistic rewriting system by \citet{dal-lago-avanzini-yamada}, which is indeed based on multi-distributions. The operation $\barplus{\epsilon}$ behaves as an unfair (binary) probabilistic choice operation weighted by $\epsilon$, so that we can read $x \barplus{\epsilon} y$ as stating that we have $x$ with probability $\epsilon$ and $y$ with probability $1 - \epsilon$. Accordingly, for a set $X$ of variables, a term in $\terms{\Sigma_{\mathcal{B}}}{X}$ can be seen as a finite \emph{formal sum}, i.e. a syntactic representation of a finitely supported distribution. As usual, starting from $\mapsto_{B}$, we obtain the rewriting relation $\to_{B}$, the rewriting distance $B$, and the convertibility distance $\makedistance{B}$. Remarkably, the latter is precisely the \emph{total variation distance}~\cite{Villani/optimal-transport/2008} between multi-distributions (see \autoref{sect:beyond-non-expansive-systems} for another example of a probabilistic distance). We can then combine systems $\mathcal{B}$ and $\bck$ (or even $\BCK_{\NATS}$), this way obtaining the quantitative{} term rewriting system $\BCK_{\BA} = (\Sigma_{\mathcal{K}} \cup \Sigma_{\mathcal{B}}, \mapsto_{\combreloneB})$ for probabilistic affine combinatory logic,\footnote{Another, more powerful, system is obtained by modelling operations $\barplus{\epsilon}$ as combinators.} as summarised in \autoref{figure:probabilistic-bck}.
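To give a concrete feel for the distance at play here, the following Python sketch --- ours, purely illustrative, and not part of the formal development; all function names are our own --- reads barycentric terms as finitely supported distributions over variables and computes the total variation distance between them using the usual formula $\mathit{TV}(\mu,\nu) = \frac{1}{2}\sum_{x}|\mu(x) - \nu(x)|$. This collapsed, distribution-level reading is only a first approximation: as remarked above, the formal development works with multi-distributions.
\begin{verbatim}
# A simplified, purely illustrative reading (ours) of barycentric terms as
# finitely supported distributions over variables: x (+)_e y puts weight e
# on the left branch and 1 - e on the right one.  Total variation is then
# computed with the usual formula TV(mu, nu) = 1/2 * sum_x |mu(x) - nu(x)|.
from collections import defaultdict

def denote(term, weight=1.0, acc=None):
    """Terms are variable names (strings) or triples (left, e, right)."""
    acc = defaultdict(float) if acc is None else acc
    if isinstance(term, str):
        acc[term] += weight
    else:
        left, e, right = term
        denote(left, weight * e, acc)
        denote(right, weight * (1 - e), acc)
    return acc

def tv(t, s):
    mu, nu = denote(t), denote(s)
    support = set(mu) | set(nu)
    return 0.5 * sum(abs(mu[x] - nu[x]) for x in support)

t = ("x", 0.5, "y")                # x (+)_{1/2} y
s = ("x", 0.5, ("y", 0.5, "z"))    # x (+)_{1/2} (y (+)_{1/2} z)
print(tv(t, s))                    # 0.25
\end{verbatim}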
\begin{figure} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=\linewidth,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \begin{align*} \texttt{B} \cdot x \cdot y \cdot z &\qstepto{0}_{\combreloneB} x \cdot (y \cdot z) & (x \barplus{\epsilon_1} y) \barplus{\epsilon_2} z &\qstepto{0}_{\combreloneB} x \barplus{\epsilon_1 \epsilon_2} (y \barplus{\frac{\epsilon_1 - \epsilon_1\epsilon_2}{1 - \epsilon_1\epsilon_2}} z) & \epsilon_1, \epsilon_2 \in (0,1) \\ \texttt{C} \cdot x \cdot y \cdot z &\qstepto{0}_{\combreloneB} x \cdot z \cdot y & x \barplus{\epsilon} y &\qstepto{\varepsilon}_{\combreloneB} z \barplus{\epsilon} y & \epsilon \leq \varepsilon \in \mathbb{Q} \cap [0,1] \\ \texttt{K} \cdot x \cdot y &\qstepto{0}_{\combreloneB} x & x \barplus{1} y &\qstepto{0}_{\combreloneB} x & \\ & & x \barplus{\epsilon} y &\qstepto{0}_{\combreloneB} y \barplus{1 - \epsilon} x & \end{align*} \end{tcolorbox} } \caption{The quantitative{} rewriting relation $\mapsto_{\combreloneB}$} \label{figure:probabilistic-bck} \end{figure}
In particular, the convertibility distance induced by $\mapsto_{\combreloneB}$ is essentially $\makedistance{(K \min B)}$, where $(K \min B)(t,s) \triangleq \min(K(t,s), B(t,s))$, which gives the total variation distance between probabilistic combinators. That puts together the usual equational theory of combinators with the quantitative{} analysis of probabilistic choice, this way giving a quantitative{} theory of probabilistic (affine) computation. In light of that, quantitative{} rewriting properties and metric word problems become interesting both for the (quantitative{}) equational theory of probabilistic affine computations (is the theory consistent? Is it decidable or semi-decidable?) and for its operational semantics (is reduction confluent? Do we have an optimal strategy?). In this paper, we shall prove confluence of $(\Sigma_{\mathcal{K}} \cup \Sigma_{\mathcal{B}}, \mapsto_{\combreloneB})$. Consequently, we will obtain consistency of its (quantitative{}) equational theory and semi-decidability of the reachability problem. Achieving such a result is nontrivial and requires the introduction of several new results on quantitative{} rewriting systems. In particular, we will prove confluence in a modular fashion relying on a suitable quantitative{} refinement of the Hindley-Rosen Lemma~\cite{hindley-1964,rosen-70} (\autoref{lemma:hindley-rosen}) and proving confluence of $(\Sigma_{\mathcal{K}}, \mapsto_{K})$ and $(\Sigma_{\mathcal{B}}, \mapsto_{B})$ separately (\autoref{sect:qtrs-confluence-part-2}), the latter requiring the extension of critical pair-like lemmas~\cite{Huet80} to quantitative{} rewriting systems (\autoref{sect:qtrs-confluence} and \autoref{sect:qtrs-confluence-part-2}).
\paragraph{Ticking} Barycentric algebras are just \emph{one} example of a quantitative{} algebraic theory used to model computational effects. Other examples include the theory of quantitative{} semilattices~\cite{plotkin-quantitative-algebras-2016} (whose associated distance is the Hausdorff distance), quantitative{} global states, and quantitative{} output \cite{bacci-mardare-panangaden-plotkin-2020}. Here, we introduce the quantitative{} theory of ticking, a specific instance of quantitative{} output used in improvement theory and cost analysis \cite{Sands/Improvement-theory/1998} to study intensional aspects of programs.
Let us consider the monoid $(\mathbb{N}, +, 0)$ of natural numbers with addition endowed with the Euclidean distance.\footnote{ A more general definition can be given by fixing a quantitative{} output monoid, that is a monoid endowed with a generalised distance~\cite{Lawvere/GeneralizedMetricSpaces/1973} making monoid multiplication non-expansive. Besides the monoid of natural numbers with the Euclidean distance, another classic example of quantitative{} output monoid is given by the monoid of words over an alphabet endowed with the least common prefix distance.} The (quantitative{}) term rewriting system $\mathcal{T} = (\Sigma_{\mathcal{T}}, \mapsto_{T})$ of ticking is defined by the signature $\Sigma_{\mathcal{T}}$ of unary operation symbols $\writeop{n}{(\cdot)}$ indexed by elements $n \in \mathbb{N}$ and the following rewriting rules:
{ \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*2)/3,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \begin{align*} \writeop{0}{x} &\qstepto{0}_{T} x & \\ \writeop{n}{(\writeop{m}{x})} & \qstepto{0}_{T} \writeop{(n+m)}{x} & \\ \writeop{n}{x} & \qstepto{\varepsilon}_{T} \writeop{m}{x} & \varepsilon \geq E(n,m) \end{align*} \end{tcolorbox} }
The operation $\writeop{n}{e}$ can be informally read as \emph{count $n$ units of cost, then continue as $e$}. Oftentimes, one writes terms of the form $\writeop{1}{e}$ as $\checkmark e$ and decorates programs with $\checkmark$ annotations to count computation steps (for instance, in systems based on the $\lambda$-calculus or combinatory logic, applications $e \cdot f$ are decorated as $\checkmark(e \cdot f)$: this way, one measures the cost of a computation as the number of applications performed).\footnote{ The ticking operation can be seen as a particular instance of an output operation, where the output produced is the cost of computation. \citet{bacci-mardare-panangaden-plotkin-2020} have shown that the quantitative{} equational theory associated to this reading of ticking is exactly the theory of the quantitative{} writer monad, which specialises to the one of the cost or ticking monad \cite{DBLP:conf/esop/LagoG19,Sands/Improvement-theory/1998}.} In this case, we actually obtain a simplified system $\mathcal{T}_{\checkmark}$ whose signature contains the unary function symbol $\checkmark$ only and whose (unique) rewriting rule is the following:
\begin{align*} \checkmark x &\qstepto{1} x \end{align*}
Let us now come back to system $\mathcal{T}$. The first two rewriting rules of system $\mathcal{T}$ model null cost production and cost sequencing, whereas the last rule allows us to measure differences between cost traces. Accordingly, variations of $\mathcal{T}$ are obtained by changing the way we measure cost differences. For instance, having in mind program refinement, one may want to replace the Euclidean distance with its asymmetric counterpart. Finally, we can combine systems $\mathcal{T}$ and $\bck$ --- or even $\BCK_{\BA}$ --- together, this way obtaining systems for the quantitative{} cost analysis of affine and probabilistic computations. We can then (and we will) prove confluence of the resulting systems \emph{compositionally} relying on the quantitative{} Hindley-Rosen lemma (\autoref{lemma:hindley-rosen}) and proving confluence of each system separately.
\subsection{Further Examples and Where to Find Them} The systems we have seen so far are just \emph{some} of the many examples of quantitative{} rewriting systems one can either find in the literature or design independently.
For instance, several quantitative{} equational theories of computational effects have been recently developed in addition to the ones introduced in this motivational section. Examples of those include quantitative{} nondeterminism (describing the Hausdorff distance between sets) \cite{plotkin-quantitative-algebras-2016}, global stores~\cite{bacci-mardare-panangaden-plotkin-2020}, and combined pure-probabilistic nondeterminism~\cite{DBLP:conf/lics/MioSV21}. All these theories can be analysed operationally as quantitative{} rewriting systems and combined with the systems introduced so far. A further source of examples follows the line of \autoref{sect:string-rewriting}, where we have provided operational descriptions of several edit distances (e.g. the Hamming and Levenshtein distance) on (DNA) strings. Indeed, quantitative{} rewriting systems are particularly well-suited to model edit distances --- not necessarily on strings --- operationally. The \emph{Encyclopedia of Distances} by \citet{encyclopedia-of-distances} is a great source of potential examples of (edit) distances that could be approached operationally. Potential applications of quantitative{} rewriting systems are given by optimisation theory~\cite{optimization}, where one naturally deals with weighted graphs and searches for optimal paths. By characterising such weighted graphs as the reduction graphs of quantitative{} rewriting systems, one may give a more symbolic account of optimisation. Numeric and approximated computation provide interesting examples of quantitative{} rewriting systems too. In fact, several computer algebra systems allow the user to combine symbolic and numeric computation.\footnote{See, for instance, the library \texttt{SymPy} (\url{https://www.sympy.org/en/index.html}).} It then seems natural to consider exact rewriting to model the symbolic part of a computation and quantitative{} rewriting to model the numeric one, as the latter naturally involves numerical approximations and precision errors. For instance, the numerical evaluation of the symbolic constant $\pi$ to, e.g., the numerical approximation $3.14$ could be modelled as the reduction $\pi \qreduce{\varepsilon} 3.14$, with $\varepsilon$ the error produced in the approximation. The reader should now be sufficiently familiar with examples of quantitative{} rewriting systems and basic ideas behind them. The rest of the paper is devoted to introducing the general theory of quantitative{} and metric rewriting in full detail, starting from abstract rewriting systems and then moving to quantitative{} term rewriting systems. In doing so, we also analyse the examples seen in this introductory section (as well as new ones) formally.
\section{PRELIMINARIES: QUANTITATIVE RELATIONAL CALCULUS, \textnormal{\emph{\`a la}} LAWVERE} \label{sect:quantales} We begin our analysis of quantitative{} rewriting by developing a theory of \emph{quantitative{} abstract rewriting systems}. To do so, we first recall some mathematical preliminaries.
\paragraph{Quantales} Traditional abstract rewriting systems can be naturally defined and studied \emph{relationally}. To define a theory of quantitative rewriting, it thus seems natural to rely on \emph{quantitative relational calculi}. Here, we follow the analysis of generalised metric spaces as enriched categories by \citet{Lawvere/GeneralizedMetricSpaces/1973} and work with relations taking values in a quantale \cite{Rosenthal/Quantales/1990}.
Quantale-valued relations are extensively used in monoidal topology \cite{Hoffman-Seal-Tholem/monoidal-topology/2014} and they have been successfully applied to define metric and quantitative{} semantics of higher-order languages~\cite{Gavazzo/LICS/2018,DBLP:phd/basesearch/Gavazzo19}, as well as behavioural metrics~\cite{Worrell-omega-categories,paul-wild-2022}. Let us begin by recalling the definition of a quantale, which we view as modelling abstract quantities.
\begin{definition} A (unital) quantale $\mathbb{\quantale} = (\Omega}%{\mathsfit{V}, \leq, \mathsfit{k}, \otimes)$ consists of a monoid $(\Omega}%{\mathsfit{V}, \mathsfit{k}, \otimes)$ and a sup-semilattice $(\Omega}%{\mathsfit{V}, \leq)$ satisfying the following distributivity laws: \begin{align*} \delta \otimes \bigvee_{i\in I} \varepsilon_i &= \bigvee_{i \in I} (\delta \otimes \varepsilon_i) \\ (\bigvee_{i \in I} \varepsilon_i) \otimes \delta &= \bigvee_{i \in I} (\varepsilon_i \otimes \delta). \end{align*} The element $\mathsfit{k}$ is called the unit of the quantale, whereas $\otimes$ is called the tensor (or multiplication) of the quantale. \end{definition}
It is easy to see that $\otimes$ is monotone in both arguments. We denote the top and bottom elements of a quantale by $\top$ and $\bot$, respectively. Quantales having unit $\mathsfit{k}$ coinciding with the top element are called \emph{integral} quantales. Moreover, we say that a quantale is commutative if its underlying monoid is, and that it is non-trivial if $\mathsfit{k} \neq \bot$. Integral quantales are particularly well-behaved: for instance, in an integral quantale $\varepsilon_1 \otimes \varepsilon_2$ is a lower bound of each $\varepsilon_i$.\footnote{ By monotonicity of $\otimes$, we have: $ \varepsilon_1 \otimes \varepsilon_2 \leq \varepsilon_i \otimes \top = \varepsilon_i \otimes \mathsfit{k} = \varepsilon_i. $ } Additionally, in an integral quantale we have $\varepsilon \otimes \bot = \bot$, for any $\varepsilon \in \Omega}%{\mathsfit{V}$. If the opposite direction holds, i.e. whenever $\varepsilon \otimes \delta = \bot$, either $\varepsilon = \bot$ or $\delta = \bot$ holds, we say that the quantale is \emph{cointegral}. From now on, we assume quantales to be commutative, (co)integral, and non-trivial. We refer to such quantales as \emph{Lawverian}. Finally, we say that a quantale is \emph{idempotent} if $\varepsilon \otimes \varepsilon = \varepsilon$. Notice that any quantale $(\Omega}%{\mathsfit{V}, \leq, \mathsfit{k}, \otimes)$ induces an idempotent quantale as $(\Omega}%{\mathsfit{V}, \leq, \top, \wedge)$ and that in any integral idempotent quantale $\wedge$ and $\otimes$ coincide.
\begin{example} \begin{enumerate} \item The \emph{Boolean quantale} $\mathbbm{2} = (\mathbb{2}, \leq, \wedge, \top)$, where $\mathbb{2} = \{\top, \bot\}$ and $\bot \leq \top$, is an idempotent Lawverian quantale. \item Any frame\footnote{ Recall that a frame \citep{Vickers/Topology-via-logic} consists of a sup lattice $(V, \leq, \bigvee)$ satisfying the following distributivity laws: \begin{align*} y\wedge \bigvee_{i\in I} x_i &= \bigvee_{i \in I} (y \wedge x_i) & (\bigvee_{i \in I} x_i) \wedge y &= \bigvee_{i \in I} (x_i \wedge y). \end{align*} A main concrete example of a frame is the structure $(\tau, \subseteq, \cap, X)$ given by the open sets $\tau$ of a topological space.} is an idempotent integral quantale. If the frame is cointegral, then we obtain a Lawverian quantale.
\item The \emph{Lawvere quantale} $\mathbb{L} = ([0, \infty], \geq, +,0)$ consisting of the extended real half-line ordered by the ``greater or equal'' relation $\geq$ and extended\footnote{We extend ordinary addition as follows: $x + \infty \triangleq \infty \triangleq \infty + x$.} addition as tensor product is a Lawverian quantale. Notice that we use the opposite of the natural ordering, so that, e.g., $0$ is the top element of $\mathbb{L}$. \item The \emph{Strong Lawvere quantale} $\lawvere^{\max}} %{\bm{\mathcal{S}}\bm{\mathcal{L}} = ([0,\infty], \geq, \max, 0)$ obtained by replacing addition with maximum in the Lawvere quantale is an idempotent Lawverian quantale. Notice that in the strong Lawvere quantale tensor and meet coincide, and thus the quantale is indeed idempotent. \item The unit interval $\mathbb{I} = ([0,1], \leq, *, 1)$ endowed with a left continuous \emph{triangular norm}~\cite{fuzzy-metamathematics} ($t$-norm for short)\footnote{Recall that a $t$-norm is a binary operator $*: [0,1] \times [0,1] \to [0,1]$ that induces a quantale structure over the complete lattice $([0,1], \leq)$ in such a way that the quantale is commutative.} $*$ is an integral quantale. Examples of $t$-norms are: \begin{enumerate} \item The \emph{product $t$-norm}: $x *_p y \triangleq x \cdot y$. \item The \emph{\L{}ukasiewicz $t$-norm}: $x *_l y \triangleq \max\{x + y - 1, 0\}$. \item The \emph{G\"{o}del $t$-norm}: $x *_g y \triangleq \min\{x,y\}$. \end{enumerate} If, additionally, $x * y = 0$ implies $x=0$ or $y=0$, then we obtain a Lawverian quantale. In particular, both the product and G\"{o}del $t$-norms give Lawverian quantales. Such quantales are used to model Fuzzy reasoning, and thus we refer to $\mathbb{I} = ([0,1], \leq, *,1)$ as the Fuzzy quantale(s). \item The set of \emph{monotone modal predicates} $2^W$ on a preorder monoid with top element $(W, \leq, +, 0, \top)$ of possible worlds, endowed with the tensor product defined below, is a Lawverian quantale. $$(p \otimes q)(w) \iff \exists u,v.\ w \geq u + v \wedge p(u) \wedge q(v).$$ Such a quantale is used to study modal and coeffectful properties of programs \cite{modal-reasoning-equal-metric-reasoning,DBLP:journals/pacmpl/LagoG22a}. \item If in the previous example of modal predicates we replace $2$ with a (Lawverian) fuzzy quantale $([0,1], \leq, *,1)$, then we obtain the Lawverian quantale of \emph{Fuzzy modal predicates}. \item The set $\mathbb{F} \triangleq \{f \in [0,1]^{[0,\infty]} \mid f \text{ monotone and } f(a) = \bigvee_{b < a} f(b)\}$ used by \citet{HOFMANN20131} to model probabilistic metric spaces is a quantale. Notice that elements of $\mathbb{F}$ can be seen as Fuzzy modal predicates over the set of possible worlds $[0,\infty]$. \end{enumerate} \end{example}
To help the reader working with quantales, we summarise the correspondence between the Boolean ($\mathbbm{2}$), Lawvere ($\mathbb{L}$), and Strong Lawvere ($\lawvere^{\max}} %{\bm{\mathcal{S}}\bm{\mathcal{L}}$) quantales --- our main running examples --- as well as a generic quantale $\mathbb{\quantale} = (\Omega}%{\mathsfit{V}, \leq, \mathsfit{k}, \otimes)$, in \autoref{table:correspondence-quantale}.
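To make this correspondence more tangible, the following Python sketch --- ours, purely illustrative, with field names chosen by us and carrying no formal status --- packages the Boolean, Lawvere, and strong Lawvere quantales as small records exposing order, joins, unit, and tensor.
\begin{verbatim}
# A minimal sketch (ours) of our three running quantales as Python records.
from dataclasses import dataclass
from typing import Any, Callable

INF = float("inf")

@dataclass(frozen=True)
class Quantale:
    leq: Callable[[Any, Any], bool]      # the order of the quantale
    join: Callable[[list], Any]          # (finite) joins; the empty join is bottom
    unit: Any                            # the unit k
    tensor: Callable[[Any, Any], Any]    # the tensor product

# Boolean quantale: order is implication, unit is True, tensor is conjunction.
BOOLE = Quantale(leq=lambda x, y: (not x) or y, join=any,
                 unit=True, tensor=lambda x, y: x and y)

# Lawvere quantale: the order is reversed, so joins are infima and bottom is oo.
LAWVERE = Quantale(leq=lambda x, y: x >= y,
                   join=lambda xs: min(xs, default=INF),
                   unit=0.0, tensor=lambda x, y: x + y)

# Strong Lawvere quantale: as above, with maximum as tensor (hence idempotent).
STRONG_LAWVERE = Quantale(leq=lambda x, y: x >= y,
                          join=lambda xs: min(xs, default=INF),
                          unit=0.0, tensor=max)
\end{verbatim}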
\begin{table*}[htbp] \centering \begin{tabular}{|c|c|c|c|c|} \hline & $\mathbbm{2}$ (Boolean) & $\mathbb{L}$ (Lawvere) & $\lawvere^{\max}} %{\bm{\mathcal{S}}\bm{\mathcal{L}}$ (strong Lawvere) & $\mathbb{\quantale}$ (quantale) \\ \hline Carrier & $\mathbb{2} $ & $[0,\infty]$ & $[0,\infty]$ & $\Omega}%{\mathsfit{V}$ \\ Order & $\leq$ & $\geq$ & $\geq$ & $\leq$ \\ Join & $\exists$ & $\inf$ & $\inf$ & $\bigvee$ \\ Meet & $\forall$ &$\sup$ &$\sup$ & $\bigwedge$ \\ Tensor & $\wedge$ & $+$ & $\max$ & $\otimes$ \\ Unit & $\top$ & $0$ & $0$ & $\mathsfit{k}$ \\ \hline \end{tabular} \caption{Correspondence $\mathbb{2}$-$[0,\infty]$-$\Omega}%{\mathsfit{V}$.} \label{table:correspondence-quantale} \end{table*}
Since any quantale is, in particular, a complete lattice and tensor product is monotone in both arguments, the latter has both left and right adjoints (which coincide in our case, as we assume $\otimes$ to be commutative): $$ \varepsilon \otimes \delta \leq \eta \iff \delta \leq \varepsilon \multimap \eta. $$ Explicitly, we have $\varepsilon \multimap \delta \triangleq \bigvee \{\eta \mid \varepsilon \otimes \eta \leq \delta\}$. For instance, in the Boolean quantale $\multimap$ is ordinary implication, whereas in the Lawvere quantale $\multimap$ is truncated subtraction. This is all the reader has to know about quantales to understand quantitative{} \emph{abstract} rewriting systems. When it comes to moving to quantitative{} \emph{term} rewriting systems, a few more notions about (and a few more conditions on) quantales are needed. In particular, the addition of structural rules akin to quantitative{} equational theories requires us to work with \emph{continuous} quantales~\cite{continuous-lattices-and-domains} (but see \autoref{rem:structural-rules}).
\begin{definition} Given a quantale $\mathbb{\quantale}$ and elements $\varepsilon, \delta \in \Omega}%{\mathsfit{V}$, the \emph{way-below} relation $\ll$ is defined thus: $\delta \ll \varepsilon$ if and only if for every subset $A \subseteq \Omega}%{\mathsfit{V}$, whenever $\varepsilon \leq \bigvee A$, there exists a finite subset $A_0 \subseteq A$ such that $\delta \leq \bigvee A_0$. We say that $\mathbb{\quantale}$ is \emph{continuous} if and only if, for every $\varepsilon \in \Omega}%{\mathsfit{V}$, $$ \varepsilon = \bigvee_{\delta \ll \varepsilon} \delta. $$ \end{definition}
\begin{example} The Boolean quantale $\mathbbm{2}$, being finite, is trivially continuous. Both the Lawvere and the strong Lawvere quantales are continuous with $>$ (i.e. greater than) as the way-below relation. There, we extend $>$ to $[0,\infty]$ by stipulating $\infty > \infty$. In the same way, one obtains continuity of Fuzzy quantales. There, the way-below relation is given by $<$ (i.e. less than) extended by stipulating $0 < 0$. \end{example}
\paragraph{Quantale-valued Relations} We now move to quantale-valued relations, our main tool to model quantitative{} rewriting. As quantales model abstract quantities, quantale-valued relations provide abstract notions of distances.
\begin{definition} Given a quantale $\mathbb{\quantale} = (\Omega}%{\mathsfit{V}, \leq, \mathsfit{k}, \otimes)$, a $\mathbb{\quantale}$-relation $R: A \tobar B$ between sets $A$ and $B$ is a function $R: A \times B \to \Omega}%{\mathsfit{V}$. For any set $A$, we define the identity (or diagonal) $\mathbb{\quantale}$-relation $\Delta_{A} : A \tobar A$ mapping diagonal elements $(a,a)$ to $\mathsfit{k}$, and all other elements to $\bot$.
Moreover, the composition $R; S: A \tobar C$ of $\mathbb{\quantale}$-relations $R: A \tobar B$ and $S: B \tobar C$ is defined by the so-called matrix multiplication formula \cite{Hoffman-Seal-Tholem/monoidal-topology/2014}: $$ (R;S)(a,c) \triangleq \bigvee_{b \in B} R (a,b) \otimes S (b,c). $$ \end{definition}
In general, we think about a $\mathbb{\quantale}$-relation as giving the distance or the degree of relatedness of two elements~\cite{Flagg-1992,Flagg-1997,Flagg-1997-b,Hoffman-Seal-Tholem/monoidal-topology/2014}. For instance, when the quantale is Boolean, elements are either related or not, whereas for Fuzzy quantales $\mathbb{\quantale}$-relations coincide with Fuzzy relations~\cite{Fuzzy-relational-systems}, and thus they give the degree to which elements are related, as well as proximity and similarity relations. When we move to the Lawvere quantale (and quantales alike), $\mathbb{\quantale}$-relations give general notions of distances~\cite{Lawvere/GeneralizedMetricSpaces/1973}, and thus act as a foundation for metric reasoning~\cite{BonsangueBreguelRutten/GeneralisedMetricSpaces/1998}. Coarser forms of metric reasoning are obtained by considering interval-based~\cite{Geoffroy-Pistone-2021} and probabilistic quantales~\cite{HOFMANN20131}, where instead of establishing the distance between elements exactly, one obtains only an interval to which such a distance belongs, or a probability of the accuracy of its measurement. Finally, considering the quantale of (fuzzy) modal predicates, we obtain (fuzzy) modal and coeffectful relations~\cite{DBLP:journals/pacmpl/LagoG22a,Routley-1972-II,Routley-1972-III,Routley-1973,Urquhart-1972}, whereby (the degree of) relatedness of elements is given with respect to a possible world (such as the available resources).
\begin{example} We summarise composition on the Boolean, Lawvere, and Strong Lawvere quantale in \autoref{figure:composition-boolean-lawevere-stronglawvere}. \end{example}
\begin{table*}[htbp] \centering \begin{tabular}{|c|c|c|c|} \hline & $\mathbbm{2}$ & $\mathbb{L}$ & $\lawvere^{\max}} %{\bm{\mathcal{S}}\bm{\mathcal{L}}$ \\ \hline $(R; S)(a,c)$ & $ \exists b.\ R (a,b) \wedge S (b,c)$ & $\inf_b R (a,b) + S (b,c)$ & $\inf_b \max(R (a,b), S (b,c))$ \\ \hline \end{tabular} \caption{Composition on $\mathbbm{2}$, $\mathbb{L}$, and $\lawvere^{\max}} %{\bm{\mathcal{S}}\bm{\mathcal{L}}$} \label{figure:composition-boolean-lawevere-stronglawvere} \end{table*}
Since $\mathbb{\quantale}$-relation composition is associative and has $\Delta$ as unit element, for any quantale $\mathbb{\quantale}$ we have a category, denoted by $\Vrel{\mathbb{\quantale}}$, with sets as objects and $\mathbb{\quantale}$-relations as arrows. Moreover, the complete lattice structure of $\mathbb{\quantale}$ lifts to $\mathbb{\quantale}$-relations pointwise, so that we can say that a $\mathbb{\quantale}$-relation $R: A \tobar A$ is \emph{reflexive} if $\Delta \leq R$; \emph{transitive} if $R; R \leq R$; and \emph{symmetric} if $\dual{R} \leq R$, where the transpose of $R: A \tobar B$ is the $\mathbb{\quantale}$-relation $\dual{R}: B \tobar A$ defined by $\dual{R}(b,a) \triangleq R(a,b)$. When read pointwise, reflexivity, transitivity, and symmetry give the following inequalities: \begin{align*} \mathsfit{k} &\leq R(a,a) \\ R(a,b) \otimes R(b,c) &\leq R(a,c) \\ R(a,b) &\leq R(b,a). \end{align*} Altogether, we obtain the notion of a preorder (i.e. reflexive and transitive) and equivalence (i.e. reflexive, transitive, and symmetric) $\mathbb{\quantale}$-relation.
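Before fixing some notation, the following Python sketch --- ours and purely illustrative, with all names being assumptions made for the example --- spells out the matrix multiplication formula for $\mathbb{L}$-relations on a finite carrier, together with a check of transitivity, i.e. of the triangle inequality.
\begin{verbatim}
# A small sketch (ours): finite Lawvere-quantale relations as dictionaries
# mapping pairs to distances in [0, oo]; missing pairs are read as oo (bottom).
INF = float("inf")

def compose(R, S, carrier):
    """(R;S)(a,c) = inf_b ( R(a,b) + S(b,c) ): the matrix multiplication formula."""
    return {(a, c): min((R.get((a, b), INF) + S.get((b, c), INF)
                         for b in carrier), default=INF)
            for a in carrier for c in carrier}

def is_transitive(R, carrier):
    """R;R <= R: in the (reversed) order of the Lawvere quantale this is the
    triangle inequality (R;R)(a,c) >= R(a,c) on the reals."""
    RR = compose(R, R, carrier)
    return all(RR[(a, c)] >= R.get((a, c), INF)
               for a in carrier for c in carrier)

carrier = ["a", "b", "c"]
d = {("a", "a"): 0, ("b", "b"): 0, ("c", "c"): 0,
     ("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 3}
print(is_transitive(d, carrier))   # True: d satisfies the triangle inequality
\end{verbatim}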
\begin{notation} Given a quantale $\mathbb{\Theta}$, we oftentimes refer to $\mathbb{\quantale}$-relations on $\mathbb{\Theta}$ as $\mathbb{\Theta}$-relations. Thus, for example, $\mathbb{L}$-relations are just $\mathbb{\quantale}$-relations on the Lawvere quantale $\mathbb{L}$. \end{notation}
\begin{example} \begin{enumerate} \item On the Boolean quantale, $\mathbbm{2}$-relations are ordinary (binary) relations, and preorder and equivalence $\mathbbm{2}$-relations coincide with traditional preorders and equivalences. \item On the Lawvere quantale, $\mathbb{L}$-relations are distances. Instantiating transitivity on $\mathbb{L}$, we obtain the usual \emph{triangle inequality} formula: $$ \inf_{b} R(a,b) + R(b,c) \geq R(a,c) $$ Similarly, reflexivity gives the null self-distance inequality: $$ 0 \geq R(a, a). $$ Altogether, we see that preorder $\mathbb{L}$-relations coincide with \emph{generalised metrics} \cite{Lawvere/GeneralizedMetricSpaces/1973,BonsangueBreguelRutten/GeneralisedMetricSpaces/1998} and equivalence $\mathbb{L}$-relations with \emph{pseudometrics} \cite{steen/CounterexamplesTopology/1995}. \item Moving from the Lawvere to the Strong Lawvere quantale, we replace addition with binary maximum, so that transitivity now gives the \emph{strong triangle inequality} formula: $$ \inf_{b} \max(R(a,b), R(b,c)) \geq R(a,c) $$ Consequently, equivalence $\lawvere^{\max}} %{\bm{\mathcal{S}}\bm{\mathcal{L}}$-relations coincide with \emph{ultra-pseudometrics}. \item On the quantale $\mathbb{F}$, equivalence $\mathbb{F}$-relations give \emph{probabilistic metric spaces} \cite{HOFMANN20131}. The informal reading of an $\mathbb{F}$-relation $R$ is that $R(a,b)(\varepsilon)$ gives the probability that $a$ and $b$ are at most $\varepsilon$-far. \item On the unit interval (fuzzy) quantale(s), $\mathbb{I}$-relations coincide with fuzzy relations \cite{Fuzzy-relational-systems}. Equivalence $\mathbb{I}$-relations are often called similarity or proximity relations. \end{enumerate} \end{example}
We summarise how reflexivity, symmetry, and transitivity instantiate on the Boolean, Lawvere, and Strong Lawvere quantale in \autoref{figure:correspondence-reflexivity-symmetry-transitivity}.
\begin{table*}[htbp] \centering \begin{tabular}{|c|c|c|} \hline $\mathbbm{2}$ & $\mathbb{L}$ & $\lawvere^{\max}} %{\bm{\mathcal{S}}\bm{\mathcal{L}}$ \\ \hline $\top \leq R(a,a) $ & $0 \geq R(a,a)$ & $0 \geq R(a,a)$ \\ $R(a,b) \leq R(b,a) $ & $R(a,b) \geq R(b,a)$ & $R(a,b) \geq R(b,a)$ \\ $R(a,b) \wedge R(b,c) \leq R(a,c)$ & $R(a,b) + R(b,c) \geq R(a,c)$ & $\max(R(a,b), R(b,c)) \geq R(a,c)$ \\ \hline \end{tabular} \caption{Correspondences reflexivity-symmetry-transitivity.} \label{figure:correspondence-reflexivity-symmetry-transitivity} \end{table*}
Finally, we notice that the ``algebra'' of $\mathbb{\quantale}$-relations is close to that of ordinary relations,\footnote{As the category of traditional relations, the category $\Vrel{\mathbb{\quantale}}$ is a \emph{quantaloid}~\cite{Hoffman-Seal-Tholem/monoidal-topology/2014,introduction-to-quantaloids}. } so that we can refine a large part of calculi of relations \cite{relational-mathematics} to a quantale-based setting. In fact, we can even think about $\mathbb{\quantale}$-relations as ``monoidal relations''. Since many notions of traditional rewriting can be given in purely relational terms, we can take advantage of that and rephrase them in terms of $\mathbb{\quantale}$-relations. For the moment, we simply recall the following useful closure operations.
\begin{definition} Let $R: A \tobar A$ be a $\mathbb{\quantale}$-relation. For $n \in \mathbb{N}$, we define the $n$-th iterate of $R$, notation $R^n$, by $R^{0} \triangleq \Delta$ and $R^{n+1} \triangleq R;R^{n}$. We define: \begin{enumerate} \item The \emph{reflexive closure} of $R$ as $\reflex{R} \triangleq R \vee \Delta$. \item The \emph{transitive and reflexive closure} of $R$ as ${R}^{*} \triangleq \bigvee_{n\geq 0} R^{n}$. \item The \emph{equivalence closure} of $R$ as $\makedistance{R} \triangleq (R \vee \dual{R})^*$. \end{enumerate} \end{definition}
As already remarked, our approach to quantitative{} abstract rewriting systems will be algebraic and relational. Accordingly, we shall prove several nontrivial rewriting properties relying on the algebra of $\mathbb{\quantale}$-relations. To do so, it is useful to exploit fixed point characterisations of relational constructions, as well as their adjunction properties~\cite{Backhouse-fixed-point-and-galois-connection,algebra-of-programming}. Recall that $\Vrel{\mathbb{\quantale}}(A,B)$ carries a complete lattice structure, so that any monotone map $F: \Vrel{\mathbb{\quantale}}(A,B) \to \Vrel{\mathbb{\quantale}}(A,B)$ has least and greatest fixed points, denoted by $\mu X.F(X)$ and $\nu X.F(X)$, respectively. Consequently, we can define $\mathbb{\quantale}$-relations both \emph{inductively} and \emph{coinductively}. That gives us the following (fixed point) induction and (fixed point) coinduction proof principles: \[ \infer{\mu X. F(X) \leq R}{F(R) \leq R} \qquad \infer{R \leq \nu X. F(X)}{R \leq F(R)} \] In particular, we notice that $R^*$ is the least solution to the equation $ X = \Delta \vee R;X, $ so that $R^*$ can be equivalently defined as the least fixed point $\mu X.\Delta \vee R;X$, and thus as the least pre-fixed point of the map $F(X) \triangleq \Delta \vee R;X$. Consequently, we obtain the following least fixed point induction rule: \[ \infer{R^* \leq S}{\Delta \vee R;S \leq S} \]
\begin{notation} We denote by $\Bot$ the $\mathbb{\quantale}$-relation $\mu X.X$ assigning distance $\bot$ to all elements, and by $\qtop$ the $\mathbb{\quantale}$-relation $\nu X.X$, i.e. the indiscrete $\mathbb{\quantale}$-relation assigning distance $\mathsfit{k}$ to all elements. Explicitly, we have $\Bot(a, b) = \bot$ and $\qtop(a, b) = \mathsfit{k}$, for all $a, b$. \end{notation}
Finally, we mention that, $\Vrel{\mathbb{\quantale}}(A,A)$ being not only a complete lattice but also a quantale, $\mathbb{\quantale}$-relation composition has both left and right adjoints, often referred to as left and right division \cite{algebra-of-programming}: $$ R; S \leq P \iff S \leq R \setminus P \qquad R; S \leq P \iff R \leq P / S. $$
\paragraph{Ternary Relations} Even if we model quantitative{} rewriting relations as $\mathbb{\quantale}$-relations, in \autoref{section:long-intro} we have defined rewriting systems by means of suitable ternary relations from which we have then extracted $\mathbb{\quantale}$-relations. This process, known as \emph{strata extension} \cite{Hoffman-Seal-Tholem/monoidal-topology/2014}, is an instance of a more general correspondence~\cite{modal-reasoning-equal-metric-reasoning} between abstract distances and suitable ternary relations akin to substructural Kripke relations~\cite{Routley-1972-II,Routley-1972-III,Routley-1973,Urquhart-1972} as used in the relational analysis of coeffects~\cite{DBLP:journals/pacmpl/LagoG22a, DBLP:journals/pacmpl/AbelB20}.
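As a purely informal illustration of this extraction process (ours; all names are assumptions made for the example), the following Python sketch turns a finite set of triples $(a, \varepsilon, b)$ --- read as ``$a$ rewrites to $b$ at distance at most $\varepsilon$'' --- into the induced $\mathbb{L}$-relation, by taking the join, i.e. the infimum, of the witnessed distances.
\begin{verbatim}
# An informal sketch (ours) of the extraction of a Lawvere-quantale relation
# from a ternary rewriting relation: the induced distance between a and b is
# the join -- on the Lawvere quantale, the infimum -- of all witnessed eps.
INF = float("inf")

def extract(triples):
    dist = {}
    for a, eps, b in triples:
        dist[(a, b)] = min(dist.get((a, b), INF), eps)
    return lambda a, b: dist.get((a, b), INF)

step = extract([("t", 1.0, "s"), ("t", 0.5, "s"), ("s", 2.0, "u")])
print(step("t", "s"), step("s", "u"), step("t", "u"))   # 0.5 2.0 inf
\end{verbatim}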
Since we will extensively switch between $\mathbb{\quantale}$-relations and ternary relations, we recall the notion of a $\mathbb{\quantale}$-ternary relation. { \renewcommand{R}{R} \newcommand{\reltodist}[1]{#1^{\bullet}} \newcommand{\disttorel}[1]{#1^{\circ}}
\begin{definition} Given a quantale $\mathbb{\quantale}$, a \emph{$\mathbb{\quantale}$-ternary relation} over $A \times B$ is a ternary relation $R \subseteq A \times \Omega}%{\mathsfit{V} \times B$ antitone in its second argument (meaning that $R(a,\varepsilon,b)$ implies $R(a,\delta,b)$, for any $\delta \leq \varepsilon$). \end{definition}
Any ternary $\mathbb{\quantale}$-relation $R$ induces a $\mathbb{\quantale}$-relation $\reltodist{R}$ thus: $$ \reltodist{R}(a,b) \triangleq \bigvee_{R(a,\varepsilon,b)} \varepsilon. $$ Vice versa, any $\mathbb{\quantale}$-relation $R$ induces a $\mathbb{\quantale}$-ternary relation $\disttorel{R}$ defined by $$ \disttorel{R}(a,\varepsilon,b) \iff \varepsilon \leq R(a,b). $$ These two processes are mutually inverse, meaning that $R^{\circ\bullet} = R$ and $R^{\bullet\circ} = R$, so that we can freely switch between $\mathbb{\quantale}$-ternary relations and $\mathbb{\quantale}$-relations.
\begin{notation} Oftentimes, we will use modal relations to define rewriting systems. In those cases, we will use notations of the form $\to_{R}$ and write $\mrel{\varepsilon}{a}{\to_{R}}{b}$ in place of $\to_{R}(a,\varepsilon,b)$. Moreover, we shall denote by $R$ the $\mathbb{\quantale}$-relation associated to $\to_{R}$. That is, $R(a,b) \triangleq \bigvee_{\mrel{\varepsilon}{a}{\to_{R}}{b}}\varepsilon$. \end{notation}
\begin{remark} To make definitions computationally lighter, the literature on quantitative{} algebraic theories usually considers ternary relations over a \emph{base} of $\Omega}%{\mathsfit{V}$, the carrier of such a base usually being considerably smaller than $\Omega}%{\mathsfit{V}$. For instance, quantitative{} equational theories are often defined using ternary relations over non-negative rationals, the latter being a base for $[0,\infty]$. Since our results are independent of working with bases or with full quantales, we keep the necessary mathematical preliminaries as minimal as possible, this way defining (in \autoref{sect:qtrs}) quantitative{} term rewriting systems relying on quantales rather than on their bases (the interested reader can consult the recent work by \citet{An-Internal-Language-for-Categories-Enriched-over-Generalised-Metric-Spaces} to convince herself that our theory is invariant with respect to such a design choice). \end{remark} }
\section{QUANTITATIVE ABSTRACT REWRITING SYSTEMS} \label{section:qars} In this section, we introduce \emph{quantitative{} abstract rewriting systems} and their theory. These systems constitute the foundation of quantitative{} rewriting, and all other notions of quantitative{} rewriting system, such as string- and term-based systems, can be ultimately regarded as quantitative{} abstract rewriting systems. Moreover, we shall use the latter to define crucial notions and properties of rewriting, such as confluence and termination, that we will later specialise to term-based systems. Throughout this and later sections, we fix a (Lawverian) quantale $\mathbb{\quantale} = (\Omega}%{\mathsfit{V}, \leq, \mathsfit{k}, \otimes)$.
\begin{definition} \label{def:qars} A \emph{Quantitative Abstract Rewriting System} ($\Quantale$-ARS, for short) is a pair $(A, R: A \tobar A)$.
\end{definition}
\autoref{def:qars} is extremely simple: as a traditional abstract rewriting system is defined as a set of objects together with a binary (rewriting) relation on it, a $\Quantale$-ARS{} is defined as a set of objects together with a binary (rewriting) $\mathbb{\quantale}$-relation on it. Given elements $a,b \in A$, we say that $a$ rewrites into (or reduces to) $b$ if $R(a,b) \neq \bot$: in that case, we say that the (rewriting) distance or difference between $a$ and $b$ is $R(a,b)$. Further possible informal readings (possibly depending on the quantale considered) refer to $R(a,b)$ as the \emph{degree} of the reduction, the \emph{cost} of the reduction, or as the \emph{resource} required for the reduction.\footnote{This is the case, in particular, for quantales of modal predicates, where rewriting is ultimately performed in a possible world describing intensional aspects of the rewriting process, such as the available resource.} Rewriting paths are obtained by iterating $R$. In particular, we say that: \begin{enumerate} \item $a$ reduces to $b$ in finitely many steps if $R^*(a,b) \neq \bot$; \item $a$ reduces to $b$ in one or zero steps if $\reflex{R}(a,b) \neq \bot$; \item $a$ is convertible with $b$ if $\makedistance{R}(a,b) \neq \bot$. \end{enumerate} Notice that given a $\Quantale$-ARS{} $(A, R: A \tobar A)$, the convertibility $\mathbb{\quantale}$-relation $\makedistance{R}$ generated by $R$ is a $\mathbb{\quantale}$-equivalence and thus endows $A$ with a metric-like structure. Sometimes, we will need to explicitly consider reduction sequences. We thus say that a finite sequence $(a_0, \hdots, a_n)$ is an $R$-reduction sequence if $$ R(a_0,a_1) \otimes \cdots \otimes R(a_{n-1},a_n) \neq \bot $$ and that an infinite sequence $(a_0, \hdots, a_n, \hdots)$ is an $R$-reduction sequence if $R(a_0,a_1) \otimes \cdots \otimes R(a_{n-1},a_n) \neq \bot$, for any $n \geq 0$. Every reduction sequence\footnote{If the underlying $\Quantale$-ARS{} $(A, R)$ is clear from the context, we simply refer to \emph{reduction sequences} for $R$-reduction sequences.} has a first element: if a reduction sequence starts from $a$, then we refer to it as a reduction sequence of $a$. Notice that since $\mathbb{\quantale}$ is Lawverian, for any reduction sequence $(a_0, \hdots, a_n)$, we have $R(a_i, a_{i+1}) \neq \bot$, for any $i$.\footnote{In case the underlying quantale is not Lawverian, then one should take this condition as part of the definition of a reduction sequence.}
\subsection{Confluence} Given a set $A$ of objects together with an equivalence $\equiv$ on it, traditional rewriting systems are often introduced as ways to give computational content to $\equiv$. Accordingly, one considers a rewriting relation ${\to} \subseteq A \times A$ on $A$ such that $\to$-convertibility coincides with $\equiv$. At this point, properties of $\to$ are proved so as to ensure that $\equiv$ is computationally well-behaved. Among those, the so-called Church-Rosser property states that whenever $a \equiv b$, there exists an object $c$ such that both $a$ and $b$ can be reduced to $c$ in a finite number of steps. Formally, ${\equiv}$ coincides with $\to^*; \prescript{*}{}{\leftarrow}$ (where $\leftarrow$ stands for $\dual{\to}$). The Church-Rosser property thus implies that to study $\equiv$ it is enough to study directional rewriting.
Moreover, if $\equiv$ is defined axiomatically, then the Church-Rosser property gives a powerful tool to test consistency of $\equiv$: if $a$ and $b$ have no common reduct, then they cannot be equivalent. In a quantitative{} setting, the relation $\equiv$ is replaced by a $\mathbb{\quantale}$-equivalence $E$, and the rewriting relation $\to$ is replaced by a $\mathbb{\quantale}$-rewriting relation $R$ such that $\makedistance{R} = E$. One then looks for properties of $R$ ensuring $E$ to be computationally well-behaved. In this section, we explore some of these properties, viz. \emph{(quantitative) confluence}, \emph{(metric) Church-Rosser}, and \emph{termination}. We begin with confluence, which states that two reductions originating from the same element can be joined into a common element, as in the classical case, but with the additional property that the merging reduction is achieved without increasing distances.
\begin{definition} \label{def:commutation} Let $R, S: A \tobar B$ be $\mathbb{\quantale}$-relations. \begin{enumerate} \item We say that $R$ \emph{commutes} with $S$ if $\dual{R}; S \leq S; \dual{R}$. \item We say that $R$ satisfies the \emph{diamond property} if $R$ commutes with itself, i.e. $\dual{R}; R \leq R; \dual{R}$. \end{enumerate} \end{definition}
Let us comment on \autoref{def:commutation} by analysing the diamond property. On the Boolean quantale, we recover the usual diamond property as defined for traditional rewriting systems. More interesting is the case of the Lawvere quantale, which we use as a vehicle to move to the general case. Pointwise, the diamond property reads as follows: $$ \inf_c R(c,a) + R(c,b) \geq \inf_d R(a,d) + R(b,d). $$ The left-hand-side of the inequality, namely $\inf_c R(c,a) + R(c,b)$, gives the minimal \emph{peak distance} between $a$ and $b$, that is the shortest connection between $a$ and $b$ obtained through a peak $c$ reducing to both $a$ and $b$. The right-hand-side, instead, gives the minimal \emph{valley distance} between $a$ and $b$. Let us say that $c$ is a peak over $a$ and $b$ if $R(c,a) + R(c,b) \neq \infty$, so that neither $R(c,a)$ nor $R(c,b)$ is $\infty$.\footnote{In the general case, we say that $c$ is a peak over $a$ and $b$ if $R(c,a) \otimes R(c,b) \neq \bot$, so that $R(c,a) \neq \bot$ and $R(c,b) \neq \bot$ follow since the quantale is Lawverian.} The diamond property ensures that whenever we have a peak $c$ over $a$ and $b$, then we also have a collection of valleys under $a$ and $b$, i.e. elements $d$ such that $R(a,d) + R(b,d) \neq \infty$, such that the infimum over such valleys is smaller than or equal to $R(c,a) + R(c,b)$. In fact, the diamond property gives: $$ \infty > R(c,a) + R(c,b) \geq \inf_{c} R(c,a) + R(c,b) \geq \inf_d R(a,d) + R(b,d). $$ And if there is no element $d$ such that $R(a,d) + R(b,d) \neq \infty$, then $ \inf_{d} R(a,d) + R(b,d) = \infty$, which gives a contradiction. Notice, however, that there is no guarantee that there is an actual valley $d$ such that $ R(c,a) + R(c,b) \geq R(a,d) + R(b,d). $
\begin{example} \label{ex:example-4} Consider the Lawvere quantale and the \makears{$\mathbb{L}$} over the set $A \triangleq \mathbb{R}^+ \cup \{a, b_1, b_2\}$ (where $\mathbb{R}^+$ denotes the set of strictly positive reals) with $R(a, b_1) \triangleq R(a, b_2) \triangleq 0$, $R(b_1, \varepsilon) \triangleq R(b_2, \varepsilon) \triangleq \frac{\varepsilon}{2}$, for each $\varepsilon \in \mathbb{R}^+$, and $R(x,y) \triangleq \infty$ otherwise.
\[ \xymatrix@-0.5pc{ & a \ar[ld]_0 \ar[rd]^0 & \\ b_1 \ar@/_/[rd]_{\frac{\varepsilon_i}{2}} & & b_2\ar@/^/[ld]^{\frac{\varepsilon_i}{2}} \\ & \text{$\begin{matrix} \vdots \\ \varepsilon_i \\ \vdots \end{matrix}$} & } \]
Then, there is no $c$ such that $R(b_1, c) + R(b_2, c) = 0$, although $\inf_{\varepsilon} R(b_1, \varepsilon) + R(b_2, \varepsilon) = \inf_{\varepsilon > 0} \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = 0$. \end{example}
In the general setting of an arbitrary quantale $\mathbb{\quantale}$, we see that the diamond property has the following pointwise reading: $$ \bigvee_c R(c,a) \otimes R(c,b) \leq \bigvee_d R(a,d) \otimes R(b,d). $$ The abstract formulation suggests further non-distance-based readings of the diamond property (and properties alike); and among those, notable ones are obtained in terms of \emph{graded properties} and \emph{degrees of reduction}. Accordingly, we read $\bigvee_c R(c,a) \otimes R(c,b)$ as the \emph{divergence degree} of $a$ and $b$, and $\bigvee_d R(a,d) \otimes R(b,d)$ as the \emph{convergence degree} of $a$ and $b$. The diamond property then states that the divergence degree between any two elements is always smaller than or equal to their convergence degree; that is, the system tends more to converge than to diverge. Instantiating $\mathbb{\quantale}$ with the Boolean quantale (and thus recovering the traditional diamond property and properties alike), we stipulate degrees of convergence and divergence to be absolute values. On the other hand, taking the unit interval quantale, we let convergence and divergence be \emph{fuzzy} notions. Finally, we mention that we also have a \emph{modal} and \emph{coeffectful} reading of the diamond property along the lines of coeffectful relational calculi~\cite{modal-reasoning-equal-metric-reasoning,DBLP:journals/pacmpl/LagoG22a}. Accordingly, we parametrise the latter property with respect to possible worlds (such as information states, security levels, available resources, etc.), this way obtaining a local and (more) intensional view of rewriting. We summarise the pointwise reading of commutativity and of the diamond property in \autoref{figure:commutativity-and-diamond-property}.
\begin{table*}[htbp] \begin{tcolorbox}[boxrule=0.5pt,width=\linewidth,colframe=black,colback=black!0!white,arc=0mm] \centering \begin{tabular}{cc} Commutation & Diamond Property \\ $$ \xymatrix@ -0.8pc{ & c \ar[ld]_{R} \ar[rd]^{S} & \\ a \ar@{.>}[rd]_{S} & & b \ar@{.>}[ld]^{R} \\ & d & } $$ & $$ \xymatrix@ -0.8pc{ & c \ar[ld]_{R} \ar[rd]^{R} & \\ a \ar@{.>}[rd]_{R} & & b \ar@{.>}[ld]^{R} \\ & d & } $$ \\ {$\displaystyle \bigvee_c R(c,a) \otimes S(c,b) \leq \bigvee_d S(a,d) \otimes R(b,d)$} & {$\displaystyle \bigvee_c R(c,a) \otimes R(c,b) \leq \bigvee_d R(a,d) \otimes R(b,d)$} \\ \end{tabular} \end{tcolorbox} \caption{Commutativity and the Diamond Property} \label{figure:commutativity-and-diamond-property} \end{table*}
\begin{remark} \label{remark:graded-properties} We have seen that both confluence and the diamond property involve \emph{graded properties} --- namely degrees of divergence and convergence --- i.e. non-Boolean properties taking values in a quantale. Nonetheless, both confluence and the diamond property are \emph{Boolean} properties of $\mathbb{\quantale}$-relations, as they are essentially of the form $\varepsilon \leq \delta$. It is natural to push the quantitative{} perspective one step further and consider a \emph{graded} version of, e.g., commutation.
In fact, by exploiting the adjunction property of $\mathbb{\quantale}$-relation composition, we see that requiring $R$ to commute with $S$, i.e. $\dual{R}; S \leq S; \dual{R}$, means requiring $$ \Delta \leq \dual{R}{\setminus}(S; \dual{R}) / S. $$ Forgetting about $\Delta$, we can think about the $\mathbb{\quantale}$-relation $\dual{R}{\setminus}(S; \dual{R}) / S$ as assigning to elements the degree of commutation of $R$ and $S$ on them, i.e. their divergence-convergence distance. Pointwise, we thus obtain: $$ (\dual{R}{\setminus}(S; \dual{R}) / S)(a, b) = \bigvee_c R(c,a) \otimes S(c,b) \multimap \bigvee_d S(a,d) \otimes R(b,d). $$ Notice that the latter is an element of $\Omega}%{\mathsfit{V}$ rather than a (Boolean) truth value, and thus it indicates \emph{how much} $R$ commutes with $S$. For instance, on the Lawvere quantale, $(\dual{R}{\setminus}(R; \dual{R}) / R)(a, b)$ gives the difference between the divergence and convergence distance on $a$ and $b$, and thus a measure of \emph{how much} $R$ has the diamond property on $a$ and $b$. Using the vocabulary of (enriched) category theory \cite{Kelly/EnrichedCats,Lawvere/GeneralizedMetricSpaces/1973}, one may say that enrichment is given not only at the level of relations, but also at the level of equality and refinement of relations (that is, not only do relations $R$ take values in $\Omega}%{\mathsfit{V}$, but statements such as $R = S$ and $R \leq S$ do too). We leave the exploration of this further form of enrichment for future investigation. \end{remark}
As for the traditional case, we are interested in rewriting paths rather than in single rewriting steps.
\begin{definition} \label{def:confluence} Let $(A, R)$ be a $\Quantale$-ARS{}. \begin{enumerate} \item We say that $R$ is \emph{confluent} if $R^*$ has the diamond property. \item We say that $R$ is \emph{locally confluent}\footnote{Notice that $\stardual{R} = \dualstar{R}$.} if $\dual{R}; R \leq R^*; \dualstar{R}$. \item We say that $R$ is \emph{Church-Rosser} (CR, for short) if $\makedistance{R} = R^*; \dualstar{R}$. \end{enumerate} \end{definition}
If a rewriting $\mathbb{\quantale}$-relation is confluent, then we can characterise the convertibility distance $\makedistance{R}$ in terms of convergent sequences of rewriting steps.
\begin{proposition} \label{prop:church-rosser} Let $(A, R)$ be a $\Quantale$-ARS{}. Then $R$ is confluent if and only if it is CR. \end{proposition}
\begin{proof} Clearly, if $R$ is CR, then it is confluent. Suppose now $R$ to be confluent and recall that $\makedistance{R} = (R \vee \dual{R})^*$. First, we notice that since $\dualstar{R} = R^{{\scriptstyle-}*}$, we have: $$ R^*; \dualstar{R} \leq R^*; R^{{\scriptstyle-}*} \leq (R \vee \dual{R})^*; (R \vee \dual{R})^* = (R \vee \dual{R})^*. $$ It thus remains to show $(R \vee \dual{R})^* \leq R^*; \dualstar{R}$. We proceed by fixed point induction, showing that $$\Delta \vee ((R \vee \dual{R}); R^*; \dualstar{R}) \leq R^*; \dualstar{R}.$$ Clearly, $\Delta \leq R^*; \dualstar{R}$, so that it remains to show $(R \vee \dual{R}); R^*; \dualstar{R} \leq R^*; \dualstar{R}$.
We have:\footnote{Recall that for all $R_i: A \tobar B$ and $S: B \tobar C$, we have $(\bigvee_i R_i); S = \bigvee_i R_i; S$.} \begin{align*} (R \vee \dual{R}); R^*; \dualstar{R} &= R;R^*; \dualstar{R} \vee \dual{R}; R^*; \dualstar{R} & \\ &\leq R^*; \dualstar{R} \vee \dual{R}; R^*; \dualstar{R} & \\ &\leq R^*; \dualstar{R} \vee R^{{\scriptstyle -}*}; R^*; \dualstar{R} & \\ &= R^*; \dualstar{R} \vee \dualstar{R}; R^*; \dualstar{R} & \\ &\leq R^*; \dualstar{R} \vee R^*; \dualstar{R}; \dualstar{R} & \text{(by confluence)} \\ &\leq R^*; \dualstar{R} \vee R^*; \dualstar{R} & \\ &= R^*; \dualstar{R} & \end{align*} \end{proof}
\begin{notation} Given a $\Quantale$-ARS{} $\mathcal{A} = (A,R)$ and a property $\varphi$ on $\mathbb{\quantale}$-relations, such as being confluent, we say that $\mathcal{A}$ has property $\varphi$ if $R$ has $\varphi$. Thus, for instance, we say that $\mathcal{A}$ is confluent if $R$ is. \end{notation}
Thanks to \autoref{prop:church-rosser}, we see that confluence is a crucial property in quantitative{} and metric reasoning. Proving confluence of quantitative systems, however, can be cumbersome: indeed, quantitative{} systems are often built \emph{compositionally} by joining systems together. \autoref{section:long-intro} has already shown us several examples of systems obtained that way. Consequently, it is desirable to design \emph{modular techniques} to prove confluence of such systems compositionally, i.e. relying on confluence of their component subsystems, rather than proceeding monolithically from scratch. Among such modular techniques, the Hindley-Rosen Lemma~\cite{hindley-1964,rosen-70} is arguably the most well-known one in traditional rewriting. \autoref{lemma:hindley-rosen} generalises such a result to quantitative systems. Before proving it, we recall a few basic properties of $\mathbb{\quantale}$-relations.
\begin{lemma} \label{lemma:auxiliary-lemma-hindley-rosen} Given $\mathbb{\quantale}$-relations $R$, $S$, and $P$, we have: \begin{enumerate} \item If $R; S \leq S; R$ and $R; P \leq P; R$, then $R; (S \vee P)^* \leq (S \vee P)^*; R$. \item $(R^* \vee S^*)^* = (R \vee S)^*$. \end{enumerate} \end{lemma}
\begin{proof} For the first item, we observe that proving the thesis amounts to proving $(S \vee P)^* \leq R \setminus ((S \vee P)^*; R)$, so that we can use fixed point induction. Proving $\Delta \leq R \setminus ((S \vee P)^*; R)$ is straightforward. It remains to prove $$ (S \vee P); R \setminus ((S \vee P)^*; R) \leq R \setminus ((S \vee P)^*; R). $$ Since $(S \vee P); R \setminus ((S \vee P)^*; R) = (S; R \setminus ((S \vee P)^*; R)) \vee (P; R \setminus ((S \vee P)^*; R)) $, it is sufficient to prove $S; R \setminus ((S \vee P)^*; R) \leq R \setminus ((S \vee P)^*; R)$ and $P; R \setminus ((S \vee P)^*; R) \leq R \setminus ((S \vee P)^*; R)$. We prove the former which, by adjunction, is equivalent to $$ R; S; R \setminus ((S \vee P)^*; R) \leq (S \vee P)^*; R. $$ By commutation of $R$ with $S$, we obtain: \begin{align*} R; S; R \setminus ((S \vee P)^*; R) &\leq S; R; R \setminus ((S \vee P)^*; R) \\ &\leq S; (S \vee P)^*; R \\ &\leq (S \vee P) ; (S \vee P)^*; R \\ &\leq (S \vee P)^*; R. \end{align*} Let us now move to the second item, which essentially amounts to proving $(R^* \vee S^*)^* \leq (R \vee S)^*$. We use fixed point induction and show $(R^* \vee S^*);(R \vee S)^* \leq (R \vee S)^*$.
We have: \begin{align*} (R^* \vee S^*);(R \vee S)^* &= R^*;(R \vee S)^* \vee S^*;(R \vee S)^* \\ &\leq (R \vee S)^*;(R \vee S)^* \vee (R \vee S)^*;(R \vee S)^* \\ &= (R \vee S)^* \vee (R \vee S)^* \\ &= (R \vee S)^*. \end{align*} \end{proof}
\begin{proposition}[Hindley-Rosen Lemma] \label{lemma:hindley-rosen} If $R^{*}$ commutes with $S^{*}$ and $R$, $S$ are confluent, then $R \vee S$ is confluent. \end{proposition}
\begin{proof} By the second item of \autoref{lemma:auxiliary-lemma-hindley-rosen}, it is enough to prove that if $R$ commutes with $S$ and both $R$ and $S$ have the diamond property, then $R \vee S$ is confluent. Let us write $P$ for $R \vee S$. Using the adjoints of $\mathbb{\quantale}$-relation composition, we obtain: $$\dualstar{P};P^* \leq P^*; \dualstar{P} \iff \dualstar{P} \leq (P^*; \dualstar{P}) / P^* \iff \stardual{P} \leq (P^*; \dualstar{P}) / P^* $$ Therefore, it is sufficient to prove $\stardual{P} \leq (P^*; \dualstar{P}) / P^*$, which we do using fixed point induction. That amounts to proving $\Delta \leq (P^*; \dualstar{P}) / P^*$ and $\dual{P}; (P^*; \dualstar{P}) / P^* \leq (P^*; \dualstar{P}) / P^*$. The former holds since $$ \Delta \leq (P^*; \dualstar{P}) / P^* \iff \Delta; P^* \leq P^*; \dualstar{P} $$ and $ \Delta; P^* = P^* = P^*; \Delta \leq P^*; \dualstar{P}$. For the latter, we have $$ \dual{P}; (P^*; \dualstar{P}) / P^* \leq (P^*; \dualstar{P}) / P^* \iff \dual{P}; ((P^*; \dualstar{P}) / P^*); P^* \leq P^*; \dualstar{P} \impliedby \dual{P}; P^*; \dualstar{P} \leq P^*; \dualstar{P} $$ since $((P^*; \dualstar{P}) / P^*); P^* \leq P^*; \dualstar{P}$. To prove $\dual{P}; P^*; \dualstar{P} \leq P^*; \dualstar{P}$, we notice that $$ \dual{P}; P^*; \dualstar{P} = \dual{(R \vee S)}; P^*; \dualstar{P} = (\dual{R} \vee \dual{S}); P^*; \dualstar{P} = (\dual{R}; P^*; \dualstar{P}) \vee (\dual{S}; P^*; \dualstar{P}) $$ so that it is sufficient to prove $\dual{R}; P^*; \dualstar{P} \leq P^*; \dualstar{P}$ and $\dual{S}; P^*; \dualstar{P} \leq P^*; \dualstar{P}$. We prove the first inequality, as the second one is similar. Since $R$ commutes both with itself and with $S$, by \autoref{lemma:auxiliary-lemma-hindley-rosen}, we have: \begin{align*} \dual{R}; (R \vee S)^*; \dualstar{(R \vee S)} &\leq (R \vee S)^*; \dual{R}; \dualstar{(R \vee S)} \\ &= (R \vee S)^*; \dual{R}; \stardual{(R \vee S)} \\ &\leq (R \vee S)^*; (\dual{R} \vee \dual{S}); \stardual{(R \vee S)} \\ &\leq (R \vee S)^*; \dual{(R\vee S)}; \stardual{(R \vee S)} \\ &\leq (R \vee S)^*; \stardual{(R \vee S)} \\ &=P^*; \dualstar{P} \end{align*} \end{proof}
\subsection{Locality and Termination} By \autoref{prop:church-rosser}, we know that nice operational properties of a $\mathbb{\quantale}$-equivalence $E$ can be obtained by characterising $E$ as the convertibility $\mathbb{\quantale}$-relation of a \emph{confluent} rewriting $\mathbb{\quantale}$-relation $R$. Even if \autoref{lemma:hindley-rosen} gives a technique to prove confluence of composed systems compositionally, proving confluence of ``atomic'' systems may still not be easy. In fact, by its very definition, confluence is a \emph{global} property of a system, in the sense that it refers to rewriting sequences, rather than to single rewriting steps. Newman's Lemma \cite{newman} is a well-known result in the theory of abstract rewriting stating that if a system is \emph{terminating}, then confluence follows from \emph{local confluence}. The rest of this section is dedicated to refining Newman's Lemma to a quantitative{} setting.
To do so, we first define the notion of a terminating $\mathbb{\quantale}$-relation and prove that terminating $\mathbb{\quantale}$-relations satisfy a suitable induction principle. We then use the latter to extend Newman's Lemma to $\mathbb{\quantale}$-relations. Before proceeding any further, we observe that up to this point our analysis of $\Quantale$-ARS{s} has been relational, proceeding in an algebraic and pointfree fashion. To make this paper as accessible as possible, we now take a (temporary) break from that methodology and first give a \emph{pointwise} analysis of quantitative{} termination and a \emph{pointwise} proof of (the quantitative{} refinement of) Newman's Lemma similar to the one by \citet{fuzzy-rewriting-1} (see \autoref{sect:conclusion} for a precise comparison). After that, we go back on our choice and review termination and Newman's Lemma in a novel way, following the relational and algebraic paradigm and extending the relational theory of induction and abstract rewriting by \citet{backshouse-calculational-approach-to-mathematical-induction} to a quantitative{}, $\mathbb{\quantale}$-enriched setting. Such an extension is nontrivial and requires the introduction of suitable relational modalities akin to corelators \cite{DBLP:journals/pacmpl/LagoG22a}. The outcome (which the authors believe is worth the effort) is interesting not only because it gives a clean analysis of induction in a $\mathbb{\quantale}$-enriched setting --- as well as a (slightly) more general version of (quantitative) Newman's Lemma --- but also for the methodology employed, which constitutes a nice example of quantitative{} relational methods. \subsubsection{Quantitative Termination, Induction, and Newman's Lemma} As a first step towards a quantitative{} refinement of Newman's Lemma, we extend the notion of termination to $\mathbb{\quantale}$-relations. Contrary to traditional rewriting, in a quantitative{} setting the notion of termination may be defined in many, non-equivalent ways. We could define, for instance, a terminal element as one having no \emph{nontrival} and \emph{non-null} reductions, so that we allow a terminal element $a$ to be reduced to another one $b$, only if the distance between $a$ and $b$ is either $\mathsfit{k}$ or $\bot$. This notion, which is meaningful in a genuine quantitative{} setting (especially if interested in `metric reasoning modulo equality'), trivialises when instantiated to the Boolean quantale, as elements are always reducible. To stay closer to traditional rewriting, we may exclude null distances too, so that terminal elements can be reduced only to those element that are $\bot$-apart from them. Finally, we may also go beyond finitary notions of reduction \cite{infinitary-rewriting-1,infinitary-rewriting-2,infnitary-rewriting-3} and think about termination in the limit \cite{Faggian-2019}, i.e. as possibly infinite reductions converging to $\mathsfit{k}$, in the limit. All these proposals are legitimate, and they all deserve to be investigated. A complete analysis of notions of quantitative{} termination, however, is beyond the scope of this paper and we should thus fix a conceptually minimal notion of termination and focus on that one only. To do that, we take an operational approach and stipulate that a (rewriting) $\mathbb{\quantale}$-relation is terminating if it supports an induction principle. But what could the latter possibly be? 
Let us recall \cite{backshouse-calculational-approach-to-mathematical-induction} that, given a binary relation $R \subseteq A \times A$, a property $p$ on $A$ is $R$-\emph{inductive} if it satisfies the law $$(\forall x.\ x R y \to p(x)) \to p(y),$$ for any $y \in A$. We then say that the relation $R$ \emph{admits induction} if for any $R$-inductive property $p$, $p(a)$ holds for any $a \in A$. Consequently, if $R$ admits induction and we want to prove that each element of $A$ has a given property $p$, it is enough to prove $p$ to be $R$-inductive. We thus recover the familiar formulation of well-founded induction: \[ \infer{\forall y. p(y)} {\forall y. (\forall x.\ x R y \to p(x)) \to p(y)} \] We now generalise this idea to $\mathbb{\quantale}$-relations and $\mathbb{\quantale}$-properties. \begin{definition} \label{def:inductive} Let $(A, R)$ be a $\Quantale$-ARS{}. \begin{enumerate} \item A $\mathbb{\quantale}$-property $p: A \to \Omega$ is \emph{$R$-inductive} if the following holds for any $b \in A$: $$ \bigwedge_a R(a,b) \multimap p(a) \leq p(b). $$ \item We say that $R$ \emph{admits induction} if for any $a \in A$, $p(a) = \mathsfit{k}$, for any $R$-inductive predicate $p$. \end{enumerate} \end{definition} Notice that if $R$ admits induction and $p$ is an $R$-inductive predicate, then by \autoref{def:inductive} we obtain $p(a) = \mathsfit{k}$, for any $a \in A$.\footnote{ Another (more liberal) option that we do not explore in this work is to require $\bigwedge_a p(a) = \mathsfit{k}$ in place of $(\forall a)\ p(a) = \mathsfit{k}$.} \begin{remark} Being $R$-inductive (as well as admitting induction) is a Boolean property. As already seen in \autoref{remark:graded-properties} for confluence, we could obtain a finer, quantitative{} analysis of induction by $\mathbb{\quantale}$-enriching (i.e. grading) the properties of \autoref{def:inductive} in $\mathbb{\quantale}$, this way replacing, e.g., $\bigwedge_a R(a,b) \multimap p(a) \leq p(b)$ with $\bigwedge_a (R(a,b) \multimap p(a)) \multimap p(b)$. The latter formula, intuitively, gives the \emph{degree of inductiveness} of $R$. \end{remark} Armed with \autoref{def:inductive}, we can now operationally identify terminating $\mathbb{\quantale}$-relations with those admitting induction. Nonetheless, the reader may wonder whether there is an explicit characterisation of terminating relations in terms of familiar conditions akin to the equivalence between inductive and well-founded relations in the traditional case. The answer is in the affirmative and shows that $\mathbb{\quantale}$-relations admitting induction are precisely those that terminate in the strongest sense among those discussed at the beginning of this section.\footnote{It is an interesting question to determine if weaker and quantitative{} refinements of \autoref{def:inductive} correspond to weaker and quantitative{} notions of termination.} \begin{definition} Let $(A, R)$ be a $\Quantale$-ARS{}. \begin{enumerate} \item We say that $a \in A$ is a \emph{normal form} if $\bigvee_b R(a,b) = \bot$. \item We say that a reduction sequence $(a_{0},...,a_{n})$ \emph{terminates} if $a_n$ is a normal form. \item We say that $R$ is \emph{weakly normalizing} (WN) if for any $a \in A$ there exists a normal form $b$ such that $R^*(a,b) \neq \bot$. \item We say that $R$ is \emph{strongly normalizing} (SN) if for each $a \in A$, all reduction sequences starting from $a$ terminate.
\end{enumerate} \end{definition} Notice that if a reduction sequence terminates, the sequence must be finite. Moreover, if $R$ is WN and $a \in A$, then there must exist an element $b$ such that $R^*(a,b) \neq \bot$. That means $\bigvee_n R^n(a,b) \neq \bot$, which in turn means that there exists an actual index $n$ and elements $a = a_0, a_1, \hdots, a_{n}= b$ such that $(a_0,\hdots, a_{n})$ is a reduction sequence from $a$ (in particular, $R(a_i, a_{i+1}) \neq \bot$, for any index $i$). The next result shows that terminating and inductive $\mathbb{\quantale}$-relations are indeed one and the same. \begin{proposition} \label{prop:induction-iff-SN} Let $(A, R)$ be a $\Quantale$-ARS{}. Then $\dual{R}$ admits induction if and only if $R$ is SN. \end{proposition} \begin{proof} We prove the two implications separately. \begin{itemize} \item[(${\implies}$)] Suppose $\dual{R}$ admits induction. We prove that $R$ is SN. Let $p(a) = \mathsfit{k}$ if all reduction sequences from $a$ terminate, and $p(a) = \bot$, otherwise. We prove that $p$ is inductive, from which the thesis follows. We have to show $\bigwedge_b R(a,b) \multimap p(b) \leq p(a)$. If $p(a) = \mathsfit{k}$, then we are trivially done. Otherwise, $p(a) = \bot$ and we have a sequence $a = a_0, a_1, \hdots$ such that $R(a_n, a_{n+1}) \neq \bot$, for any $n$. To prove $\bigwedge_b R(a,b) \multimap p(b) \leq \bot$, we show $R(a,a_1) \multimap p(a_1) \leq \bot$. By the very definition of $\multimap$, we have $R(a,a_1) \multimap p(a_1) = \bigvee \{\varepsilon \mid \varepsilon \otimes R(a, a_1) \leq p(a_1)\}$, so that it is sufficient to show that for any $\varepsilon$ such that $\varepsilon \otimes R(a, a_1) \leq p(a_1)$, we have $\varepsilon \leq \bot$. Now, obviously $p(a_1) = \bot$, so that $\varepsilon \otimes R(a, a_1) = \bot$ too. Since $\mathbb{\quantale}$ is Lawverian, we then have that either $\varepsilon = \bot$ or $R(a, a_1) = \bot$. Since $R(a, a_1) \neq \bot$, we thus conclude $\varepsilon = \bot$, and we are done. \item[(${\impliedby}$)] Suppose $R$ is SN and let $p$ be $\dual{R}$-inductive. We prove $p(a) = \mathsfit{k}$, for any $a \in A$. We proceed by contradiction showing that if there exists $a \in A$ such that $p(a) < \mathsfit{k}$, then there also exists $b \in A$ such that $R(a,b) \neq \bot$ and $p(b) < \mathsfit{k}$. Therefore, if there is an $a$ such that $p(a) < \mathsfit{k}$, we also have a non-terminating reduction sequence from $a$, this way contradicting SN. So suppose that there is an element $a$ such that $p(a) < \mathsfit{k}$. Suppose also, for the sake of a contradiction, that for any $b$ either $R(a,b) = \bot$ or $p(b) = \mathsfit{k}$. In both cases we obtain $R(a,b) \multimap p(b) = \mathsfit{k}$, and thus $\bigwedge_b R(a,b) \multimap p(b) = \mathsfit{k}$. Since $p$ is inductive, we also have $\bigwedge_b R(a,b) \multimap p(b) \leq p(a)$ and thus $\mathsfit{k} = \bigwedge_b R(a,b) \multimap p(b) \leq p(a) < \mathsfit{k}$. Contradiction. \end{itemize} \end{proof} We now have all the ingredients to quantitatively refine Newman's Lemma. \begin{proposition} \label{prop:newmans-lemma-pointwise} Let $(A, R)$ be a \emph{strongly normalising} $\Quantale$-ARS. Then, $R$ is confluent if and only if it is locally confluent. \end{proposition} \begin{proof} Obviously, if $R$ is confluent, then it is locally confluent too. We prove the converse.
Suppose that $R$ is locally confluent and let us define the $\mathbb{\quantale}$-property $p$ as follows: $$ p(a)\triangleq \begin{cases} \mathsfit{k} & \text{if } \forall b_1, b_2 \in A.\ R^*(a,b_1) \otimes R^*(a,b_2) \leq \bigvee_b R^*(b_1,b) \otimes R^*(b_2,b) \\ \bot & \text{otherwise.} \end{cases} $$ Therefore, $p$ is a Boolean property, in the sense that for any $a \in A$, $p(a)$ is either $\mathsfit{k}$ or $\bot$. Moreover, we see that $p(a) = \mathsfit{k}$ if and only if $R$ is confluent on $a$. Since $R$ is SN, $\dual{R}$ admits induction (\autoref{prop:induction-iff-SN}), and thus to prove the thesis it is sufficient to show that $p$ is inductive. Let us first notice that since $p$ is Boolean, we can simplify the proof of its inductiveness. \begin{claim} To prove that $p$ is inductive it is enough to show: \begin{align} \forall a.\ (\forall b.\ R(a,b) \neq \bot \implies p(b) = \mathsfit{k}) &\implies p(a) = \mathsfit{k}. \label{eq:newman-aux} \end{align} \end{claim} \begin{proofoftheclaim}{} To prove that $p$ is inductive, we have to show that for any $a$, $\bigwedge_b R(a,b) \multimap p(b) \leq p(a)$. Now, since $p$ is Boolean, either $p(a) = \mathsfit{k}$ or $p(a) = \bot$. In the former case, we trivially have $\bigwedge_b R(a,b) \multimap p(b) \leq \mathsfit{k} = p(a)$. In the latter case (i.e. $p(a) = \bot$), by (the contrapositive of) \eqref{eq:newman-aux} there exists an element in $A$, call it $c$, such that $R(a,c) \neq \bot$ and $p(c) = \bot$. We then have $$\bigwedge_b R(a,b) \multimap p(b) \leq R(a,c) \multimap p(c) = \bot = p(a), $$ since $\mathbb{\quantale}$ is Lawverian. \end{proofoftheclaim} \noindent Coming back to the main proof, we have seen that we can conclude the main thesis by proving \eqref{eq:newman-aux}. So let us fix $a \in A$ and assume \begin{align} \forall b.\ R(a,b) \neq \bot \implies p(b) = \mathsfit{k}. \label{eq:newman-hyp} \end{align} We prove $p(a) = \mathsfit{k}$, i.e. $R^*(a,b_1) \otimes R^*(a,b_2) \leq \bigvee_b R^*(b_1,b) \otimes R^*(b_2,b)$, for arbitrary $b_1,b_2$. Let $\varepsilon \triangleq \bigvee_b R^*(b_1,b) \otimes R^*(b_2,b)$. Since $R^* = \Delta \vee R;R^*$, it is sufficient to prove $\Delta(a,b_1) \vee (R;R^*)(a,b_1) \leq R^*(a,b_2) \multimap \varepsilon$, which is itself implied by $\Delta(a,b_1) \leq R^*(a,b_2) \multimap \varepsilon$ and $(R;R^*)(a,b_1) \leq R^*(a,b_2) \multimap \varepsilon$. For the former, we assume $a=b_1$ (the case for $a \neq b_1$ is trivial) and notice that $R^*(a,b_2) \leq \varepsilon$ follows by taking $b = b_2$. For the latter, we see that $(R;R^*)(a,b_1) \leq R^*(a,b_2) \multimap \varepsilon$ is equivalent to $R^*(a,b_2) \leq (R;R^*)(a,b_1) \multimap \varepsilon$. We repeat the above argument, this time on $R^*(a,b_2)$, so that it is sufficient to prove $\Delta(a,b_2) \leq (R;R^*)(a,b_1) \multimap \varepsilon$ and $(R;R^*)(a,b_2) \leq (R;R^*)(a,b_1) \multimap \varepsilon$. For the former, we proceed as usual. The really interesting case is the latter, which is equivalent to $$ (R;R^*)(a,b_1) \otimes (R;R^*)(a,b_2) \leq \varepsilon. $$ Since we have $$ (R;R^*)(a,b_1) \otimes(R;R^*)(a,b_2) = \bigvee_c R(a,c) \otimes R^*(c,b_1) \otimes \bigvee_d R(a,d) \otimes R^*(d,b_2), $$ it is enough to prove that for all $c,d$, we have $R(a,c) \otimes R^*(c,b_1) \otimes R(a,d) \otimes R^*(d,b_2) \leq \varepsilon$. Now, if either $R(a,c) = \bot$ or $R(a,d) = \bot$, we are trivially done, since $\bot$ is absorbing for the tensor. Suppose then that they are both different from $\bot$, so that we can apply \eqref{eq:newman-hyp} on both of them, this way obtaining $p(d) = p(c) = \mathsfit{k}$.
Since $R$ is locally confluent, we obtain: \begin{align*} R(a,c) \otimes R^*(c,b_1) \otimes R(a,d) \otimes R^*(d,b_2) &= R(a,c) \otimes R(a,d) \otimes R^*(c,b_1) \otimes R^*(d,b_2) \\ &\leq \bigvee_e R^*(c,e) \otimes R^*(d,e) \otimes R^*(c,b_1) \otimes R^*(d,b_2) \end{align*} It is then sufficient to prove $R^*(c,e) \otimes R^*(d,e) \otimes R^*(c,b_1) \otimes R^*(d,b_2) \leq \varepsilon$, for any $e \in A$. From $p(d) = p(c) = \mathsfit{k}$, we obtain: \begin{align*} R^*(c,e) \otimes R^*(d,e) \otimes R^*(c,b_1) \otimes R^*(d,b_2) &= R^*(c,e) \otimes R^*(c,b_1) \otimes R^*(d,e) \otimes R^*(d,b_2) \\ &\leq \bigvee_f R^*(e,f) \otimes R^*(b_1,f) \otimes R^*(d,e) \otimes R^*(d,b_2) \\ &= \bigvee_f R^*(d,e) \otimes R^*(e,f) \otimes R^*(b_1,f) \otimes R^*(d,b_2) \\ &= \bigvee_f R^*(d,f) \otimes R^*(b_1,f) \otimes R^*(d,b_2) \\ &= \bigvee_f R^*(d,f) \otimes R^*(d,b_2) \otimes R^*(b_1,f) \\ &\leq \bigvee_f \bigvee_g R^*(f,g) \otimes R^*(b_2,g) \otimes R^*(b_1,f) \\ &= \bigvee_f \bigvee_g R^*(b_1,f) \otimes R^*(f,g) \otimes R^*(b_2,g) \\ &\leq \bigvee_g R^*(b_1,g) \otimes R^*(b_2,g) \\ &= \varepsilon \end{align*} \end{proof} \section{Quantitative Induction and Newman's Lemma, $\mathbb{\quantale}$-Relationally} Even if mathematically fine, the previous section does not follow the relational style we have used to define quantitative{} rewriting so far. \citet{backshouse-calculational-approach-to-mathematical-induction} have developed a relational theory of induction that allowed them to give an elegant, algebraic proof of Newman's Lemma. In this section, we extend their proof to $\Quantale$-ARS{s}. As already remarked, such an extension is nontrivial and builds upon the crucial notion of a relational modality (also known as a corelator \cite{DBLP:journals/pacmpl/LagoG22a,modal-reasoning-equal-metric-reasoning}) to define Boolean properties relationally. Let us begin by reviewing how \citet{backshouse-calculational-approach-to-mathematical-induction} deal with induction, algebraically. In a nutshell, the (Boolean) calculus of classes is first embedded into the calculus of relations, this way defining (Boolean) predicates as relations satisfying suitable laws. Secondly, given a relation $R$ and a predicate $p$, the (semantics of the) predicate ${R}{\searrow}{p}$ is defined as\footnote{${R}{\searrow}{p}$ is actually defined relying on the axioms of the calculus of relations only, rather than on their set-theoretic semantics.} $\{x \in A \mid \forall y.\ y R x \to p(y)\}$. We then say that a predicate $p$ is $R$-inductive if ${R}{\searrow}{p} \subseteq p$ and that $R$ admits induction if $$ {R}{\searrow} p \subseteq p \implies \Delta \subseteq p. $$ We now generalise this construction to the setting of $\mathbb{\quantale}$-relations. First, we define the notion of a $\mathbb{\quantale}$-predicate. There are several ways to define predicates relationally. For instance, thinking about a $\mathbb{\quantale}$-relation $R: A \tobar B$ as a $\mathbb{\quantale}$-valued matrix, we can see a predicate as a --- row or column --- vector~\cite{relational-mathematics}. Accordingly, we define a predicate over $A$ as a $\mathbb{\quantale}$-relation $p: A \tobar 1$, with $1 \triangleq \{*\}$ one-element set.\footnote{We thus view predicates as column vector. Equivalently, we may define predicates as row vectors, i.e. as $\mathbb{\quantale}$ relations $p: 1 \tobar A$.} Notice that since $\qtop: 1 \tobar 1$ coincides with $\Delta$, any predicate $p$ satisfies $p; \qtop = p$. 
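Under the vector view just introduced, the pointwise formula of \autoref{def:inductive} becomes an ordinary matrix-style computation. The following illustrative sketch (ours; it continues the Lawvere-quantale representation used above, where the residual $\varepsilon \multimap \delta$ is truncated subtraction) computes $R \setminus p$ for a vector $p$ and checks inductiveness pointwise.
\begin{verbatim}
# Sketch: Q-predicates as vectors over the Lawvere quantale.  Meets are
# numerical suprema and the residual eps -o delta is max(delta - eps, 0).
INF = float("inf")

def residual(eps, delta):
    # eps -o delta = join { gamma | eps + gamma <= delta (quantale order) }
    if eps == INF:           # bottom -o anything is the unit 0
        return 0.0
    return max(delta - eps, 0.0)

def left_residual_vector(R, p):
    # (R \ p)(b) = meet_a  R(a, b) -o p(a); meets are numerical maxima
    n = len(R)
    return [max(residual(R[a][b], p[a]) for a in range(n)) for b in range(n)]

def is_inductive(R, p):
    # p is R-inductive iff R \ p <= p, i.e. (R \ p)(b) >= p(b) numerically
    return all(q >= pb for q, pb in zip(left_residual_vector(R, p), p))
\end{verbatim}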
Another way to define predicates is by means of \emph{coreflexive} $\mathbb{\quantale}$-relations~\cite{backshouse-calculational-approach-to-mathematical-induction} (also known as monotypes), whereby a predicate on $A$ is a $\mathbb{\quantale}$-relation $p: A \tobar A$ such that $p \leq \Delta$. Vectors and coreflexives are equivalent notions, in the sense that there is an isomorphism between (column) vectors and coreflexives: any vector $p: A \tobar 1$ gives the coreflexive\footnote{Notice that $R \otimes \Delta$ = $R \wedge \Delta$.} $(p;\qtop) \otimes \Delta: A \tobar A$; vice versa, any coreflexive $p: A \tobar A$ gives the vector $p; \qtop$. Given a column $\mathbb{\quantale}$-vector $p$ and a $\mathbb{\quantale}$-relation $R$, we notice that $$(R \setminus p)(b,*) = \bigwedge_a R(a,b) \multimap p(a,*)$$ gives exactly the formula we have used to define inductive predicates. Moreover, the same formula can be obtained if $p$ is a coreflexive by considering $R \setminus (p;\qtop)$. Since we will extensively work with coreflexives, we introduce the notation ${R}{\searrow} p$ for $R \setminus (p;\qtop)$. \begin{definition} \label{def:inductive-2} Let $R: A \tobar A$ be a $\mathbb{\quantale}$-relation. \begin{enumerate} \item A $\mathbb{\quantale}$-vector $p: A \tobar 1$ is \emph{$R$-inductive} if $R \setminus p \leq p$. We say that $R$ \emph{admits vector induction} if, for any $\mathbb{\quantale}$-vector $p: A \tobar 1$, we have: $$ R \setminus p \leq p \implies \qtop \leq p. $$ \item A coreflexive $p: A \tobar A$ is \emph{$R$-inductive} if ${R}{\searrow} p \leq p$. We say that $R$ \emph{admits coreflexive induction} if, for any coreflexive $p: A \tobar A$, we have: $$ {R}{\searrow} p \leq p \implies \Delta \leq p. $$ \end{enumerate} \end{definition} \begin{remark} \label{rem:condition-well-founded} Thanks to the correspondence between vectors and coreflexives, it is easy to see that the notions of vector and coreflexive induction are equivalent, and that they indeed correspond to the pointwise notion of an inductive $\mathbb{\quantale}$-relation as given in \autoref{def:inductive}. Morevoer, we can abstract \autoref{def:inductive-2} from vectors and monotypes, and say, in full generality, that a $\mathbb{\quantale}$-relation $R$ \emph{admits induction} if $$ R \setminus S \leq S \implies \qtop \leq S $$ for any $\mathbb{\quantale}$-relation $S$. Obviously, this definition subsumes those in \autoref{def:inductive-2}. One can also show that the vice versa holds too, and that the above three definitions of an inductive $\mathbb{\quantale}$-relation are equivalent. \end{remark} To prove (the quantitative{} refinement of) Newman's Lemma, we will not do indution on an arbitrary $\mathbb{\quantale}$-property, but on a \emph{Boolean} one. In previous section, we have modelled Boolean predicates as $\mathbb{\quantale}$-properties $p$ that are either equal to $\qtop$ or to $\Bot$. Even if correct, such a definition is operationally weak since it does not readily come with useful algebraic laws and proof techniques. We overcome the problem by giving a modality-based definition of Boolean $\mathbb{\quantale}$-properties in the spirit of the exponential modality of linear logic \cite{DBLP:journals/tcs/Girard87}. To do so, we proceed as follows: first, we define a way to extract a Boolean property out of an $\mathbb{\quantale}$-enriched one. 
Since we can inject Boolean properties into $\mathbb{\quantale}$-enriched ones, we can then pick a $\mathbb{\quantale}$-property, extract a Boolean predicate out of it, and then (re)enrich it in $\mathbb{\quantale}$. We say that a property is Boolean if it is invariant under the above procedure. Given a quantale $\mathbb{\quantale}$, there is a canonical adjunction between $\mathbb{\quantale}$ and $\mathbbm{2}$ given by the maps $\varphi: \Omega \to \mathbb{2}$ and $\psi: \mathbb{2} \to \Omega$ defined thus: $$ \varphi(\varepsilon) \triangleq \begin{cases} \top & \text{ if } \varepsilon = \mathsfit{k} \\ \bot & \text{ otherwise} \end{cases} \qquad \psi(x) \triangleq \begin{cases} \mathsfit{k} & \text{ if } x = \top \\ \bot & \text{ otherwise} \end{cases} $$ Both $\varphi$ and $\psi$ are so-called change of base functors \cite{Hoffman-Seal-Tholem/monoidal-topology/2014,Kelly/EnrichedCats}, and their composition $\psi \circ \varphi: \Omega \to \Omega$ is a change of base endofunctor.\footnote{ Change of base functors will play a crucial role in \autoref{sect:beyond-non-expansive-systems}.} Since we define Boolean $\mathbb{\quantale}$-relations (and thus $\mathbb{\quantale}$-properties) as those that are invariant under the map $\psi \circ \varphi$, we introduce a special notation for the latter. \begin{definition} Define the (set-indexed family of) map(s) $\Box: \Vrel{\mathbb{\quantale}}(A,B) \to \Vrel{\mathbb{\quantale}}(A,B)$ by $\Box R \triangleq \psi \circ \varphi \circ R$. We say that a $\mathbb{\quantale}$-relation is \emph{Boolean} if $\Box R = R$. \end{definition} The following result (whose proof is straightforward) simply states that $\Box$ satisfies (some of) the axioms of a corelator \cite{DBLP:journals/pacmpl/LagoG22a,modal-reasoning-equal-metric-reasoning}. We will extensively use this fact in the proof of \autoref{prop:newmans-lemma}. \begin{proposition} The map $\Box$ obeys the following laws, where $R \otimes S: A \times B \tobar A' \times B'$ is defined pointwise, for $R: A \tobar A'$ and $S: B \tobar B'$. \begin{align*} \Delta &\leq \Box \Delta \label{eq:relator-id} \tag{rel-id} \\ \Box R; \Box S &\leq \Box(R; S) \label{eq:relator-comp} \tag{rel-comp} \\ \Box{R} &\leq R \label{eq:relator-dereliction} \tag{rel-der} \\ \Box R \otimes \Box S &\leq \Box(R \otimes S) \label{eq:relator-tensor} \tag{rel-tensor} \\ \Box R &\leq \Box \Box R \label{eq:relator-contraction} \tag{rel-contraction} \\ R \leq S &\implies \Box R \leq \Box S \label{eq:relator-monotone} \tag{rel-mon} \end{align*} \end{proposition} Using the map $\Box$ we can specialise the notion of a coreflexive (and of a vector) to Boolean properties. \begin{definition} A Boolean property on $A$ is a coreflexive $p: A \tobar A$ (i.e. $p \leq \Delta$) such that $p = \Box p$. \end{definition} Before stating our quantitative{} version of Newman's Lemma, let us spell out some useful facts about Boolean properties. \begin{lemma} \label{lemma:newman-help-1} Given $\mathbb{\quantale}$-relations $R, S: A \tobar A$ and a Boolean property $p: A \tobar A$, we have: \begin{enumerate} \item $R \searrow p$ is a Boolean property. \item $(R \searrow p);(S \searrow p) = (R \vee S) \searrow p$. \item $R \searrow p = p \swarrow \dual{R}$. \item $R;(R \searrow p) \leq p;R$ and $(p \swarrow S);S \leq S; p$.
\end{enumerate} \end{lemma} We are now ready to state and prove the quantitative{} refinement of the abstract Newman's Lemma by \cite{backshouse-calculational-approach-to-mathematical-induction}. \begin{proposition} \label{prop:newmans-lemma} Let $R, S: A \tobar A$ be $\mathbb{\quantale}$-relations such that $R \vee \dual{S}$ admits induction. Then $R; S \leq S^*; R^*$ implies $R^*; S^* \leq S^*; R^*$ \end{proposition} Before proving \autoref{prop:newmans-lemma}, let us observe the following elementary fact. \begin{lemma} \label{lemma:newmans-helper-2} $R^*; S; P^* \leq (R^*;S) \vee (R^*;R;S;P;P^*) \vee (S; P^*)$. \end{lemma} \begin{proof}[Proof of \autoref{prop:newmans-lemma}] Since $R \vee \dual{S}$ admits induction, for any coreflexive $p$ we have $$ (R \vee \dual{S}) \searrow p \leq p \implies \Delta \leq p. $$ By \autoref{lemma:newman-help-1}, $(R \vee \dual{S}) \searrow p = (R \searrow p);(p \swarrow S)$, so that we obtain the following induction principle: $$ (R \searrow p);(p \swarrow S) \leq p \implies \Delta \leq p. $$ We have to prove $R^*; S^* \leq S^*; R^*$ which, by adjunction, is equivalent to $$ \Delta \leq R^* \setminus (S^*;R^*) / S^*. $$ We notice that for any $\mathbb{\quantale}$-relation $P$, we have $\Delta \leq P$ if and only if $\Delta \leq \Box(P \wedge \Delta)$. In fact, the left to right direction follows since $\Box \Delta = \Delta$, whereas the right to left direction follows from \eqref{eq:relator-dereliction} ($\Delta \leq \Box(P \wedge \Delta) \leq P \wedge \Delta \leq P$). Therefore, to prove $\Delta \leq R^* \setminus (S^*;R^*) / S^*$, it is enough to show $$ \Delta \leq \underbrace{\Box((R^* \setminus (S^*;R^*) / S^*) \wedge \Delta)}_{p}. $$ Notice that $p$ is a Boolean coreflexive, and thus we can rely on inductiveness of $R \vee \dual{S}$ and obtain the proof obligation $(R \searrow p);(p \swarrow S) \leq p$. Since $p$ is Boolean, then so are $R \searrow p$ and $p \swarrow S$, so that \eqref{eq:relator-comp} gives us: \begin{align*} (R \searrow p);(p \swarrow S) = \Box(R \searrow p);\Box(p \swarrow S) \leq \Box((R \searrow p);(p \swarrow S)). \end{align*} Therefore, our thesis becomes $$ \Box((R \searrow p);(p \swarrow S)) \leq p = \Box((R^* \setminus (S^*;R^*) / S^*) \wedge \Delta). $$ By \eqref{eq:relator-monotone}, it is sufficient to prove $(R \searrow p);(p \swarrow S) \leq (R^* \setminus (S^*;R^*) / S^*) \wedge \Delta $ which amounts to show \begin{align*} (R \searrow p);(p \swarrow S) &\leq \Delta \\ (R \searrow p);(p \swarrow S) &\leq R^* \setminus (S^*;R^*) / S^*. \end{align*} The former inequation is straightforward as both $R \searrow p$ and $p \swarrow S$ are coreflexives (and thus $R \searrow p \leq \Delta$ and $p \swarrow S \leq \Delta$), since $p$ is. Let us now move to the second inequation. By adjunction, we have to show $$ R^*; (R \searrow p);(p \swarrow S); S^* \leq S^*;R^*. $$ By \autoref{lemma:newmans-helper-2}, we reduce the proof to the following three inequations: \begin{align*} R^*; (R \searrow p);(p \swarrow S) &\leq S^*;R^* \\ R^*;R; (R \searrow p);(p \swarrow S);S; S^* &\leq S^*;R^* \\ (R \searrow p);(p \swarrow S); S^* &\leq S^*;R^*. \end{align*} For the first one, since both $R \searrow p$ and $p \swarrow S$ are coreflexives, we have: $$ R^*; (R \searrow p);(p \swarrow S) \leq R^*; \Delta; \Delta \leq R^*; S^*. $$ We prove the third inequation in a similar fashion. Let us now move to the second one. For readability, let $P$ be $R^* \setminus (S^*;R^*) / S^*$, so that $p = \Box(P \wedge \Delta)$. 
We have: \begin{align*} R^*;R; (R \searrow p);(p \swarrow S);S; S^* &\leq R^*; p; R ; S; p; S^* \tag{\autoref{lemma:newman-help-1}, item 4} \\ &\leq R^*; p; S^* ; R^*; p; S^* \tag{Hypothesis} \\ &= R^*; \Box(P \wedge \Delta); S^* ; R^*; \Box(P \wedge \Delta); S^* \\ &\leq R^*; (P \wedge \Delta); S^* ; R^*; (P \wedge \Delta); S^* \tag{\ref{eq:relator-dereliction}} \\ &\leq R^*; P; S^* ; R^*; P; S^* \\ &= R^*; R^* \setminus (S^*;R^*) / S^*; S^* ; R^*; R^* \setminus (S^*;R^*) / S^*; S^* \\ &\leq S^*;R^*; R^*; R^* \setminus (S^*;R^*) / S^*; S^* \\ &\leq S^*;R^*; R^* \setminus (S^*;R^*) / S^*; S^* \\ &\leq S^*;S^*;R^* \\ &\leq S^*;R^*. \end{align*} \end{proof} \begin{corollary}[Newman's Lemma] \label{corollary:newman-lemma} Let $(A,R)$ be a $\Quantale$-ARS{}. If $R$ is SN, then $R$ is confluent if and only if it is locally confluent. \end{corollary} \begin{proof} We instantiate $R$ and $S$ in \autoref{prop:newmans-lemma} as $\dual{R}$ and $R$, respectively. Consequently, the hypothesis that $R \vee \dual{S}$ admits induction collapses to $\dual{R}$ admitting induction, which is equivalent to $R$ being SN. \autoref{prop:newmans-lemma} then precisely gives confluence of $R$ (assuming its local confluence). \end{proof} \begin{remark} In the proof of \autoref{corollary:newman-lemma}, we have actually used the equivalence between $\mathbb{\quantale}$-relations admitting induction and terminating (well-founded) ones, as proved in \autoref{prop:induction-iff-SN}. We can indeed safely do so as we have seen that the relational pointfree definition of an inductive relation (\autoref{def:inductive-2}) coincides with its pointwise counterpart (\autoref{def:inductive}). Nonetheless, a complete relational analysis of (quantitative{}) Newman's Lemma requires a relational account of termination too. Doing that is beyond the scope of this paper, although it can be done with a reasonable effort. As a guideline, we simply say that a $\mathbb{\quantale}$-relation $R$ is \emph{well-founded} if $$ S \leq S; R \implies S \leq \Bot $$ for any $S$ (similar definitions can be obtained restricting to vectors and monotypes, as in \autoref{rem:condition-well-founded}), and that $R$ is SN if $\dual{R}$ is well-founded. \end{remark} \section{Quantitative Term Rewriting Systems: A Short Phenomenology} Having introduced the general theory of quantitative{} abstract rewriting systems, in the remaining sections of this paper we shall introduce \emph{quantitative{} term rewriting systems} and their connection with quantitative{} algebras. Contrary to traditional term rewriting systems, there are several notions of a quantitative{} term rewriting system (and of their associated notion of a quantitative{} equational theory), each of which is associated with a suitable notion of non-expansiveness of functions. In the next section, we shall deal with \emph{non-expansive} quantitative{} term rewriting systems, leaving to \autoref{sect:beyond-non-expansive-systems} the analysis of \emph{graded} quantitative{} term rewriting systems, the most general class of quantitative{} term-based systems we will study in this work. Before diving into the theory of non-expansive systems, however, it is instructive to anticipate a bit of term-systems phenomenology. \paragraph{Non-Expansive Systems} Non-expansive term rewriting systems ($\Quantale$-TRS{s}, for short) are quantitative systems in which reducing terms inside contexts \emph{non-expansively} propagates distances.
Therefore, if $e$ reduces to $f$ with distance $\varepsilon$, then $C[e]$ reduces to $C[f]$ with distance $\varepsilon$, too. That is, by thinking about the context $C$ as a function on terms, then $C$ is non-expansive with respect to the rewriting distance. To make this semantic choice coherent at a rewriting level, systems have to be \emph{linear}, as non-linearity of terms breaks non-expansiveness (cf. distance amplification in \autoref{sect:combinators-intro}). \paragraph{Additive Systems} Additive (term rewriting) systems constitute the subclass of $\Quantale$-TRS{s} whose quantale is idempotent. Even if quantitative{,} the monoidal structure of additive systems collapses to a \emph{cartesian} one, as the tensor product of an idempotent quantale coincides with the meet of its underlying lattice. The main consequence of that is that non-expansiveness of rewriting is semantically coherent even with \emph{non}-linearity of systems, so that we can have non-linear additive systems that do not suffer neither confluence nor distance amplification issues. The theory of additive systems is essentially the same as the one of traditional rewriting systems, the latter being the prime examples of additive systems. \paragraph{Graded Systems} Graded systems constitute the largest class of term-based quantitative{} rewriting systems. Contrary to non-expansive systems, in a graded system the distance generated by a reduction $e \qreduce{\varepsilon} f$ can be amplified (or reduced\footnote{In which case we may talk of \emph{contractive systems}.}) when performed in a context. Thus for instance, we may have that $e$ reduces to $f$ with distance $\varepsilon$, but $C[e]$ reduces to $C[f]$ with distance $\phi_{C}(\varepsilon)$. The map $\phi_C$ is known as the \emph{grade} or \emph{sensitivity} of the context $C$, and it gives the law determining how much distances are amplified by $C$. For instance, if we work with the Lawvere quantale, $\phi_C$ is usually a multiplication by a constant map, the intended semantic meaning of such a map being a generalised Lipschitz constant associated to $C$ when regarded as a function. Graded term rewriting systems are an example of \emph{modal} and \emph{coeffectful} systems; and because of their modal nature, they allow us to drop the linearity constraint of non-expansive systems without incurring in (semantic and rewriting) (in)consistency issues. The price to pay for that is the need for a more sophisticated (meta)theory than the one of non-expansive systems. The latter, in fact, can be seen as trivial graded systems in which all contexts have grade given by the identity function (i.e. no amplification). \section{Quantitative Term Rewriting: Non-Expansive Systems} \label{sect:qtrs} Let us now formally introduce non-expansive systems. Through this section, let $\mathbb{\quantale} = (\Omega}%{\mathsfit{V}, \leq, \otimes, \mathsfit{k})$. be a fixed \emph{continuous} quantale. Before going any further, we shortly recall some of the (standard) notions and notation we will use in the rest of the paper. \begin{description} \item[Terms] For a signature $\Sigma$ and a countable set of variables $X$, we write $\terms{\Sigma}{X}$ for the collection of ($\Sigma$-)terms over $X$. We use small Latin letters $e, f, g, \hdots$ to range over terms, sometimes using letters $a, b, c, \hdots$ too. \item[Positions] Recall that a \emph{position} $p$ is a finite string of positive integers. 
We denote by $\lambda$ the empty string and by $pq$ the concatenation of positions $p$, $q$; we write $p \leq q$ if $p$ is a prefix of $q$, i.e., if there is $r$ such that $q=pr$. We write $p\parallel q$ if $p \not\leq q$ and $q \not\leq p$. Finally, we denote by $\subtermpos{e}{p}$ the subterm of $e$ at position $p$, and by $e[f]_{p}$ the term obtained from $e$ by replacing the subterm at position $p$ with $f$ (so that, in particular, $e = e[\subtermpos{e}{p}]_{p}$). \item[Context] A context is a term over the signature $\Sigma \cup \{\Box\}$. We write $\mathcal{C}[\cdot]$ for a context containing a single occurrence of $\Box$ and use the notation $\mathcal{C}[e]$ to denote the term obtained by replacing the (single) occurrence of $\Box$ with $e$ in $\mathcal{C}[\cdot]$. \item[Substitution] We denote substitutions by $\sigma, \tau, \hdots$ and write $e^{\sigma}$ in place of $\sigma(e)$. Furthermore, given two substitutions $\sigma,\tau$, we write $\sigma \preceq \tau$ if there exists $\rho$ such that $\tau=\sigma\rho$, where $(\sigma\rho)(e) \triangleq \sigma(\rho(e))$. Given two terms $e$ and $f$, if $e^{\sigma}=f^{\sigma}$, then $\sigma$ is a unifier of $e$ and $f$, while $e$ and $f$ are said to be \emph{unifiable}. Finally, recall that the \emph{most general unifier} (\emph{mgu}) of two unifiable terms is their minimal unifier with respect to $\preceq$. \item[Linearity] We say that a term $e$ is linear if it has no multiple occurrences of the same variable. We say that a mathematical expression (such as a relation or a predicate) involving terms is linear if all terms appearing in it are linear. \end{description} We are now ready to define non-expansive quantitative{} term rewriting systems, which we simply refer to as $\mathbb{\quantale}$-term rewriting systems. \begin{definition} \label{def:qtrs} A \emph{$\mathbb{\quantale}$-term rewriting system} ($\Quantale$-TRS{}, for short) is a pair $\mathcal{R} = (\Sigma, \mapsto_{\vrelone})$ consisting of a signature $\Sigma$ and a $\mathbb{\quantale}$-ternary relation\footnote{ In quantitative{} algebra, it is customary to consider ternary relations on a \emph{base} of the quantale, rather than on the quantale itself~\cite{An-Internal-Language-for-Categories-Enriched-over-Generalised-Metric-Spaces}. Thus, for instance, in the case of the Lawvere quantale, we should take relations over non-negative rationals in place of those over $[0,\infty]$. Although this choice makes definitions computationally lighter, for our results working with ternary relations over elements of the quantale or over a base thereof makes no difference. Consequently, we will continue working with full $\mathbb{\quantale}$-ternary relations. Nonetheless, the reader can safely pretend such relations to be over a base of $\mathbb{\quantale}$ (see any textbook on lattices and domains \cite{AbramskyJung/DomainTheory/1994,continuous-lattices-and-domains,DaveyPriestley/Book/1990} for the definition of a base of a complete lattice, and the work by \citet{An-Internal-Language-for-Categories-Enriched-over-Generalised-Metric-Spaces} for an example of ternary relations over a base of a quantale). } $\mapsto_{\vrelone}$ over $\Sigma$-terms. The (rewriting) $\mathbb{\quantale}$-ternary relation $\to_{R}$ generated by $\mapsto_{R}$ is defined by the rules in \autoref{figure:qtrs}.
\end{definition} \begin{figure} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=\linewidth,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \[ \infer{\varepsilon \Vdash C[\subst{a}{\sigma}] \to_{\vrelone} C[\subst{b}{\sigma}]} {\varepsilon \Vdash a \mapsto_{\vrelone} b} \quad \infer{\delta \Vdash e \to_{\vrelone} f} {\varepsilon \Vdash e \to_{\vrelone} f & \delta \leq \varepsilon} \quad \infer{\bigvee \varepsilon_i \Vdash e \to_{\vrelone} f} {\varepsilon_1 \Vdash e \to_{\vrelone} f & \hdots & \varepsilon_n \Vdash e \to_{\vrelone} f} \quad \infer{\varepsilon \Vdash e \to_{\vrelone} f} {\forall \delta \ll \varepsilon.\ \delta \Vdash e \to_{\vrelone} f} \] \end{tcolorbox} } \caption{Definition of $\to_{R}$} \label{figure:qtrs} \end{figure} \begin{notation} We refer to a triple $(a, \varepsilon, b) \in {\mapsto_{R}}$, i.e. such that $a \qstepto{\varepsilon}_{R} b$, as a (reduction) \emph{rule}. We call $a$ the \emph{redex}, and $b$ the \emph{contractum}. Moreover, when $\vrelone$ is irrelevant or clear from the context, we shall write $\to$ in place of $\to_{\vrelone}$ and use the notation $e \qreduce{\varepsilon} f$ in place of $\varepsilon \Vdash e \to f$. \end{notation} The first defining rule of the relation $\to_{R}$ in \autoref{def:qtrs} is the main rewriting rule: it states that rewriting can be performed inside any context and on any instance of reductions in $\mapsto_{R}$. This rule --- which is standard in traditional rewriting --- reflects the (semantic) assumption that operation symbols in $\Sigma$ behave as \emph{non-expansive} functions: accordingly, contexts do not amplify rewriting distances. The remaining rules encode structural properties of quantitative{} rewriting: the first giving a form of quantitative{} weakening, the second stating that rewriting is closed under \emph{finite} joins, and the third stating a generalised continuity property. On the Lawvere quantale, for instance, we can read $e \qreduce{\varepsilon} f$ as stating that $e$ reduces to $f$ within an error of at most $\varepsilon$. Equivalently, we can reduce $e$ to the \emph{non}-semantically equivalent term $f$, which differs from $e$ by at most $\varepsilon$. Accordingly, the structural rules in \autoref{def:qtrs} respectively state that if $e \qreduce{\varepsilon} f$ and $\varepsilon \leq \delta$, then we also have $e \qreduce{\delta} f$; that rewriting is closed under (necessarily \emph{finite}) minima; and that rewriting satisfies the Archimedean property~\cite{plotkin-quantitative-algebras-2016}: to prove $e \qreduce{\varepsilon} f$, it is enough to prove $e \qreduce{\delta} f$ for any $\delta$ strictly bigger than $\varepsilon$ (i.e. $\delta > \varepsilon$). Any $\Quantale$-TRS{} $(\Sigma, \mapsto_{\vrelone})$ induces a $\Quantale$-ARS{} whose objects are $\Sigma$-terms and whose rewriting $\mathbb{\quantale}$-relation $R: \terms{\Sigma}{X} \tobar \terms{\Sigma}{X}$ is defined by $$ R(e, f) \triangleq \bigvee \{\varepsilon \mid \varepsilon \Vdash e \to_{\vrelone} f\}. $$ Consequently, all definitions and results seen so far extend to $\Quantale$-TRS{s}. For that reason, we oftentimes say that a $\Quantale$-TRS{} $(\Sigma, \mapsto_{\vrelone})$ has a given property when we actually mean that its associated $\Quantale$-ARS{} $(\terms{\Sigma}{X}, R)$ has it. \begin{remark} \label{rem:structural-rules} \autoref{def:qtrs} stipulates that $\to_{R}$ must be closed under suitable structural rules, viz.
weakening, closure under finite joins, and the so-called (\emph{infinitary}) Archimedean rule \cite{plotkin-quantitative-algebras-2016}. We have included such rules to stay as close as possible to the literature on quantitative{} equational theories, where structural rules are used to ensure completeness of equational proof systems. From a rewriting perspective, however, such rules can be safely (and maybe naturally) avoided, arguably with the exception of weakening. In fact, not only is having weakening as the only structural rule a natural design choice at a semantic level (making, e.g., the defining rules of $\to_{R}$ \emph{finitary}), but it also strengthens the theory of $\Quantale$-TRS{} we are going to develop. In particular, the presence of the Archimedean rule forces us to formulate, e.g., confluence results at the level of the $\mathbb{\quantale}$-relation $R$, and one may wonder whether confluence holds also at the level of the ternary relation $\to_{R}$. The answer is, in general, in the negative. Nonetheless, an affirmative answer can be given if we drop all structural rules but weakening. This suggests that it is worth considering an alternative, structurally-free definition of $\Quantale$-TRS{s}. We will follow this path in \autoref{sect:beyond-non-expansive-systems}, where we shall define graded systems using weakening as the only structural rule. \end{remark} Let us now see some examples of $\Quantale$-TRS{s}, focusing, in particular, on the systems presented in \autoref{section:long-intro}. \begin{example} Traditional term rewriting systems are nothing but $\maketrs{\mathbbm{2}}\text{s}$. \end{example} \begin{example} All the examples seen in \autoref{section:long-intro} are $\maketrs{\mathbb{L}}\text{s}$. In particular, systems \begin{align*} \mathcal{N} &= (\Sigma_{\mathcal{N}}, \mapsto_{N}) & \mathcal{B} &= (\Sigma_{\mathbf{\mathcal{B}}}, \mapsto_{B}) & \bck &= (\Sigma_{\bck}, \mapsto_{K}) & \mathcal{T} &= (\Sigma_{\mathcal{T}}, \mapsto_{T}) \end{align*} as well as combinations thereof (e.g. system $\BCK_{\BA} = (\Sigma_{\bck} \cup \Sigma_{\mathcal{B}}, \mapsto_{\combreloneB})$) are all $\maketrs{\mathbb{L}}\text{s}$. \end{example} \begin{example} Any quantitative{} string rewriting system can be modelled as a $\Quantale$-TRS{}. In particular, all quantitative{} string rewriting systems of \autoref{sect:string-rewriting} can be given as $\maketrs{\mathbb{L}}$. To do so, we modify the signature seen in \autoref{sect:string-rewriting} by taking $\Sigma_{\mathcal{M}} \triangleq \{\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}, \mathtt{nil}\}$, where $\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}$ are \emph{unary} function symbols and $\mathtt{nil}$ is a constant acting as the empty string. Thus, for instance, we model the string $\texttt{A}\texttt{G}\texttt{T}\texttt{C}$ as the term $\texttt{A}(\texttt{G}(\texttt{T}(\texttt{C}(\mathtt{nil}))))$. Next, we adapt the rewriting relation previously introduced to act on terms (rather than strings). We thus obtain the rewrite $\mathbb{L}$-relation $\mapsto_{M}$ defined as follows, where $b,c \in \{\texttt{A}, \texttt{C}, \texttt{G}, \texttt{T}\}$ and $b \neq c$ in the third rule.
{ \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*2)/3,colframe=black,colback=black!0!white,arc=0mm] $$ x \qstepto{1}_{M} b(x) \qquad b(x) \qstepto{1}_{M} x \qquad b(x) \qstepto{1}_{M} c(x) $$ \end{tcolorbox} } \noindent As seen in \autoref{sect:string-rewriting}, (the $\Quantale$-TRS{} version of) system $\mathcal{M} = (\Sigma_{\mathcal{M}}, \mapsto_{M})$ operationally describes the Levenshtein distance~\cite{string-algorithms} between DNA sequences. Further edit distances on DNA molecules can be easily obtained modifying system $\mathcal{M}$. For instance, considering the third defining rule of $\mapsto_{M}$ (i.e. $b(x) \qstepto{1}_{M} c(x)$) only, we obtain an operational description of the Hamming distance~\cite{string-algorithms}, whereas the following system gives the Eigen–McCaskill–Schuster distance (one obtains the Watson–Crick distance similarly) \cite{encyclopedia-of-distances}. { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*2)/3,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \begin{align*} \texttt{A}(x) &\qstepto{1} \texttt{C}(x) & \texttt{G}(x) &\qstepto{1} \texttt{T}(x) & \texttt{A}(x) &\qstepto{1} \texttt{T}(x) \\ \texttt{A}(x) &\qstepto{0} \texttt{G}(x) & \texttt{G}(x)&\qstepto{1} \texttt{C}(x) & \texttt{C}(x) &\qstepto{0} \texttt{T}(x) \end{align*} \end{tcolorbox} } \end{example} \begin{example} \label{ex:quantitative-semilattices} Consider the signature $\Sigma_{\mathcal{L}}$ containing a single binary operation $\cup$ for nondeterministic choice and let $\mapsto_{L}$ be the following rewriting relation. { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*3)/4,colframe=black,colback=black!0!white,arc=0mm] \vspace{-0.3cm} \begin{align*} x \qstepto{0}_{L} x \cup x \qquad (x \cup y) \cup z \qstepto{0}_{L} x \cup (y \cup z) \qquad (x \cup y) \qstepto{0}_{L} (y \cup x) \end{align*} \end{tcolorbox} } As it is, this rewriting system is not that interesting. The key point here is the choice of the quantale used for distances. Contrary to previous examples, here we consider the \emph{strong} Lawvere quantale $\lawvere^{\max}$. The choice of this quantale largely impacts the definition of $\to_{L}$, which now gives a form of non-expansiveness of $\cup$ reflecting the \emph{ultrametric}~\cite{steen/CounterexamplesTopology/1995} structure of $\lawvere^{\max}$: \[ \infer{x \cup y \qreduce{\max(\varepsilon,\delta)} x' \cup y'} {x \qreduce{\varepsilon} x' & y \qreduce{\delta} y'} \] The convertibility distance $\makedistance{L}$ gives the so-called theory of quantitative{} semilattices~\cite{plotkin-quantitative-algebras-2016} and axiomatises the Hausdorff distance between sets~\cite{Munkres/Topology/2000}. Ultrametricity of $\lawvere^{\max}$ ultimately relies on its tensor product being idempotent, i.e. satisfying the law $\varepsilon \otimes \varepsilon = \varepsilon$. In the case of $\lawvere^{\max}$, this law trivially holds as the tensor coincides with the meet. When the underlying quantale is idempotent, quantitative{} and metric reasoning becomes similar to traditional, Boolean reasoning, up to the point that the linearity assumption mentioned in \autoref{sect:combinators-intro} (which we shall formally rely on in the next section) is not necessary to avoid distance trivialisation and to ensure confluence properties of systems.
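For instance (the numbers below are purely illustrative), from $x \qreduce{0.3} x'$ and $y \qreduce{0.1} y'$ the rule above yields $$ x \cup y \qreduce{\max(0.3,\,0.1)\,=\,0.3} x' \cup y', $$ i.e. the distance of the compound reduction is the worst of the distances of its components, rather than their sum, as it would be with the additive tensor of the plain Lawvere quantale.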
Finally, we can combine system $\mathcal{L} = (\Sigma_{\mathcal{L}}, \mapsto_{L})$ with, e.g., system $\bck$, this way obtaining a quantitative{} system for nondeterministic affine combinators. \end{example} We summarise the examples of $\Quantale$-TRS{s} seen so far in \autoref{table:examples-qtrs-1} (we will see further examples of $\Quantale$-TRS{s} in \autoref{sect:beyond-non-expansive-systems}). The rest of this section is dedicated to the development of a general theory of $\Quantale$-TRS{s} and to instantiate it to infer nice computational properties of systems in \autoref{table:examples-qtrs-1}. In particular, we shall prove (by means of general techniques) confluence of all of them. Before diving into that, however, it is useful to spend few words on quantitative{} equational theories. \begin{table*}[htbp] \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{System} & \textbf{Objects/Name} & \textbf{Distance Induced} \\ \hline $\mathcal{N} = (\Sigma_{\mathcal{N}}, \mapsto_{N})$ & Natural Numbers & Euclidean Distance \\ $\mathcal{B} = (\Sigma_{\mathbf{\mathcal{B}}}, \mapsto_{B})$ & Multi-distributions/Barycentric algebras & Total Variation distance \\ $\BCK_{\NATS} = (\Sigma_{\BCK_{\NATS}}, \mapsto_{\combrelone_\natrelone})$ & Affine combinators with Arithmetic & Higher-order Euclidean Distance \\ $\mathcal{T} = (\Sigma_{\mathcal{T}}, \mapsto_{T})$ & Ticking & Cost distance \\ $\mathcal{M} = (\Sigma_{\mathcal{M}}, \mapsto_{M})$ & DNA molecules & Edit distance \\ $\mathcal{L} = (\Sigma_{\mathcal{L}}, \mapsto_{L})$ & Quantitative (semi)lattices & Hausdorff distance \\ \hline \end{tabular} \caption{Main Examples of \emph{non-expansive} $\Quantale$-TRS{s}.} \label{table:examples-qtrs-1} \end{table*} \subsection{Quantitative Equational Theories} In this section, we formally introduce quantitative{} equational theories and their connection with $\Quantale$-TRS s. Approaching the former in light of the latter allows us to highlights some operationally questionable design choices in the definition of a quantitative{} equational theories \begin{definition} \label{def:quantitative-algebras} A quantitative{} equational theory is a pair $\mathcal{E}= (\Sigma, \approx_E)$, where $\Sigma$ is a signature and $\approx_E$ is a $\mathbb{\quantale}$-ternary relation over $\Sigma$-terms. The $\mathbb{\quantale}$-ternary (equality) relation $=_{E}$ generated by $\approx_E$ is defined by the rules in \autoref{figure:qalgebra}. 
\end{definition} \begin{figure} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*5)/6,colframe=black,colback=black!0!white,arc=0mm] \[ \infer{\varepsilon \Vdash e =_{E} f}{\varepsilon \Vdash e \approx_{E} f} \qquad \infer{\mathsfit{k} \Vdash e =_{E} e}{} \qquad \infer{\varepsilon \Vdash f =_{E} e}{\varepsilon \Vdash e =_{E} f} \qquad \infer{\varepsilon \otimes \delta \Vdash e =_{E} g} {\varepsilon \Vdash e =_{E} f & \delta \Vdash f =_{E} g} \] \[ \infer{\bigotimes_i \varepsilon_i \Vdash f(e_1, \hdots, e_n) =_{E} f(f_1, \hdots, f_n)} {\varepsilon_1 \Vdash e_1 =_{E} f_1 & \cdots & \varepsilon_n \Vdash e_n =_{E} f_n} \qquad \infer{\varepsilon \Vdash \subst{e}{\sigma} =_{E} \subst{f}{\sigma}} {\varepsilon \Vdash e =_{E} f} \] \vspace{-0.05cm} \[ \infer{\delta \Vdash e =_{E} f} {\varepsilon \Vdash e =_{E} f & \delta \leq \varepsilon} \qquad \infer{\bigvee \varepsilon_i \Vdash e =_{E} f} {\varepsilon_1 \Vdash e =_{E} f & \hdots & \varepsilon_n \Vdash e =_{E} f} \qquad \infer{\varepsilon \Vdash e =_{E} f} {\forall \delta \ll \varepsilon.\ \delta \Vdash e =_{E} f} \] \end{tcolorbox} } \caption{Quantitative Equational Theory of $=_{E}$} \label{figure:qalgebra} \end{figure} The first block of rules in \autoref{figure:qalgebra} states that $=_{E}$ is a quantitative{} \emph{equivalence} relation containing $\approx_{E}$, whereas the last block contains essentially the same structural rules defining a $\Quantale$-TRS. The second block of rules, instead, states that function symbols and substitution behave as \emph{non-expansive functions}. In fact, defining the $\mathbb{\quantale}$-relation $E$ by $$ E(e, f) \triangleq \bigvee \{\varepsilon \mid \varepsilon \Vdash e =_{E} f\} $$ we see that $E$ is reflexive, symmetric, and transitive. By regarding any $n$-ary function symbol $f$ as a function $f: \terms{\Sigma}{X}^n \to \terms{\Sigma}{X}$, we also see that $$ E(e_1, f_1) \otimes \cdots \otimes E(e_n, f_n) \leq E(f(e_1, \hdots, e_n), f(f_1, \hdots, f_n)), $$ meaning that function symbols indeed behave as non-expansive functions. \begin{remark} \label{rem:idempotent-equational-theories} Sometimes~\cite{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017}, quantitative{} equational theories are defined using the following rule to deal with function symbols. \[ \infer{\bigwedge_i \varepsilon_i \Vdash f(e_1, \hdots, e_n) =_{E} f(f_1, \hdots, f_n)} {\varepsilon_1 \Vdash e_1 =_{E} f_1 & \cdots & \varepsilon_n \Vdash e_n =_{E} f_n} \] Semantically, that means requiring function symbols to behave as \emph{strongly non-expansive} maps: $$ E(e_1, f_1) \wedge \cdots \wedge E(e_n, f_n) \leq E(f(e_1, \hdots, e_n), f(f_1, \hdots, f_n)). $$ In the case of the Lawvere quantale, for instance, we require the distance between two function applications to bound the \emph{maximum} distance between their arguments, rather than by the \emph{sum} of such distances. Strong non-expansiveness, however, does not properly interact with transitivity, which is based on the tensor, rather than the meet, of the quantale. Consider terms $t_1, t_2, s_1, s_2$ with $\varepsilon \Vdash t_1 =_{E} s_1$ and $\delta \Vdash t_2 =_{E} s_2$. 
For a binary function symbol $f$, we can consider the following two derivations: \[ \infer{\varepsilon \otimes \delta \Vdash f(t_1, t_2) =_{E} f(s_1, s_2)} { \infer{\varepsilon \Vdash f(t_1, t_2) =_{E} f(s_1,t_2)} {\varepsilon \Vdash t_1 =_{E} s_1 & \mathsfit{k} \Vdash t_2 =_{E} t_2 } & \infer{\delta \Vdash f(s_1, t_2) =_{E} f(s_1,s_2)} {\mathsfit{k} \Vdash s_1 =_{E} s_1 & \delta \Vdash t_2 =_{E} s_2 } } \qquad \quad \infer{\varepsilon \wedge \delta \Vdash f(t_1, t_2) =_{E} f(s_1, s_2)} { \varepsilon \Vdash t_1 =_{E} s_1 & \delta \Vdash t_2 =_{E} s_2 } \] From a rewriting perspective, these two derivations show that rewriting $t_1$ into $s_1$ and $t_2$ into $s_2$ inside $f$ \emph{sequentially} gives a different distance than performing the same rewriting \emph{in parallel}. This is not surprising: the non-expansiveness rule for function symbols is defined ultimately relying on the idempotent quantale $(\Omega, \leq, \wedge, \top)$, whereas transitivity relies on $(\Omega, \leq, \otimes, \rotatebox[origin=c]{180}{$\Bot$})$. Harmony is restored by taking the following transitivity rule, which amounts to instantiating \autoref{def:quantitative-algebras} with the idempotent quantale $(\Omega, \leq, \wedge, \rotatebox[origin=c]{180}{$\Bot$})$. \[ \infer{\varepsilon \wedge \delta \Vdash e =_{E} g} {\varepsilon \Vdash e =_{E} f & \delta \Vdash f =_{E} g} \] \end{remark} Following \autoref{rem:idempotent-equational-theories}, we introduce some further terminology and refer to equational theories over \emph{idempotent} quantales as \emph{additive} or \emph{idempotent} quantitative{} equational theories. We employ a similar terminology for quantitative{} rewriting systems. \begin{example} \begin{enumerate} \item Since the Boolean quantale is obviously idempotent, traditional rewriting systems and equational theories are additive (quantitative) systems. \item Since the strong Lawvere quantale is idempotent, system $\mathcal{L}$ of \autoref{ex:quantitative-semilattices} is additive. Its associated quantitative{} equational theory is the one of quantitative{} semilattices \cite{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017}, which is additive too. \item Consider the powerset quantale $\mathcal{P}(\{\texttt{A},\texttt{C},\texttt{G},\texttt{T}\})$ and the system $b \qstepto{\{b\}} c$, for $b, c \in \{\texttt{A},\texttt{C},\texttt{G},\texttt{T}\}$ and $b \neq c$. This way, we obtain a qualitative distance between molecules recording which bases change between two molecules. \item The open sets of a topological space form a frame~\cite{Vickers/Topology-via-logic} and thus an idempotent quantale. Taking open sets as distances between objects, we can model effectively measurable differences or approximated ones. This way, we stipulate that measuring can be done with a limited precision only. Such systems are indeed additive. \end{enumerate} \end{example} Any quantitative{} equational theory $(\Sigma, \approx_E)$ induces a $\Quantale$-TRS{} whose rewriting rules are given by the equations of $\approx_{E}$ (actually, it is preferable to consider a subset thereof obtained by giving equations an appropriate orientation), so that we can use $\Quantale$-TRS{s} to study properties of quantitative{} equational reasoning, the main ones being related to confluence, termination, and (therefore) metric word problems.
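To make the metric word problem concrete on one of our running systems: as recalled above, (the $\Quantale$-TRS{} version of) system $\mathcal{M}$ operationally describes the Levenshtein distance between DNA sequences, and on concrete molecules that distance can be computed with the textbook dynamic program below. The sketch is purely illustrative (it exploits the specific shape of $\mathcal{M}$, not the general theory of $\Quantale$-TRS{s}).
\begin{verbatim}
# Sketch: Levenshtein distance between two DNA strings, i.e. the distance
# induced by system M (insertion, deletion, substitution, each at cost 1).
def levenshtein(s, t):
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, start=1):
        curr = [i]
        for j, b in enumerate(t, start=1):
            curr.append(min(prev[j] + 1,               # delete a
                            curr[j - 1] + 1,           # insert b
                            prev[j - 1] + (a != b)))   # substitute a by b
        prev = curr
    return prev[-1]

assert levenshtein("AGTC", "ACTC") == 1
assert levenshtein("AGTC", "GTC") == 1
\end{verbatim}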
In particular, given a quantitative{} equational theory $(\Sigma, \approx_E)$, we can define a $\Quantale$-TRS{} $(\Sigma, \mapsto_{R})$ such that $E = \makedistance{R}$. Consequently, by \autoref{prop:church-rosser}, if $R$ is confluent, then we recover the equational distance between terms by looking at their common reducts. If, additionally, the system is terminating (i.e. SN), then we can approximate such a distance by looking at normal forms only: this way, we also obtain decidability of the reachability metric word problem for $(\Sigma, \approx_E)$. It is thus desirable to develop handy techniques to prove confluence and termination of $\Quantale$-TRS{s}. \subsection{Confluence and Critical Pairs, Part I} \label{sect:qtrs-confluence} From (quantitative) Newman's Lemma (\autoref{prop:newmans-lemma}), we know that to prove confluence of a terminating $\Quantale$-TRS{} we only need to verify its \emph{local} confluence. Proving local confluence of a $\Quantale$-TRS, however, can be difficult, as reductions may happen inside arbitrary contexts and on arbitrary instances of reduction rules. It is thus natural to ask whether we can prove local confluence \emph{locally}, i.e. by looking at \emph{ground rewriting} only. In this section, we show that local confluence of a \emph{linear} $\Quantale$-TRS{} follows directly from local confluence of its \emph{critical pairs}~\cite{Huet80,terese}. Linearity, as we have already discussed in \autoref{section:long-intro}, is a crucial property in quantitative{} and metric reasoning: forcing non-expansiveness on non-linear systems often lets distances trivialise~\cite{CrubilleDalLago/ESOP/2014,CrubilleDalLago/ESOP/2017,DBLP:phd/basesearch/Gavazzo19,Gavazzo/LICS/2018}, this way collapsing quantitative{} equational deduction to traditional, Boolean reasoning. On rewriting systems, non-linearity leads to further undesired consequences, as shown by the following example. \begin{example} \label{ex:linearity-for-confluence} Consider the signature $\Sigma \triangleq \{f, e, i\}$ with $f$ a binary function symbol and $e, i$ constants. Let $\mapsto_{R}$ be given by the following reduction rules over the Lawvere quantale: \begin{align*} f(x,x) &\qstepto{0} x \\ e &\qstepto{1} i. \end{align*} It is easy to see that the system is confluent in the traditional, non-quantitative sense. Taking quantitative information into account, however, we have $e \stackrel{0}{\leftarrow} f(e, e) \qreduce{1} f(i, e)$, and thus $R(f(e, e),e) = 0$, $R(f(e, e), f(i, e)) = 1$. To close the diagram given by $e \stackrel{0}{\leftarrow} f(e, e) \qreduce{1} f(i, e)$, we need to reduce $e$ \emph{twice}: $$ e \qreduce{1} i \stackrel{0}{\leftarrow} f(i, i) \stackrel{1}{\leftarrow} f(i, e). $$ This gives $R^*(e, i) = 1$ and $R^*(f(i, e), i) = 1$, so that the peak of weight $0 \otimes 1 = 1$ can only be closed with overall weight $1 \otimes 1 = 2$, this way breaking (local) confluence. \end{example} Let us now recall the notion of a critical pair and refine the well-known critical pair lemma~\cite{Huet80} to a quantitative{} setting. { \renewcommand{a}{c_1} \renewcommand{\bm{s}}{c_2} \renewcommand{b}{d_1} \renewcommand{\bm{d}}{d_2} \begin{definition} Let $(\Sigma, \mapsto_{R})$ be a $\Quantale$-TRS. \begin{enumerate} \item Let $a \qstepto{\varepsilon} b$, $\bm{s} \qstepto{\delta} \bm{d}$ be renamings of rewrite rules without common variables.
Then $a \qstepto{\varepsilon} b$, $\bm{s} \qstepto{\delta} \bm{d}$ overlap at position $p$ if: \begin{itemize} \item $p$ is a function symbol position of $\bm{s}$; \item $a$ and $\subtermpos{\bm{s}}{p}$ are unifiable; \item If $p=\lambda$, then the two rules are not variants (i.e. they cannot be obtained one from the other by variable renaming). \end{itemize} \item Let $a\qstepto{\varepsilon}b$, $\bm{s}\qstepto{\delta}\bm{d}$ be overlapping at position $p$ and let $\sigma$ be the mgu of $\subtermpos{\bm{s}}{p}$ and $a$. Then the term $\subst{\bm{s}}{\sigma}$ can be rewritten in two ways: \begin{align*} \subst{\bm{d}}{\sigma} \stackrel{\delta}{\leftarrow} \subst{\bm{s}}{\sigma} \qreduce{\varepsilon} \subst{\bm{s}}{\sigma}[\subst{b}{\sigma}]_{p} \end{align*} We call the triple $(a \qstepto{\varepsilon} b, p, \bm{s} \qstepto{\delta} \bm{d})$ a \emph{critical overlap} and the pair $(\subst{\bm{s}}{\sigma}[\subst{b}{\sigma}]_p,\subst{\bm{d}}{\sigma} )$ a \emph{critical pair}. \end{enumerate} \end{definition} } \begin{example} \begin{enumerate} \item The \maketrs{$\mathbb{L}$} of \autoref{ex:linearity-for-confluence} has no critical pair since there is no \emph{function symbol position} at which (renamings of) the rules $f(x,x) \qstepto{0} x$ and $e \qstepto{1} i$ overlap. \item Consider system $\mathcal{B}$ of Barycentric algebras and the (no-common-variable renamings of) commutativity and associativity rules: \begin{align*} x' \barplus{\epsilon_{1}} y' &\qstepto{0}_{\delta} y' \barplus{1 - \epsilon_{1}} x' \\ (x \barplus{\epsilon_1} y) \barplus{\epsilon_2} z &\qstepto{0}_{\delta} x \barplus{\epsilon_1 \epsilon_2} (y \barplus{\frac{\epsilon_2 - \epsilon_1\epsilon_2}{1 - \epsilon_1\epsilon_2}} z) \end{align*} with $\epsilon_1, \epsilon_2 \in (0,1)$. Then, we see that the substitution $\sigma$ mapping the variable $x'$ to $x$ and $y'$ to $y$ is the $\emph{mgu}$ of $\subtermpos {\big((x \barplus{\epsilon_1} y) \barplus{\epsilon_2} z\big)}{1}$ and $x' \barplus{\epsilon_1} y'$. Consequently, the triple and the pair \begin{align*} \big(x' \barplus{\epsilon_1} y' \qstepto{0}_{\delta} y' \barplus{1 - \epsilon_1} x',\ &1,\ (x \barplus{\epsilon_1} y) \barplus{\epsilon_2} z \qstepto{0}_{\delta} x \barplus{\epsilon_1 \epsilon_2} (y \barplus{\frac{\epsilon_2 - \epsilon_1\epsilon_2}{1 - \epsilon_1\epsilon_2}} z)\big) \\ \big((y\barplus{1-\epsilon_{1}} x)\barplus{\epsilon_{2}} z,&\ x\barplus{\epsilon_{1}\epsilon_{2}} (y\barplus{\frac{\epsilon_{2}-\epsilon_{1}\epsilon_{2}}{1-\epsilon_{1}\epsilon_{2}}} z)\big) \end{align*} form a \emph{critical overlap} and a \emph{critical pair}, respectively. Diagrammatically, we have a critical pair defined by the following peak: \begin{figure}[H] \centering \begin{tikzpicture}[thick,scale=0.80, every node/.style={transform shape, sibling distance=8cm},level 1/.style={sibling distance=45mm},level 2/.style={transform shape,sibling distance=45mm} ] \node (left) at (-4,0) {$(x\barplus{\epsilon_{1}}y)\barplus{\epsilon_{2}}z$} child {node (19){$(y\barplus{1-\epsilon_{1}} x)\barplus{\epsilon_{2}} z$} edge from parent [->] node [left] {0}} child { node (29) {$x\barplus{\epsilon_{1}\epsilon_{2}} (y\barplus{\frac{\epsilon_{2}-\epsilon_{1}\epsilon_{2}}{1-\epsilon_{1}\epsilon_{2}}} z)$} edge from parent [->] node [right] {0}}; \end{tikzpicture} \end{figure} \end{enumerate} \end{example} \begin{notation} Given a $\Quantale$-TRS{} $\mathcal{R} = (\Sigma, \mapsto_{R})$, we denote by $CP(\mathcal{R})$ the collection of its critical pairs.
Moreover, we write $f_1 \qreduceleft{\varepsilon_1} e \qreduce{\varepsilon_2} f_2$ if $e \qreduce{\varepsilon_1} f_1$ and $e \qreduce{\varepsilon_2} f_2$. \end{notation} We are now ready to prove the quantitative{} refinement of the so-called Critical Pair Lemma~\cite{Huet80}. In its traditional version, such a lemma states that to prove local confluence of a rewriting relation it is enough to prove its local confluence on critical pairs only. From Newman's Lemma it thus follows that if a rewriting relation is terminating and locally confluent on critical pairs, then it is confluent. When we move to quantitative{} rewriting, the Critical Pair Lemma needs a further assumption, namely \emph{linearity}. { \renewcommand{a}{a_1} \renewcommand{b}{b_1} \renewcommand{\bm{s}}{a_2} \renewcommand{\bm{d}}{b_2} \begin{lemma}[Critical Pair] \label{lemma:critical-pair} Let $\mathcal{R} = (\Sigma, \mapsto_{\vrelone})$ be a \emph{linear} $\Quantale$-TRS{}. If $R$ is locally confluent on all critical pairs of $\mathcal{R}$, then it is locally confluent. \end{lemma} \begin{proof} We have to show $\dual{R};R \leq R^{*};\dual{{R^{*}}}$ given that $(\dual{R};R )(t,s) \leq (R^{*};\dual{{R^{*}}})(t,s)$, for any $(t,s)\in CP(\mathcal{R})$. Pointwise, we need to prove \begin{align*} \bigvee_{e}R (e, f_{1})\otimes R (e, f_{2})\leq\bigvee_{g} R^{*}(f_{1}, g)\otimes R^{*}(f_{2}, g) \end{align*} for arbitrary terms $f_{1}$ and $f_{2}$. Since $R(e,f) = \bigvee \{\varepsilon \mid \mrelto{\varepsilon}{e}{f}\}$ and the join distributes over the tensor, it is enough to show that for any local peak $f_{1}\qreduceleft{\varepsilon_1}e \qreduce{\varepsilon_2} f_{2}$, we have \begin{align*} \varepsilon_1 \otimes \varepsilon_2 \leq\bigvee_{g} R^{*}(f_{1}, g)\otimes R^{*}(f_{2}, g). \end{align*} We proceed by structural induction on $\mrelto{\varepsilon_1}{e}{f_{1}}$ and $\mrelto{\varepsilon_2}{e}{f_{2}}$ (see \autoref{figure:qtrs}). We begin with the structural rules, as those are easy. Suppose that one between $\mrelto{\varepsilon_1}{e}{f_{1}}$ and $\mrelto{\varepsilon_2}{e}{f_{2}}$ is obtained by one of the structural rules in \autoref{figure:qtrs}. Without loss of generality, we shall assume that this is the case for $\mrelto{\varepsilon_1}{e}{f_{1}}$. \begin{itemize} \item Suppose that $\mrelto{\varepsilon_1}{e}{f_{1}}$ is obtained by quantitative weakening, so that we have: \[ \infer{\mrelto{\varepsilon_1}{e}{f_{1}}} {\mrelto{\delta}{e}{f_{1}} & \varepsilon_1 \leq \delta} \] By induction hypothesis, we know that $ \delta \otimes \varepsilon_2 \leq \bigvee_{g} R^{*}(f_{1}, g)\otimes R^{*}(f_{2}, g), $ so that we conclude (recall that $\mathbb{\quantale}$ is integral) $$\varepsilon_1 \otimes \varepsilon_2 \leq \delta \otimes \varepsilon_2 \leq \bigvee_{g} R^{*}(f_{1}, g)\otimes R^{*}(f_{2}, g)$$. \item Suppose that $\mrelto{\varepsilon_1}{e}{f_{1}}$ is obtained by closure under finite join, so that we have $\varepsilon_1 = \bigvee_i \delta_i$ for some $\delta_1, \hdots, \delta_n$ and \[ \infer{\mrelto{\bigvee_i \delta_i }{e}{f_{1}}} { \mrelto{\delta_1}{e}{f_1} & \hdots & \mrelto{\delta_n}{e}{f_{1}}}. \] By induction hypothesis, we know that $\forall i \leq n.\ \delta_i \otimes \varepsilon_2 \leq \bigvee_{g} R^{*}(f_{1}, g)\otimes R^{*}(f_{2}, g)$ which gives, by the universal property of joins, \[ \bigvee_i(\delta_i \otimes \varepsilon_2) \leq \bigvee_{g} R^{*}(f_{1}, g)\otimes R^{*}(f_{2}, g) \] and thus the desired thesis, since $\bigvee_i(\delta_i \otimes \varepsilon_2) = (\bigvee_i \delta_i) \otimes \varepsilon_2$. 
\item Suppose that $\mrelto{\varepsilon_1}{e}{f_{1}}$ is obtained by the Archimedean property, so that we have: \[ \infer{ \mrelto{\varepsilon_1}{e}{f_{1}}} {\forall \delta_i\ll \varepsilon_1.\ \mrelto{\delta_i}{e}{f_{1}}} \] By induction hypothesis, we have $\delta_i \otimes \varepsilon_2 \leq \bigvee_{g}R^{*}(f_{1}, g) \otimes R^{*}(f_{2}, g)$, for any $\delta_i \ll \varepsilon_1$, and thus $$ \bigvee_{\delta_i \ll \varepsilon_1} \delta_i \otimes \varepsilon_2 \leq \bigvee_{g}R^{*}(f_{1}, g) \otimes R^{*}(f_{2}, g). $$ We conclude the thesis since $\varepsilon_1 = \bigvee_{\delta_i \ll \varepsilon_1} \delta_i$. \end{itemize} Let us now move to the main case, namely the one in which $f_{1}$ and $f_{2}$ are obtained from $e$ by closure under substitution and context of two rewriting rules. More precisely, suppose that $\mrelto{\varepsilon_1}{e}{f_{1}}$ is obtained by applying rule $\mrelstepto{\varepsilon_1}{a}{b}$ at position $p_{1}$ and $\mrelto{\varepsilon_2}{e}{f_{2}}$ is obtained by applying rule $\mrelstepto{\varepsilon_2}{\bm{s}}{\bm{d}}$ at position $p_{2}$. We also assume that the two rules have disjoint variables. Thus, we have that $\subtermpos{e}{p_{1}}=\subst{a}{\sigma}$, $\subtermpos{e}{p_{2}}=\subst{\bm{s}}{\sigma}$, $f_{1}=e[\subst{b}{\sigma}]_{p_{1}}$ and $f_{2}=e[\subst{\bm{d}}{\sigma}]_{p_{2}}$. We proceed by cases, depending on the relationship between $p_1$ and $p_2$. \begin{itemize} \item Suppose $p_{1}\parallel p_{2}$. Then, $e=e[\subst{a}{\sigma}]_{p_1}[\subst{\bm{s}}{\sigma}]_{p_2}$, $f_{1}=e[\subst{b}{\sigma}]_{p_1}[\subst{\bm{s}}{\sigma}]_{p_2}$ and $f_{2}=e[\subst{a}{\sigma}]_{p_1}[\subst{\bm{d}}{\sigma}]_{p_2}$. Applying $\mrelstepto{\varepsilon_2}{\bm{s}}{\bm{d}}$ to $\subtermpos{{f_{1}}}{{p_{2}}}$ and $\mrelstepto{\varepsilon_1}{a}{b}$ to $\subtermpos{{f_{2}}}{{p_{1}}}$, we have that $f_1\qreduce{\varepsilon_2} e[\subst{b}{\sigma}]_{p_1}[\subst{\bm{d}}{\sigma}]_{p_2} \qreduceleft{\varepsilon_1} f_{2}$. Consequently, $$ \varepsilon_1 \otimes \varepsilon_2 \leq R (f_{1},e[\subst{b}{\sigma}]_{p_1}[\subst{\bm{d}}{\sigma}]_{p_2}) \otimes R(f_{2},e[\subst{b}{\sigma}]_{p_1}[\subst{\bm{d}}{\sigma}]_{p_2}) \leq \bigvee_{g}R^{*}(f_{1}, g) \otimes R^{*}(f_{2},g). $$ \item Without loss of generality, suppose that $p_{1}\leq p_{2}$. Then, there is $q$ such that $p_{2}=p_{1}q$. We distinguish two cases: \begin{enumerate} \item If $\mrelstepto{\varepsilon}{a}{b}$ and $\mrelstepto{\delta}{\bm{s}}{\bm{d}}$ do not overlap at position $q$, then we have that either $\mrelstepto{\varepsilon}{a}{b}$ and $\mrelstepto{\delta}{\bm{s}}{\bm{d}}$ are variants and $q=\lambda$, or $p_{2}=p_{1}q_{1}q_{2}$ with $\subtermpos{a}{{q_{1}}}$ a variable, say $x$. In the latter case, we have that $f_1=e [\subst{b}{\sigma}]_{{p_1}}=e [\subst{b}{\sigma}[\subst{\bm{s}}{\sigma}]_{{q_1}{q_2}}]_{{p_1}}$, whereas $f_2= e[\subst{a}{\sigma}[\subst{\bm{d}}{\sigma}]_{{q_1}{q_2}}]_{{p_1}}$. Consider the substitution $\tau$ mapping $x$ to $\subst{b}{\sigma}[\subst{x}{\sigma}]_{q_2}$. Since $a$ is linear, applying $\mrelstepto{\delta}{\bm{s}}{\bm{d}}$ to $\subtermpos{{f_{1}}}{{p_{2}}}$ and $\mrelstepto{\varepsilon}{a}{b}$ to $\subtermpos{f_{2}}{{p_{1}}}$ with substitution $\tau$, we have that $\mrelto{\delta}{f_1}{g}$ and $\mrelto{\varepsilon}{f_2}{g}$, with $g=e[\subst{b}{\sigma}[\subst{\bm{d}}{\sigma}]_{{q_1}{q_2}}]_{p_1}$. Distance analysis proceeds as in the first case. \\ If instead, $\mrelstepto{\varepsilon}{a}{b}$ and $\mrelstepto{\delta}{\bm{s}}{\bm{d}}$ are variants with $q=\lambda$, i.e.
$p_{1}=p_{2}$, then necessarily $f_{1}=f_{2}$, so that $\mathsfit{k} \leq R^{*}(f_{1}, f_{2})$ and $\mathsfit{k} \leq R^{*}(f_{2}, f_{2})$. In this case, we have that $$\varepsilon\otimes\delta \leq \mathsfit{k} \otimes \mathsfit{k} \leq R^{*}(f_{1},f_{2})\otimes R^{*}(f_{2},f_{2}) \leq \bigvee_{g}R^{*}(f_{1}, g)\otimes R^{*}(f_{2},g). $$ \item If $\mrelstepto{\varepsilon}{a}{b}$ and $\mrelstepto{\delta}{\bm{s}}{\bm{d}}$ overlap at position $q$, then $\subtermpos{\subst{a}{\sigma}}{q}=\subtermpos{(\subtermpos{e}{p_1})}{q} =\subtermpos{e}{p_2}=\subst{\bm{s}}{\sigma}$. That is, $\sigma$ is a unifier of $\subtermpos{a}{q}$ and $\bm{s}$. Let $\tau$ be their most general unifier, so that $\sigma= \rho \circ \tau $, for some substitution $\rho$. Then $(\subst{a}{\tau}[\subst{\bm{d}}{\tau}]_{q},\subst{b}{\tau} )$ is a critical pair. By hypothesis, we know that $$(\dual{R};R)(\subst{a}{\tau}[\subst{\bm{d}}{\tau}]_{q},\subst{b}{\tau} ) \leq \bigvee_{g} R^{*}(\subst{a}{\tau}[\subst{\bm{d}}{\tau}]_{q}, g)\otimes R^{*}(\subst{b}{\tau}, g),$$ and thus $\varepsilon\otimes\delta \leq \bigvee_{g} R^{*}(\subst{a}{\tau}[\subst{\bm{d}}{\tau}]_{q}, g)\otimes R^{*}(\subst{b}{\tau}, g)$. To prove the thesis, it is sufficient to prove $$ \bigvee_{g} R^{*}(\subst{a}{\tau}[\subst{\bm{d}}{\tau}]_{q}, g)\otimes R^{*}(\subst{b}{\tau}, g) \leq \bigvee_{v} R^*(f_1, v) \otimes R^*(f_2, v). $$ As usual, it is enough to show that for any $g$ such that $\mrelto{\eta}{\subst{a}{\tau}[\subst{\bm{d}}{\tau}]_q}{g}$ and $\mrelto{\iota}{\subst{b}{\tau}}{g}$ we have $\eta \otimes \iota \leq \bigvee_{v} R^*(f_1, v) \otimes R^*(f_2, v)$. Fixing $g$ as above, we have: \begin{align*} f_{1} &=e[\subst{b}{\sigma}]_{p_{1}} =e[b^{\tau\rho}]_{p_{1}} \\ f_{2} &=e[ \subst{\bm{d}}{\sigma}]_{p_{2}} =e[ a^{\sigma} [\subst {\bm{d}} {\sigma} ]_{q} ]_{p_{1}} =e[a^{\tau\rho} [\bm{d}^{\tau\rho} ]_{q} ]_{p_{1}} \end{align*} Therefore, by the very definition of $\to$, we have: $$ f_{2}=e[\subst{(\subst{a}{\tau}[\subst{\bm{d}}{\tau}]_{q})}{\rho}]_{p_{1}} \qreduce{\eta} e[\subst{g}{\rho}]_{p_1} \qreduceleft{\iota} e[b^{\tau\rho}]_{p_{1}} =f_{1}, $$ and we are done. \end{enumerate} \end{itemize} \end{proof} } \begin{theorem} \label{theorem:critical-pair-theorem} Any \emph{linear} and terminating $\Quantale$-TRS{} locally confluent on its critical pairs is confluent. \end{theorem} \begin{proof} It directly follows from \autoref{prop:newmans-lemma} and \autoref{lemma:critical-pair}. \end{proof} \begin{remark} \autoref{ex:linearity-for-confluence} shows that, contrary to what happens in the traditional case, the linearity assumption is indeed necessary in \autoref{theorem:critical-pair-theorem}. In fact, it is an easy exercise to prove that if $\mathbb{\quantale}$ is idempotent, then \autoref{lemma:critical-pair} extends to non-linear $\Quantale$-TRS{s}. \end{remark} \begin{example} It is easy to see that system $\mathcal{N}$ of natural numbers is terminating and locally confluent on all its critical pairs. By \autoref{theorem:critical-pair-theorem}, we conclude that $\mathcal{N}$ is confluent. \end{example} \begin{example} Most of the quantitative{} string rewriting systems introduced so far are not terminating, and thus we cannot rely on \autoref{theorem:critical-pair-theorem} to prove their confluence. One easy way to overcome the problem is to rephrase them as terminating systems.
For instance, we can stipulate that molecules can only be deleted and that substitution of molecules is directed, in the sense that, e.g., $\texttt{A}$ can become $\texttt{C}$, $\texttt{G}$, and $\texttt{T}$, but not vice-versa (similarly, $\texttt{C}$ can become $\texttt{G}$ and $\texttt{T}$, and $\texttt{G}$ can only become a $\texttt{T}$). This way, we indeed obtain a terminating system locally confluent on its critical pairs, and thus a confluent system. Although this approach works fine as long as we are interested in reachability problems (and the like), some care is needed when dealing with optimal distances, as forcing termination may lead to increasing minimal distances between molecules.\footnote{ This observation generalises to a collection of interesting research problems asking whether completion algorithms on traditional rewriting systems can be extended to quantitative{} systems. Notice that this question may not have a Boolean answer: in fact, some completion procedures may be correct from the point of view of reachability problems, but not so when it comes to dealing with optimal distances. } \end{example} \begin{example} System $\mathcal{T}_{\checkmark}$ is terminating and locally confluent, and thus confluent. System $\mathcal{T}$, instead, is not terminating, due to the rule $\writeop{n}{x} \qstepto{\varepsilon} \writeop{m}{x}$, with $\varepsilon \geq E(n,m)$ (here, $E$ denotes the Euclidean distance). We can easily fix that by imposing $m<n$, this way obtaining a terminating and locally confluent --- and thus confluent --- system. \end{example} \subsection{Confluence and Critical Pairs, Part II} \label{sect:qtrs-confluence-part-2} \autoref{theorem:critical-pair-theorem} constitutes a powerful tool to prove confluence of \emph{terminating} $\Quantale$-TRS{s}. In a quantitative{} setting, however, termination might be too strong a condition, and interesting $\Quantale$-TRS{s} may not satisfy it. As an example, consider system $\mathcal{B}$ of Barycentric algebras. Due to commutativity and left invariance, the system is obviously non-terminating. We may handle the former point by considering quantitative{} notions of rewriting modulo\footnote{We leave the detailed development of such a theory for future work.}~\cite{rewriting-modulo-1,rewriting-modulo-2,Huet80}, but the latter is a genuine quantitative{} reduction (and actually the quantitative{} essence of the total variation distance!). That simply makes it impossible to prove confluence of $\mathcal{B}$ via critical pairs and Newman's Lemma. And yet, by simply working out examples, we have the feeling that $\mathcal{B}$ is indeed confluent. To solve this issue, we modify \autoref{lemma:critical-pair} replacing local confluence with a stronger condition, namely \emph{strong confluence}~\cite{Huet80}. \begin{definition} We say that a $\mathbb{\quantale}$-relation $R: A \tobar A$ is \emph{strongly confluent} if it satisfies inequality \eqref{eq:strong-confluence-1} below and that it is \emph{strongly closed} if it satisfies inequalities \eqref{eq:strong-confluence-1} and \eqref{eq:strong-confluence-2}. As usual, we say that a $\Quantale$-ARS{} is strongly confluent (resp. strongly closed) if its rewriting $\mathbb{\quantale}$-relation is. \begin{align} \dual{R};R &\leq \reflex{R};\dualstar{R} \label{eq:strong-confluence-1} \tag{strong-1} \\ \dual{R};R &\leq R^{*};\dual{{\reflex{R}}}.
\label{eq:strong-confluence-2} \tag{strong-2} \end{align} \end{definition} The next result states that if a $\Quantale$-TRS{} is \emph{linear}, then to prove that it is strongly closed, it is enough to look at its critical pairs. \begin{lemma} \label{lemma:strongly-closed-critical-pair} A \emph{linear} $\Quantale$-TRS{} $\mathcal{R} = (\Sigma, \mapsto_{R})$ is strongly closed if and only if $R$ is strongly closed on all the critical pairs of $\mathcal{R}$. \end{lemma} \begin{proof} The proof is essentially the same as that of \autoref{lemma:critical-pair}, the only difference being that in the main case we replace local confluence with strong closure. \end{proof} Strong confluence (resp. closure) by itself is not immediately informative for our purposes. Its relevance is given by the following result stating that strong confluence (and thus strong closure) entails confluence. \begin{lemma} \label{lemma:strong-confluence-implies-confluence} Strong confluence implies confluence. \end{lemma} \begin{proof}[Sketch] Let $R: A \tobar A$ be a $\mathbb{\quantale}$-relation and $S = \dual{R}$ (our proof works for an arbitrary $S: A \tobar A$, actually). Assume \eqref{eq:strong-confluence-1}, i.e. $S; R \leq \reflex{R}; S^*$. We prove $S^*;R^* \leq R^*; S^*$. Pointwise proofs rely on lexicographic induction. A lightweight, pointfree proof is obtained by observing that since $S^* = \mu X. \Delta \vee S;X$, we have $$ S^*;R^* = (\mu X. \Delta \vee S;X); R^* = \mu X. R^* \vee S;X. $$ Consequently, to prove $S^*;R^* \leq R^*; S^*$ it is sufficient to prove $\mu X. R^* \vee S;X \leq R^*; S^*$, which can be done using fixed point induction. \end{proof} \begin{theorem} \label{theorem:strongly-closed-critical-pairs} If a \emph{linear} $\Quantale$-TRS{} is strongly closed on all its critical pairs, then it is confluent. \end{theorem} \begin{proof} Directly from \autoref{lemma:strongly-closed-critical-pair} and \autoref{lemma:strong-confluence-implies-confluence}. \end{proof} We can now rely on \autoref{theorem:strongly-closed-critical-pairs} to prove confluence of $\mathcal{B}$. \begin{proposition} \label{prop:BA-is-confluent} System $\mathcal{B}$ is strongly closed, and thus confluent. \end{proposition} \begin{proof} By \autoref{theorem:strongly-closed-critical-pairs}, it is enough to prove that $\mathcal{B}$ is strongly closed on all critical pairs. To do so, we first notice that the associativity rule is `reversible' in the following sense: given $\epsilon_1, \epsilon_2 \in (0,1)$, the reduction $$ \mrelto{0}{(x \barplus{\epsilon_1} y) \barplus{\epsilon_2} z} {x \barplus{\epsilon_1 \epsilon_2} (y \barplus{\frac{\epsilon_2 - \epsilon_1\epsilon_2}{1 - \epsilon_1\epsilon_2}} z)} $$ has an inverse reduction $$ \mrel{0} {x \barplus{\epsilon_1 \epsilon_2} (y \barplus{\frac{\epsilon_2 - \epsilon_1\epsilon_2}{1 - \epsilon_1\epsilon_2}} z)} {\to^*} {( x \barplus{\epsilon_1} y) \barplus{\epsilon_2} z} $$ obtained by alternately applying commutativity and associativity. We then verify that $\mathcal{B}$ is strongly closed. This is a routine case analysis on critical pairs. \end{proof} As for \autoref{lemma:critical-pair} and \autoref{theorem:critical-pair-theorem}, even for \autoref{lemma:strongly-closed-critical-pair} and \autoref{theorem:strongly-closed-critical-pairs} we can drop the linearity assumption if the underlying quantale is idempotent. This gives confluence of the Hausdorff distance. \begin{example} Mimicking \autoref{prop:BA-is-confluent} we see that system $\mathcal{L}$ is confluent too.
\end{example} \subsection{Confluence and Critical Pairs, Part III} In previous sections, we have proved confluence of most of the systems introduced in \autoref{section:long-intro}: systems $\mathcal{N}$, $\mathcal{M}$, $\mathcal{B}$, $\mathcal{T}$, $\mathcal{L}$ are all confluent. An important class of systems not present in this list is the one of quantitative{} extensions\footnote{Affine combinators having no nontrivial quantitative{} reductions, they are essentially identical to their traditional counterpart (which indeed gives a confluent and terminating system).} of affine combinators. This class includes system $\BCK_{\NATS}$ (combinators plus arithmetic), $\BCK_{\BA}$ (probabilistic combinators), $\bck_{\mathcal{T}}$ (combinators with cost), and similar systems. Even if different, all these systems are obtained in essentially the same way, namely by joining systems together.\footnote{Algebraically, this operation corresponds to the \emph{sum} of algebraic theories \cite{DBLP:journals/tcs/HylandPP06}.} For instance, system $\BCK_{\BA}$ is obtained by joining systems $\bck$ of pure affine combinators and $\mathcal{B}$.\footnote{Further systems can be obtained either by modifying the `effectful' layer (hence adding to $\bck$, for instance, quantitative{} output, nondeterminism, etc) or by replacing $\bck$ itself with other rewriting systems modelling programming languages, such as concurrent ones \cite{Milner/Communication-and-concurrency/1989,handbook-process-algebra}.} It is then natural to ask whether confluence of such systems can be proved \emph{compositionally} in terms of confluence of their component subsystems. In our case, for instance, we know that both $\mathcal{B}$ and $\bck$ are confluent,\footnote{Since $\bck$ alone does not have truly quantitative{} behaviours, its confluence can be proved as for its traditional counterpart.} and we would like to infer confluence of $\BCK_{\BA}$. Indeed, we can do so relying on the quantitative{} refinement of the Hindley-Rosen Lemma (\autoref{lemma:hindley-rosen}). Let us begin by formalising the idea of joining $\Quantale$-TRS{s}. \begin{definition} Given $\Quantale$-TRS{s} $\mathcal{R} = (\Sigma_{\mathcal{R}}, \mapsto_{R})$ and $\mathcal{S} = (\Sigma_{\mathcal{S}}, \mapsto_{S})$ with disjoint signatures, we define their \emph{sum} as the $\Quantale$-TRS{} $\mathcal{R} + \mathcal{S} = (\Sigma_{\mathcal{R}\mathcal{S}}, \mapsto_{RS})$ defined thus: $\Sigma_{\mathcal{R}\mathcal{S}} \triangleq \Sigma_{\mathcal{R}} \cup \Sigma_{\mathcal{S}}$ and $\mapsto_{RS} \triangleq {\mapsto_{R}} \cup {\mapsto_{S}}$. \end{definition} \begin{example} We immediately see that $\BCK_{\BA} = \bck + \mathcal{B}$. Extensions of $\bck$ with ticking are obtained as $\bck + \mathcal{T}$ and $\bck + \mathcal{T}_{\checkmark}$, whereas $\bck + \mathcal{L}$ gives nondeterministic affine combinators. Finally, notice that even if formally different, system $\BCK_{\NATS}$ is essentially $\bck + \mathcal{N}$. \end{example} To relate the sum of $\Quantale$-TRS{s} $\mathcal{R}$, $\mathcal{S}$ as above with the Hindley-Rosen Lemma, we first observe that the $\mathbb{\quantale}$-relation $RS$ associated to $\mathcal{R} + \mathcal{S}$ coincides with $R \vee S$. \begin{lemma} \label{lemma:sum-qtrs-equal-union} For all $\Quantale$-TRS{s} $\mathcal{R} = (\Sigma_{\mathcal{R}}, \mapsto_{R})$, $\mathcal{S} = (\Sigma_{\mathcal{S}}, \mapsto_{S})$, we have $RS = R \vee S$. \end{lemma} \begin{proof} Straightforward.
\end{proof} \autoref{lemma:sum-qtrs-equal-union} puts us in a position to rely on the quantitative{} Hindley-Rosen Lemma to prove confluence of $\mathcal{R} + \mathcal{S}$. Accordingly, to infer confluence of $R \vee S$ ($= RS$) we need to have confluence of $R$ and $S$ as well as commutation of $R$ with $S$. Whereas the former usually is our starting hypothesis, the latter requires a specific analysis. For the cases we are interested in, such an analysis is smooth, as systems $\mathcal{R}$ and $\mathcal{S}$ are essentially independent, in the sense that $\mathcal{R} + \mathcal{S}$ does not create new critical pairs. \begin{lemma} \label{lemma:strcl} Given two linear $\Quantale$-TRS{s} $\mathcal{R}$, $\mathcal{S}$ as above, if the collection of critical pairs obtained by overlapping of a rule of $\mathcal{R}$ and a rule of $\mathcal{S}$ is empty, then $R$ strongly commutes with $S$, and thus $R$ commutes with $S$. \end{lemma} \begin{proof} The proof that $R$ strongly commutes with $S$ is a simplified instance of the proof of \autoref{lemma:critical-pair} and \autoref{lemma:strongly-closed-critical-pair}. From that, commutation of $R$ with $S$ follows by \autoref{lemma:strong-confluence-implies-confluence}. \end{proof} Using \autoref{lemma:strcl}, we obtain the necessary hypotheses to apply \autoref{lemma:hindley-rosen} and conclude confluence of $\BCK_{\BA} = \bck + \mathcal{B}$. \begin{proposition} \label{prop:bck-plus-ba-is-confluent} System $\bck + \mathcal{B}$ is confluent. \end{proposition} \begin{proof} Confluence of $\bck + \mathcal{B}$ immediately follows from \autoref{lemma:hindley-rosen}, provided that $K$ commutes with $B$. That is indeed the case, since $\bck$ and $\mathcal{B}$ have no common critical pair, and thus they (strongly) commute, by \autoref{lemma:strcl}. \end{proof} In a similar fashion, one proves that, e.g., $\bck + \mathcal{N}$ and $\bck + \mathcal{T}$ are confluent, as well as combinations thereof. Moreover, as usual, if the underlying quantale is idempotent, we can drop the linearity assumption in \autoref{lemma:strcl}, so that by enriching $\bck$ over the strong Lawvere quantale (rather than over $\mathbb{L}$), we see that $\bck + \mathcal{L}$ is confluent too. These results complete the (confluence) analysis of the examples introduced in \autoref{section:long-intro} as well as our general results on linear (or non-expansive) $\Quantale$-TRS{s}. The next --- last --- section of this work outlines a possible way to go beyond linearity: we are going to move from non-expansive to \emph{Lipschitz continuous} $\Quantale$-TRS{s}. \section{BEYOND NON-EXPANSIVENESS: GRADES, MODALITIES, AND LIPSCHITZ CONTINUITY} \label{sect:beyond-non-expansive-systems} In this section, we introduce a new class of quantitative{} term rewriting systems --- namely \emph{graded} rewriting systems --- that allows us to model non-linear systems avoiding, at the same time, distance trivialisation and lack of confluence issues. So far, in fact, we have focused on linear, non-expansive functions and term constructors. This is reflected almost everywhere in our definitions: for instance, the rule \[ \infer{\mrel{\varepsilon}{\mathcal{C}[\subst{e}{\sigma}]}{\to_{R}}{\mathcal{C}[\subst{f}{\sigma}]}} {\mrel{\varepsilon}{e}{\mapsto_{R}}{f}} \] states that differences produced by $\mapsto_{R}$ are \emph{non-expansively} propagated by $\to_{R}$ through contexts (hence term constructors) and substitution.
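As a concrete instance of this policy (our own illustration), consider the Lawvere quantale, a ground rule $e \qstepto{0.5} f$, and a unary function symbol $g$: the rule above yields $g(g(e)) \to_{R} g(g(f))$ at distance $0.5$, i.e. exactly the distance produced by the ground step, no matter how deeply the redex occurs inside the context.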
This can be seen even more clearly in quantitative{} algebraic theories, where the rule \[ \infer{ \mrel {\varepsilon_1 \otimes \cdots \otimes \varepsilon_n} {f(e_1, \hdots, e_n)} {=_{E}} {f(f_1, \hdots, f_n)} } {\mrel{\varepsilon_1}{e_1}{=_{E}}{f_1} & \cdots & \mrel{\varepsilon_n}{e_n}{=_{E}}{f_n} } \] precisely tells us that the function symbol $f$ behaves as a non-expansive function. We have also seen stronger forms of non-expansiveness, namely strong (or ultra) non-expansiveness: \[ \infer{ \mrel {\varepsilon_1 \wedge \cdots \wedge \varepsilon_n} {f(e_1, \hdots, e_n)} {=_{E}} {f(f_1, \hdots, f_n)} } {\mrel{\varepsilon_1}{e_1}{=_{E}}{f_1} & \cdots & \mrel{\varepsilon_n}{e_n}{=_{E}}{f_n} } \] As already remarked, strong non-expansiveness is, in its essence, just ordinary non-expansiveness on an idempotent quantale. Even if we can view all of that from a more semantic point of view in terms of arrows and constructions in suitable categories of $\mathbb{\quantale}$-spaces \cite{Hoffman-Seal-Tholem/monoidal-topology/2014}, such a level of abstraction is not necessary for our goals: it is sufficient to notice that, given a $\mathbb{\quantale}$-relation $R: A \tobar A$, a function $f: A^n \to A$ is non-expansive (with respect to $R$) if: \begin{align*} \bigotimes_i R(a_i, b_i) &\leq R(f(a_1, \hdots, a_n), f(b_1, \hdots, b_n)) \end{align*} Non-expansive maps, however, are not the only maps one is interested in when working with metric spaces. Another interesting class of transformations is the one of \emph{contractions} and, more generally, the one of \emph{Lipschitz continuous} functions \cite{metric-spaces}. Such maps have been extensively studied in the context of metric (program) semantics, due to their link with differential privacy \cite{Pierce/DistanceMakesTypesGrowStronger/2010,GaboardiEtAl/POPL/2017} and (bounded) linear and coeffectful types \cite{Orchard-icfp-2019,modal-reasoning-equal-metric-reasoning,Gaboradi-et-al/ICFP/2016}. Moving from an original observation by \citet{Lawvere/GeneralizedMetricSpaces/1973}, generalisations of non-expansive maps to $\mathbb{\quantale}$-relations have been given in terms of change of base functors~\cite{Gavazzo/LICS/2018,DBLP:phd/basesearch/Gavazzo19}, and corelators (viz. comonadic lax extensions) \cite{DBLP:journals/pacmpl/LagoG22a}. In a nutshell, we allow functions $f: A \to A$ to amplify distances, but in a controlled way. Such a way is given by a (family of suitable) function(s) $\phi: \Omega \to \Omega$, so that we require: $$ \phi(R(a, b)) \leq R(f(a),f(b)). $$ The map $\phi$ is sometimes called the \emph{sensitivity} of $f$ and gives the law describing how much differences between outputs are affected by differences between inputs. Accordingly, we think about sensitivity as generalising Lipschitz constants; and indeed, multiplication by a constant is a typical example of a map $\phi$ on the Lawvere quantale. Technically speaking, we shall define function sensitivity by means of change of base functors~\cite{Kelly/EnrichedCats} which, in our simplified setting, take the form of quantale homomorphisms \cite{Hoffman-Seal-Tholem/monoidal-topology/2014}. Any change of base functor $\phi: \Omega \to \Omega$ induces a map on $\mathbb{\quantale}$-relations sending a $\mathbb{\quantale}$-relation $R$ to the $\mathbb{\quantale}$-relation $\bang{\phi}R$ mapping $(a, b)$ to $\phi(R(a, b))$.
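To make this action concrete, here is a minimal executable sketch (ours, and purely illustrative: all identifiers are our own, distances are represented as non-negative \texttt{Double}s, and the change of base is multiplication by a finite constant) showing how a change of base functor acts pointwise on a Lawvere-quantale relation.
\begin{verbatim}
-- Minimal sketch over the Lawvere quantale: distances are non-negative
-- Doubles (with 1/0 playing the role of infinity); all names are ours.
type Dist   = Double                 -- elements of [0, infinity]
type QRel a = a -> a -> Dist         -- a quantale-valued relation on a

-- multiplication by a finite constant, a typical change of base functor
scaleBy :: Dist -> (Dist -> Dist)
scaleBy k eps = k * eps

-- the induced action on relations: (bang phi r) a b = phi (r a b)
bang :: (Dist -> Dist) -> QRel a -> QRel a
bang phi r a b = phi (r a b)

-- example: the Euclidean distance on Doubles, amplified by 2
euclid :: QRel Double
euclid a b = abs (a - b)

amplified :: QRel Double
amplified = bang (scaleBy 2) euclid  -- amplified 1 4 evaluates to 6.0
\end{verbatim}
In this reading, a non-expansive argument is one whose sensitivity is \texttt{scaleBy 1}, and composing such maps models nested amplification, mirroring the composition of change of base functors used below.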
Maps $\bang{\phi}$ are examples of (graded) relational modalities known as \emph{corelators} \cite{DBLP:journals/pacmpl/LagoG22a}.\footnote{They actually provide a canonical example of a corelator.} Even if the theory of graded rewriting systems we are going to define can be given in full generality in terms of corelators, we shall work with change of base functors only. The authors hope this choice will help the reader understand this last section. When we move from unary to $n$-ary functions, it does not make sense to talk about \emph{the} sensitivity of $f$; instead, we should talk about the sensitivity of $f$ \emph{on a given argument}. Assuming $f$ to have sensitivity $\phi_i$ on the $i$th argument, we then obtain the following new, finer notion of non-expansiveness: \begin{align*} \bigotimes_i \bang{\phi_i}R(a_i, b_i) &\leq R(f(a_1, \hdots, a_n), f(b_1, \hdots, b_n)). \end{align*} Armed with this new notion of non-expansiveness, let us see how to make rewriting systems non-expansive in this new sense. The resulting notion, the one of a graded rewriting system, is the main subject of this last section. As usual, before introducing such systems in full generality, let us warm up with a concrete example. \paragraph{Graded Combinatory Logic} The main example of a graded system we will deal with is \emph{graded combinatory logic} \cite{DBLP:conf/lics/Atkey18,dagnino-1}, a generalisation of Abramsky's bounded combinatory logic \cite{abramsky-combinatory-1,abramsky-combinatory-logic-2}. Recall that system $\bck$ (as well as its extensions) is based on \emph{affine} combinators only. In particular, we have seen how adding the (cartesian) combinator $\texttt{W}$ leads to distance trivialisation and non-confluent behaviours. The reason is that the reduction rule $\texttt{W} \cdot x \cdot y \qstepto{0} x \cdot y \cdot y$ duplicates the variable $y$, and thus the distance between combinators $\texttt{W} \cdot e \cdot f$ and $\texttt{W} \cdot e \cdot f'$ is \emph{duplicated} when reducing $\texttt{W}$. One way to overcome this problem is to refine system $\bck$ by introducing graded exponential modalities $\bangg{n}$ constraining the usage of terms. The function symbol $\bangg{n}$ is an example of a coeffectful modality \cite{Orchard-icfp-2019,DBLP:conf/esop/GhicaS14,Mycroft-et-al/ICFP/2014,Gaboradi-et-al/ICFP/2016} and can be thought of as providing $n$ copies of its argument, so that we can break linearity up to usage $n$. From a metric point of view, $\bangg{n}$ is a function symbol whose sensitivity is given by the multiplication by $n$ function, meaning that whenever we have terms $e$, $f$ that are $\varepsilon$-apart, $\bangg{n}e$ and $\bangg{n}f$ are stipulated to be $n\varepsilon$ apart. According to this strategy, the reduction $\texttt{W} \cdot x \cdot y \qstepto{0} x \cdot y \cdot y$ is replaced by $$ \texttt{W} \cdot x \cdot \bangg{n+m} y \qstepto{0} x \cdot \bangg{n}y \cdot \bangg{m}y. $$ But that is not the end of the story. The introduction of $\bangg{n}$ affects not only ground reductions; it also hugely impacts the definition of $\to$, which now becomes \emph{modal}. In fact, suppose we have combinators $e$, $f$ that are $\varepsilon$-apart. If we now want to reduce $e$ to $f$ under the scope of $\bangg{n}$, we \emph{cannot} non-expansively propagate $\varepsilon$ through $\bangg{n}e$ and $\bangg{n}f$, as we usually do in linear systems.
Instead, we have to amplify $\varepsilon$ by $n$, this way obtaining the rule: \[ \infer{\bangg{n}e \qreduce{n\varepsilon} \bangg{n}f} {e \qstepto{\varepsilon} f} \] All of that extends to reductions inside an arbitrary combinator (context) $\mathcal{C}$. When reducing $\mathcal{C}[e]$ to $\mathcal{C}[f]$, we have to amplify the distance $\varepsilon$ between $e$ and $f$ according to \emph{how} (much) $\mathcal{C}$ uses its argument, i.e. according to the sensitivity of $\mathcal{C}$ regarded as a term function. Writing $\degctx{\mathcal{C}}$ for such a sensitivity, we obtain the rule \[ \infer{\mathcal{C}[e] \qreduce{\degctx{\mathcal{C}}(\varepsilon)} \mathcal{C}[f]} {e \qstepto{\varepsilon} f} \] showing us that contexts now act not only on terms, but also on distances between them (or, taking a modal perspective \cite{modal-reasoning-equal-metric-reasoning,DBLP:journals/pacmpl/LagoG22a}, contexts act also on possible worlds). Before giving a complete definition of the system of graded combinators, there is one last point we need to clarify: how do we determine the sensitivity of a context? A natural solution often employed in the literature on modal and graded calculi is to rely on a type system tracking variable usage in terms. Following such a proposal, we would work with judgements of the form $x_1:\phi_1, \hdots, x_n:\phi_n \vdash e$ stating that $x_i$ has sensitivity $\phi_i$ in $e$. Introducing type systems, however, would require unnecessary work, in large measure independent of rewriting. We shall avoid that by recursively computing the grade of a variable (and even of a variable position) in a term \cite{dagnino-1}. Nonetheless, the reader may find it useful to think in terms of type systems at first since trying to design typing rules helps to isolate the compositional properties a good notion of sensitivity should satisfy. First, we need to be able to \emph{add} and \emph{multiply} sensitivity functions, so as to model nested and parallel use of terms. For instance, if a variable $x$ is used $n$ times by $e$ and $m$ times by $f$, it will be used $n+m$ times by $h(e, f)$, provided that $h$ is non-expansive (e.g. $x$ is used $n+m$ times by $\texttt{B} \cdot e \cdot f \cdot y$). Similarly, if $g$ is a function symbol with sensitivity $j$ (e.g. take $\bangg{j}$ as $g$), then $x$ will be used $jn$ times in $g(e)$. Such operations are obviously available in the concrete example of graded combinators, where sensitivity is given by multiplication by a constant: in the general setting of quantale homomorphisms, we shall model multiplication as function composition and addition as pointwise tensor product. Secondly, we need to have distinguished functions modelling linear and zero sensitivity, i.e. neutral elements for multiplication and addition, respectively. With no surprise, those will be the identity and the constant $\mathsfit{k}$ quantale homomorphisms.\footnote{Another approach is to model term sensitivity axiomatically \cite{DBLP:conf/esop/GhicaS14,Gaboradi-et-al/ICFP/2016,Orchard-icfp-2019} by means of suitable semi-ring like structures $\mathcal{G}$ and then define $\mathcal{G}$-indexed relational extensions (i.e. corelators) to model the action of grades on $\mathbb{\quantale}$-relations \cite{DBLP:journals/pacmpl/LagoG22a,modal-reasoning-equal-metric-reasoning}.} Let us now formally introduce system $\mathcal{W}$ of graded combinators.
The signature of the system is defined thus $$ \Sigma_{\mathcal{W}} \triangleq \{\texttt{B}, \texttt{C}, \texttt{K}, \texttt{W}_{n,m}, \texttt{D}, \delta_{n,m}, \texttt{F}_n, \bangg{n}, \cdot \mid n,m \in [0,\infty]\}.$$ $\Sigma_{\mathcal{W}}$ contains basic combinators $\texttt{B}$, $\texttt{C}$, $\texttt{K}$, as well as the (family of) combinator(s) $\texttt{W}_{n,m}$ and (families of) combinators $\texttt{F}_{n}$, $\texttt{D}$, and $\delta_{n,m}$ manipulating the function symbol(s) $\bangg{n}$. In fact, in addition to the usual binary function symbol for application, we have a $[0,\infty]$-family of exponential modalities $\bangg{n}$ \cite{DBLP:journals/tcs/GirardSS92,DBLP:conf/esop/GhicaS14}. Each function $\bangg{n}$ has sensitivity (constant multiplication by) $n$, which allows us to leave application non-expansive (meaning that it has sensitivity one on each argument). Notice that the signature $\Sigma_{\mathcal{W}}$ specifies for each function symbol not only its arity, but also the sensitivity of all its arguments. We shall refer to such signatures as \emph{graded signatures}. In general, we will employ the notation $f: (\phi_1, \hdots, \phi_n)$ to state that $f$ is an $n$-ary function symbol with sensitivity $\phi_i$ on its $i$th argument. To define rewriting $\mathbb{\quantale}$-relations for $\mathcal{W}$, we first need to define the sensitivity of a variable in a term. Actually, we look at the sensitivity of a variable position in a term. We define the grade $\degree{p}{e}$ of a variable position $p$ in $e$ (i.e. $e_{|p}$ is a variable) as follows, where $X$ ranges over basic combinators: \begin{align*} \degree{\lambda}{\varone} &\triangleq 1 \\ \degree{p}{X} &\triangleq 0 \\ \degree{ip}{e_1 \cdot e_2} &\triangleq \degree{p}{e_i} \\ \degree{1p}{\bangg{n}e} &\triangleq n \cdot \degree{p}{e}. \end{align*} We then define the grade of a variable $x$ in a term $e$ as $\degree{x}{e} \triangleq \sum \{\degree{p}{e} \mid e_{|p} = x\}$. For instance, the variable $x$ has sensitivity $9 = 3 + (3 \cdot 2)$ (regarded as the multiplication-by-nine function) in $e \triangleq \bangg{3}(x \cdot \bangg{2} (\texttt{I} \cdot x))$, as it is under the scope both of $\bangg{3}$ and of $\bangg{2}$, and the latter, in turn, is itself under the scope of $\bangg{3}$ (and morally, we can think about a nested $\bangg{n} \bangg{m}$ as a single $\bangg{nm}$). Indeed, the sensitivity of $x$ at position $11$ is $3$, whereas at position $1212$ it is $6$. \[ \xymatrix{ & \bangg{3} \ar[d]^{1} & & \\ & \cdot \ar[ld]_{1} \ar[rd]^{2} & & \\ x & & \bangg{2} \ar[d]^{1} & \\ & & \cdot \ar[ld]_{1} \ar[rd]^{2} & \\ & \texttt{I} & & x } \] \begin{notation} Given a context $\mathcal{C}$, we write $\degctx{\mathcal{C}}$ for the sensitivity of the (unique occurrence of the) hole in $\mathcal{C}$. \end{notation} We now have all the ingredients to define system $\mathcal{W}$, whose (ground) rewriting $\mathbb{\quantale}$-relation $\mapsto_{W}$ and its extension $\to_{W}$ are defined in \autoref{figure:graded-combinators}. As usual, the $\mathbb{L}$-relation $W$ is defined by $W(t,s) \triangleq \inf\{\varepsilon \mid t \qreduce{\varepsilon}_{W} s\}$.
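The grade computation just described is entirely mechanical; as a sanity check, the following small sketch (ours, and deliberately simplified: grades are plain \texttt{Double}s, basic combinators are mere named constants, and the grade of a variable is computed directly rather than position by position) mirrors the recursive clauses above and reproduces the sensitivity $9$ of the running example.
\begin{verbatim}
-- A simplified rendering (ours) of grade computation for system W terms.
data Term
  = Var String          -- variables
  | Basic String        -- basic combinators (B, C, K, I, ...)
  | App Term Term       -- application  e1 . e2
  | Bang Double Term    -- the exponential  !n e

-- grade of a variable x in a term: 1 at x itself, 0 at other leaves,
-- summed across the two arguments of an application, and multiplied
-- by n under !n, exactly as in the recursive clauses above.
grade :: String -> Term -> Double
grade x (Var y)    = if x == y then 1 else 0
grade _ (Basic _)  = 0
grade x (App l r)  = grade x l + grade x r
grade x (Bang n e) = n * grade x e

-- the running example !3 (x . !2 (I . x)) indeed has grade 9 on x
runningExample :: Double
runningExample =
  grade "x" (Bang 3 (App (Var "x") (Bang 2 (App (Basic "I") (Var "x")))))
\end{verbatim}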
\begin{figure} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*5)/6,colframe=black,colback=black!0!white,arc=0mm] \[ \texttt{B} \cdot x \cdot y \cdot z \qstepto{0}_{W} x \cdot (y \cdot z) \qquad \texttt{C} \cdot x \cdot y \cdot z \qstepto{0}_{W} x \cdot z \cdot y \qquad \texttt{K} \cdot x \cdot \bangg{0}y \qstepto{0}_{W} x \] \vspace{-0.2cm} \[ \texttt{D} \cdot \bangg{1} x \qstepto{0}_{W} x \qquad \delta_{n,m} \cdot \bangg{nm} x \qstepto{0}_{W} \bangg{n} \bangg{m} x \qquad \texttt{F}_n \cdot \bangg{n} x \cdot \bangg{n} y \qstepto{0}_{W} \bangg{n}(x \cdot y) \] \vspace{-0.2cm} \[ \texttt{W}_{n,m} \cdot x \cdot \bangg{n+m} y \qstepto{0}_{W} x \cdot \bangg{n} y \cdot \bangg{m}y \] \vspace{-0.2cm} \[ \infer {\mathcal{C}[t^\sigma] \qreduce{\degctx{\mathcal{C}}(\varepsilon)}_{W} \mathcal{C}[s^\sigma]} {t \qstepto{\varepsilon}_{W} s} \] \end{tcolorbox} } \caption{System $\mathcal{W}$ of graded combinators} \label{figure:graded-combinators} \end{figure} \subsection{Modal and Graded Rewriting: $(\mathbb{\quantale}, \mathbb{\Phi})$-Systems} Now that the reader is familiar with the informal ideas behind modal and graded rewriting systems, we introduce such systems formally. To do so, we first recall the notion of a quantale homomorphism. \begin{definition} Given quantales $\mathbb{\quantale} = (\Omega, \leq, \otimes, \mathsfit{k})$ and $\mathbb{\Theta} = (\Theta, \sqsubseteq, \boxtimes, \mathsfit{j})$, a lax quantale homomorphism is a \emph{monotone} map $h: \Omega \to \Theta$ such that \begin{align*} \mathsfit{j} &\sqsubseteq h(\mathsfit{k}) \\ h(\varepsilon) \boxtimes h(\delta) &\sqsubseteq h(\varepsilon \otimes \delta). \end{align*} If we replace the above inequalities with full equalities and require $h$ to be \emph{continuous} (i.e. $h(\bigvee_i \varepsilon_i) = \bigsqcup_i h(\varepsilon_i)$), then we say that $h$ is a \emph{quantale homomorphism}. \end{definition} From now on, we shall work with quantale homomorphisms on the same quantale $\mathbb{\quantale}$. We denote such maps by $\phi, \psi, \hdots$ and refer to them as \emph{change of base (endo)functors} (CBEs, for short) \cite{Lawvere/GeneralizedMetricSpaces/1973, Hoffman-Seal-Tholem/monoidal-topology/2014}. \begin{remark} CBEs have been successfully employed to study general program distances \cite{Gavazzo/LICS/2018,DBLP:phd/basesearch/Gavazzo19} and modal coeffectful reasoning \cite{modal-reasoning-equal-metric-reasoning,DBLP:journals/pacmpl/LagoG22a}. In such a setting, actually, one works with \emph{lax} quantale homomorphisms rather than with full homomorphisms. Although most (but not all) of the results presented in this section can be given in terms of lax homomorphisms, important theorems such as confluence of orthogonal systems seem to require full homomorphisms. For that reason, we shall directly work with the latter maps. \end{remark} \begin{example} \begin{enumerate} \item The main example of CBEs we consider is multiplication by a constant on the Lawvere quantale (and variations thereof). Given $\kappa \in \mathbb{R}_{\geq 0}$, we regard $\kappa$ as mapping $\varepsilon \in [0,\infty]$ to $\kappa\varepsilon \in [0,\infty]$. Notice that we do not allow multiplication by infinity. That is because, from a rewriting perspective, multiplying by $\infty$ is semantically meaningless.
Nonetheless, we could even include multiplication by infinity in our analysis, provided that we restrict our definition to finitely continuous CBEs \cite{Gavazzo/LICS/2018,DBLP:phd/basesearch/Gavazzo19} and that we carefully define multiplication between zero and infinity (as first observed by \citet{GaboardiEtAl/POPL/2017}, algebra forces multiplication to become non-commutative, so that $0 \cdot \infty \neq \infty \cdot 0$).\footnote{ Extended multiplication is defined thus, for $y \neq 0$: $ x \cdot \infty \triangleq \infty$, $\infty \cdot 0 \triangleq 0$, $\infty \cdot y \triangleq \infty$. } \item Recall that in \autoref{section:qars} we have introduced a relational box modality relying on the map $\psi : \mathbb{2} \to \Omega$ and its right adjoint $\varphi : \Omega \to \mathbb{2}$. The map $\psi \circ \varphi$ is a CBE. \item Other examples of CBEs, especially on quantales of modal predicates, can be found in the literature on relational reasoning about coeffects \cite{DBLP:journals/pacmpl/LagoG22a}. \end{enumerate} \end{example} CBEs are closed under composition and the identity function $\bm{1}: \Omega \to \Omega$ is a CBE. Moreover, we can extend the order $\leq$ and the multiplication $\otimes$ of $\mathbb{\quantale}$ to CBEs pointwise. Finally, we denote by $\qunit^{\star}$ the constant $\mathsfit{k}$ CBE. \begin{remark} Any CBE $\phi$ induces an action $\bang{\phi}$ on $\mathbb{\quantale}$-relations defined by $\bang{\phi}R(a, b) \triangleq \phi(R(a,b)). $ The map $\bang{\phi}$ is an example of a corelator \cite{DBLP:journals/pacmpl/LagoG22a}. \end{remark} We now introduce a new class of rewriting systems, which we dub \emph{$(\mathbb{\quantale}, \mathbb{\Phi})$-systems} (or \emph{$\mathbb{\Phi}$-systems} for short). Let us fix a quantale $\mathbb{\quantale}$ and a structure $\mathbb{\Phi} = (\Phi, \leq, \circ, \bm{1}, \otimes, \qunit^{\star})$, where $\Phi$ is a set of CBEs containing the identity and constant $\mathsfit{k}$-functions, and closed under function composition and tensor. \begin{definition} \begin{enumerate} \item The \emph{modal arity} of an $n$-ary function symbol $f$ is a tuple $(\phi_1, \hdots, \phi_n)$ with $\phi_i \in \Phi$. Given a function symbol $f$ with modal arity $(\phi_1, \hdots, \phi_n)$ (notation $f: (\phi_1, \hdots, \phi_n)$), we say that $f$ has sensitivity (or modal grade) $\phi_i$ on its $i$th argument. \item A \emph{$\mathbb{\Phi}$-graded signature} is a set $\Sigma$ containing function symbols with their modal arity. Given a $\mathbb{\Phi}$-graded signature $\Sigma$ and a set $X$ of variables, the collection $\terms{\Sigma}{X}$ of terms is defined as usual. \item Given a term $e$ and a position $p$ for a variable in $e$, we define the grade $\degree{p}{e}$ of $p$ in $e$ as follows: \begin{align*} \degree{\lambda}{e} &\triangleq \bm{1} \\ \degree{ip}{f(e_1, \hdots, e_n)} &\triangleq \phi_i \circ \degree{p}{e_i} \qquad \quad (f: (\phi_1, \hdots, \phi_n)) \in \Sigma. \end{align*} \end{enumerate} \end{definition} Given a term $e$ and a variable $x$, we can compute the grade $\degree{\varone}{e}$ of $x$ in $e$ by `summing' the grades of all positions $p$ such that $e_{|p} = x$. Formally, $\degree{\varone}{e} \triangleq \bigotimes\{\degree{p}{e} \mid e_{|p} = x\}$, where $\bigotimes \emptyset \triangleq \qunit^{\star}$.
Notice that we can equivalently define $\degree{\varone}{e}$ recursively as follows: \begin{align*} \degree{x}{x} &\triangleq \bm{1} \\ \degree{x}{y} &\triangleq \qunit^{\star} \\ \degree{x}{f(e_1, \hdots, e_n)} &\triangleq \bigotimes_i \phi_i \circ \degree{x}{e_i} \qquad \quad (f: (\phi_1, \hdots, \phi_n)) \in \Sigma. \end{align*} We are now ready to define $(\mathbb{\Phi}$-)graded term rewriting systems. \begin{definition} \label{def:cbetrs} A \emph{$(\mathbb{\quantale},\mathbb{\Phi})$-term rewriting system} ($(\Quantale,\mathbb{\Phi})$-TRS{,} for short) is a pair $\mathcal{R} = (\Sigma, \mapsto_{\vrelone})$ consisting of a $\mathbb{\Phi}$-signature $\Sigma$ and a $\mathbb{\quantale}$-ternary relation $\mapsto_{\vrelone}$. The (rewriting) $\mathbb{\quantale}$-ternary relation $\to_{R}$ generated by $\mapsto_{R}$ is defined thus: \[ \infer{\degctx{\mathcal{C}}(\varepsilon) \Vdash \mathcal{C}[\subst{a}{\sigma}] \to_{\vrelone} \mathcal{C}[\subst{b}{\sigma}]} {\varepsilon \Vdash a \mapsto_{\vrelone} b} \quad \infer{\delta \Vdash e \to_{\vrelone} f} {\varepsilon \Vdash e \to_{\vrelone} f & \delta \leq \varepsilon} \] We say that the system is \emph{balanced} if for any rule $\mrel{\varepsilon}{a}{\mapsto}{b}$ we have $\degree{\varone}{a} = \degree{\varone}{b}$, for any variable $\varone$. From now on, we assume all $(\Quantale,\mathbb{\Phi})$-TRS{s} to be balanced. \end{definition} Compared with the definition of (linear) $\Quantale$-TRS{s}, \autoref{def:cbetrs} has two main differences: first, the definition of the full rewriting relation $\to$ now takes into account the grade of the context; secondly, we omit all structural rules besides weakening. The omission of such rules (cf. \autoref{rem:structural-rules}) allows us to strengthen our results: in particular, whereas a critical pair lemma can be given for $(\Quantale,\mathbb{\Phi})$-TRS{s} extended with \emph{all} structural rules, our proof of confluence of orthogonal $(\Quantale,\mathbb{\Phi})$-TRS{s} seems not to scale to $(\Quantale,\mathbb{\Phi})$-TRS{s} extended with the Archimedean (infinitary) rule. \begin{notation} We extend to $(\Quantale,\mathbb{\Phi})$-TRS{s} all notational conventions introduced for $\Quantale$-TRS{s}. \end{notation} Let us now have a closer look at the definition of $\to$. First, it is instructive to characterise it inductively as follows: \[ \infer{\mrel{\varepsilon}{a}{\to}{b}} {\mrel{\varepsilon}{a}{\mapsto}{b}} \qquad \infer{\mrel{\varepsilon}{\subst{e}{\sigma}}{\to}{\subst{f}{\sigma}}} {\mrel{\varepsilon}{e}{\to}{f}} \qquad \infer{\phi_i(\varepsilon) \Vdash f(g_1, \hdots, e, \hdots, g_n) \to f(g_1, \hdots, e', \hdots, g_n) } {\varepsilon \Vdash e \to e' & f: (\phi_1, \hdots, \phi_n) \in \Sigma} \] This characterisation clearly shows that performing reductions inside function symbols amplifies distances, whereas applying substitutions does not. This is because in the definition of $\to$ we apply \emph{the same} substitution to terms. Intuitively, this reflects the fact that passing identical (i.e. at a null distance) arguments to a Lipschitz continuous function produces identical results, and thus there is no distance amplification.
Indeed, rephrasing \autoref{def:cbetrs} to (balanced) \emph{equational theories}, we obtain the following substitution rule \[ \infer{\varepsilon \otimes \degree{\varone}{e}(\delta) \Vdash e[g/\varone] =_{E} f[v/\varone]} {\varepsilon \Vdash e =_{E} f & \delta \Vdash g =_{E} v} \] When $g$ and $v$ coincide --- so that $\delta = \mathsfit{k}$ --- we obtain $\varepsilon \otimes \degree{\varone}{e}(\delta) = \varepsilon \otimes \degree{\varone}{e}(\mathsfit{k}) = \varepsilon \otimes \mathsfit{k} = \varepsilon$, meaning that the distance $\varepsilon$ is non-expansively propagated. As usual, any $(\Quantale,\mathbb{\Phi})$-TRS{} $(\Sigma, \mapsto_{\vrelone})$ induces a $\Quantale$-ARS{} whose objects are $\Sigma$-terms and whose rewriting $\mathbb{\quantale}$-relation is defined by $$ R(e, f) \triangleq \bigvee \{\varepsilon \mid \varepsilon \Vdash e \to_{\vrelone} f\}. $$ \begin{example} { \renewcommand{\code}[1]{\underline{#1}} The main example of a $(\Quantale,\mathbb{\Phi})$-TRS{} we consider is system $\mathcal{W}$, as introduced at the beginning of this section. As for system $\bck$ of affine combinators, we can also consider extensions of $\mathcal{W}$. For instance, we can consider functions $f: \mathbb{N}^m \to \mathbb{N}$ with Lipschitz constants $(\phi_1, \hdots, \phi_m)$ and add to $\Sigma_{\mathcal{W}}$ (properly extended with constants $\code{0}$, $\code{1}, \hdots$ for natural numbers) a combinator $\code{f}$ for any such function, together with rules $$ \code{f} \cdot \bangg{\phi_1} \code{n}_1 \cdots \bangg{\phi_m} \code{n}_m \qstepto{0} \code{f(n_1, \hdots, n_m)}. $$ Another interesting example of a $(\Quantale,\mathbb{\Phi})$-TRS{} is obtained by grading the signature of system $\mathcal{B}$: any function symbol $+_{\epsilon}$ now takes signature $(\epsilon, 1-\epsilon)$, where by $\epsilon$ (resp. $1-\epsilon$) we mean multiplication by $\epsilon$ (resp. $1-\epsilon$). Notice that such a signature actually makes $+_\epsilon$ a \emph{contraction}~\cite{metric-spaces}. \citet{plotkin-quantitative-algebras-2016} have shown that the system obtained from such a signature together with the (equational) rules of idempotency, commutativity, and associativity provides a (quantitative) equational axiomatisation of the (finitary) Wasserstein-Kantorovich distance \cite{Villani/optimal-transport/2008}. } \end{example} \paragraph{Modal and Graded Equational Theories} Before moving to the metatheory of $(\Quantale,\mathbb{\Phi})$-TRS{s}, we extend quantitative{} equational theories to a graded setting \cite{dagnino-1}. \begin{definition} \label{def:graded-quantitative-algebras} A graded (quantitative{}) equational theory is a pair $\mathcal{E}= (\Sigma, \approx_E)$, where $\Sigma$ is a $\mathbb{\Phi}$-signature and $\approx_E$ is a $\mathbb{\quantale}$-ternary relation over $\Sigma$-terms. The $\mathbb{\quantale}$-ternary (equality) relation $=_{E}$ generated by $\approx_E$ is defined by the rules in \autoref{figure:graded-algebra}. We say that a graded equational theory is balanced if whenever $\varepsilon \Vdash e \approx_E f$, we have $\degree{\varone}{e} = \degree{\varone}{f}$, for any variable $x$. Notice that if equations in $\approx_{E}$ are balanced, then so are equations in $=_{E}$.
\end{definition} \begin{figure} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*5)/6,colframe=black,colback=black!0!white,arc=0mm] \[ \infer{\varepsilon \Vdash e =_{E} f}{\varepsilon \Vdash e \approx_{E} f} \qquad \infer{\mathsfit{k} \Vdash e =_{E} e}{} \qquad \infer{\varepsilon \Vdash f =_{E} e}{\varepsilon \Vdash e =_{E} f} \qquad \infer{\varepsilon \otimes \delta \Vdash e =_{E} g} {\varepsilon \Vdash e =_{E} f & \delta \Vdash f =_{E} g} \] \[ \infer{\bigotimes_i \phi_i(\varepsilon_i) \Vdash f(e_1, \hdots, e_n) =_{E} f(f_1, \hdots, f_n)} {\varepsilon_1 \Vdash e_1 =_{E} f_1 & \cdots & \varepsilon_n \Vdash e_n =_{E} f_n & f: (\phi_1, \hdots, \phi_n) \in \Sigma} \qquad \infer{\varepsilon \Vdash \subst{e}{\sigma} =_{E} \subst{f}{\sigma}} {\varepsilon \Vdash e =_{E} f} \] \vspace{-0.05cm} \[ \infer{\delta \Vdash e =_{E} f} {\varepsilon \Vdash e =_{E} f & \delta \leq \varepsilon} \qquad \infer{\bigvee \varepsilon_i \Vdash e =_{E} f} {\varepsilon_1 \Vdash e =_{E} f & \hdots & \varepsilon_n \Vdash e =_{E} f} \qquad \infer{\varepsilon \Vdash e =_{E} f} {\forall \delta \ll \varepsilon.\ \delta \Vdash e =_{E} f} \] \end{tcolorbox} } \caption{Graded Quantitative Equational Theory of $=_{E}$} \label{figure:graded-algebra} \end{figure} As for quantitative equational theories, we see that $E$ is reflexive, symmetric, and transitive; and that by regarding any $n$-ary function symbol $f$ as a function $f: \terms{\Sigma}{X}^n \to \terms{\Sigma}{X}$, we have $$ \bang{\phi_1}E(e_1, f_1) \otimes \cdots \otimes \bang{\phi_n}E(e_n, f_n) \leq E(f(e_1, \hdots, e_n), f(f_1, \hdots, f_n)), $$ meaning that function symbols behave as (generalised) Lipschitz continuous functions. Moreover, it is an easy exercise to prove that for any balanced graded theory the substitution rule \[ \infer{\varepsilon \otimes \degree{\varone}{e}(\delta) \Vdash e[g/\varone] =_{E} f[v/\varone]} {\varepsilon \Vdash e =_{E} f & \delta \Vdash g =_{E} v} \] is valid, from which we obtain the following substitution inequality: $$ E(e,f) \otimes \bang{\degree{\varone}{e}}E(g, v) \leq E(e[g/\varone], f[v/\varone]). $$ Finally, at this point of the work it should be clear that the connection between quantitative{} equational theories and $\Quantale$-TRS{s} extends \emph{mutatis mutandis} to graded equational theories and $(\Quantale,\mathbb{\Phi})$-TRS{s}, modulo the addition of structural rules in the definition of the latter. \subsection{Confluence and Critical Pairs, Part IV} We now progressively extend the theory of $\Quantale$-TRS{s} to $(\Quantale,\mathbb{\Phi})$-TRS{s}, beginning with \autoref{theorem:critical-pair-theorem}. The notions of an overlap and of a critical pair straightforwardly extend to $(\Quantale,\mathbb{\Phi})$-TRS{s}. Notice, however, that if $a_1 \qstepto{\varepsilon_1} b_1$ and $a_2 \qstepto{\varepsilon_2} b_2$ overlap at position $p$, so that there is a substitution $\sigma$ such that $\subst{a_2}{\sigma} = \mathcal{C}[a_1^{\sigma}]$ (with $\mathcal{C} = a_2^{\sigma}[-]_{p}$), then the critical peak is \begin{align*} \subst{b_2}{\sigma} \stackrel{\varepsilon_2}{\leftarrow} \subst{a_2}{\sigma} \qreduce{\degctx{\mathcal{C}}(\varepsilon_1)} \mathcal{C}[\subst{b_1}{\sigma}]. \end{align*} We thus have all the ingredients to extend \autoref{lemma:critical-pair} to $(\Quantale,\mathbb{\Phi})$-TRS{s}.
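For concreteness, here is a small hypothetical overlap (the rules and grades below are ours, chosen only for illustration): take rules $f(x) \qstepto{\varepsilon_1} g(x)$ and $h(f(y)) \qstepto{\varepsilon_2} y$, with $f, g: (\phi)$ and $h: (\psi)$, and assume $\psi \circ \phi = \bm{1}$ so that both rules are balanced. The two left-hand sides overlap at position $1$ of $h(f(y))$ via the substitution $\sigma = [y/x]$ and the context $\mathcal{C} = h([-])$, and the resulting critical peak is \begin{align*} y \stackrel{\varepsilon_2}{\leftarrow} h(f(y)) \qreduce{\psi(\varepsilon_1)} h(g(y)), \end{align*} where the grade of the inner step is scaled by $\degctx{\mathcal{C}} = \psi$.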
Compared to its $\Quantale$-TRS{} counterpart, however, the critical pair lemma for $(\Quantale,\mathbb{\Phi})$-TRS{s} presents a major difference: we can relax the linearity assumption and require rewriting rules to be \emph{left-linear} only. \begin{lemma}[Critical Pair, Graded] \label{lemma:critical-pair-graded} Let $\mathcal{R} = (\Sigma, \mapsto_{\vrelone})$ be a \emph{left-linear} (balanced) $(\Quantale,\mathbb{\Phi})$-TRS{.} If $R$ is locally confluent on all critical pairs of $\mathcal{R}$, then it is locally confluent. \end{lemma} \begin{remark} \label{rem:critical-pair-graded} Due to the absence of structural rules (besides weakening) in \autoref{def:cbetrs}, \autoref{lemma:critical-pair-graded} can be strengthened to prove local confluence of $\to_{R}$ given its local confluence on critical pairs of $\mathcal{R}$. \end{remark} \begin{proof}[Proof of \autoref{lemma:critical-pair-graded}] { \renewcommand{a}{a_1} \renewcommand{a}{a_2} \renewcommand{b}{b_1} \renewcommand{b}{b_2} Following \autoref{rem:critical-pair-graded}, we prove confluence of $\to$. The proof proceeds as for \autoref{lemma:critical-pair}, the main difference being the case of nested, non-critical redexes. We analyse this case in detail, and then extend it to the full case of a general peak. Suppose we have rules $a \qstepto{\varepsilon_1} b$, $a \qstepto{\varepsilon_2} b$. Without loss of generality, we consider the case in which the second reduction happens inside (an instance of) the first one. So there is a variable $x$ in $a$ and a substitution $\sigma$ such that $\subst{x}{\sigma}$ contains $\subst{a}{\sigma}$. Since $\mathcal{R}$ is left-linear, we know that there is just one occurrence of $x$ in $a$. Say it is at position $p$, so that we have $a[x]_{p}$. Say also that the relevant occurrence of $\subst{a}{\sigma}$ in $\subst{x}{\sigma}$ is at position $q$, so that $\subst{x}{\sigma}[\subst{a}{\sigma}]_{q}$ and, consequently, $\subst{a}{\sigma}[\subst{x}{\sigma}[\subst{a}{\sigma}]_{q}]_{p}$ and $\subst{a}{\sigma}[\subst{a}{\sigma}]_{pq}$. The rule $a \qstepto{\varepsilon_1} b$ may, in general, duplicate the single occurrence of $x$ in $a$. Say we have $b[x]_{p_1, \hdots, p_n}$, meaning that $b$ has $n$ occurrences of $x$, each at position $p_i$. Therefore, reducing $\subst{a}{\sigma}$ gives $b^{\sigma}[x^{\sigma}]_{p_1, \hdots, p_n}$, and thus $b^{\sigma}[a^{\sigma}]_{p_1q, \hdots, p_nq}$. We can now reduce each of the $n$ occurrences of $a^{\sigma}$ in $b^{\sigma}$. The distance obtained for each reduction, however, is not $\varepsilon_2$ but $\degree{p_iq}{b^{\sigma}}(\varepsilon_2)$. Putting things together, we obtain the following reduction diagram: \[ \xymatrix{ & \subst{a}{\sigma}[\subst{a}{\sigma}]_{pq} \ar[ld]_{\varepsilon_1} \ar[rd]^{\degree{pq}{a^{\sigma}}(\varepsilon_2)}& \\ b^{\sigma}[a^{\sigma}]_{p_1q, \hdots, p_nq} \ar@{->>}[rd]_{\bigotimes_i \degree{p_iq}{b^\sigma}(\varepsilon_2) \quad} & & a^{\sigma}[b^{\sigma}]_{pq} \ar[ld]^{\varepsilon_1} \\ & b^{\sigma}[b^{\sigma}]_{p_1q, \hdots, p_nq} & } \] To obtain local confluence, we claim $$ \varepsilon_1 \otimes \degree{pq}{a^{\sigma}}(\varepsilon_2) \leq \bigotimes_i \degree{p_iq}{b^\sigma}(\varepsilon_2) \otimes \varepsilon_1.
$$ Since the system is (left-linear and) balanced, we have $$\degree{p}{a} = \degree{x}{a} = \degree{x}{b} = \bigotimes_i \degree{p_i}{b}.$$ Moreover, we notice that \begin{align*} \degree{pq}{a^{\sigma}} &= \degree{p}{a} \circ \degree{q}{x^\sigma} \\ \bigotimes_i \degree{p_iq}{b^\sigma} &= \bigotimes_i \degree{p_i}{b} \circ \degree{q}{x^\sigma} \end{align*} Writing $\delta$ for $\degree{q}{x^\sigma}(\varepsilon_2)$, we obtain the desired inequality as follows: \begin{align*} \varepsilon_1 \otimes \degree{pq}{a^{\sigma}}(\varepsilon_2) = \degree{p}{a}(\delta) \otimes \varepsilon_1 = \bigotimes_i \degree{p_i}{b}(\delta) \otimes \varepsilon_1 &= \bigotimes_i \degree{p_iq}{b^\sigma}(\varepsilon_2) \otimes \varepsilon_1. \end{align*} This shows how to deal with nested non-critical redexes in isolation. In the general case, we have to consider all of that happening inside a larger term $e$. This amounts to considering cases of the form $\mathcal{C}[a^\sigma[a^\sigma]_{pq}]$. The proof proceeds exactly as in the isolated case, with the main difference that distances should now be scaled by $\degctx{\mathcal{C}}$. But, due to the structural properties of CBEs, this creates no problem at all. } \end{proof} \begin{theorem} \label{theorem:critical-pair-theorem-graded} Any \emph{left-linear} and terminating (balanced) $(\Quantale,\mathbb{\Phi})$-TRS{} locally confluent on its critical pairs is confluent. \end{theorem} \begin{proof} It directly follows from \autoref{prop:newmans-lemma} and \autoref{lemma:critical-pair-graded}. \end{proof} \subsection{Orthogonality} Even if useful on many $(\Quantale,\mathbb{\Phi})$-TRS{s}, \autoref{theorem:critical-pair-theorem-graded} does not apply to non-terminating systems, for which we can only infer local confluence; system $\mathcal{W}$ is a prime example of such a system. Yet, examples seem to suggest that $\mathcal{W}$ is indeed a confluent system. To make the latter intuition into a proved mathematical result, we notice that the critical-pair hypothesis of \autoref{theorem:critical-pair-theorem-graded} is trivially satisfied in the case of system $\mathcal{W}$, as the latter simply has no critical pair. Taking advantage of this observation, we now generalise the well-known result \cite{rosen-70} that orthogonality implies confluence to a quantitative{} and graded setting. \begin{definition} A $(\Quantale,\mathbb{\Phi})$-TRS{} is \emph{orthogonal} if it is left-linear and has no critical pair. \end{definition} As already remarked, our prime example of an orthogonal $(\Quantale,\mathbb{\Phi})$-TRS{} is system $\mathcal{W}$ of graded combinators. To prove confluence of orthogonal systems we employ Tait and Martin-L\"of's technique \cite{Barendregt/Book/1984}, properly instantiated to our rewriting setting (see also Aczel's technique \cite{aczel-general-church-rosser}). We extend $\to$ to a $\mathbb{\quantale}$-ternary relation $\circlearrow$ allowing us to perform arbitrary (even nested) reductions in a term at once. \begin{definition} Given a $(\Quantale,\mathbb{\Phi})$-TRS{} $\mathcal{R} = (\Sigma, \mapsto_{R})$, we inductively define the multi-step reduction $\circlearrow$ by the rules in \autoref{figure:multi-step-reduction}. We define the $\mathbb{\quantale}$-relation $\mathring{R}$ by $$ \mathring{R}(e, f) \triangleq \bigvee \{\varepsilon \mid \varepsilon \Vdash e \circlearrow_{R} f\}.
$$ \end{definition} \begin{figure} { \centering \begin{tcolorbox}[boxrule=0.5pt,width=(\linewidth*5)/6,colframe=black,colback=black!0!white,arc=0mm] \[ \infer{\mathsfit{k} \Vdash \varone \circlearrow_{R} \varone}{} \] \vspace{-0.05cm} \[ \infer{\bigotimes_i \phi_i(\varepsilon_i) \Vdash f(e_1, \hdots, e_n) \circlearrow_{R} f(f_1, \hdots, f_n)} {\varepsilon_1 \Vdash e_1 \circlearrow_{R} f_1 & \cdots & \varepsilon_n \Vdash e_n \circlearrow_{R} f_n & f: (\phi_1, \hdots, \phi_n) \in \Sigma_{\mathcal{R}} } \] \vspace{-0.05cm} \[ \infer{\varepsilon \otimes \bigotimes_i \degree{x_i}{a}(\delta_i) \Vdash a[v_1, \hdots, v_n/x_1, \hdots, x_n] \circlearrow_{R} b[w_1, \hdots, w_n/x_1, \hdots, x_n]} {\delta_1 \Vdash v_1 \circlearrow_{R} w_1 & \cdots & \delta_n \Vdash v_n \circlearrow_{R} w_n & \varepsilon \Vdash a \mapsto_{R} b } \] \vspace{-0.05cm} \[ \infer{\delta \Vdash e \circlearrow_{R} f} {\varepsilon \Vdash e \circlearrow_{R} f & \delta \leq \varepsilon} \] \end{tcolorbox} } \caption{Multi-step reduction $\circlearrow_{\mathcal{R}}$} \label{figure:multi-step-reduction} \end{figure} \begin{notation} We extend the usual notational convention to $\circlearrow$. Moreover, in what follows we often employ the vector notation $\vect{\varphi}$ for finite sequences $\varphi_1, \hdots, \varphi_n$ of symbols. \end{notation} We immediately notice that since $\circlearrow$ allows us to reduce several redexes in a term simultaneously, it gives a substitution property similar to the one of graded quantitative{} equational theories. \begin{lemma}[Substitution Lemma] \label{lemma:graded-substituion-lemma} The following inference is valid \[ \infer{ \varepsilon \otimes \bigotimes_i \degree{\varone_i}{e}(\delta_i) \Vdash e[\vect{v}/\vect{x}] \circlearrow_{R} f[\vect{w}/\vect{x}]} {\varepsilon \Vdash e \circlearrow_{R} f & \delta_1 \Vdash v_1 \circlearrow_{R} w_1 & \cdots & \delta_n \Vdash v_n \circlearrow_{R} w_n } \] Consequently, we also obtain the following substitution inequality: $$ \mathring{R}(e,f) \otimes \bigotimes_i \bang{\degree{\varone_i}{e}} \mathring{R}(v_i, w_i) \leq \mathring{R}(e[\vect{v}/\vect{x}], f[\vect{w}/\vect{x}]). $$ \end{lemma} \begin{proof}[Proof sketch] By induction on the definition of $\circlearrow_{R}$ and following the pattern of graded and quantitative substitution lemmas \cite{Gavazzo/LICS/2018,DBLP:phd/basesearch/Gavazzo19,DBLP:journals/pacmpl/LagoG22a}. \end{proof} Given a $(\Quantale,\mathbb{\Phi})$-TRS{} $\mathcal{R} = (\Sigma, \mapsto_{R})$, we are going to prove confluence of $R$ by actually proving a stronger result, namely confluence of $\to_{R}$. This is possible thanks to the absence of (infinitary) structural rules in the definition of a $(\Quantale,\mathbb{\Phi})$-TRS{} (\autoref{def:cbetrs}). To achieve such a result, we shall prove that $\circlearrow_{R}$ has the diamond property. Since ${\to_{R}} \subseteq {\circlearrow_{R}} \subseteq {\to_{R}^*}$ (and thus $R \leq \mathring{R} \leq R^*$), confluence of $\to_{R}$ follows. Before proving the diamond property for $\circlearrow_{R}$, let us remark a useful property of orthogonal systems, namely that if we have a (necessarily unique) rule $a \qstepto{\varepsilon} b$ and we reduce a term of the form $a[\vect{v}/\vect{x}]$, then either we reduce the (instance of the) redex $a$, or the term obtained is itself an instance of the redex $a$, i.e. it is of the form $a[\vect{w}/\vect{x}]$, for some terms $\vect{w}$.
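For a small (hypothetical) instance of this dichotomy and of the grade bookkeeping of $\circlearrow_{R}$, consider a rule $\varepsilon \Vdash f(x) \mapsto_{R} g(x)$ with $f, g: (\phi)$, together with a ground rule $\delta \Vdash c \mapsto_{R} c'$ (both rules are ours, for illustration only). Reducing $f(c) = f(x)[c/x]$ with $\circlearrow_{R}$ either fires the root redex, possibly reducing the argument at the same time, as in \[ \varepsilon \otimes \phi(\delta) \Vdash f(c) \circlearrow_{R} g(c'), \] using $\degree{x}{f(x)} = \phi$ (the first alternative), or it only reduces inside the argument, as in $\phi(\delta) \Vdash f(c) \circlearrow_{R} f(c')$, where $f(c')$ is again an instance of the redex $f(x)$ (the second alternative). The same overall grade $\varepsilon \otimes \phi(\delta)$ is obtained by composing the two ordinary steps $f(c) \to_{R} g(c)$ and $g(c) \to_{R} g(c')$, in accordance with ${\circlearrow_{R}} \subseteq {\to_{R}^*}$.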
\begin{remark} Given an orthogonal $(\Quantale,\mathbb{\Phi})$-TRS{,} suppose we have a reduction $a[\vect{v}/\vect{x}] \qmultireduce{\varepsilon} e$ not an instance of weakening. Then, either: \begin{enumerate} \item $e = b[\vect{w}/\vect{x}]$ with $v_i \qmultireduce{\delta_i} w_i$, for any $i$; $a \qstepto{\eta} b$; and $\varepsilon = \eta \otimes \bigotimes_i \degree{x_i}{a}(\delta_i)$. Or \item $e = a[\vect{w}/\vect{x}]$ with $v_i \qmultireduce{\delta_i} w_i$, for any $i$, and $\varepsilon = \bigotimes_i \degree{x_i}{a}(\delta_i)$. \end{enumerate} \end{remark} \begin{proposition} Let $\mathcal{R} = (\Sigma, \mapsto_{R})$ be an orthogonal $(\Quantale,\mathbb{\Phi})$-TRS. Then, the relation $\circlearrow_{R}$ has the diamond property. That is, if $f_1 \stackrel{\varepsilon_1}{\circleleftarrow} e \qmultireduce{\varepsilon_2} f_2$, then there exists a term $f$ such that $f_1 \qmultireduce{\delta_1} f \stackrel{\delta_2}{\circleleftarrow} f_2$ and $\varepsilon_1 \otimes \varepsilon_2 \leq \delta_1 \otimes \delta_2$. \end{proposition} \begin{proof} The proof is by induction on $e$ with a case analysis on the defining clauses of $\circlearrow$. The interesting case is for $e = a[\vect{v}/\vect{x}]$. By the previous remark, there are two possibilities for the reduction $a[\vect{v}/\vect{x}] \qmultireduce{\varepsilon_1} f_1$ (the case for weakening is straightforward). \begin{enumerate} \item $f_1 = b[\vect{w}/\vect{x}]$ with $v_i \qmultireduce{\delta_i} w_i$, $a \qstepto{\varepsilon} b$, and $\varepsilon_1 = \varepsilon \otimes \bigotimes_i \degree{x_i}{a}(\delta_i)$. Since $\mathcal{R}$ is orthogonal, the rule $a \qstepto{\varepsilon} b$ is unique and $f_2$ must be of the form $a[\vect{u}/\vect{x}]$ with $v_i \qmultireduce{\eta_i} u_i$ and $\varepsilon_2 = \bigotimes_i \degree{x_i}{a}(\eta_i)$ (otherwise, we would have a critical pair). That is, we have the peak: \[ \xymatrix@=1.8pc{ & a[\vect{v}/\vect{x}]\ar[ld]_{ \varepsilon \otimes \bigotimes_i \degree{x_i}{a}(\delta_i)}|\circ \ar[rd]^{\bigotimes_i \degree{x_i}{a}(\eta_i)}|\circ & \\ b[\vect{w}/\vect{x}] & & a[\vect{u}/\vect{x}] } \] By the induction hypothesis on each $v_i$, we close the diagram \[ \vcenter{ \xymatrix@=0.9pc{ & v_i\ar[ld]_{\delta_i}|\circ \ar[rd]^{\eta_i}|\circ & \\ w_i \ar[rd]_{\hat{\eta}_i}|\circ & & u_i \ar[ld]^{\hat{\delta}_i}|\circ \\ & z_i & } } \qquad \quad \delta_i \otimes \eta_i \leq \hat{\eta}_i \otimes \hat{\delta}_i, \] so that we obtain \[ \xymatrix@=1.8pc{ & a[\vect{v}/\vect{x}]\ar[ld]_{ \varepsilon \otimes \bigotimes_i \degree{x_i}{a}(\delta_i)}|\circ \ar[rd]^{\bigotimes_i \degree{x_i}{a}(\eta_i)}|\circ & \\ b[\vect{w}/\vect{x}] \ar[rd]_{ \bigotimes_i \degree{x_i}{b}(\hat{\eta}_i)}|\circ & & a[\vect{u}/\vect{x}] \ar[ld]^{\varepsilon \otimes \bigotimes_i \degree{x_i}{a}(\hat{\delta}_i)}|\circ \\ & b[\vect{z}/\vect{x}] & } \] Since the system is balanced, the inequalities $\delta_i \otimes \eta_i \leq \hat{\eta}_i \otimes \hat{\delta}_i$ imply $$ \bigotimes_i \degree{x_i}{a}(\delta_i) \otimes \bigotimes_i \degree{x_i}{a}(\eta_i) \leq \bigotimes_i \degree{x_i}{b}(\hat{\eta}_i) \otimes \bigotimes_i \degree{x_i}{b}(\hat{\delta}_i) $$ and thus $$ \varepsilon \otimes \bigotimes_i \degree{x_i}{a}(\delta_i) \otimes \bigotimes_i \degree{x_i}{a}(\eta_i) \leq \bigotimes_i \degree{x_i}{b}(\hat{\eta}_i) \otimes \varepsilon \otimes \bigotimes_i \degree{x_i}{b}(\hat{\delta}_i). $$ \item $f_1 = a[\vect{w}/\vect{x}]$ with $v_i \qmultireduce{\delta_i} w_i$ and $\varepsilon_1 = \bigotimes_i \degree{x_i}{a}(\delta_i)$.
Now, if $e \qmultireduce{\varepsilon_2} f_2$ is an instance of a rule $a \qstepto{\varepsilon} b$ (i.e. the `base' case in the definition of $\circlearrow$), then we proceed as in the previous case. Otherwise, we must have $f_2 = a[\vect{u}/\vect{x}]$ with $v_i \qmultireduce{\eta_i} u_i$ and $\varepsilon_2 = \bigotimes_i \degree{x_i}{a}(\eta_i)$. That is, we have the peak: \[ \xymatrix@=1.8pc{ & a[\vect{v}/\vect{x}]\ar[ld]_{ \bigotimes_i \degree{x_i}{a}(\delta_i)}|\circ \ar[rd]^{\bigotimes_i \degree{x_i}{a}(\eta_i)}|\circ & \\ a[\vect{w}/\vect{x}] & & a[\vect{u}/\vect{x}] } \] We proceed applying the induction hypothesis as in the previous point and close the diagram as follows, relying on the substitution lemma: \[ \xymatrix@=1.8pc{ & a[\vect{v}/\vect{x}]\ar[ld]_{ \bigotimes_i \degree{x_i}{a}(\delta_i)}|\circ \ar[rd]^{\bigotimes_i \degree{x_i}{a}(\eta_i)}|\circ & \\ a[\vect{w}/\vect{x}] \ar[rd]_{ \bigotimes_i \degree{x_i}{a}(\hat{\eta}_i)}|\circ & & a[\vect{u}/\vect{x}] \ar[ld]^{\bigotimes_i \degree{x_i}{a}(\hat{\delta}_i)}|\circ \\ & a[\vect{z}/\vect{x}] & } \] Indeed, $\delta_i \otimes \eta_i \leq \hat{\eta}_i \otimes \hat{\delta}_i$ implies $$ \bigotimes_i \degree{x_i}{a}(\delta_i) \otimes \bigotimes_i \degree{x_i}{a}(\eta_i) \leq \bigotimes_i \degree{x_i}{a}(\hat{\eta}_i) \otimes \bigotimes_i \degree{x_i}{a}(\hat{\delta}_i). $$ \end{enumerate} \end{proof} \begin{corollary} \label{corollary:multistep-is-diamond} Let $\mathcal{R} = (\Sigma, \mapsto_{R})$ be an orthogonal $(\Quantale,\mathbb{\Phi})$-TRS. Then $\mathring{R}$ has the diamond property. \end{corollary} We can finally prove confluence of orthogonal systems. \begin{theorem} \label{thm:orthogonality-implies-confluence} Let $\mathcal{R} = (\Sigma, \mapsto_{R})$ be an orthogonal $(\Quantale,\mathbb{\Phi})$-TRS. Then, $R$ is confluent. \end{theorem} \begin{proof} Let $S \triangleq \dual{R}$. We prove $S^*;R^* \leq R^*;S^*$. By adjunction, it is sufficient to prove $S^* \leq (R^*;S^*) / R^*$. We proceed by fixed point induction, showing $\Delta \leq (R^*;S^*) / R^*$ and $S; (R^*;S^*) / R^* \leq (R^*;S^*) / R^*$. The former is straightforward, whereas for the latter it is sufficient to prove $S; (R^*;S^*) / R^*;R^* \leq R^*;S^*$, i.e. $S;R^*;S^* \leq R^*;S^*$. Since $S \leq \mathring{S}$, it is enough to show $\mathring{S};R^*;S^* \leq R^*;S^*$ and thus $R^* \leq \mathring{S} \setminus (R^*;S^*)/ S^*$. We do a second fixed point induction, hence proving $\Delta \leq \mathring{S} \setminus (R^*;S^*)/ S^*$ and $R; \mathring{S} \setminus (R^*;S^*)/ S^* \leq \mathring{S} \setminus (R^*;S^*)/ S^*$. The former obviously holds since $\mathring{S} \leq S^*$, whereas for the latter we first use adjunction and reduce the proof obligation to $$ \mathring{S}; R; \mathring{S} \setminus (R^*;S^*)/ S^*; S^* \leq R^*;S^*, $$ i.e. $\mathring{S}; R; \mathring{S} \setminus (R^*;S^*) \leq R^*;S^*$. Since $R \leq \mathring{R}$, it is sufficient to prove $\mathring{S}; \mathring{R}; \mathring{S} \setminus (R^*;S^*) \leq R^*;S^*$. Now, by \autoref{corollary:multistep-is-diamond}, we have $\mathring{S}; \mathring{R} \leq \mathring{R};\mathring{S}$, and thus: \begin{align*} \mathring{S}; \mathring{R}; \mathring{S} \setminus (R^*;S^*) \leq \mathring{R};\mathring{S}; \mathring{S} \setminus (R^*;S^*) \leq \mathring{R}; R^*;S^* \leq R^*;R^*;S^* \leq R^*;S^*.
\end{align*} \end{proof} \begin{remark} By replacing $R$ with $\to_{R}$ (and thus replacing algebraic operations on $\mathbb{\quantale}$-relations with their ternary relation counterparts) in the proof of \autoref{thm:orthogonality-implies-confluence}, we obtain confluence of $\to_{R}$. \end{remark} We conclude this section by observing that system $\mathcal{W}$ is orthogonal, and thus by \autoref{thm:orthogonality-implies-confluence} it is confluent. \begin{theorem} System $\mathcal{W}$ of graded combinatory logic is confluent. \end{theorem} To the best of the authors' knowledge, this is the first confluence result for a system of graded combinators endowed with a quantitative and modal operational (reduction) semantics. Such a result can be seen as a first step towards a foundational study of operational properties of graded and coeffectful calculi. \section{CONCLUSION, RELATED, AND FUTURE WORK} \label{sect:conclusion} In this paper, we have started the development of a systematic theory of metric and quantitative rewriting systems. The abstract nature of the notion of a distance employed makes our framework robust and allows for several conceptual interpretations of our rewriting systems. The latter, in fact, can be thought of not only as metric and quantitative systems, but also as substructural (e.g. fuzzy or monoidal) and modal or coeffectful systems, this way suggesting possible applications of our theory to the development of quantitative and modal operational semantics of coeffectful programming languages. We have briefly hinted at this at the end of the previous section (the authors are currently working on applications of quantitative{} rewriting to study operational properties of foundational graded $\lambda$-calculi). We have focused on fundamental definitions and confluence properties of abstract systems, as well as linear and graded term rewriting systems. Developing a general theory of quantitative{} rewriting systems is an ambitious project that cannot be exhausted in a single paper. Among the many possible extensions of the theory presented in this paper, we mention the development of a theory of reduction strategies and their application to metric word problems; the design of completion algorithms for quantitative{} term rewriting systems (both linear and graded), as well as their quantitative{} correctness with respect to families of metric word problems; and the study of inductive and termination properties of quantitative systems. The latter, in particular, seem to suggest that new rewriting properties can be discovered by pushing the quantitative{} enrichment one step forward, this way making the notions of termination, induction, confluence, etc.\ quantitative{} themselves. Finally, we plan to investigate applications of quantitative{} systems in the spirit of those outlined in \autoref{section:long-intro}. \paragraph{Related Work} To the best of the authors' knowledge, this is the first systematic analysis of quantitative{} and metric rewriting systems. This, of course, does not mean that isolated forms of quantitative{} rewriting have not been proposed in the literature. For instance, specific forms of weighted reductions have been employed \cite{cost-analysis-term-rewriting-1,cost-analysis-term-rewriting-2} in the study of cost analysis of rewriting systems. Measured abstract rewriting systems, i.e.
\emph{abstract} rewriting systems with a reduction relation enriched in a monoid, have been introduced by \citet{van-oostrom-2016} to study normalisation properties by random descent. In that context, a quantitative notion of confluence is introduced which, however, differs from ours in the way it compares distances between objects. In fact, given a peak $b_1 \stackrel{\varepsilon_1}{\leftarrow} a \qreduce{\varepsilon_2} b_2$ and a valley $b_1 \qreduce{\delta_1} b \stackrel{\delta_2}{\leftarrow} b_2$, it is required that $\varepsilon_1 \otimes \delta_1 \leq \varepsilon_2 \otimes \delta_2$. Even if this requirement has a natural reading when it comes to studying normalisation properties of rewriting, it does not fit the algebra of quantitative{} relations and seems ineffective when applied to the study of distances. We also remark that measured rewriting systems have been studied in the context of abstract rewriting only, whereas our theory of quantitative{} rewriting systems covers both abstract and (graded) term-based systems. At the time of writing, the authors have discovered that abstract \emph{fuzzy} rewriting systems have been studied by \citet{fuzzy-rewriting-1,fuzzy-rewriting-2} relying on the theory of fuzzy relations \cite{Fuzzy-relational-systems}. Even if the aforementioned theory of fuzzy rewriting systems does \emph{not} cover term-based systems (neither non-expansive nor graded), the development of fuzzy abstract rewriting systems is in line with our \autoref{section:qars}. In particular, \citet{fuzzy-rewriting-1,fuzzy-rewriting-2} define fuzzy notions of confluence and prove a quantitative{} Newman's lemma similar to (the pointwise version of) ours. In fact, instantiating the theory of \autoref{section:qars} to Fuzzy quantales, we obtain an extension\footnote{\autoref{section:qars} contains results on abstract $\mathbb{\quantale}$-systems, such as the quantitative{} Hindley-Rosen lemma, that are not given for Fuzzy systems.} of the theory of abstract Fuzzy rewriting systems. Remarkably, our pointwise analysis of quantitative{} Newman's Lemma is close to the one by \citet{fuzzy-rewriting-1}. Besides the absence of a theory of term-based systems, major differences between our work and the one on Fuzzy rewriting can be found even at the level of abstract systems. First, as already remarked, our approach is more general and subsumes (and extends) Fuzzy rewriting. Moreover, our theory of abstract $\mathbb{\quantale}$-systems is largely pointfree and builds upon general relational techniques nontrivially extending the relational theory of abstract rewriting by \citet{backshouse-calculational-approach-to-mathematical-induction}, as well as other pointfree theories of rewriting systems \cite{Struth-abstract-abstract-reduction,Struth-algebraic-notions-of-termination}. In addition to all of that, we mention that, curiously, Fuzzy rewriting systems have not been applied to metric reasoning. This is an interesting observation, as it turns out that general quantitative{} Fuzzy equational and algebraic theories \cite{fuzzy-equational-logic} have been developed before the quantitative{} algebraic theories by \citet{plotkin-quantitative-algebras-2016}. However, even if mathematically sophisticated, such fuzzy theories have not been applied (to the best of the authors' knowledge) either to metric reasoning or to the semantics of programming languages. Contrary to the case of rewriting systems, general theories of quantitative equational reasoning have been developed.
In addition to the aforementioned Fuzzy equational theories, we mention the rich research line on quantitative algebras and equational theories~\cite{plotkin-quantitative-algebras-2016,plotkin-quantitative-algebras-2017,plotkin-quantitative-algebras-2018,plotkin-quantitative-algebras-2018-bis,plotkin-quantitative-algebras-2021,plotkin-quantitative-algebras-2021-bis,DBLP:conf/lics/MioSV21}. With the exception of the recent work by \citet{dagnino-1}, such theories are usually not graded and, to the best of the authors' knowledge, are not capable of describing non-linear systems, such as system $\mathcal{W}$ of graded combinators. From that point of view, our definition of a graded quantitative{} equational theory can be seen as a first extension of quantitative{} equational theories to graded systems. \section*{Acknowledgements} The authors would like to thank Melissa Antonelli, Francesco Dagnino, Ugo Dal Lago, and Claudia Faggian for their helpful suggestions and stimulating conversations on the subject.
1,314,259,992,757
arxiv
\section{Introduction} \label{sec:introduction} String representations recently received a lot of attention, especially for planar graphs. Scheinerman \cite{cit:scheinerman} had asked in 1984 whether every planar graph can be represented as the intersection graph of segments in the plane. This was settled partially by Chalopin, Gon\c{c}alves and Ochem \cite{cit:chalopin-string}, who showed that every planar graph has a 1-string representation, i.e., a representation as an intersection graph of strings such that any two strings may cross at most once. Extending their result, in 2009 Chalopin and Gon\c{c}alves finally settled Scheinerman's conjecture in the positive~\cite{cit:chalopin-seg}. We later showed that 1-string representations of planar graphs can be achieved even with orthogonal curves with at most 2 bends \cite{cit:jocg}. A number of other papers gave string representations for subclasses of planar graphs that are simpler to build and/or have other useful properties, see for example \cite{FMP91,KobourovUeckertVerbeek,cit:chaplick,cit:mfcs,cit:cccg}. Testing whether a graph has a string representation is NP-hard \cite{cit:kratochvil-II,cit:middendorf} and in NP \cite{cit:schaefer}; the latter is not obvious because string representations may require exponentially many bends for non-planar graphs \cite{KratochvilM1994}. \iffalse One of the reasons that the proofs for building 1-string representations \cite{cit:chalopin-string,cit:chalopin-seg,cit:jocg} are quite involved is that an incremental approach does not seem to work. This approach is used by many algorithms that draw planar graphs and works as follows: Fix a vertex order $v_1,\dots,v_n$, and draw for increasing $k$ the graph $G_k$ induced by $v_1,\dots,v_k$ by adding the next vertex $v_{k+1}$. For this to work, some invariant must be kept on the drawing $\Gamma_k$ of $G_k$. One of the key ingredients for such an invariant is that $\Gamma_k$ ``reflects'' the planar embedding of $G_k$ in some sense, so that the neighbours of $v_{k+1}$ in $G_k$ are ``kept together'' in $\Gamma_k$, allowing to add $v_{k+1}$. For 1-string representations, an edge $(v,w)$ means that the strings $\curve{v}$ and $\curve{w}$ cross exactly once, and hence ``trade places'' in some sense, making it difficult to maintain a planar embedding. \fi \iffalse We found 1-string representations of planar graphs hard to build (and also hard to verify the correctness of created 1-string representation), because it is hard to find for each edge {\em where} the crossing occurs. More specifically, in our constructions the order of the crossings along the curve of a vertex seemed to be in no particular order, and certainly not in the order in which the edges appeared at the vertex in a planar embedding. This also prevented most of the typically ``incremental'' approaches to planar graph drawing (add a few vertices at a time while maintaining some invariant) to work, because we could not keep the ends of strings of predecessors of a vertex closely together in general. \fi \medskip\noindent{\bf Our results: } In this paper, we study the following question: Does every planar graph have a 1-string representation where the order of crossings along curves {\em preserves} the planar embedding in the sense that the order of crossings along the curve of $v$ corresponds to the cyclic order of edges around $v$ in some planar embedding? 
This is motivated by the fact that we found string representations quite hard to read; during our work on \cite{cit:jocg} we struggled to verify correctness in some cases because the crossing of curves for an edge occurred at unexpected places. Furthermore, having an order-preserving string representation could make it easier to create such representations by using the typical incremental approach that adds one vertex on the outer-face at a time; for this it would be especially helpful if such representations were also {\em outer-string} in the sense that ends of strings are on the infinite region defined by the representation. We show the following: \begin{itemize} \item Not all planar graphs have order-preserving 1-string representations. In fact, we can construct a planar 3-tree that has no such representation. \item For some subclasses of planar partial 3-trees, we construct order-pre\-ser\-ving 1-string representations. For outer-planar graphs, these are additionally outer-string (and use segments), while for the other graph classes we show that order-preserving outer-1-string representations do not always exist. \end{itemize} We are not aware of any previous results on order-preserving 1-string repre\-sen\-tations. (On the other hand, string-representations of planar graphs obtained from contact representations are usually order-preserving, but strings then intersect twice, at least for some edges.) \todo{TB: Changed here to avoid some of the criticisms.} The closest related results are on the {\em abstract graph realizability problem} \cite{cit:kratochvil-II,cit:middendorf}, which asks to draw a graph such that only a given set of edge-pairs are allowed to cross. \iffalse not cited yet: Middendorf and Pfeiffer~\cite{cit:stretching} \fi \section{Definitions} A {\em string representation} $\ensuremath{\mathcal{R}}$ assigns a curve $\curve{v}$ in the plane to every vertex $v$ in a graph in such a way that $(v,w)$ is an edge if and only if $\curve{v}$ intersects $\curve{w}$. (Throughout the paper, bold-face $\curve{x}$ always denotes the curve assigned to vertex $x$.) We demand that $\curve{u}$ and $\curve{v}$ intersect only if there is a proper crossing, i.e., any sufficiently small circle centered at an intersection-point crosses $\curve{u}$, $\curve{v}$, $\curve{u}$, $\curve{v}$ in that order. (In particular no curve $\curve{u}$ should end on another curve $\curve{v}$, though such a touching-point could always be resolved into a proper crossing by extending $\curve{u}$ a bit.) We also do not allow three curves to share a point. A {\em 1-string representation} is a string representation such that any two curves cross at most once. A {\em segment representation} uses straight-line segments in place of strings. A {\em $B_k$-VPG-representation} uses orthogonal curves with at most $k$ bends as strings. A string representation $\ensuremath{\mathcal{R}}$ divides the plane into connected regions. The {\em contour} is the infinite region of $\Bbb{R}^2-\ensuremath{\mathcal{R}}$. A string representation is called {\em weakly outer-string} if all vertex curves are incident to the contour. It is called {\em outer-string} if all vertex curves have an end incident to the contour.%
\footnote{One could distinguish this further by whether both ends must be on the contour or whether one end suffices.
All our outer-string constructions have both ends on the contour, while all our impossibility-results hold even if only one end is required to be on the contour, so the distinction does not matter for the results in our paper.} A weakly outer-string representation can be made outer-string by ``doubling back'' along the curve of each vertex, but this does not work for an outer-1-string representation, because doubling back along the curve would make some curves cross twice. \iffalse One can easily see that a graph has an end-outer 1-string representation if and only it is a {\em circle graph} (i.e., intersection-graph of chords of a circle), by drawing a closed curve along the contour-boundary that connects the ends of strings. \fi See \cite{Cabello2016,cit:Keil2016} and the references therein for more on outer-string representations. In this paper, we only consider connected graphs. A graph is called {\em planar} if it can be drawn in the plane without crossing. Such a planar drawing $\Gamma$ defines, by enumerating edges around vertices in clockwise order, a {\em rotation scheme}, i.e., an assignment of a cyclic order of edges at each vertex. From the rotation scheme, one can read the {\em faces}, i.e., the vertices and edges that are incident to each connected piece of $\Bbb{R}^2-\Gamma$. A {\em plane graph} is a planar graph with a fixed rotation scheme. An {\em outer-planar graph} is a planar graph that has a rotation scheme such that all vertices are incident to one face. An {\em outer-plane graph} is a plane graph with the rotation system that describes such an embedding. A {\em $k$-tree} (used here only for $k=2,3$) is a graph that has a vertex order $v_1,\dots,v_n$ such that $v_1,\dots,v_k$ is a clique, and each $v_i$ for $i>k$ has exactly $k$ neighbours in $v_1,\dots,v_{i-1}$, and they form a clique. A {\em partial $k$-tree} is a subgraph of a $k$-tree. Every outer-planar graph is a partial 2-tree. Fix a rotation scheme of a graph. We say that a 1-string representation is {\em order-preserving} with respect to the rotation scheme if for any vertex $v$, we can walk along curve $\curve{v}$ from one end to the other and encounter the crossings with $\curve{w_1},\dots, \curve{w_k}$ in the same order in which the neighbours $w_1,\dots,w_k$ of $v$ appear in the cyclic order of edges around $v$. This leaves open the choice which neighbour of $v$ should be $w_1$, since the order at $v$ is cyclic while the order along $\curve{v}$ is not.% \footnote{Once we fix how to break up the cyclic order at all vertices, an order-preserving 1-string representation can be described abstractly as a graph $H$ and can be realized if and only if $H$ is planar. Hence the problem is interesting only if we keep this choice.} \section{Graphs with no order-preserving representations} \label{sec:does-not-exits} In this section, we show that there exist planar graphs that have no 1-string representation that preserves the order of any planar embedding. To define them, we need the following graph operation: Given a plane graph $G$, the {\em stellation} of $G$ is obtained by inserting a new vertex into every face of $G$, and making it adjacent to all vertices incident to that face. The {\em triple-stellation} of $G$ is obtained by stellating $G$ to get $G'$, stellating $G'$ to get $G''$, and finally stellating $G''$. \begin{lemma} \label{lem:stellation} Let $G$ be a plane graph with minimum degree 3 and at least $|V(G)|+1$ faces that are triangles. 
Then the triple-stellation $G'''$ of $G$ has no order-preserving 1-string representation with respect to this rotation scheme. \end{lemma} \begin{proof} Assume for contradiction we had such a 1-string representation $\ensuremath{\mathcal{R}}$, and let $\ensuremath{\mathcal{R}}_G$ be the induced 1-string representation of $G$, which is also order-preserving. The following notation will be helpful: If $a,c$ are neighbours of $b$, then let $\curve{b}[a,c]$ be the stretch of $\curve{b}$ between the intersection with $\curve{a}$ and $\curve{c}$. Consider a face-vertex-incidence in $G$, which can be described by giving a vertex $b$ and two neighbours $a,c$ of $b$ that are consecutive in the clockwise order at $b$. We call such a face-vertex-incidence {\em unbroken} if (in $\ensuremath{\mathcal{R}}_G$) $\curve{b}[a,c]$ contains no other crossing, else we call it {\em broken}. Since $\ensuremath{\mathcal{R}}_G$ is order-preserving, for every vertex $b$ in $G$ only one face-vertex-incidence at $b$ is broken. Since $G$ has at least $|V(G)|+1$ triangular faces, there exists a face $T=\{u,v,w\}$ of $G$ such that all face-vertex-incidences at $T$ are unbroken. We will find a contradiction at the stellation vertices that were placed in $T$. See also Fig.~\ref{fig:stellation}. Let $x$ be the vertex that (during the stellation of $G$ to get $G'$) was placed in face $T$. We claim that $\curve{x}$ must intersect $\curve{u}$ in $\curve{u}[v,w]$. To see this, recall that $\deg_G(u)\geq 3$, hence $u$ has at least one other neighbour $u'$ in $G$. Since the face-incidence at $u$ is unbroken, $\curve{u}[v,w]$ contains no other crossing of $\ensuremath{\mathcal{R}}_G$, so $\curve{u'}$ intersects $\curve{u}$ outside this stretch. Since $T$ is a face in $G$, the (clockwise or counter-clockwise) order of neighbours at $u$ in $G'$ contains $u',v,x,w$. To maintain this order in the string representation, the intersection between $\curve{x}$ and $\curve{u}$ (in $\ensuremath{\mathcal{R}}$) must be on $\curve{u}[v,w]$. Similarly one argues that $\curve{x}$ intersects $\curve{v}[u,w]$ and $\curve{w}[u,v]$. Let $C$ be the region bounded by $\curve{u}[v,w]\cup \curve{w}[u,v]\cup \curve{v}[w,u]$. Curve $\curve{x}$ intersects $\delta C$ three times, and no more since curves intersect at most once in a 1-string representation. So $\curve{x}$ starts (say) inside $C$, crosses $\delta C$ to go outside, crosses $\delta C$ to go inside, and then crosses $\delta C$ again to end outside. Between the second and third crossing, $\curve{x}$ contains a stretch that is inside $C$; after possible renaming of $\{u,v,w\}$ we assume that this is $\curve{x}[v,w]$. This stretch splits $C$ into two parts, say $C'$ (incident to parts of $\curve{u}$) and $C^r$ (incident to the crossing of $\curve{v}$ and $\curve{w}$). \todo{Unimportant change-request: Make $\curve{w}$ and $\curve{v}$ cross ``the other way''. Extend $\curve{y}$ to cross $\curve{w}$ from outside.} \begin{figure}[ht] \hspace*{\fill} \includegraphics[width=0.2\linewidth]{figures_fig1_graph.pdf} \hspace*{\fill} \includegraphics[width=0.35\linewidth]{figures_fig1_representation.pdf} \hspace*{\fill} \caption{For the proof of Lemma~\ref{lem:stellation}.} \label{fig:stellation} \end{figure} Let $y$ be the vertex that (during the stellation of $G'$ to get $G''$) was placed in the face $\{v,w,x\}$ of $G'$. Since $v,w,x$ all have degree 3 or more in $G'$, as before one argues that $\curve{y}$ must intersect $\curve{x}[v,w]$, $\curve{w}[x,v]$ and $\curve{v}[w,x]$. 
Curve $\curve{y}$ intersects $\delta C'$ (in $\curve{x}[v,w]$), but cannot intersect $\delta C'$ a second time, else it would cross $\curve{u}$ (but $(u,y)\not\in E$) or would cross one of $\curve{x},\curve{v},\curve{w}$ twice (which is not allowed). Hence $\curve{y}$ starts inside $C'$, then crosses $\curve{x}$, and then crosses one of $\curve{v}$ and $\curve{w}$. Up to renaming of $\{v,w\}$ we may assume that $\curve{y}$ crosses $\curve{v}$ first. Hence $\curve{y}[x,v]$ splits $C^r$ into two parts, say $C''$ (incident to parts of $\curve{w}$) and $C'''$ (incident to the crossing of $\curve{v}$ and $\curve{x}$). Now finally consider the vertex $z$ that was placed in $\{x,y,v\}$ when stellating $G''$ to obtain $G'''$. As before one argues that $\curve{z}$ has an end inside $C'$, because it crosses $\curve{x}$ in the stretch $\curve{x}[v,y]\subset \curve{x}[v,w]$, and it cannot cross $\delta C'$ again. But we can also see that $\curve{z}$ has an end inside $C''$, since it crosses $\curve{y}[x,v]$ and crosses no other curve on the boundary of $C''$. But this means that $\curve{z}$ has both ends outside $C'''$, contradicting that it must intersect the boundary of $C'''$ three times to respect the edge-orders at $x,y,v$. Contradiction, so $G'''$ does not have an order-preserving 1-string representation. \qed \end{proof} \begin{theorem} \label{thm:notOrder} There exists a planar 3-tree that has no order-preserving 1-string representation. \end{theorem} \begin{proof} Start with an arbitrary planar 3-tree $G$ with $n\geq 6$ vertices; this has minimum degree 3 and $2n-4\geq n+2$ triangular faces in its (unique) rotation scheme. Stellating a 3-tree gives again a 3-tree, so by Lemma~\ref{lem:stellation} the triple-stellation of $G$ is a 3-tree that has no order-preserving 1-string representation. \qed \end{proof} \section{Order-preserving outer-1-string representations} Now we turn towards positive results and show that every outer-plane graph has an order-preserving outer-1-string representation. We first discuss one existing result that does not quite achieve this. It is easy to show that every outer-planar graph can be represented as a touching graph of line segments (see e.g.~\cite{KobourovUeckertVerbeek} for much broader results). The standard way to do this (see also Fig.~\ref{fig:outplEx}) results, after extending the segments a bit, in a segment-representation that is order-preserving and weakly outer-string. However, this does not quite achieve our goal, because the ends of segments are not necessarily on the outer-face. \begin{figure}[ht] \hspace*{\fill} \includegraphics[page=1,width=0.3\linewidth]{figures_outplEx.pdf} \hspace*{\fill} \includegraphics[page=2,width=0.3\linewidth]{figures_outplEx.pdf} \hspace*{\fill} \includegraphics[page=3,width=0.3\linewidth]{figures_outplEx.pdf} \hspace*{\fill} \caption{An outer-planar graph, a weakly outer-string segment representation that is not outer-string at $\curve{e}$, and a representation as a circle graph that is not order-preserving at $\curve{c}$.} \label{fig:outplEx} \end{figure} We instead give two other constructions. The first one uses that any outer-planar graph is a {\em circle graph}, i.e., the intersection graph of chords of a circle \cite{cit:wessel}. This obviously gives an outer-segment representation, but it need not be order-preserving (see Fig.~\ref{fig:outplEx}). Our first construction hence re-proves this result and maintains invariants to ensure that the representation is indeed order-preserving.
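Before giving the constructions, it may help to recall concretely when two chords of a circle cross. The following small Haskell sketch (a purely illustrative aside, independent of the constructions below; all names are ours) computes the edges of the circle graph induced by a set of chords, each chord given by the circular positions of its two endpoints.
\begin{verbatim}
import Data.List (tails)

-- Two chords {a,b} and {c,d} cross iff exactly one of c and d lies strictly
-- between a and b (endpoints are positions 0,1,2,... in clockwise order
-- around the circle; all endpoints are assumed distinct).
type Chord = (Int, Int)

crosses :: Chord -> Chord -> Bool
crosses (a, b) (c, d) = between c /= between d
  where
    between p = min a b < p && p < max a b

-- edges of the circle graph induced by a list of named chords
circleGraphEdges :: [(String, Chord)] -> [(String, String)]
circleGraphEdges chords =
  [ (u, v) | (u, cu) : rest <- tails chords, (v, cv) <- rest, crosses cu cv ]

-- example: the chords of u and v cross, and so do those of v and w,
-- but those of u and w do not; expected output: [("u","v"),("v","w")]
main :: IO ()
main = print (circleGraphEdges [("u", (0, 4)), ("v", (2, 6)), ("w", (1, 3))])
\end{verbatim}
The constructions below have to do more than realise the correct crossings: they must also control \emph{where} along each chord (or curve) the crossings occur, so that the order-condition holds.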
The resolution in this representation could be very bad, and we therefore give a second construction where the curves are orthogonal instead. We use one bend for each vertex curve here, and so obtain a $B_1$-VPG-representation. Since there are $n$ vertices and at most $n$ bends, the representation \todo{Minor rewording, and an "at least" changed to "at most".} can be embedded into a grid of size $O(n) \times O(n)$. In our proofs, we use that any 2-connected outer-planar graph $G$ can be built up as follows \cite[Lemma 3]{cit:govindran}: Fix an edge $(u,v)$. Now repeatedly add an \emph{ear}, i.e., a path $P = u_0,u_1, \ldots, u_k,u_{k+1}$ with $k\geq 1$ where $(u_0,u_{k+1})$ is an edge on the outer-face of the current graph $G'$, and $u_1,\dots,u_k$ are new vertices that induce a path and have no edges to $G'$ other than $(u_0,u_1)$ and $(u_k,u_{k+1})$. A crucial requirement of the constructed representation $\ensuremath{\mathcal{R}}$ of such a subgraph is the following {\em order-condition}: If $w$ and $w'$ are the counterclockwise and clockwise neighbours of $v$ on the outer-face, then we encounter the neighbours of $v$ in order, starting with $w$ and ending with $w'$, while walking along $\curve{v}$. Put differently, the broken face-vertex-incidence is the one with the outer-face. We consider $\curve{v}$ to be directed so that it intersects first $\curve{w}$ and last $\curve{w'}$. The second crucial ingredient for both proofs is to reserve for edges (somewhat similar as was done for faces in \cite{cit:jocg,cit:cccg,cit:chalopin-string,cit:mfcs}) a region that can be used to attach subgraphs. Thus define a \emph{private region} $\region{u}{v}$ of edge $(u,v)$ to be a region that contains an end of $\curve{u}$ and an end of $\curve{v}$ and does not intersect any other curve or private regions of $\ensuremath{\mathcal{R}}$. Both constructions maintain such a private region $\region{u}{v}$ for every outer-face edge $(u,v)$. Moreover, if $v$ is the clockwise neighbour of $u$, then $S_{\curve{u}\curve{v}}$ contains the tail of $\curve{u}$ and the head of $\curve{v}$. \subsection{Circle-chord representation} We now re-prove that outer-planar graphs are circle graphs, and show that furthermore the order can be preserved. \begin{theorem} \label{thm:circleGraph} Every outer-plane graph has an order-preserving representation as intersection graph of chords of a circle $C$. \end{theorem} \begin{proof} It suffices to prove the claim for a 2-connected outer-planar graph $G$ since every outer-planar graph $G'$ is an induced subgraph of a 2-connected outer-planar graph $G$, and therefore a string representation for $G$ also yields one for $G'$ by deleting curves of vertices in $G-G'$. We create a representation $\ensuremath{\mathcal{R}}$ while building up the graph via adding ears, and maintain curve directions and private regions as explained before. Each private region $\region{u}{v}$ is bounded by parts of circle $C$ and a chord of $C$ and does not contain the crossing of $\curve{u}$ and $\curve{v}$. Further, the tail of $\curve{u}$ and the head of $\curve{v}$ are in the interior of the circular arc that bounds $\region{u}{v}$. In the base case, $G$ is an edge $(u,v)$ which can be represented by two chords through the center of $C$. See Fig.~\ref{fig:circleGraph}. \todo{Unimportant change-request: The figure feels different, style-wise, from the ones you drew with inkscape. 
Perhaps redraw.} We reserve two private regions for $(u,v)$, because the outer-face of a single-edge graph should be viewed as containing this edge twice (we can add ears twice at it). All conditions are easily verified. \begin{figure}[ht] \includegraphics[width=0.2\linewidth,page=1]{figures_chord_base.pdf} \hspace*{\fill} \hspace*{2mm} \includegraphics[width=0.18\textwidth]{figures_fig5_graph.pdf} \hspace*{\fill} \includegraphics[width=0.56\linewidth,page=2,trim= 0 0 0 200,clip]{figures_chord_base.pdf} \caption{The base case, and adding chords for an ear.} \label{fig:circleGraph} \end{figure} For the induction step, let us assume that $G$ was obtained by adding an ear $P=u,x_1,\dots,x_k,v$ at some edge $(u,v)$, with $u$ the counter-clockwise neighbour of $v$ on the outer-face. Let $C[u,v]$ be the arc of $C$ between the tail of $\curve{u}$ and the head of $\curve{v}$ that lies inside $\region{u}{v}$. Let $u'$ and $v'$ be two points on $C$ just outside $C[u,v]$ but still within $\region{u}{v}$. If $k=1$, then we add $x_1$ by using chord $\overline{u' v'}$ for $\curve{x_1}$. If $k>1$, then we insert $2k-2$ points on the interior of $C[u,v]$ and create chords for $\curve{x_1},\dots,\curve{x_k}$ so that everyone intersects as required. See Fig.~\ref{fig:circleGraph}, which also shows the private regions that we define for the new outer-face edges. Since $\region{u}{v}$ was convex, all new curves are inside it and do not intersect any other curves. The orientation of these new curves is determined by the order-condition: $\curve{x_i}$ should be oriented so that it intersects first $\curve{x_{i+1}}$ (where $x_0:=u$) and then $\curve{x_{i-1}}$ (where $x_{k+1}:=v$). In particular this means that the private region $\region{x_i}{x_{i+1}}$ contains the tail of $\curve{x_i}$ and the head of $\curve{x_{i+1}}$, and hence satisfies the condition on private regions. It remains to check that the order-condition is satisfied for $\curve{u}$. Since $\region{u}{v}$ contained the tail of $\curve{u}$, this means that $\curve{x_1}$ becomes the first curve to be intersected by $\curve{u}$, which is correct since $x_1$ is the clockwise neighbour of $u$ on the outer-face. Likewise one argues that the order-condition holds for $\curve{v}$. Hence all conditions hold, and after repeating for all ears we obtain an order-preserving representation as intersection graph of chords of a circle. \qed \end{proof} \subsection{$B_1$-VPG representation} Now we create, for any outer-planar graph, a $B_1$-VPG representation that is order-preserving and outer-string. However, the ends will not be on a circle; instead they will lie on a closed curve $S$ that we maintain throughout the construction and that surrounds the entire representation $\ensuremath{\mathcal{R}}$ without truly intersecting any curve. All vertices are 1-bend poly-lines with slopes $\pm 1$ (after rotating by 45$^\circ$ this gives the $B_1$-VPG representation); this allows us to use an orthogonal curve for $S$. Fig.~\ref{fig:types} illustrates types of private regions that we will use for this construction: $\region{u}{v}$ contains no bend of $\curve{u}$ or $\curve{v}$, and it is an isosceles right triangle whose hypotenuse lies on $S$. 
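For concreteness, the $45^\circ$-rotation mentioned above can be taken to be the map \[ (x,y) \;\mapsto\; \tfrac{1}{\sqrt{2}}\,(x-y,\, x+y), \] which sends the direction $(1,1)$ (slope $+1$) to the vertical direction $(0,\sqrt{2})$ and the direction $(1,-1)$ (slope $-1$) to the horizontal direction $(\sqrt{2},0)$; hence every 1-bend curve with slopes $\pm 1$ becomes an orthogonal curve with one bend, as required for a $B_1$-VPG-representation.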
\begin{figure} \hspace*{\fill} \includegraphics[width=0.6\textwidth,trim=0 0 70 0,clip]{figures_types2.pdf} \hspace*{\fill} \includegraphics[width=0.3\textwidth]{figures_fig3_representation.pdf} \hspace*{\fill} \caption{Three types of private regions (three more can be obtained by flipping horizontally), and the base case.} \label{fig:types} \label{fig:edge} \end{figure} \begin{theorem} \label{thm:rotation} Every outer-planar graph $G$ has an order-preserving outer-1-string $B_1$-VPG-representation $\ensuremath{\mathcal{R}}$. \end{theorem} \begin{proof} As before it suffices to prove the claim for 2-connected outer-planar graphs $G$. We proceed by induction on the number of vertices, building $\ensuremath{\mathcal{R}}$ while adding ears. In the base case, $G$ is an edge $(u,v)$ which can be represented by two 1-bend curves positioned and oriented as shown in Fig.~\ref{fig:edge}, which also shows the private region. We use a horizontal segment for $S$ (this can be expanded into a closed curve surrounding $\ensuremath{\mathcal{R}}$ arbitrarily). \todo{Unimportant change-request: Change $S$ into a closed curve} For the induction step, let us assume that $G$ was obtained by adding an ear $P=u,x_1,\dots,x_k,v$ at some edge $(u,v)$, with $u$ the counter-clockwise neighbour of $v$ on the outer-face. After a possible rotation, the hypotenuse of the private region $S_{\curve{u}\curve{v}}$ is horizontal with $\region{u}{v}$ above it. We distinguish cases: \begin{enumerate} \item \textit{$\curve{u}$ and $\curve{v}$ have different slopes in $\region{u}{v}$ and $k=1$ (i.e. we add one vertex $x$).} \todo{Unimportant change-request for the representations: Make the parts of $\curve{u}$ and $\curve{v}$ that get cut dotted.} \begin{figure} \hspace*{\fill} \includegraphics[width=0.18\textwidth]{figures_fig4_graph.pdf} \hspace*{\fill} \includegraphics[width=0.38\textwidth]{figures_fig4_representation.pdf} \hspace*{\fill} \includegraphics[width=0.38\textwidth]{figures_fig4_representation2.pdf} \hspace*{\fill} \caption{Adding a single node if $\curve{u}$ and $\curve{v}$ have different slopes.} \label{fig:case1} \end{figure} We add a 1-bend curve $\curve{x}$ with the bend pointing downwards. See Fig.~\ref{fig:case1}, which also shows the private regions that we define for $(u,x)$ and $(x,v)$. Curve $\curve{x}$ fits entirely inside $\region{u}{v}$ by placing the bend in the interior of $\region{u}{v}$ and shortening $\curve{u}$ and $\curve{v}$ appropriately so that the ends of $\curve{x}$ are vertically aligned with those of $\curve{u}$ and $\curve{v}$. We can now easily find a new curve $S'$ by adding ``detours'' to $S$ that reach the hypotenuses of the new private regions. These detours are inside $\region{u}{v}$ and hence intersect no other curves (since we shortened $\curve{u}$ and $\curve{v}$). So the new curve $S'$ is a closed curve that surrounds the new representation as desired. The orientation of $\curve{x}$ is again determined by the order-condition, and exactly as in Theorem~\ref{thm:circleGraph} one argues that this respects the order-condition at $\curve{u}$ and $\curve{v}$, since our choice of curve for $\curve{x}$ ensures that it crosses $\curve{u}$ {\em after} the crossing of $\curve{u}$ with $\curve{v}$. \item \textit{$\curve{u}$ and $\curve{v}$ have different slopes in $\region{u}{v}$ and $k>1$ (i.e.
we add at least two vertices $x_1,\dots,x_k$.)} \begin{figure} \hspace*{\fill} \includegraphics[width=0.18\textwidth]{figures_fig5_graph.pdf} \hspace*{\fill} \includegraphics[width=0.38\textwidth]{figures_fig5_representation.pdf} \hspace*{\fill} \includegraphics[width=0.38\textwidth]{figures_fig5_representation2.pdf} \hspace*{\fill} \caption{Adding 2 or more nodes if $\curve{u}$ and $\curve{v}$ have different slopes. } \label{fig:case2} \end{figure} We add a path of 1-bend curves $\curve{x_1}, \curve{x_2}, \ldots, \curve{x_k}$ with their bends at the top, and define private regions as illustrated in Fig.~\ref{fig:case2}. Each curve $\curve{x_i}$ is oriented as required by the order-condition, and again one verifies the order-condition for $\curve{u}$ and $\curve{v}$. We can re-use the same $S$. \item \textit{$\curve{u}$ and $\curve{v}$ have the same slope inside $\region{u}{v}$.} \begin{figure} \hspace*{\fill} \includegraphics[width=0.18\textwidth]{figures_fig7_graph.pdf} \hspace*{\fill} \includegraphics[width=0.38\textwidth]{figures_fig7_representation.pdf} \hspace*{\fill} \includegraphics[width=0.38\textwidth]{figures_fig7_representation2.pdf} \hspace*{\fill} \caption{Adding one or more vertices if $\curve{u}$ and $\curve{v}$ have the same slope. We only show two of the four possible configurations.} \label{fig:case6} \label{fig:case3} \end{figure} We add a path of 1-bend curves $\curve{x_1}, \curve{x_2}, \ldots, \curve{x_k}$ (possibly $k=1$) with their bends at the top, and define private regions as illustrated in Fig.~\ref{fig:case3}. Each curve $\curve{x_i}$ is oriented as required by the order-condition, and one verifies all conditions using the same $S$. \iffalse $\curve{x_1}, \curve{x_2}, \ldots, \curve{x_k}$ placed so that the right end of $\curve{x_1}$ and left end of $\curve{x_k}$ attach to $S$ outside $S_{\curve{u}\curve{v}}$, and the remaining ends attach to the interior of $S_{\curve{u}\curve{v}}$, and so that the curves $\curve{x_1}, \curve{x_2}, \ldots, \curve{x_k}$ do not intersect any undesired curves (see Fig.~\ref{fig:case4} and~\ref{fig:case6}). \item The remaining cases (slope $+1$ and $-1$, and $+1$) are symmetric. \fi \end{enumerate} \iffalse In every case, we can verify that the constructed representation is order-preserving and that it contains private regions for all the edges on the outer-face. No two curves intersect more than once, all of them have one bend and attach to $S$ so the representation is $B_1$-VPG end-1-outer-string. \fi After having represented the entire graph in this way, we are order-preserving due to the order-condition, outer-string due to poly-line $S$, and $B_1$-VPG (after a 45$^\circ$-rotation) since every curve has one bend. \qed \end{proof} In our $B_1$-VPG-representation, every vertex-curve is an \L in one of the four possible rotations \rule{0.4pt}{1.2ex}\rule{1.2ex}{0.4pt}\xspace, \rule{1.2ex}{0.4pt}\rule{0.4pt}{1.2ex}\xspace, \rule{0.4pt}{1.2ex}\rule[1.2ex]{1.2ex}{0.4pt}\xspace, \rule[1.2ex]{1.2ex}{0.4pt}\rule{0.4pt}{1.2ex}\xspace. (All four may be used, since private regions get rotated in Case 1.) We would have preferred a representation that uses \L (or the two shapes \rule{0.4pt}{1.2ex}\rule{1.2ex}{0.4pt}\xspace and \rule{1.2ex}{0.4pt}\rule{0.4pt}{1.2ex}\xspace), because then the stretching-techniques by Middendorf and Pfeiffer \cite{cit:stretching} could have been applied to obtain another segment-representation. 
It is easy to create a representation using only {\L}s if we need not be order-preserving (use \rotatebox{45}{\rule[1.2ex]{1.2ex}{0.4pt}\rule{0.4pt}{1.2ex}\xspace} in Case 1) or need not be outer-string (see also Lemma~\ref{lem:SP}), but finding an outer-string order-preserving representation using only {\L}s remains open. \section{Beyond outer-planar graphs?} One wonders what other graph classes might have order-preserving 1-string representations, preferably outer-string ones. We study this question here for several graph classes. We start with the {\em series-parallel graphs}, which are the same as the partial 2-trees, and hence generalize outer-planar graphs. \begin{lemma} \label{lem:SP} Every series-parallel graph $G$ has a 1-string representation with {\L}s that is order-preserving for some planar embedding of $G$. \end{lemma} \begin{proof} It is easy to show that every 2-tree has a representation by touching true {\L}s, i.e., each vertex is assigned an {\L} (not rotated and not degenerated into a line segment), curves are disjoint except at ends, and $(u,v)$ is an edge if and only if the end of $\curve{u}$ lies in the interior of $\curve{v}$ or vice versa.%
\footnote{We have not been able to find a direct reference for this, but it follows for example from the works of Chaplick et al. \cite{ChaplickKobourovUeckert2012} or with an iterative approach similar to the 6-sided contact representations in \cite{AlamBiedl2011}. } See also Fig.~\ref{fig:SP}.
Extending the {\L}s slightly gives a 1-string representation, and it is order-preserving for a planar embedding easily derived from the touching {\L} representation. Details are provided in Appendix~\ref{sec:appendix}. \qed \end{proof} \begin{figure}[t] \hspace*{\fill} \includegraphics[width=0.22\linewidth]{figures_fig11_graph.pdf} \hspace*{\fill} \includegraphics[width=0.25\linewidth]{figures_fig11_middle.pdf} \hspace*{\fill} \includegraphics[width=0.25\linewidth]{figures_fig11_right.pdf} \hspace*{\fill} \caption{Representing series-parallel graphs by touching {\protect\L}s, and converting this into a planar drawing with the same order.} \label{fig:SP} \end{figure} It would be interesting to know whether this result can be extended to the so-called {\em planar Laman-graphs}, which have a representation by touching {\L}s \cite{KobourovUeckertVerbeek}, but not all {\L}s are necessarily in the same rotation and so it is not clear whether this is order-preserving. Of particular interest would be planar bipartite graphs, which can even be represented by horizontal and vertical touching line segments \cite{FMP91}, but again it is not clear how to make this order-preserving. As for having strings additionally end at the contour for series-parallel graphs: this is not always possible. Let $H$ be the graph obtained by subdividing every edge in a $K_{2,3}$; one verifies that $H$ is series-parallel. It is easy to see (see also \cite{Cabello2016}) that $H$ cannot be outer-string since $K_{2,3}$ is not outer-planar; so $H$ has no outer-string representation, much less one that is 1-string and order-preserving. Now we turn to partial 3-trees. We showed in Theorem~\ref{thm:notOrder} that there exist planar 3-trees (hence partial 3-trees) that do not have an order-preserving 1-string representation. We now study some subclasses of partial 3-trees that are superclasses of outer-planar graphs. An {\em IO-graph} is a planar graph $G$ that has an independent set $I$ such that $G-I$ is a 2-connected outer-planar graph $O$ for which all vertices in $I$ are inside inner faces of $O$. A {\em Halin}-graph is a graph that consists of a tree $T$ and a cycle $C$ that connects all leaves of $T$. Both types of graphs are well-known to be partial 3-trees. In \cite{cit:cccg}, we gave 1-string representations for both Halin graphs and IO-graphs; the latter uses only unrotated {\L}s. Independently, Francis and Lahiri also constructed 1-string representations of Halin-graphs, using only unrotated {\L}s \cite{cit:francis}. Inspection of both constructions shows that these respect the standard planar embedding (where $O$, respectively $C$, is one face). We hence have: \begin{theorem}[based on \cite{cit:cccg,cit:francis}] Every IO-graph and every Halin-graph has an order-preserving 1-string representation in which every vertex is an {\L}. \end{theorem} In these constructions, the ends of the strings are not on the outer-face, and we now show that this is unavoidable. This is obvious for Halin-graphs, since the subdivided $K_{2,3}$ is an induced subgraph of a Halin-graph. As for IO-graphs, define the {\em wheel} $W_n$ to be the graph that consists of a cycle $C=\{v_1,\dots,v_n\}$ with $n$ vertices and one universal vertex $c$ connected to all of them. Let the {\em extended wheel-graph} $W^+_n$ be the wheel-graph $W_n$ with additionally a vertex $w_i$ adjacent to $v_i$ and $v_{i+1}$ for $i=1,\dots,n$ (and $v_{n+1}:=v_1$). Notice that $W_n^+$ is an IO-graph (take $I=\{c\}$: the graph $W_n^+ - c$ is 2-connected and outer-planar, and $c$ lies inside its inner face bounded by $v_1,\dots,v_n$).
The proof of the following is presented in Appendix~\ref{sec:appendix}. \begin{theorem} \label{thm:IOnotEndOrder} For $n\geq 7$, the IO-graph $W^+_n$ has no order-preserving outer-1-string representation. \end{theorem} \section{Final remarks} \label{sec:conclusions} In this paper, we studied 1-string representations that respect a planar embedding. As for open problems, what other graph classes have order-preserving 1-string representations? A natural candidate to investigate would be the 2-outer-planar graphs, for which Lemma~\ref{lem:stellation} cannot be applied since a triple-stellation is never 2-outer-planar. Other interesting candidates would be planar bipartite graphs (or more generally planar Laman-graphs), or planar 4-connected graphs. Secondly, what is the complexity of testing whether an order-preserving 1-string representation exists? Given the NP-hardness of the abstract graph realization problem \cite{cit:kratochvil-II,cit:middendorf}, this is very likely NP-hard if we are allowed to prescribe an arbitrary rotation scheme (not from a planar drawing). But is it NP-hard for plane graphs? One unsatisfactory aspect of our definition of ``order-preserving'' is that graphs with an end-contact representation (i.e., with disjoint strings where for every edge one string ends on the other string) do not automatically have an order-preserving 1-string representation: We can obtain a 1-string representation by extending the strings slightly, but it does not need to be order-preserving. A reviewer hence suggested to us the following alternative model: Thicken each string slightly, and consider the cyclic order of intersections while walking around the thickened string. Now let ``order-preserving'' mean that the cyclic order of neighbours around a vertex forms a subsequence of the intersections encountered while walking ``around'' its string. With this, any end-contact representation becomes an order-preserving 1-string representation after extending the curves a bit. This includes, for example, planar bipartite graphs and Laman graphs. Since this model's restriction is weaker, all our positive results transfer, but the proofs of the negative results no longer hold. Are there plane graphs that do not have an order-preserving 1-string representation in this new model? \bibliographystyle{splncs03}
\section{Introduction} \label{sec_introduction} \subsection{The problem statement and background} For $\sigma\ge 0$ and for arbitrary initial data, we prove local existence and uniqueness of solutions in Sobolev spaces to the free boundary incompressible Euler equations in vacuum: \begin{subequations} \label{euler} \begin{alignat}{2} \partial_t u+ \nabla_uu + \nabla p&=0 &&\text{in} \ \ Q \,, \label{euler.a}\\ \operatorname{div} u &= 0 &&\text{in} \ \ Q \,, \label{euler.b}\\ p &= \sigma \, H \ \ &&\text{on} \ \ \partial Q \,, \label{euler.c}\\ (\partial_t+\nabla_u)|_{\partial Q}&\in T(\partial Q)\,, &&\ \ \label{euler.d}\\ u &= u_0 &&\text{on} \ \ Q_{t=0} \,, \label{euler.e}\\ Q_{t=0} &= \Omega\,, && \label{euler.f} \end{alignat} \end{subequations} where $Q= \cup_{0\le t\le T}\{t\}\times \Omega(t)$, $\Omega(t) \subset {\mathbb R}^n$, $n=2$ or $3$, $\partial Q= \cup_{0\le t\le T}\{t\}\times \partial\Omega(t)$, $\nabla_uu = u^j\partial u^i/\partial x^j$, and where Einstein's summation convention is employed. The vector field $u$ is the Eulerian or spatial velocity field defined on the time-dependent domain $\Omega(t)$, $p$ denotes the pressure function, $H$ is twice the mean curvature of the boundary of the fluid $\partial \Omega(t)$, and $\sigma$ is the surface tension. Equation (\ref{euler.a}) is the conservation of momentum, (\ref{euler.b}) is the conservation of mass, (\ref{euler.c}) is the well-known Laplace-Young boundary condition for the pressure function, (\ref{euler.d}) states that the free boundary moves with the velocity of the fluid, (\ref{euler.e}) specifies the initial velocity, and (\ref{euler.f}) fixes the initial domain $\Omega$. Almost all prior well-posedness results were focused on {\it irrotational} fluids (potential flow), wherein the additional constraint $\operatorname{curl}u=0$ is imposed; with the irrotationality constraint, the Euler equations (\ref{euler}) reduce to the well-known water-waves equations, wherein the motion of the interface is decoupled from the rest of the fluid and is governed by singular boundary integrals that arise from the use of complex variables and the equivalence of incompressibility and irrotationality with the Cauchy-Riemann equations. For 2D fluids (and hence 1D interfaces), the earliest local existence results were obtained by Nalimov \cite{Na1974}, Yosihara \cite{Yo1982}, and Craig \cite{Cr1985} for initial data near equilibrium. Beale, Hou, \& Lowengrub \cite{BeHoLo1993} proved that the linearization of the 2D water wave problem is well-posed if a Taylor sign condition is added to the problem formulation, thus preventing Rayleigh-Taylor instabilities. Using the Taylor sign condition, Wu \cite{Wu1997} proved local existence for the 2D water wave problem for arbitrary (sufficiently smooth) initial data. Later Ambrose \cite{Am2003} and Ambrose \& Masmoudi \cite{AmMa2005}, proved local well-posedness of the 2D water wave problem with surface tension on the boundary replacing the Taylor sign condition. In 3D, Wu \cite{Wu1999} used Clifford analysis to prove local existence of the full water wave problem with {\it infinite depth}, showing that the Taylor sign condition is always satisfied in the irrotational case by virtue of the maximum principle holding for the potential flow. Lannes \cite{La2005} provided a proof for the {\it finite depth case with varying bottom} by implementing a Nash-Moser iteration. 
The first well-posedness result for the full Euler equations with zero surface tension, $\sigma=0$, is due to Lindblad \cite{Li2004} with the additional ``physical condition'' that \begin{equation}\label{lindblad} \nabla p \cdot n < 0 \text{ on } \partial Q, \end{equation} where $n$ denotes the exterior unit normal to $\partial \Omega(t)$. The condition (\ref{lindblad}) is equivalent to the Taylor sign condition, and provided Christodoulou \& Lindblad \cite{ChLi2000} with enough boundary regularity to establish {\it a priori} estimates for smooth solutions to (\ref{euler}) together with (\ref{lindblad}) and $\sigma=0$. (Ebin \cite{Eb1987} provided a counterexample to well-posedness when (\ref{lindblad}) is not satisfied.) Nevertheless, local existence did not follow in \cite{ChLi2000}, as finding approximations of the Euler equations for which existence and uniqueness is known and which retain the transport-type structure of the Euler equations is highly non-trivial, and this geometric transport-type structure is crucial for the a priori estimates. In \cite{Li2003}, Lindblad proved well-posedness of the linearized Euler equations, but the estimates were not sufficient for well-posedness of the nonlinear problem. The estimates were improved in \cite{Li2004}, wherein Lindblad implemented a Nash-Moser iteration to deal with the manifest loss of regularity in his linearized model and thus established the well-posedness result in the case that (\ref{lindblad}) holds and $\sigma=0$. Local existence for the case of positive surface tension, $\sigma >0$, remained open, and although the Laplace-Young condition (\ref{euler.c}) provides improved regularity for the boundary, the required nonlinear estimates are more difficult to close due to the complexity of the mean curvature operator, and the need to study time-differentiated problems, which do not arise in the $\sigma=0$ case. It appears that the use of the time-differentiated problem in Lindblad's paper \cite{Li2004} is due to the use of certain tangential projection operators, but this is not necessary. We note that our energy function is different from that in \cite{Li2004}, and provides better control of the Lagrangian coordinate. After completing this work, we were informed of the paper of Schweizer \cite{Sch2006} who studies the Euler equations for $\sigma>0$ in the case that the free-surface is a graph over the two-torus. In that paper, he obtains a priori estimates under a smallness assumption for the initial surface; well-posedness follows under the additional assumption that there is no vorticity on the boundary. We also learned of the paper by Shatah and Zeng \cite{ShZe2006} who establish a priori estimates for both the $\sigma=0$ and $\sigma>0$ cases without any restrictions on the initial data. \subsection{Main results} We prove two main theorems concerning the well-posedness of (\ref{euler}). The first theorem, for the case of positive surface tension $\sigma>0$, is new; for our second theorem, corresponding to the zero surface tension case, we present a new proof that does not require a Nash-Moser procedure, and has optimal regularity. \begin{theorem}[Well-posedness with surface tension]\label{theorem1} Suppose that $\sigma>0$, $\Gamma$ is of class $H^{5.5}$, and $u_0 \in H^{4.5}(\Omega)$. Then, there exists $T>0$, and a solution ($u(t)$,$p(t)$,$\Omega(t)$) of (\ref{euler}) with $u \in L^\infty(0,T; H^{4.5}(\Omega(t)))$, $p \in L^\infty(0,T; H^{4}(\Omega(t)))$, and $\Gamma(t) \in H^{5.5}$. 
The solution is unique if $u_0 \in H^{5.5}$ and $\Gamma \in H^{6.5}$. \end{theorem} \begin{theorem}[Well-posedness with Taylor sign condition]\label{theorem2} Suppose that $\sigma=0$, $\partial \Omega$ is of class $H^{3}$, $u_0 \in H^{3}(\Omega)$, and condition (\ref{lindblad}) holds at $t=0$. Then, there exists $T>0$, and a unique solution ($u(t)$,$p(t)$,$\Omega(t)$) of (\ref{euler}) with $u \in L^\infty(0,T; H^{3}(\Omega(t)))$, $p \in L^\infty(0,T; H^{3.5}(\Omega(t)))$, and $\partial\Omega(t) \in H^{3}$. \end{theorem} \subsection{Lagrangian representation of the Euler equations.} The Eulerian problem (\ref{euler}), set on the moving domain $\Omega(t)$, is converted to a PDE on the fixed domain $\Omega$ by the use of Lagrangian variables. Let $\eta(\cdot, t):\Omega \rightarrow \Omega(t)$ be the solution of $$ \partial_t\eta(x,t) = u(\eta(x,t),t), \ \ \ \eta(x,0)=\text{Id}\,, $$ and set $$v(x,t):= u(\eta(x,t),t), \ \ q(x,t):=p(\eta(x,t),t), \ \ \text{and} \ \ a(x,t) := [\nabla \eta(x,t)]^{-1}\,.$$ The variables $v$, $q$ and $a$ are functions on the fixed domain $\Omega$ and denote the material velocity, pressure, and pull-back, respectively. Thus, on the fixed domain, (\ref{euler}) transforms to \begin{subequations} \label{leuler} \begin{alignat}{2} \eta&=\text{Id} + \int_0^t v\ \ \ &&\text{in} \ \Omega \times (0,T]\,, \label{leuler.a}\\ \partial_t v+ a\,\nabla q&=0 &&\text{in} \ \Omega \times (0,T]\,, \label{leuler.b}\\ \text{Tr}(a \, \nabla v) &= 0 &&\text{in} \ \Omega \times (0,T] \,, \label{leuler.c}\\ q\, a^T N/|a^TN| &=-\sigma \, \Delta_g(\eta) \ \ &&\text{on} \ \Gamma \times (0,T] \,, \label{leuler.d}\\ (\eta,v) &= (\text{Id}, u_0) &&\text{on} \ \Omega\times\{t=0\} \,, \label{leuler.e} \end{alignat} \end{subequations} where $N$ denotes the unit normal to $\Gamma$, and $\Delta_g$ is the surface Laplacian with respect to the induced metric $g$ on $\Gamma$, written in local coordinates as \begin{equation}\label{gstuff} \Delta_g = \sqrt{g}^{-1} \partial_\alpha [\sqrt{g} g^{\alpha\beta} \partial_\beta ]\,, \ g^{\alpha \beta} = [g_{\alpha\beta}]^{-1}\,, \ g_{\alpha\beta} = \eta_{,\alpha}\cdot \eta_{,\beta}\,, \text{ and } \sqrt{g} = \sqrt{\det{g}} \,. \end{equation} \begin{theorem}[$\sigma>0$]\label{ltheorem1} Suppose that $\sigma>0$, $\partial \Omega$ is of class $H^{5.5}$, and $u_0 \in H^{4.5}(\Omega)$ with $\operatorname{div} u_0=0$. Then, there exists $T>0$, and a solution ($v$,$q$) of (\ref{leuler}) with $v \in L^\infty(0,T; H^{4.5}(\Omega))$, $q \in L^\infty(0,T; H^{4}(\Omega))$, and $\Gamma(t) \in H^{5.5}$. The solution satisfies $$ \sup_{t\in[0,T]} \left( |\partial\Omega(t)|_{5.5}^2 + \sum_{k=0}^3 \|\partial_t^k v(t)\|_{4.5-1.5k}^2 +\sum_{k=0}^2 \|\partial_t^k q(t)\|_{4-1.5k}^2 \right) \le \tilde M_0 $$ where $\tilde M_0$ denotes a polynomial function of $|\Gamma|_{5.5}$ and $\|u_0\|_{4.5}$. The solution is unique if $u_0 \in H^{5.5}(\Omega)$ and $\Gamma \in H^{6.5}$. \end{theorem} \begin{remark} Our theorem is stated for a fluid in vacuum, but the analogous theorem holds for a vortex sheet, i.e., for the motion of the interface separating two inviscid immiscible incompressible fluids; the boundary condition (\ref{euler.c}) is replaced by $[p]_\pm = \sigma H$, where $[p]_\pm$ denotes the jump in pressure across the interface.
\end{remark} For the zero-surface-tension case, we have \begin{theorem}[$\sigma=0$ and condition (\ref{lindblad})]\label{ltheorem2} Suppose that $\sigma=0$, $\Gamma $ is of class $H^{3}$, $u_0 \in H^{3}(\Omega)$, and condition (\ref{lindblad}) holds at $t=0$. Then, there exists $T>0$, and a unique solution ($v$,$q$) of (\ref{leuler}) with $v \in L^\infty(0,T; H^{3}(\Omega))$, $q \in L^\infty(0,T; H^{3}(\Omega))$, and $\Gamma(t) \in H^{3}$. \end{theorem} Because of the regularity of the solutions, Theorems \ref{ltheorem1} and \ref{ltheorem2} imply Theorems \ref{theorem1} and \ref{theorem2}, respectively. \begin{remark} Note that in 3D, we require less regularity on the initial data than \cite{Li2004}. \end{remark} \begin{remark} Since the vorticity satisfies the equation $\partial_t \operatorname{curl}u + \pounds _u \operatorname{curl} u=0$, where $\pounds_u$ denotes the Lie derivative in the direction $u$, it follows that if $\operatorname{curl}u_0=0$, then $\operatorname{curl} u(t)=0$. Thus our result also covers the simplified case of irrotational flow. In particular, Theorem \ref{ltheorem1} shows that the 3D irrotational water-wave problem with surface tension is well-posed. In the zero surface tension case, our result improves the regularity of the data required by Wu \cite{Wu1999}. \end{remark} \subsection{General methodology and outline of the paper} \subsubsection{Artificial viscosity and the smoothed $\kappa$-problem} Our methodology begins with the introduction of a smoothed or {\it approximate} problem (\ref{smooth}), wherein two basic ideas are implemented: first, we smooth the transport velocity using a new tool which we call {\it horizontal convolution by layers}; second, we introduce an {\it artificial viscosity} term in the Laplace-Young boundary condition ($\sigma >0 $) which simultaneously preserves the transport-type structure of the Euler equations, provides a PDE for which we can prove the existence of unique smooth solutions, and for which there exist a priori estimates which are independent of the artificial viscosity parameter $\kappa$. With the addition of the artificial viscosity term, the dispersive boundary condition is converted into a parabolic-type boundary condition, and thus finding solutions of the smoothed problem becomes an easier matter. On the other hand, the a priori estimates for the $\kappa$ problem are more difficult than the formal estimates for the Euler equations. The horizontal convolution is defined in Section \ref{sec_convolution}. The domain $\Omega$ is partitioned into coordinate charts, each the image of the unit cube in ${\mathbb R}^3$. A double convolution is performed in the horizontal direction only (this is equivalent to the tangential direction in coordinate patches near the boundary). While there is no smoothing in the vertical direction, our horizontal convolution commutes with the trace operator, and avoids the need to introduce an extension operator, the latter destroying the natural transport structure. The development of the horizontal convolution by layers is absolutely crucial in proving the regularity of the weak solutions that we discuss below. Furthermore, it is precisely this tool which enables us to prove Theorem \ref{theorem2} without the use of Nash-Moser iteration. To reiterate, this horizontal smoothing operator preserves the essential transport-type structure of the Euler equations. 
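To illustrate schematically the effect of the artificial viscosity term (we suppress here the metric coefficients and the distinction between the normal and its smoothed counterpart which appear in the precise boundary condition (\ref{smooth.d}) below), the Laplace--Young condition $q=-\sigma\, \Delta_g(\eta)\cdot n$ on $\Gamma$ is replaced by $$ q = -\sigma\, \Delta_g(\eta)\cdot n - \kappa\, \Delta_0 ( v\cdot n )\ \ \text{ on } \ \Gamma\,, $$ where $\Delta_0$ denotes the boundary operator introduced in Section~\ref{3}; since $v=\eta_t$, the $\kappa$-term acts, to highest order, as a heat operator on the normal trace $\eta\cdot n$, which is the source of the parabolic regularization described above.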
\subsubsection{Weak solutions in a variational framework and a fixed-point, $\sigma>0$} The solution to the smoothed $\kappa$-problem (\ref{smooth}) is obtained via a topological fixed-point procedure, founded upon the analysis of the linear problem (\ref{smoothlinear}). To solve the linear problem, we introduce a few new ideas. First, we penalize the pressure function; in particular, with $\epsilon>0$ the penalization parameter, we introduce the penalized pressure function $q_\epsilon ={\frac{1}{\epsilon}} \text{Tr}( a \, \nabla w)$. Second, we find a new class of $[H^{\frac{3}{2}}(\Omega)]'$-weak solutions of the penalized and linearized smoothed $\kappa$-problem in a variational formulation. The penalization allows us to perform difference quotient analysis in order to prove regularity of our weak solutions; without penalization, difference quotients of weak solutions do not satisfy the ``divergence-free'' constraint and as such cannot be used as test functions. Furthermore, the penalization of the pressure function avoids the need to analyze the highest-order time-derivative of the pressure, which would otherwise be highly problematic. In the setting of the penalized problem, we crucially rely on the horizontal convolution by layers to establish regularity of our weak penalized solution. Third, we introduce the Lagrange multiplier lemmas, which associate a pressure function to the weak solution of a variational problem for which the test-functions satisfy the incompressibility constraint. These lemmas allow us to pass to the limit as the penalization parameter tends to zero, and thus, together with the Tychonoff fixed-point theorem, establish solutions to the smoothed problem (\ref{smooth}). At this stage, however, the time interval of existence and the bounds for the solution depend on the parameter $\kappa$. \subsubsection{Solutions of the $\kappa$-problem for $\sigma=0$ via transport} For the $\sigma=0$ problem, we use horizontal convolution to smooth the transport velocity as well as the moving domain. Existence and uniqueness of solutions to this smoothed $\kappa$-problem (\ref{smoothl}) are established using simple transport-type arguments that rely on the pressure gaining regularity, just as in the fixed-domain case. Once again, the time interval of existence and the bounds for the solution a priori depend on $\kappa$. \subsubsection{A priori estimates and $\kappa$-asymptotics} We develop a priori estimates which show that the energy function $E_\kappa(t)$ in Definition \ref{kenergy} associated to our smoothed problem (\ref{smooth}) is bounded by a constant depending only on the initial data and not on $\kappa$. The estimates rely on the Hodge decomposition elliptic estimate (\ref{divcurl0}). In Section \ref{sec_divcurl}, we obtain estimates for the divergence and curl of $\eta$, $v$, and their space and time derivatives. The main novelty lies in the curl estimate for $\eta$. The remaining portion of the energy is obtained by studying boundary regularity via energy estimates. These nonlinear boundary estimates for the surface tension case $\sigma>0$ are more complicated than the ones for the $\sigma=0$ case with the Taylor sign condition (\ref{lindblad}) since it is necessary to analyze the time-differentiated Euler equations, which is not essential in the $\sigma=0$ case (unless optimal regularity is sought).
We note that the use of the smoothing operator in Definition \ref{def_convolution}, where a double convolution is employed, is necessary in order to find exact (or perfect) derivatives for the highest-order error terms. The idea is that one of the convolution operators is moved onto a function which is a priori not smoothed, and commutation-type lemmas are developed for this purpose. We obtain the a priori estimate $$ \sup_{t\in[0,T]} E_\kappa(t) \le M_0 + T P(\sup_{t\in[0,T]}E_\kappa(t))\,, $$ where $M_0$ depends only on the data, and $P$ is a polynomial. The addition of the artificial viscosity term allows us to prove that $E_\kappa(t)$ is continuous; thus, following the development in \cite{CoSh2005b}, there exists a sufficiently small time $T$, which is independent of $\kappa$, such that $\sup_{t\in[0,T]}E_\kappa(t) < \tilde M_0$ for $\tilde M_0 > M_0$. We then find, for the $\sigma=0$ case, $\kappa$-independent nonlinear estimates for the energy function (\ref{Elin}). \vspace{.1 in} \noindent {\bf Outline.} Sections \ref{sec_convolution}--\ref{uniqueness} are devoted to the case of positive surface tension $\sigma >0$. Sections \ref{L1}--\ref{L13} concern the problem with zero surface tension $\sigma=0$ together with the Taylor sign condition (\ref{lindblad}) imposed. \subsection{Notation} \label{1} Throughout the paper, we shall use the Einstein convention with respect to repeated indices or exponents. We specify here our notation for certain vector and matrix operations. \begin{itemize} \item[] We write the Euclidean inner-product between two vectors $x$ and $y$ as $x\cdot y$, so that $x\cdot y=x^i\ y^i$. \item[] The transpose of a matrix $A$ will be denoted by $A^T$, {\it i.e.}, $(A^T)^i_j=A^j_i$. \item[] We write the product of a matrix $A$ and a vector $b$ as $A\ b$, {\it i.e.}, $(A\ b)^i=A^i_j b^j$. \item[] The product of two matrices $A$ and $S$ will be denoted by $A\cdot S$, {\it i.e.}, $(A\cdot S)^i_j=A^i_k\ S^k_j$. \item[] The trace of the product of two matrices $A$ and $S$ will be denoted by $A : S$, {\it i.e.}, $A: S=A^i_k\ S^k_i$. \end{itemize} For $\Omega$, a domain of class $H^s$ ($s\ge 2$), there exists a well-defined extension operator that we shall make use of later. \begin{lemma} There exists $E(\Omega)$, a linear and continuous operator from $H^r(\Omega)$ into $H^r({\mathbb R}^3)$ ($0\le r\le s$), such that for any $v\in H^r(\Omega)$ ($0\le r\le s$), $E(\Omega)(v)=v$ in $\Omega$. \end{lemma} We will use the notation $H^s(\Omega)$ to denote either $H^s(\Omega;\mathbb{R})$ (for a pressure function, for instance) or $H^s(\Omega;\mathbb{R}^3)$ (for a velocity vector field) and we denote the standard norm of $H^s(\Omega)$ ($s\ge 0$) by $\|\cdot\|_s$. The $H^s(\Omega)$ inner-product will be denoted $(\cdot,\cdot)_s$. We shall use the following notation for derivatives: $\partial_t$ or $(\cdot)_t$ denotes the partial time derivative, $\partial$ denotes the tangential derivative on $\Gamma$ (or in a small enough neighborhood of $\Gamma$), and $\nabla$ denotes the three-dimensional gradient. Letting $(x^1,x^2)$ denote a local coordinate system on $\Gamma$, for $\alpha=1,2$, we let either $\partial_\alpha$ or $(\cdot),_\alpha$ denote $\frac{\partial}{\partial x^\alpha}$.
We define: $$ \partial^\alpha:= g_0^{\alpha\beta} \partial_\beta\,, \ \ |\partial^k \phi|^2 = \partial^{\alpha_1} \partial^{\alpha_2}\cdots\partial^{\alpha_k}\phi \ \partial_{\alpha_1} \partial_{\alpha_2}\cdots\partial_{\alpha_k}\phi $$ for integers $k\ge 0$, where $g_0 = g_{t=0}$ is the (induced) metric on $\Gamma$. In particular, $|\partial^0 \phi| = |\phi|$, $|\partial^1 \phi|^2 = |\partial \phi|^2 = \partial^\alpha \phi \partial_\alpha\phi$. $\partial^k\phi$ will mean any $k$th tangential derivative of $\phi$. The area element on $\Gamma$ in local coordinates is $dS_0 = \sqrt{g_0} dx^1\wedge dx^2$ and the pull-back of the area element $dS$ on $\Gamma(t)=\eta(\Gamma)$ is given by $\eta^*(dS)=\sqrt{g} dS_0$. Let $\{U_i\}_{i=1}^K$ denote an open covering of $\Gamma$, and let $\{\xi_i\}_{i=1}^K$ denote the partition of unity subordinate to this cover. The $L^2(\Gamma)$ norm is $$ |\phi|_0:= \|\phi\|_{L^2(\Gamma)} =\left( \int_{\Gamma} \phi^2 dS_0 \right)^{\frac{1}{2}}\,, $$ and the $H^k(\Gamma)$ norm for integers $k\ge 1$ is $$ |\phi|_k:= \|\phi\|_{H^k(\Gamma)} = \left( |\phi|_0^2 + \sum_{i=1}^k \sum_{l=1}^K|\xi_l \partial^i\phi|^2_0 \right)^{\frac{1}{2}}\,. $$ Similarly, for the Hilbert space inner-products, we use $$ [\phi,\psi]_0:=[\phi,\psi]_{L^2(\Gamma)} = \int_\Gamma \phi\, \psi \, dS_0, \ \ \ \ [\phi,\psi]_k:=[\phi,\psi]_{H^k(\Gamma)} = [\phi,\psi]_0 + \sum_{i=1}^k \sum_{l=1}^K [\xi_l\partial^i \phi, \xi_l\partial_i \psi]_0 \,. $$ Fractional-order spaces are defined via interpolation using the trace spaces of Lions (see, for example, \cite{Adams1978}). The dual of a Banach space $X$ is denoted by $X'$, and the corresponding norm in $X'$ will be denoted $\|\cdot\|_{X'}$. For $L\in H^s(\Omega)'$ and $v\in H^s(\Omega)$, the duality pairing between $L$ and $v$ is denoted by $\langle L,v\rangle_s$. Throughout the paper, we shall use $C$ to denote a generic constant, which may possibly depend on the coefficient $\sigma$, or on the initial geometry given by $\Omega$ (such as a Sobolev constant or an elliptic constant), and we use $P(\cdot)$ to denote a generic polynomial function of $(\cdot)$. For the sake of notational convenience, we will often write $u(t)$ for $u(t,\cdot)$. \section{Convolution by horizontal layers and the smoothed transport velocity} \label{sec_convolution} Let $\Omega \subset {\mathbb R}^n$ denote an open subset of class $H^6$, and let $\{U_i\}_{i=1}^K$ denote an open covering of $\Gamma:= \partial \Omega$, such that for each $i \in \{1,2,...,K\}$, \begin{gather} \theta_i :(0,1)^2\times (-1,1) \rightarrow U_i \ \text{ is an $H^6$ diffeomorphism}\,, \nonumber\\ U_i \cap \Omega = \theta_i ( (0,1)^3 ) \ \text{ and } \ U_i \cap \Gamma = \theta_i ( (0,1)^2\times \{0\} ) \,, \nonumber\\ \theta_i(x_1,x_2,x_3)=(x_1,x_2,\psi_i(x_1,x_2)+x_3) \text{ and } \det\nabla \theta_i =1 \text{ in } (0,1)^3\,. \nonumber \end{gather} Next, for $L > K$, let $\{U_i\}_{i=K+1}^{L}$ denote a family of open sets contained in $\Omega$ such that $\{U_i\}_{i=1}^{L}$ is an open cover of $\Omega$. Let $\{\alpha_i\}_{i=1}^{L}$ denote the partition of unity subordinate to this covering. Thus, each coordinate patch is locally represented by the unit cube $(0,1)^3$ and for the first $K$ patches (near the boundary), the tangential (or horizontal) direction is represented by $(0,1)^2 \times \{0\}$.
\begin{definition}[Horizontal convolution]\label{def_convolution} Let $0\le\rho\in\mathcal D((0,1)^2)$ denote an even Friedrichs mollifier, normalized so that $\displaystyle\int_{(0,1)^2}\rho=1$, with corresponding dilated function $$\displaystyle\rho_{\frac{1}{\delta}}(x)=\frac{1}{\delta^2}\rho\left(\frac{x}{\delta}\right)\,, \ \ \ \delta >0 .$$ For $w\in H^1((0,1)^3)$ such that $\displaystyle\text{supp}(w)\subset [\delta,1-\delta]^2\times(0,1)$, set \begin{align*} \rho_{\frac{1}{\delta}}\star_h w(x_H,x_3)=\int_{\mathbb R^2} \rho_{\frac{1}{\delta}}(x_H-y_H) w(y_H,x_3) dy_H \,, \ \ y_H=(y_1,y_2)\,. \end{align*} \end{definition} We then have the following tangential integration by parts formula: \begin{align*} \rho_{\frac{1}{\delta}}\star_h w,_\alpha(x_H,x_3)=\int_{\mathbb R^2} \rho_{\frac{1}{\delta}},_\alpha(x_H-y_H) w(y_H,x_3) dy_H\,, \ \ \alpha=1,2\,, \end{align*} while \begin{align*} \rho_{\frac{1}{\delta}}\star_h w,_3(x_H,x_3)=\int_{\mathbb R^2} \rho_{\frac{1}{\delta}}(x_H-y_H) w,_3(y_H,x_3) dy_H \,. \end{align*} It should be clear that $\star_h$ smooths $w$ in the horizontal directions, but not in the vertical direction. Fubini's theorem ensures that \begin{equation}\label{fubini} \|\rho_{\frac{1}{\delta}}\star_h w\|_{s,(0,1)^3}\le C_s \|w\|_{s,(0,1)^3} \text{ for any } s\ge 0 \,, \end{equation} and we shall often make implicit use of this inequality. \begin{remark} The horizontal convolution $\star_h$ does not smooth $w$ in the vertical direction; however, it does commute with the trace operator, so that $$ \left.\left(\rho_{\frac{1}{\delta}}\star_h w\right)\right|_{(0,1)^2\times\{0\}} =\rho_{\frac{1}{\delta}}\star_h \left.w\right|_{(0,1)^2\times\{0\}} \,, $$ which is essential for our methodology. Also, note that $\star_h$ smooths without the introduction of an extension operator (as required by standard convolution operators on bounded domains); the extension to the full space would indeed be problematic for the transport structure of the divergence and curl of solutions to the Euler-type PDEs that we introduce. \end{remark} \begin{definition}[Smoothing the velocity field] \label{smoothv} For $v \in L^2(\Omega)$ and any ${\kappa} \in (0,\frac{{\kappa}_0}{2})$ with $$ {\kappa}_0 = \min_{i=1}^K \, \text{dist}\left( \displaystyle\text{supp}(\alpha_i \circ \theta_i) \,, \, [(0,1)^2\times\{0\}]^c\cap\partial [0,1]^3 \right) \,, $$ set $$ v_\kappa = \sum_{i=1}^K \sqrt{\alpha_i} \left[ \rho_{ \frac{1}{{\kappa}} }\star_h [\rho_{ \frac{1}{{\kappa}} } \star_h (( \sqrt{\alpha_i} v)\circ \theta_i)] \right] \circ \theta_i^{-1} + \sum_{i=K+1}^L \alpha_i v \,. $$ \end{definition} It follows from (\ref{fubini}) that there exists a constant $C>0$ which is independent of $\kappa$ such that for any $v \in H^s(\Omega)$ for $s\ge 0$, \begin{equation}\label{neednow} \|v_\kappa\|_s \le C \|v\|_s \ \text{ and } \ |v_\kappa|_{s-1/2} \le C |v|_{s-1/2} \,. \end{equation} The smoothed particle displacement field is given by \begin{equation}\label{etak} \eta_\kappa = \text{Id} + \int_0^t v_\kappa \,. \end{equation} For each $x \in U_i$, let $\tilde x = \theta^{-1}_i(x)$.
The difference of the velocity field and its smoothed counterpart along the boundary $\Gamma$ then takes the form \begin{equation}\label{diffv} v_\kappa(x) - v(x) = \sum_{i=1}^K \int\int_{B(0,{{\kappa}})^2} \zeta_i(x) \rho_{\frac{1}{{\kappa}}}({\tilde y})\rho_{\frac{1}{{\kappa}}}({\tilde z}) \left[ (\zeta_i v)(\theta_i( \tilde x -(\tilde y+\tilde z) )) - (\zeta_i v)(\theta_i(\tilde x)) \right] d{\tilde z} \, d{\tilde y} \,, \end{equation} where $\zeta_i(x)=\sqrt{\alpha_i(\theta_i(\tilde x))}$. Combining (\ref{leuler.a}), (\ref{etak}), and (\ref{diffv}), \begin{equation}\label{diffe} \eta_\kappa(x) - \eta(x) = \sum_{i=1}^K \int\int_{B(0,{{\kappa}})^2} \zeta_i(x) \rho_{\frac{1}{{\kappa}}}({\tilde y})\rho_{\frac{1}{{\kappa}}}({\tilde z}) \left[ (\zeta_i \eta)(\theta_i( \tilde x -(\tilde y+\tilde z) )) - (\zeta_i \eta)(\theta_i(\tilde x)) \right] d{\tilde z} \, d{\tilde y} \,. \end{equation} For any $u \in H^{1.5}(\Gamma)$, and for $y \in B(x, {\kappa})$, where $B(x,{\kappa})$ denotes the disk of radius ${\kappa}$ centered at $x$, the mean value theorem shows that $$ |u(y)-u(x)| \le C |r^{-1}|_{L^q(B(x,{\kappa}))}|\partial u|_{L^p(B(x,{\kappa}))}, \qquad r=\text{radial coordinate}, $$ so that in particular, with $p=4$ and $q=\frac{4}{3}$, $$ |u(y)-u(x)| \le C \sqrt{\kappa} |\partial u|_{L^4} \le C\sqrt{\kappa}\, | u|_{1.5} \,, $$ the last inequality following from the Sobolev embedding theorem. Hence, for $U \in H^{1.5}(\Gamma)$, \begin{equation}\label{Linf_est} |U_\kappa - U|_{L^\infty} \le C \sqrt{\kappa} |U|_{1.5} \,. \end{equation} Note that the constant $C$ depends on $\max_{i\in\{1,...,K\}}|\theta_i|_{5.5}$. Letting $\zeta_i = \sqrt{\alpha_i}$ and $R=(0,1)^2$, we also have that for any $\phi \in L^2(\Gamma)$, \begin{align} \int_\Gamma v_\kappa \, \phi & = \sum_{i=1}^K \int_R \rho_{\frac{1}{{\kappa}}} \star_h \rho_{\frac{1}{{\kappa}}} \star_h \zeta_i v(x)\, \zeta_i \phi(x) = \sum_{i=1}^K \int_R \rho_{\frac{1}{{\kappa}}} \star_h \zeta_i v(x)\, \rho_{\frac{1}{{\kappa}}}\star_h \zeta_i \phi(x) \nonumber \\ &\qquad\qquad = \int_\Gamma \sum_{i=1}^K [\rho_{\frac{1}{{\kappa}}} \star_h (\zeta_i v \circ \theta_i)] \circ \theta_i^{-1}\, [\rho_{\frac{1}{{\kappa}}}\star_h (\zeta_i \phi \circ \theta_i)] \circ \theta_i^{-1} \,. \label{selfadjoint} \end{align} Finally, we need the following \begin{lemma}[Commutation-type lemma] \label{commutator} Suppose that $g\in L^2(\Gamma)$ satisfies $\text{dist} (\text{supp}(g), \partial R) \ge \kappa_0$ and that $f \in H^s(\Gamma)$ for $s> 1$. Then independently of ${\kappa} \in (0,\kappa_0)$, there exists a constant $C>0$ such that $$ \left| \rho_{\frac{1}{{\kappa}}} \star_h [fg] - f \rho_{\frac{1}{{\kappa}}} \star_h g \right|_{0,R} \le C \, {\kappa} |f|_{s+1,R} \, |g|_{0,R} \,. $$ We also have $$ \left\| \rho_{\frac{1}{{\kappa}}} \star_h [fg] - f \rho_{\frac{1}{{\kappa}}} \star_h g \right\|_{0,[0,1]^3} \le C \, {\kappa} \|f\|_{s+\frac{3}{2},[0,1]^3} \, \|g\|_{0,[0,1]^3} $$ whenever $g\in L^2(\Omega)$, $f\in H^s(\Omega)$ and $$\frac{\kappa}{2}<\min(\text{dist}(\text{supp}\ fg, \{1\}\times [0,1]^2),\ \text{dist}(\text{supp}\ fg, \{0\}\times [0,1]^2)).$$ \end{lemma} \begin{proof} Let $\triangle= \rho_{\frac{1}{{\kappa}}} \star_h [fg] - f \rho_{\frac{1}{{\kappa}}} \star_h g $.
Then \begin{align*} |\triangle (x)| &= \left|\int_{B(x,{\kappa})} \rho_{\frac{1}{{\kappa}}}(x -y) [f(y)-f(x)]g(y) dy \right| \le C\, {\kappa} |f|_{s+1,R} \int_{B(x,{\kappa})} \rho_{\frac{1}{{\kappa}}}(x-y) |g(y)| dy\,, \end{align*} so that \begin{align*} |\triangle |_{0,R} & \le C\, {\kappa} |f|_{s+1,R} \, \left| \rho_{\frac{1}{{\kappa}}}\star_h|g|\, \right|_{0,R} \le C\, {\kappa} |f|_{s+1,R} \, | g |_{0,R} \,. \end{align*} The inequality on $[0,1]^3$ follows the identical argument with an additional integration over the vertical coordinate. The hypothesis on the support of $fg$ makes the integral well-defined. \end{proof} \begin{remark} Higher-order commutation-type lemmas will be developed for the case of zero surface tension in Section \ref{L7}. \end{remark} \section{Closed convex set used for the fixed-point for $\sigma >0$} \label{2} In order to construct solutions for our approximate model (\ref{smooth}), we use a topological fixed-point argument which necessitates the use of high-regularity Sobolev spaces. In particular, we shall assume that the initial velocity $u_0$ is in $H^{13.5}(\Omega)$ and that $\Omega$ is of class $C^\infty$; after establishing our result for the smoothed initial domain and velocity, we will show that both $\Omega$ and $u_0$ can be taken with the optimal regularity stated in Theorem \ref{ltheorem1}. For $T>0$, we define the following closed convex set of the Hilbert space $L^2(0,T;H^{13.5}(\Omega))$: \begin{align*} C_T=\{ v \in L^2(0,T;H^{13.5}(\Omega))| \ \sup_{[0,T]} \|v\|_{13.5}\le 2 \|u_0\|_{13.5}+1 \}, \end{align*} It is clear that $C_T$ is non-empty, since it contains the constant (in time) function $u_0$, and is a convex, bounded and closed subset of the separable Hilbert space $L^2(0,T;H^{13.5}(\Omega))$. Let $v\in C_T$ be given, and define $\eta$ by (\ref{leuler.a}), the Bochner integral being taken in the separable Hilbert space $H^{13.5}(\Omega)$. Henceforth, we assume that $T>0$ is given such that independently of the choice of $v\in C_T$, we have the injectivity of $\eta(t)$ on $\overline\Omega$, the existence of a normal vector to $\eta(\Omega,t)$ at any point of $\eta(\Gamma,t)$, and the invertibility of $\nabla\eta(t)$ for any point of $\overline\Omega$ and for any $t\in [0,T]$. Such a condition can be achieved by selecting $T$ small enough so that \begin{align} \|\nabla\eta-\text{Id}\|_{L^\infty(0,T;H^{13.5}(\Omega))}&\le \epsilon_0\,, \label{eta} \end{align} for $\epsilon_0>0$ taken sufficiently small. Condition (\ref{eta}) holds if $T\|\nabla u_0\|_{H^{2}}\le\epsilon_0$. Thus, \begin{equation}\label{aequation} a=[\nabla\eta]^{-1} \end{equation} is well-defined. Then choosing $T>0$ even smaller, if necessary, there exists $\kappa_0>0$ such that for any $\kappa\in (0,\frac{\kappa_0}{2})$, we have the injectivity of $\eta_\kappa(t)$ on $\Omega$ for any $t\in [0,T]$; furthermore, $\nabla\eta_\kappa$ satisfies the condition (\ref{eta}) with $\eta_\kappa$ replacing $\eta$. We let $n_\kappa(\eta_\kappa(x))$ denote the exterior unit normal to $\eta_\kappa(\Omega)$ at $\eta_\kappa(x)$ with $x\in\Gamma$. Our notational convention will be as follows: if we choose $\bar v \in C_T$, then $\bar \eta$ is the flow map coming from (\ref{leuler.a}), and $\bar a$ is the associated pull-back, $\bar a = [\nabla\bar \eta]^{-1}$. Thus, a bar over the velocity field, will imply a bar over the Lagrangian variable and the associated pull-back. 
For a given $v_\kappa$, our notation is as follows: \begin{gather*} \eta_\kappa(t)= \operatorname{Id}+ \int_0^t v_\kappa\ \text{ and } \ \eta_\kappa(0)=\operatorname{Id} \,, \\ a_\kappa = \operatorname{Cof} \nabla \eta_\kappa\,, \ \ J_\kappa = \det \nabla \eta_\kappa \,, \ \ {g_\kappa}_{\alpha\beta} = \partial_\alpha\eta_\kappa \cdot \partial_\beta \eta_\kappa\,. \end{gather*} We take $T$ (which a priori depends on $\kappa$) even smaller if necessary to ensure that for $t\in[0,T]$, \begin{subequations} \label{deteta} \begin{align} \sqrt{ g(t)}^{-1} &\le 2 \sqrt{ g_0}^{-1} \,, \label{deteta.a} \\ \sqrt{ g_\kappa(t)}^{-1} &\le 2 \sqrt {g_0}^{-1} \,, \label{deteta.b}\\ {\frac{1}{2}}\le J_\kappa(t) &\le {\frac{3}{2}} \,.\label{deteta.c} \end{align} \end{subequations} \begin{lemma} \label{smoothvbisk} For $v\in C_T$, and for any $s\ge 0$, we have independently of the choice of $v\in C_T$ that \begin{align*} \sup_{[0,T]}|v_\kappa|_{s} \le C_{\kappa,s} \, P(\|u_0\|_{13.5})\,. \end{align*} \end{lemma} \begin{proof} By the standard properties of the convolution a.e in $[0,T]$: \begin{equation} \label{bis1} |v_\kappa|_s\le [\frac{C}{\kappa^{s-13}}+1] |v|_{13}\le [\frac{C}{\kappa^{s-13}}+1] [2 \|u_0\|_{13.5}+1], \end{equation} where we have used the definition of $C_T$ for the second inequality. \end{proof} Recall that $\{\theta_i\}_{i=1}^K$ is our open cover of $\Gamma$. Given $\bar v \in C_T$, we define the matrix ${{\bar b^l}_\kappa}=[\nabla({{{\bar{\eta}^{\kappa}}}}\circ\theta_l)]^{-1}$, and assume that $T>0$ is sufficiently small so that independently of $\bar v\in C_T$, we have the following determinant-type condition for ${{\bar b^l}_\kappa}$: \begin{equation} \label{matrix} {\frac{1}{2}}\le ({{\bar b^l}_\kappa})_3^3\sum_{i=1}^3 [({{\bar b^l}_\kappa})_i^3]^2,\ \text{in}\ (0,1)^3. \end{equation} Such a condition is indeed possible since at time $t=0$ we have $({{\bar b^l}_\kappa})_3^3\sum_{i=1}^3 [({{\bar b^l}_\kappa})_i^3]^2=1+\psi_l,_1^2+\psi_l,_2^2.$ \section{The smoothed $\kappa$-problem and its linear fixed-point formulation} \label{3} Unlike the case of zero surface tension, for $\sigma>0$ there does not appear to be a simple sequence of approximate problems for the Euler equations (\ref{euler}) which can be solved only with simple transport-type arguments. For the surface tension case, the problem is crucially variational in nature, and the addition of an artificial viscosity term on the boundary $\Gamma$ seems unavoidable in order to be able to construct a sequence of approximate or smoothed solutions. As we shall make precise below, our construction of the approximating sequence of problems is based on smoothing the transport velocity by use of the horizontal convolution by layers (see Definition \ref{smoothv}), and hence smoothing the Lagrangian flow map and associated pull-back. Simultaneously, we introduce a new type of parabolic-type artificial viscosity boundary operator on $\Gamma$ (of the same order in space as the surface tension operator). Note that unlike the case of interface motion in the fluid-structure interaction problem that we studied in \cite{CoSh2005b}, there is not a unique choice of the artificial viscosity term; in particular, other choices of artificial viscosity are possible for the asymptotic limit as the artificial viscosity is taken to zero. We can now define our sequence of smoothed $\kappa$-problems. 
For our artificial viscosity parameter $\kappa \in (0, \frac{\kappa_0}{2})$, let $(v,q)$ be the solution of \begin{subequations} \label{smooth} \begin{alignat}{2} \eta&=\text{Id} + \int_0^t v\ \ \ &&\text{in} \ \Omega \times (0,T]\,, \label{smooth.a}\\ \partial_t v+ J_\kappa^{-1}a_\kappa\,\nabla q&=0 &&\text{in} \ \Omega \times (0,T]\,, \label{smooth.b}\\ \text{Tr}(a_\kappa \, \nabla v) &= 0 &&\text{in} \ \Omega \times (0,T] \,, \label{smooth.c}\\ -\sigma \,\frac{\sqrt{g}}{\sqrt{g_\kappa}} \Delta_g(\eta)\cdot n_\kappa(\eta_\kappa) \, n_\kappa(\eta_\kappa) - \kappa \Delta_0[ v \cdot n_\kappa(\eta_\kappa)]n_\kappa(\eta_\kappa) &= q\, n_\kappa(\eta_\kappa) \ \ &&\text{on} \ \Gamma \times (0,T] \,, \label{smooth.d}\\ (\eta,v) &= (\text{Id}, u_0) &&\text{on} \ \Omega\times\{t=0\} \,, \label{smooth.e} \end{alignat} \end{subequations} where $n_\kappa(\eta_\kappa) = \frac{(a_\kappa)^T N}{|(a_\kappa)^T N|}$, and $\Delta_0= \sqrt{g_\kappa}^{-1} \partial_\alpha [\sqrt{g_0} g_0^{\alpha\beta} \partial_\beta ]$. Note that on $\Gamma$, $ \sqrt{g_\kappa} = |a_\kappa^TN|$, and that $(g_\kappa)_{\alpha\beta} = \eta_\kappa,_\alpha \cdot \eta_\kappa,_\beta$. In order to obtain solutions to the sequence of approximate $\kappa$-problems (\ref{smooth}), we study a linear problem whose fixed-point will provide the desired solutions. If we denote by $\bar v$ an arbitrary element of $C_T$, and ${{{\bar{\eta}^{\kappa}}}}$, ${{\bar a}_\kappa}$, and $\bar J_\kappa$ are the associated smoothed Lagrangian variables given by Definition \ref{smoothv}, then we define $w$ to be the solution of \begin{subequations} \label{smoothlinear} \begin{alignat}{2} \partial_t w+ \bar J_\kappa^{-1}\bar a_\kappa\,\nabla q&=0 &&\text{in} \ \Omega \times (0,T]\,, \label{smoothlinear.b}\\ \text{Tr}(\bar a_\kappa \, \nabla w) &= 0 &&\text{in} \ \Omega \times (0,T] \,, \label{smoothlinear.c}\\ -\sigma \frac{\sqrt{\bar g}}{\sqrt{\bar g_\kappa}} [\Delta_{\bar g}(\bar\eta)\cdot {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})] \, {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) - \kappa \Delta_{\bar 0}[ w \cdot {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]\, {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) &= q\, {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) \ \ &&\text{on} \ \Gamma \times (0,T] \,, \label{smoothlinear.d}\\ (\eta,w) &= (\text{Id}, u_0) &&\text{on} \ \Omega\times\{t=0\} \,, \label{smoothlinear.e} \end{alignat} \end{subequations} where $\bar g_{\alpha\beta}= \bar\eta,_\alpha \cdot \bar\eta,_\beta$, and $\Delta_{\bar 0}= \sqrt{\bar g_\kappa}^{-1} \partial_\alpha [\sqrt{g_0} g_0^{\alpha\beta} \partial_\beta ]$. For a solution $w$ to (\ref{smoothlinear}), a fixed point of the map $\bar v \mapsto w$ provides a solution of our smoothed problem (\ref{smooth}). In the following sections, we assume that $\bar v\in C_T$ is given, and $\kappa$ is in $(0,\frac{\kappa_0}{2})$. Until Section \ref{sec_divcurl}, wherein we study the asymptotic behavior of the problem (\ref{smooth}) as $\kappa\rightarrow 0$, the parameter $\kappa$ is fixed.
\section{Hodge decomposition elliptic estimates}
Our estimates are based on the following standard elliptic estimate:
\begin{proposition}\label{prop1}
For an $H^r$ domain $\Omega$, $r \ge 3$, if $v \in L^2(\Omega)$ with $\operatorname{curl}v \in H^{s-1}(\Omega)$, ${\operatorname{div}}v\in H^{s-1}(\Omega)$, and $v \cdot N|_{\Gamma} \in H^{s -{\frac{1}{2}}}(\Gamma)$ for $1 \le s \le r$, then there exists a constant $C>0$ depending only on $\Omega$ such that
\begin{equation}
\begin{array}{c}
\|v\|_s \le C\left( \|v\|_0 + \|\operatorname{curl} v\|_{s-1} + \|\operatorname{div} v\|_{s-1} + |v \cdot N|_{s-{\frac{1}{2}}}\right)\,, \\
\|v\|_s \le C\left( \|v\|_0 + \|\operatorname{curl} v\|_{s-1} + \|\operatorname{div} v\|_{s-1} + |v \cdot T_\alpha|_{s-{\frac{1}{2}}}\right)\,,
\end{array}
\label{divcurl0}
\end{equation}
where $T_\alpha$, $\alpha=1,2$, are the tangent vectors to $\Gamma$.
\end{proposition}
The first estimate, with $v \cdot N$, is standard (see, for example, \cite{Taylor1996}), while the second, with $v\cdot T_\alpha$, follows from the fact that $T_\alpha \cdot N=0$.
\section{Weak solutions for the penalized problem and their regularity}
\label{4}
The aim of this section is to establish the existence of the solution $w_{\epsilon}$ to the penalized version (of the divergence-free condition) of the linearized and smoothed $\kappa$-problem (\ref{smoothlinear}). In particular, we study the weak form of this problem with the pressure function $q$ approximated by the penalized pressure
$$q^\epsilon =-{\frac{1}{\epsilon}} \text{Tr}( \bar a_\kappa \nabla w) \ \ \text{ for } 0 < \epsilon \ll 1\,.$$
In this section, as well as in Sections \ref{6} and \ref{7}, we let
\begin{equation}
\label{Nu0}
N(u_0,x,y) = P(\|u_0\|_{13.5}, x,y)
\end{equation}
denote a generic polynomial function of $\|u_0\|_{13.5}$, $x$, and $y$, where $x$ and $y$ will typically denote norms of various quantities.
\subsection{Step 1. Galerkin sequence.}
We introduce a basis $(e_l)_{l=1}^{\infty}$ of $H^1 (\Omega)$ and $L^2(\Omega)$, and seek an approximation at rank $l\ge 2$ of the form $\displaystyle w_l (t,x) =\sum_{k=1}^l y_k (t)\ e_k (x)$ satisfying on $[0,T]$ the system of ordinary differential equations
\begin{align*}
&\text{(i)}\ (\bar J_\kappa\ {w_l}_{t}, \phi)_{0} +\kappa [w_l\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_1 -\sigma [L_{\bar g}\bar\eta\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0 \nonumber\\
&\qquad\qquad - ((\bar a_\kappa)_i^j q_l,\ \phi^i,_j)_{0}=0, \ \forall \phi\in span(e_1,...,e_l)\,,\\
&\text{(ii)}\ w_l (0)=(u_0)_l\ \text{in}\ \Omega\ ,
\end{align*}
where $L_{\bar g} = \frac{\sqrt{\bar g}}{\sqrt{g_0}} \Delta_{\bar g}$, $\displaystyle q_l= -\frac{1}{\epsilon} ({{\bar a}_\kappa})_i^j w_l^i,_j$, and $(u_0)_l$ denotes the $L^2(\Omega)$ projection of $u_0$ on $span(e_1,...,e_l)$. The Cauchy-Lipschitz theorem then gives the local well-posedness of this system, and hence the existence and uniqueness of $w_l$, on some time interval $[0,T_{max}]$.
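For the reader's convenience, we record (in a purely formal way) why the system above is indeed a system of ordinary differential equations for the coefficients $y=(y_1,\dots,y_l)$: testing (i) with $\phi=e_m$ and substituting the expression for $q_l$ gives
\begin{equation*}
\sum_{k=1}^l \big(\bar J_\kappa\, e_k,\ e_m\big)_0\ \dot y_k(t) = F_m\big(t, y(t)\big)\,,\qquad m=1,\dots,l\,,
\end{equation*}
where $F_m$ collects the boundary, surface tension and penalization terms, and is affine in $y$ with coefficients continuous in time. Since $\bar J_\kappa\ge\frac{1}{2}$ on $[0,T]$ (cf. (\ref{deteta.c})), the Gram-type matrix $[(\bar J_\kappa e_k,e_m)_0]_{k,m=1}^l$ is symmetric positive definite, so the system can be solved for $\dot y$, and the Cauchy-Lipschitz theorem applies.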
The use of the test function $w_l$ in this system of ODEs (which is allowed as it belongs to $span(e_1,...,e_l)$) gives us in turn the energy law for any $t\in(0,T_{\max})$, \begin{align*} \frac{1}{2} \|{\bar J_\kappa}^{\frac{1}{2}} {w_l}(t)\|^2_{0} + \kappa \int_0^t &[w_l\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),w_l\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_1 + \epsilon \int_0^t\|{q_l}\|_0^2 \nonumber\\ -{\frac{1}{2}}\int_0^t ((\bar J_\kappa)_t w_l, w_l)_0& = \frac{1}{2} \|{(u_0)_l}\|^2_{0} +\sigma\int_0^t [L_{\bar g}\bar\eta\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),w_l\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0, \end{align*} which, with the control of $\bar\eta^\kappa$ provided by the definition of $C_T$, gives the bound \begin{align} \frac{1}{4} \|{w_l}(t)\|^2_{0} &+ C\kappa \int_0^t |w_l\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})|_1^2+ \epsilon \int_0^t\|{q_l}\|_0^2 \le C N(u_0). \label{g1} \end{align} \subsection{Step 2. Weak solution $w_{\epsilon}$ of the penalized problem.} We then infer from (\ref{g1}) that $w_l$ is defined on $[0,T]$, and that there is a subsequence (still denoted with the subscript $l$) satisfying \begin{subequations} \label{g2} \begin{align} w_{l}&\rightharpoonup w_{\epsilon}\ \ \text{in}\ L^2(0,T; L^2(\Omega)), \\ {q_l}&\rightharpoonup {q_{\epsilon}}\ \ \text{in}\ L^2(0,T; L^2(\Omega))\ , \end{align} \end{subequations} where \begin{equation} q_{\epsilon}=-\frac{1}{\epsilon} ({{\bar a}_\kappa})_i^j {w_{\epsilon}},_j^i\ . \end{equation} We can also rewrite (\ref{g2}) as \begin{subequations} \label{g3} \begin{align} w_{l}&\rightharpoonup w_{\epsilon}\ \ \text{in}\ L^2(0,T; L^2(\Omega)), \\ \operatorname{div}( w_l\circ{{{\bar{\eta}^{\kappa}}}}^{-1})({{{\bar{\eta}^{\kappa}}}})&\rightharpoonup \operatorname{div} (w_\epsilon\circ{{{\bar{\eta}^{\kappa}}}}^{-1})({{{\bar{\eta}^{\kappa}}}})\ \ \text{in}\ L^2(0,T; L^2(\Omega))\ \label{g3.b}, \end{align} \end{subequations} which with the bound (\ref{g1}) and the definition of the normal ${\bar{n}^{\kappa}}$ provides \begin{equation} \label{g4} w_l\cdot {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})\rightharpoonup w_{\epsilon}\cdot {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})\ \ \text{in}\ L^2(0,T; H^1(\Gamma)). \end{equation} It follows from standard arguments and the ODE defining $w_l$, that ${w_{\epsilon}}_t\in L^2(0,T;H^{\frac{3}{2}}(\Omega)')$, ${w_{\epsilon}}\in \mathcal{C}^0([0,T];H^{\frac{3}{2}}(\Omega)')$ with ${w_{\epsilon}}(0)=u_0$, and that for $\phi\in L^2(0,T;H^{\frac{3}{2}}(\Omega))$, \begin{align} \int_0^T \langle\bar J_\kappa\, {w_{\epsilon}}_{t},\phi\rangle_{{\frac{3}{2}}} &+\kappa \int_0^T [\partial ({{w_{\epsilon}}}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})),\partial (\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}))]_0 \nonumber\\ &-\int_0^T \langle {q_{\epsilon}},({{\bar a}_\kappa})_i^j \phi^i,_j\rangle_{\frac{1}{2}}= \sigma \int_0^T [L_{\bar g} \bar\eta\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0 \ . \label{wepsilonvar} \end{align} Since by definition ${{\bar a}_\kappa}=\text{Cof}\nabla\bar\eta_\kappa$, this implies that in $\Omega$, \begin{equation} \label{transport} {w_{\epsilon}}_t+\nabla{p_{\epsilon}}({{{\bar{\eta}^{\kappa}}}})=0, \end{equation} where ${p_{\epsilon}}\circ{{{\bar{\eta}^{\kappa}}}}={q_{\epsilon}}$ in $\Omega$. Since $\nabla{p_{\epsilon}}({{{\bar{\eta}^{\kappa}}}})\in L^2(0,T;H^{-1}(\Omega))$, this equality is true in $L^2(0,T;H^{-1}(\Omega))$ as well. 
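The passage from (\ref{wepsilonvar}) to (\ref{transport}) rests on two classical identities for the cofactor matrix, which we record here for the reader's convenience (a formal computation, written with the index conventions used throughout):
\begin{equation*}
\big[({{\bar a}_\kappa})_i^j\big],_j=0 \ \ \text{(the Piola identity)}\,,\qquad ({{\bar a}_\kappa})_i^j\,(\bar\eta_\kappa)^k,_j=\bar J_\kappa\,\delta_i^k\,.
\end{equation*}
Indeed, for $\phi\in\mathcal D(\Omega)$ the boundary terms in (\ref{wepsilonvar}) vanish, and integrating by parts in the pressure term, the first identity gives $-\langle q_{\epsilon},({{\bar a}_\kappa})_i^j\phi^i,_j\rangle=\langle ({{\bar a}_\kappa})_i^j\,{q_{\epsilon}},_j,\ \phi^i\rangle$; the second identity, combined with the chain rule applied to ${q_{\epsilon}}={p_{\epsilon}}\circ{{{\bar{\eta}^{\kappa}}}}$, then gives $({{\bar a}_\kappa})_i^j\,{q_{\epsilon}},_j=\bar J_\kappa\,{p_{\epsilon}},_i({{{\bar{\eta}^{\kappa}}}})$, which yields (\ref{transport}) after division by $\bar J_\kappa$.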
\subsection {Step 3. $w_\epsilon$ is bounded in $L^2(0,T;H^1(\Omega))$ independently of $\epsilon$}
Denoting ${u_{\epsilon}}={w_{\epsilon}}\circ{{{\bar{\eta}^{\kappa}}}}^{-1}$, by taking the curl of (\ref{transport}) (written in the Eulerian variables) and integrating in time from $0$ to $t$, we obtain the important formula
\begin{equation}
\label{g5}
\displaystyle \operatorname{curl}{u_{\epsilon}}({{{\bar{\eta}^{\kappa}}}})=\operatorname{curl} u_0+\int_0^t B({\bar{u}^{\kappa}},{u_{\epsilon}})({{{\bar{\eta}^{\kappa}}}}) \ \ \ \text{ in } L^2(0,T;H^{-1}(\Omega)) \,,
\end{equation}
with
\begin{align*}
B({\bar{u}^{\kappa}},{u_{\epsilon}})&= -({\bar{u}^{\kappa}}^i,_2 {u_{\epsilon}}_3,_i -{\bar{u}^{\kappa}}^i,_3{u_{\epsilon}}_2,_i,\ {\bar{u}^{\kappa}}^i,_3 {u_{\epsilon}}_1,_i -{\bar{u}^{\kappa}}^i,_1{u_{\epsilon}}_3,_i,\ {\bar{u}^{\kappa}}^i,_1 {u_{\epsilon}}_2,_i -{\bar{u}^{\kappa}}^i,_2{u_{\epsilon}}_1,_i).
\end{align*}
\begin{remark}
Note well that our approximated and penalized $\kappa$-problem preserves the structure of the original Euler equations, as can be seen from (\ref{transport}). As a result, (\ref{g5}) contains only first-order derivatives of the velocity.
\end{remark}
Our next task is to prove that $w_\epsilon$ is bounded in $L^2(0,T;H^1(\Omega))$. Once $w_\epsilon$ is known to belong to $L^2(0,T;H^1(\Omega))$, the formula (\ref{g5}), together with the bounds on the divergence of $w_\epsilon$ and on $w_\epsilon \cdot N$ on $\Gamma$, provides bounds for $w_\epsilon$ in $L^2(0,T;H^1(\Omega))$ (by the Hodge elliptic estimate (\ref{divcurl0})) which are independent of $\epsilon >0$. We proceed by showing that appropriately convolved velocity fields are bounded in $L^2(0,T;H^1(\Omega))$ independently of the parameter of convolution. This is the first instance in which our horizontal convolution by layers is crucially required.
\subsubsection{ For any subdomain $\omega \subset\subset \Omega$, $w_\epsilon \in L^2(0,T;H^1(\omega))$}\label{6.3.1}
We analyze the third component of (\ref{g5}), the other components being treated similarly. This leads us to the following equality in $L^2(0,T;H^{-1}(\Omega))$:
\begin{align*}
({{\bar a}_\kappa})_2^j {w_{\epsilon}},_j^1-({{\bar a}_\kappa})_1^j {w_{\epsilon}},_j^2=-(\operatorname{curl} u_0)^3+\int_0^t [\ -\bar v,_j^i ({{\bar a}_\kappa})_2^j{w_{\epsilon}},_l^1({{\bar a}_\kappa})_i^l+\bar v,_j^i ({{\bar a}_\kappa})_1^j{w_{\epsilon}},_l^2({{\bar a}_\kappa})_i^l] .
\end{align*}
Our goal here is to prove that $w_\epsilon \in H^1(\omega)$ for any $\omega\subset\subset\Omega$. To proceed, we let $\sigma_p$ denote a standard sequence of Friedrichs mollifiers in ${\mathbb R}^3$ with support in $B(0, 1/p)$, and establish that $\sigma_p \star w_\epsilon$ is bounded in $H^1(\omega)$ independently of $p$. For this purpose, we choose $\psi\in\mathcal{D}(\Omega)$, and find that
\begin{align}
({{\bar a}_\kappa})_2^j (\psi{w_{\epsilon}}),_j^1-({{\bar a}_\kappa})_1^j (\psi{w_{\epsilon}}),_j^2=&-\psi(\operatorname{curl} u_0)^3+({{\bar a}_\kappa})_2^j \psi,_j{w_{\epsilon}}^1-({{\bar a}_\kappa})_1^j \psi,_j{w_{\epsilon}}^2\nonumber\\
&+\int_0^t [\ -\bar v,_j^i ({{\bar a}_\kappa})_2^j(\psi{w_{\epsilon}}),_l^1({{\bar a}_\kappa})_i^l+\bar v,_j^i ({{\bar a}_\kappa})_1^j(\psi{w_{\epsilon}}),_l^2({{\bar a}_\kappa})_i^l] \nonumber\\
&-\int_0^t [\ -\bar v,_j^i ({{\bar a}_\kappa})_2^j\psi,_l{w_{\epsilon}}^1({{\bar a}_\kappa})_i^l+\bar v,_j^i ({{\bar a}_\kappa})_1^j\psi,_l{w_{\epsilon}}^2({{\bar a}_\kappa})_i^l].
\label{g6}
\end{align}
In order to proceed, we shall need to identify curl-type structures (in Lagrangian variables) for $\sigma_p \star w_\epsilon$; this requires the following: for $\displaystyle\frac{1}{p}\le \text{dist}(\text{supp}\psi,\Omega^c)$, and $f \in C^\infty(\Omega)$, we have the equality in $H^{-1} (\Omega)$
\begin{align*}
\sigma_p\star [f(\psi{w_{\epsilon}}),_j]-f \sigma_p\star [(\psi{w_{\epsilon}}),_j]=\int_{\mathbb R^3} (\sigma_p),_j (x-y) (f(y)-f(x)) \psi {w_{\epsilon}}(y) dy\\
-\int_{\mathbb R^3} \sigma_p (x-y) f,_j(y) \psi {w_{\epsilon}}(y) dy\,,
\end{align*}
showing that $\sigma_p\star [f(\psi{w_{\epsilon}}),_j]-f\ \sigma_p\star [(\psi{w_{\epsilon}}),_j]\in L^2(\Omega)$, with
\begin{align}
\bigl\|\sigma_p\star [f(\psi{w_{\epsilon}}),_j]-f\ \sigma_p\star [(\psi{w_{\epsilon}}),_j]\bigr\|_0\le C [\ \|\sigma,_j\|_{0,\mathbb R^3}+\|\sigma\|_{0,\mathbb R^3}] \|\nabla f\|_{L^\infty(\Omega)} \|{w_{\epsilon}}\|_0.
\label{g7}
\end{align}
We thus infer from (\ref{g6}) and (\ref{g7}) that the vorticity structure satisfies
\begin{align}
({{\bar a}_\kappa})_2^j \sigma_p\star(\psi{w_{\epsilon}}),_j^1-({{\bar a}_\kappa})_1^j \sigma_p\star(\psi{w_{\epsilon}}),_j^2=& \int_0^t [\ -\bar v,_j^i ({{\bar a}_\kappa})_2^j\sigma_p\star(\psi{w_{\epsilon}}),_l^1({{\bar a}_\kappa})_i^l\nonumber\\
&+\bar v,_j^i ({{\bar a}_\kappa})_1^j\sigma_p\star(\psi{w_{\epsilon}}),_l^2({{\bar a}_\kappa})_i^l] +R_1,
\label{g8}
\end{align}
with $\|R_1\|_{L^2(0,T;L^2(\Omega))}\le N(u_0)$, where $N(u_0)$ is defined in (\ref{Nu0}). Next, we infer from (\ref{g3}) and (\ref{g7}) that the divergence structure satisfies
\begin{align}
({{\bar a}_\kappa})_i^j \sigma_p\star(\psi{w_{\epsilon}}),_j^i=R_2,
\label{g9}
\end{align}
with $\|R_2\|_{L^2(0,T;L^2(\Omega))}\le N(u_0).$
Since we also have $\psi{w_{\epsilon}}=0$ on $\Gamma$, the estimate (\ref{divcurl0}) then gives, a.e. in $(0,T)$,
\begin{align*}
\|\sigma_p\star(\psi{w_{\epsilon}})(t)\|_1 \le \|R_1(t)\|_0+\|R_2(t)\|_0+N(u_0)\int_0^t \|\sigma_p\star(\psi{w_{\epsilon}})\|_1 \,,
\end{align*}
and thus
\begin{align}
\int_0^T \|\sigma_p\star(\psi{w_{\epsilon}})\|_1^2\le N(u_0).
\label{g10}
\end{align}
Since this inequality does not depend on $p$, this implies that $\psi {w_{\epsilon}}\in L^2(0,T;H^1(\Omega))$, and therefore ${w_{\epsilon}}\in L^2(0,T;H^1(\omega))$, with an estimate depending {\it a priori} on $\omega\subset\subset \Omega$.
\subsubsection{ The horizontal convolved-by-layers velocity fields are in $L^2(0,T;H^1(\Omega))$}\hfill\break
Fix $l\in \{1,...,K\}$, and set
$$W(l)=w_\epsilon\circ\theta_l \ \text{ and } \ {{\bar b^l}_\kappa}=[\nabla({\bar\eta}_\kappa\circ\theta_l)]^{-1}.$$
Hence, in $(0,1)^2\times(\frac{1}{p},1)$ for $p>1$, the Lagrangian ``divergence-free'' constraint is given by
\begin{align}
({{\bar b^l}_\kappa})_i^j (\alpha_l W(l)),_j^i=-({{\bar b^l}_\kappa})_i^j \alpha_l,_j\ W(l)^i - \alpha_l\epsilon\,q_\epsilon(\theta_l)\,,
\label{divcond}
\end{align}
where the crucial observation is that the right-hand side of (\ref{divcond}) is in $L^2(0,T; L^2((0,1)^3))$.
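In the computations that follow, $\rho_m\star_h$ denotes a horizontal convolution by layers (the operation underlying Definition \ref{smoothv}, here with mollifying parameter $m$); we recall, for the reader's convenience, the formal expression in which it is used below:
\begin{equation*}
(\rho_m\star_h F)(x_H,x_3)=\int_{\mathbb R^2}\rho_m(x_H-y_H)\,F(y_H,x_3)\,dy_H\,,\qquad x=(x_H,x_3)\in(0,1)^3\,,
\end{equation*}
so that the smoothing acts layer by layer in $x_3$ and only in the horizontal directions; in particular, it commutes with $\partial_3$.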
Now for $\displaystyle\frac{1}{m}\le \text{dist}(\text{supp}\,\alpha_l(\theta_l),\partial (0,1)^2 \times(0,1))$, and $f$ smooth in $[0,1]^3$, we have by Lemma \ref{commutator} that for $\beta=1,2$, $\rho_m\star_h [f(\alpha_l W(l)),_\beta]-f \rho_m\star_h [(\alpha_l W(l)),_\beta]\in L^2((0,1)^2)$, with the estimate a.e. in $\displaystyle(\frac{1}{p},1)$:
\begin{align*}
\bigl|\rho_m\star_h [f(\alpha_l W(l)),_\beta]- f\ \rho_m\star_h &[(\alpha_l W(l)),_\beta]\bigr|_{0,(0,1)^2\times\{y\}}\nonumber\\
&\le C_\rho |\nabla f|_{L^\infty((0,1)^2\times\{y\})} \ |W(l)|_{0,(0,1)^2\times\{y\}}.
\end{align*}
This leads to
\begin{align}
\bigl\|\rho_m\star_h [f(\alpha_l W(l)),_\beta]- f\ \rho_m\star_h [(\alpha_l W(l)),_\beta]&\bigr\|_{0,(0,1)^3}\nonumber\\
&\le C_\rho \|\nabla f\|_{L^\infty((0,1)^3)} \|W(l)\|_{0,(0,1)^3}.
\label{g12}
\end{align}
Now, for the case of the vertical derivative, we will need to express $W(l),_3$ in terms of $W(l),_1$, $W(l),_2$, $\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)$ and $\operatorname{div}_{\bar\eta_\kappa\circ\theta_l} W(l)$, where
\begin{align*}
\operatorname{div}_{\bar\eta_\kappa\circ\theta_l} W(l)&=({{\bar b^l}_\kappa})_i^j W(l),_j^i,\\
\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l}^1 W(l)&=({{\bar b^l}_\kappa})^i_2 W(l),_i^3-({{\bar b^l}_\kappa})^i_3 W(l),_i^2,\\
\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l}^2 W(l)&=({{\bar b^l}_\kappa})^i_3 W(l),_i^1-({{\bar b^l}_\kappa})^i_1 W(l),_i^3,\\
\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l}^3 W(l)&=({{\bar b^l}_\kappa})^i_1 W(l),_i^2-({{\bar b^l}_\kappa})^i_2 W(l),_i^1.
\end{align*}
Notice that the first three identities above can be written in vector form as
\begin{align*}
(\operatorname{div}_{\bar\eta_\kappa\circ\theta_l} W(l), \operatorname{curl}_{\bar\eta_\kappa\circ\theta_l}^1 W(l), \operatorname{curl}_{\bar\eta_\kappa\circ\theta_l}^2 W(l))=\sum_{i=1}^3 M_i^\kappa W(l),_i,
\end{align*}
where the $M_i^\kappa$ are smooth matrix fields depending on ${{\bar b^l}_\kappa}$. From condition (\ref{matrix}), since
$$\det M_3^\kappa= ({{\bar b^l}_\kappa})_3^3\ \sum_{i=1}^3 [({{\bar b^l}_\kappa})_i^3]^2\ge {\frac{1}{2}},$$
we see that $M_3^\kappa$ is invertible on $[0,T]$ (regardless of the choice of $\bar v\in C_T$). Therefore,
\begin{align}
W(l),_3=\operatorname{div}_{\bar\eta_\kappa\circ\theta_l} W(l)\ V^\kappa+M^\kappa \operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)+\sum_{i=1}^2 A_i^\kappa W(l),_i,
\label{g13}
\end{align}
where $M^\kappa$ and the $A_i^\kappa$ are smooth matrix fields depending on ${{\bar b^l}_\kappa}$, and $V^\kappa$ is a vector field depending on ${{\bar b^l}_\kappa}$. From (\ref{g5}), we have that
\begin{align*}
\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)=\operatorname{curl} u_0(\theta_l) +\sum_{i=1}^3 \int_0^t N_i^\kappa W(l),_i,
\end{align*}
where the $N_i^\kappa$ are smooth matrix fields depending on ${{\bar b^l}_\kappa}$. By using (\ref{g13}) and the fact that $\operatorname{div}_{\bar\eta_\kappa\circ\theta_l} W(l)\in L^2(0,T; L^2(\Omega))$ from (\ref{g3}), we obtain after time differentiating that
\begin{align*}
[\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)]_t-N_3^\kappa M^\kappa \operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)=\sum_{\beta=1}^2 P_\beta^\kappa W(l),_\beta + N^\kappa_3 V^\kappa \operatorname{div}_{\bar\eta_\kappa\circ\theta_l} W(l),
\end{align*}
where $P_\beta^\kappa$, $\beta=1,2$, are smooth matrix fields depending on ${{\bar b^l}_\kappa}$.
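At each fixed point of $(0,1)^3$, the relation above is a linear ordinary differential equation in time for $\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)$. We recall, in a purely formal way, the representation formula used to integrate it: if $X_t-C(t)X=F(t)$ with $X(0)=X_0$, then
\begin{equation*}
X(t)=A(t)\,X_0+A(t)\int_0^t A(s)^{-1}F(s)\,ds\,,\qquad A_t=C\,A\,,\ \ A(0)=\operatorname{Id}\,,
\end{equation*}
and it is in this sense that the smooth matrix fields $A^\kappa$ and $B_\beta^\kappa$ appearing in the next formula are constructed from ${{\bar b^l}_\kappa}$ (the initial value being $\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)(0)=\operatorname{curl}u_0(\theta_l)$, since $\bar\eta_\kappa(0)=\operatorname{Id}$ and $w_\epsilon(0)=u_0$).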
Therefore,
\begin{align}
\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} W(l)=A^\kappa\operatorname{curl} u_0(\theta_l) +A^\kappa \int_0^t( B_\beta^\kappa W(l),_\beta + \operatorname{div}_{\bar\eta_\kappa\circ\theta_l} W(l))
\label{g14}
\end{align}
where $A^\kappa$ and $B_\beta^\kappa$, $\beta=1,2$, are smooth matrix fields depending on ${{\bar b^l}_\kappa}$. With (\ref{g12}) and (\ref{g14}), we infer, in the same way as for (\ref{g9}), that on $(0,1)^2\times(\frac{1}{p},1)$ we have
\begin{align}
\operatorname{curl}_{\bar\eta_\kappa\circ\theta_l} \rho_m\star_h [\alpha_l W(l)]= A^\kappa\sum_{\beta=1}^2 \int_0^t B_\beta^\kappa \rho_m\star_h[\alpha_l(\theta_l) W(l)],_\beta+R_3,
\label{g15}
\end{align}
with $\|R_3\|_{L^2(0,T;L^2((0,1)^3))}\le N(u_0)$. Therefore, with (\ref{g13}) and (\ref{g14}), we have that
\begin{align}
[\alpha_l(\theta_l) W(l)],_3= M^\kappa A^\kappa\sum_{\beta=1}^2 \int_0^t B_\beta^\kappa [\alpha_l(\theta_l) W(l)],_\beta+\sum_{\beta=1}^2 A_\beta^\kappa [\alpha_l(\theta_l)W(l)],_\beta+R_4,
\label{g16}
\end{align}
with $\|R_4\|_{L^2(0,T;L^2((0,1)^3))}\le N(u_0)$. Thus, since $\alpha_l(\theta_l)$ vanishes near $\{x_3=1\}$, for any test function $\varphi\in H^1((0,1)^3)$,
\begin{align*}
\int_{(0,1)^2\times\frac{1}{p}} \alpha_l(\theta_l) W(l)\cdot\varphi= -\int_{(0,1)^2\times(\frac{1}{p},1)} [\alpha_l(\theta_l) W(l)],_3 \cdot\varphi -\int_{(0,1)^2\times(\frac{1}{p},1)} \alpha_l(\theta_l) W(l) \cdot\varphi,_3.
\end{align*}
Now, since for $\beta=1,2$, we have
\begin{align*}
\int_{(0,1)^2\times(\frac{1}{p},1)} [\alpha_l(\theta_l) W(l)],_\beta \cdot\varphi= -\int_{(0,1)^2\times(\frac{1}{p},1)} [\alpha_l(\theta_l) W(l)] \cdot\varphi,_\beta,
\end{align*}
using (\ref{g16}), we infer that
\begin{align*}
\bigr|\int_{(0,1)^2\times\frac{1}{p}} \alpha_l(\theta_l) W(l)\cdot\varphi\bigr|\le C\ (\|W(l)\|_{0,(0,1)^3}+\|R_4\|_{0,(0,1)^3})\|\varphi\|_{1,(0,1)^3},
\end{align*}
implying (independently of $p>1$) the following trace estimate for $W(l)$ (not just its normal component):
\begin{equation}
\label{g17}
\int_0^T |\alpha_l(\theta_l)\ W(l)|^2_{-{\frac{1}{2}},(0,1)^2\times\frac{1}{p}}\le N(u_0).
\end{equation}
Similarly to (\ref{g15}), we also have the divergence relation
\begin{align}
\operatorname{div}_{\bar\eta_\kappa\circ\theta_l} \rho_m\star_h [\alpha_l W(l)]= C^\kappa\sum_{\beta=1}^2 \int_0^t D_\beta^\kappa \rho_m\star_h[\alpha_l(\theta_l) W(l)],_\beta+R_5,
\label{g18}
\end{align}
with $\|R_5\|_{L^2(0,T;L^2((0,1)^3))}\le N(u_0)$, where $C^\kappa$ and $D_\beta^\kappa$, $\beta=1,2$, are smooth matrix fields depending on ${{\bar b^l}_\kappa}$. From (\ref{g15}) and (\ref{g18}), we then infer, just as in (\ref{g10}), that
\begin{align*}
\int_0^T \|\rho_m\star_h [\alpha_l W(l)]\circ(\bar\eta_\kappa\circ\theta_l)^{-1} \|^2_{1,\Omega_p^l}\le&\ N(u_0) +\int_0^T | \rho_m\star_h [\alpha_l W(l)]\circ(\bar\eta_\kappa\circ\theta_l)^{-1}\cdot {\bar{n}^{\kappa}} |^2_{{\frac{1}{2}},\partial\Omega_p^l},
\end{align*}
where $\Omega_p^l=\theta_l((0,1)^2\times(\frac{1}{p},1))$. Thus,
\begin{align*}
\int_0^T \| \rho_m\star_h [\alpha_l(\theta_l) W(l)] \|^2_{1,(0,1)^2\times(\frac{1}{p},1)}\le&\ N(u_0) +\int_0^T |\rho_m\star_h [\alpha_l(\theta_l) W(l)]|^2_{{\frac{1}{2}},(0,1)^2\times\frac{1}{p}}.
\end{align*} Now, from the properties of the convolution, \begin{align*} \frac{1}{m} \left|\rho_m\star_h [\alpha_l(\theta_l) W(l)] \, \right|_{{\frac{1}{2}},(0,1)^2\times\frac{1}{p}} \le C \left|\rho_m\star_h [\alpha_l(\theta_l) W(l)] \, \right|_{-{\frac{1}{2}},(0,1)^2\times\frac{1}{p}}, \end{align*} which, with (\ref{g17}), leads us (independently of $p>1$) to \begin{align*} \frac{1}{m^2} \int_0^T \| \rho_m\star_h [\alpha_l(\theta_l) W(l)] \|^2_{1,(0,1)^2\times(\frac{1}{p},1)}\le N(u_0), \end{align*} for any $0<\displaystyle\frac{1}{m}\le \text{dist}(\text{supp}\alpha_l(\theta_l), \partial (0,1)^2\times(0,1))$. Since this estimate holds for any $p>1$, we then infer that \begin{align} \frac{1}{m^2} \int_0^T \| \rho_m\star_h [\alpha_l(\theta_l) W(l)] \|^2_{1,(0,1)^3}\le N(u_0), \label{g19} \end{align} for any $0<\displaystyle\frac{1}{m}\le\text{dist}(\text{supp}\alpha_l(\theta_l),\partial(0,1)^2\times(0,1))$. Therefore, $\rho_m\star_h [\alpha_l(\theta_l) W(l)]\in H^1((0,1)^3)$ (which was not a priori known since our convolution smooths only in the horizontal directions), with a bound depending {\it a priori} on $m$. \subsubsection{Control of the horizontal convolved-by-layers velocity fields independently of $m$}\hfill\break From (\ref{g15}) and (\ref{g18}), we infer that \begin{align*} &\int_0^T \|\rho_m\star_h [\alpha_l W(l)]\circ(\bar\eta_\kappa\circ\theta_l)^{-1} \|^2_{1,\bar\eta_\kappa(\Omega)} \\ & \qquad\qquad\qquad\qquad \le\ N(u_0) + \int_0^T | \rho_m\star_h [\alpha_l(\theta_l) W(l)] \circ(\bar\eta_\kappa\circ\theta_l)^{-1}\cdot {\bar{n}^{\kappa}} |^2_{{\frac{1}{2}},\partial\bar\eta_\kappa(\Omega)}, \end{align*} and thus, \begin{align} &\int_0^T \| \rho_m \star_h [\alpha_l(\theta_l) W(l)] \|^2_{1,(0,1)^3} \label{g20} \\ & \qquad\qquad \le\ N(u_0)\nonumber +\int_0^T | \rho_m\star_h [\alpha_l(\theta_l) W(l)] \cdot {\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)|^2_{{\frac{1}{2}},(0,1)^2\times\{0\}}. \nonumber \end{align} Next, we have for any $x\in (0,1)^2\times\{0\}$: \begin{align*} \rho_m\star_h [\alpha_l(\theta_l) W(l)]\cdot {\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)(x)&= \rho_m\star_h [\alpha_l(\theta_l) W(l)\cdot {\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)](x) + f(x), \end{align*} with \begin{align*} f(x)=\int_{\mathbb R^2} \rho_m(x_H-y_H) \alpha_l(\theta_l) W(l)(y_H,x_3)\cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)(x_H,x_3)-{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)(y_H,x_3)] dy_H. \end{align*} Therefore, with (\ref{g20}), we obtain \begin{align} &\int_0^T \|\rho_m\star_h [\alpha_l(\theta_l) W(l)] \|^2_{1,(0,1)^3} \nonumber\\ &\qquad \le N(u_0) +|f|_{{\frac{1}{2}},(0,1)^2\times\{0\}} +\int_0^T \left| \rho_m\star_h [\alpha_l(\theta_l) W(l)\cdot {\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)]\right|^2_{{\frac{1}{2}},(0,1)^2\times\{0\}}\nonumber\\ &\qquad \le N(u_0) +|f|_{{\frac{1}{2}},(0,1)^2\times\{0\}} +\int_0^T | \alpha_l(\theta_l) W(l)\cdot {\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)| ^2_{{\frac{1}{2}},(0,1)^2\times\{0\}}\nonumber\\ &\qquad\le N(u_0) +|f|_{{\frac{1}{2}},(0,1)^2\times\{0\}}+\int_0^T | \alpha_l w_\epsilon\cdot {\bar{n}^{\kappa}}(\bar\eta_\kappa)|^2_{{\frac{1}{2}},\Gamma}\nonumber\\ &\qquad\le N(u_0) +|f|_{{\frac{1}{2}},(0,1)^2\times\{0\}}, \label{g21} \end{align} where we have used the trace control (\ref{g4}) for the last inequality in (\ref{g21}). We now turn our attention to $|f|_{{\frac{1}{2}},(0,1)^2\times\{0\}}$. 
We first have that \begin{align} \|f\|_{0,(0,1)^3}&\le \frac{C}{m} \|{\bar{n}^{\kappa}}(\bar\eta_\kappa)\|_{H^3(\Omega)} \|\rho_m\star_h\alpha_l(\theta_l)|W(l)|\|_{0,(0,1)^3} \le \frac{C}{m} N(u_0), \label{g22} \end{align} where we have used the definition of $C_T$ to bound $\|{\bar{n}^{\kappa}}(\bar\eta_\kappa)\|_{H^3(\Omega)}$. Next, we have for $\beta=1,2$ that \begin{align*} f,_\beta(x)&=\int_{\mathbb R^2} \rho_m,_\beta(x_H-y_H) \alpha_l(\theta_l) W(l)(y_H,x_3)\cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)(x_H,x_3)-{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)(y_H,x_3)] dy_H\\ &\ \ \ + \int_{\mathbb R^2} \rho_m(x_H-y_H) \alpha_l(\theta_l) W(l)(y_H,x_3) dy \cdot {\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l),_\beta(x), \end{align*} showing that \begin{align} \|f,_\beta\|_{0,(0,1)^3} &\le \frac{C}{m} \|{\bar{n}^{\kappa}}(\bar\eta_\kappa)\|_{H^3(\Omega)}\sum_{i=1}^3 \|\, |\rho_m,_\beta|\, \star_h\alpha_l(\theta_l)|W(l)^i|\|_{0,(0,1)^3} \nonumber \\ & \qquad\qquad\qquad\qquad + \|{\bar{n}^{\kappa}}(\bar\eta_\kappa)\|_{H^3(\Omega)}\sum_{i=1}^3\|\rho_m\star_h\alpha_l(\theta_l)|W(l)^i|\|_{0,(0,1)^3}\nonumber\\ &\le C \|{\bar{n}^{\kappa}}(\bar\eta_\kappa)\|_{H^3(\Omega)}\sum_{i=1}^3\||(\rho,_\beta)_m| \star_h\alpha_l(\theta_l)|W(l)^i|\|_{0,(0,1)^3} \nonumber\\ &\qquad\qquad\qquad\qquad + \|{\bar{n}^{\kappa}}(\bar\eta_\kappa)\|_{H^3(\Omega)}\sum_{i=1}^3\|\rho_m \star_h\alpha_l(\theta_l)|W(l)^i|\|_{0,(0,1)^3}\nonumber\\ &\le C \sum_{i=1}^3\||(\rho,_\beta)_m| \star_h\alpha_l(\theta_l)|W(l)^i|\|_{0,(0,1)^3} + C \sum_{i=1}^3\|\rho_m\star_h\alpha_l(\theta_l)|W(l)^i|\|_{0,(0,1)^3}\nonumber\\ &\le C \|\alpha_l(\theta_l)W(l)\|_{0,(0,1)^3} \le N(u_0). \label{g23} \end{align} Next, for the vertical derivative, \begin{align*} f,_3(x)&=\int_{\mathbb R^2} \rho_m(x_H-y_H) [\alpha_l(\theta_l) W(l)],_3(y_H,x_3)\cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)]_{(y_H,x_3)}^{(x_H,x_3)} dy_H\\ &\ \ \ + \int_{\mathbb R^2} \rho_m(x_H-y_H) [\alpha_l(\theta_l) W(l)](y_H,x_3)\cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l),_3]_{(y_H,x_3)}^{(x_H,x_3)} dy_H\,, \end{align*} where $[\cdot ]_{(y_H,x_3)}^{(x_H,x_3)} = [\cdot](x_H,x_3)-[\cdot](y_H,x_3)$. Notice that for a smooth matrix field $A$ in $(0,1)^3$ and for $\beta=1,2$, \begin{align*} G(x)&=\int_{\mathbb R^2} \rho_m(x_H-y_H) A(y_h,x_3) [\alpha_l(\theta_l) W(l)],_\beta(y_H,x_3) \cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)]_{(y_H,x_3)}^{(x_H,x_3)} dy_H, \end{align*} satisfies \begin{align*} G(x)=&-m\int_{\mathbb R^2} (\rho,_\beta)_m(x_H-y_H) A(y_h,x_3) [\alpha_l(\theta_l) W(l)](y_H,x_3) \cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)]_{(y_H,x_3)}^{(x_H,x_3)} dy_H\\ &-\int_{\mathbb R^2} \rho_m(x_H-y_H) A,_\beta(y_h,x_3) [\alpha_l(\theta_l) W(l)](y_H,x_3) \cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l)]_{(y_H,x_3)}^{(x_H,x_3)} dy_H\\ &-\int_{\mathbb R^2} \rho_m(x_H-y_H) A(y_h,x_3) [\alpha_l(\theta_l) W(l)](y_H,x_3) \cdot[{\bar{n}^{\kappa}}(\bar\eta_\kappa\circ\theta_l),_\beta]_{(y_H,x_3)}^{(x_H,x_3)} dy_H, \end{align*} showing, just as for (\ref{g23}), that $\|G\|_{0,(0,1)^3}\le N(u_0)$. Therefore, with (\ref{g16}), we see that the first integral term appearing in the expression of $f,_3$ is bounded in a similar way, implying that \begin{align} \|f,_3\|_{0,(0,1)^3}\le N(u_0). \label{g24} \end{align} Consequently, with (\ref{g22}), (\ref{g23}), (\ref{g24}), we obtain that \begin{equation} \label{g25} \|f\|_{1,(0,1)^3}\le N(u_0). 
\end{equation} Therefore, (\ref{g21}) implies that \begin{align} \label{g25'} \int_0^T \|\rho_m\star_h [\alpha_l(\theta_l) W(l)] \|^2_{1,(0,1)^3}&\le N(u_0) . \end{align} \subsubsection{Control of $w_\epsilon$ in $L^2(0,T;H^1(\Omega))$}\hfill\break Since (\ref{g25'}) holds independently of $m$ sufficiently large, this implies that \begin{align*} \int_0^T \|\alpha_l(\theta_l) W(l) \|^2_{1,(0,1)^3}&\le N(u_0) . \end{align*} Since we proved in subsection \ref{6.3.1} that $w_\epsilon$ is bounded in $L^2(0,T;H^1(\omega))$ independently of $\epsilon$ for each domain $\omega\subset\subset \Omega$, this provides us with the estimate \begin{align} \int_0^T \|w_\epsilon \|^2_{1}&\le N(u_0), \end{align} independently of $\epsilon>0$. \begin{remark} In the two-dimensional case, a simpler proof of Step 3 is possible, founded upon a scalar potential function for the velocity field. For conciseness, we consider a simply-connected domain, the non-simply connected case being treated similarly by local charts. Once again, we let ${u_{\epsilon}}={w_{\epsilon}}\circ{{{\bar{\eta}^{\kappa}}}}^{-1}$. From (\ref{g3.b}) and (\ref{g4}), let $w^\epsilon_{\tau}\in L^2(0,T;H^1(\Omega))$ such that \begin{align*} \operatorname{div}(w^\epsilon_{\tau}({{{\bar{\eta}^{\kappa}}}}^{-1}))({{{\bar{\eta}^{\kappa}}}})&= \operatorname{div}({w_{\epsilon}}({{{\bar{\eta}^{\kappa}}}}^{-1}))({{{\bar{\eta}^{\kappa}}}}) \ \ \text{in}\ L^2(0,T; L^2(\Omega)), \\ w^\epsilon_{\tau}\cdot {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})&= {w_{\epsilon}}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})\ \ \text{in}\ L^2(0,T; H^1(\Gamma))\ . \end{align*} We infer the existence of $\psi^\epsilon\in L^2(0,T;H^1_0({{{\bar{\eta}^{\kappa}}}}(\Omega)))$ such that ${u_{\epsilon}}=w^\epsilon_{\tau}({{{\bar{\eta}^{\kappa}}}}^{-1})+(-\psi^\epsilon,_2,\psi^\epsilon,_1)$. Now, from (\ref{g6}), we see that in $L^2(0,T;H^{-1}(\Omega))$, we have for $\bar\psi^\epsilon=\psi^\epsilon\circ{{{\bar{\eta}^{\kappa}}}}$, \begin{equation} \label{g6bis} \displaystyle -(\bar{a}_\kappa)_i^k ((\bar{a}_\kappa)_i^j \bar\psi^\epsilon,_j),_k=f^\epsilon_{\tau}-\int_0^t A^\kappa_{ij} \bar\psi^\epsilon,_{ij}, \end{equation} where $f^\epsilon_\tau$ is bounded in $L^2(0,T;L^2(\Omega))$. It is readily seen that $\bar\psi^\epsilon$ is the unique solution of this equation in $L^2(0,T;H^1_0(\Omega))$. We now establish that this uniqueness provides extra regularity for $\bar\psi^\epsilon$. By defining the mapping $\Theta$ from $L^2(0,T;H^2(\Omega)\cap H^1_0(\Omega))$ into itself by associating to any $\xi$ in this space, the solution $\Theta\xi$ (for almost all $t\in[0,T]$) of \begin{equation*} \displaystyle -({{\bar a}_\kappa})_i^k (({{\bar a}_\kappa})_i^j \Theta\xi,_j),_k=f_{\tau}-\int_0^t A^\kappa_{ij} \xi,_{ij} , \end{equation*} we see that for $t_1$ small enough (depending on Sobolev constants and on $\|{\bar{u}^{\kappa}}\|_{L^\infty(0,T;H^3(\Omega))}$) $\Theta$ is contractive from $L^2(0,t_1;H^2(\Omega)\cap H^1_0(\Omega))$ into itself, which provides a fixed-point for $\Theta$ in this space. It is thus a solution of (\ref{g6bis}) on $[0,t_1]$. By uniqueness of such a solution, we have that $\bar\psi^\epsilon\in L^2(0,t_1;H^2(\Omega))$ and thus that ${w_{\epsilon}}\in L^2(0,t_1;H^1(\Omega))$. 
By defining a mapping similar to $\Theta$, but this time starting from $\displaystyle t_2\in [\frac{t_1}{2},t_1]$ such that ${w_{\epsilon}}(t_2)\in H^1(\Omega)$ instead of $u_0$ (which ensures that the new $f_\tau$ is still in $L^2(0,t_2;L^2(\Omega))$), we obtain the same conclusion on $[t_2,t_2+t_1]$, leading us to $\displaystyle {w_{\epsilon}}\in L^2(0,{\frac{3}{2}} t_1;H^1(\Omega))$. By induction, we then find ${w_{\epsilon}}\in L^2(0,T;H^1(\Omega))$. \end{remark} \begin{remark} Whereas Hodge decompositions with vector potentials $\psi$ are possible in higher dimension, it turns out that a Dirichlet condition $\psi=0$ for the associated elliptic problem is not possible. This in turn is problematic for any uniqueness argument in $L^2(0,T;H^1(\Omega))$ for $\psi$, since it does not seem possible to find a boundary condition that would be naturally associated to the second order operators appearing on both sides of the three-dimensional analogous of (\ref{g6bis}). \end{remark} \section{Pressure as a Lagrange multiplier} \label{5} We will need two Lagrange multiplier lemmas for our pressure function in our analysis as the penalization parameter $\epsilon\rightarrow 0$. We begin with a lemma that is necessary for a new Hodge-type decomposition of the velocity field. \begin{lemma}\label{lemma_lagrange} For all $l \in H^{\frac{1}{2}}(\Omega)$, $t\in [0,T]$, there exists a constant $C>0$ and $\phi(l) \in H^{\frac{3}{2}}(\Omega)$ such that $({{\bar a}_\kappa})_i^j (t) \phi^i,_j =l$ in $\Omega$ and \begin{equation}\label{v-p} \|\phi(l)\|^2_{{\frac{3}{2}}} \le C\|l\|^2_{{\frac{1}{2}}}. \end{equation} \end{lemma} \begin{proof} Let $\psi(l)$ be the solution of \begin{subequations} \label{laplacien} \begin{align} ({{\bar a}_\kappa})_i^j[ ({{\bar a}_\kappa})_i^k \psi(l),_k],_j&=l\ \text{in}\ \Omega\\ \psi(l)&=0\ \text{on}\ \Gamma. \end{align} \end{subequations} We then see that $\phi^i(l)=({{\bar a}_\kappa})_i^j \psi(l),_j$ satisfies the statement of the lemma. The inequality (\ref{v-p}) is a simple consequence of the properties of $l$ and of the condition $\bar v\in C_T$. \end{proof} We can now follow \cite{SolSca1973}. For $p\in H^{\frac{1}{2}}(\Omega)'$, define the linear functional on $H^{\frac{3}{2}}(\Omega)$ by $\langle p,({{\bar a}_\kappa})_i^j (t) \varphi^i,_j\rangle_ {{\frac{1}{2}}}$, where $\varphi\in H^{\frac{3}{2}}(\Omega)$. By the Riesz representation theorem, there is a bounded linear operator $Q (t): (H^{\frac{1}{2}}(\Omega))'\rightarrow H^{\frac{3}{2}}(\Omega)$ such that $$ \forall \varphi\in H^{\frac{3}{2}}(\Omega),\ \langle p,\ ({{\bar a}_\kappa})_i^j (t) \varphi^i,_j\rangle_ {{\frac{1}{2}}}= (Q(t)p,\ \varphi)_{{\frac{3}{2}}}. $$ Letting $\varphi=Q(t)p$ shows that \begin{equation}\label{Qp0} \|Q(t)p\|_{{\frac{3}{2}}} \le C \|p \|_ {H^{\frac{1}{2}}(\Omega)'} \end{equation} for some constant $C>0$. Using Lemma \ref{lemma_lagrange}, we see that \begin{equation*} \forall l\in H^{\frac{1}{2}}(\Omega),\ \langle p,\ l\rangle_ {{\frac{1}{2}}}= (Q(t)p,\ \phi(l))_{{\frac{3}{2}}}, \end{equation*} and thus \begin{equation}\label{Qp} \|p\|_{H^{\frac{1}{2}}(\Omega)'}\le C \|Q(t)p\|_{{\frac{3}{2}}}, \end{equation} which shows that $R(Q(t))$ is closed in $H^{\frac{3}{2}}(\Omega)$. Let ${\mathcal V}_{\bar v}(t)= \{ v\in L^2(\Omega) \ | \ ({{\bar a}_\kappa})^j_i(t) v^i,_j(t)=0\}$. 
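For the reader's convenience, we briefly indicate why ${\mathcal V}_{\bar v}(t)\cap H^{\frac{3}{2}}(\Omega) = R(Q(t))^\perp$, an identity used just below: for $v\in H^{\frac{3}{2}}(\Omega)$ and any $p\in H^{\frac{1}{2}}(\Omega)'$, the definition of $Q(t)$ gives
\begin{equation*}
(Q(t)p,\ v)_{\frac{3}{2}}=\langle p,\ ({{\bar a}_\kappa})_i^j(t)\, v^i,_j\rangle_{\frac{1}{2}}\,,
\end{equation*}
so that $v\perp R(Q(t))$ if and only if $\langle p,({{\bar a}_\kappa})_i^j v^i,_j\rangle_{\frac{1}{2}}=0$ for every $p\in H^{\frac{1}{2}}(\Omega)'$; since $({{\bar a}_\kappa})_i^j v^i,_j\in H^{\frac{1}{2}}(\Omega)$ whenever $v\in H^{\frac{3}{2}}(\Omega)$ and $\bar v\in C_T$, this is equivalent to $({{\bar a}_\kappa})_i^j v^i,_j=0$, i.e. to $v\in{\mathcal V}_{\bar v}(t)$.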
Since ${\mathcal V}_{\bar v}(t)\cap H^{\frac{3}{2}}(\Omega) = R(Q(t))^\perp$, it follows that \begin{equation}\label{hodge} H^{\frac{3}{2}}(\Omega) = R(Q(t)) \oplus_ {H^{\frac{3}{2}}(\Omega)} {\mathcal V}_{\bar v}(t)\cap H^{\frac{3}{2}}(\Omega). \end{equation} We can now introduce our first Lagrange multiplier \begin{lemma} \label{Lagrange} Let ${\mathfrak L}(t)\in H^{\frac{3}{2}} (\Omega)'$ be such that ${\mathfrak L}(t) \varphi=0$ for any $\varphi\in {\mathcal V}_{\bar v}(t)\cap H^{\frac{3}{2}}(\Omega)$. Then there exists a unique $q(t)\in H^{\frac{1}{2}} (\Omega)'$, which is termed the pressure function, satisfying $$\forall \varphi\in H^{\frac{3}{2}} (\Omega),\ \ {\mathfrak L}(t) (\varphi)=\langle q(t),\ ({{\bar a}_\kappa})_i^j \varphi^i,_j\rangle_{{\frac{1}{2}}}.$$ Moreover, there is a $C>0$ (which does not depend on $t\in [0,T]$ and on the choice of $\bar v\in C_T$) such that$$\|q(t)\|_{H^{\frac{1}{2}} (\Omega)'}\le C\ \|{\mathfrak L}(t)\|_{H^{\frac{3}{2}}(\Omega)'}.$$ \end{lemma} \begin{proof} By the decomposition (\ref{hodge}), for $\varphi\in{H^{\frac{3}{2}}(\Omega, {\mathbb R}^3)}$, we let $\varphi=v_1+v_2$, where $v_1 \in {\mathcal V}_{\bar v} (t)\cap H^{\frac{3}{2}}(\Omega)$ and $v_2 \in R(Q(t))$. From our assumption, it follows that $$ {\mathfrak L}(t)(\varphi) = {\mathfrak L}(t)(v_2) = ( \psi(t), v_2)_{H^{\frac{3}{2}}(\Omega)} = ( \psi(t), \varphi)_{H^{\frac{3}{2}}(\Omega)}, $$ \ for a unique \ $\psi(t) \in R(Q(t))$. From the definition of $Q(t)$ we then get the existence of a unique $q(t)\in H^{\frac{1}{2}} (\Omega)'$ such that $$\forall \varphi\in H^{\frac{3}{2}} (\Omega),\ \ {\mathfrak L}(t) (\varphi) =\langle q(t),\ ({{\bar a}_\kappa})_i^j \varphi^i,_j\rangle_{{\frac{1}{2}}}.$$ The estimate stated in the lemma is then a simple consequence of (\ref{Qp}). \end{proof} We also need the case where the pressure function is in $H^{\frac{1}{2}}(\Omega)$. We start, as above, with a simple elliptic result: \begin{lemma}\label{lemma_lagrangebis} For all $l \in H^{\frac{1}{2}}(\Omega)'$, $t\in [0,T]$, there exists a constant $C>0$ and $\phi(l) \in H^{\frac{1}{2}}(\Omega)$ such that $({{\bar a}_\kappa})_i^j (t) \phi^i,_j =l$ in $\Omega$ and \begin{equation}\label{v-pbis} \|\phi(l)\|^2_{{\frac{1}{2}}} \le C\|l\|^2_{H^{\frac{1}{2}}(\Omega)'}. \end{equation} \end{lemma} \begin{proof}Let $\psi(l)$ be the solution of (\ref{laplacien}). Since $\psi$ is linear and continuous from $H^1(\Omega)'$ into $H^1(\Omega)$ and from $L^2(\Omega)$ into $H^2(\Omega)$, by interpolation, we have that $\psi$ is linear and continuous from $H^{\frac{1}{2}}(\Omega)'$ into $H^{\frac{3}{2}}(\Omega)$. We then see that $\phi^i(l)=({{\bar a}_\kappa})_i^j \psi(l),_j$ satisfies the statement of the lemma. \end{proof} For $p\in H^{\frac{1}{2}}(\Omega)$, we define the linear functional on $X(t)$ by $\langle ({{\bar a}_\kappa})_i^j (t) \varphi^i,_j,p\rangle_ {{\frac{1}{2}}}$, where $\varphi\in X(t)=\{\psi\in H^{\frac{1}{2}}(\Omega)|\ ({{\bar a}_\kappa})_i^j \psi^i,_j\in H^{\frac{1}{2}}(\Omega)'\}$. By the Riesz representation theorem, there is a bounded linear operator $Q (t): H^{\frac{1}{2}}(\Omega)\rightarrow X(t)$ such that $$ \forall \varphi\in X(t),\ \langle ({{\bar a}_\kappa})_i^j (t) \varphi^i,_j,p\rangle_ {{\frac{1}{2}}}= (Q(t)p,\ \varphi)_{X(t)}. $$ Letting $\varphi=Q(t)p$ shows that \begin{equation}\label{Qp0bis} \|Q(t)p\|_{X(t)} \le C \|p \|_ {H^{\frac{1}{2}}(\Omega)} \end{equation} for some constant $C>0$. 
Using Lemma \ref{lemma_lagrangebis}, we see that \begin{equation*} \forall l\in H^{\frac{1}{2}}(\Omega)',\ \langle l,\ p\rangle_ {{\frac{1}{2}}}= (Q(t)p,\ \phi(l))_{X(t)}, \end{equation*} and thus \begin{equation}\label{Qpbis} \|p\|_{H^{\frac{1}{2}}(\Omega)}\le C \|Q(t)p\|_{X_t}, \end{equation} which shows that $R(Q(t))$ is closed in $X(t)$. Since ${\mathcal V}_{\bar v}(t)\cap X(t) = R(Q(t))^\perp$, it follows that \begin{equation}\label{hodgebis} X(t) = R(Q(t)) \oplus_ {X(t)} {\mathcal V}_{\bar v}(t)\cap X(t). \end{equation} Our second Lagrange multiplier lemma can now be stated. \begin{lemma} \label{Lagrangebis} Let ${\mathfrak L}(t)\in X(t)'$ be such that ${\mathfrak L}(t) \varphi=0$ for any $\varphi\in {\mathcal V}_{\bar v}(t)\cap H^{\frac{1}{2}}(\Omega)$. Then there exists a unique $q(t)\in H^{\frac{1}{2}} (\Omega)$, which is termed the pressure function, satisfying $$\forall \varphi\in X(t),\ \ {\mathfrak L}(t) (\varphi)=\langle ({{\bar a}_\kappa})_i^j \varphi^i,_j, q(t)\rangle_{{\frac{1}{2}}}.$$ Moreover, there is a $C>0$ (which does not depend on $t\in [0,T]$ and on the choice of $\bar v\in C_T$) such that$$\|q(t)\|_{H^{\frac{1}{2}} (\Omega)}\le C\ \|{\mathfrak L}(t)\|_{X(t)'}.$$ \end{lemma} \begin{proof} By the decomposition (\ref{hodgebis}), for $\varphi\in X(t)$, we let $\varphi=v_1+v_2$, where $v_1 \in {\mathcal V}_{\bar v} (t)\cap H^{\frac{1}{2}}(\Omega)$ and $v_2 \in R(Q(t))$. From our assumption, it follows that $$ {\mathfrak L}(t)(\varphi) = {\mathfrak L}(t)(v_2) = ( \psi(t), v_2)_{X(t)} = ( \psi(t), \varphi)_{X(t)}, $$ \ for a unique \ $\psi(t) \in R(Q(t))$. From the definition of $Q(t)$ we then get the existence of a unique $q(t)\in H^{\frac{1}{2}} (\Omega)$ such that $$\forall \varphi\in X(t),\ \ {\mathfrak L}(t) (\varphi) =\langle ({{\bar a}_\kappa})_i^j \varphi^i,_j,\ q(t)\rangle_{{\frac{1}{2}}}.$$ The estimate stated in the lemma is then a simple consequence of (\ref{Qpbis}). \end{proof} \section{Existence of a solution to the linearized smoothed $\kappa$-problem (\ref{smoothlinear})} \label{6} In this section, we prove the existence of a solution $w$ to the linear problem (\ref{smoothlinear}), constructed as the limit $\epsilon\rightarrow 0$. The analysis requires establishing the regularity of the weak solution. Note that the extra regularity on $u_0$ is needed in order to ensure the regularity property for $w$, $q$, and their time derivatives as stated in the next theorem, without having to consider the variational limits of the time differentiated penalized problems. \begin{theorem} \label{uniqueweak} Suppose that $u_0 \in H^{13.5}(\Omega)$ and $\Omega$ is of class $C^\infty$. Then, there exists a unique weak solution $w$ to the linear problem (\ref{smoothlinear}), which is moreover in $L^2 (0,T;H^{13.5}(\Omega))$. Furthermore, \begin{gather*} \partial_t^i w\in L^2 (0,T;H^{13.5-3i}(\Omega))\cap L^\infty (0,T;H^{12.5-3i}(\Omega)), \ \ \ i=1,2,3,4\,,\\ \partial_t^i q\in L^2 (0,T;H^{11.5-3i}(\Omega))\cap L^\infty (0,T;H^{10.5-3i}(\Omega)), \ \ \ i=0,1,2,3\,. \end{gather*} \end{theorem} \begin{proof} \noindent{\bf Step 1. The limit as $\epsilon \rightarrow 0$.} Let $\epsilon=\frac{1}{m}$; we first pass to the weak limit as $m\rightarrow \infty$. 
The inequality (\ref{g1}) provides the following bound, independent of $\epsilon$: $$\int_0^T \frac{1}{\epsilon}\|({{\bar a}_\kappa})_i^j {w_{\epsilon}}^i,_j\|^2_{0}+|{w_{\epsilon}}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})|^2_1+\|{w_{\epsilon}}\|^2_0\ dt\le N(u_0)$$ which provides a subsequence $\{w_{\frac{1}{m_l}}\}$ such that \begin{subequations} \label{weakconvergence} \begin{align} w_{\frac{1}{m_l}} \rightharpoonup w \ \ \text{ in } \ \ L^2(0,T; L^2(\Omega))\ ,\\ ({{\bar a}_\kappa})_i^j w_{\frac{1}{m_l}}^i,_j \rightharpoonup ({{\bar a}_\kappa})_i^j w^i,_j \ \ \text{ in } \ \ L^2(0,T; L^2(\Omega))\ ,\\ w_{\frac{1}{m_l}}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) \rightharpoonup w\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) \ \ \text{ in } \ \ L^2(0,T; H^1(\Gamma))\ . \end{align} \end{subequations} The justification for $w\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})$ being the third weak limit in (\ref{weakconvergence}) comes from the identity $({{\bar a}_\kappa})_i^j {w_{\epsilon}}^i,_j=\operatorname{div}({w_{\epsilon}}\circ{{{\bar{\eta}^{\kappa}}}}^{-1})({{{\bar{\eta}^{\kappa}}}})$ and the fact that ${\bar{n}^{\kappa}}$ is the normal to ${{{\bar{\eta}^{\kappa}}}}(\Omega)$. Moreover, since (\ref{g1}) also shows that $\|({{\bar a}_\kappa})_i^j w_{\frac{1}{m}}^i,_j\|_{L^2(0,T;L^2(\Omega))}\rightarrow 0$ as $m\rightarrow\infty$, we then have $\|(\bar a_\kappa)_i^j w^i,_j\|_{L^2(0,T;L^2(\Omega))}=0$, {\it i.e.} \begin{equation} \label{divfree} (\bar a_\kappa)_i^j w^i,_j=0\ \text{in}\ L^2(0,T;L^2(\Omega)). \end{equation} Now, let us denote $u=w\circ{{{\bar{\eta}^{\kappa}}}}^{-1}$, so that thanks to (\ref{divfree}) and (\ref{g5}) we have \begin{subequations} \label{divcurl} \begin{align} \operatorname{div}u&=0\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(\Omega)\ ,\\ \operatorname{curl}u({{{\bar{\eta}^{\kappa}}}})&=\operatorname{curl} u_0+\int_0^t B(\nabla{\bar{u}^{\kappa}},\nabla u)\ \text{in}\ H^{-1}(\Omega). \end{align} \end{subequations} By proceeding as in Step 3 of Section \ref{4}, the trace regularity $(u\cdot{\bar{n}^{\kappa}})({{{\bar{\eta}^{\kappa}}}})\in L^2(0,T;H^1(\Gamma))$ and the system (\ref{divcurl}) then yield \begin{equation*} \|w\|_{L^2(0,T;H^{\frac{3}{2}}(\Omega))}\le N(u_0), \end{equation*} where $N(u_0)$ is defined in (\ref{Nu0}). \noindent{\bf Step 2. The equation for $w$ and the pressure.} Now, for any $y\in L^2(0,T;H^{\frac{3}{2}}(\Omega))$ and $l= (\bar a_\kappa)_i^j y^i,_j$, we see that for a solution $\varphi$ almost everywhere on $(0,T)$ of the elliptic problem \begin{align*} ({{\bar a}_\kappa})_i^j [\bar J_\kappa^{-1}({{\bar a}_\kappa})_i^k \varphi,_k],_j&=l\ \text{in}\ \Omega\\ \varphi&=0\ \text{on}\ \Gamma, \end{align*} if we let $e^i=\bar J_\kappa^{-1}({{\bar a}_\kappa})_i^k \varphi,_k$, and set $v=y-e$, we have that $e$ and $v$ are both in $L^2(0,T;H^{\frac{3}{2}}(\Omega))$, with \begin{align*} \int_0^T [ \|e\|^2_{\frac{3}{2}}+\|v\|^2_{\frac{3}{2}}] \le & C \int_0^T \|y\|^2_{\frac{3}{2}},\\ ({{\bar a}_\kappa})_i^j v^i,_j =& 0. \end{align*} Since $(\bar a_\kappa)_i^j w^i,_j=0\ \text{in}\ L^2(0,T;L^2(\Omega))$, we infer that $(\bar a_\kappa)_i^j w_t^i,_j=-[(\bar a_\kappa)_i^j]_t w^i,_j\in L^2(0,T;H^{\frac{1}{2}}(\Omega))$, and that \begin{equation*} \langle\bar J_\kappa\ w_t,e\rangle_{\frac{3}{2}}=([(\bar a_\kappa)_i^j]_t w^i,_j,\varphi)_0. 
\end{equation*}
But $v$ also satisfies the variational equation
\begin{align*}
\int_0^T \langle\bar J_\kappa\ {w_{\epsilon}}_{t},v\rangle_{{\frac{3}{2}}} +\kappa \int_0^T& [{{w_{\epsilon}}}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),v\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_1 = \sigma \int_0^T [L_{\bar g}\bar{\eta}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) ,v\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0 \ ,
\end{align*}
leading to
\begin{align*}
\lim_{\epsilon\rightarrow 0} \int_0^T \langle\bar J_\kappa\ {w_{\epsilon}}_{t},y\rangle_{{\frac{3}{2}}}=& \int_0^T ([(\bar a_\kappa)_i^j]_t w^i,_j,\varphi)_0 + \sigma \int_0^T [L_{\bar g}\bar{\eta}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) ,v\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0\\& -\kappa \int_0^T [ {w}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),v\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_1.
\end{align*}
We then see that, independently of $\epsilon$,
\begin{equation}
\label{wt}
\int_0^T \|{w_{\epsilon}}_t\|^2_{H^{\frac{3}{2}}(\Omega)'}\le N(u_0).
\end{equation}
By standard arguments, we infer that ${w_{\epsilon}}_t\rightharpoonup w_t$ in $L^2(0,T;H^{\frac{3}{2}}(\Omega)')$. This ensures that $w\in \mathcal{C}^0([0,T];L^2(\Omega))$, and the condition ${w_{\epsilon}}(0)=u_0$ provides $w(0)=u_0$. Furthermore, we also have for any $\phi\in L^2(0,T;H^{\frac{3}{2}}(\Omega))$ such that $({{\bar a}_\kappa})_i^j \phi^i,_j=0$ in $(0,T)\times\Omega$, the variational equation
\begin{align*}
\int_0^T \langle\bar J_\kappa\ w_{t},\phi\rangle_{{\frac{3}{2}}} &+\kappa \int_0^T [{w}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_1 = \sigma \int_0^T [L_{\bar g}\bar{\eta}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) ,\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0 \ .
\end{align*}
Next, since $w_t\in L^2(0,T;H^{\frac{3}{2}}(\Omega)')$, the Lagrange multiplier lemma \ref{Lagrange} shows that there exists $q\in L^2(0,T;H^{\frac{1}{2}}(\Omega)')$ such that for any $\phi\in L^2(0,T;H^{\frac{3}{2}}(\Omega))$,
\begin{align}
\int_0^T \langle\bar J_\kappa\ w_{t},\phi\rangle_{{\frac{3}{2}}} &+\kappa \int_0^T [{w}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_1 \nonumber \\
& -\int_0^T \langle q,({{\bar a}_\kappa})_i^j \phi^i,_j\rangle_{\frac{1}{2}}= \sigma \int_0^T [L_{\bar g} \bar{\eta}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) ,\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0 \ . \label{wvar}
\end{align}
Now, if we have another solution $\tilde w\in L^2(0,T;H^{\frac{3}{2}}(\Omega))$ such that $\tilde w(0)=u_0$ and $\tilde w_t\in L^2(0,T;H^{\frac{3}{2}}(\Omega)')$, we then see, by using $w-\tilde w$ as a test function in the difference between (\ref{wvar}) and its counterpart with $\tilde w$, that $w-\tilde w=0$, ensuring uniqueness of the weak solution of (\ref{smoothlinear}).
\noindent {\bf Step 3. Regularity of $w$.} We can now study the regularity of $w$ via difference quotient techniques. We will denote $\mathbb R^3_+=\{x\in\mathbb R^3|\ x_3> 0\}$, $S_0=B(0,1)\cap \{x\in\mathbb R^3|\ x_3= 0\}$ and $B_+(0,r)=B(0,r)\cap\mathbb R^3_+$. We denote by $\theta$ a $C^\infty$ diffeomorphism from $B(0,1)$ into a neighborhood $V$ of a point $x_0\in\Gamma$ such that $\theta(B(0,1)\cap\mathbb R^3_+)=V\cap\Omega$, with $\det\nabla\theta=1$.
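Before turning to the estimates, we indicate, informally, the role of the test functions used below: for a horizontal direction $h=|h|e_\alpha$, $\alpha=1,2$, the function $[D_{-h}[\psi D_h (w\circ\theta)]]\circ\theta^{-1}$ (with $\psi$ the cut-off function introduced just below) is, up to a sign depending on the convention chosen for $D_{-h}$, the difference-quotient analogue of
\begin{equation*}
\big(\partial_\alpha\big[\psi\,\partial_\alpha(w\circ\theta)\big]\big)\circ\theta^{-1}\,,
\end{equation*}
so that the computation below is a difference-quotient version (rigorous at this level of regularity) of an energy estimate for one horizontal derivative of $w$ in the chart $\theta$.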
We consider the smooth cut-off function $\psi(x)=e^{\frac{1}{|x|^2-{\frac{1}{2}}}}$ if $x\in B(0,{\frac{1}{2}})$, and $\psi(x)=0$ elsewhere, and with the use of the test function $[D_{-h}[\psi D_h (w\circ\theta)]]\circ\theta^{-1}\in L^2(0,T;H^{\frac{3}{2}}(\Omega))$ in (\ref{wvar}), with $h=|h| e_\alpha (\alpha=1,2)$, we obtain: \begin{align*} I_1+\kappa I_2+I_3= \sigma \int_0^T [L_{\bar g}\bar{\eta}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) ,[D_{-h}[\psi D_h (w\circ\theta)]]\circ\theta^{-1}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_{0} \ , \end{align*} with \begin{align*} I_1&=\int_0^T \langle\bar J_\kappa\ w_{t},[D_{-h}[\psi D_h (w\circ\theta)]]\circ\theta^{-1}\rangle_{{\frac{3}{2}}},\\ I_2&=\int_0^T [\partial ({w}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})),\partial ([D_{-h}[\psi D_h (w\circ\theta)]]\circ\theta^{-1}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}))]_0,\\ I_3&=-\int_0^T \langle q,({{\bar a}_\kappa})_i^j[D_{-h}[\psi D_h (w\circ\theta),_j^i]]\circ\theta^{-1}\rangle_{\frac{1}{2}}. \end{align*} For $I_1$, we simply have \begin{align} I_1=&\|\sqrt{\psi}\ w\circ\theta(t)\|^2_{L^2(B_+(0,1))}-\|\sqrt{\psi}\ u_0\circ\theta\|^2_{L^2(B_+(0,1))}\nonumber\\ &\ + \int_0^T \langle D_h[\bar J_\kappa(\theta)]\ w_{t}\circ\theta,\psi D_h (w\circ\theta)\rangle_{{\frac{3}{2}}}\nonumber\\ \ge & \|\sqrt{\psi}\ w\circ\theta(t)\|^2_{L^2(B_+(0,1))} -N(u_0)-\int_0^T \|w_t\|_{H^{\frac{3}{2}}(\Omega)'} \|D_h[\bar J_\kappa(\theta)]\psi D_h (w\circ\theta)\|_{{\frac{3}{2}}}\nonumber\\ \ge & \|\sqrt{\psi}\ w\circ\theta(t)\|^2_{L^2(B_+(0,1))} -C_\delta N(u_0)-\delta \int_0^T \|\sqrt{\psi} D_h (w\circ\theta)\|^2_{{\frac{3}{2}}}, \label{I1} \end{align} where we have used (\ref{wt}) for the last inequality, and where the choice of $\delta>0$ will be made precise later. For $I_2$, we have, if we define in $B_+(0,1)$, $W=w\circ\theta$ and ${N^\kappa}={\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})(\theta)$, \begin{align} I_2 &=\int_0^T \int_{S_0}\frac{G_{\alpha\beta}}{\sqrt{ a_0}} [W\cdot{N^\kappa}],_\alpha [D_{-h}[\psi D_h W]\cdot{N^\kappa}],_\beta\nonumber\\ &=\int_0^T \int_{S_0}\frac{G_{\alpha\beta}}{\sqrt{ a_0}} [D_h W\cdot{N^\kappa}],_\alpha [\psi D_h W\cdot{N^\kappa}],_\beta\nonumber\\ &\ \ + \int_0^T \int_{S_0} [D_h[ \frac{G_{\alpha\beta}}{\sqrt{ a_0}} [W\cdot{N^\kappa}],_\alpha]- \frac{G_{\alpha\beta}}{\sqrt{ a_0}} [D_h W\cdot{N^\kappa}],_\alpha ] [\psi D_h W\cdot{N^\kappa}],_\beta\nonumber\\ &\ \ + \int_0^T \int_{S_0}[\frac{G_{\alpha\beta}}{\sqrt{ a_0}} [ W\cdot{N^\kappa}],_\alpha](\cdot+h) [\psi D_h W\cdot D_h{N^\kappa}],_\beta, \label{I21} \end{align} where $G_{\alpha\beta}=\theta,_\alpha\cdot\theta,_\beta$ and $a_0=\operatorname{det} G$. In this section, we will denote by $\|\cdot\|_{s,\Theta}$ and $|\cdot|_{s,\partial\Theta}$ the standard norms of $H^s(\Theta)$ and $H^s(\partial\Theta)$. 
For the first term appearing in the right-hand side of the second inequality, we have \begin{align*} \frac{g_{0\alpha\beta}}{\sqrt{ a_0}} [D_h W\cdot{N^\kappa}],_\alpha [\psi D_h W\cdot{N^\kappa}],_\beta &=\frac{g_{0\alpha\beta}}{\sqrt{ a_0}} \psi [D_h W\cdot{N^\kappa}],_\alpha [D_h W\cdot{N^\kappa}],_\beta\\ &\ \ +\frac{g_{0\alpha\beta}}{\sqrt{ a_0}} \sqrt{\psi}[D_h W\cdot{N^\kappa}],_\alpha D_h W\cdot{N^\kappa} \frac{\psi,_\beta}{\sqrt{\psi}}, \end{align*} and thus, since $\psi$, $\sqrt{\psi}$ and $\displaystyle\frac{\nabla\psi}{\sqrt{\psi}}$ are chosen smooth, we infer that \begin{align*} \int_0^T\int_{S_0} \frac{g_{0\alpha\beta}}{\sqrt{ a}} [D_h W\cdot{N^\kappa}],_\alpha [\psi D_h W\cdot{N^\kappa}],_\beta &\ge C \int_0^T \| \sqrt{\psi} D_h W\cdot{N^\kappa} \|^2_{1,S_0}\\ &\ \ - N(u_0) . \end{align*} The other terms in (\ref{I21}) are easily estimated leading to the estimate: \begin{equation} \label{I2} I_2\ge C \int_0^T \| \sqrt{\psi} D_h W\cdot{N^\kappa} \|^2_{1,S_0} - N(u_0). \end{equation} Concerning $I_3$, we have \begin{align*} I_3&=-\int_0^T \langle q,(\bar b_\kappa )_i^j[D_{-h}[\psi D_h (W)]^i,_j\circ\theta^{-1}]\rangle_{\frac{1}{2}}, \end{align*} with $\bar b_\kappa=[\nabla(\bar\eta_\kappa\circ\theta)]^{-1}$. Now since $(\bar b_\kappa )_i^j W^i,_j=0$, we obtain \begin{align*} (\bar b_\kappa )_i^j D_{-h} D_h W^i,_j =&-D_{-h}D_h (\bar b_\kappa )_i^j W^i,_j - D_h [(\bar b_\kappa )_i^j] (\cdot-h) D_{-h} W^i,_j\\ &- D_{-h} [(\bar b_\kappa )_i^j(\cdot+h)] D_{h} W^i,_j, \end{align*} and thus \begin{align} |I_3|&\le C \int_0^T \|q\|_{H^{\frac{1}{2}}(\Omega)'} [\ \|\psi D_h W^i,_j\|_{{\frac{1}{2}},B_+(0,1)}+ \|\frac{D_h \psi}{\sqrt{\psi}}\ \sqrt{\psi} D_h W^i,_j\|_{{\frac{1}{2}}, B_+(0,1)}\nonumber\\ &\hskip 4cm +\|\sqrt{\psi}D_h W\|_{{\frac{3}{2}},B_+(0,1)}]\nonumber\\ &\le C_\delta N(u_0)+\delta \int_0^T |\sqrt{\psi} D_h W^i,_j|^2_{{\frac{1}{2}},B_+(0,1)}, \label{I3} \end{align} where $\delta>0$ is arbitrary. Now, let $\Theta$ be a smooth domain included in $B_+(0,1)$ and containing $B_+(0,{\frac{1}{2}})$. The inequalities (\ref{I1}), (\ref{I2}) and (\ref{I3}) yield \begin{equation} \label{traceh1} \int_0^T |\sqrt{\psi} D_h W \cdot N^\kappa|^2_{1,\partial\Theta}\le C_\kappa\ N(u_0) + \delta \int_0^T \|\sqrt{\psi} D_h W\|^2_{{\frac{3}{2}},\Theta}. \end{equation} We now define in $B_+(0,1)$ \begin{align*} \operatorname{div}_{{{{\bar{\eta}^{\kappa}}}}\circ\theta}W&=\operatorname{div}(W\circ\theta^{-1}\circ{{{\bar{\eta}^{\kappa}}}}^{-1})({{{\bar{\eta}^{\kappa}}}}\circ\theta)=\operatorname{div}(u)({{{\bar{\eta}^{\kappa}}}}\circ\theta),\\ \operatorname{curl}_{{{{\bar{\eta}^{\kappa}}}}\circ\theta}W&=\operatorname{curl}(u)({{{\bar{\eta}^{\kappa}}}}\circ\theta). 
\end{align*}
Thus, (\ref{divcurl}) translates in $B_+(0,1)$ to
\begin{align*}
\operatorname{div}_{{{{\bar{\eta}^{\kappa}}}}\circ\theta}W&=0,\\
[\operatorname{curl}_{{{{\bar{\eta}^{\kappa}}}}\circ\theta}W](t)&=[\operatorname{curl} u_0]\circ\theta+\int_0^t B(\nabla{\bar{u}^{\kappa}},\nabla u)({{{\bar{\eta}^{\kappa}}}}\circ\theta),\nonumber\\
&=[\operatorname{curl} u_0]\circ\theta+\int_0^t B(\nabla{\bar{u}^{\kappa}},\nabla W({{{\bar{\eta}^{\kappa}}}}\circ\theta)^{-1} \nabla ({{{\bar{\eta}^{\kappa}}}}\circ\theta)^{-1})({{{\bar{\eta}^{\kappa}}}}\circ\theta),
\end{align*}
and thus
\begin{subequations}
\label{W1}
\begin{align}
\operatorname{div}_{{{{\bar{\eta}^{\kappa}}}}\circ\theta}(\sqrt{\psi} D_h W)&=-\sqrt{\psi} D_h (\bar b_\kappa)_i^j\ W^i,_j(\cdot+h) + {\frac{1}{2}} \frac{\psi,_j}{\sqrt{\psi}}\ (\bar b_\kappa)_i^j D_h W^i,\\
[\operatorname{curl}_{{{{\bar{\eta}^{\kappa}}}}\circ\theta}(\sqrt{\psi} D_h W)](t)&=R(W)+\int_0^t B(\nabla{\bar{u}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}\circ\theta), \nabla [\sqrt{\psi} D_h W] [\nabla({{{\bar{\eta}^{\kappa}}}}\circ\theta)]^{-1}),
\end{align}
\end{subequations}
with
\begin{align*}
\int_0^T \|R(W)\|^2_{{\frac{1}{2}},\Theta}\le C N(u_0).
\end{align*}
With the trace estimate (\ref{traceh1}) and the control of $W$ in $L^2(0,T;H^{\frac{3}{2}}(\Theta))$, we can then infer as we did in Step 3 of Section \ref{4} that
\begin{equation*}
\int_0^T \|\sqrt{\psi} D_h W\|^2_{{\frac{3}{2}},\Theta}\le C_\kappa\ N(u_0) + C_\kappa \delta \int_0^T \|\sqrt{\psi} D_h W\|^2_{{\frac{3}{2}},\Theta},
\end{equation*}
and thus with a choice of $\delta$ small enough,
\begin{equation*}
\int_0^T \|\sqrt{\psi} D_h W\|^2_{{\frac{3}{2}},\Theta}\le C_\kappa\ N(u_0) ,
\end{equation*}
yielding
\begin{equation*}
\int_0^T |\sqrt{\psi} D_h W|^2_{1,\partial\Theta}\le C_\kappa\ N(u_0).
\end{equation*}
Since this estimate is independent of $h$, we get the trace estimate
\begin{equation*}
\int_0^T |\sqrt{\psi} W|^2_{2,\partial\Theta}\le C_\kappa\ N(u_0),
\end{equation*}
and thus with this trace estimate and the div and curl system (\ref{W1}), still with arguments similar to those in Step 3 of Section \ref{4},
\begin{equation*}
\int_0^T \|\sqrt{\psi} W\|^2_{{\frac{5}{2}},\Theta}\le C_\kappa\ N(u_0).
\end{equation*}
By patching together all the estimates obtained on each chart defining $\Omega$, we thus deduce that
\begin{equation}
\label{wcd}
\int_0^T \|w\|^2_{{\frac{5}{2}}}\le C_\kappa\ N(u_0).
\end{equation}
Now, for the pressure, we note that for any $y\in X^{\frac{1}{2}}(t)=\{\phi\in H^{\frac{1}{2}}(\Omega)|\ ({{\bar a}_\kappa})_i^k(t) \phi^i,_k\in H^{\frac{1}{2}}(\Omega)'\}$, and for $\varphi$ a solution of the elliptic problem
\begin{align*}
({{\bar a}_\kappa})_i^j[ ({{\bar a}_\kappa})_i^k \varphi,_k],_j&=({{\bar a}_\kappa})_i^k(t) y^i,_k\ \text{in}\ H^{\frac{1}{2}}(\Omega)'\\
\varphi&=0\ \text{on}\ \Gamma,
\end{align*}
we have by interpolation that $\varphi\in H^{\frac{3}{2}}(\Omega)$. If we once again let $e^i=({{\bar a}_\kappa})_i^k \varphi,_k$, and set $v:=y-e$, we have that $e\in H^{\frac{1}{2}}(\Omega)$, $v\in V(t)=\{\phi\in H^{\frac{1}{2}}(\Omega)|\ ({{\bar a}_\kappa})_i^k(t) \phi^i,_k=0\}$, with $\|e\|_{{\frac{1}{2}}}+\|v\|_{{\frac{1}{2}}}\le C \|y\|_{X^{\frac{1}{2}}(t)}$. Now, by proceeding in the same fashion as in Step 2 above, we see that thanks to our decomposition and the regularity (\ref{wcd}), $w_t\in L^2(0,T;X^{\frac{1}{2}}(t)')$ with
\begin{equation*}
\int_0^T \|w_t\|^2_{X^{\frac{1}{2}}(t)'}\le N(u_0).
\end{equation*} By the Lagrange multiplier Lemma \ref{Lagrangebis}, we then infer \begin{equation} \label{qud} \int_0^T \|q\|^2_{{\frac{1}{2}}}\le N(u_0). \end{equation} Next, by using $D_{-h}D_h[\psi D_{-h}D_h w]$ as a test function in (\ref{wvar}), we infer, similarly to how we obtained (\ref{wcd}), that the estimates (\ref{wcd}) and (\ref{qud}) imply that \begin{equation} \label{wsd} \int_0^T \|w\|^2_{{\frac{7}{2}}}\le N(u_0). \end{equation} We now explain the additional estimates employed for this higher-order differencing. We need the fact that independently of any horizontal vector $h$, there exists a constant $C>0$ such that for $\text{Supp}\psi+h\subset\Theta$, we have that \begin{align} \forall f\in H^{\frac{3}{2}}(\Theta),\ \|\sqrt{\psi}D_h f\|_{{\frac{1}{2}},\Theta}\le& C\ \|f\|_{{\frac{3}{2}},\Theta},\nonumber\\ \forall f\in H^{\frac{1}{2}}(\Theta),\ \|\sqrt{\psi} D_h f\|_{H^{\frac{1}{2}}(\Theta)'}\le& C\ \| f\|_{{\frac{1}{2}},\Theta}. \label{int} \end{align} The first inequality easily follows by interpolation. For the second one, if $f\in L^2(\Theta)$ we notice that for any $\phi\in H^1(\Theta)$, since the difference quotients are in an horizontal direction, \begin{align*} \int_{\Theta} \sqrt{\psi} D_h f\ \phi &= \int_{\Theta} \sqrt{\psi} f D_{-h}\phi+ \int_{\Theta} D_{-h}\sqrt{\psi} f \phi(\cdot-h)\\ &\le C \|f\|_{0,{\Theta}}\|\phi\|_{1,{\Theta}}, \end{align*} which shows that there exists $C>0$ such that \begin{equation*} \forall f\in L^2(\Theta),\ \|\sqrt{\psi} D_h f\|_{H^1(\Theta)'}\le C\ \| f\|_{0,{\Theta}}. \end{equation*} By interpolating with the obvious inequality (for some $C>0$) \begin{equation*} \forall f\in H^1(\Theta),\ \|\sqrt{\psi} D_h f\|_{0,\Theta}\le C\ \| f\|_{1,\Theta}, \end{equation*} we then get (\ref{int}). Now, the pressure solves the elliptic equation \begin{subequations} \label{press} \begin{align} \Delta p&=-({\bar{u}^{\kappa}})^i,_j u^j,_i\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(\Omega),\\ p&=-[\sigma \Delta_{\bar g}\bar{\eta}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})\ {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})+\kappa \Delta_{\bar 0} (w\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}))\ {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]({{{\bar{\eta}^{\kappa}}}}^{-1}) \end{align} \end{subequations} Using the same change of variables that provides the pressure estimate (\ref{ex2}), and using the elliptic estimates for coefficients with Sobolev class regularity as in \cite{Eb2002}, we find that \begin{equation*} \int_0^t \|q\|^2_{{\frac{3}{2}}}\le N(u_0, \sup_{[0,t]}|\bar w_\kappa|_4, \int_0^t\| w\|_{\frac{7}{2}}^2), \end{equation*} where the right-hand side is defined in (\ref{Nu0}). Therefore with (\ref{wsd}), \begin{equation*} \int_0^t \|q\|^2_{{\frac{3}{2}}}\le N(u_0, \sup_{[0,t]}\|\bar w_\kappa\|_4). \end{equation*} Higher-order regularity results follow successively by appropriate higher-order difference quotients, leading to, for $n\ge 1$, \begin{equation} \label{wqnd} \int_0^t \|w\|^2_{n+{\frac{3}{2}}}+\int_0^t \|q\|^2_{n-{\frac{1}{2}}}\le C N(u_0,\sup_{[0,t]}|\bar w_\kappa|_{n+2}). \end{equation} Now, since $w_t=-({\bar{a}^\kappa})_i^j q,_j$ in $\Omega$, we then infer that for $n\ge 2$, \begin{equation} \label{wtqnd} \int_0^t \|w_t\|^2_{n-{\frac{3}{2}}}\le N(u_0,\sup_{[0,t]}|\bar w_\kappa|_{n+2}), \end{equation} and thus in $[0,t]$, \begin{equation*} \|w(t)\|_{13.5}\le \|u_0\|_{13.5}+\sqrt{t}\ N(u_0,\sup_{[0,t]} |\bar w_\kappa|_{17}). 
\end{equation*} By Lemma \ref{smoothvbisk} (for the smoothing operation given in Definition \ref{smoothv} on $C_T$), we have that \begin{equation} \label{w12} \|w(t)\|_{13.5}\le \|u_0\|_{13.5}+\sqrt{t}\ N_0(u_0,C^0_\kappa), \end{equation} where we use $C_\kappa^0$ to denote a fixed (nongeneric) constant which depends on $\kappa$. \section{Existence of a fixed-point solution of the smoothed $\kappa$-problem with surface tension} \label{7} Let $A: (\bar w \in B_0) \mapsto w$, with $w$ a solution of (\ref{smoothlinear}). By the relation (\ref{w12}), we see that if we take $T_\kappa\in (0,T)$ such that $$\sqrt{T_\kappa}N_0(u_0,C^0_\kappa)\le 1,$$ then \begin{equation} \label{stable} A(C_{T_\kappa})\subset C_{T_\kappa}. \end{equation} We now prove that $A$ is sequentially weakly continuous on $C_{T_\kappa}$. To this end, let $(\bar w^n)_{n=0}^\infty$ be a weakly convergent sequence (in $L^2(0,T_\kappa;H^{13.5}(\Omega))$) toward a weak limit $\bar w$. Necessarily, $\bar w\in C_{T_\kappa}$. By the usual compactness theorems, we have the strongly convergent sequences \begin{align*} \bar\eta^n&\rightarrow \bar\eta\ \ \text{in}\ L^2(0,T_\kappa;H^{12.5}(\Omega)),\\ (\bar\eta^n)_\kappa&\rightarrow \bar\eta_\kappa\ \ \text{in}\ L^2(0,T_\kappa;H^{12.5}(\Omega)). \end{align*} Now, if we let $w^n=A(\bar w^n)$, we obtain from the stability of $C_{T_\kappa}$ under $A$ and (\ref{wtqnd}) the following bounds: \begin{align*} \int_0^T \|w^n_t\|^2_{10.5}&\le C N(u_0),\\ \sup_{[0,T]}\| w^n\|_{13.5}&\le 2\|u_0\|_{13.5}+1. \end{align*} We thus have the existence of a weakly convergent subsequence $(w^{\sigma(n)})$ in the space $L^2(0,T_\kappa;H^{13.5}(\Omega))$, to a limit $l\in C_{T_\kappa}$. By compactness, from our bound on $w^n_t$, \begin{align*} w^{\sigma(n)}&\rightarrow l\ \ \text{in}\ L^2(0,{T}_\kappa;H^{12.5}(\Omega)). \end{align*} From the strong convergence of $(\bar\eta^n)_\kappa$, we then infer from the relations $(\bar a^n_\kappa)_i^j (w^n)^i,_j=0$ in $\Omega$ that \begin{equation} \label{ldiv} (\bar a_\kappa)_i^j l^i,_j=0\ \text{ in }\ \Omega. \end{equation} Moreover, we see that \begin{equation*} p^{\sigma(n)}\rightharpoonup p\ \text{in}\ L^2(0,T_\kappa;H^{11.5}(\Omega)), \end{equation*} with $p$ the solution of \begin{align*} \triangle p&=-({\bar{u}^{\kappa}})^i,_j l^j,_i\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(\Omega),\\ p&=-[\sigma \Delta_{\bar g}\bar{\eta}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})+\kappa \Delta_0(l\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}))]({{{\bar{\eta}^{\kappa}}}}^{-1}). \end{align*} From the relations (\ref{wvar}) for each $n$, we see from the previous weak and strong convergence that \begin{align} \int_0^T \langle \bar J_\kappa l_{t},\phi\rangle_{{\frac{3}{2}}} &+\kappa \int_0^T [{l}\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}),\phi\cdot{\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_1 \nonumber\\ &-\int_0^T \langle q,({{\bar a}_\kappa})_i^j \phi^i,_j\rangle_{\frac{1}{2}}= \sigma \int_0^T [L_{\bar g}\bar{\eta} \cdot {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}}) ,\phi\cdot {\bar{n}^{\kappa}}({{{\bar{\eta}^{\kappa}}}})]_0 \ , \label{lvar} \end{align} which, together with (\ref{ldiv}) and the fact that $l\in C_{T_\kappa}$, implies that $l=A(\bar w)$.
By uniqueness of the limit, we then infer that \begin{equation*} w^n\rightharpoonup w\ \text{in}\ L^2(0,T_\kappa;H^{13.5}(\Omega)). \end{equation*} By the Tychonoff fixed-point theorem, we then conclude the existence of a fixed-point $\bar w=w$ in the closed convex set $C_{T_\kappa}$ of the separable Banach space $L^2(0,T_\kappa;H^{13.5}(\Omega))$. This fixed-point satisfies the smoothed system (\ref{smooth}), if we denote $\displaystyle \eta=\text{Id}+\int_0^\cdot w$ and $u=w\circ{\eta^\kappa}^{-1}$. It is also readily seen that $w$, $q$ and their time derivatives have the regularity stated in Theorem \ref{uniqueweak}. \end{proof} \section{Estimates for the divergence and curl} \label{sec_divcurl} \begin{definition}[Energy function for the smoothed $\kappa$-problem] \label{kenergy} We set $$ E^{2D}_\kappa(t) = \sum_{k=0}^3 \|\partial_t^k \eta(t)\|_{4.5-k}^2 + \|v_{ttt}(t)\|_0^2 \ \text{ and } \ E^{3D}_\kappa(t) = \sum_{k=0}^4 \|\partial_t^k \eta(t)\|_{5.5-k}^2 + \|v_{tttt}(t)\|_0^2 \,. \ \ $$ We use $E_\kappa(t)$ to denote the energy function when the dimension is clear. \end{definition} We use these energy functions to construct solutions for the Euler equations. The increase in the derivative count from the 2D case to the 3D case is necessitated by the Sobolev embedding theorem. We will show that solutions of the $\kappa$-problem (\ref{smooth}) have bounded energy $E_\kappa(t)$ for $t\in [0,T]$ when $T$ is taken sufficiently small, and that the bound is, in fact, independent of $\kappa$; as such, we will prove that the limit as $\kappa \rightarrow 0$ of the sequence of solutions to the $\kappa$-problem converges to a solution of the Euler equations. Our estimates begin with the following \begin{lemma}[Divergence and curl estimates] \label{lemma1} Let ${\mathfrak n}:=\text{dim}(\Omega)=2$ or $3$, let $L_1=\operatorname{curl}$ and $L_2=\operatorname{div}$, let $\eta_0:=\eta(0)$, and let $$ M_0:= P(\|u_0\|_{2.5 + {\mathfrak n}}, |\Gamma| _{4 + {\mathfrak n}}, \sqrt{\kappa}\|u_0\|_{1.5+3{\mathfrak n}}, \sqrt{\kappa} |\Gamma|_{1+3{\mathfrak n}}) $$ denote a polynomial function of its arguments. Then for $j=1,2$, \begin{align*} & \sup_{t\in[0,T]} \|\sqrt{\kappa} L_j \eta(t)\|^2_{2.5+{\mathfrak n}} + \sum_{k=0}^{{\mathfrak n}+1} \left( \sup_{t\in[0,T]} \|L_j \partial_t^k\eta(t)\|^2_{1.5+{\mathfrak n}-k} + \int_0^T \|\sqrt{\kappa} L_j \partial_t^k v\|^2_{2.5+{\mathfrak n}-k} \right) \\ &\qquad\qquad\qquad\qquad\qquad\qquad \le M_0 + C\,T\, P(\sup_{t\in[0,T]} E_\kappa(t)) \,. \end{align*} \end{lemma} \begin{proof} In Eulerian variables, equation (\ref{smooth.b}) is written as $u^i_t + u^i,_l (u_\kappa)^l + p,_i=0$, where the transport velocity is the horizontally smoothed vector $u_\kappa$. Taking the curl of this equation and using the formula $(\operatorname{curl} u)^i = \varepsilon_{ijk} u^k,_j$ with $\varepsilon_{ijk}$ denoting the permutation symbol, we see that $\varepsilon_{ijk} [ \partial_t u^k,_j + u^k,_{jl}u_\kappa^l + u^k,_lu_\kappa^l,_j] =0$. Thus, defining the bilinear form $B^i(\nabla u, \nabla u_\kappa)= \varepsilon_{ijk} u^k,_l (u_\kappa)^l,_j$, we can write the vorticity equation as $\frac{D}{Dt}\operatorname{curl} u = B(\nabla u, \nabla u_\kappa)$. (When the transport velocity is divergence-free, then $B$ is the familiar vortex-stretching term.)
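For the reader's convenience, we record a one-line verification (a standard computation, included here only as a sketch) that the middle term in the curled equation is indeed the transport term:
$$
\varepsilon_{ijk}\, u^k,_{jl}\, u_\kappa^l \;=\; u_\kappa^l\,\partial_l\big(\varepsilon_{ijk}\, u^k,_j\big) \;=\; (u_\kappa\cdot\nabla)\,(\operatorname{curl} u)^i\,,
$$
so that, up to the sign convention chosen for $B$, the curled equation is a transport equation for $\operatorname{curl} u$ along the smoothed velocity $u_\kappa$ with forcing $B(\nabla u,\nabla u_\kappa)$.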
Composing this equation with $\eta_\kappa$, switching to Lagrangian variables via the chain rule, and integrating this from $0$ to $t$, we have \begin{equation} \label{basic0} \varepsilon_{ijk} v^k,_r {a_\kappa}^r_j = \operatorname{curl} u_0 + \int_0^t B_{a_\kappa}(\tau) d\tau, \ \ \ \ B^i_{a_\kappa} := \varepsilon_{ijk} J_\kappa^{-2} v^k,_r {a_\kappa}^r_l (v_\kappa)^l,_m {a_\kappa}^m_j. \end{equation} This is the time-integrated Lagrangian form of the vorticity equation. We will need to space-differentiate this equation once more for the estimate on $\operatorname{curl}\eta$. Hence, \begin{equation} \label{basic} \varepsilon_{ijk} \nabla v^k,_{r} {a_\kappa}^r_j = \nabla \operatorname{curl} u_0^i +\varepsilon_{ikj} v^k,_r \nabla {a_\kappa}^r_j + \int_0^t \nabla B_{a_\kappa} (\tau) d\tau \,. \end{equation} We begin with the estimates for the case that ${\mathfrak n}=2$; we set $E_\kappa(t)= E_\kappa^{2D}(t)$ and proceed with the estimate for $\operatorname{curl} \eta$. Using that $\nabla v^k,_{r} {a_\kappa}^r_j = \partial_t(\nabla \eta^k,_r {a_\kappa}^r_j) - \nabla \eta^k,_{r} \partial_t{a_\kappa}^r_j$, we see that $$ \varepsilon_{ijk} \partial_t(\nabla \eta^k,_r {a_\kappa}^r_j) = \nabla \operatorname{curl} u_0^i + \varepsilon_{ijk} \nabla \eta^k,_{r} \partial_t{a_\kappa}^r_j +\varepsilon_{ikj} v^k,_r \nabla {a_\kappa}^r_j + \int_0^t \nabla B_{a_\kappa} (\tau) d\tau \,. $$ Integrating once again in time from $0$ to $t$ yields $$ \varepsilon_{ijk} \nabla \eta^k,_r {a_\kappa}^r_j = t\nabla \operatorname{curl} u_0^i + \varepsilon_{ijk} \int_0^t (\nabla \eta^k,_{r} \partial_t{a_\kappa}^r_j +v^k,_r \nabla {a_\kappa}^r_j) + \int_0^t\int_0^{t'} \nabla B_{a_\kappa} \,, $$ where \begin{align} \nabla B_{a_\kappa} &= \varepsilon_{ijk}\left[ J_\kappa^{-2} (\nabla v^k,_r {v_\kappa}^l,_m + v^k,_r \nabla{v_\kappa}^l,_m) {a_\kappa}^r_l{a_\kappa}^m_j \right. \nonumber \\ & \qquad \left. + J_\kappa^{-2} v^k,_r {v_\kappa}^l,_m (\nabla {a_\kappa}^r_l{a_\kappa}^m_j +{a_\kappa}^r_l\nabla {a_\kappa}^m_j) + (\nabla J_\kappa^{-2}) \, v^k,_r {v_\kappa}^l,_m {a_\kappa}^r_l{a_\kappa}^m_j \right] \label{Ba} \end{align} and \begin{equation}\label{derivative_a} \begin{array}{c} \nabla {a_\kappa}^m_j = J_\kappa^{-1} ( {a_\kappa}^s_r{a_\kappa}^m_j - {a_\kappa}^m_r{a_\kappa}^s_j) \nabla {\eta_\kappa}^r,_s\,, \\ \partial_t {a_\kappa}^m_j = J_\kappa^{-1} ( {a_\kappa}^s_r{a_\kappa}^m_j - {a_\kappa}^m_r{a_\kappa}^s_j) {v_\kappa}^r,_s \,, \\ \nabla J_\kappa = {a_\kappa}^r_s \nabla {\eta_\kappa}^r,_s \,. \end{array} \end{equation} Since $\|v_\kappa\|_s \le C \|v\|_s$ (and similarly for $\eta_\kappa$), we will write (\ref{Ba}) and (\ref{derivative_a}) in the following way: \begin{align*} \nabla B_{a_\kappa} &\sim J_\kappa^{-2} a_\kappa^2 \nabla v \nabla^2 v + J_\kappa^{-3} a_\kappa^3 (\nabla v)^2 \nabla^2 \eta \,,\\ \nabla a_\kappa &\sim J_\kappa^{-1} a_\kappa^2 \nabla^2 \eta\,, \\ \partial_t a_\kappa &\sim J_\kappa^{-1} a_\kappa^2 \nabla v \,,\\ \nabla J_\kappa &\sim a_\kappa \nabla^2 \eta \,, \end{align*} where note that we are not distinguishing between $\eta_\kappa$ and $\eta$ or between $v_\kappa$ and $v$ in the highest-order terms. The point is that the precise structure of these equations is not important for our estimates; we need only be careful with the derivative count appearing in these expressions. The power on each expression is merely to indicate the number of times such a term appears. 
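For completeness, we sketch where the identities (\ref{derivative_a}) come from; this is a standard matrix-calculus computation, recorded here only for the reader's convenience, and it assumes (as is standard for this cofactor matrix) that ${a_\kappa}^m_j = J_\kappa\,\big[(\nabla\eta_\kappa)^{-1}\big]^m_j$. Differentiating the inverse matrix and using Jacobi's formula for the derivative of the determinant,
$$
\nabla \big[(\nabla\eta_\kappa)^{-1}\big]^m_j = -\big[(\nabla\eta_\kappa)^{-1}\big]^m_r\, \nabla{\eta_\kappa}^r,_s\, \big[(\nabla\eta_\kappa)^{-1}\big]^s_j\,, \qquad
\nabla J_\kappa = J_\kappa \big[(\nabla\eta_\kappa)^{-1}\big]^s_r\, \nabla{\eta_\kappa}^r,_s = {a_\kappa}^s_r\,\nabla{\eta_\kappa}^r,_s\,,
$$
so that the product rule gives
$$
\nabla {a_\kappa}^m_j = \nabla J_\kappa\, \big[(\nabla\eta_\kappa)^{-1}\big]^m_j + J_\kappa\, \nabla\big[(\nabla\eta_\kappa)^{-1}\big]^m_j
= J_\kappa^{-1}\big({a_\kappa}^s_r\,{a_\kappa}^m_j - {a_\kappa}^m_r\,{a_\kappa}^s_j\big)\, \nabla{\eta_\kappa}^r,_s\,;
$$
the formula for $\partial_t {a_\kappa}^m_j$ follows in exactly the same way with $\nabla(\nabla\eta_\kappa)$ replaced by $\partial_t\nabla\eta_\kappa = \nabla v_\kappa$.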
Next, with the fundamental theorem of calculus, \begin{align*} \varepsilon_{ijk} \nabla \eta^k,_r {a_\kappa}^r_j = \nabla \operatorname{curl} \eta^i + \varepsilon_{ijk} \nabla \eta^k,_r \int_0^t \partial_t {a_\kappa}^r_j\,, \end{align*} so that \begin{equation}\label{zs3} \nabla (\operatorname{curl} \eta^i - t \operatorname{curl} u_0^i) = \varepsilon_{ijk}\left[ \nabla \eta^k,_r \int_0^t \partial_t {a_\kappa}^r_j + \int_0^t (\nabla \eta^k,_{r} \partial_t{a_\kappa}^r_j +v^k,_r \nabla {a_\kappa}^r_j) \right] + \int_0^t\int_0^{t'} \nabla B_{a_\kappa} \,. \end{equation} Let $$ F:=P(J_\kappa^{-1}, a_\kappa, \nabla v) \text{ and } F_1:=P(F, \nabla^2 \eta, \nabla^2 v) $$ denote polynomial functions of their arguments. We then express (\ref{zs3}) as $$ \nabla \operatorname{curl} \eta^i \sim t \nabla\operatorname{curl} u_0^i+ \nabla^2 \eta \int_0^t F + \int_0^t F\, \nabla^2 \eta + \int_0^t\int_0^{t'} F\, (\nabla^2 \eta + \nabla^2 v) \,, $$ and taking two more spatial derivatives yields \begin{align*} &\nabla^3 \operatorname{curl} \eta^i \sim t \nabla^3\operatorname{curl} u_0^i+ \nabla^4 \eta \int_0^t F + \nabla^3 \eta \int_0^t F_1 + \nabla^2 \eta \int_0^t F_1 \\ &\qquad\qquad +\left(\int_0^t\int_0^{t'} + \int_0^t\right) \left[ F_1 \, (\nabla^3 \eta + \nabla^3 v) + F\, \nabla^4 \eta \right] + \int_0^t\int_0^{t'}\left[ F_1\, \nabla^3 v + F \, \nabla^4 v \right] \,. \end{align*} Since $\int_0^t\int_0^{t'} F \, \nabla^4 v = - \int_0^t\int_0^{t'} F_t\, \nabla^2\eta + \int_0^t F_t\, \nabla^4 \eta$, \begin{align} &\nabla^3 \operatorname{curl} \eta^i \sim t \nabla^3\operatorname{curl} u_0^i+ \nabla^4 \eta \int_0^t F + (\nabla^3\eta+\nabla^2 \eta) \int_0^t F_1 \nonumber\\ &\qquad\qquad +\left(\int_0^t\int_0^{t'} + \int_0^t\right) \left[ F_1 \, (\nabla^3 \eta + \nabla^3 v) + (F+F_t)\, \nabla^4 \eta \right] + \int_0^t\int_0^{t'} F_1\, \nabla^3 v \,. \label{zzss0} \end{align} We use interpolation to compute $\|\nabla^3\operatorname{curl} \eta\|_{0.5}=\|\operatorname{curl} \eta\|_{3.5}$. We begin with the highest-order term: \begin{align*} \left\|\int_0^t (F+F_t)\, \nabla^4 \eta \right\|_0 &\le \sup_{t\in[0,T]} \left\|F+F_t\right\|_{L^\infty} \, \left\|\int_0^t\eta\right\|_4 \le C \sup_{t\in[0,T]}\left\|F+F_t\right\|_2 \, \left\|\int_0^t\eta\right\|_4 \, \\ \left\|\int_0^t (F+F_t)\, \nabla^4 \eta \right\|_1 &\le \sup_{t\in[0,T]} (\|F+F_t\|_{L^\infty} \, \left\|\int_0^t\eta\right\|_5 +\sup_{t\in[0,T]} \|F+F_t\|_{L^4} \, \left\|\int_0^t\nabla^4\eta\right\|_{L^4}\\ &\qquad \qquad \qquad \le C \sup_{t\in[0,T]}\|F+F_t\|_2 \, \left\|\int_0^t\eta\right\|_5 \,. \end{align*} Since $\|F+F_t\|_2 \le C \|F\|_{L^\infty} \,\|v_t\|_{2.5}$, by the interpolation theorem 7.17 in \cite{Adams1978}, $$ \left\|\int_0^t (F+F_t)\, \nabla^4 \eta \right\|_{0.5} \le C \sup_{t\in[0,T]}\|F\|\, \|v_t\|_{2.5} \left\|\int_0^t\eta\right\|_{4.5} \le C\,T\, \sup_{t\in[0,T]}\|F\|\, \|v_t\|_{2.5} \, \|\eta\|_{4.5} $$ The other terms have similar estimates in the $H^{0.5}(\Omega)$-norm, so that \begin{equation}\label{curl_eta_2D} \sup_{t\in[0,T]}\|\operatorname{curl} \eta\|_{3.5}^2 \le T \|u_0\|_{4.5} ^2 + t\sup_{t'\in[0,t]} \|\int_0^t v\|_{3.5}^2 + T\sup_{t\in[0,T]} \|v_t\|_{2.5} \|\eta\|_{4.5} ^2\,. \end{equation} By differentiating (\ref{zzss0}) once more in space, the same interpolation estimates show that $$ \sup_{t\in[0,T]}\|\sqrt{\kappa}\operatorname{curl} \eta\|_{4.5}^2 \le T \|\sqrt{\kappa} u_0\|_{5.5} ^2 + T \int_0^T \| v\|_{4.5}^2 + T\sup_{t\in[0,T]} \|v_t\|_{2.5} \|\sqrt{\kappa}\eta\|_{5.5} ^2\,. 
$$ Next, we rewrite (\ref{basic0}) as \begin{equation}\label{curlvss} \operatorname{curl} v = \operatorname{curl} u_0 + \varepsilon_{ijk} v^k,_r \int_0^t \partial_t {a_\kappa}^r_j + \int_0^t B_{a_\kappa} \,. \end{equation} Using the fact that $H^{s}(\Omega)$ is a multiplicative algebra for $s>1$, it follows from (\ref{Ba}) and (\ref{derivative_a}) that $\sup_{t\in[0,T]}\|\operatorname{curl} v(t)\|_{2.5} \le \|u_0\|_{4.5} + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$. Differentiating the above expression for $\operatorname{curl} v$ in time yields $$ \operatorname{curl} v_t = \varepsilon_{ijk} v^k,_r \partial_t {a_\kappa}^r_j +\varepsilon_{ijk} \partial_t v^k,_r \int_0^t \partial_t {a_\kappa}^r_j + B_{a_\kappa} \,, $$ so with the fundamental theorem of calculus and our generic polynomial function $F$, \begin{equation}\label{zs3b} \operatorname{curl} v_t \sim P(\nabla u_0) + \nabla v_t \int_0^t F + \int_0^t F_t, \ \ \ F_t \sim F\, \nabla v_t \,. \end{equation} Again using the properties of the multiplicative algebra, we see that $\sup_{t\in[0,T]}\|\operatorname{curl} v_t(t)\|_{1.5}^{2} \le P(\|u_0\|_{4.5}) + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$. From the time differentiation of (\ref{zs3b}), \begin{align} \operatorname{curl} v_{tt} &\sim \nabla v_t F + \nabla v_{tt}\int_0^t F \nonumber \\ & \sim \nabla v_t(0) P(\nabla u_0) + \nabla v_{tt} \int_0^t F + \int_0^t [F \nabla v_{tt} + F \nabla v_t \nabla v_t ] \,. \label{zs4} \end{align} We must estimate the $H^{0.5}(\Omega)$-norm of the three terms on the right-hand side of (\ref{zs4}) by using interpolation. Let $L$ denote the linear form given by $L(w) = \int_0^t F w$. Then $$\|L(w)\|_0 \le C_0 \|\int_0^t w\|_0, \ \ \ C_0= \sup_{[0,t]} \|F\|_{L^\infty} .$$ Letting $F_1:=P(J_\kappa^{-1},a_\kappa, \nabla v, \nabla^2\eta, \nabla^2 v)$, it is easy to check that $$\|L(w)\|_1 \le C_1 \|\int_0^t w\|_1, \ \ \ C_1= \sup_{[0,t]} \|F_1\|_{L^\infty} .$$ By the interpolation theorem 7.17 in \cite{Adams1978}, $$ \|\int_0^t F \nabla v_{tt} \|_{0.5} \le \sqrt{C_0}\sqrt{C_1} \, \, \|\int_0^t v_{tt}\|_{1.5} \,, $$ so that by Jensen's inequality and Sobolev embedding, $\|\int_0^t F \nabla v_{tt} \|_{0.5}^2 \le C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$. All of the other time-dependent terms in (\ref{zs4}) have the same bound by the same interpolation procedure. For the time $t=0$ term, interpolation provides the estimate $$\|P(\nabla u_0) \nabla v_t(0)\|_{0.5} \le C P(\|u_0\|_{4.5}) \|v_t(0)\|_{1.5} \le C P(\|u_0\|_{4.5}) \|q(0)\|_{2.5} \le M_0.$$ The initial pressure $q(0)$ solves the Dirichlet problem \begin{align*} \Delta q(0) & = {\mathfrak i}_0:=(u_0)^i,_j\, ({u_0}_\kappa)^j,_i \text{ in } \Omega \,, \\ q(0) &={\mathfrak b}_0:= \frac{\sqrt{g_0}}{\sqrt{{g_0}_\kappa}} {\Pi_0}^i_j g_0^{\alpha\beta} {\eta_0}^j,_{\alpha \beta} N^i_\kappa + \kappa \Delta_0(u_0\cdot N_\kappa) \text{ on } \Gamma \,. \end{align*} Since $\|q(0)\|_{2.5}\le C(\|{\mathfrak i}_0\|_{0.5} + | {\mathfrak b}_0 |_{2})\le M_0$, we see that $\sup_{t\in[0,T]}\|\operatorname{curl} v_{tt}(t)\|_{0.5}^{2} \le M_0 + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$.
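We also record, for the reader's convenience, why the bound $\|v_t(0)\|_{1.5}\le C\,\|q(0)\|_{2.5}$ may be used in the interpolation estimate above (a schematic remark only): at $t=0$ the momentum equation of the $\kappa$-problem expresses $v_t(0)$ in terms of first derivatives of the initial pressure,
$$
v_t(0) \sim -\,a_\kappa(0)\,\nabla q(0)\,,
$$
with coefficients determined solely by the initial data, so that one spatial derivative of $q(0)$ controls $v_t(0)$ and hence $\|v_t(0)\|_{1.5}\le C\,\|q(0)\|_{2.5}$.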
Differentiating (\ref{zs4}) with respect to time, we see that $\operatorname{curl} v_{ttt} \sim \nabla v_{ttt} \int_0^t F + F\, \nabla v_{tt} + F \, \nabla v_t \, \nabla v_t$ so that by the fundamental theorem of calculus \begin{align*} F\, \nabla v_{tt} + F \, \nabla v_t \, \nabla v_t & = F(0) [ \nabla v_{tt}(0) + \nabla v_t(0)\,\nabla v_t(0)] \\ &\qquad + \int_0^t [ F\, \nabla v_{ttt} + F \, \nabla v_t \, \nabla v_{tt} + F \, \nabla v_t \, \nabla v_{t}\, \nabla v_{t}]\,, \end{align*} and hence \begin{align} &\int_0^T \| \sqrt{\kappa} \operatorname{curl} v_{ttt}\|_{0.5}^2 \le \int_0^T \left\|\sqrt{\kappa} F(0) [ \nabla v_{tt}(0) + \nabla v_t(0)\,\nabla v_t(0)] \right\|_{0.5}^2 \nonumber \\ &\qquad + \int_0^T \left\| \sqrt{\kappa} \nabla v_{ttt}\int_0^t F\right\|_{0.5}^2 + \int_0^T \left\| \int_0^t [ F\, \sqrt{\kappa}\nabla v_{ttt} + F \, \nabla v_t \, \sqrt{\kappa}\nabla v_{tt} + F \, \nabla v_t \, \nabla v_{t}\, \sqrt{\kappa}\nabla v_{t}] \right\|_{0.5}^2 \,. \label{zs6} \end{align} We repeat the interpolation estimates between $L^2(\Omega)$ and $H^1(\Omega)$ just as for the estimates for $\operatorname{curl} v_{tt}$; for example, $$ \left\|\int_0^t F \sqrt{\kappa}\nabla v_{ttt} \right\|_{0.5} \le \sqrt{C_0}\sqrt{C_1} \, \, \left\|\int_0^t \sqrt{\kappa} v_{ttt}\right\|_{1.5} \,, $$ so that by Jensen's inequality and Sobolev embedding, $$ \|\int_0^t F \sqrt{\kappa} \nabla v_{ttt} \|_{0.5}^2 \le C\, t\, \sup_{[0,t]}E_\kappa \, \|\sqrt{\kappa} v_{ttt}\|_{0.5}^2\,.$$ Thus, integrating from $0$ to $T$ gives the estimate $$ \int_0^T\|\int_0^t F \sqrt{\kappa} \nabla v_{ttt} \|_{0.5}^2 \le C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) \, \int_0^T \|\sqrt{\kappa} v_{ttt}\|_{0.5}^2 \le C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) \,. $$ The other time-dependent terms in (\ref{zs6}) have the same bound by the same argument. The time $t=0$ terms require analysis of the elliptic problem for $q_t(0)$: \begin{align*} \Delta q_t(0) & \sim {\mathfrak i}_1 := \nabla [ P(\nabla u_0)\, \nabla q(0)] + F\, \nabla v_t(0) \text{ in } \Omega \,, \\ q_t(0) &\sim {\mathfrak b}_1:= Q(\partial \eta_0) \partial^2 u_0 + Q(\partial \eta_0) \partial u_0 \partial ^2 \eta_0 \\ & \qquad \qquad\qquad + \kappa \Delta_0 (v_t(0) \cdot N_\kappa) + \kappa \Delta_0( u_0 \cdot Q(\partial {\eta_0}_\kappa) \partial {u_0}_\kappa) \text{ on } \Gamma \,. \end{align*} By interpolation estimates (as above), $$ \int_0^T \left\|\sqrt{\kappa} F(0) [ \nabla v_{tt}(0) + \nabla v_t(0)\,\nabla v_t(0)] \right\|_{0.5}^2 \le \kappa T\,\|P (\nabla u_0)\|_{L^\infty}^2 \|v_{tt}(0)\|_{1.5}^2, $$ and since time-differentiation of the Euler equations shows that $$ v_{tt}(0) = - \nabla u_0\, \nabla q(0) - \nabla q_t(0), $$ interpolation provides the estimate \begin{align*} \sqrt{\kappa}\|v_{tt}(0)\|_{1.5} & \le \sqrt{\kappa}\| \nabla^2 u_0 \, \nabla q(0)\|_{0.5} + \sqrt{\kappa}\| \nabla u_0 \, \nabla^2 q(0)\|_{0.5} + \sqrt{\kappa}\|q_t(0)\|_{2.5} \\ & \le M_0 + \sqrt{\kappa}\|{\mathfrak i}_1\|_{0.5} + \sqrt{\kappa}|{\mathfrak b}_1|_2 \\ & \le M_0 + \sqrt{\kappa}|{\mathfrak b}_1|_2 \,, \end{align*} where we have used the elliptic estimate $\|q(0)\|_{2.5} \le M_0$ (from above) for both the second and third inequalities. (The remaining estimate for $|{\mathfrak b}_1|_2 $ places the regularity constraints on the polynomial function $M_0$ in the hypothesis of the lemma.)
Because $H^2(\Gamma)$ is a multiplicative algebra, the bound for $\sqrt{\kappa}|{\mathfrak b}_1|_2 $ is controlled by the highest-order terms $\sqrt{\kappa} |v_t(0)|_4$ and $\sqrt{\kappa} |{u_0}_\kappa|_5\le \sqrt{\kappa} C \|u_0\|_{5.5}$. Now, $$ \sqrt{\kappa} |v_t(0)|_4 \le \sqrt{\kappa} \|q(0)\|_{5.5} \le \sqrt{\kappa} \| {\mathfrak i}_0\|_{3.5} + \sqrt{\kappa} | {\mathfrak b}_0|_5\,, $$ and $\| {\mathfrak i}_0\|_{3.5}$ is bounded by $P(\|u_0\|_{4.5})$ while the highest-order terms in $\sqrt{\kappa} | {\mathfrak b}_0|_5$ require bounds on $\sqrt{\kappa} \|\eta_0\|_{7.5}$ and $\sqrt{\kappa} \|u_0\|_{7.5}$ With our definition of $M_0$, we see that $$\kappa T\,\|P (\nabla u_0)\|_{L^\infty}^2 \|v_{tt}(0)\|_{1.5}^2 \le M_0$$ and hence $\int_0^T \|\sqrt{\kappa} \operatorname{curl} v_{ttt}\|_{0.5}^2 \le M_0 + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$. The proof that $\int_0^T \|\sqrt{\kappa} \operatorname{curl} v_{tt}\|_{1.5}^2 \le M_0 + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$ is essentially identical. The divergence estimates begin with the fundamental equation ${a_\kappa}^j_i v^i,_j =0$. By taking one derivative of this equation and integrating-by-parts in time, we find that $$ \nabla \operatorname{div} \eta = \nabla \eta^i,_j \int_0^t \partial_t {a_\kappa}^j_i + \int_0^t( \partial_t {a_\kappa}^j_i \nabla \eta^i,_j - \nabla {a_\kappa}^j_i v^i,_j) \,. $$ Computing the $H^{2.5}(\Omega)$-norm of this equation yields the estimate $\sup_{t\in[0,T]} \| \operatorname{div} \eta(t)\|_{3.5}^2 \le M_0 + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$. The divergence estimates for $v$, $v_t$, $v_{tt}$, $\sqrt{\kappa} v_{tt}$, and $\sqrt{\kappa} v_{ttt}$ follow the same argument as the corresponding curl estimates. \vspace{.2 in} In the case that ${\mathfrak n}=3$, the estimates are found in the same way, with one minor change. Set $E_\kappa(t)=E_\kappa^{3D}(t)$. The estimates for $\operatorname{curl} \eta$, which rely on Sobolev embedding, require greater regularity on $v_t$. The estimate (\ref{curl_eta_2D} becomes \begin{equation}\nonumber \sup_{t\in[0,T]}\|\operatorname{curl} \eta\|_{3.5}^2 \le T \|u_0\|_{4.5} ^2 + t\sup_{t'\in[0,t]} \|\int_0^t v\|_{3.5}^2 + T\sup_{t\in[0,T]} \|v_t\|_{3} \|\eta\|_{4.5} ^2\,, \end{equation} and similarly, $$ \sup_{t\in[0,T]}\|\sqrt{\kappa}\operatorname{curl} \eta\|_{4.5}^2 \le T \|\sqrt{\kappa} u_0\|_{5.5} ^2 + T \int_0^T \| v\|_{4.5}^2 + T\sup_{t\in[0,T]} \|v_t\|_{3} \|\sqrt{\kappa}\eta\|_{5.5} ^2\,. $$ \end{proof} \section{Some geometric identities} \label{Section_apriori} \def{\bar{n}^{\kappa}}{{\tilde n_\kappa}} \def{v^\kappa}{ {\tilde{v}_\kappa} } \def{\tau_\kappa}{{\tau_\kappa}} \def{{{\bar{\eta}^{\kappa}}}}{{\tilde \eta_\kappa}} \def{\tilde g_\kappa}{{\tilde g_\kappa}} \def{p^\kappa}{{\Pi^\kappa}} \def{{\tilde a}^k}{{{\tilde a}^k}} \def\int_{0}^{T}{\int_{T'-\epsilon}^{T'}} \def\int_{0}^{T}{\int_{0}^{T}} We will usually omit writing $dS_0$ in our surface integrals, and for convenience we set $\sigma=1$. Let $\Pi^i_j$ denote the projection operator onto the direction normal to $\eta(\Gamma)$, defined as $\Pi^i_j = \delta^i_j - g^{\alpha \beta} \eta^i_{,\alpha} \delta_{jl} \eta^l_{,\beta}$, where $g$ is the induced metric on $\eta(\Gamma)$ defined in (\ref{gstuff}). The mean curvature vector motivates us to introduce the projection operator $\Pi$. 
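We note in passing (a standard check, recorded here only for convenience) that $\Pi$ indeed projects onto the direction normal to $\eta(\Gamma)$: since $\eta^j,_\beta\,\eta^j,_\gamma = g_{\beta\gamma}$,
$$
\Pi^i_j\,\eta^j,_\gamma = \eta^i,_\gamma - g^{\alpha\beta}\,\eta^i,_\alpha\,\big(\eta^j,_\beta\,\eta^j,_\gamma\big) = \eta^i,_\gamma - g^{\alpha\beta}g_{\beta\gamma}\,\eta^i,_\alpha = 0\,,
$$
while $\Pi^i_j\, n^j = n^i$ because $n\cdot\eta,_\beta = 0$; thus $\Pi$ annihilates the tangential directions and fixes the normal.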
In particular, we have the important formula \begin{align} -\sqrt{g}Hn\circ\eta& = \sqrt{g}\Delta_g(\eta)\nonumber\\ & = \sqrt{g}\left[ g^{\alpha\mu}( \delta^{ij} - g^{\nu\beta}\eta^i,_\beta\eta^j,_\nu) \eta^j,_{\mu \alpha} +( g^{\alpha\beta} g^{\mu\nu} - g^{\alpha\nu} g^{\mu\beta}) \eta^i,_\beta \eta^j,_\nu \eta^j,_{\mu\alpha} \right] \nonumber\\ &= \sqrt{g} g^{\mu\alpha} \Pi^i_j \eta^j,_{\mu\alpha}\,, \label{importantF} \end{align} where the last equality follows since $( g^{\alpha\beta} g^{\mu\nu} - g^{\alpha\nu} g^{\mu\beta}) \eta^i,_\beta \eta^j,_\nu \eta^j,_{\mu\alpha} =0$. For a vector field $F$ on $\Gamma$, $\Pi \, F = [n \cdot F] \, n$, i.e., $\Pi = n\otimes n$. We let \begin{equation}\label{Q} {Q}(\partial \eta) = f_1(\partial \eta)/f_2(\sqrt{g}) \,, \end{equation} denote a generic rational function where $f_1$ and $f_2$ are smooth functions. We record for later use that $n= \frac{a^T N}{|a^TN|}= \frac{\eta,_1 \times \, \eta,_2}{|\eta,_1 \times \, \eta,_2|}$ and that $|a^TN|= \sqrt{\det g}$ on $\Gamma$, as \begin{align*} |\eta,_1 \times \, \eta,_2|^2 & = \varepsilon_{ijk}\eta^j,_1 \eta^k,_2 \,\varepsilon_{irs}\eta^r,_1 \eta^s,_2\\ &= (\delta_{jr}\delta_{ks}-\delta_{js}\delta_{kr}) \eta^j,_1 \eta^k,_2 \eta^r,_1 \eta^s,_2 = |\eta,_1|^2|\eta,_2|^2 - [\eta,_1 \cdot \eta,_2]^2 = \det g \,, \end{align*} where $\varepsilon_{ijk}$ denotes the permutation symbol of $(1,2,3)$. We will use the symbol ${Q}$ to denote any smooth (tensor) function that can be represented as (\ref{Q}). \begin{remark} The $L^\infty$-norm of the numerator of ${Q}$ is bounded by a polynomial of the energy function, while the $L^\infty$-norm of the denominator of ${Q}$ is uniformly controlled by (\ref{deteta.a}). Thus, the generic constant $C$ which appears in the following inequalities may depend on a polynomial of $\det g_0$. In particular, $\|{Q}(\partial \eta)\|_{L^\infty} \le C(\det g_0) \|P (\partial \eta)\|_{L^\infty}$. \end{remark} For a vector field $F$ on $\Gamma$, $F \cdot N = F \cdot n + F \cdot (N-n)$ and $$|N-n|_{L^\infty} \le \int_0^t |n_t|_{L^\infty} = \int_0^t |{Q}(\partial \eta) \partial v|_{L^\infty} \le C\, t\, P(E_\kappa(t))\,,$$ the last inequality following from (\ref{deteta.a}). If $|\Pi F |_s \le M_0 + C P(E_\kappa(t))$, then $|F\cdot N|_s$ satisfies the same bound. \section{$\kappa$-independent estimates for the smoothed problem and existence of solutions in 2D}\label{kapriori} All of the variables in the smoothed $\kappa$-problem (\ref{smooth}) implicitly depend on the parameter $\kappa$. In this section, where we study the asymptotic behavior of the solutions to (\ref{smooth}) as $\kappa \rightarrow 0$, we will make this dependence explicit by placing a $\sim$ over each of the variables. We set $E_\kappa(t)= E_\kappa^{2D}(t)$. \begin{remark}\label{2Dvs3D} The only difference between the 2D and 3D cases arises from the embedding of $\tilde v_t$ into $L^\infty(\Omega)$. In 2D, $\tilde v_t \in H^{2.5}(\Omega)$ is sufficient, while in 3D, we need $\tilde v_t \in H^{3}(\Omega)$. \end{remark} The pressure function $\tilde q$ can be formulated to solve either a Dirichlet problem with boundary condition (\ref{kbc}) or a Neumann problem found by taking the inner-product of the Euler equations with ${\tilde a_\kappa}^T N$. We use the latter. \begin{lemma}[Pressure estimates] With $(\tilde v, \tilde q)$ a solution of the $\kappa$-problem (\ref{smooth}), \begin{align} & \|\tilde q(t)\|^2_{3.5} +\|\tilde q_{t}(t)\|^2_{2.5} +\|\tilde q_{tt}(t)\|^2_{1} \le C \, P( E_\kappa(t)) \,.
\label{qestimate} \end{align} \end{lemma} \begin{proof} Denoting $\tilde a_\kappa$ by $A$, we define the divergence-form elliptic operator $L_A$ and corresponding Neumann boundary operator $B_A$ as $$ L_A = \partial_j ( \tilde J_\kappa^{-1} A^j_i A^l_i \partial_l)\,, \ \ \ B_A = \tilde J_\kappa^{-1} A^j_i A^l_i N_j \partial_l \,. $$ For $k=0,1,2$, we analyze the Neumann problems \begin{align} L_A (\partial^k_t \tilde q) = f_k \ \text{ in } \ \Omega \ \ & \ \ B_A(\partial^k_t \tilde q) = g_k \ \text{ on } \ \Gamma \label{Neumann} \end{align} with \begin{alignat*}{2} f_0&= \partial_t A^j_i \, \tilde v^i,_j \qquad &&g_0= - \tilde v_t\cdot \sqrt{\tilde g_\kappa} \tilde n_\kappa\\ f_1&= -{L_A}_t(\tilde q) - \partial_t^2A^j_i\, \tilde v^i,_j- \partial_tA^j_i\, \tilde v_t^i,_j \qquad &&g_1= {B_A}_t(\tilde q) - \tilde v_t \cdot (\sqrt{\tilde g_\kappa} \tilde n_\kappa)_t - \tilde v_{tt} \cdot \sqrt{\tilde g_\kappa} \tilde n_\kappa \\ f_2&= -2{L_A}_{t}(\tilde q_t) -{L_A}_{tt}(\tilde q) - \partial_t^3A^j_i\, \tilde v^i,_j \qquad &&g_2= 2{B_A}_t(\tilde q_t) + {B_A}_{tt}(\tilde q) - \tilde v_{ttt}\cdot \sqrt{\tilde g_\kappa} \tilde n_\kappa \\ &\qquad - 2\partial_t^2A^j_i\, \tilde v_t^i,_j-\partial_tA^j_i\, \tilde v_{tt}^i,_j \qquad && \qquad -2\tilde v_{tt}\cdot (\sqrt{\tilde g_\kappa} \tilde n_\kappa)_t -\tilde v_{t}\cdot (\sqrt{\tilde g_\kappa} \tilde n_\kappa)_{tt} \,. \end{alignat*} For $s\ge 1$, elliptic estimates provide the inequality \begin{equation}\label{elliptic} \| \partial^k_t \tilde q(t) \|_s \le C_s [P( \|\eta\|_{4.5})\|f_k\|_{s-2} + |g_k|_{s-3/2} + \| \tilde q \|_0] \,, \end{equation} where $\|\cdot \|_{-1}$ denotes the norm on $[H^1(\Omega)]'$. We remark that the usual $H^s$ elliptic estimates require that the coefficients have the regularity $\partial^{s-1}(A^l_i A^j_i) \in L^\infty(\Omega)$; however, $\partial^{s-1}(A^l_i A^j_i) \in L^2(\Omega)$ is sufficient. See \cite{Eb2002} or the quasilinear estimates in \cite{Taylor1996}. As we cannot guarantee that solutions $\tilde q$ to the $\kappa$-problem (\ref{smooth}) have zero average, we use $\|\tilde q\|_0 \le C\|\tilde q\|_1$ and the $H^1$ elliptic estimate for the Dirichlet problem $L_A(q)= f_0$ in $\Omega$ with $-q= \Delta_{\tilde g}\tilde \eta\cdot \tilde n_\kappa + \kappa \Delta_0( \tilde v\cdot \tilde n_\kappa)$ on $\Gamma$. Thus, $\|\tilde q\|_1 \le C (\|f_0\|_0 + |\Delta_{\tilde g}\tilde \eta\cdot \tilde n_\kappa + \kappa \Delta_0( \tilde v\cdot \tilde n_\kappa)|_{0.5}) \le C\, P(E_\kappa(t))$. From (\ref{derivative_a}), it is clear that $\|f_0\|_{1.5}^2 + |g_0|_{2}^2 \le C P(E_\kappa(t))$; thus, from the elliptic estimate, \begin{equation}\label{q0} \|\tilde q\|_{3.5}^2 \le C P(E_\kappa(t)). \end{equation} Next, we must show that $\|f_1\|_{0.5}^2 + |g_1|_1^2 \le C \, P(E_\kappa(t))$. But $f_1 \sim P(\tilde J_\kappa^{-1},A,\nabla \tilde v_\kappa) ( [\nabla \tilde v_t]^2 + \nabla^2 \tilde q)$ so that with (\ref{q0}), $\|f_1\|_{0.5}^2\le C\, P(E_\kappa(t))$, with the same bound for $|g_1|_1^2$, so that $ \|\tilde q_t\|_{2.5}^2 \le C P(E_\kappa(t))$. Using this, we find, in the same fashion, that $\|f_2\|_0 \le C\, P(E_\kappa(t))$. The normal trace theorem, read in Lagrangian variables, states that if $\tilde v_{ttt} \in L^2(\Omega)$ with $A^j_i \tilde v_{ttt}^i,_j \in L^2(\Omega)$, then $ \tilde v_{ttt} \cdot \sqrt{\tilde g_\kappa} \tilde n_\kappa \in H^{-0.5}(\Gamma)$ with the estimate $| \tilde v_{ttt}\cdot \sqrt{\tilde g_\kappa} \tilde n_\kappa|_{-0.5}^2 \le C \, P(E_\kappa(t))$.
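For the reader's convenience, we recall the Eulerian statement behind this normal trace theorem (a classical fact, recorded here only as a reminder): if $D$ is a bounded Lipschitz domain and $w\in L^2(D;\mathbb{R}^n)$ satisfies $\operatorname{div} w\in L^2(D)$, then the normal trace $w\cdot N$ is well defined in $H^{-0.5}(\partial D)$ through the identity
$$
\langle w\cdot N,\phi\rangle \;=\; \int_D w\cdot\nabla\Phi \;+\; \int_D (\operatorname{div} w)\,\Phi \qquad \text{for any } \Phi\in H^1(D) \text{ with } \Phi=\phi \text{ on } \partial D\,,
$$
with the estimate $|w\cdot N|_{-0.5}\le C\big(\|w\|_{0}+\|\operatorname{div} w\|_{0}\big)$; the Lagrangian version quoted above follows, roughly speaking, upon composing with $\tilde\eta_\kappa$, with $A^j_i\,\tilde v^i_{ttt},_j$ playing the role of the Eulerian divergence.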
Since $\|\operatorname{Tr}(A\, \nabla \tilde v_{ttt})\|_0^2 = \|\operatorname{Tr}( 3 A_{t}\, \nabla \tilde v_{tt} + 3A_{tt}\, \nabla \tilde v_{t} + A_{ttt}\, \nabla \tilde v)\|_0^2 \le C\, P(E_\kappa(t))$, and using the above estimates for $\tilde q$ and $\tilde q_t$, we find that $|g_2|_{-0.5} \le C\, P(E_\kappa(t))$, thus completing the proof. \end{proof} Our smoothed $\kappa$-problem (\ref{smooth}) uses the boundary condition (\ref{smooth.e}), which we write as \begin{equation}\label{kbc} \tilde q\, {\bar{n}^{\kappa}} = \frac{\sqrt{\tilde g}}{\sqrt{\tilde g_\kappa}} \tilde H\, \tilde n \cdot {\bar{n}^{\kappa}}\, {\bar{n}^{\kappa}}- \kappa \Delta_{ 0} (\tilde v\cdot {\bar{n}^{\kappa}})\, {\bar{n}^{\kappa}}\,, \end{equation} where (we remind the reader) $\kappa>0$ is the artificial viscosity, $\Delta_{0}= \sqrt{\tilde g_\kappa}^{-1} \partial_\alpha (\sqrt{g_0} g_0^{\alpha \beta} \partial_\beta)$, $\tilde n$ is the unit normal along the boundary $\tilde \eta(t)(\Gamma)$, and ${\bar{n}^{\kappa}}$ is the unit normal along the smoothed $\kappa$-boundary $\tilde \eta_\kappa(t)(\Gamma)$. We begin with an energy estimate for the third time-differentiated problem. Although we are doing the estimates for the 2D domain $\Omega$, we keep the notation of the 3D problem as well as terms that only arise in 3D when differentiating the mean curvature vector. Thus, when we turn to the 3D problem in Section \ref{kapriori3}, the modifications will be trivial. \begin{lemma}[Energy estimates for the third time-differentiated $\kappa$-problem] \label{lemma10.2} For $M_0$ taken as in Lemma \ref{lemma1} and $\delta >0$, solutions of the $\kappa$-problem (\ref{smooth}) satisfy: \begin{align} &\sup_{t\in[0,T]}\left[ \|\tilde v_{ttt}\|_0^2 + |\tilde v_{tt}\cdot\tilde n|_1^2 \right] +\int_0^T |\sqrt{\kappa}\partial_t^3\tilde v \cdot {\bar{n}^{\kappa}}|_1^2 \le M_0 + T \, P(\sup_{t\in [0,T]} E_\kappa(t)) + \delta \sup_{t\in [0,T]} E_\kappa(t) \nonumber \\ &\qquad + C \sup_{t\in [0,T]} [ P(\|\tilde v_t\|_{2.5}^2) + P(\|\tilde v\|_{3.5}^2)+ P(\|\tilde\eta\|_{4.5}^2)] + C P(\|\sqrt{\kappa} \tilde v_{tt}\|_{L^2(0,T;H^{2.5}(\Omega))}^2) \,. \label{ss_kttt} \end{align} \end{lemma} \begin{proof} Letting $A= \tilde a_\kappa$ and testing $\partial_t^3 (\tilde J_\kappa \tilde v_t^i) + \partial_t^3({A}^k_i \tilde q,_k) =0$ with $\partial_t^3 \tilde v^i$ shows that \begin{equation}\label{eulerttt} \int_0^T{\frac{1}{2}} \int_{\Omega} \partial_t^3(\tilde J_\kappa \tilde v_t^i) \partial_t^3 \tilde v^i - \int_0^T\int_{\Omega} \partial^3_t( A^k_i \tilde q) \ \partial_t^3 \tilde v^i,_k = - \int_0^T\int_{\Gamma} \partial_t^3 (\sqrt{\tilde g_\kappa} \tilde q {\bar{n}^{\kappa}} (\tilde \eta_\kappa)) \cdot \partial_t^3 \tilde v \ dS_0 \,. \end{equation} \noindent {\bf Step 1. Boundary integral term.} We rewrite the modified boundary condition (\ref{kbc}) as \begin{equation}\label{kbca} \tilde q\, {\bar{n}^{\kappa}} = \frac{\sqrt{g}}{\sqrt{g_\kappa}} \left[\tilde H\tilde n + \tilde H\tilde n\cdot({\bar{n}^{\kappa}} -\tilde n)\, \tilde n + \tilde H\tilde n\cdot{\bar{n}^{\kappa}}\, ({\bar{n}^{\kappa}} -\tilde n)\right] - \kappa \Delta_{0}(\tilde v\cdot {\bar{n}^{\kappa}})\, {\bar{n}^{\kappa}}\,.
\end{equation} We first consider the boundary integral on the right-hand side of (\ref{eulerttt}) with only the first term on the right-hand side of (\ref{kbca}): \begin{align} &-\int_0^T\int_{\Gamma} \partial^3_{t} (\sqrt{\tilde g}\tilde H\tilde n^i \circ \eta) \partial^3_t \tilde v^i \, dS_0 \nonumber \\ &= -\int_0^T\int_{\Gamma}\sqrt{\tilde g} \tilde g^{\alpha\beta} \Pi^i_j \partial^2_t \tilde v^j_{,\beta} \partial^3_t \tilde v^i_{,\alpha} - \int_0^T\int_{\Gamma} \sqrt{\tilde g}[ \tilde g^{\mu\nu}\tilde g^{\alpha\beta} - \tilde g^{\alpha\nu}\tilde g^{\mu\beta}] \tilde \eta^j_{,\nu} \partial^2_t \tilde v^j_{,\mu} \, \tilde \eta^i_{,\beta} \partial^3_t \tilde v^i_{,\alpha} \nonumber \\ &\qquad \qquad +\int_0^T \int_{\Gamma} Q_{ij}^{\alpha\beta}(\partial \tilde \eta, \partial \tilde v) \, \partial_t \tilde v^j_{,\beta}\, \partial^3_t \tilde v^i_{,\alpha} +\int_0^T \int_{\Gamma} Q_{i}^{\alpha}(\partial \tilde \eta, \partial \tilde v) \, \partial^3_t \tilde v^i_{,\alpha} \label{Hn_ttt} \\ & =: I + II + III + IV \,. \nonumber \end{align} The first term $I$ on the right-hand side of (\ref{Hn_ttt}) is given by \begin{align*} I &= \left. -\frac{1}{2}\int_{\Gamma} \sqrt{\tilde g} (\Pi^i_j \partial^2_t\tilde v^j_{,\beta}) \tilde g^{\alpha\beta}(\Pi^i_k \partial^2_t\tilde v^k_{,\alpha})\right]^T_0 +\int_0^T\int_{\Gamma} Q^{\alpha\beta}_{jk}(\partial \tilde \eta,\partial \tilde v) \partial^2_t \tilde v^k_{,\alpha}\, \partial^2_t \tilde v^j_{,\beta} \,, \\ \end{align*} where we use the notation $f]^T_0 = f(T) -f(0)$. Since $\Pi^i_j \tilde v^j_{tt},_\beta =(\Pi^i_j \tilde v^j_{tt}),_\beta -{\Pi^i_j},_\beta \tilde v^j_{tt}$ and ${\Pi^i_j},_\beta = {Q}^{i\gamma}_{jl}(\partial\tilde \eta) \tilde \eta^l,_{\gamma \beta}$ with ${Q}(\partial\tilde \eta)$ defined by (\ref{Q}), for $\delta >0$, \begin{align*} -\frac{1}{2}\int_{\Gamma} \sqrt{\tilde g} (\Pi^i_j \partial^2_t\tilde v^j_{,\beta}) \tilde g^{\alpha\beta}(\Pi^i_k \partial^2_t\tilde v^k_{,\alpha}) & \le -{\frac{1}{2}} | \Pi \tilde v_{tt} |_1^2 + \delta | \Pi \tilde v_{tt} |_1^2 + (1+C_\delta) \left| {Q}^{i\alpha\beta}_{jl}(\partial \tilde \eta)\, \tilde \eta^l,_{\alpha \beta}\, \tilde v_{tt}^j\right|_0^2 \,, \end{align*} where the constant $C_\delta$ depends inversely on $\delta$. Since for any $t \in [0,T]$ $$ \left| \left( {Q}^{i\alpha\beta}_{jl}(\partial \tilde \eta)\, \tilde \eta^l,_{\alpha \beta}\, \tilde v_{tt}^j\right)_t \right|_0^2(t) \le C P(E_\kappa(t)), $$ it follows that $$ I\le -{\frac{1}{2}} \sup_{t\in[0,T]}| \Pi \tilde v_{tt}|_1^2 + M_0(\delta)+ \delta \sup_{t\in[0,T]} E_\kappa(t) + C \, T P(\sup_{t\in[0,T]} E_\kappa(t)) \,. $$ The second term $II$ requires some care (in the way in which the terms are grouped together). 
Letting \begin{align} & {\mathcal A}^1= \left[ \begin{array}{cc} \tilde \eta_{,1}\cdot \partial_t^2\tilde v_{,1} &\tilde \eta_{,1}\cdot \partial_t^2\tilde v_{,2} \\ \tilde \eta_{,2}\cdot \partial_t^2\tilde v_{,1} &\tilde \eta_{,2}\cdot \partial_t^2\tilde v_{,2} \end{array} \right], \ {\mathcal A}^2= \left[ \begin{array}{cc} \tilde v_{,1}\cdot \partial_t^2\tilde v_{,1} &\tilde \eta_{,1}\cdot \partial_t^2\tilde v_{,2} \\ \tilde v_{,2}\cdot \partial_t^2\tilde v_{,1} &\tilde \eta_{,2}\cdot \partial_t^2\tilde v_{,2} \end{array} \right], \nonumber \\ & \qquad\qquad\qquad\qquad {\mathcal A}^3= \left[ \begin{array}{cc} \tilde \eta_{,1}\cdot \partial_t^2\tilde v_{,1} &\tilde v_{,1}\cdot \partial_t^2\tilde v_{,2} \\ \tilde \eta_{,2}\cdot \partial_t^2\tilde v_{,1} &\tilde v_{,2}\cdot \partial_t^2\tilde v_{,2} \end{array} \right] \,, \label{detA} \end{align} we find that \begin{align} II &= \int_0^T\int_{\Gamma} \det{\tilde g^{-{\frac{1}{2}}}} \, ( \partial_t \det{{\mathcal A}^1} - \det{{\mathcal A}^2}- \det{{\mathcal A}^3}) \nonumber \\ &= \int_0^T\int_{\Gamma} -(\det{g^{-{\frac{1}{2}}}})_t \, \det{{\mathcal A}^1} - \det \tilde g^{-{\frac{1}{2}}}(\det{{\mathcal A}^2}+ \det{{\mathcal A}^3}) + \left.\int_{\Gamma} \det{\tilde g^{-{\frac{1}{2}}}} \det{{\mathcal A}^1} \right]_0^T\,. \label{term_II} \end{align} For $\alpha=1,2$, let $V_\alpha = \tilde \eta_{,\alpha} \cdot \partial_t^2 \tilde v$; thus ${V_\alpha},_\beta = {\mathcal A}^1_{\alpha\beta} + \tilde \eta^i,_{\alpha\beta} \tilde v^i_{tt}$ so that \begin{align*} \det{{\mathcal A}^1_{\alpha\beta}} & = \det( {V_\alpha},_\beta - \tilde \eta^i,_{\alpha\beta} \tilde v^i_{tt}) \\ & =\det{{V_\alpha},_\beta} - \det (\tilde \eta^i,_{\alpha\beta} \tilde v^i_{tt}) + P_{ij}(\partial^2 \tilde \eta) \tilde v^i_{tt} \tilde v^j_{tt} + P^\alpha_{ij}(\partial^2 \tilde \eta) \tilde v^i_{tt} \tilde v_{tt}^j,_\alpha \,. \end{align*} With $A = {\text{Cof}} (\partial V)$, $\det \partial V = A^{\beta\alpha} {V_\alpha},_\beta$. It follows that $$\int_\Gamma \det{\tilde g^{-{\frac{1}{2}}}} \det \partial V = - \int_\Gamma (\det{\tilde g^{-{\frac{1}{2}}}})_{,\beta} A^{\beta\alpha} V,_\alpha,$$ as $A^{\beta\alpha},_\beta =0$ since $A$ is the cofactor matrix. Hence, \begin{equation}\label{ssss7} \int_\Gamma \det {\tilde g^{-{\frac{1}{2}}}} \det {\mathcal A}_1 = \int_\Gamma P_{ij}(\partial^2 \tilde \eta) \tilde v^i_{tt} \tilde v^j_{tt} + P^\alpha_{ij}(\partial^2 \tilde \eta) \tilde v^i_{tt} \tilde v_{tt}^j,_\alpha \,, \end{equation} so that \begin{align*} II & \le \int_0^T \int_{\Gamma} Q^{\alpha\beta}_{ij} (\partial \tilde \eta, \partial \tilde v) \, \partial^2_t \tilde v^i_{,\alpha}\, \partial^2_t \tilde v^j_{,\beta} + \left.\ \int_\Gamma [ P_{ij}(\partial^2 \tilde \eta) \tilde v^i_{tt} \tilde v^j_{tt} + P^\alpha_{ij}(\partial^2 \tilde \eta) \tilde v^i_{tt} \tilde v_{tt}^j,_\alpha] \right]^T_0 \,. \end{align*} By the fundamental theorem of calculus and Young's inequality, for $\delta >0$, \begin{align*} &\int_\Gamma[P^\alpha_{ij}(\partial^2\tilde \eta) \tilde v^i_{tt} \tilde v^j_{tt},_\alpha](T) = \int_\Gamma[P^\alpha_{ij}(\partial^2\tilde \eta) \tilde v^i_{tt}](0)\, \tilde v^j_{tt},_\alpha(T) + \int_\Gamma\int_0^T {[P^\alpha_{ij}(\partial^2\tilde \eta) \tilde v^i_{tt}]}_t \, dt\, \tilde v^j_{tt},_\alpha(T) \\ &\qquad \le M_0(\delta) + \delta \|\tilde v_{tt}(T)\|_{1.5}^2 + T\left ( \int_\Gamma \sup_{t\in [0,T]} | {(P^\alpha_{ij} (\partial^2\tilde \eta) \tilde v^i_{tt})}_t |^2 dx \right)^{\frac{1}{2}} \|\tilde v^j_{tt}(T)\|_{1.5}^{\frac{1}{2}}\,. 
\end{align*} Since ${[P^\alpha_{ij}(\partial^2 \tilde \eta) \tilde v^i_{tt}]}_t \in L^\infty(0,T; L^2(\Gamma))$, we conclude that $$ II\le M_0(\delta)+ \delta \sup_{t\in[0,T]} E_\kappa(t) + C \, T P(\sup_{t\in[0,T]} E_\kappa(t)) \,. $$ A temporal integration by parts in the third and fourth terms on the right-hand side of (\ref{Hn_ttt}) yields \begin{align*} III+ IV & = -\int_0^T \int_{\Gamma} { [ Q^{\alpha\beta}_{ij} (\partial \tilde \eta, \partial \tilde v) \tilde v^j_t,_\beta + Q^\alpha_i(\partial\tilde \eta, \partial \tilde v)] }_t \, \tilde v^j_{tt},_\beta \\ & \qquad\qquad\qquad\qquad + \left.\int_{\Gamma} [Q^{\alpha\beta}_{ij} (\partial \tilde \eta,\partial \tilde v) \tilde v^j_{t},_\beta + Q^\alpha_i(\partial\tilde \eta,\partial \tilde v) ] \tilde v^i_{tt},_\alpha \right]^T_0 \,, \end{align*} which has the same bound as term $II$; it follows that \begin{align} & \int_0^T\int_{\Gamma} \partial_t^3 (\sqrt{\tilde g}\tilde H\tilde n \circ \tilde \eta) \cdot \partial_t^3 \tilde v \nonumber \\ & \qquad\qquad\qquad\qquad = -{\frac{1}{2}}\sup_{t\in[0,T]} |\Pi \tilde v_{tt}|_1^2 + M_0(\delta)+ \delta \sup_{t\in[0,T]} E_\kappa(t) + C \, T P(\sup_{t\in[0,T]} E_\kappa(t)) \,. \label{ttt1} \end{align} \begin{remark} \label{remark_det} The determinant structure which appears in (\ref{term_II}) is crucial in order to obtain the desired estimate. In particular, the term $\det {\mathcal A}_1$ is linear in the highest-order derivative $\partial \partial_t^2 v$ rather than quadratic (as it a priori appears). \end{remark} There are three remaining boundary integral terms appearing on the right hand side of (\ref{eulerttt}) arising from (\ref{kbca}); the terms involving $\kappa$ are \begin{align} &-\kappa \int_0^T \left( \left[\partial_t^3 (\tilde v \cdot {\bar{n}^{\kappa}}), \partial_t^3\tilde v \cdot {\bar{n}^{\kappa}}\right]_1 +3\left[\partial_t^2 (\tilde v \cdot {\bar{n}^{\kappa}}), \partial_t^3\tilde v \cdot \partial_t {\bar{n}^{\kappa}}\right]_1 +3\left[\partial_t (\tilde v \cdot {\bar{n}^{\kappa}}), \partial_t^3\tilde v \cdot \partial_t^2 {\bar{n}^{\kappa}}\right]_1 \right. \nonumber \\ & \qquad\qquad\qquad \qquad\qquad\qquad\qquad \qquad \left. +\left[\tilde v \cdot {\bar{n}^{\kappa}}, \partial_t^3\tilde v \cdot \partial_t^3 {\bar{n}^{\kappa}}\right]_1 \right)\,. \label{kextra} \end{align} The first term in (\ref{kextra}) provides both the energy contribution $\int_0^T |\sqrt{\kappa}\partial_t^3 \tilde v \cdot {\bar{n}^{\kappa}}|_1^2$ as well as error terms. We start the analysis with the most difficult error term, \begin{equation}\label{difficult} \kappa \int_0^T\{[ \tilde v \cdot \partial \partial_t^3 {\bar{n}^{\kappa}}, \partial( {\bar{n}^{\kappa}} \cdot \partial_t^3\tilde v)]_0, \end{equation} whose highest-order contribution has an integrand (modulo $L^\infty$ terms) of the form $\partial^2 \sqrt{\kappa}\tilde {v_\kappa}_{tt}$ $\sqrt{\kappa}\partial \tilde v_{ttt}$. 
With ${\bar{n}^{\kappa}}= (\partial_1{{{\bar{\eta}^{\kappa}}}} \times \partial_2{{{\bar{\eta}^{\kappa}}}})/\sqrt{g}=: {Q}(\partial \tilde \eta_\kappa)$, ${Q}$ given by (\ref{Q}), the highest-order term in $\partial\partial_t^3 {\bar{n}^{\kappa}}$ is ${Q}(\partial\tilde\eta_\kappa) \partial^2 \partial_t^2 \tilde v_\kappa$, so that with ${\mathcal R}_1$ denoting a lower-order remainder term, and using (\ref{deteta.b}), we have that \begin{align*} &-\kappa \int_0^T[ \tilde v \cdot \partial \partial_t^3 {\bar{n}^{\kappa}}, \partial( {\bar{n}^{\kappa}} \cdot \partial_t^3 \tilde v)]_0 \le C \sup_{t\in[0,T]}|P(\tilde v,\partial\tilde \eta_\kappa)|_{L^\infty} \int_0^T|\sqrt{\kappa}\partial_t^3\tilde v \cdot {\bar{n}^{\kappa}}|_1 \, | \sqrt{\kappa}\partial_t^2 \tilde v_\kappa|_{2} + {\mathcal R}_1\\ &\qquad \le C \sup_{t\in[0,T]}|P(\tilde v,\partial\tilde \eta_\kappa)|_{L^\infty} |\sqrt{\kappa}\partial_t^3\tilde v \cdot {\bar{n}^{\kappa}}|_{L^2(0,T; H^1(\Gamma))} |\sqrt{\kappa}\partial_t^2 \tilde v_\kappa|_{L^2(0,T; H^2(\Gamma))} + {\mathcal R}_1\\ &\qquad \le C_\delta \left[\sup_{t\in[0,T]}|P(\tilde v,\partial\tilde \eta_\kappa)|_{L^\infty} \|\sqrt{\kappa}\tilde {v_\kappa}_{tt}\|_{L^2(0,T; H^{2.5}(\Omega))}\right]^2 + \delta |\sqrt{\kappa} \tilde v_{ttt} \cdot {\bar{n}^{\kappa}}|_{L^2(0,T; H^1(\Gamma))}^2 + {\mathcal R}_1\\ &\qquad \le M_0+ C\,T\,P(\sup_{t\in[0,T]} E_\kappa(t)) + \| \sqrt{\kappa} \tilde {v_\kappa}_{tt} \|^4_{L^2(0,T; H^{2.5}(\Omega))} + \delta \sup_{t\in[0,T]} E_\kappa(t) \,, \end{align*} where ${\mathcal R}_1$ also satisfies ${\mathcal R}_1 \le C\,T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \delta \sup_{t\in[0,T]} E_\kappa(t)$. The second term in (\ref{kextra}) has a highest-order contribution with the same type of integrand, and its analysis (and bound) is identical. The third and fourth terms in (\ref{kextra}) are effectively lower-order by one derivative with respect to the worst case analyzed above. Next, we estimate $\int_0^T\int_{\Gamma} \partial_t^3\{ \tilde H\tilde n\cdot {\bar{n}^{\kappa}}\, ({\bar{n}^{\kappa}} -\tilde n)\}\, \partial_t^3 \tilde v$. Since ${\bar{n}^{\kappa}}={Q}(\tilde\eta_\kappa)$ and since $|{\bar{n}^{\kappa}} -\tilde n| \le \sup_\kappa |\partial Q(\partial\eta_\kappa)| \cdot |\partial \eta_\kappa - \partial \eta|$, then our assumed bounds (\ref{deteta}) together with (\ref{Linf_est}) imply that \begin{align} |{\bar{n}^{\kappa}} - \tilde n|_{L^\infty} \le C\,\sqrt{\kappa} \, |P(\partial\tilde \eta, \partial^2 \tilde \eta )|_{L^\infty} | \tilde\eta |_{2.5} \le C\, \kappa\, P(E_\kappa(t))\,. \label{n0} \end{align} Similarly, \begin{align} |\partial{\bar{n}^{\kappa}} - \partial\tilde n|_{L^\infty} \le C\,\sqrt{\kappa} \, |P(\partial\tilde \eta, \partial^2 \tilde \eta )|_{L^\infty} |\tilde\eta |_{3.5} \,. \label{n} \end{align} Also by (\ref{Linf_est}), for $k=1,2,3$, \begin{align} |\partial_t^k{\bar{n}^{\kappa}} -\partial_t^k \tilde n|_{L^\infty} \le C\sqrt{\kappa} \, | P(\partial\tilde \eta, \partial^2\tilde\eta)|_{L^\infty}\, | \partial_t^{k-1}\tilde v|_{2.5} \,, \label{ntt} \end{align} and \begin{align} |\partial_t^2{v^\kappa} -\partial_t^2 \tilde v|_0 &\le C\, \sqrt{\kappa}\, |\tilde v_{tt}|_{1.5}\,. \label{vtt} \end{align} Taking three time-derivatives of formula (\ref{importantF}), we see that the highest order term in $\partial_t^3(\sqrt{\tilde g}\tilde H\tilde n)$ is $Q(\partial \tilde \eta) \partial^2 \tilde v_{tt}$. 
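To see this schematically (a sketch only, with constants and lower-order factors suppressed), recall from (\ref{importantF}) that $\sqrt{\tilde g}\,\tilde H\tilde n$ may be written, up to sign, as $\sqrt{\tilde g}\,\tilde g^{\mu\alpha}\,\Pi^i_j\,\tilde\eta^j,_{\mu\alpha}$; hence
$$
\partial_t^3\big(\sqrt{\tilde g}\,\tilde H\tilde n\big) \sim Q(\partial\tilde\eta)\,\partial^2\tilde v_{tt} \;+\; Q(\partial\tilde\eta)\,\partial\tilde v\,\partial^2\tilde v_t \;+\; \text{l.o.t.}\,,
$$
since the only way to place all three time derivatives on the highest-order factor $\tilde\eta^j,_{\mu\alpha}$ is $\partial_t^3\tilde\eta,_{\mu\alpha}=\tilde v_{tt},_{\mu\alpha}$, while each time derivative falling instead on the coefficient $\sqrt{\tilde g}\,\tilde g^{\mu\alpha}\,\Pi^i_j = Q(\partial\tilde\eta)$ produces an extra factor of $\partial\tilde v$ at the expense of one time derivative on the leading term.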
Thus, the highest-order term in the integral $$ \int_0^T\int_{\Gamma} \partial_t^3(\sqrt{\tilde g}\tilde H\tilde n)^s {\bar{n}^{\kappa}}^s \, \, ({\bar{n}^{\kappa}}^r -\tilde n^r) \partial_t^3\tilde v^r $$ is estimated using an integration by parts in space. The highest derivative count occurs when the tangential derivative is moved onto the $\tilde v_{ttt}$ term giving us \begin{align*} \int_0^T\int_{\Gamma} Q^s_i(\partial \tilde \eta) \partial \tilde v^i_{tt} {\bar{n}^{\kappa}}^s \, \, ({\bar{n}^{\kappa}}^r -\tilde n^r) \partial\tilde v^r_{ttt} &\le C\int_0^T |P(\partial \tilde \eta)|_{L^\infty}\, |\tilde v_{tt}|_1 \, |{\bar{n}^{\kappa}} -\tilde n |_{L^\infty}\, |\partial_t^3 \tilde v|_1 \\ & \le C\int_0^T |P(\partial \tilde \eta, \partial\tilde \eta)|_{L^\infty}\, |\tilde \eta|_{2.5}\, |\tilde v_{tt}|_1 \, |\sqrt{\kappa} \tilde v_{ttt}|_1 \\ & \le C_\delta\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \delta \int_0^T |\sqrt{\kappa} \tilde v_{ttt}|_1^2\,, \end{align*} where (\ref{n}) is used for the second inequality. If instead, integration by parts places the tangential derivative on ${\bar{n}^{\kappa}} -\tilde n$, then (\ref{n}) provides the same estimate for this term. The other terms are clearly lower-order. Thanks to (\ref{ntt}), \begin{align*} \int_0^T\int_{\Gamma} \partial_t(\sqrt{\tilde g}\tilde H\tilde n^s ) \partial_t^2\{{\bar{n}^{\kappa}}^s\, ({\bar{n}^{\kappa}}^r -\tilde n^r)\} \, \partial_t^3\tilde v^r &\le C\int_0^T |P(\partial \tilde \eta, \partial^2 \tilde \eta)|_{L^\infty}\, |\partial^2 \tilde v|_0\, |\tilde v_t|_{2.5} \, |\sqrt{\kappa} \tilde v_{ttt}|_0 \\ & \le C_\delta\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \delta \sup_{t\in[0,T]} E_\kappa(t)\,. \end{align*} We next consider the integral \begin{align} &\int_0^T\int_{\Gamma} \sqrt{\tilde g}\tilde H\tilde n^i \partial_t^3\{ {\bar{n}^{\kappa}}^i\, ({\bar{n}^{\kappa}}^r -\tilde n^r)\}\, \partial_t^3 \tilde v^r \nonumber \\ &\qquad = \int_0^T\int_{\Gamma} \sqrt{\tilde g}\tilde H\tilde n \cdot \partial_t^3{\bar{n}^{\kappa}}\, \, ({\bar{n}^{\kappa}} -\tilde n)\cdot \tilde v_{ttt} + \int_0^T\int_{\Gamma} \sqrt{\tilde g}\tilde H\tilde n \cdot {\bar{n}^{\kappa}}\, \, \partial_t^3({\bar{n}^{\kappa}} -\tilde n)\cdot \tilde v_{ttt} + {\mathcal R}_2 \nonumber \\ &\qquad =: I + II + {\mathcal R}_2 \,, \label{improve} \end{align} where ${\mathcal R}_2$ is a lower-order term. For term $I$, we use the estimate $| {\bar{n}^{\kappa}} -\tilde n|_{L^\infty} \le C\, \kappa |\tilde \eta|_{3.5}$ One $\sqrt{\kappa}$ goes with $\partial_t^3{\bar{n}^{\kappa}}$ and the other $\sqrt{\kappa}$ goes with $\tilde v_{ttt}$. Thus, $|I| \le C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \delta \sup_{t\in[0,T]} E_\kappa(t)$. To study $II$, we set $f=\sqrt{\tilde g} \tilde H \tilde n \cdot {\bar{n}^{\kappa}}$, and consider the term $\int_0^T\int_\Gamma f\, \tilde n_{ttt}\cdot \tilde v_{ttt}$. We expand $v$ into its normal and tangential components: set $\tau_\alpha = \tilde \eta,_\alpha$, so that $$ \tilde v = v^\tau \,\tau + v^n\, \tilde n, \text{ where } v^\tau \, \tau = (\tilde v\cdot \tau_\alpha) \, \tau_\alpha\, \text{ and } v^n =\tilde v \cdot \tilde n \,. $$ Then \begin{align*} \tilde v_{ttt} &= v^\tau_{ttt} \tau + 3 v^\tau_{tt} \tau_{t} + 3v^\tau_{t} \tau_{tt}+ v^\tau \tau_{ttt} + v^n_{ttt} \tilde n + 3 v^n_{tt} \tilde n_{t} + 3v^n_{t} \tilde n_{tt}+ v^n \tilde n_{ttt}\,. \end{align*} The most difficult term to estimate comes from the term $v^\tau_{ttt} \tau$, which gives the integral $\int_0^T\int_\Gamma f\, \tilde n_{ttt}\cdot \tau \, v^\tau_{ttt}$. 
First, notice that $\tilde n_{ttt} \cdot \tau$ is equal to $-\tilde n \cdot \tau_{ttt}$, plus lower order terms that have at most two time derivatives on either $\tilde n$ or $\tau$ (this follows from differentiating the identity $\tilde n\cdot\tau=0$ three times in time), and $\tilde n \cdot \tau_{ttt} = \tilde n \cdot \partial_\beta \tilde v_{tt}$ for $\beta=1$ or $2$. Next, the $\kappa$-problem states that $\tilde v^i_{ttt} = (A^k_i \tilde q,_k)_{tt}$, where recall that $$ A = \tilde a_\kappa\,. $$ We have the formula $$ \tilde \eta^i,_\beta A^k_i \partial_k \tilde q_{tt} = \mathcal{J}^\beta \partial_\beta \tilde q_{tt} \ (\text{no sum on } \beta) \ \ \ \text{ for } \ \beta=1,2\,, $$ where $$ \mathcal{J}^1= \tilde \eta^i,_1 \, A^1_i\,, \ \ \ \ \mathcal{J}^2= \tilde \eta^i,_2 \, A^2_i\,. $$ (In the case that $\kappa=0$, $\mathcal {J}^\beta = J =1$.) Using this, we see that the highest-order term in our integral is given by \begin{equation}\label{S0} \int_0^T\int_\Gamma \mathcal{J} f\, (\tilde n \cdot \partial_\beta \tilde v_{tt}) \partial_\beta \tilde q_{tt} \,. \end{equation} Second, write $\tilde q_{tt}$ as \begin{equation}\label{S1} \tilde q_{tt} = -\left[ \frac{\sqrt{\tilde g}}{\sqrt{{\tilde g_\kappa}}} \left[ \Delta_{\tilde g}(\tilde \eta) \cdot \tilde n + \Delta_{\tilde g}(\tilde \eta) \cdot ({\bar{n}^{\kappa}} -\tilde n) \right] +\kappa \Delta_0 (\tilde v \cdot {\bar{n}^{\kappa}}) \right]_{tt} \,. \end{equation} We begin by substituting the first term on the right-hand side of (\ref{S1}) into (\ref{S0}); the highest-order contribution comes from $\partial_\beta\partial_t^2 \Delta_{\tilde g}(\tilde \eta) = Q(\partial \tilde \eta, \partial \tilde \eta_\kappa) \tilde g^{\mu \nu} \tilde n \cdot \partial_t \tilde v,_ {\mu\nu \beta}$. Integrating by parts with respect to $\partial_\nu$, the highest-order term in our integral is given by $$ \int_0^T \int_\Gamma Q(\partial \tilde \eta, \partial \tilde \eta_\kappa)\, f\, (\tilde n \cdot \tilde v_{t},_ {\mu \beta}) \, \tilde g^{\mu \nu} \, (\tilde n \cdot \tilde v_{tt},_ {\nu \beta}) \,. $$ Letting $G_{ij}^{\mu\nu}:= Q(\partial \tilde \eta, \partial \tilde \eta_\kappa)\, f\, \tilde n_i \, \tilde n_j \sim Q(\partial \tilde \eta, \partial \tilde \eta_\kappa) \partial^2\tilde\eta$, integration by parts in time yields \begin{align} &-\int_0^T \int_\Gamma \partial_t G_{ij}^{\mu\nu}\, \,\, \tilde v^j_{t},_ {\nu \beta}\, \tilde v^i_{t},_ {\mu \beta} +\left. \int_\Gamma G_{ij}^{\mu\nu}\, \,\, \tilde v^j_{t},_ {\nu \beta}\, \tilde v^i_{t},_ {\mu \beta}\right]_0^T \nonumber \\ & \qquad\qquad \le C\, TP(\sup_{t\in[0,T]} E_\kappa(t)) + M_0 + C \sup_{t\in [0,T]} |G_t|_{L^\infty} \,\| \tilde v_t\|_{2.5}^2 \nonumber \\ & \qquad\qquad \le M_0 + C\, TP(\sup_{t\in[0,T]} E_\kappa(t)) + C \sup_{t\in [0,T]} [ P(\|\tilde v_t\|_{2.5}^2) + P(\|\tilde v\|_{3.5}^2)+ P(\|\tilde\eta\|_{4.5}^2)] \,.\label{G2D} \end{align} For the second term on the RHS of (\ref{S1}), the highest-order term gives the integral \begin{align*} &\int_0^T \int_\Gamma f\,Q(\partial \tilde \eta, \partial \tilde \eta_\kappa) \, (\tilde n \cdot \tilde v_{tt},_\beta) \, \tilde g^{\mu \nu} \Pi \tilde v_t,_ {\mu\nu \beta} \cdot ({\bar{n}^{\kappa}} -\tilde n) \le C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \delta \int_0^T \|\sqrt{\kappa} \tilde v_t\|_{3.5}^2, \end{align*} where we used $|{\bar{n}^{\kappa}} -\tilde n|_{L^\infty} \le C\, \kappa |\tilde \eta|_{3.5}$ again.
For the third term on the RHS of (\ref{S1}), the highest-order term gives the integral \begin{align*} &\kappa\int_0^T \int_\Gamma f\,Q(\partial \tilde \eta, \partial \tilde \eta_\kappa) \, (\tilde n \cdot \tilde \partial^2 v_{tt}) \, ({\bar{n}^{\kappa}} \cdot \partial^2 \tilde v_{tt}) \\ &\qquad \qquad \le M_0 + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \|\sqrt{\kappa} \tilde v_{tt}\|_{L^2(0,T;H^{2.5}(\Omega)}^4 \,. \end{align*} We have thus estimated the integral $\int_0^T\int_{\Gamma} \partial_t^3\{ \tilde H\tilde n\cdot {\bar{n}^{\kappa}} \, ({\bar{n}^{\kappa}}- \tilde n) \}\, \partial_t^3 \tilde v$. The remaining integral $\int_0^T\int_{\Gamma} \partial_t^3\{ \tilde H\tilde n\cdot ({\bar{n}^{\kappa}} -\tilde n)\, {\bar{n}^{\kappa}} \}\, \partial_t^3 \tilde v$ has the same bound. \noindent {\bf Step 2. The pressure term.} We next consider the pressure term in (\ref{eulerttt}): \begin{align} -\int_0^T\int_{\Omega} \partial_t^3(A^k_i \tilde q) \partial_t^3 \tilde v^i,_k & = -\int_0^T\int_{\Omega} \partial_t^3 \tilde v^i,_k \left[ \partial_t^3 A^k_i \tilde q + 3 \partial_t^2{A^k_i}\, \tilde q_t + 3 \partial_t{A^k_i} \, \tilde q_{tt} + A^k_i \partial_t^3 \tilde q \right] \nonumber \\ &=: I + II + III + IV \,. \label{qttt_term} \end{align} We record the following identities: \begin{subequations} \label{a123t} \begin{align} \partial_t A^k_i & = \tilde J_\kappa^{-1}(A^s_r A^k_i - A^k_r A^s_i) {\tilde v_\kappa}^r,_s \,, \label{a123t.a} \\ \partial^2_t A^k_i & = \tilde J_\kappa^{-1}(A^s_r A^k_i - A^k_r A^s_i) \partial_t{\tilde v_\kappa}^r,_s +P^k_i(\tilde J_\kappa^{-1},A,\nabla \tilde v_\kappa) \,, \label{a123t.b} \\ \partial^3_t A^k_i & = \tilde J_\kappa^{-1}(A^s_r A^k_i - A^k_r A^s_i) \partial^2_t{\tilde v_\kappa}^r,_s + P^k_j (\tilde J_\kappa^{-1},A,\nabla {\tilde v_\kappa}) \partial_t{\tilde v_\kappa}^j,_i \,. \label{a123t.c} \end{align} \end{subequations} With (\ref{a123t.c}), and $f^{sk}_{ri}:=\tilde J_\kappa^{-1}(A^s_r A^k_i - A^k_r A^s_i)\tilde q$ term $I$ is written as \begin{align*} I &= \int_0^T \int_\Omega [ f^{sk}_{ri} \partial_t^3 \tilde v^i,_k \partial_t^2 {v^\kappa}^r,_s + \partial_t^3 \tilde v^i,_k \partial_t {v^\kappa}^j,_i P^k_j(\tilde J_\kappa^{-1},A,\nabla {v^\kappa})] \\ &=: I_a + I_b \,. \end{align*} We fix a chart $\theta_l$ in a neighborhood of the boundary $\Gamma$ and let $\xi= \sqrt{\alpha_l}$, where once again, we remind the reader that $\{\alpha_i\}_{i=1}^L$ denotes the partition of unity associated to the charts $\{\theta\}_{l=1}^L$. With ${\mathcal I}_a$ denoting the restriction $I_a|_{U_l}$, where $U_l\cap \Omega = \theta_l ((0,1)^3)$, and letting $\rho:= \rho_{\frac{1}{\kappa}}$ and $\theta:=\theta_l$, we have that \begin{align*} {\mathcal I}_a & = \int_0^T \int_{(0,1)^3} f^{sk}_{ri}(\theta) \partial_t^3 \tilde v^i,_k (\theta)\, \xi\, \rho\star_h \rho\star_h \xi \partial_t^2 \tilde v^r,_s(\theta) + f(\theta) \nabla\tilde v_{ttt} G(\xi, \nabla \xi) \tilde v_{tt} \\ &=: {{\mathcal I}_a}_1+ {{\mathcal I}_a}_2 \,, \end{align*} where $G(\xi,\nabla \xi)$ is a bilinear function which arises when the gradient acts on $\xi$ rather than $\tilde v_{tt}$. The term ${{\mathcal I}_a}_1$ is the difficult term which requires forming an exact derivative, and this, in turn, requires commuting the convolution with $f^{sk}_{ri}$. 
We let $$ V(\theta) = \rho \star_h \xi \tilde v(\theta) $$ so that using the symmetry property (\ref{selfadjoint}), we see that \begin{align*} {{\mathcal I}_a}_1 &= \int_0^T\int_{(0,1)^3} \rho\star_h [f^{sk}_{ri} \xi \partial_t^3 \tilde v^i,_k(\theta)] \partial_t^2 V^r,_s(\theta) \\ & = \int_0^T\int_{(0,1)^3}\left[ f^{sk}_{ri} \partial_t^3 V^i,_k(\theta)] \partial_t^2 V^r,_s(\theta) + (\rho \star_h [f^{sk}_{ri} \xi\partial_t^3\tilde v^i,_k] - f^{sk}_{ri} \rho\star_h [\xi\partial_t^3\tilde v^i,_k])\, V_{tt}^r,_s \right]\\ &=: {{{\mathcal I}_a}_1}_i+{{{\mathcal I}_a}_1}_{ii}\,. \end{align*} Since $f^{sk}_{ri}$ is symmetric with respect to $i$ and $r$ and $k$ and $s$, we see that \begin{align} {{{\mathcal I}_a}_1}_i &= \frac{1}{2}\int_0^T\int_{(0,1)^3} f^{sk}_{ri}(\theta) \partial_t\left[ \partial_t^2 V^i,_k(\theta)\partial_t^2 V^r,_s(\theta) \right] \nonumber \\ & = -\frac{1}{2}\int_0^T\int_{(0,1)^3} \partial_t f^{sk}_{ri}(\theta) \partial_t^2 V^i,_k(\theta)\partial_t^2 V^r,_s(\theta) + \frac{1}{2}\left.\int_{(0,1)^3} f^{sk}_{ri}(\theta) \partial_t^2 V^i,_k(\theta)\partial_t^2 V^r,_s(\theta)\right]^T_0 \,. \nonumber \end{align} We sum over $l=1,...,L$. The spacetime integral is bounded by $C T P(\sup_{t\in[0,T]} E_\kappa(t))$. For the the space integral at time $t=T$, we employ the fundamental theorem of calculus: \begin{align*} \int_\Omega [f^{sk}_{ri}V_{tt}^i,_k V_{tt}^r,_s ] (T) & = \int_\Omega V_{tt}^i,_k(T) V_{tt}^r,_s(T) f^{sk}_{ri}(0) + \int_\Omega V_{tt}^i,_k(T) V_{tt}^r,_s(T) \int_0^T \partial_tf^{sk}_{ri} \\ & \le \|V_{tt}(T)\|_1^2 \, \|\tilde q_0\|_2 + \|V_{tt}(T)\|_1^2 \,\, \|\int_0^T |f_t\| _{L^\infty} \\ & \le \|v_{tt}(T)\|_{1.5}^{2/3} \, \|v_{tt}(T)\|_{0}^{1/3} \, \|\tilde q_0\|_2 + C T P(\sup_{t\in[0,T]} E_\kappa(t)) \\ & \le {\frac{\delta}{2}} \|v_{tt}(T)\|_{1.5}^{2} + C(\delta) \|v_{tt}(T)\|_{0}^{1/2} \, \|\tilde q_0\|_2^{3/2} + C T P(\sup_{t\in[0,T]} E_\kappa(t)) \\ & \le \delta \|v_{tt}(T)\|_{1.5}^{2} + C(\delta) \, \|\tilde q_0\|_2^{2} + C T P(\sup_{t\in[0,T]} E_\kappa(t)) \\ & \le \delta \sup_{t\in[0,T]} E_\kappa(t) + M_0(\delta) + C TP(\sup_{t\in[0,T]} E_\kappa(t)) \,, \end{align*} where Young's inequality has been used. For ${{{\mathcal I}_a}_1}_{ii}$, the commutation lemma \ref{commutator} shows that \begin{align*} {{{\mathcal I}_a}_1}_{ii} &\le C \kappa^{\frac{3}{2}}\int_0^T \|f\|_3\, \|\sqrt{\kappa} \tilde v_{ttt}\|_1 \, \| \tilde v_{tt}\|_1 \\ &\le \delta \int_0^T \|\sqrt{\kappa} \tilde v_{ttt}\|_1^2 + C_\delta \int_0^T \|f\|_3^2\, \|\tilde v_{tt}\|_1^2 \,. \end{align*} Summing over $l=1,...,L$, we integrate-by-parts in time and write the term ${{{\mathcal I}_a}_2}$ as \begin{align*} {{{\mathcal I}_a}_2} & = -\int_0^T\int_\Omega f\nabla \tilde v_{tt} G(\xi, \nabla \xi) (f \tilde v_{tt})_t + \left.\int_\Omega f\nabla \tilde v_{tt} G(\xi, \nabla \xi) f \tilde v_{tt}\right]^T_0\,. \end{align*} This is estimated in the same way as term ${{{\mathcal I}_a}_1}_{i}$. The term $I_b$ is handled in the identical fashion with the same bound. Thus, we have shown that $$ I \le \delta \sup_{t\in[0,T]} E_\kappa(t) + M_0(\delta) + C TP(\sup_{t\in[0,T]} E_\kappa(t)) \,. $$ Using (\ref{a123t.b}) for term $II$, integration by parts in time gives the identical bound as for term $I$. 
For term $III$, a different approach is employed; we use (\ref{a123t.a}) and integration by parts in space rather than time, and let $F^{sk}_{ri}:=3\tilde J_\kappa^{-1}(A^s_r A^k_i - A^k_r A^s_i)$ to find that \begin{align*} III = - \int_0^T \int_\Omega \tilde v_{ttt}^i[ {v^\kappa}^r,_s F^{sk}_{ri} \tilde q_{tt}],_k + \int_0^T \int_\Gamma \tilde v_{ttt}^i F^{sk}_{ri} N_k \tilde q_{tt} {v^\kappa}^r,_s \,. \end{align*} The Cauchy-Schwarz inequality together with the pressure estimate (\ref{qestimate}) give the bound $CT P(\sup_{t\in[0,T]} E_\kappa(t))$ for the first term on the right-hand side. The boundary integral term requires integration by parts in time: \begin{align*} \int_0^T \int_\Gamma \tilde v_{ttt}^i F^{sk}_{ri} N_k \tilde q_{tt} {v^\kappa}^r,_s &= \left. \int_\Gamma \tilde v_{tt}^i F^{sk}_{ri} N_k \tilde q_{tt} {v^\kappa}^r,_s \right]^T_0 +\int_0^T \int_\Gamma v_{tt}^i [ F^{sk}_{ri} N_k \tilde q_{tt} {v^\kappa}^r,_s ]_t\\ &=: III_a + III_b\,. \end{align*} First, note that \begin{align} F^{sk}_{ri} N_k \tilde q_{tt} &= 3\tilde J_\kappa^{-1} A^s_r \left[ (\tilde q A^k_i N_k)_{tt} - 2 \tilde q_t \partial_t A^k_i N_k -\tilde q \partial_t^2 A^k_i N_k \right] \nonumber\\ & \qquad - 3\tilde J_\kappa^{-1} A^s_i \left[ (\tilde q A^k_r N_k)_{tt} - 2 \tilde q_t \partial_t A^k_r N_k -\tilde q \partial_t^2 A^k_r N_k \right] \label{F} \,. \end{align} Next, substitute the boundary condition (\ref{kbca}), written as \begin{equation} \label{kbca2} -\tilde q A^k_i N_k = \sqrt{\tilde g}\Delta_{\tilde g}(\tilde \eta^j)\left[ \delta_{ji} + (({\bar{n}^{\kappa}})_j - \tilde n_j) \tilde n_i + \tilde n_j(({\bar{n}^{\kappa}})_i - \tilde n_i)\right] + \kappa (\sqrt{g_0} g_0^{\alpha\beta} [\tilde v\cdot \tilde n],_\beta),_\alpha \tilde n_i \,, \end{equation} into (\ref{F}). The two bracketed terms in (\ref{F}) are essentially the same, so it suffices to analyze just the first term. We begin by considering the term $\sqrt{\tilde g}\Delta_{\tilde g}(\tilde \eta^i)$ in (\ref{kbca2}). Then $III_a$ can be written as \begin{align} III_a &= \left. 3 \int_\Gamma \tilde J_\kappa^{-1}\left\{ \partial_t^2(\sqrt{\tilde g} \tilde g^{\alpha\beta} \tilde \eta^i,_\beta) ,_\alpha {v^\kappa}^r,_s A^s_r -2 \tilde q_t \partial_t A^k_i N_k {v^\kappa}^r,_s - \tilde q \partial_t^2 A^k_i N_k {v^\kappa}^r,_s \right\} \tilde v_{tt}^i \right]^T_0 \label{IIIa} \\ &= \left. 3 \int_\Gamma (\sqrt{\tilde g} \tilde g^{\alpha\beta} \tilde \eta^i,_\beta) _{tt} ( \tilde J_\kappa^{-1}{v^\kappa}^r,_s A^s_r \tilde v^i_{tt}),_\alpha -2 \tilde J_\kappa^{-1}\tilde q_t \partial_t A^k_i N_k {v^\kappa}^r,_s \tilde v^i_{tt} - \tilde J_\kappa^{-1}\tilde q \partial_t^2 A^k_i N_k {v^\kappa}^r,_s \tilde v_{tt}^i \right]^T_0 \nonumber \\ &\le \delta \sup_{t\in[0,T]} E_\kappa(t) + M_0(\delta) + CT P(\sup_{t\in[0,T]} E_\kappa(t)) \,, \nonumber \end{align} the last inequality following from the fundamental theorem of calculus and the same argument we have used above. In order to estimate $III_b$, because we do not have a trace estimate for $\partial_t^3 A$, we let $Q_r(\partial {{{\bar{\eta}^{\kappa}}}}):= \sqrt{{\tilde g_\kappa}} ({\bar{n}^{\kappa}})_r$ and compute \begin{align} \partial_t Q_r &= Q^\alpha_{ri} {v^\kappa}^i,_\alpha \,, \ \ \ \partial_t^2 Q_r = Q^{\alpha\beta}_{rij} {v^\kappa}^i,_\alpha {v^\kappa}^j,_\beta + Q^\alpha_{ri} \partial_t{v^\kappa}^i,_\alpha \,, \nonumber\\ \partial_t^3 Q_r &= Q^{\alpha\beta\gamma}_{rijk} {v^\kappa}^i,_\alpha {v^\kappa}^j,_\beta {v^\kappa}^k,_\gamma + 3 Q^{\alpha\beta}_{rij} {v^\kappa}^i,_\alpha \partial_t {v^\kappa}^j,_\beta + Q^\alpha_{ri} \partial_t^2 {v^\kappa}^i,_\alpha \,.
\label{Qttt} \end{align} Since $\sqrt{{\tilde g_\kappa}} ({\bar{n}^{\kappa}})_r \tilde q_{tt} = (\sqrt{{\tilde g_\kappa}} ({\bar{n}^{\kappa}})_r \tilde q)_{tt} - 2 \partial_t Q_r\, \tilde q_t - \partial_t^2 Q_r \, \tilde q$, it follows that \begin{align} III_b &= -3 \int_0^T \int_\Gamma \tilde J_\kappa^{-1}\left[ \tilde v_{tt}^r (\sqrt{{\tilde g_\kappa}} ({\bar{n}^{\kappa}})_r \tilde q)_{ttt} {v^\kappa}^i,_s A^s_i + \tilde v_{tt}^r (Q_r \tilde q)_{tt} ({v^\kappa}^i,_s A^s_i)_t \right. \nonumber \\ &\qquad\left. - \tilde v_{tt}^r (2 Q^\alpha_{rj}{v^\kappa}^j,_\alpha {v^\kappa}^i,_s A^s_i \tilde q_t + Q^{\alpha \beta}_{rlj} {v^\kappa}^l,_\alpha {v^\kappa}^j,_\beta {v^\kappa}^i,_s A^s_i \tilde q + Q^\alpha_{rl}\partial_t{v^\kappa}^l,_\alpha {v^\kappa}^i,_s A^s_i \tilde q)_t \right] \,. \label{IIIb} \end{align} Using the pressure estimates and by definition of our energy function, for $t \in (0,T)$, \begin{gather} \tilde q_{tt}(t) \in H^{0.5}(\Gamma), \ \ \ Q_r(t), \partial_t Q_r(t) \in L^\infty(\Gamma), \ \ \ \partial_t^3 Q_r(t) \in L^2(\Gamma), \nonumber \\ \partial \tilde v_t(t) \in H^{1.5}(\Gamma), \ \ \ \partial \tilde v_{tt}(t) \in L^2(\Gamma) \,; \nonumber \end{gather} thus, all of the terms, except the first, on right-hand side of (\ref{IIIb}) can be easily bounded by $CT P(\sup_{t\in[0,T]} E_\kappa(t))$. Integrating by parts in space, the first term in (\ref{IIIb}) has the following estimate: \begin{align} & 3 \int_0^T \int_\Gamma \sqrt{{\tilde g_\kappa}} {\tilde g_\kappa}^{\alpha\beta} \Pi^r_k \tilde v_{tt}^k,_\beta (\tilde v_{tt}^r {v^\kappa}^i,_s J_\kappa^{-1} A^s_i),_\alpha + \sqrt{{\tilde g_\kappa}} ({\tilde g_\kappa}^{\mu\nu} g^{\alpha\beta} - g^{\alpha\nu} g^{\mu\beta}) \tilde\eta^j,_\nu \tilde v_{tt}^j,_\mu \tilde\eta^r,_\beta (\tilde v_{tt}^r {v^\kappa}^i,_s \tilde J_\kappa^{-1} A^s_i),_\alpha \nonumber\\ & \qquad\qquad + [P^{\alpha\beta}_{ij}(\partial\tilde \eta,\partial \tilde v) \tilde v_{tt}^j,_\beta + P^\alpha_i(\partial \tilde \eta, \partial \tilde v)] (\tilde v_{tt}^r {v^\kappa}^i,_s \tilde J_\kappa^{-1} A^s_i),_\alpha \le C T P(\sup_{t\in[0,T]} E_\kappa(t)) \,. \label{IIIb_estimate} \end{align} The remaining three terms in the boundary condition (\ref{kbca2}) are now considered. The additional integrals which arise in the (\ref{IIIb}) are given by \begin{align*} &-3\int_0^T\int_{\Gamma} \partial_t^3 \{ \sqrt{\tilde g}\tilde H\tilde n\cdot({\bar{n}^{\kappa}} -\tilde n)\, \tilde n_r + \sqrt{\tilde g} \tilde H\tilde n\cdot {\bar{n}^{\kappa}}\, (({\bar{n}^{\kappa}})_r -\tilde n_r) + \kappa\sqrt{\tilde g_\kappa} \Delta_{\bar 0}(\tilde v\cdot {\bar{n}^{\kappa}}){\bar{n}^{\kappa}} \}\, \tilde v^r,_s \tilde a^s_i \tilde v^i_{tt} \\ &\qquad =: J_1+J_2+J_3 \,. \end{align*} Term $J_3$ with the artificial viscosity provides the integral $\kappa\int_0^T [\partial_t^3(\tilde v\cdot {\bar{n}^{\kappa}}), P(\nabla \tilde \eta,\nabla \tilde v) \cdot \partial_t^2\tilde v]_1$. 
The highest-order terms in this integral are estimated as \begin{align*} &\kappa\int_0^T \left\{[ P^{\alpha\beta}_{ij}(\partial {\bar{n}^{\kappa}}, \nabla \tilde \eta, \nabla \tilde v) \, \partial_t^3 \tilde v^i_{,\beta}\,, \partial_t^2\tilde v^j_{,\alpha}]_0 + [ P^{\alpha\beta\gamma}_{ij}(\partial {\bar{n}^{\kappa}}, \nabla \tilde \eta,\tilde v,\nabla \tilde v) \, \partial_t^2 \tilde v_\kappa^i,_{\beta\gamma}\,, \partial_t^2\tilde v^j_{,\alpha}]_0 \right\} \\ & \qquad \le C\, \sqrt{\kappa} \int_0^T \|P(\nabla\tilde \eta, \nabla \tilde v)\|_{L^\infty}\, | \partial_t^2 \tilde v|_1 \, \{ |\sqrt{\kappa}\partial_t^3 \tilde v|_1 + |\sqrt{\kappa}\partial_t^2 \tilde v_\kappa|_2 \}\, \\ & \qquad \le C(\delta)\, \int_0^T \|P(\nabla \tilde \eta,\nabla \tilde v) \|_{L^\infty}\, | \partial_t^2 \tilde v|^2_1 + \kappa\delta\int_0^T \left(|\sqrt{\kappa}\partial_t^3 \tilde v|^2_1+|\sqrt{\kappa}\partial_t^2 \tilde v_\kappa|^2_2\right)\\ &\qquad\le C(\delta)\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \delta \sup_{t\in[0,T]} E_\kappa(t) \,. \end{align*} The lower-order terms in $J_3$ also have the same bound. As to terms $J_1$ and $J_2$, the estimates for the terms with $(\sqrt{\tilde g}\tilde H\tilde n)_{ttt}$ are obtained exactly as in (\ref{IIIb_estimate}). For the terms that contain $\tilde n_{ttt}$, we use the formula (\ref{Qttt}) for the third time derivative of the unit normal; it immediately follows that terms $J_1$ and $J_2$ are also bounded by $\delta \sup_{t\in[0,T]} E_\kappa(t) + CT P(\sup_{t\in[0,T]} E_\kappa(t))$. We need only consider now the additional terms in (\ref{IIIa}) from the remaining three terms in the boundary condition (\ref{kbca2}). The only novelty is in the highest-order integral coming from integration by parts in space in the $\kappa$ term: $$\kappa\int_\gamma P^{\alpha\beta}_{ij}(T) v^i_{tt},_\alpha(T) v^j_{tt},_\beta(T)\,$$ where $P^{\alpha\beta}_{ij}(t)$ and $\partial_t P^{\alpha\beta}_{ij}(t)$ are both in $L^\infty(\Gamma)$ for each $t\in[0,T]$. Using the fundamental theorem of calculus and the fact that $\sqrt\kappa v_{ttt} \in L^2(0,T; H^{1.5}(\Omega))$ together with Jensen's inequality shows that this term is bounded by $M_0(\delta) + \delta \sup_{t\in[0,T]} E_\kappa(t) + CTP(\sup_{t\in[0,T]} E_\kappa(t))$. To study term $IV$, we use (\ref{a123t}) together with the incompressibility condition $(\tilde v^i,_k A^k_i)_{ttt}=0$ to find that \begin{align*} IV= -\int_0^T \int_\Omega [(3 \tilde v_{tt}^i,_k \partial_t A^k_i + \tilde v^i,_k \partial^3_t A^k_i) \tilde q_{ttt} + 3 \tilde v_t^i,_k \partial_t^2 A^k_i \tilde q_{ttt}] =: IV_a + IV_b \,. \end{align*} For $IV_b$, we integrate by parts in time: \begin{align*} IV_b= \int_0^T \int_\Omega 3 (\tilde v_t^i,_k \partial_t^2 A^k_i)_t \tilde q_{tt} - \left. \int_\Omega 3 \tilde v_t^i,_k \partial_t^2 A^k_i \tilde q_{tt} \right]^T_0 \,. \end{align*} Since $\partial_t^3 A$ is bounded in $H^{0.5}(\Omega)$, the spacetime integral is easily bounded by $CTP(\sup_{t\in[0,T]} E_\kappa(t))$; meanwhile, the remaining space integral satisfies \begin{align*} {IV_b}_2 & \le 3\int_\Omega [\tilde v_t^i,_k(0)\partial_t^2A^k_i(0)] q_{tt}(T) +\int_\Omega \int_0^T [ \tilde v_t^i,_k\partial_t^2A^k_i]_t \, dt \ q_{tt}(T) + M_0 \\ & \le \delta \|q_{tt}(T)\|_0^2 + M_0(\delta) + T\sup_{t \in [0,T]} \| (\tilde v_t^i,_k \partial_t^2A^k_i)_t\|_0 \, \|q_{tt}(T)\|_0 \\ &\le \delta \sup_{t\in[0,T]} E_\kappa(t) + M_0(\delta) + CT P(\sup_{t\in[0,T]} E_\kappa(t)) \,. 
\end{align*} With $F^{sk}_{ri}:= \tilde J_\kappa^{-1} (A^s_rA^k_i - A^k_rA^s_i)$, $IV_a$ is written as \begin{align} IV_a &= -\int_0^T \int_\Omega (3 \partial_t^2 \tilde v^r,_s F^{sk}_{ri} {v^\kappa} ^i,_k +\partial_t^2 {v^\kappa}^r,_s F^{sk}_{ri} \tilde v ^i,_k) \tilde q_{ttt} + \tilde v^i,_k P^k_j(J_\kappa^{-1}, A, \nabla v_\kappa) \partial_t {v^\kappa}^j,_i \, \tilde q_{ttt} \nonumber \\ &=:{IV_a}_1+{IV_a}_2+{IV_a}_3 \,. \label{IVa} \end{align} Term ${IV_a}_3$ is estimated in the same way as term $IV_b$. For term ${IV_a}_2$, we integrate by parts in space to find that \begin{align*} {IV_a}_2 &= -\int_0^T \int_\Gamma \partial_t^2 {v^\kappa}^r F^{sk}_{ri} \tilde v ^i,_k \tilde q_{ttt} N_s + \int_0^T \int_\Omega \partial_t^2 {v^\kappa}^r (F^{sk}_{ri} \tilde v ^i,_k \tilde q_{ttt}),_s =: {{IV_a}_2}_i + {{IV_a}_2}_{ii} \,. \end{align*} The first integral ${{IV_a}_2}_i$ is handled identically to term $III_b$ to give the bound $CT P(\sup_{t\in[0,T]} E_\kappa(t))$. We write the second integral as \begin{align*} {{IV_a}_2 }_{ii} = \int_0^T \int_\Omega [\partial_t^2 {v^\kappa}^r F^{sk}_{ri} \tilde v ^i,_k \tilde q_{ttt},_s +\partial_t^2 {v^\kappa}^r (F^{sk}_{ri} \tilde v ^i,_k),_s \tilde q_{ttt}], \end{align*} integrate by parts in time, and obtain \begin{align*} {{IV_a}_2 }_{ii} & = -\int_0^T \int_\Omega [(\partial_t^2 {v^\kappa}^r F^{sk}_{ri} \tilde v ^i,_k)_t \tilde q_{tt},_s +(\partial_t^2 {v^\kappa}^r (F^{sk}_{ri} \tilde v ^i,_k),_s)_t \tilde q_{tt}] \\ & \qquad\qquad\qquad+ \left. \int_\Omega [(\partial_t^2 {v^\kappa}^r F^{sk}_{ri} \tilde v ^i,_k) \tilde q_{tt},_s +\partial_t^2 {v^\kappa}^r (F^{sk}_{ri} \tilde v ^i,_k),_s \tilde q_{tt}]\right]_0^T \,. \end{align*} Since $\partial_t^3{v^\kappa}$ is bounded in $L^2(\Omega)$, and $(F\, \nabla v)_t$ is bounded in $L^\infty(\Omega)$, the spacetime integral is bounded by $CTP(\sup_{t\in[0,T]} E_\kappa(t))$. Next, we analyze the highest-order term in the remaining temporal boundary integral in ${IV_a}_2$: \begin{align*} \int_\Omega &[(\partial_t^2 {v^\kappa}^r F^{sk}_{ri} \tilde v ^i,_k) \tilde q_{tt},_s ](T) \\ &\qquad\qquad\qquad = \int_\Omega (\partial_t^2 {v^\kappa}^r(0) F^{sk}_{ri}(0) \tilde v ^i,_k(0)) \tilde q_{tt},_s (T) + \int_\Omega \int_0^T (\partial_t^2 {v^\kappa}^r F^{sk}_{ri} \tilde v ^i,_k)_t \tilde q_{tt},_s (T)\\ &\qquad\qquad\qquad \le M_0(\delta) + \delta \|q_{tt}(T)\|_1^2 + T \sup_{t\in [0,T] } \| (\partial_t^2 {v^\kappa}^r F^{sk}_{ri} \tilde v ^i,_k )_t \|_0 \, \|q_{tt}(T)\|_1 \\ &\qquad\qquad\qquad \le M_0(\delta)+ \delta \sup_{t\in[0,T]} E_\kappa(t) + CT P(\sup_{t\in[0,T]} E_\kappa(t)) \,. \end{align*} The remaining term is analyzed in the same fashion. \noindent {\bf Step 3. The inertia term.} Finally, the inertia term in (\ref{eulerttt}) satisfies \begin{align*} &\int_0^T \int_\Omega \partial_t^3 (\tilde J_\kappa \tilde v_t^i) \tilde v_{ttt}^i = \frac{1}{2}\|\tilde J_\kappa^{\frac{1}{2}} \tilde v_{ttt}(T)\|_0^2 - \frac{1}{2}\|\tilde v_{ttt}(0)\|_0^2 \\ &\qquad\qquad\qquad +\int_0^T \int_\Omega \left[ \tfrac{5}{2}\partial_t \tilde J_\kappa |\tilde v_{ttt}|^2 + 3\partial_t^2 \tilde J_\kappa \tilde v_{tt}^i\tilde v_{ttt}^i + \partial_t^3 \tilde J_\kappa \tilde v_{t}^i\tilde v_{ttt}^i \right]\,.
\end{align*} Since $\partial_t \tilde J_\kappa = \text{Trace} (\tilde a_\kappa\, \nabla \tilde v_\kappa)$, $\partial^2_t \tilde J_\kappa = \text{Trace} (\tilde a_\kappa\, \nabla \partial_t\tilde v_\kappa) + P(\tilde J_\kappa^{-1}, \tilde a_\kappa, \nabla \tilde v_\kappa)$, and $\partial^3_t \tilde J_\kappa = \text{Trace} (\tilde a_\kappa\, \nabla \partial_t^2\tilde v_\kappa) + P(\tilde J_\kappa^{-1}, \tilde a_\kappa, \nabla \tilde v_\kappa)\, \nabla \partial_t \tilde v_\kappa$, using condition (\ref{deteta.c}) we see that $$\sup_{t\in[0,T]}\|\tilde v_{ttt}\|_0^2 \le \|\tilde v_{ttt}(0)\|_0^2 + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)).$$ From (\ref{Neumann}) evaluated at $t=0$, we see that $\|\tilde v_{ttt}(0)\|_0^2 \le M_0$, so the lemma is proved. \end{proof} \begin{lemma}[Energy estimates for the second time-differentiated $\kappa$-problem] \label{lemma10.3} For $M_0$ taken as in Lemma \ref{lemma1} and $\delta >0$, solutions of the $\kappa$-problem (\ref{smooth}) satisfy: \begin{align} &\sup_{t\in[0,T]} |\partial^2\tilde v_t \cdot \tilde n|_0^2 + \int_0^T |\sqrt{\kappa} \partial^2\tilde v_{tt} \cdot {\bar{n}^{\kappa}}|_0^2 \nonumber \\ &\qquad \le M_0 + T \, P(\sup_{t\in [0,T]} E_\kappa(t)) + \delta \sup_{t\in [0,T]} E_\kappa(t) +C\,P(\|\sqrt{\kappa} \tilde v_t\|^2_{L^2(0,T;H^{3.5}(\Omega))}) \,. \label{ss_kttx} \end{align} \end{lemma} \begin{proof} We let $\partial \partial_t^2$ act on (\ref{smooth.b}) and test with $\zeta_i^2 \partial \tilde v_{tt}$ where $\zeta_i^2 = \alpha_i$, and $\alpha_i$ is an element of our partition of unity. This localizes the analysis to a neighborhood of the boundary $\Gamma$ where the tangential derivative is well-defined. In this neighborhood, we use a normal coordinate system spanned by $(\partial_1 \eta_0, \partial_2 \eta_0, N)$. We follow the proof of Lemma \ref{lemma10.2} and replace $\partial_t^3$ with $\partial \partial_t^2$. There are only two differences between the analysis of the second and third time-differentiated problems. The first difference can be found in the analogue of term $III$ in (\ref{Hn_ttt}), which now reads $\int_0^T\int_\Gamma P(\partial \tilde \eta, \partial \tilde v)\, \partial \tilde v_t\, \partial^2 \tilde v_{tt}$. After integration by parts in space, this term is bounded by $C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$; however, this term requires a bound on $|\tilde v_{tt}|_1$, which requires us to study the third time-differentiated problem. (Compare this with the third time-differentiated problem wherein integration by parts in time forms an exact derivative which closes the estimate.) The second difference is significant. Because the energy function places $$\tilde v_{ttt} \in L^\infty(0,T; L^2(\Omega)) \text{ and } \tilde v_{tt} \in L^\infty(0,T; H^{1.5}(\Omega)),$$ there is a one-half derivative improvement that accounts for (\ref{ss_kttx}) being better than (\ref{ss_kttt}). In particular, the analogue of term $II$ in (\ref{improve}) is $\int_0^T\int_\Gamma \sqrt{\tilde g} \tilde H \tilde n\cdot{\bar{n}^{\kappa}}\, \partial \partial_t^2 ({\bar{n}^{\kappa}} -\tilde n) \cdot \partial \tilde v_{tt}$, and since $|\partial \partial_t^2({\bar{n}^{\kappa}} -\tilde n)|_0 \le C\, |P(\partial \eta)|_{L^\infty}\, |\tilde v_t|_2$, this integral is easily seen to be bounded by $C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$. (This is in sharp contrast to the difficult analysis required at the level of the third time-differentiated problem which follows equation (\ref{improve}).)
All of the other estimates follow identically to the proof of Lemma \ref{lemma10.2} with $\partial_t^3$ being replaced by $\partial \partial_t^2$. \end{proof} \begin{lemma}[Energy estimates for the time-differentiated $\kappa$-problem] For $M_0$ taken as in Lemma \ref{lemma1} and $\delta >0$, solutions of the $\kappa$-problem (\ref{smooth}) satisfy: \begin{align} &\sup_{t\in[0,T]} |\partial^3 \tilde v \cdot \tilde n|_0^2 + \int_0^T |\sqrt{\kappa} \partial^3\tilde v_{t} \cdot {\bar{n}^{\kappa}}|_0^2 \nonumber \\ &\qquad \le M_0 + T \, P(\sup_{t\in [0,T]} E_\kappa(t)) + \delta \sup_{t\in [0,T]} E_\kappa(t) +C\,P(\|\sqrt{\kappa} \tilde v\|^2_{L^2(0,T;H^{4.5}(\Omega))}) \,. \label{ss_ktxx} \end{align} \end{lemma} \begin{proof} After replacing $\partial\partial_t^2$ with $\partial^2\partial_t$, the proof is the same as the proof of Lemma \ref{lemma10.3}. \end{proof} \begin{lemma}[Energy estimates for the $\kappa$-problem] For $M_0$ taken as in Lemma \ref{lemma1} and $\delta >0$, solutions of the $\kappa$-problem (\ref{smooth}) satisfy: \begin{align} &\sup_{t\in[0,T]} |\partial^4 \tilde \eta \cdot \tilde n|_0^2 + \int_0^T |\sqrt{\kappa} \partial^4\tilde v \cdot {\bar{n}^{\kappa}}|_0^2 \le M_0 + T \, P(\sup_{t\in [0,T]} E_\kappa(t)) + \delta \sup_{t\in [0,T]} E_\kappa(t) \,. \label{ss_kxxx} \end{align} \end{lemma} \begin{proof} Let $\partial^3$ act on (\ref{smooth.b}) and test with $\partial^3 \tilde v$. All of the terms are estimated as in Lemma \ref{lemma10.3}, except the analogue of (\ref{difficult}) which reads, after replacing $\partial_t^3$ with $\partial^3$, as \begin{equation}\nonumber \kappa \int_0^T[ \tilde v \cdot \partial^4 {\bar{n}^{\kappa}}, {\bar{n}^{\kappa}} \cdot \partial^4\tilde v]_0 \,. \end{equation} Since the energy function places $\sqrt{\kappa} \tilde \eta \in L^\infty(0,T; H^{5.5}(\Omega))$, we see that this integral is bounded by $\delta \sup_{t\in[0,T]} E_\kappa(t) + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t))$. \end{proof} To the above energy estimates, we add one elliptic estimate arising from the modified boundary condition (\ref{kbca}). We will make use of the following identity: \begin{align}\label{form1} [\sqrt{g}\Delta_g(\eta^i)],_\gamma = [\sqrt{g} g^{\alpha \beta} \Pi ^i_j \eta^j,_{\beta\gamma} + \sqrt{g}(g^{\nu\mu}g^{\alpha\beta} - g^{\alpha\nu}g^{\beta\mu}) \eta^i,_\beta \eta^j,_\nu \eta^j,_{\mu \gamma}],_\alpha \,. \end{align} \begin{lemma}[Elliptic estimate for $\sqrt{\kappa}\tilde \eta$] Let $M_0$ be given as in Lemma \ref{lemma1}.
Then for $\delta >0$, \label{lemma_elliptic} \begin{equation}\label{ketaestimate} \sup_{t \in [0,T]}|\sqrt{\kappa}\tilde \eta|^2_{5}(t) \le M_0(\delta) +\delta \sup_{t\in[0,T]} E_\kappa(t) + C\,T\, P(\sup_{t\in[0,T]} E_\kappa(t)) \,. \end{equation} \end{lemma} \begin{proof} Letting $Q^{\alpha \beta i}_j:= Q^{\alpha \beta i}_j(\partial \tilde \eta)$ denote a smooth function of $\partial \tilde \eta$, from (\ref{form1}), we see that \begin{align*} \partial^3[\sqrt{\tilde g}\Delta_{\tilde g}(\tilde \eta^i)],_\gamma &= [\sqrt{\tilde g} \tilde g^{\alpha \beta} \Pi ^i_j \partial^3 \tilde \eta^j,_{\beta \gamma} + \sqrt{\tilde g}(\tilde g^{\nu\mu} \tilde g^{\alpha\beta} -\tilde g^{\alpha\nu}\tilde g^{\beta\mu}) \tilde \eta^i,_\beta \tilde \eta^j,_\nu \partial^3\tilde \eta^j,_{\mu \gamma}],_\alpha \\ & \qquad\qquad + [\partial Q^{\alpha \beta i}_j \partial^2 \tilde \eta^j ,_{\beta \gamma} + \partial^2 Q^{\alpha \beta i}_j \partial \tilde \eta^j ,_{\beta\gamma} + \partial^3 Q^{\alpha \beta i}_j \tilde \eta^j ,_{\beta\gamma}],_\alpha \,. \end{align*} The estimate (\ref{ketaestimate}) is obtained by letting $\partial^3\partial_\gamma$ act on the modified boundary condition (\ref{kbc}) and then testing this with $\zeta^2 \tilde g^{\gamma\delta}\Pi^i_l \partial^3 \tilde \eta^l,_\delta$ where $\zeta^2 = \alpha_i$. For convenience we drop the subscript $i$ from $\Omega_i$ and $\Gamma_i$. (Recall that $\alpha_i$ denotes the partition of unity introduced in Section 2.) For the surface tension term, integration by parts with respect to $\partial_\alpha$ yields \begin{align} &\int_{0}^{T}[\partial^3 (\sqrt{\tilde g}\tilde H\tilde n \circ \tilde\eta),_\gamma \,, \zeta^2 \tilde g^{\delta\gamma} \Pi \partial^3 \tilde \eta,_\delta ]_0 \le \int_{0}^{T}\left[ -|\zeta\sqrt{\tilde g}^{\frac{1}{2}} \Pi \partial^5\tilde \eta|_0^2 + C |F|_{L^\infty} \, |\tilde \eta|_{4}\, |\Pi\partial^5\tilde \eta|_{0} \right] \,, \nonumber \end{align} where $F:= P(\zeta, \partial \eta, \partial v, \partial^2\eta)$ is a polynomial of its arguments. To get this estimate we have used the fact that $$ \sqrt{\tilde g}(\tilde g^{\nu\mu} \tilde g^{\alpha\beta} -\tilde g^{\alpha\nu}\tilde g^{\beta\mu}) \tilde \eta^i,_\beta \tilde \eta^j,_\nu \partial^3\tilde \eta^j,_{\mu \gamma}\, \tilde g^{\gamma\delta}\Pi^i_l \partial^3 \tilde \eta^l,_{\delta\alpha}=0, $$ since $ \tilde \eta^i,_\beta \Pi^i_l =0$. (This ensures that the error term is linear in $|\Pi \partial^5 \tilde \eta|_0$ rather than quadratic.) We next analyze the artificial viscosity term. The testing procedure gives us the integral $$-\int_{0}^{T}\kappa [\partial^3\partial_\gamma \Delta_0(\tilde v\cdot {\bar{n}^{\kappa}}) {\bar{n}^{\kappa}}, \zeta^2 \tilde g^{\gamma\delta} \Pi\partial^3 \tilde \eta,_\delta]_0.$$ The positive term comes from $\partial^3\partial_\gamma$ acting on $\tilde v$. This gives, after integration by parts in space, the highest-order integrand $\kappa(\partial^3 \tilde v,_{\alpha\gamma} \cdot {\bar{n}^{\kappa}}) g^{\gamma\delta} g_0^{\alpha\beta} ({\bar{n}^{\kappa}} \cdot \Pi \partial^3 \tilde \eta ,_{\beta\delta})$, where $\Pi = \tilde n\otimes \tilde n$.
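We note in passing the elementary identity behind the cancellation invoked just above: since $\Pi = \tilde n\otimes \tilde n$ and the tangential derivatives $\tilde \eta,_\beta$ are orthogonal to the unit normal $\tilde n$ to $\tilde\eta(\Gamma)$,
\begin{equation*}
\tilde \eta^i,_\beta\, \Pi^i_l = (\tilde \eta,_\beta \cdot \tilde n)\, \tilde n_l = 0\,,
\end{equation*}
so that any contraction of $\Pi$ with $\partial\tilde\eta$ vanishes identically.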
We can write this term as $$\kappa(\partial^3 \tilde v,_{\alpha\gamma} \cdot {\bar{n}^{\kappa}}) g^{\gamma\delta} g_0^{\alpha\beta} ({\bar{n}^{\kappa}} \cdot \partial^3 \tilde \eta ,_{\beta\delta}) + \kappa(\partial^3 \tilde v,_{\alpha\gamma} \cdot {\bar{n}^{\kappa}}) g^{\gamma\delta} g_0^{\alpha\beta} ({\bar{n}^{\kappa}} \cdot(\Pi - \Pi_\kappa) \partial^3 \tilde \eta ,_{\beta\delta}),$$ where $\Pi_\kappa = {\bar{n}^{\kappa}}\otimes {\bar{n}^{\kappa}} $. The first term is an exact derivative in time, and yields $$ \frac{\kappa}{2} \frac{d}{dt} |\partial^5 \tilde \eta \cdot {\bar{n}^{\kappa}}|^2 - \kappa \partial^5 \tilde \eta^i \,\partial^5\tilde \eta^j\,\, {\bar{n}^{\kappa}}^i \partial_t {\bar{n}^{\kappa}}^j \,. $$ The space integral of the second term is estimated by $C\, |F|_{L^\infty} |\sqrt{\kappa} \tilde v|_5\, |\sqrt{\kappa} \tilde \eta|_5\, |\Pi_\kappa -\Pi|_{L^\infty}$ and $|\Pi_\kappa - \Pi|_{L^\infty} \le C\, \kappa\, |\tilde \eta|_{3.5}$. From (\ref{kbc}) $$ \kappa \partial^3\Delta_0 (v\cdot {\bar{n}^{\kappa}}) = - \partial^3( \sqrt{{\tilde g_\kappa}}^{-1} \sqrt{\tilde g} \Delta _{\tilde g} (\tilde \eta)\cdot {\bar{n}^{\kappa}}) + \partial^3 q. $$ Thus, $$ C\,\sqrt{\kappa} |\kappa \tilde v|_5 \le \kappa |F|_{L^\infty}( |\tilde v|_3 + |\sqrt{\kappa} \tilde v|_4) + (\kappa^{\frac{3}{2}} +\kappa^{\frac{1}{2}} ) |F|_{L^\infty} | \tilde \eta|_4 + |F|_{L^\infty} |\sqrt{\kappa}\tilde \eta|_5 + \sqrt{\kappa}|\tilde q|_3 \,, $$ so that \begin{align*} &\int_0^T\int_\Gamma\kappa(\partial^3 \tilde v,_{\alpha\gamma} \cdot {\bar{n}^{\kappa}}) g^{\gamma\delta} g_0^{\alpha\beta} ({\bar{n}^{\kappa}} \cdot(\Pi - \Pi_\kappa) \partial^3 \tilde \eta,_{\beta\delta}) \\ & \qquad \le C\, \int_0^T \left[ \sqrt{\kappa} |F|_{L^\infty}( |\tilde v|_3 + |\sqrt{\kappa} \tilde v|_4 + | \tilde \eta|_4 + |\tilde q|_3) \,|\sqrt{\kappa}\tilde \eta|_5 + \,|F|_{L^\infty} |\sqrt{\kappa}\tilde \eta|_5^2\right] \,. \end{align*} Having finished the estimates for the terms leading to the positive energy contribution, we next consider the most difficult of the error terms. This occurs when $\partial^3 \partial_\gamma$ acts on ${\bar{n}^{\kappa}}$, producing the integral $\int_{0}^{T} \kappa [\zeta^2\, \tilde v \cdot \partial^5 {\bar{n}^{\kappa}}, {\bar{n}^{\kappa}} \cdot \partial^5 \tilde \eta]_0.$ To analyze this term, we let $\tilde v =\tilde v^\gamma \partial_\gamma \tilde\eta_\kappa + \tilde v_n {\bar{n}^{\kappa}};$ we remind the reader that for $\gamma=1,2$, $\partial_\gamma \tilde \eta_\kappa$ spans the tangent space of $\tilde\eta_\kappa(t)(\Gamma)$ at $\tilde\eta_\kappa(x,t)$, so that $\partial_\gamma \tilde\eta_\kappa$ is orthogonal to ${\bar{n}^{\kappa}}$. It follows that $\tilde v^\gamma \partial_\gamma \tilde \eta_\kappa \cdot \partial^5{\bar{n}^{\kappa}}$ is equal to $-\tilde v^\gamma {\bar{n}^{\kappa}} \cdot \partial^5 \partial_\gamma \tilde \eta_\kappa$ plus lower order terms $\tilde v^\gamma{\mathcal R}^\gamma_4(\tilde \eta)$, which have at most only five tangential derivatives of $\tilde\eta_\kappa$. Note also that since ${\bar{n}^{\kappa}}\cdot {\bar{n}^{\kappa}}=1$, ${\bar{n}^{\kappa}}\cdot \partial^5{\bar{n}^{\kappa}}$ is also a sum of lower order terms which have at most only five tangential derivatives of $\tilde\eta_\kappa$.
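For completeness, the last assertion follows from the Leibniz rule applied to $|{\bar{n}^{\kappa}}|^2=1$ (written here for a single tangential direction; the mixed-derivative case is identical in structure):
\begin{equation*}
0 = \partial^5({\bar{n}^{\kappa}}\cdot{\bar{n}^{\kappa}}) = 2\,{\bar{n}^{\kappa}}\cdot\partial^5{\bar{n}^{\kappa}} + \sum_{j=1}^{4}\binom{5}{j}\,\partial^j{\bar{n}^{\kappa}}\cdot\partial^{5-j}{\bar{n}^{\kappa}}\,,
\end{equation*}
so that ${\bar{n}^{\kappa}}\cdot\partial^5{\bar{n}^{\kappa}}$ involves at most four tangential derivatives of ${\bar{n}^{\kappa}}$, and hence at most five tangential derivatives of $\tilde\eta_\kappa$.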
Thus, $$ \int_{0}^{T}\kappa [\zeta^2\, \tilde v \cdot \partial^5 {\bar{n}^{\kappa}}, {\bar{n}^{\kappa}} \cdot \partial^5 \tilde \eta]_0 = \int_{0}^{T}\kappa [\zeta^2\, \tilde v^\gamma {\bar{n}^{\kappa}} \cdot \partial^5 \partial_\gamma {{{\bar{\eta}^{\kappa}}}} + \tilde v^\gamma{\mathcal R}^\gamma_4(\tilde \eta) , {\bar{n}^{\kappa}} \cdot \partial^5 \tilde \eta]_0 \,, $$ where the remainder term satisfies $$\left|\int_{0}^{T}[ \kappa\tilde v^\gamma{\mathcal R}^\gamma_4(\tilde \eta)\, , \, {\bar{n}^{\kappa}}\cdot \partial^5 \tilde \eta ]_0\right| \le C\, \int_{0}^{T} | F|_{L^\infty}\, [ |\sqrt{\kappa}\tilde v|_4 \,|\sqrt{\kappa}\tilde \eta|_5 +|\sqrt{\kappa}\tilde \eta|_5^2]\,. $$ We must form an exact derivative from the remaining highest order term \begin{equation}\label{hot} \int_{0}^{T}\kappa [\zeta^2\, \tilde v^\gamma {\bar{n}^{\kappa}} \cdot \partial^5 \partial_\gamma {{{\bar{\eta}^{\kappa}}}} , {\bar{n}^{\kappa}} \cdot \partial^5 \tilde \eta]_0 \,, \end{equation} and this will require commuting the horizontal convolution operator, so that the $\tilde \eta$ on the right side of the $L^2(\Gamma)$ inner-product also has a convolution operator, and is hence converted to a ${\bar{n}^{\kappa}}\cdot \partial^4 \rho_{\frac{1}{\kappa}}\star_h\tilde \eta$ term. With this accomplished, we will be able to pull-out the $\partial_\gamma$ operator and form an exact derivative, which can be bounded by our energy function. Noting that on $\Gamma$ the horizontal convolution $\star_h$ restricts to the usual convolution $*$ on ${\mathbb R}^2$, we have that $$ \tilde \eta_\kappa= \sum_{i=1}^K \sqrt{\alpha_i}\left[ \rho_{\frac{1}{\kappa}} * [\rho_{\frac{1}{\kappa}} * (\sqrt{\alpha_i} \tilde\eta \circ \theta_i)] \right] \circ \theta_i^{-1} \,. $$ For notational convenience, we set $\rho= \rho_{1/\kappa}$, $\zeta= \sqrt{\alpha_i}$, and $R=[0,1]^2 = \theta_i^{-1} (\Gamma\cap U_i)$. It follows that (\ref{hot}) can be expressed as \begin{equation}\label{hot1} \int_{0}^{T} \kappa \sum_{i=1}^K \int_R (\tilde v^\gamma {\bar{n}^{\kappa}} )\circ \theta_i \cdot \partial_\gamma \partial^5 \left[ \zeta(\theta_i) \rho * \rho * (\zeta \tilde\eta) \circ \theta_i \right] \, \left({\bar{n}^{\kappa}} \cdot \partial^5 \tilde \eta\right) \circ \theta_i \,. \end{equation} With ${\tilde g_\kappa} := \partial^5 \rho * \zeta(\theta_i)\tilde\eta(\theta_i)$, we see that \begin{equation}\label{hot2} \kappa {\bar{n}^{\kappa}} \cdot \partial_\gamma\partial^5 \left[ \zeta(\theta_i) \rho * \rho * (\zeta \tilde\eta \circ \theta_i) \right] = \kappa {\bar{n}^{\kappa}} \cdot (\partial_\gamma \partial^5 \zeta(\theta_i)) \rho *\rho*(\zeta \tilde\eta(\theta_i)) + \kappa \zeta {\bar{n}^{\kappa}} \cdot \rho * \partial_\gamma {\tilde g_\kappa} + \kappa {\mathcal R}_5(\tilde\eta)\,, \end{equation} where the remainder ${\mathcal R}_5(\tilde\eta)$ has at most five tangential derivatives on $\tilde \eta$. Substitution of (\ref{hot2}) into (\ref{hot1}) yields three terms, corresponding to the three terms on the right-hand side of (\ref{hot2}). For the first term, we see that \begin{align} &\int_{0}^{T} \kappa \sum_{i=1}^K \int_R (\partial_\gamma \partial^5 \zeta(\theta_i)) \tilde v^\gamma{\bar{n}^{\kappa}} \cdot \, \rho *\rho*(\zeta \tilde\eta(\theta_i)) \, \left({\bar{n}^{\kappa}} \cdot \partial^5 \tilde\eta(\theta_i) \right) \nonumber \le C \int_{0}^{T} \|\sqrt{\kappa}\theta_i\|_{6.5} |\tilde\eta(\theta_i)|_{L^\infty} |\sqrt{\kappa}\tilde\eta|_5 \,.
\end{align} The second term on the right-hand side of (\ref{hot2}) gives the integral $$ \int_{0}^{T} \kappa \sum_{i=1}^K \int_R ({\bar{n}^{\kappa}} \cdot \rho * \partial_\gamma {\tilde g_\kappa}) \, \, ({\bar{n}^{\kappa}} \cdot {\tilde g_\kappa}) + {\mathcal R}_6 (\tilde\eta), $$ where the remainder ${\mathcal R}_6(\tilde\eta)$ is lower-order, containing terms which have at most four tangential derivatives on $\tilde \eta$ and five on $\zeta(\theta_i)$. We fix $i \in \{1,...,K\}$, drop the explicit composition with $\theta_i$, and set \begin{align*} \triangle_{{\bar{n}^{\kappa}}, g_\kappa} &= {\bar{n}^{\kappa}} \cdot \rho *{\tilde g_\kappa} - \rho *({\bar{n}^{\kappa}} \cdot {\tilde g_\kappa})\,, \\ \triangle_{{\bar{n}^{\kappa}}, \zeta\partial^5\tilde \eta} &= {\bar{n}^{\kappa}} \cdot \rho *\zeta\partial^5\tilde \eta - \rho *({\bar{n}^{\kappa}} \cdot \zeta\partial^5\tilde \eta)\,, \end{align*} and analyze the following integral: \begin{align*} &\int_R \{\zeta{\bar{n}^{\kappa}} \cdot \rho *{\tilde g_\kappa}\} \, \{ {\bar{n}^{\kappa}} \cdot \partial^5 \tilde \eta\} = \int_R \{\rho *({\bar{n}^{\kappa}} \cdot {\tilde g_\kappa})\} \, \{ {\bar{n}^{\kappa}} \cdot \zeta\partial^5 \tilde \eta\} + \int_R \triangle_{{\bar{n}^{\kappa}},g_\kappa} \, \{ {\bar{n}^{\kappa}} \cdot \zeta\partial^5 \tilde \eta\} \,, \\ &\qquad = \int_R \{{\bar{n}^{\kappa}} \cdot {\tilde g_\kappa}\} \, \{ {\bar{n}^{\kappa}} \cdot \partial^4\rho* \zeta\tilde \eta\} -\int_R \{{\bar{n}^{\kappa}} \cdot {\tilde g_\kappa}\} \, \triangle_{{\bar{n}^{\kappa}}, \zeta\partial^5\tilde \eta} + \int_R \triangle_{{\bar{n}^{\kappa}},g_\kappa} \, \{ {\bar{n}^{\kappa}} \cdot \zeta\partial^5 \tilde \eta\} +{\mathcal R}_7(\tilde \eta)\,, \end{align*} where the remainder ${\mathcal R}_7(\tilde \eta)$ comes from commuting $\partial^5$ with the cut-off function $\zeta$ and has the same bound as ${\mathcal R}_4(\tilde \eta)$. The first term on the right-hand side is a perfect derivative, and for the remaining terms we use Lemma \ref{commutator} together with the estimate $\kappa |\tilde g_\kappa|_{0,R} \le C \|\tilde \eta\|_{5.5}$ to find that $$ \kappa\int_R \{\zeta{\bar{n}^{\kappa}} \cdot \rho *{\tilde g_\kappa}\} \, \{ {\bar{n}^{\kappa}} \cdot \partial^5 \tilde \eta\} \le C\, |F|_{L^\infty}\,|\sqrt{\kappa}\tilde \eta|_5^2 . $$ Thus, summing over $i \in \{1,...,K\}$, \begin{equation}\nonumber \kappa \int_{0}^{T}[\zeta^2\, \tilde v \cdot \partial^5 {\bar{n}^{\kappa}}, {\bar{n}^{\kappa}} \cdot \partial^5 \tilde\eta]_0 \le C\,\int_{0}^{T} |F|_{L^\infty}\, (|\sqrt{\kappa}\Gamma|_6 + |\sqrt{\kappa}\tilde v|_4 + |\sqrt{\kappa} \tilde \eta|_5)\, |\sqrt{\kappa}\tilde \eta|_5 \,, \end{equation} where $|\sqrt{\kappa}\Gamma|_6:= \max_{i\in\{1,...,K\}} |\sqrt{\kappa}\theta_i|_6$. It is easy to see that $$ \int_{0}^{T} [\partial^3\partial_\gamma ( \sqrt{\tilde g} \Delta_{\tilde g}(\tilde \eta) \cdot ({\bar{n}^{\kappa}} -\tilde n) {\bar{n}^{\kappa}}), \zeta^2 \tilde g^{\delta\gamma} \Pi \partial^3 \tilde \eta,_\delta] \le C\,\int_{0}^{T} |F|_{L^\infty}\, |\sqrt{\kappa}\tilde \eta|_5^2\, $$ with the same bound for $ \int_{0}^{T} [\partial^3\partial_\gamma ( \sqrt{\tilde g} \Delta_{\tilde g}(\tilde \eta) \cdot {\bar{n}^{\kappa}} ({\bar{n}^{\kappa}} -\tilde n)) , \zeta^2 \tilde g^{\delta\gamma} \Pi \partial^3 \tilde \eta,_\delta] $.
With (\ref{deteta}), we infer that \begin{align*} |\sqrt{\kappa} \partial^5 \tilde \eta\cdot {\bar{n}^{\kappa}}|_0^2 (t) \le M_0+ C\,\int_{0}^{T}|\tilde q|_{3}^2 + C\,\int_{0}^{T} |F|_{L^\infty}\, (|\sqrt{\kappa}\Gamma|_6 + |\sqrt{\kappa}\tilde v|_4 + |\sqrt{\kappa} \tilde \eta|_5)\, |\sqrt{\kappa}\tilde \eta|_5 \,. \end{align*} Adding to this inequality the curl estimate (\ref{curl_eta_2D}) for $\sqrt{\kappa}\tilde \eta$ and the divergence estimate (which has the same bound as the curl estimate), and using Young's inequality to get $ \int_{0}^{T} |F|_{L^\infty}\, |\sqrt{\kappa}\tilde v|_4 \, |\sqrt{\kappa}\tilde \eta|_5 \le \delta \int_{0}^{T}|\sqrt{\kappa}\tilde v|_4^2 + C_\delta \int_{0}^{T} |F|_{L^\infty}^2\, |\sqrt{\kappa}\tilde \eta|_5^2$, we see that \begin{align*} &\sup_{t\in[0,T]}| \sqrt{\kappa} \tilde \eta|_5^2 \le M_0+ C\, T\, \sup_{t\in[0,T]}\left(\|v_t\|_{2.5}^2 + \|v\|_{3.5}^2+ \|\eta\|_{4.5}^4 \right) \\ &\qquad + C\,T\, \sup_{t\in[0,T]}\left(|F|_{L^\infty}^2\, (|\sqrt{\kappa}\Gamma|_6^2 + |\sqrt{\kappa} \tilde \eta|_5^2 )\right) + \delta \int_{0}^{T}|\sqrt{\kappa}\tilde v|_4^2 \,, \end{align*} from which the lemma follows. \end{proof} \subsection{Removing the additional regularity assumptions on the initial data} \label{initdatacondition} At this stage, we explain how we can remove the extra regularity assumptions on the initial data, $u_0$ and $\Omega$, so that the constant $M_0$ depends on $\|u_0\|_{4.5}$ and $|\Gamma|_{5.5}$ rather than $\sqrt{\kappa}\|u_0\|_{10.5}$ and $\sqrt{\kappa}|\Gamma|_{7}$ as stated in Lemma \ref{lemma1}. The modification requires the following regularization of the initial data: set $u_0 = \rho_{e^{-\kappa}}\star E_{\Omega_\kappa}(u(0))$, where $\Omega_\kappa$ is obtained by smoothing $\Omega$ via convolution with $\rho_{e^{-\kappa}}$, i.e., we use $\rho_{e^{-\kappa}}* \bar\theta_i$ as our family of charts. We make use of the fact that $$ P(\|\sqrt{\kappa} u_0\|_{10.5}) \le C\, P(\|u_0\|_{4.5}) \,, \ \ \ P(|\sqrt{\kappa} \Gamma|_{10}) \le C\, P(|\Gamma|_{5.5}) \,, $$ which follows by integrating six tangential derivatives by parts onto the mollifier $\rho_{e^{-\kappa}}$, and results in the constant $C>0$ being independent of $\kappa$. \subsection{The limit as $\kappa \rightarrow 0$} \begin{proposition}\label{prop3} With $M_0 = P(\|u_0\|_{4.5}, |\Gamma|_{5.5})$ a polynomial of its arguments and for $\tilde M_0 > M_0$, \begin{align} \sup_{t\in[0,T]} E_\kappa(t) \le \tilde M_0\,, \label{Ekbound} \end{align} where $T$ depends on the data, but not on $\kappa$. \end{proposition} \begin{proof} Summing the inequalities (\ref{ss_kttt}), (\ref{ss_kttx}), (\ref{ss_ktxx}), (\ref{ss_kxxx}), and (\ref{ketaestimate}), and using Lemma \ref{lemma1} and Proposition \ref{prop1}, we find that \begin{align*} \sup_{t\in[0,T]} E_\kappa(t) \le M_0 + C\, T\, P(\sup_{t\in[0,T]} E_\kappa(t)) + \delta \sup_{t\in[0,T]} E_\kappa(t) \,, \end{align*} where the polynomial $P$ and the constant $M_0$ do not depend on $\kappa$. Choose $\delta< 1$. Then, from the continuity of the left-hand side in $T$, we may choose $T$ sufficiently small and independent of $\kappa$, to ensure that (\ref{Ekbound}) holds. (See \cite{CoSh2005b} for a detailed account of such polynomial inequalities.) \end{proof} Proposition \ref{prop3} provides the weak convergence as $\kappa \rightarrow 0$ of subsequences of $(\tilde v,\tilde q)$ toward a limit which we denote by $(v,q)$ in the same space. We then set $\eta= \operatorname{Id} + \int_0^t v$, and $u = v \circ \eta^{-1}$.
It is obvious that $\tilde v_\kappa$, arising from the double horizontal convolution by layers of $\tilde v$, satisfies $\tilde v_\kappa \rightarrow v$ in $L^2(0,T; H^{3.5}(\Omega))$, and therefore $\tilde\eta^\kappa \rightarrow \eta$ in $L^2(0,T; H^{4}(\Omega))$. It follows that ${\operatorname{div}}u=0$ in $\eta(\Omega)$ in the limit as $\kappa \rightarrow 0$ in (\ref{leuler.c}). Thus, the limit $(v,q)$ is a solution to the problem (\ref{leuler}), and satisfies $E_0(t) \le \tilde M_0$. We then take $T$ even smaller, if necessary, to ensure that (\ref{deteta}) holds, which follows from the fundamental theorem of calculus. \section{A posteriori elliptic estimates} Solutions of the Euler equations gain regularity, beyond that provided by $E_\kappa(t)$, from elliptic estimates of the boundary condition (\ref{leuler.d}), which we write as $\sqrt{g}Hn(\eta) = \sqrt{g} q n$. Replacing $\partial_\gamma$ with $\partial_t$ in (\ref{form1}), we have the identities \begin{align} \label{form2} \partial_t(\sqrt{g}Hn \circ \eta)^i =- [\sqrt{g} g^{\alpha \beta} \Pi ^i_j v^j,_{\beta} + \sqrt{g}(g^{\nu\mu}g^{\alpha\beta} - g^{\alpha\nu}g^{\beta\mu}) \eta^i,_\beta \eta^j,_\nu v^j,_{\mu }],_\alpha \,, \end{align} and \begin{align} \label{form3} \partial_t^2(\sqrt{g}Hn \circ \eta)^i =- [\sqrt{g} g^{\alpha \beta} \Pi ^i_j v_t^j,_{\beta} + \sqrt{g}(g^{\nu\mu}g^{\alpha\beta} - g^{\alpha\nu}g^{\beta\mu}) \eta^i,_\beta \eta^j,_\nu v_t^j,_{\mu } + Q^{i \alpha\beta}_{j} v^j,_\beta],_\alpha \,, \end{align} where $Q^{i \alpha\beta}_{j} = Q(\partial \eta)$ is a rational function of $\partial \eta$. \begin{lemma} \label{euler_elliptic} Taking $\tilde M_0$ as in Proposition \ref{prop3}, and letting ${\mathcal M}_0$ denote a polynomial function of $\tilde M_0$, for $T$ taken sufficiently small, $$ \sup_{t\in[0,T]} \left[ | \Gamma(t)|_{5.5} + \|v(t)\|_{4.5} + \|v_t(t)\|_3\right] \le {\mathcal M}_0 \,. $$ \end{lemma} \begin{proof} We begin with the estimate for $v_t$. Following the proof of Lemma \ref{lemma_elliptic}, we let $\partial_\gamma \partial_t^2$ act on the boundary condition (\ref{leuler.d}) and test with $-\zeta^2 g^{\gamma\delta}\Pi^i_k v^k_t,_\delta$, where $\zeta^2 = \alpha_i$, an element of our partition of unity. Using (\ref{form3}), we see that $-\int_\Gamma \partial_\gamma \partial_t^2[ \sqrt{g} Hn(\eta)] \cdot \zeta^2 g^{\gamma\delta} v_t,_\delta = \int_\Gamma (\sqrt{g} q n)_{tt} \cdot [\zeta^2 g^{\gamma\delta} v_t,_{\delta}],_\gamma$. Using (\ref{Ekbound}), letting $\tilde C$ denote a constant that depends on $\tilde M_0$, and summing over the partition of unity, we find that \begin{align} |\partial^2 v_t\cdot n|_0^2 \le \tilde C\, [|v|_2 + |\eta|_2\, (|v_t|_1+ |v|_1) + |(q\sqrt{g}n)_{tt}|_0 ] \, |\partial^2v_t\cdot n|_{0} + \tilde C |v_t|_1^2 |\eta|_3 \,. \label{j1} \end{align} This follows since $$ \left[\sqrt{g}(g^{\nu\mu}g^{\alpha\beta} - g^{\alpha\nu}g^{\beta\mu}) \eta^i,_\beta \eta^j,_\nu v_t^j,_{\mu\gamma } \right] \left[ g^{\gamma\delta}\Pi^i_k v^k_t,_{\delta \alpha}\right] =0 \,, $$ while $$ \int_\Gamma \left[\sqrt{g}(g^{\nu\mu}g^{\alpha\beta} - g^{\alpha\nu}g^{\beta\mu}) \eta^i,_\beta \eta^j,_\nu v_t^j,_{\mu\gamma } \right] \left[ (g^{\gamma\delta}\Pi^i_k),_\alpha v^k_t,_{\delta }\right] \le \tilde C\, |v_t|_1\, (|\partial^2 v_t \cdot n|_0 + |v_t|_1\,|\eta|_3)\,. $$ Applying Young's inequality to (\ref{j1}) yields, after adjusting the constant, \begin{align*} |\partial^2 v_t\cdot n|_0^2 \le \tilde C\, [|\eta|_3^2 + |v|_2 ^2 + |v_t|_1^2 + |q_{tt}|_0^2] \,.
\end{align*} A similar computation shows that \begin{align*} |\partial^3 v_t\cdot n|_0^2 \le \tilde C\, [|\eta|_4^2 + |v|_3 ^2 + |v_t|_2^2 + |q_{tt}|_1^2] \,. \end{align*} Thus, by interpolation \begin{align*} \sup_{t\in[0,T]} |\partial^2 v_t\cdot n|_{0.5}^2 \le \tilde C\, \sup_{t\in[0,T]} [|\eta|_{3.5}^2 + |v|_{2.5} ^2 + |v_t|_{1.5}^2 + |q_{tt}|_{0.5}^2] \le {\mathcal M}_0 \,. \end{align*} Computing the $H^2(\Omega)$-norm of (\ref{zs3b}), we find that $$\sup_{t\in[0,T]}\|\operatorname{curl} v_t \|_2 \le {\mathcal M}_0 + \tilde C\, T\, \sup_{t\in[0,T]}\|v_t \|_3\,, $$ with the same estimate for $\sup_{t\in[0,T]}\|\operatorname{div} v_t \|_2$. Hence, for $T$ taken sufficiently small, we infer from Proposition \ref{prop1} that \begin{equation}\label{vth3} \sup_{t\in[0,T]}\|v_t\|_3 \le {\mathcal M}_0. \end{equation} Next, we let $\partial_\gamma \partial^2 \partial_t$ act on the boundary condition (\ref{leuler.d}) and test with $-\zeta^2 g^{\gamma\delta}\Pi^i_k \partial^2v^k,_\delta$. Using (\ref{form1}), we find that \begin{align*} \sup_{t\in[0,T]} |\partial^4 v\cdot n|_0^2 \le \tilde C\, \sup_{t\in[0,T]} [|\eta|_4^2 + |v|_3 ^2 + |q_t|_2^2] \le {\mathcal M}_0\,. \end{align*} Computing the $H^{3.5}(\Omega)$-norm of (\ref{curlvss}), and again taking $T$ sufficiently small, we see that $\sup_{t\in[0,T]}\|v\|_{4.5} \le {\mathcal M}_0$. In order to prove our remaining estimate, we need a convenient reparameterization of $\Gamma(t)$ via a height function $h$ in the normal bundle over $\Gamma$. Consider the isometric immersion $\eta_0:(\Gamma,g_0) \to ({\mathbb R}^3,\text{Id})$. Let ${\mathcal B}=\Gamma\times(-\epsilon,\epsilon)$ where $\epsilon$ is chosen sufficiently small so that the map $B:{\mathcal B}\to{\mathbb R}^3: (y,z) \mapsto y+zN(y)$ is itself an immersion, defining a tubular neighborhood of $\eta_0(\Gamma)$ in ${\mathbb R}^3$. We can choose a coordinate system $\frac{\partial}{\partial y^\alpha}$, $\alpha=1,2$ and $\frac{\partial}{\partial z}$. Let $G=B^*({\text{Id}})$, denote the induced metric on ${\mathcal B}$, and note that $G(y,z)=G_z(y)+dz\otimes dz,$ where $G_z$ is the metric on the surface $\Gamma\times\{z\}$, and that $G_0=g_0$. Let $h:\Gamma\to(-\epsilon,\epsilon)$ be a smooth function and consider the graph of $h$ in ${\mathcal B}$, parameterized by $\phi:\Gamma\to{\mathcal B}:y\mapsto (y,h(y))$. The tangent space to graph(h), considered as a submanifold of ${\mathcal B}$, is spanned at a point $\phi(x)$ by the vectors $$\phi_*(\frac{\partial}{\partial y^\alpha}) = \frac{\partial\phi}{\partial y^\alpha}=\frac{\partial}{\partial y^\alpha} + \frac{\partial h}{\partial y^\alpha}\frac{\partial}{\partial z},$$ and the normal to graph(h) is given by \begin{equation}\label{nn} n(y)=J_h^{-1}(y)\Big(-G^{\alpha\beta}_{h(y)}\frac{\partial h}{\partial y^\alpha}\frac{\partial}{\partial y^\beta} + \frac{\partial}{\partial z}\Big) \end{equation} where $J_h=(1+h_{,\alpha}G^{\alpha\beta}_{h(y)} h_{,\beta})^{1/2}$. Therefore, twice the mean curvature $H$ is defined to be the trace of $\nabla n$ while \begin{align*} (\nabla n)_{ij}=G(\nabla^{\mathcal B}_{\frac{\partial}{\partial w^i}} n,\frac{\partial}{\partial w^j}) \end{align*} where $\frac{\partial}{\partial w^\alpha} = \frac{\partial}{\partial y^\alpha}$ for $\alpha=1$, $2$ and $\frac{\partial}{\partial w^3}=\frac{\partial}{\partial z}$. 
Substituting the formula (\ref{nn}) for $n$, we see that \begin{align*} (\nabla n)_{\alpha\beta} =&\ G\Big(\nabla^{\mathcal B}_{\frac{\partial}{\partial y^\alpha}}\Big[-J_h^{-1} G_h^{\gamma\delta}h_{,\gamma}\frac{\partial}{\partial y^\delta}+J_h^{-1}\frac{\partial}{\partial z}\Big],\frac{\partial} {\partial y^\beta}\Big)\\ =&\ -(G_h)_{\delta\beta}(J_h^{-1}G_h^{\gamma\delta}h_{,\gamma})_{,\alpha} + F^1_{\alpha\beta}(y,h,\partial h); \\ (\nabla n)_{33} =&\ G\Big(\nabla^{\mathcal B}_{\frac{\partial}{\partial z}}\Big[-J_h^{-1} G_h^{\gamma\delta}h_{,\gamma}\frac{\partial}{\partial y^\delta}+J_h^{-1}\frac{\partial}{\partial z}\Big],\frac{\partial} {\partial z}\Big) \\ =&\ F^2_{\alpha\beta}(y,h,\partial h) \end{align*} for some functions $F_{\alpha\beta}^1$ and $F_{\alpha\beta}^2$, $\alpha,\beta=1,2$. Letting $\gamma_0$ denote the Christoffel symbols associated to the metric $g_0$ on $\Gamma$, we find that the curvature of graph($h$) is given by \begin{align} L_h(h):= H = -(J_h^{-1}G_h^{\gamma\delta}h_{,\gamma})_{,\delta} + J_h^{-1}([\gamma_0]^j_{j3} - G_h^{\gamma\delta}h_{,\gamma}[\gamma_0]^j_{j\delta})\,. \label{divform} \end{align} Note that the metric $G_h = P(h)$, and that the highest-order term is in divergence form, while the lower-order term is a polynomial in $\partial h$. The function $h$ determines the {\it height}, and hence shape, of the surface $\Gamma(t)$ above $\Gamma$. Given a signed height function $h: \Gamma_0 \times [0,T) \rightarrow {\mathbb R}$, for each $t\in [0,T)$, define the {\it normal} map $$ \eta^\nu : \Gamma_0 \times [0,T) \rightarrow \Gamma(t), \ \ \ (y,t) \mapsto y+ h(y,t) N(y)\,. $$ Then, there exists a unique {\it tangential} map $\eta^\tau: \Gamma_0 \times [0,T)\rightarrow \Gamma_0 $ (a diffeomorphism as long as $h$ remains a graph) such that $\eta|_\Gamma(t)$ has the decomposition $$\eta|_\Gamma(\cdot, t) = \eta^\nu(\cdot,t) \circ \eta^\tau(\cdot, t), \ \ \ \eta|_\Gamma(y,t) = \eta^\tau(y,t) + h(\eta^\tau(y,t),t) N(\eta^\tau(y,t))\,.$$ The boundary condition (\ref{euler.c}) can be written as $\sigma L_h(h)= q\circ (\eta^\tau)^{-1}$. The operator $L_h$ is a quasilinear elliptic operator; from the standard regularity theory for quasilinear elliptic operators with $H^3$ coefficients on a compact manifold, we have the elliptic estimate $$ |h|_{5.5} \le \tilde C\, |q \circ (\eta^\tau)^{-1}|_{3.5} \le \tilde C\, \|q\|_{4} \,. $$ By (\ref{Neumann}), we see that for all $t\in[0,T]$, $$ \|q\|_4 \le \tilde C\, \|a\|_2^2\, \|v\|_3^2 + \tilde C \,|\eta|_3 \, |v_t|_2 \le {\mathcal M}_0\,, $$ the last inequality following from (\ref{vth3}). Since $\Gamma(t)=$graph$h(t)$, this estimate shows that $\Gamma(t)$ maintains its $H^{5.5}$-class regularity for $t\le T$. \end{proof} \section{$\kappa$-independent estimates for the smoothed problem and existence of solutions in 3D} \label{kapriori3} The 3D analysis of the $\kappa$-problem requires assuming that the initial data $ u_0 \in H^{5.5}(\Omega)$ and $ \Gamma$ of class $H^{6.5}$. This is necessitated by Sobolev embedding $\|\tilde v_t\|_{L^\infty} \le C \|\tilde v_t\|_3$. By replacing the third-time differentiated problem with the fourth time-differentiated problem the identical analysis as in Section \ref{kapriori} yields $$ E^{3D}_\kappa(t) \le \tilde M_0\,, $$ where $\tilde M_0$ is a polynomial of $\| u_0\|_{5.5}$ and $| \Gamma|_{6.5}$. (In fact, our analysis in Section \ref{kapriori} used all of the 3D terms and notation, so no changes are required other than raising the regularity by one derivative.) 
We let $(v,q)$ again denote the limit of $(\tilde v, \tilde q)$ as $\kappa \rightarrow 0$. The identical limit process as in 2D shows that $(v,q)$ is a solution of the Euler equations. Having a solution $(v,q)$ to the Euler equations, we can use the a posteriori estimates of Lemma \ref{euler_elliptic} as a priori estimates for solutions of the Euler equations. We see that $$ \sup_{t\in[0,T]} \left[ E_\kappa^{2D}(t) + | \Gamma(t)|_{5.5} + \|v(t)\|_{4.5} + \|v_t(t)\|_3\right] \le {\mathcal M}_0 \,, $$ where ${\mathcal M}_0$ is a polynomial function of $\|u_0\|_{4.5}$ and $|\Gamma|_{5.5}$. The key point here is that the elliptic estimate for $v_t \in H^3(\Omega)$ improves the regularity given by $E_\kappa^{2D}(t)$ and allows for the required Sobolev embedding theorem to hold. Since our initial data is a priori assumed regularized as in Subsection \ref{initdatacondition}, we see that solutions of the Euler equations in 3D only depend on ${\mathcal M}_0$. \section{Uniqueness of solutions to (\ref{leuler})}\label{uniqueness} Suppose that $(\eta^1,v^1,q^1)$ and $(\eta^2,v^2,q^2)$ are both solutions of (\ref{leuler}) with initial data $u_0 \in H^{5.5}(\Omega)$ and $\Gamma$ of class $H^{6.5}$. Setting $$ {\mathcal E}_\eta(t) = \sum_{k=0}^4 \|\partial_t^k \eta(t)\|_{5.5-k}^2 , $$ by the method of Section \ref{kapriori} with $\kappa=0$, we infer that both ${\mathcal E}_{\eta^1}(t)$ and ${\mathcal E}_{\eta^2}(t)$ are bounded by a constant ${\mathcal M}_0$ depending on the data $u_0$ and $\Gamma$ on a time interval $0\le t\le T$ for $T$ small enough. Let $$ w: = v^1-v^2, \ \ r: = q^1 - q^2, \ \text{ and } \xi := \eta^1 - \eta^2 \,. $$ Then $(\xi,w,r)$ satisfies \begin{subequations} \label{lunique} \begin{alignat}{2} \xi&=\int_0^t w\ \ \ &&\text{in} \ \Omega \times (0,T]\,, \label{lunique.a}\\ \partial_t w^i+ (a^1)^k_i\,r,_k&=(a^2-a^1)^k_i\, q^2,_k &&\text{in} \ \Omega \times (0,T]\,, \label{lunique.b}\\ (a^1)^j_i w^i,_j &= (a^2-a^1)^j_i {v^2}^i,_j &&\text{in} \ \Omega \times (0,T] \,, \label{lunique.c}\\ r\, n^1 &= -\sigma\Pi^1 {g^1}^{\alpha\beta} \xi,_{\alpha\beta} -\sigma\sqrt{g^1} \Delta_{g^1-g^2}(\eta^2) \quad\quad &&\text{on} \ \Gamma\times(0,T] \,, \label{lunique.d} \\ (\xi,w) &= (0, 0) &&\text{on} \ \Omega\times\{t=0\} \,. \label{lunique.e} \end{alignat} \end{subequations} Set $$ E(t) = \sum_{k=0}^3 \|\partial_t^k \xi(t)\|_{4.5-k}^2 . $$ We will show that $E(t)=0$, which shows that $w=0$. To do so, we analyze the forcing terms on the right-hand side of (\ref{lunique.b}) and (\ref{lunique.c}), as well as the term $\sigma\Delta_{g^1-g^2}(\eta^2)$ in (\ref{lunique.d}). We begin with the third time-differentiated problem, and study the integral $\int_0^T \int_\Omega \partial_t^3 [ (a^2-a^1) \, \nabla q^2]\, w_{ttt}$. The highest-order term is \begin{align*} \int_0^t\int_\Omega (a^1-a^2) \, \nabla q^2_{ttt} w_{ttt} & \le {\frac{1}{2}}{\mathcal M}_0\int_0^t\|a^1-a^2\|_{L^\infty}^2 + {\frac{1}{2}}\int_0^t \|w_{ttt}\|_0^2 \le C\, t\, P(E(t))\,. \end{align*} The third space-differentiated and mixed-derivative problems have forcing terms that can be similarly bounded. The difference in pressure $r$, using the notation of (\ref{Neumann}), satisfies the following Neumann problem: \begin{align*} L_{a_1}(r) & = -\partial_t {a^1}^j_i w^i,_j + [{a^1}^j_i (a^2-a^1)^k_i q^2,_k],_j \text{ in } \Omega \,, \\ B_{a_1}(r) & = -w_t\cdot \sqrt{g^1} n^1 + {a^1}^j_i (a^2-a^1)^k_i q^2,_k N_j \text{ on } \Gamma \,.
\end{align*} Since $P(\|\eta^1\|_{4.5})$ is bounded by some constant $C=P({\mathcal M}_0)$, (\ref{elliptic}) provides the estimate $$ \|r\|_{3.5} \le C [ \|a^1_t\|_{1.5} \, \|w\|_{2.5} + \|a^1\|_{2.5}\, \|q^2\|_{3.5}\, \|a^1-a^2\|_{2.5} + |\sqrt{g^1}n^1|_{2}\, \|w_t\|_{2.5}] \,. $$ Since $\|a^1-a^2\|_{2.5} \le C \|\xi\|_{3.5}$, and $\|a^1_t\|_{1.5}$, $\|a^1\|_{2.5}$, $\|q^2\|_{3.5}$, and $|\sqrt{g^1}n^1|_{2}$ are all bounded by ${\mathcal M}_0$, we see that $\|r(t)\|_{3.5} \le C P(E(t))$. Similar estimates for the time derivatives of $r$ show that $\|r(t)\|_{3.5} + \|r_t(t)\|_{2.5}+ \|r_{tt}(t)\|_{1} \le C P(E(t))$. This shows that the energy estimates of Section \ref{kapriori} go through unchanged for equation (\ref{lunique}); therefore, using (\ref{lunique.e}), we see that $\sup_{t\in[0,T]} E(t) \le C\,T\,P(\sup_{t\in[0,T]} E(t) )$. For $T$ taken sufficiently small, this polynomial inequality forces $\sup_{t\in[0,T]} E(t)=0$ (see \cite{CoSh2005b}), so that $w=0$ and $r=0$, and uniqueness follows. \section{The zero surface tension case $\sigma=0$} \label{L1} In this, the second part of the paper, we use our methodology to prove well-posedness of the free-surface Euler equations with $\sigma=0$ and the Taylor sign condition (\ref{lindblad}) imposed, previously established by Lindblad in \cite{Li2004}. The main advantages of our method over the Nash-Moser approach of \cite{Li2004} are the significantly shorter proof and the fact that we provide directly the optimal space in which the problem is set, instead of having to separately perform an optimal energy study once a solution is known as in \cite{ChLi2000}. If one uses a Nash-Moser approach without performing the analysis of \cite{ChLi2000}, then one obtains results with much higher regularity requirements than necessary, as for instance in \cite{La2005} for the irrotational water-wave problem without surface tension. We also obtain lower regularity results than those given by the functional framework of \cite{ChLi2000} for the 3D case.
We will extensively make use of the horizontal convolution by layers defined in Section \ref{2}, and just as in the first part of the paper, for $v\in L^2(\Omega)$ and $\kappa\in (0,\kappa_0)$, we define the smoothed velocity $v^\kappa$ by \begin{equation*} \displaystyle v^{\kappa}=\sum_{i=1}^K \sqrt{\alpha_i} \bigl[\rho_{\frac{1}{\kappa}}\star_h [\ \rho_{\frac{1}{\kappa}}\star_h ((\sqrt{\alpha_i} v)\circ\theta_i)]\bigr]\circ\theta_i^{-1}+\sum_{i=K+1}^{L} \alpha_i v \,. \end{equation*} The horizontal convolution by layers is of crucial importance for defining an approximate problem whose asymptotic behavior will be compatible with the formal energy laws for smooth solutions of the original (unsmoothed) problem (\ref{euler}), since the regularity of the moving domain will appear as a surface integral term. In this second part of the paper, the properties of these horizontal convolutions will be featured in a more extensive way than in the surface tension case of the first part of the paper. We remind the reader that this type of smoothing satisfies the usual properties of the standard convolution; in particular, independently of $\kappa$, we have the existence of $C>0$ such that for any $v\in H^s(\Omega)$: \begin{equation*} \|v^{\kappa}\|_s\le C\ \|v\|_s,\ \text{and}\ \ |v^\kappa|_{s-{\frac{1}{2}}+p}\le {C} \kappa^{-p} |v|_{s-{\frac{1}{2}}}\ \text{for}\ p\ge 0. \end{equation*} We will denote for any $l\in \{1,...,K\}$, the following transformed functions from $v$ and $\eta$ that will naturally arise at the variational level: \begin{definition} \begin{equation*} \displaystyle v_{l\kappa}=\rho_{\frac{1}{\kappa}}\star_h (\sqrt{\alpha_l} v\circ\theta_l) \ \text{in}\ (0,1)^3, \end{equation*} \begin{align*} \eta^\kappa&=\text{Id}+\int_0^t v^\kappa\ \text{in}\ \Omega,\\ \eta_{l\kappa}&=\theta_l+\int_0^t v_{l\kappa}\ \text{in}\ (0,1)^3. \end{align*} \end{definition} \begin{remark} The regularity of the moving free surface will be provided by control of each $\eta_{l\kappa}$ in a suitable norm independently of the parameter $\kappa$. \end{remark} \section{The smoothed $\kappa$-problem and its linear fixed-point formulation} \label{L3} As it turns out, the smoothed problem associated to the zero surface tension Euler equations can be found quite simply and naturally, and involves only transport-type arguments in an Eulerian framework. Also, the construction of a solution is easier if we assume more regularity on the domain and initial velocity than in Theorem \ref{ltheorem2}. We shall therefore assume until Section \ref{L12} that $\Omega$ is of class $H^{\frac{9}{2}}$ and $u_0\in H^{\frac{9}{2}}(\Omega)$. In Section \ref{L12}, we will show how this restriction can be removed. Letting $u=v\circ{\eta^\kappa}^{-1}$, we consider the following sequence of approximate problems in which the transport velocity $u^\kappa$ is smoothed: \begin{subequations} \label{smoothl} \begin{align} u_t+\nabla_{u^\kappa}u+\nabla p&=0\ \text{in}\ \eta^\kappa(t,\Omega),\\ \operatorname{div}u&=0\ \text{in}\ \eta^\kappa(t,\Omega),\\ p&=0\ \text{on}\ \eta^\kappa(t,\Gamma),\\ u(0)&=u_0\ \text{in}\ \Omega. \end{align} \end{subequations} In order to solve this smoothed problem, we will use a linear problem whose fixed point will provide the desired solution. 
If we denote by $\bar v$ an arbitrary element of $C_T$ defined in Section \ref{L4}, and ${{{\bar{\eta}^{\kappa}}}}$ the corresponding Lagrangian flow defined above, then we search for $w$ such that if $u=w\circ{(\bar\eta^\kappa)^{-1}}$ and ${\bar{u}^{\kappa}}=\bar v^\kappa\circ({{{\bar{\eta}^{\kappa}}}}) ^{-1}$, we have that \begin{subequations} \label{smoothlinearl} \begin{align} u_t+\nabla_{{\bar u}^\kappa}u +\nabla p&=0\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(t,\Omega),\\ \operatorname{div}u&=0\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(t,\Omega),\\ p&=0 \ \text{on}\ {{{\bar{\eta}^{\kappa}}}}(t,\Gamma),\\ u(0)&=u_0\ \text{in}\ \Omega. \end{align} \end{subequations} A fixed point $w=\bar v$ to this problem then provides a solution to (\ref{smoothl}). In the following sections, $\bar v\in C_T$ is assumed given, and $\kappa$ is in $(0,\kappa_0)$. $\kappa$ is fixed until Section \ref{L6} where we study the asymptotic behavior of the problem (\ref{smoothl}) as $\kappa\rightarrow 0$. \begin{remark} Note that, for this problem, we do not add any parabolic artificial viscosity, in order to keep the transport-type structure of the Euler equations and to preserve the condition $p=0$ on the free boundary. \end{remark} \section{Existence of a solution to (\ref{smoothl})} \label{L4} \subsection{A Closed convex set} \begin{definition} For $T>0$, we define the following closed convex set of the Hilbert space $L^2(0,T;H^{{\frac{7}{2}}}(\Omega))$: \begin{align*} C_T=\{ v \in L^2(0,T;H^{{\frac{7}{2}}}(\Omega))| \ \sup_{[0,T]} \|v\|_{{\frac{7}{2}}}\le 2 \|u_0\|_{{\frac{7}{2}}}+1 \}, \end{align*} \end{definition} It is clear that $C_T$ is non-empty (since it contains the constant in time function $u_0$), and is a convex, bounded and closed set of the separable Hilbert space $L^2(0,T;H^{{\frac{7}{2}}}(\Omega))$. By choosing $T(\|\nabla u_0\|_{{{\frac{7}{2}}}}+1)\le C_\Omega \epsilon_0$, condition (\ref{eta}) holds for $\eta = \text{Id}+\int_0^t v$ and any $v \in C_T$ and thus (\ref{aequation}) is well-defined. We then see that, by taking $T$ smaller if necessary, we have the existence of $\kappa_1>0$ such that for any $\kappa\in (0,\kappa_1)$, we have the injectivity of $\eta^\kappa(t)$ on $\Omega$ for any $t\in [0,T]$, and $\nabla\eta^\kappa$ satisfies condition (\ref{eta}). We then denote $a^\kappa=[\nabla\eta^\kappa]^{-1}$, and we let $n^\kappa(\eta^\kappa(x))$ denote the exterior unit normal to $\eta^\kappa(\Omega)$ at $\eta^\kappa(x)$, with $x\in\Gamma$. We now set $\kappa_2=\min(\kappa_0,\kappa_1)$, and assume in the following that $\kappa\in (0,\kappa_2)$. \subsection{Existence and uniqueness for the smoothed problems (\ref{smoothlinearl}) and (\ref{smoothl})} Suppose that $\bar v\in C_T$ is given. Now, for $v\in C_T$ given, we define $p$ on ${{{\bar{\eta}^{\kappa}}}}(t, \Omega)$ by \begin{subequations} \label{ex1} \begin{align} \triangle p&=-{\bar{u}^{\kappa}}_i,_j u_j,_i\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(t, \Omega),\\ p&=0\ \text{on}\ {{{\bar{\eta}^{\kappa}}}}(t, \Gamma), \end{align} \end{subequations} where $u=v\circ({{{\bar{\eta}^{\kappa}}}})^{-1}$. We next define $\tilde v$ in $\Omega$ by \begin{equation} \tilde v(t)=u_0+\int_0^t [\nabla p] (t', {{{\bar{\eta}^{\kappa}}}}(t',\cdot) ) dt', \end{equation} and we now explain why the mapping $v\rightarrow \tilde v$ has a fixed point in $C_T$ for $T>0$ small enough. For each $t\in [0,T]$, let $\Psi(t)$ denote the solution of $\Delta \Psi(t) =0$ in $\Omega$ with $\Psi(t) = \bar \eta^\kappa(t)$ on $\Gamma$. 
For $\kappa$ and $T$ taken sufficiently small, $|\bar\eta^\kappa - \text{Id}|_4 \ll 1$, so that $\Psi(t)$ is an embedding, and satisfies \begin{equation} \label{ex1a} \| \Psi(t)\|_{4.5} \le C |\bar \eta^\kappa(t)|_4\,. \end{equation} Letting $Q(x,t) = p(\Psi(x,t), t)$ and $A(x,t) = [ \nabla \Psi(x,t)]^{-1}$, (\ref{ex1}) can be written as \begin{align*} [A^k_i A^j_i Q,_k],_j &= -[{\bar{u}^{\kappa}}_i,_j u_j,_i](\Psi(x,t),t) \ \ \text{ in } \ \ \Omega \,, \\ Q &=0 \ \ \text{ on } \ \ \Gamma\,. \end{align*} By elliptic regularity (with Sobolev class regularity on the coefficients \cite{Eb2002}) almost everywhere in $(0,T)$ and using (\ref{ex1a}), \begin{align} \|p\|_{\frac{9}{2},{{{\bar{\eta}^{\kappa}}}}(t,\cdot)}\le C P(|{{{\bar{\eta}^{\kappa}}}}|_4) \|\bar v\|_{\frac{7}{2}}\|v\|_{\frac{7}{2}} P(\|{{{\bar{\eta}^{\kappa}}}}\|_{\frac{7}{2}}), \label{ex2} \end{align} where $P$ denotes a generic polynomial. Now, with the definition of $T$ and $C_T$, along with the properties of the convolution that allow us to state that $$|{{{\bar{\eta}^{\kappa}}}}|_4\le \frac{1}{\kappa} |{{{\bar{\eta}^{\kappa}}}}|_3,$$ (since the derivatives involved are along the boundary, allowing our convolution by layers to smooth in these directions), this provides the following estimate: \begin{align*} \|p\|_{\frac{9}{2},{{{\bar{\eta}^{\kappa}}}}(t,\cdot)}\le C_\kappa P(\|{{{\bar{\eta}^{\kappa}}}}\|_{\frac{7}{2}})\|v\|_{\frac{7}{2}}\le C_\kappa\|v\|_{\frac{7}{2}}, \end{align*} where we have used the definition of $C_T$, and where $C_\kappa$ denotes a generic constant depending on $\kappa$. Consequently, we get in $[0,T]$, \begin{align} \|\tilde v(t)\|_{\frac{7}{2}} &\le \|u_0\|_{\frac{7}{2}}+\int_0^t \|p\|_{\frac{9}{2},{{{\bar{\eta}^{\kappa}}}}(t',\cdot)}\|{{{\bar{\eta}^{\kappa}}}}\|_{\frac{7}{2}}\nonumber\\ &\le \|u_0\|_{\frac{7}{2}}+C_\kappa \int_0^t \|v\|_{\frac{7}{2}}\|{{{\bar{\eta}^{\kappa}}}}\|_{\frac{7}{2}}. \label{ex3} \end{align} With the definition of $C_T$, this yields: \begin{equation*} \sup_{[0,T]}\|\tilde v(t)\|_{\frac{7}{2}} \le \|u_0\|_{\frac{7}{2}} +C_\kappa T. \end{equation*} Now, for $T_\kappa\in(0,T)$ such that $T_\kappa C_\kappa\le 1$, we see that $\tilde v\in C_{T_\kappa}$, which ensures that the closed convex set $C_{T_\kappa}$ is stable under the mapping $v\rightarrow \tilde v$. One can also show that this mapping is sequentially weakly continuous in $L^2(0,{T_\kappa}; H^{\frac{7}{2}}(\Omega))$. Therefore, by the Tychonoff fixed point theorem, there exists a fixed point $v=\tilde v$ in $C_{T_\kappa}$. As for the uniqueness of this fixed point, if another fixed point $\check v$ existed, we would have, by the linearity of the mapping $v\rightarrow p$ and the estimates (\ref{ex2}) and (\ref{ex3}), an inequality of the type: \begin{equation*} \|(v-\check v)(t)\|_{\frac{7}{2}} \le C \int_0^t \|v-\check v\|_{\frac{7}{2}}, \end{equation*} which, by Gronwall's inequality, establishes the uniqueness of the fixed point. By construction, if we denote $u=v\circ({{{\bar{\eta}^{\kappa}}}})^{-1}$, this fixed point satisfies the equation on $(0,{T_\kappa})$: \begin{equation*} u_t+{\bar{u}^{\kappa}}_i u,_i+\nabla p=0\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(t,\Omega). \end{equation*} Besides, from the definition of $p$ in (\ref{ex1}), we have \begin{equation*} \operatorname{div} u_t+{\bar{u}^{\kappa}}_i \operatorname{div} u,_i=0\ \text{in}\ {{{\bar{\eta}^{\kappa}}}}(t,\Omega), \end{equation*} {\it i.e.} \begin{equation*} \operatorname{div} u (t,{{{\bar{\eta}^{\kappa}}}}(t,x))=\operatorname{div} u_0(x)=0\ \text{in}\ \Omega.
\end{equation*} This precisely shows that $u=v\circ({{{\bar{\eta}^{\kappa}}}})^{-1}$ is the unique solution of the linear system (\ref{smoothlinearl}) on $(0,{T_\kappa})$. Now, we see that we again have a mapping $\bar v\rightarrow v$ from $C_{T_\kappa}$ into itself, which is also sequentially weakly continuous. It therefore also has a fixed point $v_\kappa$ in $C_{T_\kappa}$, which is a solution of (\ref{smoothl}). In the following, we study the limit as $\kappa\rightarrow 0$ of the time of existence ${T_\kappa}$ and of $v_\kappa$. For the sake of conciseness, we will also denote $v_\kappa$, $u_\kappa=v_\kappa\circ{\eta^{\kappa}}^{-1}$ and $(u_\kappa)^\kappa$ respectively by $\tilde v$, $\tilde u$ and $\tilde u^\kappa$. \section{Conventions about constants, the time of existence $T_\kappa$, and the dimension of the space} \label{L5} From now on, until Section \ref{L12}, we shall stay in $\mathbb R^2$ for the sake of notational convenience. In Section \ref{L12}, we shall explain the differences for the three-dimensional case. In the remainder of the paper, we will denote any constant depending on $\|u_0\|_{\frac{9}{2}}$ by $N(u_0)$. So, for instance, with $q_0$ the solution of \begin{subequations} \begin{align} \triangle q_0&=-u_0,_j^iu_0,_i^j\ \text{in}\ \Omega,\\ q_0&=0\ \text{on}\ \Gamma, \end{align} \end{subequations} we have by elliptic regularity $\|q_0\|_{\frac{9}{2}}\le N(u_0)$ (since $\Omega$ is assumed to be of class $H^{\frac{9}{2}}$ until Section \ref{L12}). We will also denote generic constants by the letter $C$. Moreover, we will denote $$\|\Omega\|_s=\sum_{i=1}^K \|\theta_i\|_{s,{(0,1)^2}}.$$ Furthermore, the time $T_\kappa>0$ will be chosen small enough so that on $[0,T_\kappa]$, we have for our solution $\tilde v$ given by Section \ref{L4}: \begin{subequations} \label{assume} \begin{align} {\frac{1}{2}}\le \text{det} \nabla{{\tilde\eta}^\kappa} &\le {\frac{3}{2}}\ \text{in}\ \Omega, \label{assume.a}\\ \|\tilde\eta\|_3&\le |\Omega|+1, \label{assume.b}\\ \|\tilde q\|_3&\le \|q_0\|_3+1, \label{assume.c}\\ \|\tilde v\|_{\frac{5}{2}}&\le \|u_0\|_{\frac{5}{2}}+1. \label{assume.d} \end{align} \end{subequations} The right-hand sides appearing in the last three inequalities shall be denoted by a generic constant $C$ in the estimates that we will perform. In what follows, we will prove that this can be achieved on a time independent of $\kappa$. \section{A continuous in time space energy appropriate for the asymptotic process} \label{L6} \begin{definition} We choose $0\le \xi\in\mathcal{C}^\infty(\overline{\Omega})$ such that $\text{Supp}\xi\subset\cap_{i=K+1}^{L}[\text{Supp}\alpha_i]^c$ and $\xi=1$ in a neighborhood of $\Gamma$. We then pick $0\le \beta\in\mathcal{D}(\Omega)$ such that $\beta=1$ on $[\text{Supp}\xi]^c$. We then define on $[0,T_\kappa]$: \begin{align} \tilde E(t)=&\sup_{[0,t]}\bigl[\sum_{l=1}^K\|\sqrt{\alpha_l}(\theta_l){{\tilde\eta}_{l\kappa}}\|_{{\frac{7}{2}},(0,1)^2}+\|{{\tilde\eta}^\kappa}\|_{\frac{7}{2}}+\|\beta \tilde\eta\|_{\frac{7}{2}}+\|\tilde v\|_3+\|\tilde q\|_{\frac{7}{2}} \bigr]\nonumber\\ &+\sup_{[0,t]}\sum_{l=1}^K\bigl[\kappa\|\sqrt{\alpha_l}\tilde v\circ\theta_l\|_{{\frac{7}{2}},(0,1)^2}+\kappa^{\frac{3}{2}}\|\sqrt{\alpha_l}\tilde v\circ\theta_l\|_{4,(0,1)^2}\bigr]+1\,. \label{Elin} \end{align} \end{definition} \begin{remark} Note the presence of $\kappa$-dependent coefficients in $\tilde E(t)$, which are indeed necessary for our asymptotic study. The corresponding terms, without the $\kappa$, would of course not be asymptotically controlled.
\end{remark} \begin{remark} The 1 is added to the norm to ensure that $\tilde E\ge 1$, which will sometimes be convenient, though not necessary. \end{remark} Now, since from Section \ref{L4}, $\tilde v\in \mathcal{C}^0 ([0,T_\kappa];H^{\frac{7}{2}}(\Omega))$ (in a way not controlled asymptotically, which does not matter for our purpose), we have $\tilde\eta\in\mathcal{C}^0([0,T_\kappa];H^{\frac{7}{2}}(\Omega))$. Next, with the definition (\ref{ex1}), and the definition of our fixed point $\tilde v$, we have for $\tilde p=\tilde q\circ({{\tilde\eta}^\kappa})^{-1}$: \begin{align*} \triangle \tilde p&=-{{\tilde u}^\kappa}_i,_j \tilde u_j,_i\ \text{in}\ {{\tilde\eta}^\kappa}(t, \Omega),\\ \tilde p&=0\ \text{on}\ {{\tilde\eta}^\kappa}(t, \Gamma), \end{align*} which shows that $\tilde q\in\mathcal{C}^0([0,T_\kappa];H^{\frac{7}{2}}(\Omega))$. Consequently, $\tilde E$ is a continuous function on $[0,T_\kappa]$. We will then prove that this continuous-in-time energy is controlled by the same type of polynomial law as (41) of \cite{CoSh2005b}, which will provide a control independent of $\kappa$ on a time independent of $\kappa$. \section{A commutation-type lemma.} \label{L7} We will need the following lemma in order to later identify exact-in-time energy laws from terms arising from our convolution by horizontal layers: \begin{lemma} \label{convolutionbis} Let $\delta_0>0$ be given. Independently of $\kappa\in (0,\delta_0)$, there exists $C>0$ such that for any $g\in H^{\frac{1}{2}}((0,1)^2)$ and $f\in H^{\frac{5}{2}}((0,1)^2)$ such that $$\delta_0<\min(\text{dist}(\text{supp}\ fg, \{1\}\times [0,1]), \text{dist}(\text{supp}\ fg, \{0\}\times [0,1])),$$ we have, \begin{align*} \bigl\|\rho_{\frac{1}{\kappa}}\star_h[f g]- f \rho_{\frac{1}{\kappa}}\star_h g\bigr\|_{{\frac{1}{2}},(0,1)^2}\le C\ \|\kappa g\|_{{\frac{1}{2}},(0,1)^2}\|f\|_{{\frac{5}{2}},(0,1)^2}+ C\kappa^{\frac{1}{2}}\ \| g\|_{0,(0,1)^2}\|f\|_{{\frac{5}{2}},(0,1)^2}. \end{align*} \end{lemma} \begin{proof} Let $\Delta=\rho_{\frac{1}{\kappa}}\star_h[f g]- f\ \rho_{\frac{1}{\kappa}}\star_h g$. Then, we have: \begin{align*} \Delta(x)=\int_{x_1-\kappa}^{x_1+\kappa}\ \rho_{\frac{1}{\kappa}}({x_1-y_1})[f(y_1,x_2)-f(x_1,x_2)]\ g(y_1,x_2)\ dy_1, \end{align*} this integral being well-defined because of our condition on the support of $fg$. We then have, since $H^{\frac{3}{2}}$ is embedded in $L^\infty$ in $2d$, \begin{align*} |\Delta(x)|\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \int_{x_1-\kappa}^{x_1+\kappa}\ \rho_{\frac{1}{\kappa}}({x_1-y_1})\ |g(y_1,x_2)|\ dy_1, \end{align*} showing that \begin{align} \|\Delta\|_{0,{(0,1)^2}}&\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \| \rho_{\frac{1}{\kappa}}\star_h |g|\|_{0,{(0,1)^2}}\nonumber\\ &\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \| g\|_{0,{(0,1)^2}}. \label{si1} \end{align} Now, let $p\in\{1,2\}$.
Then, $$\Delta,_p=\rho_{\frac{1}{\kappa}}\star_h[f g,_p]- f\ \rho_{\frac{1}{\kappa}}\star_h g,_p+\rho_{\frac{1}{\kappa}}\star_h[f,_p g]- f,_p\ \rho_{\frac{1}{\kappa}}\star_h g.$$ The difference between the first two terms on the right-hand side of this identity can be treated in a similar fashion as in (\ref{si1}), leading us to: \begin{align} \|\Delta,_p\|_{0,{(0,1)^2}}&\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \|g\|_{1,{(0,1)^2}}+\|\rho_{\frac{1}{\kappa}}\star_h[f,_p g]\|_{0,{(0,1)^2}}+\|f,_p\ \rho_{\frac{1}{\kappa}}\star_h g\|_{0,{(0,1)^2}}\nonumber\\ &\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \|g\|_{1,{(0,1)^2}}+\|f,_p g\|_{0,{(0,1)^2}}+\|f,_p\|_{L^\infty({(0,1)^2})} \| \rho_{\frac{1}{\kappa}}\star_h g\|_{0,{(0,1)^2}}\nonumber\\ &\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \|g\|_{1,{(0,1)^2}}+2 \|f,_p\|_{L^\infty({(0,1)^2})} \| g\|_{0,{(0,1)^2}}\nonumber\\ &\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \|g\|_{1,{(0,1)^2}}+C \|f\|_{{\frac{5}{2}},{(0,1)^2}} \| g\|_{0,{(0,1)^2}}. \label{si2} \end{align} Consequently, we obtain by interpolation from (\ref{si1}) and (\ref{si2}): \begin{align*} \|\Delta\|_{{\frac{1}{2}},{(0,1)^2}} &\le C \kappa \|f\|_{{\frac{5}{2}},(0,1)^2} \|g\|_{{\frac{1}{2}},{(0,1)^2}}+C\kappa^{\frac{1}{2}} \|f\|_{{\frac{5}{2}},{(0,1)^2}} \| g\|_{0,{(0,1)^2}}. \end{align*} \end{proof} We then infer the following result, whose proof follows the same pattern as the previous one: \begin{lemma} \label{convolution} Let $\delta_0>0$ be given. Independently of $\kappa\in (0,\delta_0)$, there exists $C>0$ such that for any $g\in H^s((0,1)^2)$ ($s={\frac{3}{2}},{\frac{5}{2}}$) and for any $f\in H^{\frac{5}{2}}((0,1)^2)$ such that $$\delta_0<\min(\text{dist}(\text{supp}\ fg, \{1\}\times [0,1]), \text{dist}(\text{supp}\ fg, \{0\}\times [0,1])),$$ we have \begin{align*} \bigl\|\rho_{\frac{1}{\kappa}}\star_h[f g]- f \rho_{\frac{1}{\kappa}}\star_h g\bigr\|_{s,(0,1)^2}\le C\ \|\kappa g\|_{s,(0,1)^2}\|f\|_{{\frac{5}{2}},(0,1)^2}+ C\kappa^{\frac{1}{2}} \| g\|_{s-{\frac{1}{2}},(0,1)^2}\|f\|_{{\frac{5}{2}},(0,1)^2}. \end{align*} \end{lemma} \section{Asymptotic regularity of the divergence and curl of $\tilde\eta_{l\kappa}$.} \label{L8} In this section, we state the necessary a priori controls that we have on the divergence and curl of various transformations of $\tilde v$ and $\tilde\eta$. This process has to be justified again, since the functional framework substantially differs from the case with surface tension: in the present case, one time derivative of the velocity corresponds to half a space derivative. We will base our argument on the fact that the divergence and curl of $\tilde u$ satisfy the following transport-type equations: \begin{subequations} \begin{align} D_t\text{div}\tilde u&=0, \label{si11.a}\\ D_t\text{curl}\tilde u+{{\tilde u}^\kappa}_i,_1 \tilde u^2,_i-{{\tilde u}^\kappa}_i,_2\tilde u^1,_i&=0. \label{si11.b} \end{align} \end{subequations} We now study the consequences of these relations on the divergence and curl of $\tilde\eta$ in the interior of $\Omega$, and of each $\tilde\eta_{l\kappa}$, $1\le l\le K$. \subsection{Estimate for $\operatorname{div} (\beta\tilde\eta),_s$} From (\ref{si11.a}), we then infer in $\Omega$ that: $({{\tilde a}^\kappa})_i^j \tilde v,_j^i=0.
$ Thus, for $s=1, 2$ \begin{equation*} ({{\tilde a}^\kappa})_i^j (\beta\tilde v),_{sj}^i=-\beta({{\tilde a}^\kappa})_i^j,_s \tilde v,_{j}^i+[({{\tilde a}^\kappa})_i^j (\beta\tilde v),_{sj}^i-\beta ({{\tilde a}^\kappa})_i^j \tilde v,_{sj}^i], \end{equation*} and by integration in time, \begin{align} ({{\tilde a}^\kappa})_i^j (\beta\tilde\eta),_{sj}^i(t)&=({{\tilde a}^\kappa})_i^j (\beta\tilde\eta),_{sj}^i(0)+\int_0^t [-\beta({{\tilde a}^\kappa})_i^j,_s \tilde v,_{j}^i+(({{\tilde a}^\kappa})_i^j (\beta\tilde v),_{sj}^i-\beta ({{\tilde a}^\kappa})_i^j \tilde v,_{sj}^i)]\nonumber\\ &\ \ \ +\int_0^t {({{\tilde a}^\kappa})_i^j}_t (\beta\tilde\eta),_{sj}^i. \label{diveta1} \end{align} Consequently, \begin{align} \operatorname{div} (\beta\tilde\eta),_s(t)=&[-({{\tilde a}^\kappa})_i^j+\delta_i^j] (\beta\tilde\eta),_{sj}^i(t)+({{\tilde a}^\kappa})_i^j (\beta\tilde\eta),_{sj}^i(0)+\int_0^t {({{\tilde a}^\kappa})_i^j}_t (\beta\tilde\eta),_{sj}^i\nonumber\\ &+\int_0^t [-\beta({{\tilde a}^\kappa})_i^j,_s \tilde v,_{j}^i+(({{\tilde a}^\kappa})_i^j (\beta\tilde v),_{sj}^i-\beta ({{\tilde a}^\kappa})_i^j \tilde v,_{sj}^i)]\nonumber\\ =&[-\int_0^t{({{\tilde a}^\kappa})_i^j}_t] (\beta\tilde\eta),_{sj}^i(t)+({{\tilde a}^\kappa})_i^j (\beta\tilde\eta),_{sj}^i(0)+\int_0^t {({{\tilde a}^\kappa})_i^j}_t (\beta\tilde\eta),_{sj}^i\nonumber\\ &+\int_0^t [-\beta({{\tilde a}^\kappa})_i^j,_s \tilde v,_{j}^i+(({{\tilde a}^\kappa})_i^j (\beta\tilde v),_{sj}^i-\beta ({{\tilde a}^\kappa})_i^j \tilde v,_{sj}^i)], \label{diveta1bis} \end{align} showing that \begin{align} \|\operatorname{div} (\beta\tilde\eta),_s(t)\|_{\frac{3}{2}}&\le C t \sup_{ [0,t]}[\ \|{{\tilde a}^\kappa}_t\|_{\frac{3}{2}}\ \|\beta\tilde\eta\|_{\frac{7}{2}}] +C+Ct\sup_{[0,t]}[\ \|{{\tilde a}^\kappa}\|_{\frac{5}{2}}\|\tilde v\|_{\frac{5}{2}}]\le C t \tilde E(t)+C, \label{divetabeta} \end{align} where we have used our convention stated in Section \ref{L5}. \subsection{Estimate for $\operatorname{div}[{{\tilde\eta}_{l\kappa}},_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]$} Since $\text{det}\nabla\theta_l=1$, we then infer in $(0,1)^2$, with ${{\tilde b_l}^\kappa}=[\nabla{{\tilde\eta}^\kappa}\circ\theta_l]^{-1}$ that \begin{equation*} ({{\tilde b_l}^\kappa})_i^j (\tilde v\circ\theta_l),_j^i=0. \end{equation*} Therefore, as for (\ref{diveta1}), \begin{align} ({{\tilde b_l}^\kappa})_i^j ((\sqrt{\alpha_l}\tilde\eta)\circ\theta_l),_{sj}^i(t)=&\ ({{\tilde b_l}^\kappa})_i^j (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i(0)\nonumber\\ & +\int_0^t {({{\tilde b_l}^\kappa})_i^j}_t (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i -\int_0^t \sqrt{\alpha_l}(\theta_l)({{\tilde b_l}^\kappa})_i^j,_s (\tilde v\circ\theta_l),_{j}^i\nonumber\\ &+\int_0^t [({{\tilde b_l}^\kappa})_i^j (\sqrt{\alpha_l}\tilde v\circ\theta_l),_{sj}^i-\sqrt{\alpha_l}(\theta_l) ({{\tilde b_l}^\kappa})_i^j (\tilde v\circ\theta_l),_{sj}^i]. 
\label{diveta3} \end{align} Consequently, \begin{align*} \rho_{\frac{1}{\kappa}}\star_h[({{\tilde b_l}^\kappa})_i^j ((\sqrt{\alpha_l}\tilde\eta)\circ\theta_l),_{sj}^i](t)= &\rho_{\frac{1}{\kappa}}\star_h\bigl[({{\tilde b_l}^\kappa})_i^j (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i(0)\bigr]\nonumber\\ & +\int_0^t \rho_{\frac{1}{\kappa}}\star_h\bigl[{({{\tilde b_l}^\kappa})_i^j}_t (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i- \sqrt{\alpha_l}(\theta_l)({{\tilde b_l}^\kappa})_i^j,_s (\tilde v\circ\theta_l),_{j}^i\bigr]\nonumber\\ &+\int_0^t \rho_{\frac{1}{\kappa}}\star_h\bigl[({{\tilde b_l}^\kappa})_i^j (\sqrt{\alpha_l}\tilde v\circ\theta_l),_{sj}^i-\sqrt{\alpha_l}(\theta_l) ({{\tilde b_l}^\kappa})_i^j (\tilde v\circ\theta_l),_{sj}^i\bigr]\nonumber\\ =& \int_0^t \rho_{\frac{1}{\kappa}}\star_h\bigl[{({{\tilde b_l}^\kappa})_i^j}_t (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i\bigl] +R, \end{align*} with $\|R\|_{\frac{3}{2}}\le Ct \tilde E(t)+C.$ Next, thanks to Lemma \ref{convolution}, \begin{align} &\bigl\|\rho_{\frac{1}{\kappa}}\star_h\bigl[{({{\tilde b_l}^\kappa})_i^j} (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i(t)\bigr]-{({{\tilde b_l}^\kappa})_i^j}{{\tilde\eta}_{l\kappa}},_{sj}^i(t)\bigr\|_{{\frac{3}{2}},{(0,1)^2}}\nonumber\\ &\le C \|({{\tilde b_l}^\kappa})_i^j \|_{{\frac{5}{2}},(0,1)^2}\ \|\kappa (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i(0)+\kappa \int_0^t (\sqrt{\alpha_l}\tilde v\circ\theta_l),_{sj}^i\|_{{\frac{3}{2}},{(0,1)^2}}\nonumber\\ &\ \ \ + C \|({{\tilde b_l}^\kappa})_i^j \|_{{\frac{5}{2}},(0,1)^2}\kappa^{\frac{1}{2}} \|(\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i(0)+\int_0^t (\sqrt{\alpha_l}\tilde v\circ\theta_l),_{sj}^i\|_{1,{(0,1)^2}}]\nonumber\\ &\le C \kappa^{\frac{1}{2}} \tilde E(t)+C t \tilde E(t)^2+C. \label{diveta6} \end{align} By successively integrating by parts in time and using Lemma \ref{convolution}, \begin{align} &\bigl\|\int_0^t\rho_{\frac{1}{\kappa}}\star_h\bigl[{({{\tilde b_l}^\kappa})_i^j}_t (\sqrt{\alpha_l}\tilde \eta\circ\theta_l),_{sj}^i(t)\bigr]-\int_0^t{({{\tilde b_l}^\kappa})_i^j}_t\rho_{\frac{1}{\kappa}}\star_h\bigl[ (\sqrt{\alpha_l}\tilde \eta\circ\theta_l),_{sj}^i\bigr]\bigr\|_{{\frac{3}{2}},{(0,1)^2}}\nonumber\\ &\le \bigl\|\int_0^t\rho_{\frac{1}{\kappa}}\star_h\bigl[{({{\tilde b_l}^\kappa})_i^j} (\sqrt{\alpha_l}\tilde v\circ\theta_l),_{sj}^i(t)\bigr]-\int_0^t{({{\tilde b_l}^\kappa})_i^j}\rho_{\frac{1}{\kappa}}\star_h\bigl[ (\sqrt{\alpha_l}\tilde v\circ\theta_l),_{sj}^i\bigr]\bigr\|_{{\frac{3}{2}},{(0,1)^2}}\nonumber\\ &+\bigl\|\bigl[\rho_{\frac{1}{\kappa}}\star_h\bigl[{({{\tilde b_l}^\kappa})_i^j} (\sqrt{\alpha_l}\tilde \eta\circ\theta_l),_{sj}^i(t)\bigr]-{({{\tilde b_l}^\kappa})_i^j}\rho_{\frac{1}{\kappa}}\star_h\bigl[ (\sqrt{\alpha_l}\tilde \eta\circ\theta_l),_{sj}^i\bigr]\bigr]_0^t\bigr\|_{{\frac{3}{2}},{(0,1)^2}}\nonumber\\ &\le C \kappa^{\frac{1}{2}} \tilde E(t)+C t \tilde E(t)^2+C. 
\label{diveta5} \end{align} Consequently, with (\ref{diveta3}), (\ref{diveta5}) and (\ref{diveta6}), we infer \begin{align*} \bigl\|\operatorname{div}[{{\tilde\eta}_{l\kappa}},_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]({{\tilde\eta}^\kappa}\circ\theta_l)(t)&-\int_0^t {({{\tilde b_l}^\kappa})_i^j}_t\rho_{\frac{1}{\kappa}}\star_h\bigl[ (\sqrt{\alpha_l}\tilde \eta\circ\theta_l),_{sj}^i]\bigr\|_{{\frac{3}{2}},{(0,1)^2}}\\ & \le C \kappa^{\frac{1}{2}} \tilde E(t)+C t \tilde E(t)^2+C, \end{align*} showing that \begin{align} \bigl\|\operatorname{div}[{{\tilde\eta}_{l\kappa}},_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\bigr\|_{{\frac{3}{2}},{{\tilde\eta}^\kappa}(\theta_l({(0,1)^2}))} \le C \kappa^{\frac{1}{2}} \tilde E(t)+C t \tilde E(t)^2+C. \label{diveta} \end{align} We now study the curl of the same vector fields as in the two previous subsections. \subsection{Estimate for $\operatorname{curl} (\beta\tilde\eta),_s$} From (\ref{si11.b}), we obtain: \begin{equation*} ({{\tilde a}^\kappa})_1^j \tilde v,_j^2-({{\tilde a}^\kappa})_2^j \tilde v,_j^1=\operatorname{curl}\tilde u(0)+\int_0^t [\ -{{\tilde v}^\kappa},_j^i ({{\tilde a}^\kappa})_1^j\tilde v,_k^2({{\tilde a}^\kappa})_i^k+{{\tilde v}^\kappa},_j^i ({{\tilde a}^\kappa})_2^j\tilde v,_k^1({{\tilde a}^\kappa})_i^k] . \end{equation*} Therefore, for $s=1, 2,$ \begin{align*} ({{\tilde a}^\kappa})_1^j (\beta\tilde v),_{sj}^2-({{\tilde a}^\kappa})_2^j (\beta\tilde v),_{sj}^1=&\beta\operatorname{curl}\tilde u(0),_s -\beta[({{\tilde a}^\kappa})_1^j,_s (\tilde v),_{j}^2-({{\tilde a}^\kappa})_2^j,_s (\tilde v),_{j}^1]\\ &+ ({{\tilde a}^\kappa})_1^j [(\beta\tilde v),_{sj}^2-\beta\tilde v,_{sj}^2]-({{\tilde a}^\kappa})_2^j [(\beta\tilde v),_{sj}^1 -\beta\tilde v,_{sj}^1]\\ &+\int_0^t \beta[\ -{{\tilde v}^\kappa},_j^i ({{\tilde a}^\kappa})_1^j\tilde v,_k^2({{\tilde a}^\kappa})_i^k+{{\tilde v}^\kappa},_j^i ({{\tilde a}^\kappa})_2^j\tilde v,_k^1({{\tilde a}^\kappa})_i^k],_s, \end{align*} which implies by integration in time, \begin{align*} ({{\tilde a}^\kappa})_1^j (\beta\tilde\eta),_{sj}^2-({{\tilde a}^\kappa})_2^j (\beta\tilde\eta),_{sj}^1=&\int_0^t [({{\tilde a}^\kappa}_t)_1^j (\beta\tilde\eta),_{sj}^2-({{\tilde a}^\kappa}_t)_2^j (\beta\tilde\eta),_{sj}^1] +t \beta\operatorname{curl}\tilde u(0),_s\\ &-\beta\int_0^t [({{\tilde a}^\kappa})_1^j,_s (\tilde v),_{j}^2-({{\tilde a}^\kappa})_2^j,_s (\tilde v),_{j}^1]+\int_0^t [f+g]\\ &+\int_0^t({{\tilde a}^\kappa})_1^j [(\beta\tilde v),_{sj}^2-\beta\tilde v,_{sj}^2]-\int_0^t({{\tilde a}^\kappa})_2^j [(\beta\tilde v),_{sj}^1 -\beta\tilde v,_{sj}^1], \end{align*} with $\displaystyle f(t')=\int_0^{t'} \beta[\ -{{\tilde v}^\kappa},_j^i ({{\tilde a}^\kappa})_1^j\tilde v,_k^2({{\tilde a}^\kappa})_i^k],_s$ and $\displaystyle g(t')=\int_0^{t'}\beta[ {{\tilde v}^\kappa},_j^i ({{\tilde a}^\kappa})_2^j\tilde v,_k^1({{\tilde a}^\kappa})_i^k],_s$. Now, since $H^{\frac{3}{2}}$ is a Banach algebra in 2d, \begin{align} \bigl\|({{\tilde a}^\kappa})_2^j (\beta\tilde\eta),_{sj}^1-({{\tilde a}^\kappa})_1^j (\beta\tilde\eta),_{sj}^2\bigr\|_{\frac{3}{2}}& \le C \int_0^t \|{{\tilde a}^\kappa}_t\|_{\frac{3}{2}} \|\beta\tilde\eta\|_{\frac{7}{2}} + t\|u_0\|_{\frac{7}{2}}+\int_0^t \|{{\tilde a}^\kappa}\|_{\frac{5}{2}}\|\tilde v\|_{\frac{5}{2}}\nonumber\\ &\ \ \ +\int_0^t\|f+g\|_{\frac{3}{2}}\nonumber\\ &\ \ \ \le N(u_0)+ Ct \tilde E(t)+\int_0^t\|f+g\|_{\frac{3}{2}}. 
\label{curleta1} \end{align} We now notice that \begin{align*} f(t)&=-\int_0^{t} \beta[\ {{\tilde v}^\kappa},_{sj}^i ({{\tilde a}^\kappa})_1^j\tilde v,_k^2({{\tilde a}^\kappa})_i^k+{{\tilde v}^\kappa},_{j}^i ({{\tilde a}^\kappa})_1^j\tilde v,_{sk}^2({{\tilde a}^\kappa})_i^k]-\int_0^t \beta {{\tilde v}^\kappa},_{j}^i \tilde v,_k^2 [({{\tilde a}^\kappa})_1^j({{\tilde a}^\kappa})_i^k],_s\\ &=\int_0^{t} \beta[\ {{\tilde\eta}^\kappa},_{sj}^i [({{\tilde a}^\kappa})_1^j\tilde v,_k^2({{\tilde a}^\kappa})_i^k]_t +\tilde\eta,_{sk}^2[{{\tilde v}^\kappa},_{j}^i ({{\tilde a}^\kappa})_1^j({{\tilde a}^\kappa})_i^k]_t]-\int_0^t \beta {{\tilde v}^\kappa},_{j}^i \tilde v,_k^2 [({{\tilde a}^\kappa})_1^j({{\tilde a}^\kappa})_i^k],_s\\ &\ \ \ +\bigl[\beta[\ {{\tilde\eta}^\kappa},_{sj}^i ({{\tilde a}^\kappa})_1^j\tilde v,_k^2({{\tilde a}^\kappa})_i^k +\tilde\eta,_{sk}^2{{\tilde v}^\kappa},_{j}^i ({{\tilde a}^\kappa})_1^j({{\tilde a}^\kappa})_i^k]\bigr]_0^t, \end{align*} which allows us to infer that \begin{align*} \|f(t)\|_{\frac{3}{2}}&\le \int_0^{t} (\|{{\tilde\eta}^\kappa}\|_{\frac{7}{2}}+\|\beta\tilde\eta\|_{\frac{7}{2}}+\|\tilde\eta\|_{\frac{5}{2}}) [\|{{\tilde a}^\kappa}_t\|_{\frac{3}{2}}\|{{\tilde a}^\kappa}\|_{\frac{3}{2}}\|\tilde v\|_{\frac{5}{2}}+ \|{{\tilde a}^\kappa}\|_{\frac{3}{2}}\|{{\tilde a}^\kappa}\|_{\frac{3}{2}}\|\tilde v_t\|_{\frac{5}{2}}]\\ &\ \ \ +\int_0^t \|\tilde v\|^2_{\frac{5}{2}} \|{{\tilde a}^\kappa}\|_{\frac{5}{2}}^2+(\|{{\tilde\eta}^\kappa}\|_{\frac{7}{2}}+\|\beta\tilde\eta\|_{\frac{7}{2}}+\|\tilde\eta\|_{\frac{5}{2}})(t) \|{{\tilde a}^\kappa}(t)\|^2_{\frac{3}{2}}\|\tilde v\|_{\frac{5}{2}}+ N(u_0)\\ &\ \ \ \le C t \tilde E(t)^2 +N(u_0)+C \tilde E(t). \end{align*} Since $g(t)$ can be estimated in a similar fashion, (\ref{curleta1}) provides us with \begin{align*} \bigl\|({{\tilde a}^\kappa})_2^j (\beta\tilde\eta),_{sj}^1-({{\tilde a}^\kappa})_1^j (\beta\tilde\eta),_{sj}^2\bigr\|_{\frac{3}{2}}\le C t \tilde E(t)^2 +N(u_0). \end{align*} showing that, \begin{align} \bigl\|\operatorname{curl}(\beta\tilde\eta),_{s}\bigr\|_{\frac{3}{2}}&\le \bigl\|\bigl[\int_0^t ({{\tilde a}^\kappa}_t)_2^j \bigr](\beta\tilde\eta),_{sj}^1-\bigl[\int_0^t ({{\tilde a}^\kappa}_t)_1^j \bigr](\beta\tilde\eta),_{sj}^2\bigr\|_{\frac{3}{2}} + C t \tilde E(t)^2 +N(u_0)\nonumber\\ &\le C t \tilde E(t)^2 +N(u_0). \label{curletabeta} \end{align} \subsection{Estimate for $\operatorname{curl}[{{\tilde\eta}_{l\kappa}},_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]$} In a similar fashion as we obtained (\ref{diveta}), we also have here \begin{align} \bigl\|\operatorname{curl}[{{\tilde\eta}_{l\kappa}},_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\bigr\|_{{\frac{3}{2}},{{\tilde\eta}^\kappa}(\Omega)}\le C t \tilde E(t)^2 + N(u_0). 
\label{curleta} \end{align} \subsection{Estimate for $\kappa \operatorname{div}[(\sqrt{\alpha_l}\tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]$} By time differentiating (\ref{diveta3}) twice in time, we find: \begin{align} ({{\tilde b_l}^\kappa})_i^j ((\sqrt{\alpha_l}\tilde v_t)\circ\theta_l),_{sj}^i(t)=&\ -{({{\tilde b_l}^\kappa})_i^j}_{tt} ((\sqrt{\alpha_l}\tilde \eta)\circ\theta_l),_{sj}^i(t)-2{({{\tilde b_l}^\kappa})_i^j}_t ((\sqrt{\alpha_l}\tilde v)\circ\theta_l),_{sj}^i(t)\nonumber\\ & +[{({{\tilde b_l}^\kappa})_i^j}_t (\sqrt{\alpha_l}\tilde\eta\circ\theta_l),_{sj}^i]_t - [\sqrt{\alpha_l}(\theta_l)({{\tilde b_l}^\kappa})_i^j,_s (\tilde v\circ\theta_l),_{j}^i]_t\nonumber\\ &+ [({{\tilde b_l}^\kappa})_i^j (\sqrt{\alpha_l}\tilde v\circ\theta_l),_{sj}^i-\sqrt{\alpha_l}(\theta_l) ({{\tilde b_l}^\kappa})_i^j (\tilde v\circ\theta_l),_{sj}^i]_t. \label{divvt1} \end{align} Therefore, \begin{align} \kappa\|({{\tilde b_l}^\kappa})_i^j ((\sqrt{\alpha_l}\tilde v_t)\circ\theta_l),_{sj}^i(t)\|_{{\frac{3}{2}},{(0,1)^2}}\le&\ C\kappa \tilde E(t)^2\nonumber\\ &\ +\|{({{\tilde b_l}^\kappa})_i^j}_t\|_{{\frac{3}{2}},{(0,1)^2}}\|\kappa ((\sqrt{\alpha_l}\tilde v)\circ\theta_l),_{sj}^i(t)\|_{{\frac{3}{2}},{(0,1)^2}}\nonumber\\ &\ +\|\kappa[({{\tilde b_l}^\kappa})_i^j,_s]_t\|_{{\frac{3}{2}},{(0,1)^2}} \|(\sqrt{\alpha_l}\tilde v\circ\theta_l),_{j}^i\|_{{\frac{3}{2}},{(0,1)^2}}\nonumber\\ &\le\ C \tilde E(t)^2+\|\kappa[({{\tilde b_l}^\kappa})_i^j,_s]_t\|_{{\frac{3}{2}},{(0,1)^2}} \|(\sqrt{\alpha_l}\tilde v\circ\theta_l),_{j}^i\|_{{\frac{3}{2}},{(0,1)^2}}. \label{diveta2bis} \end{align} Next, we for instance have: \begin{align*} \kappa{({{\tilde b_l}^\kappa},_s)_1^1}_t&=\kappa \frac{\tilde v_2^\kappa\circ\theta_l,_{2s}}{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}-\kappa [\text{det}({{\tilde\eta}^\kappa}\circ\theta_l)]_t,_{s}\frac{\tilde \eta_2^\kappa\circ\theta_l,_{2}}{\text{det}^2\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}\\ &\ \ \ -\kappa [\text{det}({{\tilde\eta}^\kappa}\circ\theta_l)]_t\frac{\tilde \eta_2^\kappa\circ\theta_l,_{s2}}{\text{det}^2\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}-\kappa [\text{det}({{\tilde\eta}^\kappa}\circ\theta_l)],_s\frac{\tilde v_2^\kappa\circ\theta_l,_{2}}{\text{det}^2\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}, \end{align*} which shows that, \begin{align*} \|\kappa{({{\tilde b_l}^\kappa},_s)}_t\|_{{\frac{3}{2}},{(0,1)^2}}&\le C \tilde E(t), \end{align*} and with (\ref{diveta2bis}) this implies: \begin{align*} \kappa\|({{\tilde b_l}^\kappa})_i^j ((\sqrt{\alpha_l}\tilde v_t)\circ\theta_l),_{sj}^i(t)\|_{{\frac{3}{2}},{(0,1)^2}}\le&\ C \tilde E(t)^2. \end{align*} and thus, still by writing $\displaystyle {{\tilde\eta}^\kappa}(t)={{\tilde\eta}^\kappa}(0)+\int_0^t \tilde v^\kappa$, we finally have \begin{align} \kappa \bigl\|\operatorname{div}[(\sqrt{\alpha_l}\tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\bigr\|_{{\frac{3}{2}},{{\tilde\eta}^\kappa}(\Omega)}\le C \tilde E(t)^2. 
\label{divkappavt} \end{align} With the same type of arguments, we also have the following asymptotic estimates: \subsection{Estimate for $\kappa \operatorname{curl}[(\sqrt{\alpha_l}\tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]$} \begin{align} \kappa \bigl\|\operatorname{curl}[(\sqrt{\alpha_l}\tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\bigr\|_{{\frac{3}{2}},{{\tilde\eta}^\kappa}(\Omega)}\le C \tilde E(t)^2.\label{curlkappavt} \end{align} \subsection{Estimate for $\kappa^2 \operatorname{div}[(\sqrt{\alpha_l}\tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]$} \begin{align} \kappa^2 \bigl\|\operatorname{div}[(\sqrt{\alpha_l} \tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\bigr\|_{{\frac{5}{2}},{{\tilde\eta}^\kappa}(\Omega)}\le C \tilde E(t)^2. \label{divkappa2vt} \end{align} \subsection{Estimate for $\kappa^2 \operatorname{curl}[(\sqrt{\alpha_l}\tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]$} \begin{align} \kappa^2 \bigl\|\operatorname{curl}[ (\sqrt{\alpha_l} \tilde v_t\circ\theta_l),_s\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\bigr\|_{{\frac{5}{2}},{{\tilde\eta}^\kappa}(\Omega)}\le C \tilde E(t)^2. \label{curlkappa2vt} \end{align} \begin{remark} Since we will time integrate the previous quantities, the absence of a small parameter in front of $\tilde E(t)^2$ is not problematic. \end{remark} \subsection{Asymptotic control of $\operatorname{div}{{\tilde u}^\kappa} ({{\tilde\eta}^\kappa})$} We have \begin{align*} \text{div}{{\tilde u}^\kappa} ({{\tilde\eta}^\kappa})&=({{\tilde a}^\kappa})_k^j{{\tilde v}^\kappa},_j^k\\ &=({{\tilde a}^\kappa})_k^j[\sqrt{\alpha_i}[\rho_{\frac{1}{\kappa}}\star_h (\tilde v_{i\kappa})^k](\theta_i^{-1})],_j\\ &=[({{\tilde b_l}^\kappa})_k^j[\sqrt{\alpha_l}(\theta_l)\rho_{\frac{1}{\kappa}}\star_h (\tilde v_{l\kappa})],_j^k](\theta_l^{-1}). \end{align*} Now, thanks to Lemma \ref{convolution}, this leads us to \begin{align*} \text{div}{{\tilde u}^\kappa} ({{\tilde\eta}^\kappa})&=[\rho_{\frac{1}{\kappa}}\star_h [({{\tilde b_l}^\kappa})_k^j\sqrt{\alpha_l}(\theta_l) (\tilde v_{l\kappa}),_j^k]](\theta_l^{-1})+r_1, \end{align*} with \begin{align*} \|r_1\|_{\frac{5}{2}}\le C \|\nabla{{\tilde\eta}^\kappa}\|_{\frac{5}{2}} ( \|\kappa\nabla \tilde v\|_{\frac{5}{2}} +\kappa^{\frac{1}{2}}\|\nabla\tilde v\|_2)\le C \tilde E(t)^2. \end{align*} Next, we notice that \begin{align*} ({{\tilde b_l}^\kappa})_k^l (E_t^{l\kappa}),_l^k &=({{\tilde b_l}^\kappa})_k^j \rho_{\frac{1}{\kappa}}\star_h[(\sqrt{\alpha_l}\tilde v)(\theta_l)],_j^k\\ &= \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_l}(\theta_l) ({{\tilde b_l}^\kappa})_k^j \tilde v(\theta_l),_j^k]+r_2, \end{align*} with in virtue of Lemma \ref{convolution}, \begin{align*} \|r_2\|_{\frac{5}{2}}\le C \|\nabla{{\tilde\eta}^\kappa}\|_{\frac{5}{2}} ( \|\kappa\nabla \tilde v\|_{\frac{5}{2}} +\kappa^{\frac{1}{2}}\|\nabla\tilde v\|_2)\le C \tilde E(t)^2. \end{align*} Now, since $({{\tilde b_l}^\kappa})_k^l \tilde v(\theta_l),_l^k=0$ in ${(0,1)^2}$, this finally provides us with \begin{equation} \label{divtuk} \|\operatorname{div}{{\tilde u}^\kappa}({{\tilde\eta}^\kappa})\|_{{\frac{5}{2}}}\le C \tilde E(t)^2. 
\end{equation} \section{Asymptotic regularity of $\kappa \tilde v_t$ and of $\kappa^2 \tilde v_t$} \label{L9} This Section is devoted to the asymptotic control of $\kappa\tilde v_t$ and $\kappa^2 \tilde v_t$ in spaces smoother than the natural regularity $H^{\frac{5}{2}}(\Omega)$ for $\tilde v_t$, the idea still being that one degree in the power of $\kappa$ allows one more degree of space regularity. \subsection{Asymptotic control of $\kappa \tilde v_t$ in $H^{\frac{7}{2}}(\Omega)$} Our starting point will be the fact that since $\tilde q=0$ on $\Gamma$, we have for any $l\in \{1,...,K\}$ on $(0,1)\times\{0\}$: \begin{equation*} {\tilde v_t\circ\theta_l}+\frac{{\tilde q\circ\theta_l},_2}{\text{det}\nabla{{\tilde\eta}^\kappa}(\theta_l)}{{{{\tilde\eta}^\kappa}\circ\theta_l},_1^\perp}=0, \end{equation*} where $x^\perp=(-x_2,x_1)$. Therefore, we have on $(0,1)\times\{0\}$: \begin{align*} ({\sqrt{\alpha_l}\tilde v_t\circ\theta_l}),_{111}\cdot{{\tilde\eta}^\kappa}\circ\theta_l,_1&=-[\frac{\sqrt{\alpha_l}(\theta_l){\tilde q\circ\theta_l},_2}{\text{det}\nabla{{\tilde\eta}^\kappa}(\theta_l)}],_{11}{{{{\tilde\eta}^\kappa}\circ\theta_l},_{11}^\perp}\cdot{{\tilde\eta}^\kappa}\circ\theta_l,_1\\ &\ \ \ -[\frac{\sqrt{\alpha_l}(\theta_l){\tilde q\circ\theta_l},_2}{\text{det}\nabla{{\tilde\eta}^\kappa}(\theta_l)}],_{1}{{{{\tilde\eta}^\kappa}\circ\theta_l},_{111}^\perp}\cdot{{\tilde\eta}^\kappa}\circ\theta_l,_1\\ &\ \ \ - \frac{\sqrt{\alpha_l}(\theta_l){\tilde q\circ\theta_l},_2}{\text{det}\nabla{{\tilde\eta}^\kappa}(\theta_l)}{{{{\tilde\eta}^\kappa}\circ\theta_l},_{1111}^\perp}\cdot{{\tilde\eta}^\kappa}\circ\theta_l,_1, \end{align*} showing that \begin{align} \kappa\|({\sqrt{\alpha_l}\tilde v_t\circ\theta_l}),_{111}\cdot{{\tilde\eta}^\kappa}\circ\theta_l,_1\|_{0,\partial{(0,1)^2}}&\le \kappa \tilde E(t)^2 +C\kappa\|\sqrt{\alpha_l}(\theta_l){{{{\tilde\eta}^\kappa}\circ\theta_l},_{1111}^\perp}\|_{0,\partial{(0,1)^2}}. \label{kappa0} \end{align} By definition, \begin{align*} \sqrt{\alpha_l}(\theta_l){{{\tilde\eta}^\kappa}\circ\theta_l},_{1111}=\sqrt{\alpha_l}(\theta_l) \sum_{i=1}^K[\sqrt{\alpha_i}(\theta_l) [\rho_{\frac{1}{\kappa}}\star_h E^{i\kappa}](\theta_i^{-1}\circ\theta_l)],_{1111}, \end{align*} the sum being restricted to the indices $i$ such that $\theta_i({(0,1)^2})$ and $\theta_l({(0,1)^2})$ have a non empty intersection. We then have \begin{align} \sqrt{\alpha_l}(\theta_l){{{\tilde\eta}^\kappa}\circ\theta_l},_{1111}=\sqrt{\alpha_l}(\theta_l) [\sum_{i=1}^K\sqrt{\alpha_i}(\theta_l) [\rho_{\frac{1}{\kappa}}\star_h E^{i\kappa}],_{i_1i_2i_3i_4}(\theta_i^{-1}\circ\theta_l) a_{il,1111}^{i_1i_2i_3i_4}+\Delta], \label{kappa1} \end{align} with $a_{il,1111}^{i_1i_2i_3i_4}=(\theta_i^{-1}\circ\theta_l),_1^{i_1}(\theta_i^{-1}\circ\theta_l),_1^{i_2}(\theta_i^{-1}\circ\theta_l),_1^{i_3}(\theta_i^{-1}\circ\theta_l),_1^{i_4}$, and \begin{align} \kappa\|\sqrt{\alpha_l}(\theta_l)\Delta\|_{0,\partial{(0,1)^2}}&\le C \kappa \|\Omega\|_{\frac{9}{2}} \sup_{i}\|\rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta\circ\theta_i]\|_{3,{(0,1)^2}}\nonumber\\ &\ \ \ +C\kappa \sup_{i}\|\rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta\circ\theta_i]\|_{{\frac{7}{2}},{(0,1)^2}}\nonumber\\ & \le C \kappa (\|\Omega\|_{\frac{9}{2}}+1) \tilde E(t)\nonumber\\ &\le C \kappa \tilde E(t)^2. 
\label{kappa2} \end{align} Now, we notice that for $x_1\in (0,1)$ such that $\theta_l(x_1,0)\in\theta_i({(0,1)^2})$, we necessarily have, since for all $k\in \{1,...,K\}$, $\theta_k([0,1]\times\{0\})=\partial\Omega\cap\theta_k([0,1]^2)$, that $\theta_i^{-1}\circ\theta_l(x_1,0)=(f_{il}(x_1),0)$, showing that on $(0,1)\times\{0\}$, we have $\sqrt{\alpha_l}(\theta_l) a_{il,1111}^{i_1i_2i_3i_4}=0$ except when $i_1=i_2=i_3=i_4=1$. Therefore, (\ref{kappa1}) can be expressed as \begin{align} \sqrt{\alpha_l}(\theta_l){{{\tilde\eta}^\kappa}\circ\theta_l},_{1111}=\sqrt{\alpha_l}(\theta_l) [\sum_{i=1}^K\sqrt{\alpha_i}(\theta_l) [\rho_{\frac{1}{\kappa}}\star_h E^{i\kappa}],_{1111}(\theta_i^{-1}\circ\theta_l) a_{il,1111}^{1111}+\Delta]. \label{kappa3} \end{align} Now, from the properties of our convolution by layers, we have (since the derivatives are horizontal) that \begin{align} \kappa\|[\rho_{\frac{1}{\kappa}}\star_h E^{i\kappa}],_{1111}\|_{0,(0,1)\times\{0\}}\le C \|E^{i\kappa},_{111}\|_{0,(0,1)\times\{0\}}. \label{kappa4} \end{align} Thus, with (\ref{kappa1}), (\ref{kappa2}) and (\ref{kappa4}), we infer \begin{align*} \kappa\|\sqrt{\alpha_l}(\theta_l){{\tilde\eta}^\kappa}\circ\theta_l,_{1111}\|_{0,\partial{(0,1)^2}} \le C \kappa \tilde E(t)+C \tilde E(t), \end{align*} which coupled with (\ref{kappa0}) provides us with \begin{align*} \kappa\|({\sqrt{\alpha_l}\tilde v_t\circ\theta_l}),_{111}\cdot{{\tilde\eta}^\kappa}\circ\theta_l,_1\|_{0,\partial{(0,1)^2}}&\le C \tilde E(t)^2 + C. \end{align*} This provides us the trace estimate: \begin{align} \kappa\|({\sqrt{\alpha_l}\tilde v_t\circ\theta_l}),_{1}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\cdot&{{\tilde\eta}^\kappa}\circ\theta_l,_1(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\|_{2,\partial{{\tilde\eta}^\kappa}(\theta_l({(0,1)^2}))}\nonumber\\ &\le C \tilde E(t)^2 + C. \label{kappa5} \end{align} Consequently with the divergence and curl estimates (\ref{divkappavt}) and (\ref{curlkappavt}) and the trace estimate (\ref{kappa5}), we infer by elliptic regularity: \begin{align} &\kappa\|({\sqrt{\alpha_l}\tilde v_t\circ\theta_l}),_{1}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\|_{{\frac{5}{2}},{{\tilde\eta}^\kappa}(\theta_l({(0,1)^2}))}\le C \tilde E(t)^2 + C. \label{kappavt} \end{align} \begin{remark} It is the presence of $\|\Omega\|_{\frac{9}{2}}$ in the inequalities leading to (\ref{kappa2}) which explains the assumption of $\Omega$ in $H^{\frac{9}{2}}$. It is, however, not essential as will be shown in Section \ref{L12}. One way to see this, is to smooth the initial domain by a convolution with the parameter $\kappa$ to form $\Omega^\kappa$. Then, by the properties of the convolution, $\kappa \|\Omega^\kappa\|_{\frac{9}{2}}\le C \|\Omega\|_{\frac{7}{2}}$. \end{remark} \subsection{Asymptotic control of $\kappa^2 \tilde v_t$ in $H^{\frac{9}{2}}(\Omega)$} In a similar way as in the previous subsection, we would obtain the trace estimate: \begin{align*} \kappa^2\|({\sqrt{\alpha_l}\tilde v_t\circ\theta_l}),_{1}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\cdot&{{\tilde\eta}^\kappa}\circ\theta_l,_1(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\|_{3,\partial{{\tilde\eta}^\kappa}(\theta_l({(0,1)^2}))}\nonumber\\ &\le C \tilde E(t)^2 + C, \end{align*} which coupled with (\ref{divkappa2vt}) and (\ref{curlkappa2vt}) provides \begin{align} &\kappa^2\|({\sqrt{\alpha_l}\tilde v_t\circ\theta_l}),_{1}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\|_{{\frac{7}{2}},{{\tilde\eta}^\kappa}(\theta_l({(0,1)^2}))}\le C \tilde E(t)^2 + C. 
\label{kappa2vt} \end{align} \section{Basic energy law for the control of $\tilde v$ and ${{\tilde\eta}_{l\kappa}}$ independently of $\kappa$.} \label{L10} We will use a different type of energy than in \cite{ChLi2000}, namely: \begin{definition} $$\displaystyle H^\kappa(t)={\frac{1}{2}}\sum_{l=1}^K \int_{(0,1)^2}\xi_l\circ\theta_l|(\tilde v\circ\theta_l),_{111}|^2,$$ where $\xi_l=\xi\ \alpha_l$, $\xi$ being defined in Section \ref{L6}. \end{definition} \begin{remark} The main differences with respect to the energy of \cite{ChLi2000} are that our energy involves no restriction to the tangential components, which allows a more convenient set of estimates, and that it is set in Lagrangian variables. \end{remark} We have: \begin{align*} H^\kappa_t(t)&=\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l \tilde v_t\circ\theta_l,_{111}\tilde v\circ\theta_l,_{111}\nonumber\\ &= -\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l (({{\tilde a}^\kappa})_j^k\tilde q,_k)\circ\theta_l,_{111}{\tilde v}_j\circ\theta_l,_{111}\nonumber\\ &= -\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l [({{\tilde b_l}^\kappa})_j^p\tilde q\circ\theta_l,_p],_{ 111}{\tilde v}_j\circ\theta_l,_{111}, \end{align*} where ${{\tilde b_l}^\kappa}=[\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^{-1}$. Next, we see that $H^\kappa_t=-[H_1+H_2+H_3]$, with \begin{align*} H_1(t)&=\sum_{l=1}^K \int_{(0,1)^2} \xi_l(\theta_l) ({{\tilde b_l}^\kappa})_j^p,_{111}\tilde q\circ\theta_l,_p{\tilde v}_j\circ\theta_l,_{111},\\ H_2(t)&=\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l [({{\tilde b_l}^\kappa})_j^p\tilde q\circ\theta_l,_{p111}]{\tilde v}_j\circ\theta_l,_{111},\\ H_3(t)&=\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l [[({{\tilde b_l}^\kappa})_j^p\tilde q\circ\theta_l,_p],_{ 111}-({{\tilde b_l}^\kappa})_j^p\tilde q\circ\theta_l,_{p111}-({{\tilde b_l}^\kappa})_j^p,_{111}\tilde q\circ\theta_l,_p]{\tilde v}_j\circ\theta_l,_{111}. \end{align*} We immediately have for the third term: \begin{equation} \label{ecl1} |H_3(t)|\le C \tilde E(t)^2. \end{equation} Next, for $H_2$, since $(\xi_l \tilde q)\circ\theta_l=0$ on $\partial (0,1)^2$, \begin{align*} H_2(t)&=-\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l\ ({{\tilde b_l}^\kappa})_j^p\tilde q\circ\theta_l,_{111}{\tilde v}_j\circ\theta_l,_{p111}\\ &\ \ \ -\sum_{l=1}^K \int_{(0,1)^2} [\xi_l\circ\theta_l ({{\tilde b_l}^\kappa})_j^p],_p \tilde q\circ\theta_l,_{111}{\tilde v}_j\circ\theta_l,_{111}. \end{align*} We then notice that from the divergence condition, we have $({{\tilde b_l}^\kappa})_j^p\ \tilde v_j\circ\theta_l,_p=0$ in $(0,1)^2$, implying \begin{align*} H_2(t)&=\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l\ \tilde q\circ\theta_l,_{111}({{\tilde b_l}^\kappa})_j^p,_{111} {\tilde v}_j\circ\theta_l,_{p}\\ &\ \ \ +\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l\ \tilde q\circ\theta_l,_{111}({{\tilde b_l}^\kappa})_j^p,_{11} {\tilde v}_j\circ\theta_l,_{p1}\\ &\ \ \ +\sum_{l=1}^K \int_{(0,1)^2} \xi_l\circ\theta_l\ \tilde q\circ\theta_l,_{111}({{\tilde b_l}^\kappa})_j^p,_{1} {\tilde v}_j\circ\theta_l,_{p11}\\ &\ \ \ -\sum_{l=1}^K \int_{(0,1)^2} [\xi_l\circ\theta_l ({{\tilde b_l}^\kappa})_j^p],_p \tilde q\circ\theta_l,_{111}{\tilde v}_j\circ\theta_l,_{111}. \end{align*} Now, in a way similar to (\ref{int}), we have for any $f\in H^{\frac{1}{2}}((0,1)^2)$ \begin{equation*} \|\xi_l\circ\theta_l f,_1\|_{H^{\frac{1}{2}} ((0,1)^2)'}\le C \|f\|_{H^{\frac{1}{2}} ((0,1)^2)}, \end{equation*} since the derivative is in the horizontal direction.
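One way to see this (we only sketch the argument, under the assumption, in keeping with the support conditions used throughout, that $\xi_l\circ\theta_l$ is supported away from the lateral sides $\{0,1\}\times[0,1]$) is to pair $\xi_l\circ\theta_l\, f,_1$ with a test function $\phi\in H^{\frac{1}{2}}((0,1)^2)$, and to note that only a full horizontal derivative is involved, which can be split into two half horizontal derivatives, one acting on $f$ and one on $(\xi_l\circ\theta_l)\phi$, for instance through a Fourier transform in the horizontal variable after extension to the horizontal strip: denoting by $|\partial_1|^{\frac{1}{2}}$ this half-derivative, \begin{equation*} \Bigl|\int_{(0,1)^2} \xi_l\circ\theta_l\ f,_1\ \phi \Bigr|\le \bigl\||\partial_1|^{\frac{1}{2}} f\bigr\|_{0}\ \bigl\||\partial_1|^{\frac{1}{2}} [(\xi_l\circ\theta_l)\phi]\bigr\|_{0}\le C\ \|f\|_{{\frac{1}{2}},(0,1)^2}\ \|\phi\|_{{\frac{1}{2}},(0,1)^2}, \end{equation*} since the half horizontal derivative is controlled by the full $H^{\frac{1}{2}}$ norm.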
By applying this result to $f={({{\tilde b_l}^\kappa})_j^p},_{11}$ for the first integral appearing in the equality above, and by using the continuous embedding of $H^1$ into $L^6$ ($6\in (1,\infty)$), and of $H^{\frac{1}{2}}$ into $L^3$ ($ 3\in (1,4)$) for the other integrals, we then obtain: \begin{align} |H_2(t)|&\le C \| {{{\tilde a}^\kappa}}\|_{\frac{5}{2}} \|\tilde q\|_{\frac{7}{2}} \|{\tilde v}\|_3\le C \tilde E(t)^3. \label{ecl2} \end{align} We now come to $H_1$, which will require more care, and will provide us with the regularity of ${{\tilde\eta}_{l\kappa}}(\Omega)$ in $H^{\frac{7}{2}}$ independently of $\kappa$. We have: \begin{align} H_1(t)&= \int_{(0,1)^2} \xi_l\circ\theta_l ({{\tilde b_l}^\kappa})_j^p,_{111} [\tilde q\circ\theta_l],_p\ {\tilde v}_j\circ\theta_l,_{111}\nonumber\\ &=H_{11}(t)+H_{12}(t) -\int_{(0,1)^2} \xi_l\circ\theta_l \Delta_j^p \ [\tilde q\circ\theta_l],_p\ {\tilde v}_j\circ\theta_l,_{111}, \label{ecl3} \end{align} with \begin{align*} H_{11}(t)&=\int_{(0,1)^2} \xi_l\circ\theta_l \frac{(\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p,_{111}}{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}[\tilde q\circ\theta_l],_p\ {\tilde v}_j\circ\theta_l,_{111},\nonumber\\ H_{12}(t)&=-\int_{(0,1)^2} \xi_l\circ\theta_l (\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)],_{111}}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}[\tilde q\circ\theta_l],_p\ {\tilde v}_j\circ\theta_l,_{111}, \end{align*} and \begin{align*} \Delta_j^p=\bigl[\frac{(\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p}{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}\bigr],_{111}-\frac{(\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p,_{111}}{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}+(\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)],_{111}}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}, \end{align*} so that \begin{align} \bigl|\int_{(0,1)^2} \Delta_j^p\ [(\xi_l\tilde q)\circ\theta_l],_p\ {\tilde v}_j\circ\theta_l,_{111}\bigr|\le C \|\tilde v\|_3. \label{ecl4} \end{align} We now turn our attention to the other terms of (\ref{ecl3}), and to shorten notations, we will set: $\tilde Q_l=\tilde q\circ\theta_l$. We first study the perturbation $H_{12}$, which would not appear if the volume preserving condition was respected by our smoothing by convolution. It turns out that we do need the double convolution by layers appearing in the definition of $v^\kappa$ in order to identify time derivatives of space energies. We first notice that since $\theta_l$ does not depend on $t$, we have: \begin{equation*} ({{\tilde\eta}^\kappa}\circ\theta_l)_t={{\tilde u}^\kappa}\circ({{\tilde\eta}^\kappa}\circ\theta_l), \end{equation*} from which we infer in $(0,1)^2$, since $\theta_l$ is volume preserving, \begin{equation} [\text{det}(\nabla{{\tilde\eta}^\kappa}\circ\theta_l)]_t=\text{div}{{\tilde u}^\kappa} ({{\tilde\eta}^\kappa}\circ\theta_l)\ \text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l). 
\label{ec5} \end{equation} \subsection{Study of $H_{12}$.} After an integration by parts in time and the use of (\ref{ec5}), we have: \begin{align*} \int_0^t H_{12} =\sum_{i=1}^3 H_{12}^i+R_{12}, \end{align*} with \begin{align*} H_{12}^1&=\int_0^t\int_{(0,1)^2} \xi_l\circ\theta_l (\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{\text{div}{{\tilde u}^\kappa} ({{\tilde\eta}^\kappa}\circ\theta_l)\ [\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)],_{111}}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}\tilde Q_l,_p\ {\tilde \eta}_j\circ\theta_l,_{111},\\ H_{12}^2&=\int_0^t\int_{(0,1)^2} \xi_l\circ\theta_l (\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{[\text{div}{{\tilde u}^\kappa} ({{\tilde\eta}^\kappa}\circ\theta_l)],_{111}\ \text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}\tilde Q_l,_p\ {\tilde \eta}_j\circ\theta_l,_{111},\\ H_{12}^3&=-\int_{(0,1)^2} \xi_l\circ\theta_l (\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{ [\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)],_{111}}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}\tilde Q_l,_p\ {\tilde \eta}_j\circ\theta_l,_{111}(t), \end{align*} and \begin{align} |R_{12}(t)|\le C t \tilde E(t)^3 +C. \label{ecl6} \end{align} \subsubsection{\bf Study of $H_{12}^1$.} For the sake of conciseness, we denote $$\displaystyle A_{jl}=\xi_l\circ\theta_l(\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p \frac{\text{div}{{\tilde u}^\kappa} ({{\tilde\eta}^\kappa}\circ\theta_l)}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}[\tilde q\circ\theta_l],_p.$$ We then see, by expanding the third space derivative of the determinant in the integrand of $H_{12}^1$, that $H_{12}^1=\sum_{i=1}^4 H_{12}^{1i}+R_{12}^1$, with the $H_{12}^{1i}$ being estimated as $H_{12}^{11}$, which we detail below, and $R_{12}^1$ being a remainder estimated as in (\ref{ecl6}). By definition of ${{\tilde\eta}^\kappa}$ we have, if we denote $$E^{i\kappa}=\rho_{\frac{1}{\kappa}}\star_h ((\sqrt{\alpha_i} \tilde\eta)\circ\theta_i),$$ \begin{align*} H_{12}^{11}&=\int_0^t\int_{(0,1)^2} \bigl[ A_{jl} ({{\tilde\eta}^\kappa}_2\circ\theta_l),_2\ \bigl[\bigl[\sqrt{\alpha_i}(\theta_i)\ \rho_{\frac{1}{\kappa}}\star_h E_1^{i\kappa}\bigr](\theta_i^{-1}\circ\theta_l)\bigr],_{1111}\\ &\hskip 3cm [{\tilde\eta_j\circ\theta_i(\theta_i^{-1}\circ\theta_l)}],_{111}\bigr]+R_{12}^{1}, \end{align*} with $|R_{12}^1|\le Ct$ and where, because of the term $\sqrt{\alpha_i}(\theta_l)$, the only indices $i$ and $l$ appearing in this sum are the ones for which $\theta_l((0,1)^2)\cap\theta_i((0,1)^2)\ne\emptyset$. Only such indices will be considered later on when such terms arise.
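Before estimating these terms, let us point out the elementary duality property of the horizontal convolution that will be used repeatedly below (in particular to obtain (\ref{ecl16bis})): since $\rho$ is even, for functions $F$ and $G$ on $(0,1)^2$ whose horizontal supports lie at distance at least $\kappa$ from the lateral sides $\{0,1\}\times[0,1]$ (so that all the convolutions involved are well defined), \begin{equation*} \int_{(0,1)^2} \bigl[\rho_{\frac{1}{\kappa}}\star_h F\bigr]\ G =\int_{(0,1)^2} F\ \bigl[\rho_{\frac{1}{\kappa}}\star_h G\bigr], \end{equation*} which is what allows us to transfer the smoothing from the highest-order factor onto the lower-order ones.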
From our assumed regularity on $\Omega$ in $H^{\frac{9}{2}}$, we then have \begin{align*} H_{12}^{11}&=\int_0^t\int_{(0,1)^2} \bigl[ A_{jl} ({{\tilde\eta}^\kappa}_2\circ\theta_l),_2\ \bigl[\bigl[\rho_{\frac{1}{\kappa}}\star_h E_1^{i\kappa}\bigr](\theta_i^{-1}\circ\theta_l)\bigr],_{1111}\\ &\hskip 3cm [\sqrt{\alpha_i}{\tilde\eta_j\circ\theta_i(\theta_i^{-1}\circ\theta_l)}],_{111}\bigr]+R_{12}^{11}, \end{align*} with $|R_{12}^{11}|\le C t \tilde E(t)^2.$ We next have, since the charts $\theta_i$ are volume preserving, \begin{align*} H_{12}^{11}&=\int_0^t\int_{\theta_i^{-1}(\theta_l(0,1)^2)} \bigl[ [A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i)\ [\rho_{\frac{1}{\kappa}}\star_h E_1^{i\kappa}],_{j_1j_2j_3j_4}\\ &\hskip 4cm c^{j_1j_2j_3j_4}_{il,1111}c^{i_1i_2i_3}_{il,111} [\sqrt{\alpha_i}\tilde\eta_j\circ\theta_i],_{i_1i_2i_3}\bigl]+R, \end{align*} with $|R|\le C t \tilde E(t)^2$ and \begin{subequations} \label{cij} \begin{align} c_{il,1111}^{j_1j_2j_3j_4}&=[(\theta_i^{-1}\circ\theta_l)^{j_1},_1(\theta_i^{-1}\circ\theta_l)^{j_2},_1(\theta_i^{-1}\circ\theta_l)^{j_3},_1(\theta_i^{-1}\circ\theta_l)^{j_4},_1](\theta_l^{-1}\circ\theta_i),\\ c_{il,111}^{i_1i_2i_3}&=[(\theta_i^{-1}\circ\theta_l)^{i_1},_1(\theta_i^{-1}\circ\theta_l)^{i_2},_1(\theta_i^{-1}\circ\theta_l)^{i_3},_1](\theta_l^{-1}\circ\theta_i). \end{align} \end{subequations} Next, we notice that the term $A_{jl}(\theta_l^{-1}\circ\theta_i)$ introduces a factor $\alpha_l\circ\theta_i$ which is nonzero only if $x\in \theta_i^{-1}(\theta_l(0,1)^2)$, leading us to \begin{align*} H_{12}^{11}&=\int_0^t\int_{(0,1)^2} \bigl[ [A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i)\ \rho_{\frac{1}{\kappa}}\star_h E_1^{i\kappa},_{j_1j_2j_3j_4}\\ &\hskip 4cm c^{j_1j_2j_3j_4}_{il,1111}c^{i_1i_2i_3}_{il,111} [\sqrt{\alpha_i}\tilde\eta_j\circ\theta_i],_{i_1i_2i_3}\bigl]+R, \end{align*} where $\theta_l^{-1}\circ\theta_i$ is extended outside of $\theta_i^{-1}(\theta_l(0,1)^2)$ in any fashion. This argument of replacing an integral on a subset of $(0,1)^2$ by an integral on $(0,1)^2$ will be implicitly repeated at other places later on. Now, since $\rho$ is even, \begin{align} H_{12}^{11}&=\int_0^t\int_{(0,1)^2} E_1^{i\kappa},_{j_1j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h\bigl[ [A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i)\ \nonumber\\ &\hskip 5cm c^{j_1j_2j_3j_4}_{il,1111}c^{i_1i_2i_3}_{il,111} [\sqrt{\alpha_i}\tilde\eta_j\circ\theta_i],_{i_1i_2i_3}\bigl]+R. \label{ecl16bis} \end{align} Now, let us call $f=[A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) c^{j_1j_2j_3j_4}_{il,1111}c^{i_1i_2i_3}_{il,111}$ and $g=[\sqrt{\alpha_i}\tilde\eta_1\circ\theta_i],_{i_1i_2i_3}$. We notice that $\|f\|_{{\frac{5}{2}},{(0,1)^2}}$ is the natural norm associated to $\tilde E(t)$. Here we cannot use directly Lemma \ref{convolution} for the case where all the $j_i=2$, since $E^{i\kappa},_{2222}$ is not necessarily in $H^{\frac{1}{2}}((0,1)^2)'$ a priori. Instead, we write \begin{align*} f(y_1,x_2)=f(x_1,x_2)+(y_1-x_1)\ f,_1(x_1,x_2)+\int_{x_1}^{y_1}[f,_1(x,x_2)-f,_1(x_1,x_2)]dx, \end{align*} which shows that on ${(0,1)^2}$: \begin{align*} \rho_{\frac{1}{\kappa}}\star_h[fg] (x_1,x_2)=&f(x_1,x_2) \rho_{\frac{1}{\kappa}}\star_h g (x_1,x_2) \\ &+f,_1(x_1,x_2)\int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) (y_1-x_1)g(y_1,x_2) dy_1\\ &+\int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) \int_{x_1}^{y_1}[f,_1(x,x_2)-f,_1(x_1,x_2)]dx\ g(y_1,x_2) dy_1.
\end{align*} This implies \begin{align} H_{12}^{11}&=\int_0^t\int_{(0,1)^2} \bigl[ [A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i)\ E_1^{i\kappa},_{j_1j_2j_3j_4}\nonumber\\ &\hskip 3cm c^{j_1j_2j_3j_4}_{il,1111}c^{i_1i_2i_3}_{il,111} \rho_{\frac{1}{\kappa}}\star_h [\sqrt{\alpha_i}(\theta_i)[\tilde\eta_1\circ\theta_i],_{i_1i_2i_3}]\bigl] +R-R_1-R_2, \label{ecl16ter} \end{align} with \begin{align*} R_1=&\int_0^t\int_{(0,1)^2} \bigl[E_1^{i\kappa},_{j_1j_2j_3j_4}(x_1,x_2) f,_1(x_1,x_2)\nonumber\\ &\hskip 3cm \int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) (y_1-x_1)g(y_1,x_2) dy_1\bigr] \ dx_1dx_2,\nonumber\\ R_2=& \int_0^t\int_{(0,1)^2} \bigl[E_1^{i\kappa},_{j_1j_2j_3j_4}(x_1,x_2)\nonumber\\ &\hskip 2cm \int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) \int_{x_1}^{y_1}[f,_1(x,x_2)-f,_1(x_1,x_2)]dx\ g(y_1,x_2) dy_1\bigr]\ dx_1dx_2. \end{align*} Now, for $R_2$, we notice that since $$|f,_1(x,x_2)-f,_1(x_1,x_2)|\le C |f|_{{\frac{5}{2}},{(0,1)^2}} |x-x_1|^{\frac{1}{2}}\le C |{{\tilde\eta}^\kappa}|_{{\frac{7}{2}}} |x-x_1|^{\frac{1}{2}},$$ we have \begin{align} R_2\le & C\int_0^t\|{{\tilde\eta}^\kappa}\|_{\frac{7}{2}} \int_{(0,1)^2} \bigl[|E_1^{i\kappa},_{j_1j_2j_3j_4}(x_1,x_2)|\int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) \kappa^{\frac{3}{2}} |g(y_1,x_2)| dy_1\bigr] dx_1dx_2\nonumber\\ \le & C\int_0^t\|{{\tilde\eta}^\kappa}\|_{\frac{7}{2}} \int_{(0,1)^2} \bigl[|\kappa^{\frac{3}{2}} E_1^{i\kappa},_{j_1j_2j_3j_4}|\ \rho_{\frac{1}{\kappa}}\star_h |g|\bigr]\nonumber\\ \le & C\int_0^t\|{{\tilde\eta}^\kappa}\|_{\frac{7}{2}} \kappa^{\frac{3}{2}} \bigl[\|E_1^{i\kappa},_{j_1j_2j_3j_4}(0)\|_{0,{(0,1)^2}}+\int_0^t \|(E_1^{i\kappa})_t,_{j_1j_2j_3j_4} \|_{0,{(0,1)^2}}\bigr] \|g\|_{0,{(0,1)^2}}\nonumber\\ \le & Ct \kappa^{\frac{3}{2}} \tilde E(t) +Ct \tilde E(t)^3, \label{ecl16qua} \end{align} where we have used the fact that $\|\kappa^{\frac{3}{2}} \sqrt{\alpha_i}\tilde v_1\circ\theta_i],_{i_1i_2i_3i_4}\|_{0,{(0,1)^2}}$ is contained in the definition of $\tilde E(t)$. We now turn our attention to $R_1$. We first remark that \begin{align*} f,_1=& ([A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) c^{i_1i_2i_3}_{il,111}),_1 c^{j_1j_2j_3j_4}_{il,1111}\\ &+\sum_{n=1}^4\bigl[ [A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) c^{i_1i_2i_3}_{il,111}[(\theta_i^{-1}\circ\theta_l),_1^{j_n}(\theta_l^{-1}\circ\theta_i)],_1\\ &\hskip 5cm \Pi_{p\ne n}(\theta_i^{-1}\circ\theta_l),_1^{j_p}(\theta_l^{-1}\circ\theta_i)\bigr], \end{align*} which implies that \begin{align*} R_1=R_1^1+\sum_{n=1}^4 R_1^{i_n}, \end{align*} with \begin{align*} R_1^1&=\int_0^t\int_{(0,1)^2} c^{j_1j_2j_3j_4}_{il,1111} \bigl[E_1^{i\kappa},_{j_1j_2j_3j_4}(x_1,x_2) ([A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) c^{i_1i_2i_3}_{il,111}),_1\nonumber\\ &\hskip 3cm \int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) (y_1-x_1)g(y_1,x_2) dy_1\bigr] \ dx_1dx_2,\nonumber\\ R_1^{i_n}&= \int_0^t\int_{(0,1)^2} \bigl[E_1^{i\kappa},_{j_1j_2j_3j_4}(x_1,x_2) A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) \nonumber\\ &\hskip 3cm [(\theta_i^{-1}\circ\theta_l),_1^{j_n}(\theta_l^{-1}\circ\theta_i)],_1 \Pi_{p\ne n}(\theta_i^{-1}\circ\theta_l),_1^{j_p}(\theta_l^{-1}\circ\theta_i)\nonumber\\ &\hskip 3cm c^{i_1i_2i_3}_{il,111}\int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) (y_1-x_1)g(y_1,x_2) dy_1\bigr] \ dx_1dx_2. \end{align*} Let us study $R_1^1$. 
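Before doing so, let us record an elementary bound for the kernel-weighted averages appearing in $R_1$: for averages of the form \begin{equation*} \int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1)\ (y_1-x_1)\ g(y_1,x_2)\ dy_1, \end{equation*} the change of variables $y_1=x_1+z$ (together with the horizontal support properties of $g$, as in Lemma \ref{convolutionbis}) shows that such an average is bounded by $C\kappa\|g\|_{0,(0,1)^2}$ in $L^2((0,1)^2)$ and, the derivatives falling on $g$ after differentiation under the integral sign, by $C\kappa\|g\|_{1,(0,1)^2}$ in $H^1((0,1)^2)$; by interpolation, it is therefore bounded by $C\kappa\|g\|_{{\frac{1}{2}},(0,1)^2}$ in $H^{\frac{1}{2}}((0,1)^2)$, which is the interpolation bound invoked below.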
If we denote $h(x_1,x_2)=\int_\mathbb R \rho_{\frac{1}{\kappa}}(y_1-x_1) (y_1-x_1)g(y_1,x_2) dy_1$, since $(E_1^{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ\theta_l),_1(\theta_l^{-1}\circ\theta_i)=(\theta_i^{-1}\circ\theta_l),_1^{j_1}(\theta_l^{-1}\circ\theta_i) E_1^{i\kappa},_{j_1j_2j_3j_4}$, \begin{align*} R_1^1&=\int_0^t\int_{(0,1)^2} c^{j_2j_3j_4}_{il,111} \bigl[(E_1^{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ\theta_l),_1(\theta_l^{-1}\circ\theta_i) \nonumber\\ &\hskip 3cm ([A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) c^{i_1i_2i_3}_{il,111}),_1 h\bigr]\nonumber\\ &=\int_0^t\int_{(0,1)^2} \bigl[(E_1^{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ\theta_l),_1 \nonumber\\ &\hskip 3cm [c^{j_2j_3j_4}_{il,111}([A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) c^{i_1i_2i_3}_{il,111}),_1 h](\theta_i^{-1}\circ\theta_l)\bigr]. \end{align*} Since the derivative of $(E_1^{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ\theta_l)$ is in the horizontal direction, we infer similarly as in (\ref{int}) that \begin{align*} R_1^1 &\le \int_0^t\int_{(0,1)^2} \bigl[\bigl\|E_1^{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ\theta_l\bigr\|_{{\frac{1}{2}},{(0,1)^2}} \nonumber\\ &\hskip 3cm \bigl\|[c^{j_2j_3j_4}_{il,111}([A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i) c^{i_1i_2i_3}_{il,111}),_1 h](\theta_i^{-1}\circ\theta_l)\bigr\|_{{\frac{1}{2}},{(0,1)^2}}\bigr]. \end{align*} Since we have by interpolation $\|h\|_{{\frac{1}{2}},{(0,1)^2}}\le C \kappa \|g\|_{{\frac{1}{2}},{(0,1)^2}}$, we then infer: \begin{align} |R_1^1|\le C t \tilde E(t)^2. \label{ecl16qui} \end{align} In a similar fashion, for $R_1^{i_1}$ we can identify an horizontal derivative: \begin{align*} &E_1^{i\kappa},_{j_1j_2j_3j_4}[(\theta_i^{-1}\circ\theta_l),_1^{j_{1}}(\theta_l^{-1}\circ\theta_i)],_1 \Pi_{p=2}^4(\theta_i^{-1}\circ\theta_l),_1^{j_{p}}(\theta_l^{-1}\circ\theta_i)\\ &=[(\theta_i^{-1}\circ\theta_l),_1^{j_{1}}(\theta_l^{-1}\circ\theta_i)],_1 \bigl[\Pi_{p=2}^3(\theta_i^{-1}\circ\theta_l),_1^{j_{p}} (E_1^{i\kappa},_{j_1j_2j_3}\circ\theta_i^{-1}\circ\theta_l),_1\bigr](\theta_l^{-1}\circ\theta_i), \end{align*} which leads for the same reasons as for $R_1^1$ to $|R_1^{i_1}|\le C t \tilde E(t)^2$. Since the other $R_1^{i_n}$ are similar in structure, we have \begin{align} |R_1^{i_n}|\le C t \tilde E(t)^2. \label{ecl16hex} \end{align} Consequently from (\ref{ecl16ter}), (\ref{ecl16qua}), (\ref{ecl16qui}) and (\ref{ecl16hex}), we infer \begin{align*} H_{12}^{11}&=\int_0^t\int_{(0,1)^2} \bigl[ [A_{jl}({{\tilde\eta}^\kappa}_2\circ\theta_l),_2](\theta_l^{-1}\circ\theta_i)\ E_1^{i\kappa},_{j_1j_2j_3j_4}\\ &\hskip 3cm c^{j_1j_2j_3j_4}_{il,1111}c^{i_1i_2i_3}_{il,111} \rho_{\frac{1}{\kappa}}\star_h [\sqrt{\alpha_i}(\theta_i)[\tilde\eta_1\circ\theta_i],_{i_1i_2i_3}]\bigl] +r_{12}^{11}, \end{align*} with \begin{align} |r_{12}^{11}(t)|\le C t \tilde E(t)^3+C t\kappa^{\frac{3}{2}} \tilde E(t). \label{ecl16sept} \end{align} Since $E_1^{i\kappa},_{j_1j_2j_3j_4} c^{j_1j_2j_3j_4}_{il,1111}=c^{j_2j_3j_4}_{il,111} (E_1^{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ\theta_l),_1(\theta_l^{-1}\circ\theta_i)$, we infer as for $R_1^1$ that \begin{align*} |H_{12}^{11}(t)|&\le C t \sup_{j,l}\sup_{[0,t]}\|A_{jl}\|_{{\frac{5}{2}},(0,1)^2}\|{{\tilde\eta}_{l\kappa}}\|^2_{\frac{7}{2}}+|r_{12}^{11}|\le C t \tilde E(t)^3+C t\kappa^{\frac{3}{2}} \tilde E(t). 
\end{align*} The other $H_{12}^{1i}$ are estimated in the same fashion, leading us to \begin{align} |H_{12}^1(t)|\le C t \tilde E(t)^3+C t\kappa^{\frac{3}{2}} \tilde E(t). \label{ecl7} \end{align} \subsubsection{\bf Study of $H_{12}^2$.} Next, for $H_{12}^2$, we first notice from the asymptotic regularity result (\ref{divtuk}) on $\operatorname{div}{{\tilde u}^\kappa}({{\tilde\eta}^\kappa})$, that $H_{12}^2$ can be treated in the same fashion as $H_{12}^1$, leading to \begin{align} |H_{12}^2(t)|\le C t \tilde E(t)^3+C t\kappa^{\frac{3}{2}} \tilde E(t). \label{ecl8} \end{align} \subsubsection{\bf Study of $H_{12}^3$.} We simply write \begin{align*} -H_{12}^3&=\int_{(0,1)^2} \xi_l(\theta_l)(\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{ [\text{det}\nabla\theta_l],_{111}}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}\tilde Q_l,_p\ {\tilde \eta}_j\circ\theta_l,_{111}(t)+R_{12}^3\\ &\ \ \ +\int_{(0,1)^2} \xi_l(\theta_l) (\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{\int_0^t [\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)],_{111}\text{div}({{\tilde u}^\kappa}\circ\theta_l)}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}\tilde Q_l,_p\ {\tilde \eta}_j\circ\theta_l,_{111}(t)\\ &\ \ \ +\int_{(0,1)^2} \xi_l(\theta_l) (\text{Cof}\nabla({{\tilde\eta}^\kappa}\circ\theta_l))_j^p\frac{\int_0^t \text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)[\text{div}({{\tilde u}^\kappa}\circ\theta_l)],_{111}}{[\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)]^2}\tilde Q_l,_p\ {\tilde \eta}_j\circ\theta_l,_{111}(t), \end{align*} with $R_{12}^3$ being bounded by a term similar to the right-hand side of (\ref{ecl8}). We also see that the first term of this equality can be estimated by a bound similar to the right-hand side of (\ref{ecl8}). The third term is treated in a way similar to $H_{12}^1$, in order to put a convolution in front of $(\tilde\eta_j\circ\theta_l),_{111}$. There is no difference linked to the fact that the integral from $0$ to $t$ does not apply to all terms as for $H_{12}^1$, since $\rho_{\frac{1}{\kappa}}$ and the $\theta_l$ do not depend on time. The fourth term follows the same treatment as $H_{12}^2$, leading us to \begin{align*} |H_{12}^3(t)|\le C t \tilde E(t)^3+C t\kappa^{\frac{3}{2}} \tilde E(t), \end{align*} which with (\ref{ecl7}) and (\ref{ecl8}) implies \begin{align} |H_{12}(t)|\le C t \tilde E(t)^3+C t\kappa^{\frac{3}{2}} \tilde E(t).
\label{ecl9} \end{align} \subsection{Study of $H_{11}$.} As for $H_{11}$, we have if we still denote $E^{i\kappa}=\rho_{\frac{1}{\kappa}}\star_h ((\sqrt{\alpha_i} \tilde\eta)\circ\theta_i)$ and $\epsilon^{mn}$ the sign of the permutation between $(m,n)$ and $(1,2)$: \begin{align*} H_{11}(t)&=\epsilon^{mn}\epsilon^{rs} \int_{(0,1)^2} \xi_l(\theta_l) \frac{({{\tilde\eta}^\kappa}_m\circ\theta_l),_{r111}}{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}[\tilde q\circ\theta_l],_s\ {\tilde v_n(\theta_l)},_{111}\\ &=\epsilon^{mn}\epsilon^{rs} \int_{(0,1)^2} \bigl[\xi_l(\theta_l) \frac{[\tilde q\circ\theta_i(\theta_i^{-1}\circ\theta_l)],_s}{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}\ \bigl[[\sqrt{\alpha_i}(\theta_i)\rho_{\frac{1}{\kappa}}\star_h E_m^{i\kappa}](\theta_i^{-1}\circ\theta_l)\bigr],_{r111}\\ &\hskip 3cm [{\tilde v_n\circ\theta_i(\theta_i^{-1}\circ\theta_l)}],_{111}\bigr]\\ &=\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \bigl[ \xi_l(\theta_i)\frac{[\tilde q\circ\theta_i],_{i_1}}{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l) (\theta_l^{-1}\circ\theta_i)}\ \bigl[\rho_{\frac{1}{\kappa}}\star_h E_m^{i\kappa}\bigr],_{j_1j_2j_3j_4}c^{j_1j_2j_3j_4}_{il,r111}\\ &\hskip 3cm c^{i_1i_2i_3i_4}_{il,s111} [{\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_2i_3i_4}}\bigl]+R_{11}, \end{align*} with \begin{align*} |R_{11}(t)|\le C \tilde E(t)^2, \end{align*} and \begin{align*} c^{j_1j_2j_3j_4}_{il,r111}&=\bigl[(\theta_i^{-1}\circ\theta_l),_r^{j_1}(\theta_i^{-1}\circ\theta_l),_1^{j_2}(\theta_i^{-1}\circ\theta_l),_1^{j_3}(\theta_i^{-1}\circ\theta_l),_1^{j_4}\bigr](\theta_l^{-1}\circ\theta_i). \end{align*} Therefore, \begin{align*} H_{11}&=\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \xi_l(\theta_i)[ \tilde q(\theta_i)],_{i_1} \rho_{\frac{1}{\kappa}}\star_h E_m^{i\kappa},_{j_1j_2j_3j_4} [\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_2i_3i_4} h_{rs}^{(ji)_{1234} } +R_{11}, \end{align*} with \begin{equation} \label{hrs} \displaystyle h_{rs}^{(ji)_{1234}}=\bigl[\frac{c^{j_1j_2j_3j_4}_{il,r111} c^{i_1i_2i_3i_4}_{il,s111}} {\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)(\theta_l^{-1}\circ\theta_i)}\bigr]. \end{equation} Similarly as in the study of $H_{12}^{11}$ (from equations (\ref{ecl16bis}) to (\ref{ecl16sept})) we have: \begin{align*} H_{11}&=\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \xi_l(\theta_i)[\tilde q(\theta_i)],_{i_1} h_{rs}^{(ji)_{1234}} E_m^{i\kappa},_{j_1j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h [\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_2i_3i_4}\\ &\ \ \ +S_{11} +R_{11}, \end{align*} with, \begin{align*} |S_{11}|\le C t \tilde E(t)^3+C t\kappa^{\frac{3}{2}} \tilde E(t). 
\end{align*} By integrating by parts in space (and using $\xi_l\tilde q(\theta_i)=0$ on $\partial (0,1)^2$), \begin{align} H_{11}&=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} (\xi_l \tilde q)(\theta_i)h_{rs}^{(ji)_{1234}} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_1j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_1i_2i_3i_4}\nonumber\\ &\ \ \ +H_{11}^1+H_{11}^2+S_{11} +R_{11}, \label{ecl10} \end{align} with \begin{subequations} \begin{align} H_{11}^1&=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} (\xi_l \tilde q)(\theta_i) h_{rs}^{(ji)_{1234}} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{i_1j_1j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_2i_3i_4},\label{N}\\ H_{11}^2&= -\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_1j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_2i_3i_4}. \label{E1} \end{align} \end{subequations} For $H_{11}^1$, by taking into account the symmetric role of $\{i_2,i_3,i_4\}$ and $\{j_2, j_3,j_4\}$, we obtain: \begin{align*} H_{11}^1=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} (\xi_l \tilde q)(\theta_i) h_{rs}^{(ji)_{1234}} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{i_1j_1i_2i_3i_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{j_2j_3j_4}. \end{align*} Next, since $h_{rs}^{(ji)_{1234}}=h_{sr}^{(ij)_{1234}}$, this implies \begin{align*} H_{11}^1&=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} (\xi_l \tilde q)(\theta_i) h_{sr}^{(ij)_{1234}} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{i_1j_1i_2i_3i_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{j_2j_3j_4}\\ &=\epsilon^{mn}\epsilon^{sr}\int_{(0,1)^2} (\xi_l \tilde q)(\theta_i) h_{sr}^{(ij)_{1234}} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_1i_1i_2i_3i_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{j_2j_3j_4}. \end{align*} Therefore, by relabeling $sr$ as $rs$ and $ij$ as $ji$, we obtain by comparison to (\ref{N}): \begin{align*} H_{11}^1&=-H_{11}^1, \end{align*} and thus $H_{11}^1=0$. For $H_{11}^2$, we have by integrating by parts $H_{11}^2=H_{11}^{21}+H_{11}^{22}$, with \begin{align*} H_{11}^{21}&= \epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} [ \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1}],_{j_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_2i_3i_4}\\ H_{11}^{22}&=\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{j_1i_2i_3i_4}. 
\end{align*} First, for $H_{11}^{21}$ we have if we denote $E_{mn}=\epsilon^{mn}\rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_n\circ\theta_i],_{i_2i_3i_4}$: \begin{align*} \int_0^t H_{11}^{21}=&-{\frac{1}{2}} \int_0^t \epsilon^{rs}\int_{(0,1)^2} [[ \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1}],_{j_1}]_t E_{mn}\\ &+{\frac{1}{2}}\bigl[\epsilon^{rs}\int_{(0,1)^2} [ \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1}],_{j_1} E_{mn} \bigr]_0^t. \end{align*} Therefore, \begin{align*} \bigl|\int_0^t H_{11}^{21}\bigr|\le & C \int_0^t |\epsilon^{rs}| \bigl\|[[ \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1}],_{j_1}]_t\bigr\|_{{\frac{1}{2}},{(0,1)^2}} \|\tilde\eta\|^2_{\frac{7}{2}}\\ &+C |\epsilon^{rs}|\bigl\|[ \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1}],_{j_1}(0)\bigr\|_{L^\infty({(0,1)^2})} \bigl[ \|\tilde\eta\|^2_{\frac{7}{2}}(t) + \|\tilde\eta\|^2_{\frac{7}{2}}(0) \bigr] \\ &+C \int_0^{t}\bigl\|[[ \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1}],_{j_1}]_t\bigr\|_{{\frac{1}{2}},{(0,1)^2}} \bigl[ \|\tilde\eta\|^2_{\frac{7}{2}}(t) + \|\tilde\eta\|^2_{\frac{7}{2}}(0) \bigr] \,. \end{align*} With the definition (\ref{hrs}) and (\ref{divtuk}) for the control of the time derivative of $\det(\nabla(\tilde\eta^\kappa))$ in $H^{\frac{5}{2}}(\Omega)$, we then infer: \begin{align} \bigl|\int_0^t H_{11}^{21}\bigr|\le & C t \tilde E(t)^4 + N(u_0). \label{ecl11} \end{align} Next, for $H_{11}^{22}$, we have by relabeling $m$ and $n$ \begin{align*} H_{11}^{22}&=\epsilon^{nm}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_n\circ\theta_i],_{j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_m\circ\theta_i],_{j_1i_2i_3i_4}\\ &=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_n\circ\theta_i],_{j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_m\circ\theta_i],_{j_1i_2i_3i_4}. \end{align*} By taking into account the symmetric role of $\{i_2,i_3,i_4\}$ and $\{j_2, j_3,j_4\}$, we then obtain: \begin{align} H_{11}^{22}&=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_n\circ\theta_i],_{i_2i_3i_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_m\circ\theta_i],_{j_1j_2j_3j_4}. 
\label{E2} \end{align} Consequently, by (\ref{E1}) and (\ref{E2}), \begin{align*} 2 H_{11}^{2}&=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_1j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_n\circ\theta_i],_{i_2i_3i_4}\\ &\ \ \ -\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_n\circ\theta_i],_{i_2i_3i_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde v_m\circ\theta_i],_{j_1j_2j_3j_4}\\ &\ \ \ +H_{11}^{21}\\ &=-\epsilon^{mn}\epsilon^{rs}\int_{(0,1)^2} \tilde q(\theta_i) [\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} [\rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_m\circ\theta_i],_{j_1j_2j_3j_4} \rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta_n\circ\theta_i],_{i_2i_3i_4}]_t\\ &\ \ \ +H_{11}^{21}. \end{align*} Therefore, \begin{align*} \int_0^t H_{11}^2&={\frac{1}{2}}\epsilon^{mn}\epsilon^{rs}\int_0^t \int_{(0,1)^2} [ \tilde q(\theta_i)[\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1}]_t \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\nonumber\\ &\ \ \ -{\frac{1}{2}}\epsilon^{mn}\epsilon^{rs}\bigl[ \int_{(0,1)^2} \tilde q(\theta_i)[\xi_l(\theta_i)h_{rs}^{(ji)_{1234}}],_{i_1} \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\bigr]_0^t\nonumber +{\frac{1}{2}}\int_0^tH_{11}^{21}. \end{align*} Now, from (\ref{ecl10}) and $H_{11}^1=0$, we infer by integrating by parts in time: \begin{align*} \int_0^t H_{11}&={\frac{1}{2}}\epsilon^{mn}\epsilon^{rs}\int_0^t \int_{(0,1)^2} [(\xi_l \tilde q)(\theta_i)h_{rs}^{(ji)_{1234}}]_t \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}\nonumber\\ &\ \ \ -{\frac{1}{2}}\epsilon^{mn}\epsilon^{rs}\bigl[ \int_{(0,1)^2} (\xi_l \tilde q)(\theta_i)h_{rs}^{(ji)_{1234}} \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}\bigr]_0^t\nonumber\\ &\ \ \ +\int_0^t [S_{11} +R_{11}+{\frac{1}{2}} H_{11}^{2}]. \end{align*} Now, we claim that the only couples $(i_1,j_1)$ contributing to the sum above are the ones with $i_1\ne j_1$. To see that, we notice that if $i_1=j_1$, then by simply relabeling $m$ and $n$, and using the symmetric role of $\{i_2,i_3,i_4\}$ and $\{j_2,j_3,j_4\}$ \begin{align*} h_{rs}^{(ji)_{1234}}\epsilon^{mn} \tilde\eta_{i\kappa}^m,_{i_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}&= h_{rs}^{(ji)_{1234}}\epsilon^{nm} \tilde\eta_{i\kappa}^n,_{i_1j_2j_3j_4} \tilde\eta_{i\kappa}^m,_{i_1i_2i_3i_4}\\ &=h_{rs}^{(ji)_{1234}}\epsilon^{nm} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4} \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \end{align*} leading to $h_{rs}^{(ji)_{1234}}\epsilon^{mn} \tilde\eta_{i\kappa}^m,_{i_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}=0$, since $\epsilon^{mn}=-\epsilon^{nm}$. 
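Let us point out that this is an instance of an elementary cancellation mechanism which we use repeatedly in this section (it is also behind the vanishing of $H_{11}^1$ above, and will be used again below): for any quantity $S^{mn}$ which is symmetric with respect to the exchange of $m$ and $n$,
\begin{align*}
\epsilon^{mn} S^{mn}={\frac{1}{2}}(\epsilon^{mn}+\epsilon^{nm}) S^{mn}=0.
\end{align*}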
Consequently if we denote \begin{align*} d^{(ji)_{234}}=\frac{h_{rs}^{(ji)_{1234}}}{[(\theta_i^{-1}\circ\theta_l),_r^{j_1} (\theta_i^{-1}\circ\theta_l),_s^{i_1}](\theta_l^{-1}\circ\theta_i)}, \end{align*} we have \begin{align*} \int_0^t H_{11}&={\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1}\int_0^t \int_{(0,1)^2}\bigl[ [(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]_t \epsilon^{rs} \epsilon^{i_1j_1}[(\theta_i^{-1}\circ\theta_l),_r^{j_1} (\theta_i^{-1}\circ\theta_l),_s^{i_1}](\theta_l^{-1}\circ\theta_i)\\ &\hskip 4cm \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}\bigr]\nonumber\\ &\ \ \ -{\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1}\bigl[ \int_{(0,1)^2} \bigl[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}} \epsilon^{rs}\epsilon^{i_1j_1} [(\theta_i^{-1}\circ\theta_l),_r^{j_1} (\theta_i^{-1}\circ\theta_l),_s^{i_1}](\theta_l^{-1}\circ\theta_i)\\ &\hskip 4cm \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}\bigr]\bigr]_0^t\nonumber\\ &\ \ \ +\int_0^t [S_{11} +R_{11}+{\frac{1}{2}} H_{11}^{2}]. \end{align*} Now, for any fixed $(i_1,j_1)$, we have $\epsilon^{rs} \epsilon^{i_1j_1}[(\theta_i^{-1}\circ\theta_l),_r^{j_1} (\theta_i^{-1}\circ\theta_l),_s^{i_1}]=-\text{det}(\nabla(\theta_i^{-1}\circ\theta_l))=-1$, leading us to \begin{align*} \int_0^t H_{11}&=-{\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1}\int_0^t \int_{(0,1)^2}[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]_t \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}\nonumber\\ &\ \ \ +{\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1}\bigl[ \int_{(0,1)^2} (\xi_l \tilde q)(\theta_i)d^{(ji)_{234}} \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_1i_2i_3i_4}\bigr]_0^t\nonumber\\ &\ \ \ +\int_0^t [S_{11} +R_{11}+{\frac{1}{2}} H_{11}^{2}]. \end{align*} Next, by integrating by parts in space: \begin{align} \int_0^t H_{11}&={\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1}\int_0^t \int_{(0,1)^2} [(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]_t,_{i_1} \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\nonumber\\ &\ \ \ -{\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1}\bigl[ \int_{(0,1)^2} [(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}],_{i_1} \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\bigr]_0^t\nonumber\\ &\ \ \ +\int_0^t [S_{11} +R_{11}+{\frac{1}{2}} H_{11}^{2}], \label{ecl12} \end{align} where we have used the fact that similarly as for $H_{11}^1$, we have \begin{align*} 0&=\epsilon^{mn}\epsilon^{i_1j_1} d^{(ji)_{234}} \tilde\eta_{i\kappa}^m,_{i_1j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}. \end{align*} We now come to the study of the crucial term bringing the regularity of the surface. 
\subsubsection{\bf Control of the trace of $\tilde\eta_{i\kappa}$ on $\Gamma$} Let us study the second term of the right-hand side of (\ref{ecl12}): \begin{align*} H&=-{\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1} \int_{(0,1)^2} [(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}} ],_{i_1} \tilde\eta_{i\kappa}^m,_{j_1j_2j_3j_4} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}, \end{align*} for which we have \begin{align*} H&=-{\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1} \int_{(0,1)^2} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\circ{{\tilde\eta}^\kappa}\circ\theta_i ],_{i_1} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^m,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\circ{{\tilde\eta}^\kappa}\circ\theta_i],_{j_1} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\bigr]\\ &=-{\frac{1}{2}}\epsilon^{mn}\epsilon^{i_1j_1} \int_{(0,1)^2} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{i'_1}\circ{{\tilde\eta}^\kappa}\circ\theta_i\ [{{\tilde\eta}^\kappa}\circ\theta_i ],_{i_1}^{i'_1} \\ &\hskip 3cm [\tilde\eta_{i\kappa}^m,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{j'_1}\circ{{\tilde\eta}^\kappa}\circ\theta_i\ [{{\tilde\eta}^\kappa}\circ\theta_i],_{j_1}^{j'_1} \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\bigr]. \end{align*} Now, for the same reason as before, the couples $(i'_1,j'_1)$ such $i'_1=j'_1$ will not contribute to the sum above, leading us to \begin{align*} H&=-{\frac{1}{2}}\epsilon^{mn}\epsilon^{i'_1j'_1} \int_{(0,1)^2} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{i'_1}\circ{{\tilde\eta}^\kappa}\circ\theta_i\\ &\hskip 3cm \epsilon^{i_1j_1}\epsilon^{i'_1j'_1} [{{\tilde\eta}^\kappa}\circ\theta_i ],_{i_1}^{i'_1}[{{\tilde\eta}^\kappa}\circ\theta_i],_{j_1}^{j'_1} \\ &\hskip 3cm [\tilde\eta_{i\kappa}^m,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{j'_1}\circ{{\tilde\eta}^\kappa}\circ\theta_i\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\bigr]\\ &=-{\frac{1}{2}}\epsilon^{mn}\epsilon^{i'_1j'_1} \int_{(0,1)^2} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{i'_1}\circ{{\tilde\eta}^\kappa}\circ\theta_i\ \text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_i) \\ &\hskip 3cm [\tilde\eta_{i\kappa}^m,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{j'_1}\circ{{\tilde\eta}^\kappa}\circ\theta_i\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\bigr]\\ &=-{\frac{1}{2}}\epsilon^{mn}\epsilon^{i'_1j'_1} \int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{i'_1} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^m,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{j'_1}\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]\\ &=I+J, \end{align*} with \begin{align*} I&=-{\frac{1}{2}}(\epsilon^{mn})^2 \int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{m} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^m,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n}\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]\\ J&= {\frac{1}{2}}(\epsilon^{mn})^2 \int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n} \\ &\hskip 
4cm [\tilde\eta_{i\kappa}^m,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{m}\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]. \end{align*} Next, we notice that \begin{align*} J&= -{\frac{1}{2}}\sum_n \int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n}\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]+J_1, \end{align*} with the perturbation term \begin{align*} J_1&= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n} \\ &\hskip 4cm \operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]\\ &=J_1^1+J_1^2, \end{align*} where \begin{align*} J_1^1&= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[[(\xi_l \tilde q)(\theta_i)c^{i_2i_3i_4}_{il,111}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n} c^{j_2j_3j_4}_{il,111}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\\ &\hskip 4cm \operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr],\\ J_1^2&= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[(\xi_l \tilde q)(\theta_i)c^{i_2i_3i_4}_{il,111}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1} [c^{j_2j_3j_4}_{il,111}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_n\\ &\hskip 4cm \operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]. 
\end{align*} Now, for $J_1^1$, let us set \begin{align*} f_{il}&=[[(\xi_l \tilde q)(\theta_i)c^{i_2i_3i_4}_{il,111}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n} [\Pi_{p=2}^3 (\theta_i^{-1}\circ\theta_l),_1^{j_p}(\theta_l^{-1}\circ\theta_i)\tilde\eta_{i\kappa}^n,_{i_2i_3i_4}](\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}), \end{align*} so that in order to identify an horizontal derivative on the highest order term we have: \begin{align*} J_1^1&= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} f_{il}\ (\theta_i^{-1}\circ\theta_l),_1^{j_4}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}) \operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\\ &= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} f_{il}\ (\theta_i^{-1}\circ\theta_l),_1^{j_4}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}) (\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}),_k^l [\tilde\eta_{i\kappa},_{j_2j_3j_4l}^k]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\\ &= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[ f_{il}\ (\theta_i^{-1}\circ\theta_l),_1^{j_4}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}) (\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}),_k^l \\ &\hskip 4cm [\tilde\eta_{i\kappa},_{j_2j_3l}^k\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_q({{\tilde\eta}^\kappa}\circ\theta_i),_{j_4}^q(\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\bigr]\\ &= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[f_{il}\ (\theta_i^{-1}\circ\theta_l),_1^{j_4}(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}) \\ &\hskip 2cm [(\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}),_k^l\tilde\eta_{i\kappa},_{j_2j_3l}^k\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_q({{\tilde\eta}^\kappa}\circ\theta_i),_{j_4}^q(\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\bigr]+r_1^1\\ &= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))}\bigl[ f_{il}\ [({{\tilde\eta}^\kappa}\circ\theta_i),_{j_4}^q(\theta_i^{-1}\circ\theta_l) (\theta_i^{-1}\circ\theta_l),_1^{j_4}](\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}) \\ &\hskip 4cm \operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_q \bigr]+r_1^1\\ &= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} f_{il}\ ({{\tilde\eta}^\kappa}\circ\theta_l),_{1}^q (\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\ \operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_q +r_1^1\\ &= {\frac{1}{2}}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} f_{il}\ (\operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\circ{{\tilde\eta}^\kappa}\circ\theta_l),_1(\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}) +r_1^1\\ \end{align*} with $|r_1^1|\le C \|{{\tilde\eta}^\kappa}\|^2_3$. Now, we notice that the presence of the factor $\xi_l\circ({{\tilde\eta}^\kappa})^{-1}$ in $f_{il}$ implies that the integrand in the integral above is zero outside of ${{\tilde\eta}^\kappa}(\theta_l((0,1)^2))$. Similarly, the presence of $\rho_{\frac{1}{\kappa}}\star_h[\sqrt{\alpha_i}\tilde\eta\circ\theta_i],_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}$ implies that $x\in {{\tilde\eta}^\kappa}(\theta_i({(0,1)^2}))$ in order for this integrand to be non-zero. 
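In the next step we use the change of variables $x={{\tilde\eta}^\kappa}\circ\theta_l(y)$ in order to bring the integral back to the reference domain $(0,1)^2$: since ${{\tilde\eta}^\kappa}\circ\theta_l$ is, on the time interval under consideration, a diffeomorphism from $(0,1)^2$ onto ${{\tilde\eta}^\kappa}(\theta_l((0,1)^2))$ with positive Jacobian, we have for any integrable $F$ supported in ${{\tilde\eta}^\kappa}(\theta_l((0,1)^2))$:
\begin{align*}
\int_{{{\tilde\eta}^\kappa}(\theta_l((0,1)^2))} F=\int_{(0,1)^2} F({{\tilde\eta}^\kappa}\circ\theta_l)\ \text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l).
\end{align*}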
Therefore, \begin{align*} J_1^1&= {\frac{1}{2}}\int_{{(0,1)^2}} f_{il}({{\tilde\eta}^\kappa}\circ\theta_l)\ (\operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\circ{{\tilde\eta}^\kappa}\circ\theta_l),_1 \text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l) +r_1^1. \end{align*} Now, since the derivative of $\operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\circ{{\tilde\eta}^\kappa}\circ\theta_l$ is taken in the horizontal direction, this implies: \begin{align*} |J_1^1|&\le C \|f_{il}\|_{{\frac{1}{2}},{(0,1)^2}} \|\operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\circ{{\tilde\eta}^\kappa}\circ\theta_l\|_{{\frac{1}{2}},{(0,1)^2}}+|r_1^1|\\ &\le C \|f_{il}\|_{{\frac{1}{2}},{(0,1)^2}} \|\operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\|_{{\frac{1}{2}},{{\tilde\eta}^\kappa}(\theta_i({(0,1)^2}))}+C \|\text{Id}+\int_0^t \tilde v\|_3^2. \end{align*} Now, since we have in the same fashion as (\ref{diveta}): \begin{equation*} \|\operatorname{div}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\|_{{\frac{1}{2}},{{\tilde\eta}^\kappa}(\theta_i({(0,1)^2}))}\le C t \tilde E(t)^2+C\kappa^{\frac{1}{2}} \tilde E(t)+C, \end{equation*} we then have \begin{align} |J_1^1|&\le Ct \tilde E(t)^3 +C+C\kappa^{\frac{1}{2}} \tilde E(t)^2. \label{J11} \end{align} Next, for $J_1^2$, we notice that $[c^{j_1j_2j_3}_{il,111}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_n$ is a sum of product, each one containing a factor $(\theta_i^{-1}\circ\theta_l),_1^{j_p} (\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})$. This implies that $J_1^2$ can be treated in the same way as $J_1^1$, with the identification of an horizontal derivative on the highest order term of the integrand, leading to the same majorization. 
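Let us also record the elementary duality estimate which underlies the gains obtained each time a derivative is identified in a horizontal direction on the highest-order factor (it is of the type already invoked through (\ref{int})): if $u$ and $w$ belong to $H^{\frac{1}{2}}((0,1)^2)$ and one of them vanishes near the lateral boundary of $(0,1)^2$ (which is ensured in our integrals by the cut-off functions), then, since $\partial_1$ maps $H^{\frac{1}{2}}$ boundedly into $H^{-\frac{1}{2}}$,
\begin{align*}
\Bigl|\int_{(0,1)^2} u,_1\ w\Bigr|\le C\ \|u\|_{{\frac{1}{2}},{(0,1)^2}}\ \|w\|_{{\frac{1}{2}},{(0,1)^2}}.
\end{align*}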
We can also treat $I$ in a similar fashion, due to the curl estimate (similar as (\ref{curleta})): \begin{equation*} \|\operatorname{curl}[\tilde\eta_{i\kappa},_{j_2j_3}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}]\|_{{\frac{1}{2}},{{\tilde\eta}^\kappa}(\theta_i({(0,1)^2}))}\le C t \tilde E(t)^2 +N(u_0), \end{equation*} which finally provides us with: \begin{align*} H&=-{\frac{1}{2}} \sum_{m\ne n}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[[(\xi_l \tilde q)(\theta_i)d^{(ji)_{234}}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{m} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{m}\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]\\ &\ \ \ -{\frac{1}{2}} \sum_n\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[d^{(ji)_{234}}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}(\xi_l \tilde q)(({{\tilde\eta}^\kappa})^{-1})],_{n} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{n}\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]+h^1\\ &=-{\frac{1}{2}} \sum_{m, n}\int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[d^{(ji)_{234}}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}(\xi_l \tilde q)(({{\tilde\eta}^\kappa})^{-1})],_{m} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}],_{m}\ \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr] +h^1, \end{align*} with \begin{align*} |h^1(t)|&\le C t \tilde E(t)^3 +N(u_0)+(C\kappa^{\frac{1}{2}}+\delta) \tilde E(t)^2+C_\delta. \end{align*} Therefore, by integrating by parts, \begin{align*} H&=-\frac{1}{4} \int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[d^{(ji)_{234}}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}(\xi_l \tilde q)(({{\tilde\eta}^\kappa})^{-1})],_{m} \tilde n^\kappa_m \\ &\hskip 4cm [\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}] \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]+h^2, \end{align*} with \begin{align*} h^2&=\frac{1}{4} \int_{{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[[d^{(ji)_{234}}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}(\xi \tilde q)(({{\tilde\eta}^\kappa})^{-1})],_{mm} \\ &\hskip 4cm [\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}] \tilde\eta_{i\kappa}^n,_{i_2i_3i_4}\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]+h^1, \end{align*} so that, \begin{align*} |h^2(t)|&\le [\ \|\tilde q\|_{{\frac{7}{2}}}\|\tilde\eta\|_3+\|{{\tilde\eta}^\kappa}\|_{{\frac{7}{2}}}\|q\|_3]\ \|\tilde\eta\|_3^2 +|h^1(t)|\\ &\le (C\kappa^{\frac{1}{2}}+\delta) \tilde E(t)^2+C_\delta + Ct \tilde E(t)^4 +N(u_0). 
\end{align*} Now, since $\tilde q=0$ and $\xi_l=\alpha_l$ on $\Gamma$, we infer \begin{align*} H&= -\frac{1}{4}\int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} \bigl[\alpha_l(({{\tilde\eta}^\kappa})^{-1})[p_0+\int_0^t p_t],_{m} [N_m+\int_0^t (\tilde n^\kappa_m)_t]\\ &\hskip 2cm [d_{il}^{j_{234}}\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1} [ d_{il}^{i_{234}}\tilde\eta_{i\kappa}^n,_{i_2i_3i_4}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\bigr]+h^2, \end{align*} with $\displaystyle d_{il}^{k_{234}}=\frac{(\theta_i^{-1}\circ\theta_l),_1^{k_2}(\theta_i^{-1}\circ\theta_l),_1^{k_3}(\theta_i^{-1}\circ\theta_l),_1^{k_4}}{\sqrt{\text{det}\nabla({{\tilde\eta}^\kappa}\circ\theta_l)}}(\theta_l^{-1}\circ\theta_i)$. Therefore, with the initial pressure condition $$p_0,_m N_m <-C<0\ \text{on}\ \Gamma,$$ we infer \begin{align} -H&\le -C \int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} [\alpha_l(\theta_i) d_{il}^{j_{234}}\tilde\eta_{i\kappa}^n,_{j_2j_3j_4} d_{il}^{i_{234}}\tilde\eta_{i\kappa}^n,_{i_2i_3i_4}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\nonumber\\ &\ \ \ +t \tilde E(t)^2+|h^2|\nonumber\\ &\le -C \int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} [\alpha_i(\theta_i) d_{ii}^{j_{234}}\tilde\eta_{i\kappa}^n,_{j_2j_3j_4} d_{ii}^{i_{234}}\tilde\eta_{i\kappa}^n,_{i_2i_3i_4}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\nonumber\\ &\ \ \ +(C\kappa^{\frac{1}{2}}+\delta) \tilde E(t)^2+C t \tilde E(t)^4 +C_\delta +N(u_0)\nonumber\\ &\le -C \int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} [\alpha_i(\theta_i)\tilde\eta_{i\kappa}^n,_{j_2j_3j_4}\tilde\eta_{i\kappa}^n,_{i_2i_3i_4}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\nonumber\\ &\ \ \ +(C\kappa^{\frac{1}{2}}+\delta) \tilde E(t)^2+C t \tilde E(t)^4 +C_\delta +N(u_0)\nonumber\\ &\le -C \int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} [\alpha_i(\theta_i)\tilde\eta_{i\kappa}^n,_{111} \tilde\eta_{i\kappa}^n,_{111}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\nonumber\\ &\ \ \ +(C\kappa^{\frac{1}{2}}+\delta) \tilde E(t)^2+C t \tilde E(t)^4 +C_\delta +N(u_0). \label{ecl13} \end{align} Now, it is clear that the space integral in front of the first time integral in (\ref{ecl12}) can be treated in a similar way, except that we do not have a control on the sign of the boundary term as in (\ref{ecl13}), which does not matter since a time integral is applied to it. This therefore leads us to \begin{align*} -\int_0^t H_{11}&\le -C \int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2)} [\alpha_i(\theta_i)\tilde\eta_{i\kappa}^n,_{111} \tilde\eta_{i\kappa}^n,_{111}]\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\nonumber\\ &\ \ \ +(C\kappa^{\frac{1}{2}}+\delta) \tilde E(t)^2+C t \tilde E(t)^4 +C_\delta +N(u_0), \end{align*} which , with (\ref{ecl1}) and (\ref{ecl2}), finally gives the trace control for each $\tilde\eta_{i\kappa}$, as well as the control of $v$ around $\Gamma$: \begin{align} H^\kappa (t) + C \int_{\partial{{\tilde\eta}^\kappa}(\theta_i((0,1)^2))} |[\sqrt{\alpha_i}(\theta_i) \tilde\eta_{i\kappa}^n,_{111}]&\circ\theta_i^{-1}\circ({{\tilde\eta}^\kappa})^{-1}|^2 \nonumber\\ & \le\delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). 
\label{ecl28} \end{align} \subsection{Asymptotic regularity of each ${{\tilde\eta}_{l\kappa}}$} Consequently, we infer that for each $l\in \{1,...,K\}$, we have the trace control \begin{align} \|[\sqrt{\alpha_l}(\theta_l) {{\tilde\eta}_{l\kappa}},_1]\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\|^2_{2,\partial{{\tilde\eta}^\kappa}\circ\theta_l({(0,1)^2})} \le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \label{reg1} \end{align} Consequently, with the estimates (\ref{diveta}) and (\ref{curleta}) on the divergence and curl, we obtain by elliptic regularity: \begin{align*} \| [\sqrt{\alpha_l}(\theta_l){{\tilde\eta}_{l\kappa}},_1]\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1}\|^2_{{\frac{5}{2}},{{\tilde\eta}^\kappa}\circ\theta_l({(0,1)^2})} &\le P(\|{{\tilde\eta}^\kappa}\|_{{\frac{5}{2}}})[\delta \tilde E(t)^2+C_\delta t\tilde E(t)^4 +C_\delta N(u_0)]\nonumber\\ & \le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \end{align*} Therefore, \begin{align} \| \sqrt{\alpha_l}(\theta_l){{\tilde\eta}_{l\kappa}},_1\|^2_{{\frac{5}{2}},{(0,1)^2}} &\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0), \label{reg2} \end{align} which implies that $\| \sqrt{\alpha_l}(\theta_l){{\tilde\eta}_{l\kappa}},_{12}\|^2_{{\frac{3}{2}},{(0,1)^2}} $ and $\| \sqrt{\alpha_l}(\theta_l){{\tilde\eta}_{l\kappa}},_{2}\|^2_{2,\partial{(0,1)^2}} $ are controlled by the same right-hand side as in (\ref{reg2}). Consequently, with (\ref{diveta}) and (\ref{curleta}), we infer in the same way as we obtained (\ref{reg2}) from (\ref{reg1}) that \begin{align*} \|\sqrt{\alpha_l}(\theta_l) {{\tilde\eta}_{l\kappa}},_{2}\|^2_{{\frac{5}{2}},{(0,1)^2}} &\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0), \end{align*} and finally that \begin{align} \|\sqrt{\alpha_l}(\theta_l) {{\tilde\eta}_{l\kappa}}\|^2_{{\frac{7}{2}},{(0,1)^2}} &\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \label{reg3} \end{align} \subsection{Asymptotic regularity of ${{\tilde\eta}^\kappa}$} This also obviously implies that for the advected domain \begin{align} \| {{\tilde\eta}^\kappa}\|^2_{{\frac{7}{2}},\Omega^K} &\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0), \label{reg4} \end{align} where $\displaystyle\Omega^K=\Omega\cap_{i=K+1}^{L} (\text{supp} \alpha_i)^c$. From the divergence and curl estimates (\ref{divetabeta}) and (\ref{curletabeta}), we then infer that \begin{equation} \label{betatildeeta} \|{\beta}\tilde\eta\circ\theta_l\|^2_{{\frac{7}{2}},{(0,1)^2}}\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0), \end{equation} which with (\ref{reg3}) provides \begin{equation} \label{etakappa} \|{{\tilde\eta}^\kappa}\|^2_{{\frac{7}{2}}}\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0,\kappa\|\Omega\|_{\frac{9}{2}}). \end{equation} \subsection{Asymptotic regularity of $\tilde v$} The relation (\ref{ecl28}) provides us with the asymptotic regularity of $\tilde v$ near $\partial\Omega$. For the interior regularity, we notice that if we time-differentiate the analog of (\ref{diveta3}) for the cut-off $\beta\in\mathcal{D}(\Omega)$, we obtain \begin{equation} \label{divv} \bigl\|\text{div}((\beta\tilde v\circ\theta_l),_{s}\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\bigr\|_{1,{{\tilde\eta}^\kappa}(\Omega)}\le C. 
\end{equation} Similarly, we also have \begin{equation} \label{curlv} \bigl\|\text{curl}(({\beta}\tilde v\circ\theta_l),_{s}\circ\theta_l^{-1}\circ({{\tilde\eta}^\kappa})^{-1})\bigr\|_{2,{{\tilde\eta}^\kappa}(\Omega)}\le C. \end{equation} From (\ref{divv}) and (\ref{curlv}), elliptic regularity yields \begin{equation*} \|{\beta}\tilde v\circ\theta_l\|_{2,{(0,1)^2}}\le C, \end{equation*} which, together with (\ref{ecl28}), provides \begin{equation} \label{v} \|\tilde v\|^2_{3}\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \end{equation} \subsection{Asymptotic regularity of $\tilde q$.} From the elliptic system: \begin{align*} ({{\tilde a}^\kappa})_i^j[({{\tilde a}^\kappa})_i^k \tilde q,_k],_j&=-({{\tilde u}^\kappa},_i^j\tilde u,_j^i) ({{\tilde\eta}^\kappa})\ \text{in}\ \Omega,\\ \tilde q=0\ \text{on}\ \Gamma, \end{align*} we then infer on $(0,T_\kappa)$ (ensuring that (\ref{assume}) is satisfied): \begin{align} \|\tilde q\|^2_{\frac{7}{2}}\le C \|{{\tilde\eta}^\kappa}\|^2_{\frac{7}{2}} \le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \label{q} \end{align} \subsection{Asymptotic regularity of $\kappa\sqrt{\alpha_l}\tilde v\circ\theta_l$} From (\ref{kappavt}), we have: \begin{equation} \label{10.34} [\kappa\|\sqrt{\alpha_l}\tilde v\circ\theta_l\|_{{\frac{7}{2}},{(0,1)^2}}]^2\le [\kappa \|u_0\|_{\frac{7}{2}}]^2+ \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \end{equation} \subsection{Asymptotic regularity of $\kappa^{\frac{3}{2}}\sqrt{\alpha_l}\tilde v\circ\theta_l$} From (\ref{kappa2vt}), we have: \begin{equation*} [\kappa^2\|\sqrt{\alpha_l}\tilde v\circ\theta_l\|_{\frac{9}{2},{(0,1)^2}}]^2\le [\kappa^2 \|u_0\|_{\frac{9}{2}}]^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0), \end{equation*} which by interpolation leads to \begin{equation} \label{10.35} [\kappa^{\frac{3}{2}}\|\sqrt{\alpha_l}\tilde v\circ\theta_l\|_{4,{(0,1)^2}}]^2\le [\kappa^{\frac{3}{2}} \|u_0\|_4]^2+ \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \end{equation} \subsection{Asymptotic regularity of $\tilde q_t$.} From the elliptic system: \begin{align*} ({{\tilde a}^\kappa})_i^j[({{\tilde a}^\kappa})_i^k \tilde q_t,_k],_j&=-[{{\tilde u}^\kappa},_i^j\tilde u,_j^i ({{\tilde\eta}^\kappa})]_t - [({{\tilde a}^\kappa})_i^j[({{\tilde a}^\kappa})_i^k]_t \tilde q,_k],_j\ \text{in}\ \Omega,\\ \tilde q_t=0\ \text{on}\ \Gamma, \end{align*} we then infer on $(0,T_\kappa)$: \begin{align} \|\tilde q_t\|^2_3 \le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \label{qtassume} \end{align} \subsection{Asymptotic regularity of $\tilde v_t$} Since $\tilde v^i_t=-({{\tilde a}^\kappa})_i^j \tilde q,_j$, we then infer on $(0,T_\kappa)$: \begin{align} \|\tilde v_t\|^2_{\frac{5}{2}}\le C (\|{{\tilde\eta}^\kappa}\|_{\frac{7}{2}} +\|\tilde q\|_{\frac{7}{2}})^2\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0). \label{vtassume} \end{align} \section{Time of existence independent of $\kappa$ and solution to the limit problem} \label{L11} By (\ref{etakappa}), (\ref{q}), (\ref{v}), (\ref{betatildeeta}), (\ref{reg3}), (\ref{10.34}), (\ref{10.35}) we then infer the control on $(0,T_\kappa)$: \begin{equation*} \tilde E(t)^2\le \delta \tilde E(t)^2+C_\delta t \tilde E(t)^4 +C_\delta N(u_0), \end{equation*} which for a choice of $\delta_0$ small enough provides us with \begin{equation*} \tilde E(t)^2\le C_{\delta_0} N(u_0)+C_{\delta_0} t \tilde E(t)^4.
\end{equation*} Similarly as in Section 9 of \cite{CoSh2005b}, this provides us with a time of existence $T_\kappa=T_1$ independent of $\kappa$ and an estimate on $(0,T_1)$ independent of $\kappa$ of the type: \begin{equation*} \tilde E(t)^2\le N_0(u_0), \end{equation*} as long as the conditions (\ref{assume}) hold. Now, since $\displaystyle\|\tilde\eta(t)\|_3\le \|\text{Id}\|_3+\int_0^t \|\tilde v\|_3$, we see that condition (\ref{assume.b}) will be satisfied for $\displaystyle t\le \frac{1}{N_0(u_0)}$. The other conditions in (\ref{assume}) are satisfied with similar arguments ((\ref{qtassume}) and (\ref{vtassume}) are used for (\ref{assume.c}) and (\ref{assume.d})). This leads us to a time of existence $T_2>0$ independent of $\kappa$ for which we have the estimate on $(0,T)$ \begin{equation*} \tilde E(t)^2\le N_0(u_0), \end{equation*} which provides by weak convergence the existence of a solution $(v,q)$ of (\ref{euler}), with $\sigma=0$, on $(0,T)$. \section{Optimal regularity} \label{L12} In this section, we assume that $\Omega$ is of class $H^{\frac{7}{2}}$ in $\mathbb R^3$, that $u_0\in H^3(\Omega)$, and that the pressure condition is satisfied. We denote by $N(u_0)$ a generic constant depending on $\|u_0\|_3$. With these requirements, we will only get the $H^{\frac{7}{2}}$ regularity of the moving domain $\eta(\Omega)$ and not of the mapping $\eta$. Due to the fact that $H^{\frac{3}{2}}$ is not continuously embedded in $L^\infty$ in the case that $\Omega$ is three-dimensional, we cannot directly study the integral terms as in Section \ref{L10} as we did for the two-dimensional case. Instead, we are forced to also regularize the initial domain, by a standard convolution, with a parameter $\epsilon>0$ fixed independently of $\kappa$, on the charts defining it locally, so that the initial regularized domain $\Omega_\epsilon=\tilde\Omega$ obtained in this fashion is of class $C^\infty$. The regularized initial velocity, by a standard convolution, will be denoted $u_0(\epsilon)$. We then start at Section \ref{L4} in the same way except that the regularity of the functional framework is increased by one degree for each quantity. This leaves us with the existence of a solution to (\ref{smoothl}) on $(0,T_{\kappa,\epsilon})$, with initial domain $\Omega_\epsilon$ and initial velocity $u_0(\epsilon)$. We then perform the same asymptotic analysis as $\kappa\rightarrow 0$ as we did in Sections \ref{L5} to \ref{L10}, in this new framework. We then see that the problematic term is now updated to one which can be treated directly by the Sobolev embedding of $H^2$ into $L^\infty$ in 3d. This leads us to the existence of a solution to a system similar to (\ref{euler}) (with $\sigma=0$) with initial domain $\Omega_\epsilon$ on $(0,T_\epsilon)$, with $\eta_\epsilon\in L^\infty(0,T_\epsilon;H^{\frac{9}{2}}(\Omega_\epsilon))$, with initial domain $\Omega_\epsilon$ and initial velocity $u_0(\epsilon)$. We then study hereafter the asymptotic behavior of this solution and of $T_\epsilon$ as $\epsilon\rightarrow 0$. This will be less problematic than in Section \ref{L10} since the convolutions by layers with the parameter $\kappa$ do not appear in the problem (\ref{euler}) with smoothed initial data and domain. 
We will denote the dependence on $\epsilon$ this time by a tilde, $\tilde v$ standing here for $v_\epsilon$ for instance, and prove that as $\epsilon\rightarrow 0$, the time of existence and norms of $\tilde v$ are $\epsilon$-independent, which leads to the existence of a solution with optimal regularity on the initial data, as stated in Theorem \ref{ltheorem2}. Our functional framework will be different from that of Sections \ref{L5} to \ref{L10}. Our continuous-in-time energy will be: \begin{definition} \begin{align} \tilde H(t)=&\sup_{[0,t]}\bigl[|\tilde n|_{2,\tilde\Gamma}+\|\tilde v\|_3+\|\tilde v_t\|_{\frac{5}{2}}+\|\tilde q\|_3+\|\tilde v_{tt}\|_2 \bigr]+1, \end{align} where $\tilde n$ denotes the unit exterior normal to $\tilde\eta(\Omega)$. \end{definition} Our condition on $T_\epsilon$ will be that on $(0,T_\epsilon)$, \begin{subequations} \label{assume2} \begin{align} {\frac{1}{2}}\le \text{det} \nabla\tilde\eta &\le {\frac{3}{2}}\ \text{in}\ \tilde \Omega, \label{assume2.a}\\ \|\tilde\eta\|_3&\le |\Omega|+1,\ \ \|\tilde q\|_3\le \|q_0\|_3+1,\ \ \|\tilde v\|_{\frac{5}{2}}\le \|u_0\|_{\frac{5}{2}}+1,\\ \|\tilde v_t\|_2&\le \|w_1\|_2+1,\\ \forall l\in\{1,...,K\},&\ \bigl|\tilde\eta\circ\theta_l,_1\times\tilde\eta\circ\theta_l,_2\bigr|\ge {\frac{1}{2}} \bigl|\theta_l,_1\times\theta_l,_2\bigr|\ \text{on}\ (0,1)^2\times\{0\}, \label{assume2.d} \end{align} \end{subequations} where $w_1=-\nabla q_0\in H^{\frac{5}{2}}(\Omega)$. We will use a more straightforward approach than in Section \ref{L10}, which is enabled by the fact that we have $\tilde a$ instead of the convolution by layers ${{\tilde a}^\kappa}$ in our equation, by defining the following energy: \begin{definition} $$\displaystyle E^\epsilon(t)= \sum_{l=1}^K \int_{(0,1)^3}\xi\circ\theta_l |D^2(\tilde v_{tt}\circ\theta_l)|^2,$$ where $D^2 f$ stands for any second space derivative in a horizontal direction, i.e., $f,_{\alpha_1\alpha_2}$, where $\alpha_i\in\{1,2\}$. Summation over all horizontal derivatives is taken in the expression for $E^\epsilon$. \end{definition} \begin{remark} We also note that this energy is associated with the second time-differentiated problem; we thus avoid the use of the curl relation (\ref{curleta}) for $\tilde\eta$, which would necessitate the supplementary condition $\operatorname{curl} u_0\in H^{\frac{5}{2}}(\Omega)$ (which we do not have here).
\end{remark} With $\tilde b_l=[\nabla(\tilde\eta\circ\theta_l)]^{-1}$, we have: $E^\epsilon_t=\sum_{i=1}^9 E_i,$ with \begin{align*} E_1(t)&= -\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) D^2([(\tilde b_l)_j^k]_{tt})(\tilde q\circ\theta_l),_kD^2({\tilde v}_{tt}\circ\theta_l)^j,\\ E_2(t)&= -2 \sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) D([(\tilde b_l)_j^k]_{tt})D(\tilde q\circ\theta_l),_k D^2({\tilde v}_{tt}\circ\theta_l)^j,\\ E_3(t)&= -\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) [(\tilde b_l)_j^k]_{tt} D^2(\tilde q\circ\theta_l),_kD^2({\tilde v}_{tt}\circ\theta_l)^j,\\ E_4(t)&= -4\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) D[(\tilde b_l)_j^k]_{t}D(\tilde q_t\circ\theta_l),_k D^2({\tilde v}_{tt}\circ\theta_l)^j,\\ E_5(t)&= -2\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) D^2[(\tilde b_l)_j^k]_{t}(\tilde q_t\circ\theta_l),_k D^2({\tilde v}_{tt}\circ\theta_l)^j,\\ E_6(t)&= -2\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) [(\tilde b_l)_j^k]_{t}D^2(\tilde q_t\circ\theta_l),_k D^2({\tilde v}_{tt}\circ\theta_l)^j,\\ E_7(t)&= -\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)D^2(\tilde b)_j^k (\tilde q_{tt}\circ\theta_l),_kD^2{(\tilde v_{tt}\circ\theta_l)}^j,\\ E_8(t)&= -2\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)D(\tilde b)_j^k D(\tilde q_{tt}\circ\theta_l),_kD^2{(\tilde v_{tt}\circ\theta_l)}^j,\\ E_9(t)&= -\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)(\tilde b)_j^k D^2(\tilde q_{tt}\circ\theta_l),_kD^2{(\tilde v_{tt}\circ\theta_l)}^j. \end{align*} \subsection{Estimate for $\tilde q_t$, $\tilde q_{tt}$ and $\tilde q_{ttt}$} From the elliptic system \begin{align*} \tilde a_i^j(\tilde a_i^k \tilde q_t,_k),_j&=-[\tilde a_i^j[\tilde a_i^k]_t \tilde q,_k],_j-[\tilde a_j^k\tilde v^i,_k\tilde a_i^l\tilde v^j,_l]_t\ \text{in}\ \tilde\Omega,\\ \tilde q_t&=0\ \text{on}\ \partial\tilde\Omega, \end{align*} we infer \begin{align} \|\tilde q_t\|_3&\le C [\ \|\tilde v\|_3+\|\tilde q\|_3+\|\tilde\eta\|_3+\|\tilde v_t\|_2]\le C \tilde H(t). \label{qt3} \end{align} For similar reasons, we also have \begin{subequations} \begin{align} \|\tilde q_{tt}\|_{\frac{5}{2}}&\le C \tilde H(t), \label{qtt3}\\ \|\tilde q_{ttt}\|_2&\le C \tilde H(t). \label{qttt3} \end{align} \end{subequations} \subsection{Estimate for $E_2$, $E_4$, $E_5$, $E_6$, $E_8$} We first immediately have thanks to the embedding of $H^1$ into $L^6$ and $H^{\frac{1}{2}}$ into $L^3$: \begin{subequations} \label{H246} \begin{align} |E_2(t)|&\le C \|\tilde a_{tt}\|_1 \|\tilde q\|_{\frac{5}{2}} \|\tilde v_{tt}\|_2 \le C [\ \|\tilde v_t\|_2\|\tilde\eta\|_3+ \|\tilde v\|_3^2]\ \|\tilde q\|_3 \|\tilde v_{tt}\|_2\le C \tilde H(t)^4,\\ |E_4(t)|&\le C \|\tilde a_t\|_2 \|\tilde q_t\|_{\frac{5}{2}} \|\tilde v_{tt}\|_2\le C \tilde H(t)^3,\label{H246.b}\\ |E_5(t)|&\le C \|\tilde v\|_3\|\tilde q_t\|_3 \|\tilde v_{tt}\|_2\le C \tilde H(t)^3,\label{H246.c}\\ |E_6(t)|&\le C \|\tilde v\|_3\|\tilde q_t\|_3 \|\tilde v_{tt}\|_2\le C \tilde H(t)^3,\label{H246.d}\\ |E_8(t)|&\le C \|\tilde a\|_2\|\tilde q_{tt}\|_{\frac{5}{2}}\|\tilde v_{tt}\|_2\le C \tilde H(t)^3,\label{H246.e} \end{align} \end{subequations} where we have used (\ref{qt3}) for (\ref{H246.c}), (\ref{H246.d}), and (\ref{qtt3}) for (\ref{H246.e}).
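For the reader's convenience, we record the elementary inequality behind bounds of this type: by H\"older's inequality with exponents $(2,3,6)$,
\begin{align*}
\Bigl|\int_{(0,1)^3} f\ g\ h\Bigr|\le \|f\|_{L^2((0,1)^3)}\ \|g\|_{L^3((0,1)^3)}\ \|h\|_{L^6((0,1)^3)},
\end{align*}
so that the embeddings $H^1\hookrightarrow L^6$ and $H^{\frac{1}{2}}\hookrightarrow L^3$, valid in three space dimensions, allow each factor of the integrands above to be measured in the Sobolev norms indicated.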
\subsection{Estimate for $E_3$} By integrating by parts, and using $[(\tilde b)_j^k],_k=0$, we obtain $E_3=E_3^1+E_3^2$, with \begin{align*} E_3^1&= \sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)[(\tilde b)_j^k]_{tt} D^2(\tilde q\circ\theta_l) D^2{(\tilde v_{tt}\circ\theta_l)},_k^j,\\ E_3^2&= \sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l),_k [(\tilde b)_j^k],_{tt} D^2( \tilde q\circ\theta_l) D^2{(\tilde v_{tt}\circ\theta_l)}^j. \end{align*} We first have \begin{align*} |E_3^2(t)|&\le C \|\tilde a_{tt}\|_1\|\tilde q\|_3\|\tilde v_{tt}\|_2\le C \tilde H(t)^4. \end{align*} Next, $E_3^1=\sum_{i=1}^3 E_3^{1i},$ with \begin{align*} E_3^{11}&= -\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)D[(\tilde b)_j^k]_{tt} D^2(\tilde q\circ\theta_l) D{(\tilde v_{tt}\circ\theta_l)},_k^j,\\ E_3^{12}&=-\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)[(\tilde b)_j^k]_{tt} D^3(\tilde q\circ\theta_l) D{(\tilde v_{tt}\circ\theta_l)},_k^j,\\ E_3^{13}&=-\sum_{l=1}^K\int_{(0,1)^3} D(\xi(\theta_l))[(\tilde b)_j^k]_{tt} D^2(\tilde q\circ\theta_l) D{(\tilde v_{tt}\circ\theta_l)},_k^j. \end{align*} We obviously have $|E_3^{13}(t)|\le C \tilde H(t)^4$. Next, we have by integrating by parts in time: \begin{align*} \int_0^t E_3^{12}&=\sum_{l=1}^K\int_0^t\int_{(0,1)^3} \xi(\theta_l)\bigl([(\tilde b)_j^k]_{tt} D^3(\tilde q_t\circ\theta_l)+[(\tilde b)_j^k]_{ttt} D^3(\tilde q\circ\theta_l)\bigr) D{(\tilde v_{t}\circ\theta_l)},_k^j\\ &\ \ \ +\bigl[\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)[(\tilde b)_j^k]_{tt} D^3(\tilde q\circ\theta_l) D{(\tilde v_{t}\circ\theta_l)},_k^j\bigr]_0^t, \end{align*} showing, with the continuous embedding of $H^1$ into $L^6$ and of $H^{\frac{1}{2}}$ into $L^3$: \begin{align} \bigl|\int_0^t E_3^{12}\bigr|&\le C t \sup_{[0,t]} [\|\tilde a_{tt}\|_1\|\tilde q_t\|_3\|\tilde v_t\|_{\frac{5}{2}}+ \|\tilde a_{ttt}\|_1\|\tilde q\|_3\|\tilde v_t\|_{\frac{5}{2}}]\nonumber\\ &\ \ \ +\bigl[\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)D{(\tilde v_{t}\circ\theta_l)},_k^j \bigl[ [(\tilde b)_j^k]_{tt}(0) D^3(\tilde q\circ\theta_l)(0) +\int_0^\cdot [[(\tilde b)_j^k]_{tt} D^3(\tilde q\circ\theta_l)]_t\bigr] \bigr]_0^t\nonumber\\ &\le C t \sup_{[0,t]} [\|\tilde a_{tt}\|_1\|\tilde q_t\|_3\|\tilde v_t\|_{\frac{5}{2}}+ \|\tilde a_{ttt}\|_1\|\tilde q\|_3\|\tilde v_t\|_{\frac{5}{2}}]\nonumber\\ &\ \ \ +\|\tilde v_{t}\|_{\frac{5}{2}} \|\tilde q(0)\|_3 \|\tilde b_{tt}(0)\|_1 + \|\tilde v_t\|_{\frac{5}{2}} t \sup_{[0,t]} [ \|\tilde q_t\|_3\|\tilde b_{tt}\|_1+\|\tilde q\|_3\|\tilde b_{ttt}\|_1] +N(u_0)\nonumber\\ &\le C\delta \tilde H(t)^2+t \tilde H(t)^4+C_\delta N(u_0), \label{E312} \end{align} for any $\delta>0$. For the remaining term $E_3^{11}$, \begin{align*} |E_3^{11}|&\le C \|\tilde a_{tt}\|_{\frac{3}{2}}\|\tilde q\|_3\|\tilde v_{tt}\|_2\le C \tilde H(t)^4. \end{align*} Consequently, we have \begin{align} \bigl|\int_0^t E_3\bigr|&\le C\delta \tilde H(t)^2+t \tilde H(t)^4+C_\delta N(u_0). \label{H3} \end{align} \subsection{Estimate for $E_7$} By integrating by parts, $E_7=E_7^1+E_7^2$, with \begin{align*} E_7^1&= \sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)D^2(\tilde b)_j^k (\tilde q_{tt}\circ\theta_l) D^2{(\tilde v_{tt}\circ\theta_l)},_k^j,\\ E_7^2&= \sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l),_k D^2(\tilde b)_j^k ( \tilde q_{tt}\circ\theta_l) D^2{(\tilde v_{tt}\circ\theta_l)}^j. \end{align*} We first have \begin{align*} |E_7^2(t)|&\le C \|\tilde a\|_2\|\tilde q_{tt}\|_2\|\tilde v_{tt}\|_2\le C \tilde H(t)^4.
\end{align*} Next, we notice by integrating by parts in time and space that \begin{align*} \int_0^tE_7^1&= \sum_{l=1}^K\int_0^t\int_{(0,1)^3} [D^2(\tilde b_t)_j^k (\xi\tilde q_{tt}\circ\theta_l),_k +D^2(\tilde b)_j^k (\xi\tilde q_{ttt}\circ\theta_l),_k] D^2{(\tilde v_{t}\circ\theta_l)}^j\\ &\ \ \ +\bigl[\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l)D^2(\tilde b)_j^k (\xi\tilde q_{tt}\circ\theta_l),_k D^2{(\tilde v_{t}\circ\theta_l)}^j\bigr]_0^t \end{align*} In the same fashion as we obtained (\ref{E312}), we then infer \begin{align} \bigl|\int_0^t E_7^{1}\bigr|&\le C t \sup_{[0,t]} [\|\tilde b_{t}\|_2\|\tilde q_{tt}\|_2\|\tilde v_t\|_{\frac{5}{2}}+ \|\tilde b\|_2\|\tilde q_{ttt}\|_2\|\tilde v_t\|_{\frac{5}{2}}]\nonumber\\ &\ \ \ +\|\tilde v_t\|_{\frac{5}{2}} \|\tilde q_{tt}(0)\|_2 \|\tilde b(0)\|_2 + \|\tilde v_{t}\|_{\frac{5}{2}} t \sup_{[0,t]}[\|\tilde q_{ttt}\|_2\|\tilde b\|_2+\|\tilde q_{tt}\|_2\|\tilde b_t\|_2] +N(u_0)\nonumber\\ &\le C\delta \tilde H(t)^2+t \tilde H(t)^4+C_\delta N(u_0), \label{H51} \end{align} where we have used (\ref{qttt3}) for $\tilde q_{ttt}$. Consequently, we have \begin{align} \bigl|\int_0^t E_7\bigr|&\le C\delta \tilde H(t)^2+t \tilde H(t)^4+C_\delta N(u_0). \label{H5} \end{align} \subsection{Estimate for $E_9$} We notice by integrating by parts in space that \begin{align*} E_9(t)&= \sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) [(\tilde b_l)_j^k] D^2(\tilde q_{tt}\circ\theta_l)D^2({\tilde v}_{tt}\circ\theta_l),_k^j\\ &\ \ \ +\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l),_k [(\tilde b_l)_j^k] D^2(\tilde q_{tt}\circ\theta_l)D^2({\tilde v}_{tt}\circ\theta_l)^j \end{align*} Next by the divergence condition, \begin{align*} E_9(t)&= -\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) D^2[(\tilde b_l)_j^k] D^2(\tilde q_{tt}\circ\theta_l)({\tilde v}_{tt}\circ\theta_l),_k^j\\ &\ \ \ - 2 \sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l) D[(\tilde b_l)_j^k] D^2(\tilde q_{tt}\circ\theta_l)D({\tilde v}_{tt}\circ\theta_l),_k^j\\ &\ \ \ +\sum_{l=1}^K \int_{(0,1)^3} \xi(\theta_l),_k [(\tilde b_l)_j^k] D^2(\tilde q_{tt}\circ\theta_l)D^2({\tilde v}_{tt}\circ\theta_l)^j, \end{align*} showing that \begin{align} |E_9(t)|\le C \|\tilde b\|_2\|\tilde q_{tt}\|_{\frac{5}{2}}\|\tilde v_{tt}\|_2\le C \tilde H(t)^4. \label{H7} \end{align} \subsection{Estimate for $E_1$} If $\epsilon^{ijk}$ denotes the sign of the permutation between $\{i,j,k\}$ and $\{1,2,3\}$, if $i,j, k$ are distinct, and is set to zero otherwise, we obtain \begin{align*} E_1&=E_1^1+E_1^2, \end{align*} with \begin{align*} \displaystyle E_1^1&=\sum_{l=1}^K\int_{(0,1)^3} \xi(\theta_l),_k \tilde q\circ\theta_l D^2[(\tilde b_l)^k_j]_{tt} D^2(\tilde v_{tt}\circ\theta_l)^j,\\ E_1^2&={\frac{1}{2}}\sum_{l=1}^K\epsilon^{mnj}\epsilon^{pqk}\int_{(0,1)^3} \xi\tilde q(\theta_l) D^2[\tilde\eta\circ\theta_l,_p^m\tilde\eta\circ\theta_l,_q^n]_{tt} D^2(\tilde v_{tt}\circ\theta_l),_k^j. 
\end{align*} \subsubsection{\bf Estimate of $E_1^1$} Now, for $E_1^1$, since \begin{equation*} D_t\triangle \tilde u+\tilde u,_j^i \tilde u,_{ij}-\nabla[\tilde u,_i^j\tilde u,_j^i]=0, \end{equation*} we obtain in $\tilde\Omega$ that \begin{align*} \tilde a_i^j(\tilde a_i^k\tilde v,_k),_j(t)=\triangle \tilde u(0)+\int_0^t [\tilde a_n^m (\tilde a_i^k\tilde v,_k^j\tilde a_j^l\tilde v,_l^i),_m]_{n=1}^3-\int_0^t \tilde a_j^k\tilde v,_k^i\tilde a_j^m(\tilde a_i^l\tilde v,_l),_m, \end{align*} and thus, \begin{align} \tilde a_i^j(\tilde a_i^k\tilde v_t,_k),_j=-[\tilde a_i^j[\tilde a_i^k]_t\tilde v,_k],_j+[\bar a_n^m (\tilde a_i^k\tilde v,_k^j\tilde a_j^l\tilde v,_l^i),_m]_{n=1}^3- \tilde a_j^k\tilde v,_k^i\tilde a_j^m(\tilde a_i^l\tilde v,_l),_m. \label{vtinterior} \end{align} By elliptic regularity in the interior of $\tilde\Omega$, we infer that for any $\omega$ whose closure is contained in $\tilde\Omega$, \begin{align*} \|\tilde v_t\|_{3,\omega}\le C_\omega [ \|\bar a_i^j(\bar a_i^k\tilde v_t,_k),_j\|_{1,\tilde\Omega} +\|\tilde v_t\|_2]\le C_\omega \tilde H(t). \end{align*} With this estimate and the condition $\xi\circ\theta_l,_k=0$ in a neighborhood of $(0,1)^2\times\{0\}$, we then obtain \begin{align} |E_1^1|&\le C \|\tilde q\|_2 \bigl[ C \tilde H(t) \|\tilde\eta\|_3+\|\tilde v\|_3^2\bigr] \|\tilde v_{tt}\|_2\nonumber\\ &\le C (\tilde H(t)^3+1). \label{E11} \end{align} \subsubsection{\bf Estimate for $E_1^2$ and the trace regularity} We now study $E_1^2$, which will be the term bringing the asymptotic regularity of the moving domain $\tilde\eta(\tilde\Omega)$. We have that \begin{align*} E_1^2= \sum_{l=1}^3 E_1^{2l}, \end{align*} with \begin{align*} E_1^{21}&=\sum_{l=1}^K \epsilon^{mnj}\epsilon^{pqk}\int_{(0,1)^3} \xi\tilde q(\theta_l) D^2[\tilde v\circ\theta_l,_p^m\tilde v\circ\theta_l,_q^n] D^2(\tilde v_{tt}\circ\theta_l),_k^j\\ E_1^{22}&=2\sum_{l=1}^K\epsilon^{mnj}\epsilon^{pqk}\int_{(0,1)^3} \xi\tilde q(\theta_l) [D(\tilde v_t\circ\theta_l),_p^m D(\tilde\eta\circ\theta_l),_q^n] D^2(\tilde v_{tt}\circ\theta_l),_k^j\\ E_1^{23}&=\sum_{l=1}^K{\frac{1}{2}}\epsilon^{mni}\epsilon^{pqj}\int_{(0,1)^3}\xi(\theta_l)[D^2(\tilde\eta_{tt}\circ\theta_l),_{p}^m D^2(\tilde\eta_{tt}\circ\theta_l),_j^i]_t\tilde q\circ\theta_l(\tilde\eta\circ\theta_l),_q^n. \end{align*} We first notice that \begin{align} |E_1^{21}|+|E_1^{22}|&\le C \|\tilde q\|_3\|\tilde v\|_3^2\|\tilde v_{tt}\|_2+C \|\tilde q\|_3\|\tilde v_t\|_{\frac{5}{2}}\|\tilde\eta\|_3\|\tilde v_{tt}\|_2\nonumber\\ &\le C \tilde H(t)^4.
\label{E121122} \end{align} Now, for the remaining term $E_1^{23}$, an integration by parts in time provides $$\int_0^t E_1^{23}={\frac{1}{2}} E_1^{231}+{\frac{1}{2}}[E_1^{232}]_0^t,$$ with \begin{align*} E_1^{231}&=- \sum_{l=1}^K\epsilon^{mni}\epsilon^{pqj}\int_0^t\int_{(0,1)^3} \xi(\theta_l)D^2(\tilde v_t\circ\theta_l),_{p}^m D^2(\tilde v_t\circ\theta_l),_j^i(\tilde q\circ\theta_l\ (\tilde \eta\circ\theta_l),_q^n)_t,\\ E_1^{232}&=\sum_{l=1}^K\epsilon^{mni}\epsilon^{pqj}\int_{(0,1)^3} \xi(\theta_l)D^2(\tilde v_t\circ\theta_l),_{p}^m D^2(\tilde v_t\circ\theta_l),_j^i\tilde q\circ\theta_l\ (\tilde \eta\circ\theta_l),_q^n.\\ \end{align*} First, for the perturbation term $E_1^{231}$, by integrating by parts in space (and using $\tilde q=0$ on $\Gamma$): \begin{align*} E_1^{231}&= \int_0^t\sum_{l=1}^K\epsilon^{mni}\epsilon^{pqj}\int_{(0,1)^3} \xi(\theta_l)D^2(\tilde v_t\circ\theta_l),_{pj}^m D^2(\tilde v_t\circ\theta_l)^i(\tilde q\circ\theta_l\ (\tilde\eta\circ\theta_l),_q^n)_t\\ &\ \ \ +\int_0^t\sum_{l=1}^K \epsilon^{mni}\epsilon^{pqj}\int_{(0,1)^3}D^2(\tilde v_t\circ\theta_l),_{p}^m D^2(\tilde v_t\circ\theta_l)^i(\xi\tilde q\circ\theta_l\ (\tilde\eta\circ\theta_l),_q^n)_t,_j. \end{align*} For the first integral, we notice that for any $f$, $g$ smooth, \begin{align*} \epsilon^{mni}\epsilon^{pqj} f,_{pj}^m f^i g,_q^n=\epsilon^{mni}\epsilon^{jqp} f,_{jp}^m f^i g,_q^n, \end{align*} and, since $\epsilon^{pqj}=-\epsilon^{jqp}$, this quantity equals zero, leading to \begin{align*} E_1^{231}&= \int_0^t \sum_{l=1}^K \epsilon^{mni}\epsilon^{pqj}\int_{(0,1)^3} D^2(\tilde v_t\circ\theta_l),_{p}^m D^2(\tilde v_t\circ\theta_l)^i(\xi\tilde q\circ\theta_l(\tilde\eta\circ\theta_l),_q^n)_t,_j. \end{align*} Consequently, as in (\ref{int}) since the derivatives in $D^2$ are horizontal, \begin{align} |E_1^{231}|&\le C \int_0^t \|\tilde v_t\|_{\frac{5}{2}}^2 \bigl[\|\nabla(\tilde q_t\nabla\tilde\eta)\|_1+\|\nabla(\tilde q\nabla\tilde v)\|_1\bigr]\nonumber\\ &\le C t \tilde H(t)^4. \label{E1231} \end{align} Now for $ E_1^{232}$, we will introduce the notation $$V(l)=\tilde v\circ\theta_l \text{ and } E(l)=\tilde\eta\circ\theta_l.$$ We then have after a change of variables made in order to get vector fields whose divergence and curl are controlled: \begin{align*} E_1^{232}= \sum_{l=1}^K \epsilon^{mni}\epsilon^{pqj}\int_{(0,1)^3}\bigl[&\xi\tilde q\circ\theta_l\ (D^2 V_t(l)\circ E(l)^{-1}),_{p_1}^m(E(l)) E(l),_p^{p_1} \\ & (D^2 V_t(l)\circ E(l)^{-1}),_{j_1}^i(E(l)) E(l),_j^{j_1} \ (E(l)\circ E(l)^{-1}),_{q_1}^n E(l),_q^{q_1}\bigr]. \end{align*} Now, we notice that any triplet $(i_1,j_1,q_1)$ such that $\text{Card}\{i_1,j_1,q_1\}<3$ will not contribute to this sum. For instance, if $j_1=q_1$, we notice that by relabeling $j$ and $q$, \begin{align*} \epsilon^{pqj} E(l),_p^{p_1} E(l),_j^{j_1} E(l),_q^{j_1}=\epsilon^{pjq} E(l),_p^{p_1} E(l),_q^{j_1} E(l),_j^{j_1}, \end{align*} where $j_1=q_1$ is fixed in the sum above. Now, since $\epsilon^{pqj} =-\epsilon^{pjq}$, this shows that \begin{align*} \epsilon^{pqj} E(l),_p^{p_1} E(l),_j^{j_1} E(l),_q^{j_1}=0. \end{align*} By a similar argument, \begin{align*} \epsilon^{pqj} E(l),_p^{p_1} E(l),_j^{p_1} E(l),_q^{q_1}=0.
\end{align*} Consequently, only the triplets where $\text{Card}\{i_1,j_1,q_1\}=3$ contribute to $E_1^{232}$, showing that \begin{align*} E_1^{232}= \sum_{l=1}^K \epsilon^{mni}\epsilon^{p_1q_1j_1}\int_{(0,1)^3}\bigl[&\xi\tilde q\circ\theta_l\ \epsilon^{pqj}\epsilon^{p_1q_1j_1} E(l),_p^{p_1}E(l),_j^{j_1} E(l),_q^{q_1}\\ &(D^2 V_t(l)\circ E(l)^{-1}),_{p_1}^m(E(l)) (D^2 V_t(l)\circ E(l)^{-1}),_{j_1}^i(E(l)) \delta_{q_1}^n \bigr]. \end{align*} Since for each given $(p_1,j_1,q_1)$ we have $\epsilon^{pqj}\epsilon^{p_1q_1j_1} E(l),_p^{p_1}E(l),_j^{j_1} E(l),_q^{q_1}=\text{det}\nabla E(l)=1$, we then infer \begin{align*} E_1^{232}= \sum_{l=1}^K \epsilon^{mni}\epsilon^{pqj}\int_{\tilde\eta(\tilde\Omega)}\bigl[\xi\tilde q\circ\tilde\eta^{-1} (D^2 V_t(l)\circ E(l)^{-1}),_{p}^m (D^2 V_t(l)\circ E(l)^{-1}),_{j}^i \delta_{q}^n \bigr]. \end{align*} By integrating by parts in space, we get by using $\tilde q=0$ on $\tilde\Gamma$: \begin{align*} E_1^{232}= - \sum_{l=1}^K \epsilon^{mni}\epsilon^{pqj}\int_{\tilde\eta(\tilde\Omega)}\bigl[\xi\tilde q\circ\tilde\eta^{-1} (D^2 V_t(l)\circ E(l)^{-1}),_{pj}^m (D^2 V_t(l)\circ E(l)^{-1})^i \delta_{q}^n \bigr]\\ - \sum_{l=1}^K \epsilon^{mni}\epsilon^{pqj}\int_{\tilde\eta(\tilde\Omega)}\bigl[(\xi\tilde q\circ\tilde\eta^{-1}),_j (D^2 V_t(l)\circ E(l)^{-1}),_{p}^m (D^2 V_t(l)\circ E(l)^{-1})^i \delta_{q}^n \bigr]. \end{align*} Next, since for any $f$ smooth, \begin{align*} \epsilon^{mni}\epsilon^{pqj} f,_{pj}^m \delta_{q}^n=0, \end{align*} we infer \begin{align*} E_1^{232}&= - \sum_{l=1}^K \epsilon^{mni}\epsilon^{pqj}\int_{\tilde\eta(\tilde\Omega)}(\xi\tilde q\circ\tilde\eta^{-1}),_j (D^2 V_t(l)\circ E(l)^{-1}),_{p}^m (D^2 V_t(l)\circ E(l)^{-1})^i \delta_{q}^n. \end{align*} Now, we notice that for $\delta_n^q\epsilon^{mni}\epsilon^{pqj}\ne 0$, if $p=i$, then necessarily $j=m$. Similarly, if $p\ne i$, then since $p\ne n$, necessarily, $p=m$, and thus $i=j$. Therefore, \begin{align*} E_1^{232}&= -\sum_{l=1}^K\epsilon^{mni}\epsilon^{inm}\int_{\tilde\eta(\tilde\Omega)} (D^2 V_t(l)\circ E(l)^{-1}),_{i}^m (D^2 V_t(l)\circ E(l)^{-1})^i\ (\xi\tilde q\circ\tilde\eta^{-1}),_m\\ &\ \ \ -\sum_{l=1}^K\epsilon^{mni}\epsilon^{mni}\int_{\tilde\eta(\tilde\Omega)} (D^2 V_t(l)\circ E(l)^{-1}),_{m}^m (D^2 V_t(l)\circ E(l)^{-1})^i\ (\xi\tilde q\circ\tilde\eta^{-1}),_i. \end{align*} Now, in the same way as we obtained the divergence and curl estimates (\ref{divv}) and (\ref{curlv}), we have the same type of estimates for $V_t(l)$ leading us to \begin{align*} \|\sqrt{\xi}(\theta_l)\ [(D^2 V_t(l)\circ E(l)^{-1}),_i^m-(D^2 V_t(l)\circ E(l)^{-1}),_m^i]\|_{H^{\frac{1}{2}}(\tilde\eta(\tilde\Omega))'}&\le C,\\ \|\sqrt{\xi}(\theta_l) \text{div}(D^2 V_t(l)\circ E(l)^{-1})\|_{H^{\frac{1}{2}}(\tilde\eta(\tilde\Omega))'}&\le C. \end{align*} The fact that $D^2$ contains horizontal derivatives once again played a crucial role in these estimates. This implies \begin{align*} E_1^{232}&= \sum_{l=1}^K\sum_{i\ne m}\int_{\tilde\eta(\tilde\Omega)} (D^2 V_t(l)\circ E(l)^{-1}),_{m}^i (D^2 V_t(l)\circ E(l)^{-1})^i\ (\xi\tilde q\circ\tilde\eta^{-1}),_m\\ &\ \ \ +\sum_{l=1}^K\int_{\tilde\eta(\tilde\Omega)} (D^2 V_t(l)\circ E(l)^{-1}),_{i}^i (D^2 V_t(l)\circ E(l)^{-1})^i\ (\xi\tilde q\circ\tilde\eta^{-1}),_i+R_1\\ &= \sum_{l=1}^K\int_{\tilde\eta(\tilde\Omega)} (D^2 V_t(l)\circ E(l)^{-1}),_{m}^i (D^2 V_t(l)\circ E(l)^{-1})^i\ (\xi\tilde q\circ\tilde\eta^{-1}),_m +R_1, \end{align*} with $|R_1(t)|\le C t \tilde H(t)$. 
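The $\epsilon$-identities used in this computation are purely algebraic: a repeated upper index makes the contraction vanish by antisymmetry, while a permutation triplet produces $\text{det}\nabla E(l)$. As an aside, and independently of the analysis, they can be checked numerically; the following Python sketch is only an illustration of these two facts and is not part of the argument.
\begin{verbatim}
# Numerical check of the Levi-Civita identities used for E_1^{232}:
#   sum_{p,q,j} eps(p,q,j) A[p][a] A[q][b] A[j][c] = eps(a,b,c) det(A),
# so triplets (a,b,c) with a repeated index give zero, and multiplying a
# permutation triplet by eps(a,b,c) (no sum) recovers det(A) itself.
from itertools import product
import random

def eps(i, j, k):
    # sign of the permutation (i,j,k) of (0,1,2); zero if an index repeats
    if len({i, j, k}) < 3:
        return 0
    return 1 if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)} else -1

def det3(A):
    return sum(eps(i, j, k) * A[0][i] * A[1][j] * A[2][k]
               for i, j, k in product(range(3), repeat=3))

A = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
for a, b, c in product(range(3), repeat=3):
    lhs = sum(eps(p, q, j) * A[p][a] * A[q][b] * A[j][c]
              for p, q, j in product(range(3), repeat=3))
    assert abs(lhs - eps(a, b, c) * det3(A)) < 1e-12
print("epsilon identities verified for a random 3x3 matrix")
\end{verbatim}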
Consequently, \begin{align*} E_1^{232}&=- \frac{1}{2}\sum_{l=1}^K\int_{\tilde\eta(\tilde\Omega)} |D^2 V_t(l)\circ E(l)^{-1}|^2 \triangle(\xi\tilde q\circ\tilde\eta^{-1})\\ &\ \ \ +\frac{1}{2}\sum_{l=1}^K\int_{\partial\tilde\eta(\tilde\Omega)} |D^2 V_t(l)\circ E(l)^{-1}|^2 (\xi\tilde q\circ\tilde\eta^{-1}),_m \tilde n_m+R_1. \end{align*} If we write $\displaystyle\tilde q=\tilde q(0)+\int_0^t\tilde q_t$, $\displaystyle \tilde n=N+\int_0^t \tilde n_t$, and use the fact that $\xi=1$ on $\tilde\Gamma$, we then get \begin{align} E_1^{232}=\frac{1}{2} \sum_{l=1}^K\int_{\partial\tilde\eta(\tilde\Omega)} |D^2 V_t(l)\circ E(l)^{-1}|^2\ \tilde q(0),_m \tilde N_m+R_2, \label{opt3} \end{align} with $|R_2(t)|\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0)$. Together with (\ref{H246}), (\ref{H3}), (\ref{H5}), (\ref{H7}), (\ref{E11}), (\ref{E121122}) and (\ref{E1231}), this provides us on $[0,T_\kappa]$ with \begin{align} \sum_{l=1}^K \int_{(0,1)^3}\xi(\theta_l)|D^2(\tilde v_{tt}\circ\theta_l|^2+&\frac{1}{2} \sum_{l=1}^K \int_{\partial\tilde\eta(\tilde\Omega)} | D^2 V_t(l)\circ E(l)^{-1}|^2\ (-\tilde q(0),_m \tilde N_m)\nonumber\\ &\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0). \label{opt4} \end{align} Similarly as in Section \ref{L10}, from the pressure condition (\ref{lindblad}), this provides us with an estimate of the type: \begin{align} \|\tilde v_{tt}\|_2^2+\|\tilde v_t\|^2_{\frac{5}{2}}\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0). \label{opt5} \end{align} The control \begin{align} \|\tilde q\|_3^2 \le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0), \label{opt6} \end{align} is then easy to achieve by elliptic regularity on the pressure system. Next, we see that (\ref{opt5}) implies that for any $l\in \{1,...,K\}$, $\tilde v_t\circ\theta_l$, and thus $(\tilde q\circ\theta_l),_3 \tilde\eta\circ\theta_l,_1\times\tilde\eta\circ\theta_l,_2$, are controlled in $H^2((0,1)^2\times\{0\})$ by the right-hand side of (\ref{opt5}). This implies the same control on $(0,T_\epsilon)$ for $\displaystyle\frac{\tilde q\circ(\theta_l),_3 \tilde\eta\circ\theta_l,_1\times\tilde\eta\circ\theta_l,_2}{\bigl|\tilde q\circ(\theta_l),_3 \tilde\eta\circ\theta_l,_1\times\tilde\eta\circ\theta_l,_2\bigr|}$ in $H^2((0,1)^2\times\{0\})$, {\it i.e.} that \begin{equation} \label{opt7} |\tilde n|^2_{2,\tilde\Gamma}\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0), \end{equation} which brings the $H^{\frac{7}{2}}$ regularity of the {\it domain} $\tilde\eta(\tilde\Omega)$. Now, for $\tilde v$, we notice that from the identity on $(0,1)^2\times\{0\}$: \begin{align*} V_{tt}(l)+\tilde q\circ\theta_l,_3 V(l),_1\times E(l),_2 + \tilde q\circ\theta_l,_3 E(l),_1\times V(l),_2 + \tilde q_t\circ\theta_l,_3 E(l),_1\times E(l),_2=0, \end{align*} we infer by taking the scalar product of the above vector by $E(l),_1$ that \begin{equation*} |V(l),_1\cdot \tilde n|^2_{{\frac{3}{2}},(0,1)^2\times \{0\}}\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0), \end{equation*} which by divergence and curl relations for $\xi(\theta_l) V(l),_1(E(l)^{-1})$ similar in spirit to the ones in Section \ref{L8}, leads to \begin{equation*} \|\xi(\theta_l) V(l),_1\|^2_{2,(0,1)^3}\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0). \end{equation*} In a similar fashion, \begin{equation*} \|\xi(\theta_l) V(l),_2\|^2_{2,(0,1)^3}\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0). 
\end{equation*} Now, with divergence and curl relations for $\xi(\theta_l) V(l)(E(l)^{-1})$ similar in spirit to the ones in Section \ref{L8}, this leads to \begin{equation*} \|\xi \tilde v\|^2_3\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0), \end{equation*} and consequently, with the control of the divergence and curl of $\tilde v$ inside $\tilde\Omega$ as in Section \ref{L8}, we get \begin{equation} \label{opt8} \|\tilde v\|^2_3\le \delta \tilde H(t)^2+C_\delta t \tilde H(t)^4 +C_\delta N(u_0). \end{equation} Now, with the estimates (\ref{opt5}), (\ref{opt6}), (\ref{opt7}) and (\ref{opt8}), we then get, as in Section \ref{L11}, the existence of a time $T>0$ independent of $\epsilon$ such that on $(0,T)$ the estimates (\ref{assume2}) hold, and such that we have $\tilde H(t)\le N(u_0)$ on $(0,T)$ for any $\epsilon>0$ small enough. Therefore, we obtain a solution to the problem, with optimal regularity on the initial data and domain, as the weak limit as $\epsilon\rightarrow 0$. \section{Uniqueness} \label{L13} Let $(v,q)$ and $(\bar v,\bar q)$ be solutions of (\ref{lindblad}) on $[0,T]$. We denote $\delta v=v-\bar v$ and $\delta q=q-\bar q$. We then introduce the energy: $$\displaystyle f(t)= \sum_{l=1}^K \int_{(0,1)^3}\xi(\theta_l)|D^2 (v_{tt}\circ\theta_l-\bar v_{tt}\circ\theta_l)|^2,$$ where $D^2 v$ stands for any second order horizontal space derivative $v,_{i_1i_2}$. By proceeding in the same way as in the previous section, and using the fact that the divergence and curl of $\delta v$ have a transport-type structure as well, we obtain an energy inequality similar to (\ref{opt5}), without the presence of $N(u_0)$ (since $\delta v(0)=0$). This establishes uniqueness of solutions. \section*{Acknowledgments} DC and SS were supported by the National Science Foundation under grant NSF ITR-0313370. We thank the referee for the major time and effort expended on the careful reading of our manuscript.
\section{Introduction} Let $\mathbb{F}$ be a fixed finite field of prime order $p$, and consider the vector space $\mathbb{F}^n$. We identify $\mathbb{F}^n$ with its own dual via the dot product, and in this way define the Fourier transform of a function $f : \mathbb{F}^n \rightarrow \mathbb{C}$ by \[ \hat{f}(r) := \mathbb{E}_{x \in \mathbb{F}^n} f(x) e_p(-r \cdot x),\] where $e_p(t) := e^{2\pi i t/p}$ and $r$ takes values in $\mathbb{F}^n$. When $\mathbb{F} = \mathbb{F}_2$, we have $e_p(t) = (-1)^t$. For any non-empty set $S \subset \mathbb{F}^n$ we write $\mu_S$ for the uniform probability measure induced on $S$, that is the probability measure assigning mass $|S|^{-1}$ to each $s \in S$. \begin{definition} Suppose that $A \subset \mathbb{F}^n$ is a set, that $V \leq \mathbb{F}^n$ is a subspace and that $\varepsilon > 0$. Let $x \in \mathbb{F}^n$. Then we say that $A$ is $\varepsilon$-uniform on the coset $x + V$ if \[ \sup_{r \notin V^{\perp}}| (1_A \mu_{x + V})^{\wedge}(r)| \leq \varepsilon.\] \end{definition} The following fact, proven by a ``density increment argument'', is well-known in the additive combinatorics literature and is implicit, for example, in the work of Meshulam \cite{mes::}. \begin{theorem} Suppose that $A \subset \mathbb{F}^n$ is a set and that $\varepsilon > 0$. Then there is a subspace $V \leq \mathbb{F}^n$, $\codim V \ll \varepsilon^{-1} $, and an $x \in \mathbb{F}^n$ such that $A$ is $\varepsilon$-uniform on the coset $x + V$. \end{theorem} Our aim in this note is to prove the following. \begin{theorem}\label{mainthm} Suppose that $A \subset \mathbb{F}_2^n$ is a set and that $\varepsilon > 0$. Then there is a subspace $V \leq \mathbb{F}_2^n$, $\codim V \ll_{\varepsilon} 1$, such that $A$ is $\varepsilon$-uniform on $V$. \end{theorem} \emph{Remarks.} Sadly, the implied constant in $\varepsilon$ is atrocious, being a tower of towers of height $O(\varepsilon^{-1})$. It would be interesting to get a better bound. Note that it is quite permissible for $A \cap V$ to be empty, and indeed this is generally unavoidable, as the example $A = \{x : x_1 = 1\}$ shows. In Section \ref{f3-example} we give a simple example to show that this statement is not true when $\mathbb{F}_2$ is replaced by $\mathbb{F}_3$. \section{An example over $\mathbb{F}_3$}\label{f3-example} In this section we give a simple example to show that the analogue of Theorem \ref{mainthm} is false over $\mathbb{F}_3$ (similar examples may be constructed over other prime fields). The example comes from the literature on Rado's theorem over finite fields, in particular from \cite{BDH}. Indeed, if Theorem \ref{mainthm} \emph{had} been true over $\mathbb{F}_3$ it would have implied that every homogeneous equation in three or more variables is partition regular in $\mathbb{F}_3^{\mathbb{N}}$, a result which is known to be false. \begin{theorem}\label{thm.main2} There is a set $A \subset \mathbb{F}_3^n$ such that for any subspace $V \leq \mathbb{F}_3^n$ of positive dimension we have \[ \sup_{r \notin V^{\perp}} |(1_A d\mu_V)^{\wedge}(r)| \geq \frac{\sqrt{3}}{6}.\] \end{theorem} \begin{proof} Take $A = \{ x \in \mathbb{F}_3^n : \mbox{there exists $i$ such that } \; x_1 = \dots = x_i = 0, x_{i + 1} = 1\}$. Let $V \leq \mathbb{F}_3^n$ be a subspace of positive dimension, and let $j \in [n]$ be minimal such that $v_j \neq 0$ for at least one $v \in V$. Of course, we then have $x_1 = \dots = x_{j-1} = 0$ for all $x \in V$.
It follows that \begin{equation}\label{inclu-1}\{ x \in V: x_j = 1\} \subset A,\end{equation} whilst \begin{equation}\label{inclu-2}\{x \in V: x_j = 2\} \cap A = \emptyset.\end{equation} Take $r \in \mathbb{F}_3^n$ to have $r_1 = \dots = r_{j-1} = r_{j+1} = \dots = r_n = 0$ and $r_j = 1$. Then $r \notin V^{\perp}$, since $r \cdot v = v_j \neq 0$ for some $v \in V$. Furthermore, a short computation using \eqref{inclu-1} and \eqref{inclu-2} gives \[ \mbox{Im} \big((1_A d\mu_V)^{\wedge}(r)\big) = -\frac{\sqrt{3}}{2} \mu_V \big( \{ x \in V: x_j = 1\}\big) = - \frac{\sqrt{3}}{6},\] the second equality being a consequence of the fact that the map $V \rightarrow \mathbb{F}_3$ given by $x \mapsto x_j$ is linear and nontrivial. \end{proof} \section{A Ramsey result for almost colourings} By a $(1 - \delta)$-almost $r$-colouring of a set $X$, we mean a map $c : \tilde X \rightarrow [r]$, where $\tilde X \subseteq X$ and $|X \setminus \tilde X| \leq \delta |X|$. \begin{proposition}\label{finite-union} Let $r,d$ be integers. Then there is an $\eta(r,d) > 0$ such that the following is true. If $n \geq n_0(r,d)$ is sufficiently large, $\eta \in [0,\eta(r,d)]$, and if we have a $(1 - \eta)$-almost $r$-colouring of $\mathbb{F}_2^n$, then there are linearly independent $x_1,\dots, x_d$ such that all of the sums $\sum_{i \in I} x_i$, $I \subset [d]$, $I \neq \emptyset$, are the same colour. \end{proposition} \begin{proof} In the case $\eta = 0$ (that is, genuine $r$-colourings rather than almost-colourings) this follows quickly from a well-known theorem of Graham and Rothschild \cite[Corollary 1]{gr2} (for a short proof see \cite{nesetril-rodl}). Indeed, our colouring of $\mathbb{F}_2^n$ induces a colouring of the power set $\mathcal{P}([n])$ via the usual identification of these two sets by characteristic functions. In the power set $\mathcal{P}([n])$, the theorem of Graham and Rothschild guarantees that if $n$ is sufficiently large then there are disjoint non-empty subsets $S_1,\dots, S_d \subset [n]$ such that every nontrivial union $\bigcup_{i \in I} S_i$, $I \subset [d]$, $I \neq \emptyset$, is the same colour. These sets pull back under the identification to give $x_1,\dots, x_d$ with the claimed property. We may deduce the stronger result claimed (that is, with $\eta > 0$) by a simple averaging argument. Let $m = m(r,d)$ be a value of $n$ for which the result is true with $\eta = 0$. Now take $\eta(r,d):= 2^{-m-1}$, and suppose $\eta \in [0,\eta(r,d)]$ and we have a $(1 - \eta)$-almost colouring of $\mathbb{F}_2^n$. If $n \geq m+3$, this induces a $(1 - 2\eta(r,d))$-almost colouring of $\mathbb{F}_2^n \setminus \{0\}$. Now $\mathbb{F}_2^n \setminus \{0\}$ is uniformly covered by sets $V \setminus \{0\}$, where $V$ ranges over all $m$-dimensional subspaces of $\mathbb{F}_2^n$. Therefore, by the pigeonhole principle, there is some $V$ for which we get an induced $(1 - 2\eta(r,d))$-almost colouring of $V \setminus \{0\}$. However, since $1 - 2\eta(r,d) \geq 1- 2^{-m} > 1 - \frac{1}{|V \setminus \{0\}|}$, this is in fact a full colouring of $V \setminus \{0\}$. The result follows by the choice of $m$. \end{proof} \section{Proof of the main theorem} Suppose that $A \subset \mathbb{F}_2^n$ is a set and we are given a parameter $\varepsilon \in (0,1]$. Choose integers $d, r$ with $2^d \sim r \sim \frac{1}{\varepsilon}$, and let $\eta = \eta(r,d)$ be the parameter whose existence is guaranteed by Proposition \ref{finite-union}.
By the ``arithmetic regularity lemma'' in this context \cite{gre::02}, there is\footnote{Strictly speaking, the argument as presented in \cite{gre::02} does not guarantee a \emph{lower} bound on $\codim W$. However, this may very easily be arranged with a trivial modification of the proof, for example by foliating $\mathbb{F}_2^n$ into cosets of some arbitrary subspace of codimension $n_0(r,d)$ and then running the energy increment argument as in that paper.} some subspace $W \leq \mathbb{F}_2^n$, $n_0(r,d) \leq \codim W \ll_{\varepsilon} 1$, such that \begin{equation}\label{reg} \sup_{r \notin W^{\perp}} |(1_A \mu_{x + W})^{\wedge}(r)| \leq \varepsilon\end{equation} for a proportion at least $1 - \eta$ of all $x \in \mathbb{F}_2^n$. For notational simplicity, put $m := \codim W$, $X:=\mathbb{F}_2^m$, and change basis so that $W = \{0_X\} \times \mathbb{F}_2^{n - m}$. Let $\tilde X \subset X$ be the set of all $x \in \mathbb{F}_2^m$ for which \eqref{reg} holds. Thus $|\tilde X| \geq (1 - \eta)|X|$. Define an $(r+1)$-colouring $c : \tilde X \rightarrow \{0,1,\dots, r\}$ (and hence a $(1 - \eta)$-almost $(r+1)$-colouring of $X$) by defining $c(x) := \lfloor r \mathbb{E}_{x + W} 1_A \rfloor$. That is, $c(x) = j$ if the density of $A$ on $x + W$ lies in the range $[\frac{j}{r}, \frac{j+1}{r})$. By Proposition \ref{finite-union}, we may find linearly independent $x_1,\dots, x_d \in \mathbb{F}_2^m$ and $j \in \{0,1,\dots, r\}$ such that $c(\sum_{i \in I} x_i) = j$ for all $I \subset [d]$, $I \neq \emptyset$. Set $V := W + \langle x_1,\dots, x_d\rangle$. We claim that \begin{equation}\label{to-check} \sup_{r \notin V^{\perp}} |(1_A \mu_V)^{\wedge}(r)| = O(\varepsilon),\end{equation} which (after redefining $\varepsilon$ to $\varepsilon/C$) implies our main theorem. Suppose first that $r \notin W^{\perp}$. Note that \begin{equation}\label{tbn} \mu_V = 2^{-d} \sum_{I \subset [d]} \mu_{W + \sum_{i \in I} x_i}.\end{equation} If $I \neq \emptyset$ we have $\sum_{i \in I} x_i \in \tilde X$, and so in this case \[ |(1_A \mu_{\sum_{i \in I} x_i + W})^{\wedge}(r)| \leq \varepsilon\] by \eqref{reg}. Summing over all $I \neq \emptyset$, and handling the case $I = \emptyset$ trivially, we have \[ |(1_A \mu_V)^{\wedge}(r)| \leq 2^{-d} + \varepsilon = O(\varepsilon),\] as desired. Now suppose that $r \in W^{\perp} \setminus V^{\perp}$. In this case \[ (1_A \mu_{x + W})^{\wedge}(r) = (-1)^{r \cdot x} \mathbb{E}_{x + W} 1_A.\] Hence from \eqref{tbn} we have \[ (1_A \mu_V)^{\wedge}(r) = 2^{-d} \sum_{I \subset [d]} (-1)^{r \cdot \sum_{i \in I} x_i} \mathbb{E}_{\sum_{i \in I} x_i + W} 1_A.\] By construction, \[ \frac{j}{r} \leq \mathbb{E}_{\sum_{i \in I} x_i + W} 1_A < \frac{j+1}{r}\] whenever $I \neq \emptyset$. It follows that \[ (1_A \mu_V)^{\wedge}(r) = \frac{j}{r} 2^{-d} \sum_{I \subset [d]} (-1)^{r \cdot \sum_{i \in I} x_i} + O(2^{-d} + \frac{1}{r}).\] However \[ \sum_{I \subset [d]} (-1)^{r \cdot \sum_{i \in I} x_i} = \prod_{i = 1}^d (1 + (-1)^{r \cdot x_i}),\] and at least one of the factors here vanishes since $r \notin V^{\perp}$. Hence \[ (1_A \mu_V)^{\wedge}(r) = O(2^{-d} + \frac{1}{r}) = O(\varepsilon),\] and the proof is complete.
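\emph{Remark.} The quantities appearing in the argument are easy to experiment with numerically. The following Python sketch is an illustration only (the dimension and the subspace below are arbitrary choices; the set $A$ is the example from the remarks above): it computes $(1_A \mu_{x_0 + V})^{\wedge}(r)$ over $\mathbb{F}_2^n$ directly from the definitions in the introduction and reports the uniformity $\sup_{r \notin V^{\perp}} |(1_A \mu_{x_0 + V})^{\wedge}(r)|$ of a set $A$ on a coset $x_0 + V$.
\begin{verbatim}
# Illustration over F_2^n: uniformity of a set A on a coset x0 + V.
# Here A = {x : x_1 = 1} (the set from the Remarks); on the coset below
# 1_A is identically 1, so the printed supremum is 0.
from itertools import product

n = 6
points = [tuple(v) for v in product((0, 1), repeat=n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

A = {x for x in points if x[0] == 1}

# V = span of the last n-2 coordinate vectors; coset representative x0.
basis = [tuple(1 if i == j else 0 for i in range(n)) for j in range(2, n)]
V = {tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % 2 for i in range(n))
     for coeffs in product((0, 1), repeat=len(basis))}
x0 = tuple([1] + [0] * (n - 1))
coset = {tuple((a + b) % 2 for a, b in zip(x0, v)) for v in V}
V_perp = {r for r in points if all(dot(r, v) == 0 for v in V)}

def fourier(r):
    # (1_A mu_{x0+V})^(r) = E_{y in x0+V} 1_A(y) (-1)^{r.y}
    return sum((1 if y in A else 0) * (-1) ** dot(r, y) for y in coset) / len(coset)

print(max(abs(fourier(r)) for r in points if r not in V_perp))
\end{verbatim}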
\section{Introduction} \label{i} The simple quark model (SQM), though qualitatively successful, fails to account for low energy properties of baryons quantitatively. Experimentally~\cite{1}, it is found that quarks cannot even account for the proton spin and thus it is necessary to go beyond SQM. Since quarks interact through strong color forces mediated by gluons, a physical hadron, in reality, consists of valence quarks surrounded by a ``sea'' of gluons and quark-antiquark ($q\bar q$) pairs. The effect of the sea contribution to hadron structure has been considered by several authors~\cite{2,3,4,5,6,7,8,9}. In this paper, we study the static properties of the spin 1/2 baryons ($p$, $n$, $\Lambda$, ...) following Refs.~\cite{8} and~\cite{9}, where the general sea is specified by its total flavor, spin and color quantum numbers. The baryons are pictured as a composite system made out of a baryon ``core'' of the three valence quarks (as in SQM) and a flavor octet sea with spin 0 and 1 but no color. Earlier~\cite{8,9}, such a physical baryon wavefunction was applied to baryon magnetic moments and semileptonic decays and gave excellent fits. The purpose of this paper is to use this wavefunction to obtain a simultaneous fit to masses, magnetic moments and semileptonic decays. Sec.~\ref{ii} gives the wavefunction for the physical baryon in our model. Sec.~\ref{iii} presents a discussion of the mass operator used and the models for the ``core'' baryon masses. It also gives briefly how the magnetic moments and semileptonic decays are calculated. Sec.~\ref{iv} gives the combined fits to masses, magnetic moments and semileptonic decays, while a prediction of these fits for nucleon spin distributions is discussed in Sec.~\ref{v}. Lastly, Sec.~\ref{vi} gives some concluding remarks. \section{Spin 1/2 Baryon Wave Functions with Sea} \label{ii} The physical baryon octet states, denoted by $B(1/2\uparrow)$, are obtained by combining the ``core'' wavefunction $\tilde{B}({\bf 8},1/2)$ (the usual SQM spin 1/2 baryon octet wave function) with a sea wavefunction whose specific properties are given below. We assume the sea is a color singlet but has flavor and spin properties which, when combined with those of the core baryons $\tilde{B}$, give the desired properties of the physical baryon $B$. Since both the physical and core baryon have $J^P=\frac{1}{2}^+$, this implies that the sea has even parity and spin 0 or 1. The spin 0 and 1 wavefunctions for the sea are denoted by $H_0$ and $H_1$, respectively. We also refer to a spin 0 (1) sea as a scalar (vector) sea. For $SU(3)$ flavor we assume the sea has an $SU(3)$ singlet component and an octet component described by wavefunctions $S({\bf 1})$ and $S({\bf 8})$, respectively. The color singlet sea in our model is thus described by the wavefunctions $S({\bf 1})H_0$, $S({\bf 1})H_1$, $S({\bf 8})H_0$, and $S({\bf 8})H_1$. The total flavor-spin wavefunction of a spin-up ($\uparrow$) physical baryon, which consists of 3 valence quarks and a sea component (as discussed above), can be written schematically as \begin{eqnarray} B(1/2\uparrow) &=& \tilde{B}({\bf 8},1/2\uparrow)H_{0}S({\bf 1})+ b_{{\bf 0}}\left[\tilde{B}({\bf 8},1/2) \otimes H_{1}\right]^{\uparrow}S({\bf 1}) \nonumber\\ && +\sum_{N}a(N)\left[\tilde{B}({\bf 8},1/2\uparrow)H_{0} \otimes S({\bf 8})\right]_{N} \label{1}\\ && +\sum_{N}b(N)\left\{[\tilde{B}({\bf 8},1/2)\otimes H_{1}]^{\uparrow} \otimes S({\bf 8})\right\}_{N}.
\nonumber \end{eqnarray} \noindent The first term is the usual $q^{3}$-wavefunction of the SQM (with a trivial sea) and the second term (coefficient $b_0$) comes from spin-1 (vector) sea which combines with the spin 1/2 core baryon $\tilde{B}$ to a spin 1/2$\uparrow$ state. So that, \begin{equation} \left[\tilde{B}({\bf 8},1/2)\otimes H_{1}\right]^{\uparrow} = \sqrt{\frac{2}{3}}\tilde{B}({\bf 8},1/2\downarrow)H_{1,1} - \sqrt{\frac{1}{3}}\tilde{B}({\bf 8},1/2\uparrow)H_{1,0}. \label{2} \end{equation} \noindent In both these terms the sea is a flavor singlet. The third (fourth) term in Eq.~(\ref{1}) contains a scalar (vector) sea which transforms as a flavor octet. The various $SU(3)$ flavor representations obtained from $\tilde{B}({\bf 8})\otimes S({\bf 8})$ are labelled by $N={\bf 1,8_{F},8_{D},10,\bar{10},27}$. As it stands, Eq.~(\ref{1}) represents a spin 1/2$\uparrow$ baryon which is not {\em a pure flavor octet} but has an admixture of other $SU(3)$ representations weighted by the unspecified constants $a(N)$ and $b(N)$. It will be a flavor octet if $a(N)=b(N)=0$ for $N={\bf 1,10,\bar{10},27}$. The color wavefunctions have not been indicated as the three valence quarks in the core $\tilde{B}$ and the sea (by assumption) are in a color singlet state. The sea isospin multiplets contained in the $SU(3)$ flavor octet $S({\bf 8})$ are denoted as $(S_{\pi^+},S_{\pi^0},S_{\pi^-})$, $(S_{K^+},S_{K^0})$, $(S_{\bar{K}^0},S_{K^-})$, and $S_{\eta}$. The familiar pseudoscalar mesons are used here as subscripts to label the isospin and hypercharge quantum numbers of the sea states. Details of the wavefunction have been given earlier~\cite{8,9}. However, for completeness the explicit physical baryon states in terms of the core and sea states are given in Tables~\ref{tabla1} and \ref{tabla2}. The normalization of a given baryon state (not indicated in Eq.~(\ref{1})) depends on the parameters which enter in the wavefunction and is different for different isospin multiplets (see Table~\ref{tabla3}). For our applications we adopt the phenomenological wavefunction given in Eq.~(\ref{1}), where the physical spin 1/2 baryons have admixtures of flavor $SU(3)$ determined by the coefficients $a(N)$ and $b(N)$, $N={\bf 1,10,\bar{10},27}$. As we shall see, such a wavefunction which respects the isospin and hypercharge properties of the usual spin 1/2 baryon states is general enough to provide an excellent fit to the masses, magnetic moments and semileptonic decays data simultaneously. Only few of the thirteen parameters in Eq.~(\ref{1}) are needed for this purpose. For applications, we need the quantities $(\Delta q)^B$, $q=u,d,s$; for each spin-up baryon $B$. These are defined as \begin{equation} (\Delta q)^B = n^B(q\uparrow)-n^B(q\downarrow)+n^B(\bar{q}\uparrow)-n^B(\bar{q}\downarrow), \label{4} \end{equation} \noindent where $n^B(q\uparrow)$ ($n^B(q\downarrow)$) are the number of spin-up (spin-down) quarks of flavor $q$ in the spin-up baryon $B$. Also, $n^B(\bar{q}\uparrow)$ and $n^B(\bar{q}\downarrow)$ have a similar meaning for antiquarks. However, these are zero as there are no explicit antiquarks in the wavefunctions given by Eq.~(\ref{1}). The expressions for $(\Delta q)^B$ reduce to the SQM values if there is no sea contribution, that is, $b_0=0$, $a(N)=b(N)=0$, $N={\bf 1,8_F,8_D,10,\bar{10},27}$. 
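As an illustration of Eq.~(\ref{4}), the SQM limit can be made completely explicit. The short Python sketch below is illustrative only: it evaluates $(\Delta q)^p$ for the standard SU(6) spin-flavor wavefunction of a spin-up proton and reproduces the familiar SQM values $(\Delta u)^p=4/3$, $(\Delta d)^p=-1/3$, $(\Delta s)^p=0$, to which the expressions of Ref.~\cite{8} reduce when all the sea parameters are set to zero.
\begin{verbatim}
# SQM check of Eq. (4): (Delta q)^p from the standard SU(6) wavefunction
#   |p up> = (1/sqrt(18)) [ 2 u+u+d- - u+u-d+ - u-u+d+ + permutations ],
# i.e. 9 terms with amplitudes 2 or -1 (sum of squares = 18).
from fractions import Fraction

terms = []  # (amplitude, ((flavor, spin), (flavor, spin), (flavor, spin)))
for order in [("u", "u", "d"), ("u", "d", "u"), ("d", "u", "u")]:
    d_pos = order.index("d")
    u1, u2 = [i for i in range(3) if i != d_pos]
    for amp, spins in [(2, {u1: +1, u2: +1, d_pos: -1}),
                       (-1, {u1: +1, u2: -1, d_pos: +1}),
                       (-1, {u1: -1, u2: +1, d_pos: +1})]:
        terms.append((amp, tuple((order[i], spins[i]) for i in range(3))))

norm = sum(a * a for a, _ in terms)  # = 18
delta = {"u": Fraction(0), "d": Fraction(0), "s": Fraction(0)}
for amp, config in terms:
    weight = Fraction(amp * amp, norm)     # probability of this configuration
    for flavor, spin in config:
        delta[flavor] += weight * spin     # n(q up) - n(q down), weighted

print(delta)  # {'u': Fraction(4, 3), 'd': Fraction(-1, 3), 's': Fraction(0, 1)}
\end{verbatim}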
\section{Magnetic Moments, Masses and Semileptonic Decays} \label{iii} For any operator $\hat{O}$ which depends only on quarks, the matrix elements are easily obtained using the orthogonality of the sea components. Clearly $\langle B\uparrow|\hat{O}|B'\uparrow\rangle$ will be a linear combination of the matrix elements $\langle \tilde{B}\uparrow|\hat{O}|\tilde{B'}\uparrow\rangle$ (known from SQM) with coefficients which depend on the coefficients in the wavefunction, Eq.~(\ref{1}). \subsection{Magnetic Moments (MM's)} \label{iiia} We assume the baryon magnetic moment operator, $\hat{\mu}$, to act solely on the valence quarks in $\tilde B$, so that \begin{equation} \hat{\mu}\equiv \sum_{q} \mu_q \sigma_z^q \label{5} \end{equation} \noindent where $\mu_{q}=e_{q}/2m_{q}$ and $e_q$ and $m_q$ are quark charge and mass for $q=u,d,s$. It is possible to show that the MM's of the spin 1/2 baryons, $\mu_{B}$ ($B=p,n,\Lambda,\dots$), and the transition magnetic moment, $\mu_{\Sigma^{0}\Lambda}$, can be written as \begin{equation} \mu_{B}=\sum_{q=u,d,s}(\Delta q)^{B}\mu_{q} \qquad \hbox{and} \qquad \mu_{\Sigma^{0}\Lambda}=\sum_{q=u,d}(\Delta q)^{\Sigma^{0}\Lambda}\mu_{q}, \label{6} \end{equation} \noindent where the $(\Delta q)^{B}$ are defined in Eq.~(\ref{4}). Expressions for $(\Delta q)^{B}$ in terms of the parameters $b_0$, $\beta_i$ and $\beta_i^\prime$ are given in Ref.~\cite{8}. From Eqs.~(\ref{6}) we see that the MM's depend on the quark masses (or quark MM's) and on the parameters $b_0$, $a(N)$, $b(N)$ which determine the sea. \subsection{Masses} \label{iiib} For masses we assume that the mass operator $H$ acts only on the quarks in the core $\tilde B$, this gives the physical baryon masses \begin{equation} m_{B}=\sum_{\tilde B} \Omega_{B\tilde B}m_{\tilde B} \label{8} \end{equation} \noindent as a linear combination of the eight ``core" baryon masses $m_{\tilde B}$ weighted by the coefficients $\Omega_{B\tilde B}$ (given in Table~\ref{tabla3}) which depend on the parameters of the wavefunction, Eq.~(\ref{1}). The parameters in the wavefunction can be fixed by fitting other data ({\em e.g.} MM's) and thus determine $\Omega_{B\tilde B}$. However, we still need to know $m_{\tilde B}$ to be able to calculate $m_B$. For this purpose, we assume the mass operator of the form \begin{equation} {H}=H_0+H_8+H_3, \label{9} \end{equation} \noindent where $H_0$ is flavor $SU(3)$ singlet and $H_8$ transforms like the eighth component of an octet and breaks flavor $SU(3)$ down to $SU(2)_{I}\otimes U(1)_{Y}$. The last term $H_3$ transforms like $I=1$, $I_3=0$ or third component of an octet. It breaks $SU(2)_{I}$ giving different masses to members of an isospin multiplet in $\tilde{B}({\bf 8})$. Given these general transformation properties for ${H}$, one can express the eight masses of the core baryon octet as \begin{eqnarray} m_{\tilde{p}}&\equiv& m_0-(F_8+D_8)-(F_3-D_3), \nonumber\\ m_{\tilde{n}}&\equiv& m_0-(F_8+D_8)+(F_3-D_3), \nonumber\\ m_{\tilde{\Lambda}}&\equiv& m_0-2D_8, \nonumber\\ m_{\tilde{\Sigma}^{+}}&\equiv& m_0+2D_8-2F_3, \nonumber\\ m_{\tilde{\Sigma}^{0}}&\equiv& m_0+2D_8, \label{10} \\ m_{\tilde{\Sigma}^{-}}&\equiv& m_0+2D_8+2F_3, \nonumber\\ m_{\tilde{\Xi}^{0}}&\equiv& m_0+(F_8-D_8)-(F_3+D_3), \nonumber\\ m_{\tilde{\Xi}^{-}}&\equiv& m_0+(F_8-D_8)+(F_3+D_3), \nonumber \end{eqnarray} \noindent where $m_0\equiv\langle\tilde{B}|H_0|\tilde{B}\rangle$ is the common core mass, while $F_8$ and $D_8$ ($F_3$ and $D_3$) represent the two reduced matrix elements for ${H_8}$ (${H_3}$). 
It is clear that our choice of ${H}$ guarantees the three sum rules \begin{equation} m_{\tilde{\Sigma}^{0}}=\frac{1}{2} (m_{\tilde{\Sigma}^{+}}+m_{\tilde{\Sigma}^{-}}), \label{11} \end{equation} \begin{equation} m_{\tilde{p}}-m_{\tilde{n}}=(m_{\tilde{\Sigma}^{+}}-m_{\tilde{\Sigma}^{-}})- (m_{\tilde{\Xi}^{0}}-m_{\tilde{\Xi}^{-}}), \label{12} \end{equation} and \begin{equation} 2(m_{\tilde{N}}+m_{\tilde{\Xi}})=3m_{\tilde{\Lambda}}+m_{\tilde{\Sigma}} \label{13} \end{equation} where \begin{equation} m_{\tilde{N}}\equiv \frac{1}{2}(m_{\tilde{p}}+m_{\tilde{n}}),\; m_{\tilde{\Xi}}\equiv \frac{1}{2}(m_{\tilde{\Xi}^{0}}+m_{\tilde{\Xi}^{-}}),\; m_{\tilde{\Sigma}}\equiv\frac{1}{3}(m_{\tilde{\Sigma}^{+}}+m_{\tilde{\Sigma}^{-}}+m_{\tilde{\Sigma}^0}), \label{13'} \end{equation} are the average masses of the isospin multiplets. Eqs.~(\ref{12}) and (\ref{13}) correspond to the Coleman-Glashow~\cite{10} and the Gell-Mann-Okubo~\cite{11} mass formulas for the core baryons. The physical baryon masses $m_B$ do not obey these two relations exactly due to the $SU(3)$ breaking in the wavefunction (Eq.~(\ref{1})) due to parameters $a(N)$, $b(N)$ for $N={\bf 1,10,\bar{10},27}$. However, Eq.~(\ref{11}) is obeyed by $m_{\Sigma^{\pm}}$ and $m_{\Sigma^0}$ since our wavefunction respects isospin. As they stand, Eqs.~(\ref{10}) provide a model for the eight baryon masses $m_{\tilde B}$ in terms of five unknown $m_0$, $F_3$, $F_8$, $D_3$ and $D_8$. We can treat these five as independent parameters or try and connect them with the quark masses $m_q$ which enter in the baryon MM's through $\mu_q$. To do this, we note that the naive assumption that $m_{\tilde B}$ is equal to the sum of the masses of its three constituent quarks gives $m_{\tilde p}=2m_u+m_d$, $m_{\tilde\Lambda}=m_{\tilde{\Sigma}^0}=m_u+m_d+m_s$, $m_{\tilde{\Xi}^0}=2m_s+m_d$, etc. Such a model would amount to putting \begin{eqnarray} m_{0} &\equiv& m_{u}+m_{d}+m_{s}, \nonumber\\ F_{8} &\equiv& m_{s}-\frac{1}{2}(m_{u}+m_{d}), \label{14} \\ F_3 &\equiv&\frac{1}{2}(m_d-m_u), \nonumber \end{eqnarray} with $D_3=D_8=0$ in Eqs.~(\ref{10}). Motivated by this observation, for our fits we also consider the alternative model for $m_{\tilde B}$ where \begin{eqnarray} m_{\tilde p}&=&2m_u+m_d-(D_8-D_3), \nonumber\\ m_{\tilde n}&=&m_u+2m_d-(D_8+D_3), \nonumber\\ m_{\tilde \Lambda}&=&m_u+m_d+m_s-2D_8, \nonumber\\ m_{\tilde{\Sigma}^{+}}&=&2m_u+m_s+2D_8, \nonumber\\ m_{\tilde{\Sigma}^{0}}&=&m_u+m_d+m_s+2D_8, \label{15} \\ m_{\tilde{\Sigma}^{-}}&=&2m_d+m_s+2D_8, \nonumber\\ m_{\tilde \Xi^{0}}&=&m_u+2m_s-(D_8+D_3), \nonumber\\ m_{\tilde \Xi^{-}}&=&m_d+2m_s-(D_8-D_3). \nonumber \end{eqnarray} This model for $m_{\tilde B}$ treats $D_3$ and $D_8$ as extra independent parameters in fitting $m_B$ unlike the model for $m_{\tilde B}$ in Eqs.~(\ref{10}) which has five parameters. We will use both the models (Eqs.~(\ref{10}) and Eqs.~(\ref{15})) for $m_{\tilde B}$ to make simultaneous fits to baryon masses and MM's. Since the actual baryon masses satisfy Eqs.~(\ref{11})--(\ref{13}) to a good accuracy, one may ask: why not fit the $m_B$ with five parameters as in Eq.~(\ref{10})?; and does the wavefunction or coefficients $\Omega_{B\tilde B}$ play a significant role? The answers lie in the fact that a fit to the 8 physical masses $m_B$ using Eqs.~(\ref{10}) directly (with a theoretical error of 1 MeV) gives $\chi^2/DOF=50.37/3$. Instead, the use of Eq.~(\ref{8}) for $m_B$ with $m_{\tilde B}$ given by Eqs.~(\ref{10}) gives very good fits to $m_{B}$ (see Secs.~\ref{iv} and \ref{v}). 
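To quantify how well the physical masses satisfy these relations, the short Python sketch below evaluates the Coleman-Glashow and Gell-Mann-Okubo combinations with rounded experimental octet masses (in MeV); these numbers are quoted for orientation only and are not the inputs used in our fits.
\begin{verbatim}
# Illustrative check of Eqs. (12)-(13) with rounded experimental masses (MeV).
m = {"p": 938.3, "n": 939.6, "Lambda": 1115.7,
     "Sigma+": 1189.4, "Sigma0": 1192.6, "Sigma-": 1197.4,
     "Xi0": 1314.9, "Xi-": 1321.7}

# Coleman-Glashow, Eq. (12): m_p - m_n = (m_Sigma+ - m_Sigma-) - (m_Xi0 - m_Xi-)
cg_lhs = m["p"] - m["n"]
cg_rhs = (m["Sigma+"] - m["Sigma-"]) - (m["Xi0"] - m["Xi-"])
print("Coleman-Glashow:", round(cg_lhs, 2), "vs", round(cg_rhs, 2))

# Gell-Mann-Okubo, Eq. (13): 2(m_N + m_Xi) = 3 m_Lambda + m_Sigma
m_N = (m["p"] + m["n"]) / 2
m_Xi = (m["Xi0"] + m["Xi-"]) / 2
m_Sigma = (m["Sigma+"] + m["Sigma0"] + m["Sigma-"]) / 3
print("Gell-Mann-Okubo:", round(2 * (m_N + m_Xi), 1), "vs",
      round(3 * m["Lambda"] + m_Sigma, 1))
\end{verbatim}
The Coleman-Glashow relation is satisfied to a fraction of an MeV, while the two sides of the Gell-Mann-Okubo relation differ by roughly 25 MeV; since the parametrization of Eqs.~(\ref{10}) enforces both relations exactly, this is consistent with the poor direct fit quoted above and with the need for the sea, through Eq.~(\ref{8}), to absorb the difference.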
So, the wavefunction parameters in $\Omega_{B\tilde B}$ do play a significant part. \subsection{Semileptonic Decays (SLD's)} \label{iiic} The detailed expressions for $G_{V,A}(B\rightarrow B^\prime)=\langle B^\prime|J_{V,A}|B\rangle$ of the charge changing hadronic vector ($J_V$) and axial vector ($J_A$) currents using our wavefunction (Eq.~(\ref{2})) are given in Ref.~\cite{9}. Here we briefly summarize how they were calculated. The $\Delta S=0$ and $\Delta S=1$ vector currents are the total isospin raising ($I_{+}=I_{+}^{(q)}+I_{+}^{(s)}$) and $V$-spin lowering ($V_{-}=V_{-}^{(q)}+V_{-}^{(s)}$) operators~\cite{9}. The operators $I_{+}^{(q)}$ and $V_{-}^{(q)}$ act on the quarks in the core baryons and $I_{+}^{(s)}$ and $V_{-}^{(s)}$ act on the sea states in the wavefunction. However, the axial vector current has a quark part $J_{A}^{(q)}$ and a sea part $J_{A}^{(s)}$ which may, in general, have different relative strengths, so that \begin{equation} J_A(\Delta S=0,1)=J_{A}^{(q)}(\Delta S=0,1)+A_{0,1}J_{A}^{(s)}(\Delta S=0,1) \label{16} \end{equation} where the constants $A_0$ and $A_1$ specify the strength of $J_{A}^{(s)}$ relative to $J_{A}^{(q)}$ for $\Delta S=0$ and $\Delta S=1$ transitions respectively. In SQM, $J_{A}^{(q)}(\Delta S=0)=\sum_{q}I_{+}^{(q)}\sigma_{z}^{q}$ and $J_{A}^{(q)}(\Delta S=1)=\sum_{q}V_{-}^{(q)}\sigma_{z}^{q}$ so that, in analogy, we took $J_{A}^{(s)}(\Delta S=0)=2I_{+}^{(s)}S_{z}^{(s)}$ and $J_{A}^{(s)}(\Delta S=1)=2V_{-}^{(s)}S_{z}^{(s)}$ where $S_{z}^{(s)}$ is the spin operator acting only on the sea states in the wavefunction. For $\Delta S=0$ transitions, the quark part was sufficient so that $A_0=0$ for all the fits. For $\Delta S=1$ transitions, a direct sea contribution through $J_A^{(s)}$ is needed when the theoretical error on the MM's is very small. \section{Combined Fits and Results} \label{iv} In the last section, we have considered three possible models for $m_{\tilde B}$ (the core baryon masses) which could be used in Eq.~(\ref{8}). These are: \paragraph*{\bf A)} The naive or simple quark model assumption that the mass of baryon $\tilde B$ is equal to the sum of its three constituent quarks. Thus, all $m_{\tilde B}$ are given in terms of 3 $m_q$'s ($q=u,d,s$) and this corresponds to use of Eq.~(\ref{15}) with $D_3=D_8=0$. This model is attractive as it does not introduce new parameters for $m_{\tilde B}$ since the $m_q$'s enter as parameters in the MM's (see Sec.~\ref{iiia}). \paragraph*{\bf B)} The model for $m_{\tilde B}$ is given by Eq.~(\ref{15}) and introduces two new parameters $D_3$ and $D_8$ for eight $m_{\tilde B}$. For the 3 average isospin multiplet masses (see Eq.~(\ref{13'})) $m_{\tilde N}$, $m_{\tilde \Sigma}$, $m_{\tilde\Xi}$ and $m_{\tilde\Lambda}$ this model has $D_8$ as an extra parameter. \paragraph*{\bf C)} This model for $m_{\tilde B}$ is given by Eq.~(\ref{10}) and introduces 5 new parameters ($m_0$, $F_3$, $D_3$, $F_8$, $D_8$) for the $m_{\tilde B}$'s and three parameters ($m_0$, $F_8$, $D_8$) for the average isospin multiplet masses. To test the viability of these models we made extensive and systematic fits to the 8 masses and 8 MM's using Eqs.~(\ref{5})--(\ref{8}). For the fits we used theoretical errors added in quadratures to the experimental errors~\cite{12}. For the MM's of the baryons we chose initially $0.1\mu_N$ and for the masses 1~MeV. A motivation for adding these errors is that all masses and MM's are treated ``democratically" \subsection{Fits for Masses and MM's} We briefly summarize the main results \cite{13}. 
\paragraph*{\bf Fit 1.} Use of Model~A for the masses gives $\chi^2/DOF=146/9$ for 16 data for the best fit with seven parameters (4 for the sea and 3 $m_q$'s). \paragraph*{\bf Fit 2.} Use of Model~B for the masses improves the situation dramatically because of the parameters $D_3$ and $D_8$. This best fit gives $\chi^2/DOF=5.47/8$ with eight parameters (3 for the sea, 3 $m_q$'s, $D_3$, and $D_8$). \paragraph*{\bf Fit 3.} Use of Model~C gives $\chi^2/DOF=2.5/6$ with 10 parameters (2 for the sea, 3 $m_q$'s, and 5 ($m_0$, $F_3$, $F_8$, $D_3$, $D_8$) for the masses). What we learn from the above is that Model~A is not viable. As we shall see, Model~B works only with a generous theoretical error like $0.1\mu_N$ for the MM's, and later fits will use only Model~C, which seems the most viable model for the core baryon masses. \subsection{Results for Masses Using the Wavefunction Determined by Earlier Fits to MM's and SLD's} In Ref.~\cite{9}, excellent 3 and 7 parameter fits were obtained to MM's and SLD's (12 data) using theoretical errors of 0.1$\,\mu_N$ and 0.001$\,\mu_N$ respectively. Since these fits specify the $m_q$'s and the wavefunction, it is of interest to see their prediction for the masses. The predictions for the 4 average isospin multiplet masses using Eq.~(\ref{8}) and Model~C are given in Table~\ref{tabla4}. As we can see, the predictions for the masses are fairly good. This encouraged us to make fresh combined fits to 8 MM's, 4 semileptonic decays (SLD's) and 8 masses using Model~C. \subsection{Combined Fits to MM's, SLD's and Masses} The combined fits here are different from those in Ref.~\cite{9} since, in addition to MM's and SLD's, we also fit the baryon masses. For the masses and MM's, Eqs.~(\ref{5})--(\ref{8}) were used together with Model~C for the core baryon masses. The $G_V$ and $G_A$ for the SLD's depend on the sea parameters and they were calculated as described briefly in Sec.~\ref{iiic}. For explicit expressions for $G_V$ and $G_A$ see Ref.~\cite{9}. For $\Delta S=0$ transitions, the quark part was sufficient so that $A_0=0$ for all the fits. For $\Delta S=1$ transitions, a direct sea contribution through $J_{A}^{(s)}$ is needed when the theoretical error on the MM's is very small. In Table~\ref{tabla5}, $A_1=0$ for theoretical error of 0.1$\,\mu_N$ (fit in Column~3) while for theoretical errors of 0.01$\,\mu_N$ and 0.001$\,\mu_N$ (fits in Columns~4 and 5) we took $A_1=-1$. These values for $A_0$ and $A_1$ can be treated as input values since varying them does not affect the fits too much. Fits to 20 pieces of data were made using theoretical errors of 0.1$\,\mu_N$, 0.01$\,\mu_N$ and 0.001$\,\mu_N$ for the MM's. In each case, a theoretical error of 1 MeV was used for the masses, while experimental errors were used for $G_A/G_V$ for the SLD's. The results are displayed in Table~\ref{tabla5} and the values of the parameters are compared in Table~\ref{tabla6}. Of the many good fits possible, Table~\ref{tabla5} displays fits which have a reasonable $\chi^2/\hbox{DOF}$ with as few parameters as possible\footnote{Note that very much lower $\chi^2$ can be achieved for fits in Columns~4 and 5 of Table~\ref{tabla5} with more parameters for the sea.}. The number of parameters describing the sea increases from 2 for a large theoretical error of 0.1$\,\mu_N$ to 6 for the small error of 0.001$\,\mu_N$. Our fit for 0.1$\,\mu_N$ theoretical error is comparable to other phenomenological fits which use this error to fit MM's and SLD's alone.
Fits for the extremely small theoretical error, {\em e.g.} 0.001$\,\mu_N$ (close to most experimental errors), are not given by other models. In contrast, our wavefunction gives a very good fit (Column~5 of Table~\ref{tabla5}), suggesting that our phenomenological model for incorporating the sea may be in the right direction. Most of the $\chi^2$ in the 0.1$\,\mu_N$ fit is from SLD's. The actual break-up for 0.1$\,\mu_N$ is $\chi^2_{\rm MM}=2.61$, $\chi^2_{\rm Masses}=0.75$, $\chi^2_{\rm SLD}=6.90$, while for 0.001$\,\mu_N$ it is $\chi^2_{\rm MM}=0.81$, $\chi^2_{\rm Masses}=0.62$, $\chi^2_{\rm SLD}=1.23$. Comparison of the parameters for the 3 fits in Table~\ref{tabla6} shows the following: a) The values of the quark masses which enter in the MM's are approximately the same. For smaller theoretical error on the MM's, the data requires $m_u>m_d$. The $m_q$'s obtained do not satisfy Eq.~(\ref{14}), ruling out Model B for the core baryon masses. b) The 5 parameters of Model C for core baryon masses are approximately the same. It is interesting that the average core baryon octet mass $m_0\approx 1159$ MeV is close to the experimental value $\frac{1}{8}\sum_{B}m_B\approx 1151$ MeV. c) Since the $m_q$ needed for the MM's do not satisfy Eq.~(\ref{14}), it is clear that the sea is responsible (through Eq.~(\ref{8})) for a good fit to the physical baryon masses. d) One requires more parameters to describe the sea as one reduces the theoretical error on the MM's. The extra parameters are connected with the vector sea component in the wavefunction. e) $SU(3)$ flavor breaking effects in all the fits are mainly given by the scalar sea parameter $a({\bf 10})$, which contributes only to the $\Sigma^{\pm,0}$ and $\Xi^0$, $\Xi^-$ MM's and masses. For smaller theoretical error, breaking effects through the vector sea parameter $b({\bf 1})$ (which contributes only to $\mu_\Lambda$ and $m_\Lambda$) are needed. Table~\ref{tabla6} also gives the values of $(\Delta q)^p$ which are relevant for the measured [1,16] nucleon spin distribution discussed below. The interesting point to note is that in all fits $(\Delta u)^p\approx 1$, $(\Delta d)^p\approx -0.26$ to $-0.3$ with quite small $(\Delta s)^p\approx 0.01$. This is in contrast to other models [14,15], which require $(\Delta u)^p\approx 0.8$, $(\Delta d)^p\approx -0.5$, $(\Delta s)^p\approx -0.15$. Thus, our fits need only a small strange-quark content in the nucleon and are physically quite different from the other phenomenological fits in the literature. \section{Spin Distributions} \label{v} The spin distribution, $I_{1B}$, for baryon $B$ is defined as \begin{equation} I_{1B}\equiv\int_{0}^{1}g_{1B}(x)dx, \label{18} \end{equation} where the spin structure function $g_{1B}$ occurs in polarized electron-baryon scattering. In SQM, $I_{1B}$ is given by the expectation value $I_{1B}\equiv\langle B|\hat{I}_{1}^{(q)}|B\rangle$ where the quark operator $\hat{I}_{1}^{(q)}=(1/2)\sum_{q}e_{q}^2\sigma_{Z}^{q}$. This gives \begin{equation} I_{1B}^{(q)}=\frac{1}{18}\left [ 4(\Delta u)^B+(\Delta d)^B+(\Delta s)^B\right ]. \label{19} \end{equation} In our model, in addition to the quarks, there can be a direct sea contribution $I_{1B}^{(s)}\equiv\langle B|\hat{I}_{1B}^{(s)}|B\rangle$, where by analogy we take $\hat{I}_{1B}^{(s)}=e_s^2S_{Z}^{(s)}$. Thus only the charged states in the vector sea will contribute to $I_{1B}^{(s)}$.
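Before turning to the explicit sea contribution, it is useful to note the size of the quark part, Eq.~(\ref{19}); the following is only a numerical orientation using values already quoted above. The SQM values $(\Delta u)^p=4/3$, $(\Delta d)^p=-1/3$, $(\Delta s)^p=0$ give \[ I_{1p}^{(q)}=\frac{1}{18}\left(4\cdot\frac{4}{3}-\frac{1}{3}\right)=\frac{5}{18}\simeq 0.278, \] while the fitted values of Table~\ref{tabla6}, $(\Delta u)^p\approx 1$, $(\Delta d)^p\approx -0.28$ (taken as representative of the range quoted above) and $(\Delta s)^p\approx 0.01$, give \[ I_{1p}^{(q)}\approx\frac{1}{18}\left(4-0.28+0.01\right)\approx 0.21. \]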
For the nucleons, one obtains \begin{equation} I_{1p}^{(s)}=\frac{1}{3N_1^2}\left (\bar{\beta}_{2}^{\prime 2}+\frac{2}{3} \bar{\beta}_{3}^{\prime 2}+\frac{1}{3}\bar{\beta}_{4}^{\prime 2}\right ),\qquad I_{1n}^{(s)}=\frac{1}{3N_1^2}\left (\frac{2}{3} \bar{\beta}_{3}^{\prime 2}+\frac{2}{3}\bar{\beta}_{4}^{\prime 2}\right ). \label{20} \end{equation} Putting the two contributions together we have \begin{equation} I_{1B}=I_{1B}^{(q)}+B_{1}I_{1B}^{(s)}, \label{21} \end{equation} where $B_1$ determines the strength of the direct sea contribution to the valence quark contribution. Since the value of $B_1$ is not known \'a priori, so phenomenologically it may be treated as a parameter. Experiment [1,16] gives $I_{1p}=0.126\pm 0.018$ and $I_{1n}=-0.08\pm 0.06$ which are very different from the SQM predictions $I_{1p}=5/18=0.2778$ and $I_{1n}=0$. One must note that the EMC experiment gives $I_{1p}$ for $\langle Q^2\rangle =10.7\, (\rm{GeV}/\rm{c})^2$ and this could be very different for the very low $Q^2$ ($\approx 0$) result predicted by SQM or other theoretical models. This could mean that a model which gives values for $I_{1B}$ differing by 2--3 standard deviations from experiment may be quite acceptable. Using the values for $(\Delta q)^p$ in Table~\ref{tabla6} it is clear that our values for $I_{1p}^{(q)}$ ($\approx 0.2$) are much lower than the SQM value but still $4\sigma$ higher than experiment. This may be due to the large $\langle Q^2\rangle$ in the experiment. In our model, in addition to the quark part $I_{1B}^{(q)}$ one can invoke the direct sea contribution $I_{1B}^{(s)}$. The numerical values are listed in Table~\ref{tabla7} with the choice $B_{1}=-1$. As one can see, one obtains good agreement with experiment only for the fit (second column, Table~\ref{tabla7}) when extremely large theoretical error for the magnetic moments was used. \section{Summary} \label{vi} We have shown that our wavefunction, for spin 1/2 baryons, which incorporates a flavor octet sea component can simultaneously give a good fit to their magnetic moments, weak decays constants $G_A/G_V$ for both $\Delta S=0,1$ semileptonic decays as well as the eight baryon masses. In addition, these fits give viable predictions for the nucleon spin distributions. The sea was found to be both scalar (spin 0) and vector (spin 1). The $SU(3)$ flavor breaking in the wavefunction is mainly due to the scalar component. Two important features of the fits are that the valence quarks carry about 70\% of the proton spin and that the nucleons have a small strange-quark content. In conclusion, our model can account for all the static properties of the eight low-lying spin 1/2 baryons. \acknowledgments This work was partially supported by CONACyT (M\'exico).
\section{Introduction} In their paper \textit{Twists of Newforms} Hijikata, Pizer and Shemanske \cite{HPS} show how to decompose spaces of elliptic modular newforms into direct sums of character twists of other spaces of newforms. These decompositions provide important information about the behavior of newforms under character twists; for example, the exact level of the twist of a newform, when such a twist is itself a newform, and when a newform may be realized as the twist of a primitive newform. The main technique they used to prove these decompositions is Hijikata's formula \cite{hijikata} for the trace of the Hecke operator $T_k(n)$ acting on the space of cusp forms $S_k(N, \phi)$ of weight $k$, level $N$ and character $\phi$. Fix a positive integer $N$. The Hecke algebra spanned by the $T_k(n)$ with $n$ coprime to $N$ acting on $S_k(N,\phi)$ is semi-simple. Showing that two Hecke-modules $A$ and $B$ are isomorphic therefore reduces to showing that the trace of $T_k(n)$ on $A$ equals the trace of $T_k(n)$ on $B$ for all $n$ coprime to $N$. It is in this context that Hijikata's formula is applied. For instance, in Theorem 3.2 they take $A$ to be the space $S^0_k(N,\omega\phi)$ generated by newforms of level $N$ and character $\omega\phi$ and $B$ to be the space $S_k^0(N,\overline{\omega}\phi)^{\omega}$ generated by twists (by $\omega$) of newforms of level $N$ and character $\overline{\omega}\phi$. Here $\omega$ is a Dirichlet character modulo a power of a prime dividing $N$ and $\phi$ is a Dirichlet character whose conductor is coprime to the conductor of $\omega$. Hijikata, Pizer and Shemanske use Hijikata's formula for the trace of $T_k(n)$ to show that $$S^0_k(N,\omega\phi)\cong S_k^0(N,\overline{\omega}\phi)^{\omega}.$$ Hijikata's formulas for the trace of Hecke operators apply in much more general contexts than modular forms on subgroups of $SL_2(\mathbb Z)$. For instance, they apply equally well to spaces of Hilbert modular forms. In theory one could use these more general formulas in order to extend the results of \cite{HPS} to the Hilbert modular setting. However the general formulas are quite complicated, so it is of interest to find a more elementary method of extending the aforementioned results. In this paper we prove several of the results of \cite{HPS} for Hilbert modular forms without appealing to formulas for the traces of Hecke operators. In fact, we use only the basic properties of newforms which were proven for elliptic modular forms in the fundamental papers \cite{li} and \cite{atkin-li} of Li, and Atkin and Li, and later extended to Hilbert modular forms by Shemanske and Walling \cite{shemanske-walling}. Thus the results of this paper are new for Hilbert modular forms over totally real number fields other than $\mathbb Q$, and provide simplified proofs for modular forms over $\mathbb Q$ (the elliptic modular case). A sample result is the following (see Section \ref{section:prelims} for notation and terminology): \begin{theorem} Let $\cn$ be an integral ideal which we decompose as $\cn=\cp \cn_0$ for $\cp$ a power of a prime ideal $\p$ coprime to $\cn_0$. Set $\nu=ord_{\p}\cp$. Let $\phi$ be a numerical character modulo $\cn$ and $\Phi$ be a Hecke character extending $\phi\phi_{\infty}$ which satisfies $\frac{\nu}{2}<e(\Phi_{\cp})=ord_{\p}(\mathfrak{f}_{\Phi_{\cp}})<\nu$. 
Then $$\mathscr{S}_k^+(\cn,\Phi)=\bigoplus_{e(\Psi)=\nu-e(\Phi_{\cp})} \mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi )^{\overline{\Psi}},$$ where the sum $\bigoplus_{e(\Psi)=\nu-e(\Phi_{\cp})} $ is taken over all $\p$-primary Hecke characters $\Psi$ with conductor $\p^{\nu-e(\Phi_{\cp})}$ and infinite part $\Psi_{\infty}(a)=\mbox{sgn}(a)^l$ for $l\in\mathbb Z^n$ and $a\in K_{\infty}^\times$. \end{theorem} \section{Notation and Preliminaries}\label{section:prelims} For the most part we follow the notation of \cite{shemanske-walling, shimura-ann,shimura-duke}. However, to make this paper somewhat self-contained, we shall briefly review the basic definitions of the functions and operators which we shall study. Let $K$ be a totally real number field of degree $n$ over $\mathbb{Q}$ with ring of integers $\mathcal{O}$, group of units $\mathcal{O}^\times$ and totally positive units $\mathcal{O}^\times_+$. Fix an embedding $a\mapsto (a^{(1)},\cdots,a^{(n)})$ of $K$ into $\mathbb{R}^n$. Let $\diff$ be the different of $K$. If $\q$ is a finite prime of $K$, we denote by $K_{\q}$ the completion of $K$ at $\q$, $\mathcal{O}_{\q}$ the valuation ring of $K_{\q}$, and $\pi_{\q}$ a local uniformizer. We denote by $K_A$ the ring of $K$-adeles and by $K_A^\times$ the group of $K$-ideles. As usual we view $K$ as a subgroup of $K_A$ via the diagonal embedding. If $\tilde{\alpha}\in K_A^\times$, we let $\tilde{\alpha}_{\infty}$ denote the archimedean part of $\tilde{\alpha}$ and $\tilde{\alpha}_0$ the finite part of $\tilde{\alpha}$. If $\mathcal{J}$ is an integral ideal we let $\tilde{\alpha}_{\mathcal{J}}$ denote the $\mathcal{J}$-part of $\tilde{\alpha}$. For an integral ideal $\cn$ we define a numerical character $\phi$ modulo $\cn$ to be a character $\phi: (\mathcal{O}/\cn)^\times \rightarrow \mathbb{C}^\times$, and a Hecke character to be a continuous character on the idele class group: $\Phi: K_A^\times/ K^\times \rightarrow \mathbb{C}^\times$. We denote the induced character on $K_A^\times$ by $\Phi$ as well. Every Hecke character is of the form $\Phi(\tilde{\alpha})=\prod_{\nu} \Phi_{\nu}(\alpha_{\nu})$ where each $\Phi_\nu$ is a character $\Phi_{\nu}: K_{\nu}^\times \longrightarrow \mathbb{C}^\times$. The conductor, $\mbox{cond}(\Phi)$, of $\Phi$ is defined to be the modulus whose finite part is $\mathfrak{f}_{\Phi}$ (see \cite{heilbronn}) and whose infinite part is the formal product of those archimedean primes $\nu$ for which $\Phi_{\nu}$ is nontrivial. In the case that $\mathfrak{f}_{\Phi}$ is a power of a single prime $\q$, we define the exponential conductor $e(\Phi)$ to be the integer such that $\mathfrak{f}_{\Phi}=\q^{e(\Phi)}$. We adopt the convention that $\phi$ and $\psi$ will always denote numerical characters and $\Phi$ and $\Psi$ will denote Hecke characters. Let $GL_2^+(K)$ denote the group of invertible matrices with totally positive determinant and $\ch$ the complex upper half-plane. 
Then $GL_2^+(K)$ acts on $\ch^n$ via fractional linear transformations as follows: $$\left( \begin{array}{ c c } a& b \\ c&d \end{array} \right) \mapsto \huge\left[ \tau\rightarrow \left( \cdots, \frac{a^{(\nu)}\tau_\nu+b^{(\nu)}}{c^{(\nu)}\tau_\nu+d^{(\nu)}},\cdots \right) \huge\right]$$ Let $k=(k_1,...,k_n)\in \mathbb{Z}_+^n, \tau\in \ch^n$ and set $$(c\tau+d)^k=\prod_{\nu=1}^n (c^{(\nu)}\tau_{\nu}+d^{(\nu)})^{k_{\nu}}$$ and for $A\in GL_2^+(K)$ $$\dete(A)^k = \prod_{\nu=1}^n (a^{(\nu)}d^{(\nu)}-b^{(\nu)}c^{(\nu)})^{k_\nu}.$$ For $N\in \mathbb{Z}_+$, let $\Gamma_N$ denote the kernel of the reduction map $\sltwo(\mathcal{O})\rightarrow \sltwo(\mathcal{O}/N\mathcal{O}).$ Following Shimura \cite{shimura-ann, shimura-duke}, we define $M_k(\Gamma_N)$ to be the complex vector space of functions $f$ which are holomorphic on $\ch^n$ and at the cusps of $\Gamma_N$ such that $$f(A\tau)=\dete(A)^{-\frac{k}{2}}(c\tau+d)^kf(\tau)$$ for all $A=\left( \begin{array}{ c c } a& b \\ c&d \end{array} \right)\in\Gamma_N$. Let $M_k=\bigcup_{N=1}^{\infty} M_k(\Gamma_N)$. For a fractional ideal $\mathcal{I}$ and integral ideal $\cn$ we set $$\Gamma_0(\cn,\mathcal{I})=\{ A\in \left( \begin{array}{ c c } \mathcal{O}& \mathcal{I}^{-1}\diffinv \\ \cn\mathcal{I}\diff & \mathcal{O} \end{array} \right) : \dete A \in \mathcal{O}^\times_+ \}.$$ Let $\theta : \mathcal{O}^\times_+\rightarrow \mathbb{C}^\times$ be a character of finite order and note that there exists an element $m\in\mathbb{R}^n$ such that $\theta(a)=a^{im}$ for all totally positive $a$. While such an $m$ is not unique, we shall fix one such $m$ for the remainder of this paper. Let $\phi$ be a numerical character modulo $\cn$ and define $M_k(\Gamma_0(\cn,\mathcal{I}),\phi,\theta)$ to be the set of $f\in M_k$ which satisfy $$f(A\tau)=\dete(A)^{-\frac{k}{2}}\phi(a)\theta(\dete A) (c\tau+d)^k f(\tau)$$ for all $A=\left( \begin{array}{ c c } a& b \\ c&d \end{array} \right)\in \Gamma_0(\cn,\mathcal{I})$. Fix a set of strict ideal class representatives $\mathcal{I}_1,...,\mathcal{I}_h$ of $K$, set $\Gamma_{\lambda}=\Gamma_0(\cn,\mathcal{I}_{\lambda})$, and put $$\mathfrak{M}_k(\cn,\phi,\theta)=\prod_{\lambda=1}^h M_k(\Gamma_{\lambda},\phi,\theta).$$ We are interested in studying $h$-tuples $(f_1,...,f_h)\in\mathfrak{M}_k(\cn,\phi,\theta)$. In order to deal with class number $h>1$ we follow Shimura \cite{shimura-ann, shimura-duke} and describe Hilbert modular forms as functions on an idele group. Let $G_A=GL_2(K_A)$ and view $G_K=GL_2(K)$ as a subgroup of $G_A$ via the diagonal embedding. Denote by $G_{\infty} = GL_2(\mathbb{R})^n$ the archimedean part of $G_A$ and by $G_{\infty +}$ the subgroup of elements having totally positive determinant. For an integral ideal $\cn$ of $\mathcal{O}$ and a prime $\p$, let $$Y_{\p}(\cn)=\{ A=\left( \begin{array}{ c c } a& b \\ c&d \end{array} \right) \in \left( \begin{array}{ c c } \mathcal{O}_{\p}& \diffinv\mathcal{O}_{\p} \\ \cn\diff\mathcal{O}_{\p} & \mathcal{O}_{\p} \end{array} \right) : \dete A\in K_{\p}^\times, (a\mathcal{O}_{\p},\cn\mathcal{O}_{\p})=1 \},$$ $$W_{\p}(\cn)=\{ x\in Y_{\p}(\cn) : \dete x\in \mathcal{O}^\times_{\p} \}$$ and put $$Y=Y(\cn)=G_A\cap \left(G_{\infty +}\times \prod_{\p} Y_{\p}(\cn)\right),$$ $$W=W(\cn)=G_{\infty +}\times \prod_{\p} W_{\p}(\cn).$$ Given a numerical character $\phi$ modulo $\cn$ define a homomorphism $\phi_Y: Y\rightarrow \mathbb{C}^\times$ by setting $\phi_Y(\left( \begin{array}{ c c } \tilde{a}& * \\ *&* \end{array} \right))=\phi(\tilde{a}_{\cn}\mbox{ mod }\cn )$. 
Given a fractional ideal $\mathcal I$ of $K$ define $\tilde{\mathcal{I}}=(\mathcal{I}_{\nu})_{\nu}$ to be a fixed idele such that $\mathcal{I}_{\infty}=1$ and $\tilde{\mathcal{I}}\mathcal{O}=\mathcal{I}$. For $\lambda=1,...,h,$ set $x_{\lambda}=\left( \begin{array}{ c c } 1& 0 \\ 0&\tilde{I}_{\lambda} \end{array} \right)\in G_A$. By the Strong Approximation theorem we have $$G_A=\bigcup_{\lambda=1}^h G_K x_{\lambda} W=\bigcup_{\lambda=1}^h G_K x_{\lambda}^{-\iota} W$$ where $\iota$ denotes the canonical involution on two-by-two matrices. For an $h$-tuple $(f_1,...,f_h)\in\mathfrak{M}_k(\cn,\phi,\theta)$ we define a function $\f: G_A\rightarrow \mathbb{C}$ by $$\f(\alpha x_{\lambda}^{-\iota}w)=\phi_Y(w^{\iota})\dete(w_{\infty})^{im}(f_{\lambda}\mid w_{\infty})(\textbf{i})$$ for $\alpha\in G_K$, $w\in W(\cn)$ and $\textbf{i}=(i,...,i)$ (with $i=\sqrt{-1}$). Here $$f_{\lambda}\mid \left( \begin{array}{ c c } a& b \\ c&d \end{array} \right)(\tau)=(ad-bc)^{\frac{k}{2}}(c\tau+d)^{-k} f_{\lambda}\left(\frac{a\tau+b}{c\tau+d}\right).$$ As in \cite{shimura-ann, shimura-duke}, we identify $\mathfrak{M}_k(\cn,\phi,\theta)$ with the set of functions $\f: G_A\rightarrow \mathbb{C}$ satisfying \begin{enumerate} \item $\f(\alpha x w)=\phi_Y(w^{\iota})\f(x)$ for all $\alpha\in G_K, x\in G_A, w\in W(\cn), w_{\infty}=1$ \item For each $\lambda$ there exists an element $f_{\lambda}\in M_k$ such that $$\f(x_{\lambda}^{-\iota}y)=\dete(y)^{im}(f_{\lambda}\mid y)(\textbf{i})$$ for all $y\in G_{\infty +}$. \end{enumerate} Let $\phi_{\infty}: K_A^\times\rightarrow \mathbb{C}^\times$ be defined by $\phi_{\infty}(\tilde{a})=\mbox{sgn}(\tilde{a}_{\infty})^k|\tilde{a}_{\infty}|^{2im}$, where $m$ was defined in the definition of $\theta$. We say that a Hecke character $\Phi$ extends $\phi\phi_{\infty}$ if $\Phi(\tilde{a})=\phi(\tilde{a}_{\cn}\mbox{ mod }\cn)\phi_{\infty}(\tilde{a})$ for all $\tilde{a}\in K_{\infty}^\times \times \prod_{\p} \mathcal{O}_{\p}^\times$. If $\mathfrak{P}_{\infty}$ denotes the $K$-modulus consisting of the product of all the infinite primes of $K$, then any Hecke character $\Phi$ extending $\phi\phi_{\infty}$ has conductor dividing $\cn\mathfrak{P}_{\infty}$. Henceforth we will use the word conductor to refer to the finite part of the conductor. If $\phi$ is a numerical character modulo $\cp\cn_0$ where $\cp=\p^a$ is a power of a prime $\p$ and $(\p,\cn_0)=1$, then by the Chinese Remainder Theorem we have a decomposition $\phi=\phi_{\cp}\phi_{\cn_0}$ where $\phi_{\cp}$ is a numerical character modulo $\cp$ and $\phi_{\cn_0}$ is a numerical character modulo $\cn_0$. If $\Phi_{\cp}$ is a Hecke character extending $\phi_{\cp}$ (i.e. trivial infinite part) and $\Phi_{\cn_0}$ is a Hecke character extending $\phi_{\cn_0}\phi_{\infty}$ then it is clear that $\Phi=\Phi_{\cp}\Phi_{\cn_0}$. Throughout this paper we shall adopt this convention and decompose Hecke characters $\Phi$ extending numerical characters modulo $\cp\cn_0$ as $\Phi=\Phi_{\cp}\Phi_{\cn_0}$ where $\Phi_{\cp}$ has trivial infinite part. Given a Hecke character $\Phi$ extending $\phi\phi_{\infty}$ we define an ideal character $\Phi^*$ modulo $\cn\mathfrak{P}_{\infty}$ by \begin{displaymath} \left\{ \begin{array}{ll} \Phi^*(\p)=\Phi(\tilde{\pi}_{\p}) & \textrm{for $\p\nmid \cn$ and $\tilde{\pi}\mathcal{O}=\p,$}\\ \Phi^*(\mathfrak{a})=0 & \textrm{if $(\mathfrak{a},\cn)\neq 1$ }\\ \end{array} \right. 
\end{displaymath} Observe that for any $\tilde{a}\in K_A^\times$ with $(\tilde{a}\mathcal{O},\cn)=1$, $\Phi(\tilde{a})=\Phi^*(\tilde{a}\mathcal{O})\phi(\tilde{a}_{\cn})\phi_{\infty}(\tilde{a})$. For $\tilde{s}\in K_A^\times$, define $\f^{\tilde{s}}(x)=\f(\tilde{s}x)$. The map $\tilde{s}\longrightarrow \left( \f\mapsto \f^{\tilde{s}}\right)$ defines a unitary representation of $K_A^\times$ in $\mathfrak{M}_k(\cn,\phi,\theta)$. By Schur's Lemma the irreducible subrepresentations are all one-dimensional (since $K_A^\times$ is abelian). For a character $\Phi$ on $K_A^\times$, let $\mathscr{M}_k(\cn,\Phi)$ denote the subspace of $\mathfrak{M}_k(\cn,\phi,\theta)$ consisting of all $\f$ for which $\f^{\tilde{s}}=\Phi(\tilde{s})\f$ and let $\mathscr{S}_k(\cn,\Phi)\subset \mathscr{M}_k(\cn,\Phi)$ denote the subspace of cusp forms. If $s\in K^\times$ then $\f^{s}=\f$. It follows that $\mathscr{M}_k(\cn,\Phi)$ is nonempty only when $\Phi$ is a Hecke character. If $\f=(f_1,...,f_h)\in \mathfrak{M}_k(\cn,\phi,\theta)$, then each $f_{\lambda}$ has a Fourier expansion $$f_{\lambda}(\tau)=a_{\lambda}(0)+\sum_{0\ll \xi\in\mathcal{I}_{\lambda}} a_{\lambda}(\xi) e^{2\pi i \mbox{tr} (\xi\tau)}.$$ If $\mathfrak{m}$ is an integral ideal then following Shimura we define the $\mathfrak{m}$-th `Fourier' coefficient of $\f$ by \begin{displaymath} C(\mathfrak{m},\f)=\left\{ \begin{array}{ll} N(\mathfrak{m})^{\frac{k_0}{2}}a_{\lambda}(\xi)\xi^{-\frac{k}{2}-im}& \textrm{if $\mathfrak{m}=\xi\mathcal{I}_{\lambda}^{-1}\subset\mathcal{O}$}\\ 0 & \textrm{otherwise}\\ \end{array} \right. \end{displaymath} where $k_0=\mbox{max}\{k_1,...,k_n\}$. Given $\f\in\mathfrak{M}_k(\cn,\phi,\theta)$ and $y\in G_A$ define a slash operator by setting $(\f\mid y)(x)=\f(xy^{\iota})$. For an integral ideal $\mathfrak{r}$ define the shift operator $B_{\mathfrak{r}}$ by $$\f\mid B_{\mathfrak{r}}=N(\mathfrak{r})^{-\frac{k_0}{2}} \f\mid \left( \begin{array}{ c c } 1& 0 \\ 0&\tilde{\mathfrak{r}}^{-1} \end{array} \right).$$ The shift operator maps $\mathscr{M}_k(\cn,\Phi)$ to $\mathscr{M}_k(\mathfrak{r}\cn,\Phi)$ and takes cusp forms to cusp forms. Further, $C(\mathfrak{m},\f\mid B_{\mathfrak{r}})=C(\mathfrak{m}\mathfrak{r}^{-1},\f)$. It is clear that $\f \mid B_{\mathfrak{r}_1}\mid B_{\mathfrak{r}_2}=\f\mid B_{\mathfrak{r}_1\mathfrak{r}_2}$. For an integral ideal $\mathfrak{r}$ the Hecke operator $T_{\mathfrak{r}}=T_{\mathfrak{r}}^{\cn}$ maps $\mathscr{M}_k(\cn,\Phi)$ to itself regardless of whether or not $(\mathfrak{r},\cn)=1$. This action is given on Fourier coefficients by $$C(\mathfrak{m},\f\mid T_{\mathfrak{r}})=\sum_{\mathfrak{m}+\mathfrak{r}\subset\mathfrak{a}} \Phi^*(\mathfrak{a})N(\mathfrak{a})^{k_0-1}C(\mathfrak{a}^{-2}\mathfrak{m}\mathfrak{r},\f).$$ Like the shift operator, $T_{\mathfrak{r}}$ takes cusp forms to cusp forms. Also note that if $(\mathfrak{a},\mathfrak{r})=1$ then $B_{\mathfrak{a}}T_{\mathfrak{r}}=T_{\mathfrak{r}}B_{\mathfrak{a}}$. Given $\f\in \mathscr{S}_k(\cn,\Phi)$ we define the annihilator operator $A_{\p}$ by $$\f\mid A_{\p} = \f-\f\mid T_{\p}\mid B_{\p}.$$ Let $\mathscr{S}_k^-(\cn,\Phi)$ be the subspace of $\mathscr{S}_k(\cn,\Phi)$ generated by all $\g\mid B_{\mathcal{Q}}$ where $\g\in \mathscr{S}_k(\cn^\prime,\Phi)$ for some proper divisor $\cn^\prime$ of $\cn$ with $\mathcal{Q}\cn^\prime\mid \cn$. This space is invariant under the action of the Hecke operators $T_\mathfrak{r}$ with $(\mathfrak{r},\cn)=1$. 
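For the reader's convenience we record a routine instance of the coefficient formula for $T_{\mathfrak{r}}$ above; the computation is only illustrative. Take $\mathfrak{r}=\q$ a prime with $\q\nmid\cn$. For $\mathfrak{m}=\mathcal{O}$ the only ideal $\mathfrak{a}$ containing $\mathcal{O}+\q=\mathcal{O}$ is $\mathcal{O}$ itself, so $$C(\mathcal{O},\f\mid T_{\q})=C(\q,\f),$$ while for $\mathfrak{m}=\q$ the sum runs over $\mathfrak{a}\in\{\mathcal{O},\q\}$ and gives $$C(\q,\f\mid T_{\q})=C(\q^2,\f)+\Phi^*(\q)N(\q)^{k_0-1}C(\mathcal{O},\f).$$ In particular, if $\f$ is a normalized eigenform of $T_{\q}$ with eigenvalue $\lambda_{\q}$, the two identities combine to give the familiar relation $C(\q^2,\f)=\lambda_{\q}^2-\Phi^*(\q)N(\q)^{k_0-1}$.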
Shimura defines ((2.28) of \cite{shimura-duke}) a Petersson inner product $\langle \f,\g\rangle$ for $\f,\g\in\mathscr{S}_k(\cn,\Phi)$. With respect to this inner product the Hecke operators satisfy $$\Phi^*(\mathfrak{m})\langle\f\mid T_{\mathfrak{m}},\g\rangle=\langle \f,\g\mid T_{\mathfrak{m}}\rangle$$ for integral ideals $\mathfrak{m}$ coprime to $\cn$. Let $\mathscr{S}_k^+(\cn,\Phi)$ denote the orthogonal complement of $\mathscr{S}_k^-(\cn,\Phi)$ in $\mathscr{S}_k(\cn,\Phi)$. It follows from our discussion above that $\mathscr{S}_k^+(\cn,\Phi)$ is invariant under the Hecke operators $T_{\mathfrak{r}}$ with $(\mathfrak{r},\cn)=1$. \begin{definition}A newform $\f$ in $\mathscr{S}_k(\cn,\Phi)$ is a form in $\mathscr{S}_k^+(\cn,\Phi)$ which is a simultaneous eigenform for all Hecke operators $T_{\q}$ with $\q$ a prime not dividing $\cn$. We say that $\f$ is normalized if $C(\mathcal{O},\f)=1$.\end{definition} As in the classical case, if $\f\in \mathscr{S}_k(\cn,\Phi)$ is a newform with Hecke eigenvalues $\{\lambda_{\p} : \p \mbox{ is prime} \},$ then $C(\p,\f)=\lambda_{\p}C(\mathcal{O},\f)$ for all primes $\p\nmid\cn$. Since $\{ T_{\q} : \q\nmid \cn\}$ is a commuting family of normal operators, $\mathscr{S}_k^+(\cn,\Phi)$ has an orthogonal basis consisting of newforms. If $\g\in \mathscr{S}_k^-(\cn,\Phi)$ is a simultaneous eigenform for all $T_{\q}$ with $\q\nmid \cn$ then there exists a newform $\textbf{h}\in \mathscr{S}_k^+(\cn^\prime,\Phi)$ with $\cn^\prime\mid \cn$ having the same eigenvalues as $\g$ for all such $T_{\q}$. Finally, if $\f, \g\in \mathscr{S}_k(\cn,\Phi)$ are both simultaneous eigenforms for all Hecke operators $T_{\q}$ with $\q$ a prime not dividing $\cn$, having the same Hecke eigenvalues, then we say that $\f$ is equivalent to $\g$ and write $\f \sim \g$. If $\f$ is a newform and $\f\sim \g$, then there exists $c\in \mathbb{C}^\times$ such that $\f=c\g$. This follows from Theorem 3.5 of \cite{shemanske-walling}. \section{Twists of Newforms} Throughout this section $\p$ will denote a fixed prime ideal of $\mathcal{O}$. Fix an integral ideal $\cn$ and write $\cn=\cp{\mathcal{N}_0}$ where $\cp$ is the $\p$-primary part of $\cn$ and $(\cp,{\mathcal{N}_0})=1$. Fix a space $\mathscr{S}_k(\cn,\Phi)\subset \mathfrak{S}_k(\cn,\phi,\theta)$, where $\Phi$ is a Hecke character extending $\phi\phi_{\infty}$. \begin{definition} If $\f\in\mathscr{S}_k(\cn,\Phi)$ is a normalized newform and $\Psi$ is a Hecke character then we define the twist of $\f$ by $\Psi$, denoted $\f_\Psi$, by $$\textbf{f}_\Psi(x)=\tau(\overline{\Psi})^{-1}\Psi(\dete x) \sum_{r\in \mathfrak{f}_{\Psi}^{-1}\mathfrak{d}^{-1}/\mathfrak{d}^{-1}} \overline{\Psi}_{\infty}(r)\overline{\Psi}^*(r\mathfrak{f}_{\Psi}\diff)\f\mid \left( \begin{smallmatrix} 1& r \\ 0& 1 \end{smallmatrix} \right)_0 (x), $$ where $\tau(\overline{\Psi})$ is the Gauss sum associated to $\overline{\Psi}$ defined in (9.31) of \cite{shimura-ann} and the subscript $0$ denotes the projection onto the nonarchimedean part. \end{definition} \begin{proposition}\label{proposition:crudebound} Let notation be as above and set $\mathcal{L}=\mbox{lcm}\{\cn,\cond_{\Phi}\mathfrak{f}_{\Psi},\mathfrak{f}_{\Psi}^2\}$. If $\f\in\mathscr{S}_k(\cn,\Phi)$ is a normalized newform then $\f_{\Psi}\in \mathscr{S}_k(\mathcal{L},\Psi^2\Phi)$ and $C(\mathfrak{m},\f_{\Psi})=\Psi^*(\mathfrak{m})C(\mathfrak{m},\f)$ for all integral ideals $\mathfrak{m}$. \end{proposition} \begin{proof} This is Proposition 4.5 of \cite{shimura-duke}. 
\end{proof} The following proposition is trivial to verify using the action of the Hecke operators on Fourier coefficients. \begin{proposition}\label{proposition:eigentwist} Let notation be as above and $\mathfrak q$ be a prime with $\mathfrak{q}\nmid \mathfrak{f}_{\Psi}$. For $\f\in \mathscr{S}_k(\mathcal{N},\Phi)$ we have $\f_{\Psi}\mid T_{\mathfrak{q}} = \Psi^*(\mathfrak q) (\f\mid T_{\mathfrak q})_{\Psi}$. \end{proposition} Although Proposition \ref{proposition:crudebound} gives an upper bound for the exact level of $\f_{\Psi}$, one can obtain better bounds in certain special cases. Of particular interest to us is the case in which $\Psi=\overline{\Phi}_{\cp}$. The following proposition gives an improved bound on the level of $\f_{\Psi}$ in this special case and generalizes Proposition 3.6 of \cite{atkin-li}. \begin{proposition}\label{proposition:levelbound} Let $\mathfrak{f}$ be the conductor of $\Phi_{\mathcal{P}}$. Set \begin{displaymath} \cl = \left\{ \begin{array}{ll} \mathcal{N} & \textrm{if $ord_{\mathfrak{p}}(\mathfrak{f})<ord_{\mathfrak{p}}(\mathcal{P})$}\\ \mathfrak{p}\mathcal{N} & \textrm{if $ord_{\mathfrak{p}}(\mathfrak{f})=ord_{\mathfrak{p}}(\mathcal{P})$}\\ \end{array} \right. \end{displaymath} If $\f\in \mathscr{S}_k(\cn,\Phi)$ then $\textbf{f}_{\overline{\Phi}_{\mathcal{P}}}\in \mathscr{S}_k(\cl,\overline{\Phi}_{\mathcal{P}}\Phi_{{\mathcal{N}_0}})$. \end{proposition} \begin{proof} Let $\alpha\in G_K, x\in G_A$ and $w\in W(\cl)$ with $w_\infty=1$. We will show that $$\textbf{f}_{\overline{\Phi}_{\mathcal{P}}}(\alpha x w)=(\phi_{{\mathcal{N}_0}}\overline{\phi}_{\cp})_Y(w^\iota) \textbf{f}_{\overline{\Phi}_{\mathcal{P}}}(x).$$ Write $w=\left( \begin{array}{c c} \tilde{a} & \td^{-1}\tilde{b} \\ \tilde{c}\tl\td& \tilde{d} \end{array} \right)$. Let $r \in \mathfrak{f}^{-1}\mathfrak{d}^{-1}/\mathfrak{d}^{-1}$ and observe that by the Strong Approximation theorem there exists an element $r^\prime\in K$ such that \begin{enumerate} \item $ord_\mathfrak{q}(r^\prime)\geq 0$ for all primes $\mathfrak{q}$ such that $ord_{\q}(\cond\diff)=0$ \item $ord_\mathfrak{q}(r^\prime)\geq -ord_{\mathfrak{q}}(\mathfrak d)$ for all primes $\mathfrak q$ such that $q\mid \mathfrak d$ and $\mathfrak{q}\neq \mathfrak{p}$ \item $a_{\mathfrak{p}}r-r^\prime(d_{\mathfrak{p}}-c_{\mathfrak{p}}\cl_{\p} \mathfrak{d}_{\p} r) \in \diff^{-1}\mathcal{O}_{\p}$ \end{enumerate} Note that such an $r^\prime$ lies in $\mathfrak{f}^{-1}\mathfrak{d}^{-1}$. We claim that such a $r^\prime$ is uniquely determined in $\mathfrak{f}^{-1}\mathfrak{d}^{-1}/\mathfrak{d}^{-1}$. More precisely, suppose that $r_0, r_1 \in \mathfrak{f}^{-1}\mathfrak{d}^{-1}/\mathfrak{d}^{-1}$ give rise to $r^0,r^1\in\mathfrak{f}^{-1}\mathfrak{d}^{-1}/\mathfrak{d}^{-1}$. We will show that if $r^0+\diffinv=r^1+\diffinv$ then $r_0+\diffinv=r_1+\diffinv$. To do this we will suppose that $(r^0-r^1)\in\diffinv$ and show that $(r_0-r_1)\in\diffinv\mathcal{O}_{\mathfrak{q}}$ for all finite primes $\q$. It will then follow from the local-global correspondence for lattices that $(r_0-r_1)\in\diffinv$. We have two cases to consider. Case 1 - $\q\neq\p$: Both $r_0$ and $r_1$ lie in $\cond^{-1}\diffinv$ and hence in $\cond^{-1}\diffinv\mathcal{O}_{\q}=\mathcal{O}_{{\q}}^\times\diffinv\mathcal{O}_{\q}\subset \diffinv\mathcal{O}_{\q}$. It follows that $(r_0-r_1)\in\diffinv\mathcal{O}_{\q}$. 
Case 2 - $\q=\p$: By condition (3) we have $$r_0a_{\p}-r^0(d_{\p}-c_{\p}\mathcal{L}_{\p} \mathfrak{d}_{\p}r_0)\in\mathfrak{d}^{-1}\mathcal{O}_{\p}$$ and $$r_1a_{\p}-r^1(d_{\p}-c_{\p}\mathcal{L}_{\p} \mathfrak{d}_{\p}r_1)\in\mathfrak{d}^{-1}\mathcal{O}_{\p}.$$ Putting these together yields $$r_0(a_{\p})-r^0(d_{\p}-c_{\p}\mathcal{L}_{\p} \mathfrak{d}_{\p}r_0)-r_1(a_{\p})+r^1(d_{\p}-c_{\p}\mathcal{L}_{\p} \mathfrak{d}_{\p}r_1)\in\mathfrak{d}^{-1}\mathcal{O}_{\p}.$$ Observe that by definition of $\cl$ and $W(\cl)$, each of the terms in parentheses lies in $\mathcal{O}_{\p}^\times$. We may therefore ease notation by writing $u_i$ for the parenthesized unit: \begin{equation}\label{equation:eq1} r_0 u_1 - r^0 u_2 - r_1 u_3 + r^1 u_4\in\mathfrak{d}^{-1}\mathcal{O}_{\p}. \end{equation} Also observe that $$u_1+\diffinv\mathcal{O}_{\p}=a_{\p}+\diffinv\mathcal{O}_{\p}=u_3+\diffinv\mathcal{O}_{\p}$$ and $$u_2+\diffinv\mathcal{O}_{\p}=d_{\p}+\diffinv\mathcal{O}_{\p}=u_4+\diffinv\mathcal{O}_{\p}.$$ It follows that $$r_0 u_1+\diffinv\mathcal{O}_{\p}=r_0 a_{\p}+\diffinv\mathcal{O}_{\p},$$ $$r_1 u_3+\diffinv\mathcal{O}_{\p}=r_1 a_{\p}+\diffinv\mathcal{O}_{\p},$$ $$r^0 u_2+\diffinv\mathcal{O}_{\p}=r^0 d_{\p}+\diffinv\mathcal{O}_{\p},$$ and $$r^1 u_4+\diffinv\mathcal{O}_{\p}=r^1 d_{\p}+\diffinv\mathcal{O}_{\p}.$$ Suppose that $(r^0-r^1)\in\diffinv\subset\diffinv\mathcal{O}_{\p}$. Then $d_{\p}(r^0-r^1)\in\diffinv\mathcal{O}_{\p}$ as well. We conclude, by Equation \ref{equation:eq1}, that $(r_1 u_3-r_0 u_1)\in\diffinv\mathcal{O}_{\p}$. This means that $a_{\p}(r_1-r_0)\in\diffinv\mathcal{O}_{\p}$, hence $(r_1-r_0)\in\diffinv\mathcal{O}_{\p}$. We have shown that $(r^0-r^1)\in\diffinv$ implies that $(r_0-r_1)\in\diffinv\mathcal{O}_{\q}$ for all finite primes $\q$, hence $(r_0-r_1)\in\diffinv$. We now show that $$\textbf{f}_{\overline{\Phi}_{\mathcal{P}}}(\alpha x w)=(\phi_{{\mathcal{N}_0}}\overline{\phi}_{\cp})_Y(w^\iota) \textbf{f}_{\overline{\Phi}_{\mathcal{P}}}(x).$$ By definition, \begin{equation}\label{equation:line1} \textbf{f}_{\overline{\Phi}_{\mathcal{P}}}(\alpha x w)=\tau(\Phi_{\mathcal{P}})^{-1}\overline{\Phi}_{\mathcal{P}}(\dete(\alpha x w))\displaystyle\sum _{r\in \cond^{-1}\diffinv/\diffinv} \Phi_{\mathcal{P}}^*(r\cond\diff)\f\mid \left( \begin{smallmatrix} 1& r \\ 0& 1 \end{smallmatrix} \right)_0 (\alpha x w) \end{equation} \begin{equation}\label{equation:line2} =\tau(\Phi_{\mathcal{P}})^{-1}\overline{\Phi}_{\mathcal{P}}(\dete(x))\overline{\Phi}_{\mathcal{P}}(\dete(w))\displaystyle\sum _{r\in \cond^{-1}\diffinv/\diffinv}\Phi_{\mathcal{P}}^*(r\cond\diff)\f(\alpha x w \left( \begin{smallmatrix} 1& -r \\ 0& 1 \end{smallmatrix} \right)_0 )\end{equation} Let $r^\prime\in \cond^{-1}\diffinv / \diffinv$ correspond to $r$ (i.e. $r^\prime$ satisfies the three conditions listed in the first paragraph of this proof) and $w^\prime$ be a solution to the matrix equation \begin{equation}\label{equation:matrixeq} w \left( \begin{array}{ c c } 1& -r \\ 0& 1 \end{array} \right)_0= \left( \begin{array}{ c c } 1& -r^\prime \\ 0& 1 \end{array} \right)_0w^\prime. \end{equation} We note that \begin{equation}\label{equation:matrixwprime} w^\prime=\left( \begin{array}{ c c } \tilde{a} + \tilde{c} \tl \td r^\prime& \td^{-1}\tilde{b}+\tilde{d}r^\prime-\tilde{a}r-\tilde{c}\tl\td r r^\prime \\ \tilde{c}\tl\td& \tilde{d}-\tilde{c}\tl\td r \end{array} \right) \end{equation} and that the three conditions defining $r^\prime$ imply that $w^\prime\in W(\cn)$. 
Substituting equation \ref{equation:matrixeq} into equation \ref{equation:line2} yields \begin{equation}\label{equation:line3} \tau(\Phi_{\mathcal{P}})^{-1}\overline{\Phi}_{\mathcal{P}}(\dete(x))\overline{\Phi}_{\mathcal{P}}(\dete(w))\displaystyle\sum _{r\in \cond^{-1}\diffinv/\diffinv} \Phi_{\mathcal{P}}^*(r\cond\diff)\f(\alpha x \left( \begin{smallmatrix} 1& -r^\prime \\ 0& 1 \end{smallmatrix} \right)_0 w^\prime ) \end{equation} Because $\f\in \mathscr{S}_k(\mathcal{N},\Phi)$, we may rewrite this as \begin{equation}\label{equation:line4} =\tau(\Phi_{\mathcal{P}})^{-1}\overline{\Phi}_{\mathcal{P}}(\dete(x))\overline{\Phi}_{\mathcal{P}}(\dete(w))\displaystyle\sum _{r\in \cond^{-1}\diffinv/\diffinv} \Phi_{\mathcal{P}}^*(r\cond\diff){\phi}_Y ( ({w^\prime})^\iota )\f\mid \left( \begin{smallmatrix} 1& r^\prime \\ 0& 1 \end{smallmatrix} \right)_0 (x) \end{equation} \begin{equation} =\tau(\Phi_{\cp})^{-1}\overline{\Phi}_{\cp}(\dete(x))\overline{\Phi}_{\mathcal{P}}(\dete(w))\displaystyle\sum _{r\in \cond^{-1}\diffinv/\diffinv}\Phi_{\cp}^*(r\cond\diff)\phi_{{\mathcal{N}_0}}(d_{\p})\phi_{\cp}( d_{\p})\f\mid \left( \begin{smallmatrix} 1& r^\prime \\ 0& 1 \end{smallmatrix} \right)_0 (x) \end{equation} \begin{equation}\label{equation:line5} =\phi_{{\mathcal{N}_0}}(d_{\p})\tau(\Phi_{\cp})^{-1}\overline{\Phi}_{\cp}(\dete(x))\overline{\Phi}_{\mathcal{P}}(\dete(w))\displaystyle\sum _{r\in \cond^{-1}\diffinv/\diffinv} \Phi_{\cp}^*(r\cond\diff)\phi_{\cp}( d_{\p})\f\mid \left( \begin{smallmatrix} 1& r^\prime \\ 0& 1 \end{smallmatrix} \right)_0 (x) \end{equation} We proceed by rewriting the sum in terms of $r^\prime$ rather than $r$. To do this we consider the expression $$\Phi_{\cp}^*(r\cond\diff)\phi_{\cp}( d_{\p})$$ inside the summation. As $\Phi_{\cp}(\tilde{\alpha})=\Phi_{\cp}^*(\tilde{\alpha}\mathcal{O}_K)\phi_{\cp}(\tilde{\alpha})$ for all $\tilde{\alpha}\in J_K$ with $(\tilde{\alpha}\mathcal{O}_K,\p)=1$ this expression is equal to: $$\Phi_{\cp}(r\tf\td)\overline{\phi}_{\cp}(r\mathfrak{f}_{\p}\mathfrak{d}_{\p})\phi_{\cp}(d_{\p}).$$ Setting $D_{\p}=\dete(w_{\p})=a_{\p}d_{\p}$, we rewrite this expression as $$\Phi_{\cp}(r\tf\td)\overline{\phi}_{\cp}(r\mathfrak{f}_{\p}\mathfrak{d}_{\p})\overline{\phi}_{\cp}(a_{\p} D^{-1}_{\p})=\Phi_{\cp}(r\tf\td)\overline{\phi}_{\cp}(a_{\p}r\mathfrak{f}_{\p}\mathfrak{d}_{\p})\overline{\phi}_{\cp}( D^{-1}_{\p}).$$ Recall the third condition defining $r^\prime$: $a_{\mathfrak{p}}r-r^\prime(d_{\mathfrak{p}}-c_{\mathfrak{p}}\mathcal{L}_{\p}\mathfrak{d}_{\p} r) \in \diff^{-1}\mathcal{O}_{\p}$. This implies $$a_{\mathfrak{p}}r\mathfrak{f}_{\p}\mathfrak{d}_{\p}-r^\prime\mathfrak{f}_{\p}\mathfrak{d}_{\p}(d_{\mathfrak{p}}-c_{\mathfrak{p}}\tilde{\cl}_{\p}\mathfrak{d}_{\p} r) \in\cond\mathcal{O}_{\p},$$ and in particular, $a_{\mathfrak{p}}r\mathfrak{f}_{\p}\mathfrak{d}_{\p}-d_{\mathfrak{p}}r^\prime\mathfrak{f}_{\p}\mathfrak{d}_{\p}\in\cond\mathcal{O}_{\p}$. This, along with the fact that $\Phi_{\cp}(r)=\Phi_{\cp}(r^\prime)=1$, shows that we now have $$\Phi_{\cp}(r^\prime\tf\td)\overline{\phi}_{\cp}(d_{\p}r^\prime\mathfrak{f}_{\p}\mathfrak{d}_{\p})\overline{\phi}_{\cp}( D^{-1}_{\p})=\Phi_{\cp}(r^\prime\tf\td)\overline{\phi}_{\cp}(r^\prime\mathfrak{f}_{\p}\mathfrak{d}_{\p})\phi_{\cp}(a_{\p})=\Phi_{\cp}^*(r^\prime\cond\diff)\phi_{\cp}(a_{\p}).$$ We have shown that $\Phi_{\cp}^*(r\cond\diff)\phi_{\cp}( d_{\p})=\Phi_{\cp}^*(r^\prime\cond\diff)\phi_{\cp}(a_{\p})$. 
We rewrite equation \ref{equation:line5} as \begin{equation}\label{equation:line6} =\phi_{{\mathcal{N}_0}}(d_p)\tau(\Phi_{\cp})^{-1}\overline{\Phi}_{\cp}(\dete(x))\overline{\Phi}_{\mathcal{P}}(\dete(w))\phi_{\cp}( a_{\p})\displaystyle\sum _{r^\prime \in \cond^{-1}\diffinv/\diffinv} \Phi_{\cp}^*(r^\prime\cond\diff)\f\mid \left( \begin{smallmatrix} 1& r^\prime \\ 0& 1 \end{smallmatrix} \right)_0 (x) \end{equation} By definition of $W(\cl)$, $\dete(w)\in\mathcal{O}_{\q}^\times$ for all finite primes $\q$. It follows that $$\overline{\Phi}_{\cp}(\dete(w))=\overline{\phi}_{\cp}(\dete(w))=\overline{\phi}_{\cp}(a_{\p}d_{\p}).$$ We therefore rewrite equation \ref{equation:line6} as \begin{equation}\label{equation:line7} =\phi_{{\mathcal{N}_0}}(d_p)\tau(\Phi_{\cp})^{-1}\overline{\Phi}_{\cp}(\dete(x))\overline{\phi}_{\cp}(a_{\p}d_{\p})\phi_{\cp}( a_{\p})\displaystyle\sum _{r^\prime \in \cond^{-1}\diffinv/\diffinv} \Phi_{\cp}^*(r^\prime\cond\diff)\f\mid \left( \begin{smallmatrix} 1& r^\prime \\ 0& 1 \end{smallmatrix} \right)_0 (x) \end{equation} This is equal to $\phi_{{\mathcal{N}_0}}(d_{\p})\overline{\phi}_{\cp}(d_{\p})\f_{\overline{\Phi}_{\mathcal{P}}}(x)=(\phi_{{\mathcal{N}_0}}\overline{\phi}_{\cp})_Y(w^\iota)\f_{\overline{\Phi}_{\mathcal{P}}}(x)$. Therefore $\f_{\overline{\Phi}_{\mathcal{P}}}(\alpha x w)=(\phi_{{\mathcal{N}_0}}\overline{\phi}_{\cp})_Y(w^\iota)\f_{\overline{\Phi}_{\mathcal{P}}}(x)$ for $\alpha\in G_K, x\in G_A$ and $w\in W(\mathcal{L})$ with $w_\infty=1$. It follows that $\f_{\overline{\Phi}_{\mathcal{P}}}\in \mathscr{S}_k(\mathcal{L},\overline{\Phi}_{\mathcal{P}}\Phi_{{\mathcal{N}_0}})$.\end{proof} If $\f\in \mathscr{S}_k^+(\cn,\Phi)$ is a normalized newform and $\Psi$ is a Hecke character with $(\mathfrak{f}_{\Phi},\mathfrak{f}_{\Psi})=1$, then $\f_{\Psi}$ is always a normalized newform of $\mathscr{S}_k^+(\mathfrak{f}_{\Psi}^2\cn,\Psi^2\Phi)$ by Theorem 5.5 of \cite{shemanske-walling}. The situation when the conductors of $\Phi$ and $\Psi$ are not coprime is much more subtle and will be studied throughout the remainder of this paper. Clearly it suffices to consider characters whose conductor is a power of a single prime dividing the level $\cn$. We therefore suppose that $\Psi$ is a $\p$-primary Hecke character. Henceforth we assume that $\Psi$ is a Hecke character with conductor dividing $\cp$. The infinite part of $\Psi$ has the form $\Psi_{\infty}(a)=\mbox{sgn}(a)^l |a|^{ir}$ for $l\in\mathbb Z^n$, $r\in\mathbb R^n$ and $a\in K_{\infty}^\times$. In what follows we shall always choose $\Psi$ so that $r=0$. We will see that the vanishing of $C(\p,\f)$ lies at the heart of the question of whether or not $\f_{\Psi}$ is a newform of $\mathscr{S}_k(\cn,\Psi^2\Phi)$. We present a slightly strengthened version of Theorem 3.3 of \cite{shemanske-walling}, which will allow us to determine when $C(\p,\f)\neq 0$. \begin{theorem}\label{theorem:threethree} Let $\f$ be a normalized newform lying in $\mathscr{S}_k(\cn,\Phi)$. \begin{enumerate} \item The Dirichlet series attached to $\f$, $D(s,f)=\sum_{\mathfrak{m}\subset \mathcal{O}} C(\mathfrak{m},\f) N(\mathfrak{m})^{-s}$ has an Euler product $$D(s,\f)=\prod_{\qz\mid \cn} (1-C(\qz,\f)N(\qz)^{-s})^{-1}\times \prod_{\qo\nmid\cn}(1-C(\qo,\f)N(\qo)^{-s}+\Phi^*(\qo)N(\qo)^{k_0-1-2s})^{-1}$$ \item If $\phi$ is not defined modulo $\cn \p^{-1}$, then $|C(\p,\f)|=N(\p)^{\frac{(k_0-1)}{2}}$. \item If $\phi$ is a character modulo $\cn\p^{-1}$, then $C(\p,\f)=0$ if $\p^2\mid\cn$ and $|C(\p,\f)|^2=N(\p)^{k_0-2}$ if $\p^2\nmid\cn$. 
\end{enumerate} \end{theorem} \begin{proof} The statement of this theorem differs from Theorem 3.3 of \cite{shemanske-walling} only in that part 2 of the latter showed that either $C(\p,\f)=0$ or $|C(\p,\f)|=N(\p)^{\frac{(k_0-1)}{2}}$ and that $C(\p,\f)$ was non-zero for a set of primes having density 1. Kevin Buzzard has recently shown that in fact, $C(\p,\f)$ is never zero (see \cite{buzzard}), allowing us to state the above theorem in its strengthened form. \end{proof} Henceforth we use the letter $\nu$ to denote $ord_{\p}(\cp)=ord_{\p}(\cn)$. \begin{lemma}\label{lemma:vanishingcoeffs} Assume that $\nu \geq 2$ and that $e(\Phi_{\cp})<\nu$. If $\textbf{f}\in\mathscr{S}_k^+(\cn,\Phi)$ is a normalized newform then $\textbf{f}_{\overline{\Psi}\Psi}=\textbf{f}$. In particular, $$\mathscr{S}_k^+(\cn, \Phi)^{\overline{\Psi}\Psi}=\mathscr{S}_k^+(\cn, \Phi).$$ \end{lemma} \begin{proof} It follows immediately from Theorem \ref{theorem:threethree}.(3) that $C(\mathfrak{p},\textbf{f})=0$. Because $\f$ is an eigenform of $T_{\p}$ with eigenvalue $C(\p,\f)$, $C(\mathscr{I}\mathfrak{p},\textbf{f})=C(\mathscr{I},\textbf{f})C(\mathfrak{p},\textbf{f})=0$ for all integral ideals $\mathscr{I}$. Thus the annihilator operator $A_{\mathfrak p}$ acts as the identity operator on the newforms of level $\cn$ and character $\Phi$. The first part therefore follows from the observation that $\textbf{f}_{\overline{\Psi}\Psi}=\textbf{f}\mid A_\mathfrak{p}$. As newforms generate the space $\mathscr{S}_k^+(\cn, \Phi)$, we have the second part as well.\end{proof} \begin{proposition}\label{proposition:liesinnewformspace} Assume that $\nu\geq 2$ and that $0<e(\Phi_{\cp})<\nu$. If $\textbf{f}\in \mathscr{S}_k^+(\cn,\Phi)$ is a newform then $\textbf{f}_{\overline{\Phi}_{\cp}} \in \mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})$ is a newform as well. \end{proposition} \begin{proof} Let $\textbf{f}\in \mathscr{S}_k^+(\cn,\Phi)$ be a normalized newform. By Proposition \ref{proposition:levelbound}, $\textbf{f}_{\overline{\Phi}_{\cp}} \in \mathscr{S}_k(\cn, \overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})$, and by Proposition \ref{proposition:eigentwist}, $\f_{\overline{\Phi}_{\cp}}$ is an eigenfunction of all the Hecke operators $T_\mathfrak{q}$ with $\mathfrak{q}$ a prime not dividing $\cn$, so there exists an ideal $\cn_0^\prime\mid\cn_0$, an integer $\mu$ satisfying $1\leq e(\Phi_{\cp})\leq\mu \leq \nu$ and a newform $\g\in \mathscr{S}_k^+(\p^\mu {\mathcal{N}_0^\prime},\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})$ such that $\f_{\overline{\Phi}_{\cp}} \sim \g$. We claim that $\cn_0^\prime=\cn_0$. Note that $\f=\f_{\overline{\Phi}_{\cp}\Phi_{\cp}}\sim \g_{\Phi_{\cp}}$ by Lemma \ref{lemma:vanishingcoeffs}, where $\g_{\Phi_{\cp}}$ has level $\p^\lambda \cn_0^\prime$ for some non-negative integer $\lambda$. Thus $\cn_0\mid\cn_0^\prime$, hence $\cn_0=\cn_0^\prime$. If $\mu=\nu$ then $\f_{\overline{\Phi}_{\cp}}$ and $\g$ are of the same level, hence there exists $c\in\mathbb{C}$ such that $\f_{\overline{\Phi}_{\cp}}=c\g$. As both forms are normalized, $c=1$ and $\f_{\overline{\Phi}_{\cp}}=\g$ is a newform, finishing the proof. We may therefore suppose that $\mu<\nu$. We claim that $e(\overline{\Phi}_{\cp})<\mu$. To show this, we will assume that $e(\overline{\Phi}_{\cp})=e(\Phi_{\cp})=\mu$ and derive a contradiction. Because $\f_{\overline{\Phi}_{\cp}} \sim \g$, we have $\f_{\overline{\Phi}_{\cp}\Phi_{\cp}}\sim \g_{\Phi_{\cp}}$ as well. 
By Lemma \ref{lemma:vanishingcoeffs}, $\f_{{\overline{\Phi}_{\cp}\Phi_{\cp}}}=\f$, hence $\f\sim \g_{\Phi_{\cp}}$. By Proposition \ref{proposition:levelbound}, $\g_{\Phi_{\cp}}\in \mathscr{S}_k(\p^{\mu+1}{\mathcal{N}_0},\Phi)$. Therefore $\nu\leq \mu+1$, meaning that $$ \mu+1\geq \nu > \mu.$$ It is thus clear that $\nu=\mu+1$. This means that $\f$ is a newform of level $\p^{\mu+1}{\mathcal{N}_0}$ and character $\Phi$ and $\g_{\Phi_{\cp}}$ is a normalized cuspform in the same space which is equivalent to it. Therefore there exists $c\in\mathbb{C}$ such that $\f=c\g_{\Phi_{\cp}}$. As both $\f$ and $\g_{\Phi_{\cp}}$ are normalized, we see that $c=1$ and $\f=\g_{\Phi_{\cp}}$. But as $C(\p,\g)\neq 0$ by Theorem \ref{theorem:threethree}(2), this contradicts Corollary 6.4 of \cite{shemanske-walling}, which implies that $\g_{\Phi_{\cp}}$ is not a newform of any level. We conclude that $e(\overline{\Phi}_{\cp})<\mu$. If $\mu\geq 2$ then Theorem \ref{theorem:threethree}(3) implies that the $\p$-th coefficient $C(\p,\g)$ of $\g$ is zero. Since $C(\p,\g)=0$ we have $\g=\g\mid A_{\p}$. But $$\textbf{f}_{\overline\Phi_{\cp}}=c_{\mathcal{O}}\g+c_{\p}\g\mid B_{\p}$$ and one easily checks by comparing Fourier coefficients that $c_{\mathcal{O}}=1$ and $c_{\p}=-C(\p,\g)$. Then $\textbf{f}_{\overline{\Phi}_{\cp}}=\g-C(\p,\g)\g\mid B_{\p}=\g\mid A_{\p}=\g$. Therefore $\f_{\overline{\Phi}_{\cp}}$ is a newform and we're done. Now suppose that $\mu=1$. Then $e(\overline{\Phi}_{\cp})<\mu$ implies that $\Phi_{\cp}$ is trivial. This contradicts our hypothesis that $\Phi_{\cp}$ is nontrivial. \end{proof} \begin{proposition}\label{proposition:liesinnewformspacearbitrary} Assume that $0<e(\Psi)< \frac{\nu}{2}$ and $e(\Phi_{\cp})+e(\Psi)< \nu$. If $\f\in \mathscr{S}_k^+(\cn,\Phi)$ is a newform then $\f_\Psi\in \mathscr{S}_k^+(\cn,\Psi^2\Phi)$ is a newform as well. \end{proposition} \begin{proof} We begin by noting that our hypotheses imply that $\nu\geq 3$. By Proposition \ref{proposition:crudebound}, $\f_\Psi\in \mathscr{S}_k(\p^\nu {\mathcal{N}_0},\Psi^2\Phi)$. Since $\f_\Psi$ is an eigenfunction of all the Hecke operators $T_{\q}$ with $\q$ a prime not dividing $\cn$ by Proposition \ref{proposition:eigentwist}, there exists an ideal $\cn_0^\prime\mid \cn_0$, an integer $\mu$ satisfying $0\leq e(\Phi_{\cp}\Psi^2)\leq\mu\leq \nu$ and a newform $\g\in \mathscr{S}_k^+(\p^\mu {\mathcal{N}_0^\prime},\Psi^2\Phi)$ such that $\f_\Psi \sim \g$. An argument identical to the one used in Proposition \ref{proposition:liesinnewformspace} shows that $\cn_0^\prime=\cn_0$. We will show that $e(\Phi_{\cp}\Psi^2)<\mu$ by assuming that $e(\Phi_{\cp}\Psi^2)=\mu$ and deriving a contradiction. Let $L=\mbox{max}\{\mu,e(\Phi_{\cp}\Psi^2)+e(\Psi),2e(\Psi)\}$. As $\f_\Psi \sim \g$, we have, by Lemma \ref{lemma:vanishingcoeffs}, $\f=\f_{\Psi\overline{\Psi}}\sim \g_{\overline{\Psi}}$ where $\g_{\overline{\Psi}} \in \mathscr{S}_k(\p^L {\mathcal{N}_0},\Phi)$ by Proposition \ref{proposition:crudebound}. Therefore $L\geq \nu$. We have three cases to consider. Case 1: $L=2e(\Psi)$. In this case $2e(\Psi)\geq \nu$ implies that $e(\Psi)\geq \frac{\nu}{2}$, contradicting our hypothesis that $e(\Psi)<\frac{\nu}{2}$. Case 2: $L=e(\Phi_{\cp}\Psi^2)+e(\Psi)$. We have three subcases to consider. First suppose that $e(\Phi_{\cp})>e(\Psi)$. Then $e(\Phi_{\cp}\Psi^2)=e(\Phi_{\cp})$, hence $L\geq \nu$ implies that $e(\Phi_{\cp})+e(\Psi)\geq \nu$, contradicting our hypothesis that $e(\Phi_{\cp})+e(\Psi)<\nu$. 
If $e(\Psi)>e(\Phi_{\cp})$, then $e(\Psi)\geq e(\Phi_{\cp}\Psi^2)$, hence $L\geq \nu$ implies that $2e(\Psi)\geq \nu$, which we have already seen results in a contradiction. Finally, suppose that $e(\Phi_{\cp})=e(\Psi)$. Then $e(\Psi)<\frac{\nu}{2}$ implies that $e(\Phi_{\cp})<\frac{\nu}{2}$ and consequently that $e(\Phi_{\cp}\Psi^2)<\frac{\nu}{2}$. But this means that $L = e(\Phi_{\cp}\Psi^2)+e(\Psi)<\nu$, contradicting the fact that $L\geq \nu$. Case 3: $L=\mu$. This case cannot occur as we have assumed that $e(\Phi_{\cp}\Psi^2)=\mu$, meaning that $e(\Phi_{\cp}\Psi^2)+e(\Psi)>\mu$ by the non-triviality of $\Psi$. We conclude that $e(\Phi_{\cp}\Psi^2)<\mu$ . Suppose first that $\mu>1$. Then Theorem \ref{theorem:threethree}(3) implies that $c(\p,\g)=0$. As in the proof of Proposition \ref{proposition:liesinnewformspace} we may easily show that $\f_\Psi=\g\mid A_{\p}$. But we've just shown that $\g\mid A_{\p}=\g$. Therefore $\f_\Psi$ is a newform and we're done. We show that the case $\mu=1$ cannot occur. Indeed, suppose that $\mu=1$ (and hence $e(\Phi_{\cp}\Psi^2)=0$). Then $\g$ is a newform of $\mathscr{S}_k(\p{\mathcal{N}_0},\Phi)$. As $\f_\Psi\sim \g$, we also have $\f_{\Psi\overline{\Psi}}\sim \g_{\overline{\Psi}}$. Our hypotheses imply that $\nu\geq 3$, so Lemma \ref{lemma:vanishingcoeffs} implies that $\f=\f_{\Psi\overline{\Psi}}$; hence $\f\sim \g_{\overline{\Psi}}$. Theorem 6.1 of \cite{shemanske-walling} implies that $\g_{\overline{\Psi}}$ is a newform of $\mathscr{S}_k(\p^{2e(\Psi)}{\mathcal{N}_0},\Phi)$, hence Theorem 3.5 of \cite{shemanske-walling} implies that in fact we have $\f=\g_{\overline{\Psi}}$. By comparing the levels of $\f$ and $\g_{\overline{\Psi}}$, we see that this means that $2e(\Psi)=\nu$; i.e. $e(\Psi)=\frac{\nu}{2}$. We assumed that $e(\Psi)<\frac{\nu}{2}$ however, so we obtain a contradiction, finishing our proof.\end{proof} \begin{theorem}\label{theorem:innertwist} If $e(\Phi_{\cp})<\nu$ then $ \mathscr{S}_k^+(\cn,\Phi)=\mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})^{\Phi_{\cp}}. $ If $e(\Phi_{\cp})=\nu$ and $\f$ is a normalized newform in $\mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{\cn_0})$, then $$\textbf{f}_{\Phi_{\cp}}=\g-C(\p,\g)\cdot\g\mid B_{\p}$$ for some normalized newform $\g$ in $\mathscr{S}_k^+(\cn,\Phi)$. \end{theorem} \begin{proof} When $K=\mathbb Q$ this is Corollary 3.4 of \cite{HPS}. Note first that the theorem is vacuously true when $e(\Phi_{\cp})=0$. We therefore assume that $e(\Phi_{\cp})\geq 1$. As a consequence, $\nu\geq 2$. Let $\f\in \mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})$ be a newform. Applying Proposition \ref{proposition:liesinnewformspace} shows that $\f_{\Phi_{\cp}}\in \mathscr{S}_k^+(\cn,\Phi)$ is a newform. As $\mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{\cn_0})$ is generated by newforms, we have the inclusion \begin{equation}\label{equation:thm1star} \mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})^{\Phi_{\cp}}\subset \mathscr{S}_k^+(\cn,\Phi). \end{equation} Now let $\f\in \mathscr{S}_k^+(\cn,\Phi)$. Then as above $\f_{\overline{\Phi}_{\cp}} \in \mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})$ (by interchanging $\Phi_{\cp}$ and $\overline{\Phi}_{\cp}$ in equation \ref{equation:thm1star}), hence $\f_{\overline{\Phi}_{\cp}\Phi_{\cp}}\in \mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})^{\Phi_{\cp}}$. 
This gives us the chain of inclusions \begin{displaymath} \mathscr{S}_k^+(\cn,\Phi)^{\overline{\Phi}_{\cp}\Phi_{\cp}}\subset \mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})^{\Phi_{\cp}}\subset \mathscr{S}_k^+(\cn,\Phi). \end{displaymath} Lemma \ref{lemma:vanishingcoeffs} shows that $\mathscr{S}_k^+(\cn,\Phi)^{\overline{\Phi}_{\cp}\Phi_{\cp}}=\mathscr{S}_k^+(\cn,\Phi)$, and it follows that $$\mathscr{S}_k^+(\cn,\Phi)=\mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{{\mathcal{N}_0}})^{\Phi_{\cp}}.$$ We now prove the second assertion. Suppose that $e(\Phi_{\cp})=\nu$. First note that by Proposition \ref{proposition:levelbound}, $\f_{\Phi_{\cp}}\in \mathscr{S}_k(\p^{\nu+1}\cn_0,\Phi)$. By Proposition \ref{proposition:eigentwist}, $\textbf{f}_{\Phi_{\cp}}$ is a Hecke eigenform for all $T_{\q}$ with $\q$ a prime not dividing $\cn$. Thus there exists an integer $\mu$ with $e(\Phi_{\cp})=\nu\leq \mu\leq \nu+1$ and a normalized newform $\g\in \mathscr{S}_k^+(\p^{\mu}\cn_0,\Phi)$ such that $\textbf{f}_{\Phi_{\cp}}\sim\g$. We claim that the case $\mu=\nu+1$ cannot occur. Indeed, if $\mu=\nu+1$ then $\g$ and $\textbf{f}_{\Phi_{\cp}}$ would both lie in $\mathscr{S}_k^+(\p^{\nu+1}\cn_0,\Phi)$ and our remarks at the end of Section \ref{section:prelims} would imply that $\textbf{f}_{\Phi_{\cp}}=\g$ is a newform. But Theorem \ref{theorem:threethree} shows that $C(\p,\f)\neq 0$, so that Corollary 6.4 of \cite{shemanske-walling} implies that $\textbf{f}_{\Phi_{\cp}}$ is not a newform of any level. This contradiction allows us to conclude that $\mu=\nu$. It then follows from Proposition \ref{proposition:levelbound} that $\g_{\overline{\Phi}_{\cp}}\in\mathscr{S}_k(\p^{\nu+1}\cn_0,\overline{\Phi}_{\cp}\Phi_{\cn_0})$. Using the fact that $\g$ is an eigenform of $T_{\p}$ (as follows from Theorem 3.5 of \cite{shemanske-walling}), we see that $$\g-C(\p,\g)\cdot\g\mid B_{\p}=\g-\g\mid T_{\p}\mid B_{\p} = (\textbf g_{\overline{\Phi}_{\cp}})_{\Phi_{\cp}}=\left( c_1\f + c_2 \f\mid B_{\p} \right)_{\Phi_{\cp}}=c_1 \textbf f_{\Phi_{\cp}}$$ Comparing Fourier coefficients yields $c_1=1$.\end{proof} \begin{theorem}\label{theorem:twocharacters} If $0<e(\Psi)< \frac{\nu}{2}$ and $e(\Phi_{\cp})+e(\Psi)< \nu$ then $$\mathscr{S}_k^+(\cn,\Phi)^\Psi=\mathscr{S}_k^+(\cn,\Psi^2\Phi).$$ \end{theorem} \begin{proof} When $K=\mathbb Q$ this is Theorem 3.12 of \cite{HPS}. We begin by noting that our hypotheses imply that $\nu\geq 3$. Let $\f\in \mathscr{S}_k^+(\cn,\Phi)$ be a newform. By Proposition \ref{proposition:liesinnewformspacearbitrary}, $\f_\Psi\in \mathscr{S}_k^+(\cn,\Psi^2\Phi)$ is a newform. As $\mathscr{S}_k^+(\cn,\Phi)$ is generated by newforms, we have the inclusion \begin{equation}\label{equation:thm2star} \mathscr{S}_k^+(\cn,\Phi)^\Psi \subset \mathscr{S}_k^+(\cn,\Psi^2\Phi). \end{equation} Twisting by $\overline{\Psi}$ yields: \begin{equation}\label{equation:thm2starstar} \mathscr{S}_k^+(\cn,\Phi)^{\Psi\overline{\Psi}} \subset \mathscr{S}_k^+(\cn,\Psi^2\Phi)^{\overline{\Psi}}. \end{equation} We claim that $e(\Psi^2\Phi_{\cp})+e(\Psi)<\nu$. We have two cases to consider. Case 1: $e(\Phi_{\cp})<\frac{\nu}{2}$ - By hypothesis $e(\Psi)< \frac{\nu}{2}$. Therefore $e(\Psi^2\Phi_{\cp})<\frac{\nu}{2}$, hence $e(\Psi^2\Phi_{\cp})+e(\Psi)<\nu$. Case 2: $e(\Phi_{\cp})\geq \frac{\nu}{2}$ - We have two subcases to consider. Suppose first that $e(\Phi_{\cp})>e(\Psi^2)$. Then $e(\Psi^2\Phi_{\cp})=e(\Phi_{\cp})<\nu-e(\Psi)$. Now suppose that $e(\Phi_{\cp})\leq e(\Psi^2)$. 
Then $e(\Phi_{\cp})\leq e(\Psi^2)\leq e(\Psi)< \frac{\nu}{2}$. But Case 2 assumes that $e(\Phi_{\cp})\geq \frac{\nu}{2}$, so this subcase cannot occur and we have shown our claim. Having shown that $e(\Psi^2\Phi_{\cp})+e(\Psi)<\nu$, we apply Theorem 5.7 of \cite{shemanske-walling} and Proposition \ref{proposition:liesinnewformspacearbitrary} to show that \begin{equation}\label{equation:thm2starstarstar} \mathscr{S}_k^+(\cn,\Psi^2\Phi)^{\overline{\Psi}} \subset \mathscr{S}_k^+(\cn,\Phi). \end{equation} Combining equations (\ref{equation:thm2starstar}) and (\ref{equation:thm2starstarstar}) gives us the chain of inclusions: \begin{displaymath} \mathscr{S}_k^+(\cn,\Phi)^{\Psi\overline{\Psi}} \subset \mathscr{S}_k^+(\cn,\Psi^2\Phi)^{\overline{\Psi}} \subset \mathscr{S}_k^+(\cn,\Phi). \end{displaymath} Lemma \ref{lemma:vanishingcoeffs} implies that $ \mathscr{S}_k^+(\cn,\Phi)= \mathscr{S}_k^+(\cn,\Psi^2\Phi)^{\overline{\Psi}}$. Twisting by $\Psi$ then yields: $$\mathscr{S}_k^+(\cn,\Phi)^\Psi=\mathscr{S}_k^+(\cn,\Psi^2\Phi)^{\overline{\Psi}\Psi}.$$ As $e(\Psi^2\Phi_{\cp})<\nu$, Lemma \ref{lemma:vanishingcoeffs} shows that $\mathscr{S}_k^+(\cn,\Psi^2\Phi)^{\overline{\Psi}\Psi}=\mathscr{S}_k^+(\cn,\Psi^2\Phi)$, finishing the proof.\end{proof} \begin{theorem} \label{theorem:primitivesum} If $\frac{\nu}{2}<e(\Phi_{\cp})<\nu$ then $$\mathscr{S}_k^+(\cn,\Phi)=\bigoplus_{e(\Psi)=\nu-e(\Phi_{\cp})} \mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi )^{\overline{\Psi}},$$ where the sum $\bigoplus_{e(\Psi)=\nu-e(\Phi_{\cp})} $ is taken over all Hecke characters $\Psi$ with conductor $\p^{\nu-e(\Phi_{\cp})}$ and infinite part $\Psi_{\infty}(a)=\mbox{sgn}(a)^l$ for $l\in\mathbb Z^n$ and $a\in K_{\infty}^\times$. \end{theorem} \begin{proof} When $K=\mathbb{Q}$ this is Theorem 3.9 of \cite{HPS}. We begin by noting that our hypothesis $\frac{\nu}{2}<e(\Phi_{\cp})<\nu$ implies that $\nu\geq 2$. By Theorem \ref{theorem:threethree}(3) above and Theorem 6.8 of \cite{shemanske-walling} we have the inclusion $$\mathscr{S}_k^+(\cn,\Phi)\subset \sum_{e(\Psi)=\nu-e(\Phi_{\cp})} \mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)^{\overline{\Psi}}.$$ Our strategy to complete the proof will be to prove the reverse inclusion and then show that the sum is direct. Let $\Psi$ be a Hecke character with conductor $\p^{\nu-e(\Phi_{\cp})}$ and infinite part $\Psi_{\infty}(a)=\mbox{sgn}(a)^l$, and let $\f\in \mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)$ be a newform. By Theorem 5.7 of \cite{shemanske-walling} we have $\f_{\overline{\Psi}}\in \mathscr{S}_k(\cn,\Phi)$ where $\cn$ is the exact level of $\f_{\overline{\Psi}}$. By Theorem \ref{theorem:threethree}(2), $C(\p,\f)\neq 0$, so by Theorem 6.3 of \cite{shemanske-walling}, $\f_{\overline{\Psi}}$ is a newform. Therefore for all $\p$-primary Hecke characters $\Psi$ with $e(\Psi)=\nu-e(\Phi_{\cp})$ we have the inclusion $$\mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)^{\overline{\Psi}}\subset \mathscr{S}_k^+(\cn,\Phi).$$ We have therefore shown that \begin{equation}\label{equation:sum}\mathscr{S}_k^+(\cn,\Phi)= \sum_{e(\Psi)=\nu -e(\Phi_{\cp})} \mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)^{\overline{\Psi}}.\end{equation} It therefore remains only to show that the sum on the right hand side of equation \ref{equation:sum} is direct. 
We do this by showing that $$\mbox{dim}(\mathscr{S}_k^+(\cn,\Phi))=\sum_{e(\Psi)=\nu -e(\Phi_{\cp})} \mbox{ dim} (\mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)^{\overline{\Psi}}).$$ Given a Hecke character $\Psi$ with $e(\Psi)=\nu-e(\Phi_{\cp})$ and infinite part $\Psi_{\infty}(a)=\mbox{sgn}(a)^l$, fix a basis $S_{\Psi}$ of $\mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)$ consisting of normalized newforms $\f_1,\dots,\f_n$. Define $$S=\bigcup_{\Psi} \{\textbf{f}_{\overline{\Psi}} : \f\in S_{\Psi} \}.$$ We have already shown that the elements of $S$ are all newforms of $\mathscr{S}_k^+(\cn,\Phi)$ and in fact span the space. It therefore suffices to show \begin{enumerate} \item The (distinct) elements of $S$ are linearly independent \item $\# S = \sum_{e(\Psi)=\nu-e(\Phi_{\cp})} \#S_\Psi =\sum_{e(\Psi)=\nu -e(\Phi_{\cp})} \mbox{ dim} (\mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)^{\overline{\Psi}}). $ \end{enumerate} Note that (2) is equivalent to the statement that all the elements $\f_{\overline{\Psi}}$ of $S$ are distinct. We show that the elements of $S$ are linearly independent by assuming the contrary and obtaining a contradiction. Suppose that there is a nontrivial relation \begin{equation}\label{equation:relation}\sum_{i=1}^m c_i \textbf{h}_i=0\end{equation} where $\textbf{h}_i\in S$ (for all $i$), the $\textbf{h}_i$ are all distinct, and each $c_i$ is a non-zero scalar. Also assume that $m\geq 2$ is minimal in the sense that the elements of any subset of $S$ having fewer than $m$ elements are linearly independent. For a prime $\q$ which does not divide $\cn$, we can apply the linear operator $T_{\q}-C(\q,\textbf{h}_1) \mbox{Id}$ to equation \ref{equation:relation} to get $$\sum_{i=1}^m c_i(C(\mathfrak{q},\textbf{h}_i)-C(\mathfrak{q},\textbf{h}_1)) \textbf{h}_i.$$ Note that the coefficient of $\textbf{h}_1$ is zero in the above sum. This means that the sum has fewer than $m$ summands and hence must be trivial by the minimality of $m$. As each $c_i$ is non-zero, we conclude that $C(\q,\textbf{h}_i)=C(\q,\textbf{h}_j)$ for all $1\leq i,j\leq m$ and $\q\nmid\cn$. As only finitely many primes divide $\cn$, Theorem 3.5 of \cite{shemanske-walling} shows that $\textbf{h}_1=\textbf{h}_2=\cdots=\textbf{h}_m$. This contradicts our assumption that the $\textbf{h}_i$ are distinct, proving that the elements of $S$ are linearly independent. To prove that $$\# S = \sum_{e(\Psi)=\nu-e(\Phi_{\cp})} \#S_\Psi =\sum_{e(\Psi)=\nu -e(\Phi_{\cp})} \mbox{ dim} (\mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi^2\Phi)^{\overline{\Psi}}),$$ it suffices to show if $\f\in \mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi_0^2\Phi)$ and $\g\in \mathscr{S}_k^+(\mathfrak{p}^{e(\Phi_{\cp})}{\mathcal{N}_0},\Psi_1^2\Phi)$ are normalized newforms (with $\Psi_0,\Psi_1$ Hecke characters satisfying $e(\Psi_0)=e(\Psi_1)=\nu-e(\Phi_{\cp})$) such that $\f_{\overline{\Psi}_0}=\g_{\overline{\Psi}_1}$ then $\Psi_0=\Psi_1$ and $\f=\g$. Suppose that $\f,\g$ are as in the previous paragraph and $\f_{\overline{\Psi}_0}=\g_{\overline{\Psi}_1}$. If $\Psi_0=\Psi_1$ then Theorem 3.5 of \cite{shemanske-walling} shows that $\f=\g$. Consequently, we may assume that $\Psi_0\neq \Psi_1$. 
Then $$\textbf{f}\mid A_{\p}=\textbf{f}_{\overline{\Psi}_0\Psi_0}=\textbf{g}_{\overline{\Psi}_1\Psi_0}.$$ Observe that $e(\Phi_{\cp}\Psi_1^2)=e(\Phi_{\cp})$ (as $e(\Phi_{\cp})>e(\Psi_1)$) and $0<e(\overline{\Psi}_1\Psi_0)\leq \mbox{max} \{e(\Psi_1),e(\Psi_0)\}<\frac{\nu}{2}<e(\Phi_{\cp})$ by hypothesis. By Corollary 6.4 of \cite{shemanske-walling}, $\g_{\overline{\Psi}_1\Psi_0}\in \mathscr{S}_k^+(\p^{e(\Phi_{\cp})+e(\overline{\Psi}_1\Psi_0)}{\mathcal{N}_0},\Psi_0^2\Phi)$ is a normalized newform. As $\f\sim \f\mid A_{\p}$ and $\f\mid A_{\p}=\g_{\overline{\Psi}_1\Psi_0}$ we must have $\f=\g_{\overline{\Psi}_1\Psi_0}$ (by Theorem 3.5 of \cite{shemanske-walling}). This means that $\f=\f\mid A_{\p}$. In particular, the $\p$-th coefficient of $\f$ is zero, contradicting Theorem \ref{theorem:threethree}(2) and finishing the proof. \end{proof} We conclude by presenting an application of the preceding theorems. This application makes clear the centrality of determining the vanishing of the $\p$-th `Fourier' coefficient of a Hilbert modular form in the study of character twists. This is a Hilbert modular analogue of Theorem 3.16 of \cite{HPS}. Before stating the theorem, however, we need a definition. \begin{definition} A newform $\g\in\mathscr{S}_k(\cn,\Phi)$ is said to be $\p$-primitive if $\g$ is not the twist of any newform of level $\cn^\prime$, where $\cn^\prime$ is a proper divisor of $\cn$, by a Hecke character whose conductor is a power of $\p$. \end{definition} \begin{theorem} Let $\f\in\mathscr{S}_k^+(\cn,\Phi)$ be a normalized newform. The following are equivalent: \begin{enumerate} \item $C(\p,\f)=0$ \item $\p^2\mid \cn$ and $e(\Phi_{\cp})<\nu$ \item $\f=\g_{\Psi}$ for some newform $\g$ in $\mathscr{S}_k^+(\cn^\prime,\Phi\overline{\Psi}^2)$ for some ideal $\cn^\prime$ dividing $\cn$ and some $\p$-primary Hecke character $\Psi$. \end{enumerate} Further, assuming (1), if $e(\Phi_{\cp})>\frac{\nu}{2}$ then in (3) $\g$ may be chosen so that $ord_{\p}(\cn^\prime)<ord_{\p}(\cn)$ and $\g$ is $\p$-primitive. \end{theorem} \begin{proof} That (1) implies (2) follows immediately from Theorem \ref{theorem:threethree}. Now assume (2) holds. We have two cases to consider. If $\Phi_{\cp}$ is trivial then let $\Psi$ be a $\p$-primary Hecke character with $0<e(\Psi)<\frac{\nu}{2}$. Theorem \ref{theorem:twocharacters} shows that $\mathscr{S}_k^+(\cn,\overline{\Psi}^2\Phi_{\cn_0})^{\Psi}=\mathscr{S}_k^+(\cn,\Phi_{\cn_0})$ and that there exists a newform $\g\in \mathscr{S}_k^+(\cn,\overline{\Psi}^2\Phi_{\cn_0})$ such that $\f=\g_{\Psi}$. Now suppose that $\Phi_{\cp}$ is nontrivial. Then Theorem \ref{theorem:innertwist} shows that there exists a newform $\g\in\mathscr{S}_k^+(\cn,\overline{\Phi}_{\cp}\Phi_{\cn_0})$ such that $\f=\g_{\Phi_{\cp}}$. We therefore take $\cn^\prime=\cn$ and $\Psi=\Phi_{\cp}$. Finally, assume (3) holds. Then $C(\p,\f)=C(\p,\g_{\Psi})=\Psi^*(\p)C(\p,\g)=0$ by Proposition \ref{proposition:crudebound}. For the final assertion, note that $\frac{\nu}{2}<e(\Phi_{\cp})<\nu$ implies, by Theorem \ref{theorem:primitivesum}, that there exists a newform $\g\in\mathscr{S}_k^+(\p^{e(\Phi_{\cp})}\cn_0,\Psi^2\Phi)$ such that $\f=\g_{\overline{\Psi}}$, where $\Psi$ is a $\p$-primary Hecke character with $e(\Psi)=\nu-e(\Phi_{\cp})$. Since $e(\Phi_{\cp})<\nu$, we have $ord_{\p}(\p^{e(\Phi_{\cp})}\cn_0)=e(\Phi_{\cp})<\nu=ord_{\p}(\cn)$, as required. We show that such a $\g$ is $\p$-primitive. It clearly suffices to show that $C(\p,\g)\neq 0$, which follows from Theorem \ref{theorem:threethree} as $e(\Psi^2\Phi_{\cp})=e(\Phi_{\cp})=ord_{\p}(\p^{e(\Phi_{\cp})}\cn_0)$.\end{proof}
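As a concrete illustration of the statements above (included only as an example), suppose that $\nu=ord_{\p}(\cn)=3$ and $e(\Phi_{\cp})=2$, so that $\frac{\nu}{2}<e(\Phi_{\cp})<\nu$. Then Theorem \ref{theorem:primitivesum} decomposes $\mathscr{S}_k^+(\cn,\Phi)$ into the twists by $\overline{\Psi}$ of the spaces $\mathscr{S}_k^+(\p^{2}{\mathcal{N}_0},\Psi^2\Phi)$, where $\Psi$ runs over the $\p$-primary Hecke characters with $e(\Psi)=1$, and the last theorem shows that every normalized newform $\f\in\mathscr{S}_k^+(\cn,\Phi)$ has $C(\p,\f)=0$ and arises as the twist $\g_{\overline{\Psi}}$ of a $\p$-primitive newform $\g$ of level $\p^{2}{\mathcal{N}_0}$.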
1,314,259,992,762
arxiv
\section{Introduction} Fluctuations are a normal occurrence in physical systems. Owing to their stochastic nature, the values of certain observables deviate from their average values, which may be defined over a long time or over a large number of identically prepared systems (ensembles). The well-known phenomenon of critical opalescence, for example, is caused by fluctuations at all length scales during a second order phase transition. In high-energy collision experiments, the search for fluctuations of quantities (like net-charge \cite{jeon2}, \cite{asakawaprl}) over a large number of events is important in the search for the critical point \cite{stephanov1} or tri-critical point \cite{stephanov2} in the quantum chromodynamic (QCD) phase diagram. The study of particle multiplicity ratio fluctuation \cite{NA49} is another such example in this context. Much in the same way as the number of particles in a certain region of a system fluctuates, everyday examples teach us that the temperature of a physical system can also fluctuate. Apart from the examples from high-energy collisions where the particle yield has shown signatures of temperature fluctuation \cite{STAR,PHENIX1,PHENIX2,ALICE,CMS1,CMS2,ATLAS,ALICE2,bediaga,azmi,worku}, there are numerous other situations (like cosmological perturbations in our expanding universe as sources of temperature fluctuation) where the system concerned is not in global thermal equilibrium. The temperature, on the contrary, varies with time and space. The temperature fluctuation associated with such systems encodes transport properties like conductivity, shear viscosity, rates of chemical reactions etc. As the system evolves, dynamics dictates the temperature fluctuation until a state of minimum energy, or equilibrium, is attained. The evolution of the fluctuations has been investigated for systems concerned with high-energy collisions employing the Boltzmann Transport Equation (BTE) \cite{alametal,stephanov3}. It is now important to study temperature fluctuation as well as its evolution in such systems, as they can characterize the medium created after high-energy collisions \cite{sumit1,sumit2} or may give a hint of the QCD critical point \cite{pgjpg}. \begin{figure}[h] \centering \includegraphics[scale=0.2]{Hot_Figure.eps} \caption{(color online) Pictorial representation of hotspots in a medium.} \label{hotspots} \end{figure} A medium with spatially fluctuating temperature can be schematically represented by Fig. \ref{hotspots}, where, within a large system, we encounter subsystems with different temperature values. In our present work, we model the time evolution of the temperature fluctuation among these subsystems. At any given time slice, we assume that the system comprises temperature hotspots or zones evolving with time. This is essentially the assumption of local thermodynamic equilibrium of matter, where the temperature hotspots are weakly interacting. The assumption of weakly interacting hotspots is justified as follows: in any thermal system, the correlation distance may be taken to be the Debye length ($r_D$), which is $\sim (gT)^{-1}$ \cite{bellac}, where $g$ is the coupling and $T$ is the average temperature. For $g = 0.5$ and $T = 200$ MeV, restoring the factor of $\hbar c \approx 197$ MeV fm gives $r_D \sim \hbar c/(gT) \approx 2~fm$. Therefore, for the specific example of the medium created after a Heavy-Ion Collision (HIC), the system radius $r_S \gg r_D$ when $T=200$ MeV, and it can be safely argued that the temperature hotspots are effectively non-interacting. 
This in turn implies that particles in a certain temperature zone hardly affect those in other temperature zones. In fact, (assuming zero chemical potential) such systems can be represented by a collection of canonical ensembles \cite{polytherm} with different temperature values. The probability that a certain member of the ensemble has a certain energy at some instant will depend on the fluctuating temperature values of the collection of subsystems. In the present work, we derive an evolution equation for the fluctuation in the Boltzmann parameter $\beta~(=1/T)$ with the aid of the Boltzmann Transport Equation in the Relaxation Time Approximation (RTA), under the constraint that the observation time is much less than the relaxation time of the thermal bath. Later on we analyze the same problem for arbitrary observation time. The manuscript is organized as follows. In section \ref{setup} we begin by considering the BTE and the evolution of the $\beta$-fluctuation, followed by an analysis specific to heavy-ion collisions with arbitrary observation time. We then discuss our results in section 3, where the relative variance of the Boltzmann $\beta$-parameter will be compared with similar quantities extracted from experimental data. Lastly, we conclude by conjecturing possible connections with early universe cosmology. \section{The Methodology} \label{setup} \noindent In order to gain qualitative insight into the evolution of temperature fluctuation, we consider an ansatz \cite{dodelson} for the particle distribution function $f$ as \begin{equation} f=e^{-\beta p (1 + \Delta \beta)} \end{equation} \noindent where we consider a medium with a Boltzmann distribution of massless particles ($p = |\vec{p}| = E$) with average inverse temperature $\beta(t)$ at some time slice. In the high temperature regime ($\beta E \ll 1$) that we are interested in, the quantum statistics tend towards the Boltzmann distribution. The average inverse temperature of the system is calculated as the arithmetic mean over the distribution of temperature hotspots, {\it i.e.} if there are $n_i$ hotspots individually characterized by inverse temperatures $\beta_i$, then the average is calculated as $\frac{\Sigma n_i \beta_i}{\Sigma n_i}$. Generalizing this to the continuum limit, we add an anisotropic and inhomogeneous fluctuation function $\Delta \beta(\vec{r},\hat{p};t)$, where $\hat{p}$ is a unit vector along the direction of motion of the particle. The $\hat{p}$ dependence encodes the anisotropy of the fluctuation. We now discuss the temporal evolution of the fluctuation with the help of the BTE. \\ \\ \noindent The generic form of the BTE can be written as: \begin{equation} \frac{df}{dt}=\frac{\partial f}{\partial t}+\vec{v}.\vec{\nabla} f+\vec{F}.\vec{\nabla}_p f=\mathcal{C}[f] \label{bte} \end{equation} \noindent where $\vec{v}$ is the particle velocity, $\vec{F}$ is any external force (like gravity) and $\mathcal{C}[f]$ is the collision term encoding the information about interactions. $\vec{\nabla}_p$ is the momentum-space gradient operator. For our present case, we assume that the system experiences no external force, and hence $\vec{F}=\vec{0}$. However, the inhomogeneity in $\Delta \beta$ still exists. Assuming $|\Delta \beta| \ll 1$, we get \begin{eqnarray} f &\approx& \left. e^{-p\beta-p\beta\Delta \beta}\right\vert_{\Delta \beta=0}+ \left. 
\beta \frac{\partial}{\partial \beta} \left[{e^{-p\beta-p\beta\Delta \beta}}\right] \right \vert_{\Delta \beta=0} \Delta \beta \nonumber\\ &=& e^{-p\beta}-p\beta e^{-p\beta} \Delta \beta \nonumber\\ &=& f^{(0)}-f^{(0)} p\beta \Delta \beta \label{f} \end{eqnarray} \noindent Using $f^{(0)}=e^{-p\beta}$ and putting Eq. (\ref{f}) in Eq. (\ref{bte}), we get \begin{eqnarray} \frac{\partial}{\partial t} \left[ e^{-p\beta}-p\beta e^{-p\beta} \Delta \beta \right] + \frac{p^i}{E} \frac{\partial}{\partial x^i} \left[ e^{-p\beta}-p\beta e^{-p\beta} \Delta \beta \right] \nonumber \\ = - \frac{f-f^{(0)}}{t_{\mathrm{R}}} \label{fluceq} \end{eqnarray} \noindent where $v^i=p^i/E$, and we assume the relaxation time approximation for the collision term $\mathcal{C}[f]$, with $t_{\mathrm{R}}$ as the relaxation time. Since equilibrium distributions are stationary and (in absence of external force) homogeneous, the BTE for equilibrium distributions is identically satisfied. In the present scenario, we assume the equilibrium distribution function $f^{(0)}$ to be stationary for a time duration much longer than the observation time allowed by BTE (but this time should be much less than $t_R$, within which the distribution changes appreciably), then \begin{eqnarray} \frac{\partial}{\partial t} f^{(0)}+\frac{p^i}{E} \frac{\partial}{\partial x^i} f^{(0)}=0 \end{eqnarray} \noindent and hence, Eq. (\ref{fluceq}) becomes \begin{eqnarray} -p\frac{\partial \Delta \beta}{\partial t} \beta f^{(0)} - \frac{p^i}{E} \beta p f^{(0)} \frac{\partial \Delta \beta}{\partial x^i}= \frac{p\beta \Delta \beta f^{(0)}}{t_{\mathrm{R}}} \label{fluceq1} \end{eqnarray} if we assume the average inverse temperature to be changing very slowly with time. \\ \noindent Expressing $\Delta \beta(\vec{r},\hat{p};t)$ in terms of its Fourier Transform \begin{eqnarray} \Delta \beta(\vec{r},\hat{p};t) = \int d^3k \Delta \beta_k(t) e^{i\vec{k}.\vec{x}} \end{eqnarray} \noindent where we denote $\Delta \beta(\vec{k},\hat{p};t)\equiv\Delta \beta_k(t)$ for simplicity. Eq. (\ref{fluceq1}) becomes \begin{eqnarray} -p\beta \frac{\partial \Delta \beta_k(t) }{\partial t} - \frac{p\beta}{t_{\mathrm{R}}} \Delta \beta_k(t) - i\beta \frac{|p|}{E} p k \mu \Delta \beta_k(t) = 0 \nonumber\\ \frac{\partial \Delta \beta_k(t) }{\partial t} = -\left[i\frac{|p|}{E} k \mu + \frac{1}{t_{\mathrm{R}}}\right] \Delta \beta_k(t) \label{flucevoeq} \end{eqnarray} \noindent where $\hat{k}.\hat{p}=\mu=\mathrm{cos}\theta$ ($\theta$ is the angle $\hat{k}$ makes with $\hat{p}$). The solution of Eq. (\ref{flucevoeq}) is then given by (see \cite{sarwaralam}, for example, for similar equation in context of energy density fluctuation.) \begin{equation} \Delta \beta(\vec{k},\hat{p};t) = \Delta \beta(\vec{k},\hat{p};t^0) e^{-i \frac{|p|}{E} k \mu (t-t^0)} e^{-\frac{t-t^0}{t_{\mathrm{R}}}} \label{tempflucevoeq} \end{equation} \noindent We can simplify Eq. (\ref{tempflucevoeq}) by assuming an isotropic fluctuation profile. Thus, assuming $|\vec{p}|=E$ we get a simplified expression for the temporal evolution of fluctuation for a medium of massless particles. We further average over the whole solid angle $\Omega$ subtended by $\hat{p}$. Here $\vec{k}$ is a constant vector assumed to be directed along the $z$-axis. 
Hence, the averaged fluctuation becomes
\begin{eqnarray}
\Delta \beta_{\mathrm{av}} (\vec{k};t) &=& \Delta \beta (\vec{k};t^0) e^{-\frac{t-t^0}{t_{\mathrm{R}}}} \frac{1}{4\pi} \int_{\Omega} e^{-ik\mu(t-t^0)} d\Omega \nonumber\\
&=& \Delta \beta (\vec{k};t^0) e^{-\frac{t-t^0}{t_{\mathrm{R}}}} \frac{1}{4\pi} \int_{-1}^{1} d\mu e^{-ik\mu(t-t^0)} \int_{0}^{2\pi} d\phi \nonumber\\
\Delta \beta_{\mathrm{rel}} (\vec{k};t) &=& \frac{\Delta \beta_{\mathrm{av}} (\vec{k};t)}{\Delta \beta (\vec{k};t^0)} \nonumber\\
&=& e^{-\frac{t-t^0}{t_{\mathrm{R}}}}\frac{\mathrm{sin}\, k(t-t^0)}{k(t-t^0)}
\label{relfluc}
\end{eqnarray}
From Eq. (\ref{relfluc}), we can infer that the relative fluctuation $\Delta \beta_{\mathrm{rel}} (\vec{k};t)$ decays with time, being damped exponentially on the scale $t_{\mathrm{R}}$ and modulated by the oscillating $\mathrm{sin}\, k(t-t^0)/k(t-t^0)$ factor. In Fig. \ref{TFlucBTETime} as well as in Fig. \ref{TFlucBTERelTime}, we provide plots depicting the Fourier-space variation of $\Delta \beta_{\mathrm{rel}} (\vec{k};t)$ for different values of the time interval ($t-t^0$) and the relaxation time ($t_{\mathrm{R}}$), respectively. The reliability of the variations shown in the figures is governed by the constraint that the observation time must be much less than the time taken by the distribution function to change appreciably \cite{balescu}, {\it i.e.} the relaxation time $t_{\mathrm{R}}$:
\begin{equation}
(t-t^0)\ll t_{\mathrm{R}}
\label{timeupperlimit}
\end{equation}
Consistent with our earlier assumption of a very slow variation of $\beta$ with time, ($t-t^0$) should also be a time interval within which the average temperature can be taken to be almost constant. \\
\noindent We observe in Fig. \ref{TFlucBTETime} that the relative fluctuations die down with time. Additionally, the soft modes of fluctuation, or in other words, fluctuations at larger length scales towards the periphery of the medium, are large. In Fig. \ref{TFlucBTERelTime}, we observe no appreciable modification of the fluctuation with increasing $t_{\mathrm{R}}$ when $(t-t^0)\ll t_{\mathrm{R}}$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{TFlucBTETime.eps}
\caption{(color online) Variation of $\Delta \beta_{\mathrm{rel}} (\vec{k};t)$ with $k$. Red (solid): $(t-t^0) = 1$ fm, Black (dashed): $(t-t^0) = 2$ fm, Blue (dotted): $(t-t^0) = 3$ fm, for $t_{\mathrm{R}} = 3$ fm.}
\label{TFlucBTETime}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{TFlucBTERelTime.eps}
\caption{(color online) Variation of $\Delta \beta_{\mathrm{rel}} (\vec{k};t)$ with $k$. Orange (solid): $t_{\mathrm{R}} = 3$ fm, Black (dashed): $t_{\mathrm{R}} = 6$ fm, Magenta (dotted): $t_{\mathrm{R}} = 9$ fm, for $(t-t^0) = 0.1$ fm.}
\label{TFlucBTERelTime}
\end{figure}
We have thus solved the evolution equation for the $\beta$-fluctuation in Fourier space using the Boltzmann Transport Equation. However, the generality of our calculation is limited by the upper bound in Eq. (\ref{timeupperlimit}). Consequently, Eq. (\ref{tempflucevoeq}), which assumes a very slow variation of temperature, cannot be applied to cases involving arbitrarily large observation times within which the temperature changes appreciably. To study a more realistic situation, we can consider the temperature profiles of an evolving medium at different, arbitrarily separated time slices. After quantifying the inverse temperature fluctuation, we can evaluate it at every time instant and observe its variation at different stages of the evolution. As an example, we have chosen the radially varying temperature profile of a viscous medium created by heavy-ion collisions from Ref.
\cite{baiertempprof}. We can characterize the temperature profile of the viscous medium shown in Ref. \cite{baiertempprof} by the following function:
\begin{eqnarray}
T_M(r,t)=\frac{T_0(t)}{e^{a(t)\left(\frac{r}{r_0}-1\right)}+1}
\label{TempProfBaier}
\end{eqnarray}
\noindent where $T_M(r_0)=T_0/2$, $T_M(r)\approx T_0$ at $r=0$, and $a(t)$ is a parameter that fixes how sharply the profile drops down. From Eq. (\ref{TempProfBaier}), writing $\beta_M=1/T_M$, we get
\begin{equation}
\beta_M(r;t)=\beta_0(t)\left(e^{a(t)\left(\frac{r}{r_0}-1\right)}+1\right)
\label{betadist}
\end{equation}
\noindent where $r$ denotes the radial distance of a zone from the centre of the medium. Using Eq. (\ref{betadist}), we can generate a collection $\{\beta_{M}\}$ of $\beta_{M}$ values. Given the collection, we can now define an average $\beta_M$ value $\langle \beta_M \rangle =\beta(t)$ at a certain instant $t$ and a fluctuation $\Delta\beta(r,t)$ as
\begin{eqnarray}
\Delta\beta(r,t)&=&\beta_M(r,t)-\beta(t) \nonumber\\
&=&\beta_0(t)e^{ a(t) \left( \frac{r}{r_0} - 1 \right)}+\delta \beta(t)
\end{eqnarray}
\noindent where $\delta \beta(t)=\beta_0(t)-\beta(t)$. The Fourier transform $\Delta\beta(k;t)$ now becomes
\begin{eqnarray}
\Delta\beta(k;t) &=& \Delta\beta_k \nonumber\\
&=&\frac{2\beta_0(t)}{(2\pi)^2k} \int_{0}^{R} e^{ a(t) \left( \frac{r}{r_0} - 1 \right)} r \mathrm{sin}(k~r) dr + \delta \beta(t) \delta(\vec{k}) \nonumber\\
\label{TempFlucFT}
\end{eqnarray}
\noindent where $R$ is the system size and $\delta(\vec{k})$ is the Dirac delta function.
\begin{figure}[!htb]
\minipage{0.7\textwidth}
\includegraphics[scale=0.58]{TFluceta1by4pi.eps}
\endminipage\hfill
\minipage{0.7\textwidth}
\includegraphics[scale=0.58]{TFluceta1by4pietapt3.eps}
\endminipage\hfill
\caption{(color online) Variation of the inverse temperature fluctuation (Eq. (\ref{TempFlucFT})) in a viscous medium with $k$. Upper panel: Red (solid): $\tau=2.2$ fm/c, Black (dashed): $\tau=5.1$ fm/c, Blue (dotted): $\tau=9.1$ fm/c, all with $\eta/s=0.08$. Lower panel: Orange (solid): $\eta/s = 0.08$, Black (dashed): $\eta/s = 0.3$, both at $\tau=5.1$ fm/c.}
\label{TFluc}
\end{figure}
\section{Results and Discussion}
As seen from Eq. (\ref{TempFlucFT}), the soft modes of the $\beta$-fluctuation become dominant, implying that the fluctuation is greater towards the periphery (at large system radius). The variation of the inverse temperature fluctuation in momentum space is shown in the upper panel of Fig. \ref{TFluc}. The lower panel of Fig. \ref{TFluc} shows the variation of the fluctuation for different viscosities of the medium. As intuitively expected, higher viscosity favours lower fluctuations. In the previous section, we have already defined the average $\beta(t)$ and the fluctuation $\Delta \beta$ with the help of the set $\{\beta_{M}\}$. With the aid of the same set we can now define a relative $\beta$-fluctuation,
\begin{equation}
\frac{\langle \beta_M^2 \rangle-\langle \beta_M \rangle^2}{\langle \beta_M \rangle^2} = \mathcal{R}_{\beta}
\label{rbeta}
\end{equation}
\noindent Using the $\beta_0$, $a$ and $r_0$ values tabulated in Table \ref{parvalue}, we evaluate $\mathcal{R}_{\beta}$ for the system produced in HICs at different stages of its evolution with the help of Eq. (\ref{betadist}). We observe that, within any arbitrary choice of radius shell, the relative fluctuations die down with time. For demonstration, we have chosen the shell between the radii 14 fm and 15 fm in Table \ref{parvalue}.
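\noindent As an illustration of this shell computation, the following minimal Python sketch evaluates $\mathcal{R}_{\beta}$ of Eq. (\ref{rbeta}) from the profile of Eq. (\ref{betadist}) for the 14 fm to 15 fm shell, using the parameter values of Table \ref{parvalue}. The helper name \texttt{R\_beta} and the uniform sampling in $r$ across the shell are our own illustrative assumptions; the sketch is only meant to reproduce the order of magnitude of the quoted values.
\begin{verbatim}
# Sketch: relative beta-fluctuation R_beta over a radius shell,
# using beta_M(r) of Eq. (betadist) sampled uniformly in r (an assumption).
import numpy as np

def R_beta(beta0, a, r0, r_min=14.0, r_max=15.0, n=1000):
    r = np.linspace(r_min, r_max, n)                      # shell radii (fm)
    beta_M = beta0 * (np.exp(a * (r / r0 - 1.0)) + 1.0)   # Eq. (betadist), GeV^-1
    return beta_M.var() / beta_M.mean() ** 2              # Eq. (rbeta)

# beta_0, a, r_0 values taken from Table (parvalue), eta/s = 0.08
for tau, b0, a, r0 in [(2.2, 3.45, 5.99, 7.96),
                       (5.1, 4.55, 3.42, 8.41),
                       (9.1, 5.56, 1.91, 8.71)]:
    print(f"tau = {tau} fm/c : R_beta = {R_beta(b0, a, r0):.3f}")
\end{verbatim}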
Our observation remains unaltered for any other shell. In Table \ref{relflucvisco}, we show the change in $\mathcal{R}_{\beta}$ with viscosity (for a radius shell ranging between 14 fm and 15 fm). With increasing viscosity, the relative fluctuation decreases, thereby leading to lower $\mathcal{R}_{\beta}$ values.
\begin{table}[h]
\caption{Values of parameters extracted from the temperature profile shown in Ref. \cite{baiertempprof} using Eq. (\ref{TempProfBaier}) and the $\mathcal{R}_{\beta}$ values using Eq. (\ref{rbeta}) at different $\tau$ with $\eta/s$=0.08.}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$\tau$ (fm/c) & $\beta_0$ (GeV$^{-1}$) & $a$ & $r_0$ (fm) & $\mathcal{R}_{\beta}$ \\
\hline
2.2 & 3.45 & 5.99 & 7.96 & 0.047\\
\hline
5.1 & 4.55 & 3.42 & 8.41 & 0.011\\
\hline
9.1 & 5.56 & 1.91 & 8.71 & 0.002\\
\hline
\end{tabular}
\end{center}
\label{parvalue}
\end{table}
\vspace{0.03cm}
\begin{table}[h]
\caption{Relative fluctuations in $\beta$ at $\tau$=5.1 fm/c with change of viscosity.}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
$\eta/s$ & $\mathcal{R}_{\beta}$ \\
\hline
0.08 & 0.012\\
\hline
0.3 & 0.011\\
\hline
\end{tabular}
\end{center}
\label{relflucvisco}
\end{table}
\begin{table}[h]
\begin{center}
\caption{Comparison of $\mathcal{R}_{\beta}$ at the boundary as obtained from Eq. (\ref{TempProfBaier}) with the $(q-1)$ value obtained from experiment \cite{TBW}.}
\vspace{0.3cm}
\begin{tabular}{ |c|c| }
\hline
$\mathcal{R}_{\beta}$ & $(q-1)$ \\
\hline
0.01 & 0.018 $\pm$ 0.005 \\
\hline
\end{tabular}
\end{center}
\label{rbetaqminus1}
\end{table}
As it turns out, the multiparticle production processes in high-energy electron-positron \cite{bediaga}, hadronic and heavy-ion collisions \cite{TsallisInHIC1,TsallisInHIC2,TsallisInHIC3,TsallisInHIC4,TsallisInHIC5, TsallisInHIC6,TsallisInHIC7,TsallisInHIC8} are quite accurately characterized by a Tsallis entropic parameter $q$ \cite{Tsallis}, whose deviation from unity, $(q-1)$, plays a role similar to $\mathcal{R}_{\beta}$; in the context of high-energy collisions, $q$ typically lies in the range $1 < q < 1.2$ \cite{beck1}. Here, we would like to briefly mention some works by the authors of \cite{Tsallis,Wilk} connecting the $q$-parameter with the temperature fluctuation, or {\it non-extensivity}, of thermal systems. The non-extensive nature is manifested once we find out that the simple addition of entropies ($S$) of two sub-parts ($A$ \& $B$) of a bigger system $C$ does not give the entropy of the system $C$. Rather, $S(C)=S(A)+S(B)+(1-q)S(A)S(B)$, where $q$ measures the degree of deviation from additivity. This leads to a proposed modification of the usual Boltzmann-Gibbs formula to
\begin{equation}
G_{q}(E) = \left[1 + (q-1)\beta E \right]^{\frac{-1}{q-1}}
\end{equation}
\noindent As $q \rightarrow 1$, $G_{q}(E) \rightarrow e^{-\beta E}$, and we recover the usual Boltzmann-Gibbs formula. Therefore, $q$ has also been dubbed the non-extensivity parameter in the literature. In an elegant exposition of the same, Wilk \cite{Wilk} deduced that
\begin{eqnarray}
G_{q}(E) &=& \left[1 + (q-1)\beta E \right]^{\frac{-1}{q-1}} \nonumber \\
&=& \int^{\infty}_{0}~e^{-\beta' E} f(\beta')d\beta'
\end{eqnarray}
\noindent where the distribution function $f(\beta')$ is the usual chi-squared (gamma) distribution in $\beta'$ given by
\begin{equation}
f(\beta') = \frac{1}{\Gamma(\alpha)} \left(\frac{\alpha}{\beta} \right)^{\alpha} \beta'^{\,\alpha - 1} \exp {\left(-\frac{\alpha \beta'}{\beta} \right)}
\end{equation}
\noindent where $\alpha = \frac{1}{q-1}$.
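\noindent As a quick numerical cross-check of this decomposition (a sketch under illustrative parameter choices, not part of the derivation in \cite{Wilk}), one can fold the Boltzmann factor with the above distribution and compare with the Tsallis form; the values of $q$, $\beta$ and $E$ used below are assumptions made only for the demonstration.
\begin{verbatim}
# Sketch: verify  int_0^inf exp(-b'*E) f(b') db' = [1+(q-1)*beta*E]^(-1/(q-1))
# for a gamma-distributed inverse temperature b' with mean beta, alpha = 1/(q-1).
import numpy as np
from scipy import integrate, special

q, beta, E = 1.1, 5.0, 0.5            # illustrative q, beta (GeV^-1), E (GeV)
alpha = 1.0 / (q - 1.0)

def f(bp):                            # gamma distribution in beta'
    return (alpha / beta) ** alpha / special.gamma(alpha) \
           * bp ** (alpha - 1.0) * np.exp(-alpha * bp / beta)

lhs, _ = integrate.quad(lambda bp: np.exp(-bp * E) * f(bp), 0.0, np.inf)
rhs = (1.0 + (q - 1.0) * beta * E) ** (-1.0 / (q - 1.0))
print(lhs, rhs)                       # the two agree to numerical precision
\end{verbatim}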
With respect to this distribution of $\beta'$, we have the mean value $\langle \beta' \rangle = \beta$, and the relative variance
\begin{equation}
\frac{\langle \beta'^2 \rangle- \langle \beta' \rangle^2}{ \langle \beta' \rangle ^2} = q-1
\label{qminus1}
\end{equation}
We can therefore make a correspondence between the $\mathcal{R}_{\beta}$ defined in Eq. (\ref{rbeta}) and the $(q-1)$ defined in Eq. (\ref{qminus1}). The deviation of $q$ from unity is associated not only with the relative $\beta$ fluctuation in the system, but also with that during the hadronization process \cite{Wilk,beck,beckcohen}. This also gains significance in the context of the QCD phase diagram and the search for the critical point which, in fact, may be associated with large fluctuations in the $q$ value \cite{pgjpg}. Non-extensivity of any thermodynamic system is invariably linked to temperature fluctuation, and hence to the heat capacity/specific heat of the system. However, whether the converse statement holds is yet to be answered. \\
\noindent The $q$ values for systems produced in high-energy collisions can be obtained \cite{TBW} by fitting the experimentally observed particle spectra. Assuming an average freeze-out time of $\sim 9$ fm/c, we can study the temperature profile at $\tau=9.1$ fm/c to compare the $\mathcal{R}_{\beta}$ values with the experimentally observed $(q-1)$ values under similar conditions \cite{TBW}. According to Table \ref{rbetaqminus1}, the $\mathcal{R}_{\beta}$ value ($\sim0.01$) at the system boundary is comparable with the experimentally obtained value ($\sim0.018\pm0.005$ for 0-10$\%$ central HICs at RHIC with $\sqrt{s_{NN}}=200$ GeV \cite{TBW}). This observation re-emphasizes the relationship between temperature fluctuation and the $q$ parameter \cite{Wilk}. \\
Additionally, some comments about connecting certain observables in HICs with the theory of cosmological perturbations are in order. Cosmological anisotropies reflect the energy content of the universe. The universe starts off as radiation dominated, changes over to being matter dominated, and is eventually conjectured to be purely governed by the cosmological constant ($\Lambda$) \cite{dodelson}. WMAP \cite{wmap} and Planck \cite{planck} both provide a fairly precise representation of the energy distribution at our current epoch via physical quantities like $\Omega_{\mathrm{matter}}$, $\Omega_{\Lambda}$, $\Omega_{\mathrm{baryons}}$, the acoustic scale, the Hubble constant, the neutrino fraction, the reionization optical depth and other derived quantities. Since the anisotropies in temperature fluctuation are all time dependent, these fluctuations would die down in later epochs, and theoretically one should expect a flat power spectrum in the infinite future. However, the authors in \cite{Bernui-Tsallis-Villela} have conjectured that the temperature fluctuation of our universe can be satisfactorily explained by the {\it modified Boltzmann-Gibbs} formula with $q = 1.045 \pm 0.005$. This is quite remarkable, since a $q$-value similar to the one that fits heavy-ion collision data also fits the data for cosmological fluctuations. This points to deep similarities between the physics of cosmic microwave background (CMB) radiation anisotropies and the flow anisotropies in relativistic heavy-ion collision experiments (RHICE). A relevant theoretical question would be: is the surface of last scattering for CMB radiation similar to the freeze-out surface in RHICE? This is a question we reserve for future work.
\\ \noindent {\bf Acknowledgements}: The authors would like to thank Prof. Jan-e Alam, Dr. Amaresh Jaiswal, Dr. Moumita Aich and Golam Sarwar for fruitful discussions. TB and PG acknowledge the financial support of DST, Govt. of India. \vskip 0.60 cm